Prosecution Insights
Last updated: April 19, 2026
Application No. 18/446,859

FEDERATED LEARNING METHOD USING ARTIFICIAL INTELLIGENCE

Status: Non-Final OA (§103)
Filed: Aug 09, 2023
Examiner: DWIVEDI, MAHESH H
Art Unit: 2168
Tech Center: 2100 — Computer Architecture & Software
Assignee: Research & Business Foundation Sungkyunkwan University
OA Round: 1 (Non-Final)

Grant Probability: 69% (Favorable); 74% with interview
Expected OA Rounds: 1-2
Time to Grant: 3y 6m

Examiner Intelligence

Career Allow Rate: 69% (above average; 521 granted / 751 resolved; +14.4% vs TC avg)
Interview Lift: +4.3% among resolved cases with an interview (a minimal lift of about 4%)
Typical Timeline: 3y 6m average prosecution; 21 applications currently pending
Career History: 772 total applications across all art units
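As a quick arithmetic check, the headline figures above follow directly from the raw counts. A minimal sketch, assuming the dashboard's stated definitions (allow rate = granted / resolved, and the with-interview figure is the allow rate plus the reported lift):

```python
granted, resolved = 521, 751

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 69.4%, displayed as 69%

interview_lift = 0.043  # the +4.3% lift reported for cases with an interview
print(f"With interview:    {allow_rate + interview_lift:.1%}")  # 73.7%, displayed as 74%
```

The rounding explains the small mismatch between 69% + 4.3% and the displayed 74%.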

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§103: 40.2% (+0.2% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 751 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

2. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

3. The information disclosure statement (IDS) submitted on 07/04/2025 has been received, entered into the record, and considered. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

7. Claims 1, 5-7, and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over McMahan et al. (U.S. PGPUB 2019/0340534), in view of Lee et al. (U.S. PGPUB 2023/0136378), and further in view of Zhang et al. (article entitled “Federated Learning with Domain Generalization”, dated 20 November 2021).

8. Regarding claim 1, McMahan teaches a federated learning method comprising:

A) receiving a model from a central server (Paragraphs 28 and 83);
B) learning the local model using internal data based on the global model (Paragraph 84); and
C) transmitting the learned local model to the central server (Paragraphs 28-29, 83, and 90).

The examiner notes that McMahan teaches “receiving a model from a central server” as “In round t≥0, the server distributes the current model W.sub.t to a subset S.sub.t of n.sub.t clients (for example, to a selected subset of clients whose devices are plugged into power, have access to broadband, and are idle). Some or all of these clients independently update the model based on their local data” (Paragraph 28) and “At (310), method (300) can include providing the global model to each client device, and at (312), method (300) can include receiving the global model” (Paragraph 83). The examiner further notes that a distribution of a global model from a server to multiple clients teaches the claimed receiving.

The examiner further notes that McMahan teaches “learning the local model using internal data based on the global model” as “At (314), method (300) can include determining, by the client device, a local update. In a particular implementation, the local update can be determined by retraining or otherwise updating the global model based on the locally stored training data” (Paragraph 84). The examiner further notes that the updating (i.e., learning) of a received global model at a client includes the use of local training data (i.e., the claimed internal data).

The examiner further notes that McMahan teaches “transmitting the learned local model to the central server” as “In round t≥0, the server distributes the current model W.sub.t to a subset S.sub.t of n.sub.t clients (for example, to a selected subset of clients whose devices are plugged into power, have access to broadband, and are idle). Some or all of these clients independently update the model based on their local data…Each client then sends the update back to the server, where the global update is computed by aggregating all the client-side updates” (Paragraphs 28-29), “At (310), method (300) can include providing the global model to each client device, and at (312), method (300) can include receiving the global model” (Paragraph 83), and “Any number of iterations of local and global updates can be performed. That is, method (300) can be performed iteratively to update the global model based on locally stored training data over time” (Paragraph 90). The examiner further notes that the iterative updating of a global model includes the server receiving updated (i.e., learned) local models from clients.

McMahan does not explicitly teach:

A) to store the model as a global model and a local model.

Lee, however, teaches “to store the model as a global model and a local model” as “the client device 200 may store the global model and the local model, respectively, in order to train the local model based on the learning direction of the global model” (Paragraph 60).
The examiner further notes that although McMahan teaches the concept of federated learning (including the transmission of a global model to multiple clients), there is no explicit teaching that such clients each store a global model and a local model. Nevertheless, Lee teaches the concept of clients in a federated learning system storing a global model and a local model. The combination would result in the clients of McMahan storing a global model and a local model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references, because the teachings of Lee would have allowed McMahan to provide a method for avoiding the loss of learning direction of a model in a federated system, as noted by Lee (Paragraph 6).

McMahan and Lee do not explicitly teach:

D) wherein learning the local model comprises: learning a discriminator model and a classifier using features of the global model and features of the local model in each batch of the internal data to generate the features of the local model through the learning of the local model so that the features of the local model are not discriminated as the discriminator model and to operate the classifier to classify a correct answer.

Zhang, however, teaches “wherein learning the local model comprises: learning a discriminator model and a classifier using features of the global model and features of the local model in each batch of the internal data to generate the features of the local model through the learning of the local model so that the features of the local model are not discriminated as the discriminator model and to operate the classifier to classify a correct answer” as “(2) Discriminator. Given features extracted from raw data (from a source domain) and features generated by distribution generator, the discriminator is used to distinguish the extracted features and the generated features. During training the discriminator gains its ability to distinguish the above two types of features. Besides, a Random Projection (RP) layer is pre-pended to the discriminator, making it harder to distinguish features from different distributions…Classifier. Given features as the input, the classifier outputs the predicted label” (Page 3, Section 3.1) and “In the training process, the client uses the local discriminator and receives the parameters w of other components from the server to train on the local data” (Page 5, Section 3.3).

The examiner further notes that the secondary reference of Zhang teaches the concept of a client (in a federated learning environment) housing a discriminator and classifier (see Figure 2) that are trained. The combination would result in the use of Lee’s clients (which house both a global and local model) to train a locally stored discriminator and classifier. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references, because the teachings of Zhang would have allowed McMahan and Lee to provide a method for avoiding poor generalization performance in federated learning, as noted by Zhang (Abstract).

Regarding claim 5, McMahan further teaches a federated learning method comprising:

A) after transmitting the learned local model to the central server, repeatedly performing steps of receiving a new model from the central server to store the new model, learning the local model, and transmitting the learned local model to the central server (Paragraphs 28-29, 83, and 90);
B) wherein the new model is a model newly generated by averaging the learned local model of the local client and learned local models of other local clients in the central server (Paragraphs 89-90).
The examiner notes that McMahan teaches “after transmitting the learned local model to the central server, repeatedly performing steps of receiving a new model from the central server to store the new model, learning the local model, and transmitting the learned local model to the central server” as “In round t≥0, the server distributes the current model W.sub.t to a subset S.sub.t of n.sub.t clients (for example, to a selected subset of clients whose devices are plugged into power, have access to broadband, and are idle). Some or all of these clients independently update the model based on their local data…Each client then sends the update back to the server, where the global update is computed by aggregating all the client-side updates” (Paragraphs 28-29), “At (310), method (300) can include providing the global model to each client device, and at (312), method (300) can include receiving the global model” (Paragraph 83), and “Any number of iterations of local and global updates can be performed. That is, method (300) can be performed iteratively to update the global model based on locally stored training data over time” (Paragraph 90). The examiner further notes that the iterative (i.e., repetitive) updating of a global model includes a transmission of a generated new model to a client, updating (i.e., learning) the transmitted new model, and sending back the learned new model to a server.

The examiner further notes that McMahan teaches “wherein the new model is a model newly generated by averaging the learned local model of the local client and learned local models of other local clients in the central server” as “method (300) can include again determining the global model. In particular, the global model can be determined based at least in part on the received local update(s). For instance, the received local updates can be aggregated to determine the global model. The aggregation can be an additive aggregation and/or an averaging aggregation” (Paragraph 89) and “Any number of iterations of local and global updates can be performed. That is, method (300) can be performed iteratively to update the global model based on locally stored training data over time” (Paragraph 90). The examiner further notes that the iterative (i.e., repetitive) updating of a global model includes averaging updates from multiple local clients.

Regarding claim 6, McMahan further teaches a federated learning method comprising:

A) wherein the local client operates under a non-independent identically distributed environment (Paragraph 20).

The examiner notes that McMahan teaches “wherein the local client operates under a non-independent identically distributed environment” as “the federated learning framework differs from conventional distributed machine learning due to the large number of clients, data that is highly unbalanced and not independent and identically distributed (“IID”), and unreliable network connections” (Paragraph 20). The examiner further notes that the federated learning framework is non-independent identically distributed.

Regarding claim 7, McMahan teaches a federated learning method comprising:

A) transmitting a model to a local client (Paragraphs 28 and 83);
B) receiving a local model learned in the local client (Paragraphs 28-29, 83, and 90);
C) generating a new model based on the learned local model of the local client (Paragraphs 28-29, 83, and 90); and
E) learning the local model using internal data of the local client based on the global model (Paragraph 84).

(Limitation D is addressed separately below.)

The examiner notes that McMahan teaches “transmitting a model to a local client” as “In round t≥0, the server distributes the current model W.sub.t to a subset S.sub.t of n.sub.t clients (for example, to a selected subset of clients whose devices are plugged into power, have access to broadband, and are idle). Some or all of these clients independently update the model based on their local data” (Paragraph 28) and “At (310), method (300) can include providing the global model to each client device, and at (312), method (300) can include receiving the global model” (Paragraph 83). The examiner further notes that a distribution of a global model from a server to multiple clients teaches the claimed transmitting.

The examiner further notes that McMahan teaches “receiving a local model learned in the local client” as “In round t≥0, the server distributes the current model W.sub.t to a subset S.sub.t of n.sub.t clients (for example, to a selected subset of clients whose devices are plugged into power, have access to broadband, and are idle). Some or all of these clients independently update the model based on their local data…Each client then sends the update back to the server, where the global update is computed by aggregating all the client-side updates” (Paragraphs 28-29), “At (310), method (300) can include providing the global model to each client device, and at (312), method (300) can include receiving the global model” (Paragraph 83), and “Any number of iterations of local and global updates can be performed. That is, method (300) can be performed iteratively to update the global model based on locally stored training data over time” (Paragraph 90). The examiner further notes that the iterative updating of a global model includes the server receiving updated (i.e., learned) local models from clients.

The examiner further notes that McMahan teaches “generating a new model based on the learned local model of the local client” as “In round t≥0, the server distributes the current model W.sub.t to a subset S.sub.t of n.sub.t clients (for example, to a selected subset of clients whose devices are plugged into power, have access to broadband, and are idle). Some or all of these clients independently update the model based on their local data…Each client then sends the update back to the server, where the global update is computed by aggregating all the client-side updates” (Paragraphs 28-29), “At (310), method (300) can include providing the global model to each client device, and at (312), method (300) can include receiving the global model” (Paragraph 83), and “Any number of iterations of local and global updates can be performed. That is, method (300) can be performed iteratively to update the global model based on locally stored training data over time” (Paragraph 90). The examiner further notes that the iterative updating of a global model at a server entails generation of a “new” model at the server based on the received updated models from clients at each iteration.

The examiner further notes that McMahan teaches “learning the local model using internal data of the local client based on the global model” as “At (314), method (300) can include determining, by the client device, a local update. In a particular implementation, the local update can be determined by retraining or otherwise updating the global model based on the locally stored training data” (Paragraph 84). The examiner further notes that the updating (i.e., learning) of a received global model at a client includes the use of local training data (i.e., the claimed internal data).

McMahan does not explicitly teach:

D) wherein the learned local model is generated by storing the model received by the local client as a global model and a local model.

Lee, however, teaches “wherein the learned local model is generated by storing the model received by the local client as a global model and a local model” as “the client device 200 may store the global model and the local model, respectively, in order to train the local model based on the learning direction of the global model” (Paragraph 60).
The examiner further notes that although McMahan teaches the concept of federated learning (including the transmission of a global model to multiple clients), there is no explicit teaching that such clients each store a global model and a local model. Nevertheless, Lee teaches the concept of clients in a federated learning system storing a global model and a local model. The combination would result in the clients of McMahan storing a global model and a local model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references, because the teachings of Lee would have allowed McMahan to provide a method for avoiding the loss of learning direction of a model in a federated system, as noted by Lee (Paragraph 6).

McMahan and Lee do not explicitly teach:

F) wherein the learning of the local model comprises learning a discriminator model and a classifier using features of the global model and features of the local model in each batch of the internal data to generate the features of the local model through the learning of the local model so that the features of the local model are not discriminated as the discriminator model and to operate the classifier to classify a correct answer.

Zhang, however, teaches “wherein the learning of the local model comprises learning a discriminator model and a classifier using features of the global model and features of the local model in each batch of the internal data to generate the features of the local model through the learning of the local model so that the features of the local model are not discriminated as the discriminator model and to operate the classifier to classify a correct answer” as “(2) Discriminator. Given features extracted from raw data (from a source domain) and features generated by distribution generator, the discriminator is used to distinguish the extracted features and the generated features. During training the discriminator gains its ability to distinguish the above two types of features. Besides, a Random Projection (RP) layer is pre-pended to the discriminator, making it harder to distinguish features from different distributions…Classifier. Given features as the input, the classifier outputs the predicted label” (Page 3, Section 3.1) and “In the training process, the client uses the local discriminator and receives the parameters w of other components from the server to train on the local data” (Page 5, Section 3.3).

The examiner further notes that the secondary reference of Zhang teaches the concept of a client (in a federated learning environment) housing a discriminator and classifier (see Figure 2) that are trained. The combination would result in the use of Lee’s clients (which house both a global and local model) to train a locally stored discriminator and classifier. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references, because the teachings of Zhang would have allowed McMahan and Lee to provide a method for avoiding poor generalization performance in federated learning, as noted by Zhang (Abstract).

Regarding claim 11, McMahan further teaches a federated learning method comprising:

A) after generating the new model, repeatedly performing steps of transmitting the new model to the local client, receiving the learned local model, and generating the new model (Paragraphs 28-29, 83, and 90);
B) wherein the new model is a model newly generated by averaging the learned local model of the local client and learned local models of other local clients in the central server (Paragraphs 89-90).
The examiner notes that McMahan teaches “after generating the new model, repeatedly performing steps of transmitting the new model to the local client, receiving the learned local model, and generating the new model” as “In round t≥0, the server distributes the current model W.sub.t to a subset S.sub.t of n.sub.t clients (for example, to a selected subset of clients whose devices are plugged into power, have access to broadband, and are idle). Some or all of these clients independently update the model based on their local data…Each client then sends the update back to the server, where the global update is computed by aggregating all the client-side updates” (Paragraphs 28-29), “At (310), method (300) can include providing the global model to each client device, and at (312), method (300) can include receiving the global model” (Paragraph 83), and “Any number of iterations of local and global updates can be performed. That is, method (300) can be performed iteratively to update the global model based on locally stored training data over time” (Paragraph 90). The examiner further notes that the iterative (i.e., repetitive) updating of a global model includes a transmission of a generated new model to a client, updating (i.e., learning) the transmitted new model, and sending back the learned new model to a server.

The examiner further notes that McMahan teaches “wherein the new model is a model newly generated by averaging the learned local model of the local client and learned local models of other local clients in the central server” as “method (300) can include again determining the global model. In particular, the global model can be determined based at least in part on the received local update(s). For instance, the received local updates can be aggregated to determine the global model. The aggregation can be an additive aggregation and/or an averaging aggregation” (Paragraph 89) and “Any number of iterations of local and global updates can be performed. That is, method (300) can be performed iteratively to update the global model based on locally stored training data over time” (Paragraph 90). The examiner further notes that the iterative (i.e., repetitive) updating of a global model includes averaging updates from multiple local clients.

Regarding claim 12, McMahan further teaches a federated learning method comprising:

A) wherein the local client operates under a non-independent identically distributed environment (Paragraph 20).

The examiner notes that McMahan teaches “wherein the local client operates under a non-independent identically distributed environment” as “the federated learning framework differs from conventional distributed machine learning due to the large number of clients, data that is highly unbalanced and not independent and identically distributed (“IID”), and unreliable network connections” (Paragraph 20). The examiner further notes that the federated learning framework is non-independent identically distributed.

Allowable Subject Matter

9. Claims 2 and 8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Specifically, although the prior art (see Zhang) clearly teaches a loss function for a classifier, the detailed, explicitly defined loss function equation is not found in the prior art in conjunction with the rest of the limitations of the parent claim. Dependent claims 3-4 and 9-10 are deemed allowable for depending on the deemed allowable subject matter of dependent claims 2 and 8, respectively.

Conclusion

10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

U.S. PGPUB 2023/0010686, published to Lesh et al. on 12 January 2023. The subject matter disclosed therein is pertinent to that of claims 1-12 (e.g., methods to perform federated learning).

U.S. PGPUB 2022/0129706, published to Vivona et al. on 28 April 2022. The subject matter disclosed therein is pertinent to that of claims 1-12 (e.g., methods to perform federated learning).

Contact Information

11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mahesh Dwivedi, whose telephone number is (571) 272-2731. The examiner can normally be reached Monday to Friday, 8:20 am – 4:40 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones, can be reached at (571) 272-4085. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Mahesh Dwivedi
Primary Examiner
Art Unit 2168
March 09, 2026

/MAHESH H DWIVEDI/
Primary Examiner, Art Unit 2168
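The federated-averaging round the rejection repeatedly cites from McMahan (distribute the current model W.sub.t, let clients retrain on local data, then aggregate the returned updates by averaging, Paragraphs 28-29 and 89-90) can be sketched in a few lines. This is an illustrative toy, not code from any cited reference: the "model" is reduced to a linear least-squares parameter vector, local training to a single gradient step, and every name and constant is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w_global, client_data, lr=0.1):
    """Client side: one gradient step of least-squares training on local
    data, standing in for the local retraining of the received global
    model described in McMahan's Paragraph 84."""
    X, y = client_data
    grad = X.T @ (X @ w_global - y) / len(y)
    return w_global - lr * grad

# Toy non-IID setup: each client's data pulls toward a different optimum.
clients = []
for i in range(4):
    X = rng.normal(size=(20, 3))
    w_true = np.full(3, float(i))  # hypothetical per-client ground truth
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=20)))

w_global = np.zeros(3)
for _ in range(50):
    # Server distributes w_global; each selected client trains locally.
    local_models = [local_update(w_global, data) for data in clients]
    # Server aggregates the returned local models by averaging
    # (the "averaging aggregation" of Paragraph 89).
    w_global = np.mean(local_models, axis=0)

print("aggregated model:", np.round(w_global, 2))
```

Because each client's data is generated around a different optimum, the loop also illustrates the "not independent and identically distributed" setting of McMahan's Paragraph 20: the aggregated model matches no single client's optimum, only a compromise across them.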
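The local-update idea the examiner maps to Zhang (per batch, train a discriminator to tell global-model features from local-model features and a classifier to predict labels, while updating the local model so its features both fool the discriminator and classify correctly) can likewise be sketched with toy components. Everything here (linear feature extractors, logistic discriminator and classifier, dimensions, learning rate) is a hypothetical stand-in, not the actual architecture of Zhang or of the application's claims.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_in, d_feat = 5, 4  # hypothetical sizes

# Linear "feature extractors": the received global model (held fixed
# during local training) and the trainable local model.
W_global = rng.normal(size=(d_in, d_feat))
W_local = W_global + 0.5 * rng.normal(size=(d_in, d_feat))

w_disc = np.zeros(d_feat)        # logistic discriminator: global vs local features
w_clf = rng.normal(size=d_feat)  # logistic classifier on local features

def local_training_step(X, y, lr=0.05):
    """One batch of the adversarial local update: the discriminator learns
    to separate global-model features from local-model features, the
    classifier learns the labels, and the local model is nudged so its
    features fool the discriminator while still classifying correctly."""
    global W_local, w_disc, w_clf
    f_g = X @ W_global   # features of the global model
    f_l = X @ W_local    # features of the local model

    # 1) Discriminator step: global features labeled 1, local labeled 0.
    feats = np.vstack([f_g, f_l])
    labels = np.concatenate([np.ones(len(X)), np.zeros(len(X))])
    p = sigmoid(feats @ w_disc)
    w_disc += lr * feats.T @ (labels - p) / len(labels)

    # 2) Classifier step on the local features.
    q = sigmoid(f_l @ w_clf)
    w_clf += lr * f_l.T @ (y - q) / len(y)

    # 3) Local-model step: push its features toward the "global" label (1)
    #    for the discriminator, and toward the correct class for the classifier.
    p_l = sigmoid(f_l @ w_disc)
    grad_adv = X.T @ np.outer(1.0 - p_l, w_disc) / len(X)
    grad_clf = X.T @ np.outer(y - q, w_clf) / len(X)
    W_local += lr * (grad_adv + grad_clf)

for _ in range(100):
    X = rng.normal(size=(16, d_in))
    y = (X.sum(axis=1) > 0).astype(float)  # toy labels
    local_training_step(X, y)
```

The design point the claim language turns on is step 3: the local model is trained against two objectives at once, an adversarial term that keeps its features indistinguishable from the global model's, and a classification term that keeps them useful for predicting the correct answer.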

Prosecution Timeline

Aug 09, 2023
Application Filed
Mar 09, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591818
FORECASTING AND MITIGATING CONCEPT DRIFT USING NATURAL LANGUAGE PROCESSING
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12585690
COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION VERIFICATION PROGRAM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12561366
Real-Time Micro-Profile Generation Using a Dynamic Tree Structure
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12561469
INFERRING SCHEMA STRUCTURE OF FLAT FILE
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554730
HYBRID DATABASE IMPLEMENTATIONS
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69% (74% with interview, a +4.3% lift)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 751 resolved cases by this examiner; grant probability is derived from the career allow rate.
