Prosecution Insights
Last updated: April 19, 2026
Application No. 18/219,691

APPARATUS AND METHOD OF PERSONALIZED FEDERATED LEARNING BASED ON PARTIAL PARAMETERS SHARING

Status: Non-Final OA (§103)
Filed: Jul 09, 2023
Examiner: OBISESAN, AUGUSTINE KUNLE
Art Unit: 2156
Tech Center: 2100 — Computer Architecture & Software
Assignee: Foundation of Soongsil University-Industry Cooperation
OA Round: 1 (Non-Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 8m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 64% (480 granted / 755 resolved; +8.6% vs TC avg)
Interview Lift: +22.5% (resolved cases with an interview vs. without)
Typical Timeline: 3y 8m average prosecution; 34 applications currently pending
Career History: 789 total applications across all art units

Statute-Specific Performance

§101: 15.0% (-25.0% vs TC avg)
§103: 58.8% (+18.8% vs TC avg)
§102: 13.3% (-26.7% vs TC avg)
§112: 5.9% (-34.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 755 resolved cases.

Office Action

§103
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This action is in response to the application filed on 7/9/2023, in which claims 1 – 10 were presented for examination.

3. Claims 1 – 10 are pending in the application.

Information Disclosure Statement

4. The information disclosure statements (IDS) submitted on 7/9/2023 and 1/16/2026 have been reviewed and entered into the record. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 1 – 10 are rejected under 35 U.S.C. 103 as being unpatentable over Singhal et al. (US 2022/0398500 A1), in view of Nitta et al. (US 2023/0090616 A1).

As per claim 1, Singhal et al. (US 2022/0398500 A1) discloses a method of personalized federated learning (para. [0020]: “Federated training has traditionally been used to train the machine learning model across multiple user devices”), which is performed by an electronic device including one or more processors, a communication circuit which communicates with an external device, and one or more memories storing at least one instruction executed by the one or more processors (para. [0006]: “client computing device that is in data communication with a server computing device over a data communication network”; para. [0116]: “Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read only memory or a random access memory or both”), the method comprising: by the one or more processors, training a local model using local data (para. [0006]: “maintaining local data and data defining a set of local parameters of a machine learning model, wherein the local data is a proper subset of a plurality of proper subsets of data that are used to train the machine learning model, each proper subset of data maintained on a separate client computing device, and is used to train the machine learning model only on the client computing device on which the proper subset is maintained”),
wherein the local model as an artificial neural network model includes a first parameter set corresponding to a global parameter set and a second parameter set corresponding to a local parameter set (para. [0074]: “machine learning model can be any appropriate machine learning model having a set of trainable global model parameters and a set of trainable local model parameters that can be iteratively trained on local data maintained”); receiving a 1-1st parameter set for renewing the first parameter set from the external device (para. [0036]: “local training system receives, over a data communication network and from the global training system that maintains the set of global (i.e., shared) parameter of the machine learning model, a copy of the set of global parameters”; para. [0041]: “local training systems of the user devices obtain the current values 112 of the set of global model parameters 110 from the global training system”); changing the first parameter set included in the local model to the 1-1st parameter set (para. [0036]: “uses the copy of set of global parameters to obtain a reconstruction of the values of its own local parameters of the machine learning model. The local training system then uses the local parameter values to determine parameter value updates to its copy of the set of global parameters. Reconstructing the local parameter values and updating the global parameter values of the machine learning model at the local training system”); and training the local model including the 1-1st parameter set (para. [0041]: “local training systems of the user devices obtain the current values 112 of the set of global model parameters 110 from the global training system and generate parameter value updates 114 to the set of global model parameters 110 using local training data generated on the user devices”).

Singhal does not specifically disclose transmitting the first parameter set to the external device. However, Nitta et al. (US 2023/0090616 A1) in an analogous art discloses transmitting the first parameter set to the external device (para. [0030]: “local selection unit 104 selects a first parameter set to be transmitted to the server 11 from among a plurality of parameters related to the local model”). Therefore, it would have been obvious to one of ordinary skill in the art before the invention was filed to incorporate the selection and transmission of local parameters to a server of the system of Nitta into the local collaborative training of a machine learning model of the system of Singhal, to improve the implementation of a federated learning system among devices with different resources and specifications.

As per claim 2, the rejection of claim 1 is incorporated, and Singhal et al. (US 2022/0398500 A1) further discloses wherein the first parameter set is a global parameter set before a value is renewed, and the 1-1st parameter set is a global parameter set after the value is renewed through the external device (para. [0041]: “local training systems of the user devices obtain the current values 112 of the set of global model parameters 110 from the global training system 102 and generate parameter value updates 114 to the set of global model parameters 110 using local training data generated on the user devices”).
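To make the claim 1 round structure concrete (train locally, receive a renewed global set, swap it in, retrain, transmit), here is a minimal sketch in plain NumPy. It is an editorial illustration only, not code from the application or the cited references; the two-matrix model, `train_step`, and all shapes and names are hypothetical.

```python
import numpy as np

# Hypothetical two-part model: the "first parameter set" mirrors the server's
# global parameters; the "second parameter set" never leaves the device.
rng = np.random.default_rng(0)
params = {
    "global": {"w1": rng.normal(size=(8, 4))},   # first parameter set
    "local":  {"w2": rng.normal(size=(4, 1))},   # second parameter set
}

def train_step(params, x, y, lr=0.01, freeze_global=False):
    """One toy gradient step on an MSE loss for the model y ~ x @ w1 @ w2."""
    h = x @ params["global"]["w1"]
    err = h @ params["local"]["w2"] - y
    # Gradients of 0.5 * mean(err ** 2) with respect to each weight matrix.
    g_w1 = x.T @ (err @ params["local"]["w2"].T) / len(x)
    g_w2 = h.T @ err / len(x)
    params["local"]["w2"] -= lr * g_w2
    if not freeze_global:                        # claim 4 keeps the renewed global set fixed
        params["global"]["w1"] -= lr * g_w1

x, y = rng.normal(size=(32, 8)), rng.normal(size=(32, 1))
train_step(params, x, y)                         # train the local model on local data

renewed = {"w1": rng.normal(size=(8, 4))}        # "1-1st parameter set" from the external device
params["global"] = renewed                       # change the first parameter set to the 1-1st set
train_step(params, x, y, freeze_global=True)     # train the local model including the 1-1st set
# The renewed first parameter set would then be transmitted back to the
# external device (the step the examiner maps to Nitta); I/O is not shown.
```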
As per claim 3, the rejection of claim 1 is incorporated, and Singhal et al. (US 2022/0398500 A1) further discloses wherein the external device is configured to receive the global parameter set from each of a plurality of electronic devices including the electronic device, and generate the 1-1st parameter set based on the plurality of received global parameter sets (para. [0036]: “local training system receives, over a data communication network and from the global training system that maintains the set of global (i.e., shared) parameter of the machine learning model, a copy of the set of global parameters and then uses the copy of set of global parameters to obtain a reconstruction of the values of its own”).

As per claim 4, the rejection of claim 1 is incorporated, and Nitta et al. (US 2023/0090616 A1) further discloses wherein the training of the local model includes fixing the 1-1st parameter set included in the local model not to be renewed, and training the local model to renew the second parameter set included in the local model (para. [0115]: “the layer structure of the local model included in each local device 10 is fixed, but the layer structure of the local model may be changed in a case where desired performance of the local model is required due to a change in training data”). Therefore, it would have been obvious to one of ordinary skill in the art before the invention was filed to incorporate the selection and transmission of local parameters to a server of the system of Nitta into the local collaborative training of a machine learning model of the system of Singhal, to improve the implementation of a federated learning system among devices with different resources and specifications.

As per claim 5, the rejection of claim 1 is incorporated, and Nitta et al. (US 2023/0090616 A1) further discloses wherein the local model is an artificial neural network model including a plurality of layers having an order (para. [0021]: “scalable neural network is a neural network that varies a model size such as the number of convolution layers of a network model according to a required operation amount or performance”; para. [0037]: “the network model has a model structure used in general machine learning, such as a multilayer”); the global parameter set includes parameters from a first layer to a specific layer included in the artificial neural network model (para. [0042]: “a subset of model parameters of a convolution layer corresponding to at least a part of the global model”); and the local parameter set includes parameters from a next layer of the specific layer to a last layer (para. [0042]: “the layer having the parameter unique to the local model is not limited to the output layer, and the first two layers including the input layer of each local model may be set as the layer having the parameter unique to each local model”). Therefore, it would have been obvious to one of ordinary skill in the art before the invention was filed to incorporate the selection and transmission of local parameters to a server of the system of Nitta into the local collaborative training of a machine learning model of the system of Singhal, to improve the implementation of a federated learning system among devices with different resources and specifications.
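The claim 5 partition can be stated compactly: parameters from the first layer through a chosen "specific layer" form the global (shared) set, and the remaining layers form the local (personalized) set. A minimal sketch, with a purely hypothetical layer list:

```python
# Ordered layers of a hypothetical local model (names are illustrative only).
layers = ["conv1", "conv2", "conv3", "fc1", "fc2"]

def split_parameter_sets(layers, specific_layer):
    """Global set: first layer .. specific layer; local set: the rest."""
    k = layers.index(specific_layer)
    return layers[: k + 1], layers[k + 1 :]

global_set, local_set = split_parameter_sets(layers, "conv3")
print(global_set)  # ['conv1', 'conv2', 'conv3'] -> shared with the external device
print(local_set)   # ['fc1', 'fc2']              -> personalized, kept on-device
```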
As per claim 6, the rejection of claim 1 is incorporated, and Nitta et al. (US 2023/0090616 A1) further discloses determining the size of the global parameter set (para. [0067]: “where the processing speed is required according to the throughput required in each environment, the model size may be set small by giving priority to the speed”), wherein the size of the global parameter set is determined based on an information processing amount per unit time of the communication circuit (para. [0067]: “the model size may be determined according to the communication environment of each environment”). Therefore, it would have been obvious to one of ordinary skill in the art before the invention was filed to incorporate the selection and transmission of local parameters to a server of the system of Nitta into the local collaborative training of a machine learning model of the system of Singhal, to improve the implementation of a federated learning system among devices with different resources and specifications.

As per claim 7, the rejection of claim 6 is incorporated, and Nitta et al. (US 2023/0090616 A1) further discloses wherein the determining the size of the global parameter set includes calculating a parameter capacity of each of the plurality of layers included in the local model (para. [0067]: “the model size may be determined according to the communication environment of each environment. ……. when the communication speed between local device 10 and the server is high, the model size may be set large, and when the communication speed is low, the model size may be set small”); aggregating parameter capacities of respective layers from the first layer to the specific layer among the plurality of layers (para. [0081]: “the global model may be updated using an average or a weighted average of values obtained by integrating the respective first parameter sets for the corresponding conversion layers and values of the parameters of the global model in the latest update”); judging whether the aggregated parameter capacity becomes the maximum while not exceeding the information processing amount per unit time (para. [0049]: “calculating an average or a weighted average of the first parameter sets related to a layer common between the local models”); and determining the size of the global parameter set based on the aggregated parameter capacity when the aggregated parameter capacity becomes the maximum while not exceeding the information processing amount per unit time as a judgment result (para. [0078]: “the global data. Note that the data size (for example, the image size) of the global data held on the server 11, the number of types of target labels, and the like are desirably equal to or larger than the maximum number of local data”). Therefore, it would have been obvious to one of ordinary skill in the art before the invention was filed to incorporate the selection and transmission of local parameters to a server of the system of Nitta into the local collaborative training of a machine learning model of the system of Singhal, to improve the implementation of a federated learning system among devices with different resources and specifications.
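The claim 7 determination reads as a greedy prefix selection: accumulate per-layer parameter capacities in layer order and keep the largest prefix whose aggregate stays within the communication circuit's per-unit-time throughput. A minimal sketch, with made-up capacities and budget (not drawn from the application or the cited art):

```python
# Hypothetical per-layer parameter counts, listed in model order, and a made-up
# per-unit-time throughput budget for the communication circuit.
layer_capacity = {"conv1": 1_800, "conv2": 36_900, "conv3": 73_800,
                  "fc1": 262_144, "fc2": 1_290}
budget = 120_000  # parameters transferable per unit time

def global_set_size(layer_capacity, budget):
    """Largest prefix of layers whose aggregated capacity does not exceed the budget."""
    total, chosen = 0, []
    for name, capacity in layer_capacity.items():
        if total + capacity > budget:   # judge whether the aggregate would exceed the budget
            break
        total += capacity               # aggregate capacities from the first layer onward
        chosen.append(name)
    return chosen, total

chosen, size = global_set_size(layer_capacity, budget)
print(chosen, size)  # ['conv1', 'conv2', 'conv3'] 112500
```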
Claims 8, 9, and 10 are electronic device claims corresponding to method claims 1, 4, and 6, respectively, and are rejected for the same reasons set forth in connection with the rejection of claims 1, 4, and 6, respectively, above.

Conclusion

6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Method and device for training federated learning model, CN 110263921 B (Huang Anbu).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AUGUSTINE K. OBISESAN, whose telephone number is (571) 272-2020. The examiner can normally be reached Monday - Friday, 8:30am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ajay Bhatia, can be reached at (571) 272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AUGUSTINE K. OBISESAN/
Primary Examiner, Art Unit 2156
2/16/2026

Prosecution Timeline

Jul 09, 2023
Application Filed
Feb 16, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12602616
SECURE MACHINE LEARNING MODEL TRAINING USING ENCRYPTION
2y 5m to grant Granted Apr 14, 2026
Patent 12591573
AUTOMATIC ERROR MITIGATION IN DATABASE STATEMENTS USING ALTERNATE PLANS
2y 5m to grant Granted Mar 31, 2026
Patent 12566784
PREDICTIVE QUERY COMPLETION AND PREDICTIVE SEARCH RESULTS
2y 5m to grant Granted Mar 03, 2026
Patent 12566788
Conversation Graphs
2y 5m to grant Granted Mar 03, 2026
Patent 12566738
Methods and Apparatus to Estimate Audience Sizes of Media Using Deduplication Based on Vector of Counts Sketch Data
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64%
With Interview: 86% (+22.5%)
Median Time to Grant: 3y 8m
PTA Risk: Low
Based on 755 resolved cases by this examiner. Grant probability derived from career allow rate.
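How the 86% figure appears to be derived (an inference from the displayed values, not stated on the page): the interview lift is added to the career allow rate in percentage points, i.e. 480 / 755 ≈ 63.6%, and 63.6% + 22.5 ≈ 86.1%, shown as 86%.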

Free tier: 3 strategy analyses per month