Prosecution Insights
Last updated: April 19, 2026
Application No. 18/031,383

SAMPLING USER EQUIPMENTS FOR FEDERATED LEARNING MODEL COLLECTION

Non-Final OA (§102, §103)
Filed
Apr 12, 2023
Examiner
MCINTOSH, ANDREW T
Art Unit
2144
Tech Center
2100 — Computer Architecture & Software
Assignee
Nokia Technologies Oy
OA Round
1 (Non-Final)
Grant Probability: 77% (Favorable)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 95%

Examiner Intelligence

Grants 77% (above average)
Career Allow Rate: 77% (393 granted / 511 resolved; +21.9% vs TC avg)
Interview Lift: +18.0% (strong; measured over resolved cases with interview)
Typical timeline: 3y 0m avg prosecution; 27 currently pending
Career history: 538 total applications across all art units

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§103: 56.7% (+16.7% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 511 resolved cases

Office Action

§102 §103
DETAILED ACTION

This action is responsive to communications filed on April 12, 2023. This action is made Non-Final. Claims 1-17, 30, 32, and 33 are pending in the case. Claims 1, 15, 17, 30, 32, and 33 are independent claims. Claims 1-10, 12-17, 30, 32, and 33 are rejected.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement(s) (IDS) submitted on 04/12/2023 is/are in compliance with the provisions of 37 C.F.R. 1.97. Accordingly, the IDS(s) is/are being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1, 2, 4, 6, 14, 17, and 32 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Nishio, Takayuki, and Ryo Yonetani, "Client selection for federated learning with heterogeneous resources in mobile edge," ICC 2019 IEEE International Conference on Communications (ICC), IEEE, 2019 ("Nishio").

Claim 1: Nishio discloses an apparatus for use at a network side of a cellular communication system, the apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to perform: detecting first user equipment out of a plurality of user equipment of the cellular communication system (see Fig.
1; Protocol 1, 2; III, B - the new Resource Request step asks random clients to inform the MEC operator of their resource information. The MEC operator asks K×C random clients to participate in the current training task. Clients who receive the request notify the operator of their resource information); wherein the user equipment respectively corresponds to a distributed node of a federated machine-learning concept and respectively generates a partial machine-learning model (see Fig. 1; Protocol 1, 2; I - FL iteratively asks random clients to 1) download parameters of a trainable model from a certain server, 2) update the model with their own data; III, B - clients update global models.); wherein partial machine-learning models generated by the plurality of user equipment are to be used to update a global machine-learning model at the network side of the cellular communication system (see Fig. 1; Protocol 1, 2; I - FL iteratively asks random clients to 1) download parameters of a trainable model from a certain server, 2) update the model with their own data; III, B - clients update global models; III, B - The clients update global models and upload the new parameters using the RBs allocated by the MEC operator.); wherein the first user equipment are user equipment comprising ready partial machine-learning models (see Fig. 1; Protocol 1, 2; I - FL iteratively asks random clients to 1) download parameters of a trainable model from a certain server, 2) update the model with their own data; III, B - clients update global models.); the at least one memory and computer program code being further configured, with the at least one processor, to cause the apparatus to perform selecting, out of the first user equipment, second user equipment at least based on time information associated with the first user equipment (see III, B - Using the information, the MEC operator determines which of the clients go to the subsequent steps to complete the steps within a certain deadline. The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps; III, C - Our goal in the Client Selection step is to allow the server to aggregate as many client updates as possible within a specified deadline. This criterion is based on the result from [5] that a larger fraction of clients used in each round saves the time required for global models to achieve a desired performance. Based on the criterion, the MEC operator selects clients who can complete the Distribution and Scheduled Update and Upload steps within a deadline.); acquiring the ready partial machine-learning models respectively generated by the second user equipment (see Fig. 1; Protocol 1, 2; I - FL iteratively asks random clients to 1) download parameters of a trainable model from a certain server, 2) update the model with their own data, and upload the updated model parameters to the server; III, B - clients update global models; III, B - The clients update global models and upload the new parameters using the RBs allocated by the MEC operator.); updating the global machine-learning model using the ready partial machine-learning models acquired (see Fig. 1; Protocol 1, 2; I - FL iteratively asks random clients to 1) download parameters of a trainable model from a certain server, 2) update the model with their own data, and upload the updated model parameters to the server. The server averages the updated parameters and replaces the global model by the averaged model; III, B - clients update global models; III, B - The clients update global models and upload the new parameters using the RBs allocated by the MEC operator; aggregation in Protocol 1); determining convergence of the global machine-learning model updated by the ready partial machine-learning models acquired (see III, B - iterated for multiple rounds until the global model achieves a desired performance. Until the model achieves a certain desired performance (e.g., a classification accuracy of 90%) or the final deadline arrives, all steps but Initialization are iterated for multiple rounds.); and in case convergence of the global machine-learning model is not determined, repeating a process comprising the detecting, selecting, acquiring, updating and determining (see III, B - iterated for multiple rounds until the global model achieves a desired performance. Until the model achieves a certain desired performance (e.g., a classification accuracy of 90%) or the final deadline arrives, all steps but Initialization are iterated for multiple rounds.).

Claim(s) 17 and 32: Claim(s) 17 and 32 correspond to claim 1, and thus, Nishio discloses the limitations of claim(s) 17 and 32 as well.

Claim 2: Nishio further discloses wherein the ready partial machine-learning models comprise at least one of: partial machine-learning models that have matured; partial machine-learning models that have matured for a predetermined first time period; partial machine-learning models that have been updated; partial machine-learning models that have been updated since a predetermined second time period (see Fig. 1; Protocol 1, 2; I - FL iteratively asks random clients to 1) download parameters of a trainable model from a certain server, 2) update the model with their own data, and upload the updated model parameters to the server; III, B - clients update global models; III, B - The clients update global models and upload the new parameters using the RBs allocated by the MEC operator.).
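The claim 1 sequence the Office Action maps onto Nishio (detecting ready UEs, selecting by time information, acquiring partial models, updating the global model, and repeating until convergence) can be sketched as a server-side loop. This is an illustrative sketch only: the UE fields, plain coordinate-wise averaging, and the convergence tolerance are assumptions, not details taken from the application or from Nishio.

```python
# Illustrative sketch of the claim 1 server-side loop: detect -> select ->
# acquire -> update -> check convergence -> repeat. Field names, averaging,
# and the tolerance are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class UE:
    ue_id: int
    ready: bool                # whether a ready partial model is available
    est_upload_time: float     # "time information" used for selection
    partial_model: list = field(default_factory=list)

def run_federated_rounds(ues, global_model, deadline, tol=1e-3, max_rounds=100):
    for _ in range(max_rounds):
        first = [u for u in ues if u.ready]                           # detecting
        second = [u for u in first if u.est_upload_time <= deadline]  # selecting
        if not second:
            continue
        models = [u.partial_model for u in second]                    # acquiring
        new_global = [sum(w) / len(w) for w in zip(*models)]          # updating
        delta = max(abs(a - b) for a, b in zip(new_global, global_model))
        global_model = new_global
        if delta < tol:                                               # determining
            break                                                     # converged
    return global_model
```

With two ready UEs and a deadline that excludes the slower one, only the faster UE's partial model is averaged into the global model, matching the selection-then-aggregation order recited in the claim.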
Claim 4: Nishio further discloses wherein the selecting comprises: selecting the second user equipment out of the first user equipment also based on channel conditions associated with the first user equipment (see Abstract - when some clients are with limited computational resources (i.e., requiring longer update time) or under poor wireless channel conditions (longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions; I - heterogeneous mobile devices with different data resources, computational capabilities, and wireless channel conditions; II, B - upload time will be longer if a client is under a severely poor channel condition; III, A - assume that the modulation and coding scheme of radio communications for each client are determined appropriately while considering its channel state so that packet-loss rate is negligible; III, B - asks random clients to inform the MEC operator of their resource information such as wireless channel states, computational capacities (e.g., if they can spare CPUs or GPUs for updating models), and the size of data resources relevant to the current training task. The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps.).

Claim 6: Nishio further discloses prioritizing the first user equipment based on their time information (see III, B - Using the information, the MEC operator determines which of the clients go to the subsequent steps to complete the steps within a certain deadline. The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps; III, C - Our goal in the Client Selection step is to allow the server to aggregate as many client updates as possible within a specified deadline. This criterion is based on the result from [5] that a larger fraction of clients used in each round saves the time required for global models to achieve a desired performance. Based on the criterion, the MEC operator selects clients who can complete the Distribution and Scheduled Update and Upload steps within a deadline.).

Claim 14: Nishio further discloses wherein the apparatus comprises at least one of: a node of an access network of the cellular communication system; a gNodeB; a central unit of a gNodeB; a machine-learning application layer; a machine-learning host of the gNodeB (see Fig. 1, 2; Protocol 1, 2.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 3, 5, and 7-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nishio, and further in view of Tuor et al., US Publication 2021/0158099 ("Tuor").
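The FedCS-style Client Selection step quoted throughout these rejections (aggregate as many client updates as possible within a round deadline) can be sketched as a greedy admission loop. This is a hedged sketch: the time model (parallel local updates, sequential uploads) and the fastest-first sort order are assumptions for illustration, not Nishio's exact formulation.

```python
# Hedged sketch of FedCS-style Client Selection as summarized in the quoted
# passages: greedily admit clients whose estimated times still fit the round
# deadline, so that as many updates as possible are aggregated. The time
# model and sort order are illustrative assumptions.
def select_clients(clients, deadline):
    """clients: list of (client_id, est_update_s, est_upload_s) tuples."""
    ordered = sorted(clients, key=lambda c: c[1] + c[2])  # fastest first
    selected, elapsed = [], 0.0
    for cid, update_s, upload_s in ordered:
        # A client can start uploading once both its local update and the
        # previous client's upload have finished.
        finish = max(elapsed, update_s) + upload_s
        if finish <= deadline:
            selected.append(cid)
            elapsed = finish
    return selected
```

For example, with estimated (update, upload) times of (1, 1), (2, 2), and (10, 5) seconds and a 6-second deadline, only the first two clients are admitted; the slow client is deferred to a later round.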
Claim 3: Tuor further teaches or suggests wherein the information comprises a waiting time duration for which a user equipment of the first user equipment has been waiting for transmitting its ready partial machine-learning model (see para. 0034 - receive the datasets from the further clients and/or programs that train the local models on the local data samples and generate packets of the datasets that are to be transmitted to a further processing component that generates the global model; para. 0038 - based on datasets received from the plurality of smart devices 110 associated with respective contributors while the contributors are currently available; para. 0041 - improve an efficiency with which the federated learning process is performed (e.g., reducing wasted processing, generating a global model with a more comprehensive and impactful collection of datasets, etc.); para. 0045 - when the contribution program 132 determines that the currently available contributors have an insufficient usefulness to perform the federated learning process, the contribution program 132 may determine that the modelling program 122 should hold on performing the federated learning process and wait for more and/or different contributors to become available; para. 0046 - the contribution program 132 and/or the availability program 134 may dynamically determine when and for how long to wait to perform the federated learning process based on the current state of the federated learning system 100. The contribution program 132 may receive information from the availability program 134 and collaborate in determining when and for how long to recommend waiting to perform the federated learning process. For example, the contribution program 132 and the availability program 134 may rely on a waiting threshold so that the modelling program 122 is not unduly waiting a relatively long time to perform the federated learning process; para. 0047 - how long to wait in performing the federated learning process.). Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Nishio, to include wherein the information comprises a waiting time duration for which a user equipment of the first user equipment has been waiting for transmitting its ready partial machine-learning model for the purpose of efficiently managing waiting durations in a federated learning framework to avoid undue waiting durations and reduce wasted processing, as taught by Tuor (0041 and 0046).

Claim 5: As indicated above, Nishio teaches or suggests selecting the second user equipment based on ... uplink resources available for acquiring ready partial machine-learning models (see Abstract - when some clients are with limited computational resources (i.e., requiring longer update time) or under poor wireless channel conditions (longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions; I - heterogeneous mobile devices with different data resources, computational capabilities, and wireless channel conditions; II, B - upload time will be longer if a client is under a severely poor channel condition; III, A - assume that the modulation and coding scheme of radio communications for each client are determined appropriately while considering its channel state so that packet-loss rate is negligible; III, B - asks random clients to inform the MEC operator of their resource information such as wireless channel states, computational capacities (e.g., if they can spare CPUs or GPUs for updating models), and the size of data resources relevant to the current training task. The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps.). Tuor further teaches or suggests also based on a quota (see para. 0044 - may determine that the currently available contributors have a sufficient usefulness to perform the federated learning process when a predetermined number of contributors satisfy the local usefulness threshold; para. 0045 - when the contribution program 132 determines that the currently available contributors have a sufficient usefulness to perform the federated learning process, the contribution program 132 may determine that the modelling program 122 should perform the federated learning process based on the datasets of the currently available contributors.). Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Nishio, to include also based on a quota for the purpose of efficiently managing waiting durations in a federated learning framework to avoid undue waiting durations and reduce wasted processing, as taught by Tuor (0041 and 0046).

Claim 7: Nishio further teaches or suggests selecting the second user equipment out of the first user equipment which have been prioritized based on their time information, based on ... uplink resources available for acquiring ready partial machine-learning models (see III, B - Using the information, the MEC operator determines which of the clients go to the subsequent steps to complete the steps within a certain deadline. The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps; III, C - Our goal in the Client Selection step is to allow the server to aggregate as many client updates as possible within a specified deadline. This criterion is based on the result from [5] that a larger fraction of clients used in each round saves the time required for global models to achieve a desired performance. Based on the criterion, the MEC operator selects clients who can complete the Distribution and Scheduled Update and Upload steps within a deadline; Abstract - when some clients are with limited computational resources (i.e., requiring longer update time) or under poor wireless channel conditions (longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions; I - heterogeneous mobile devices with different data resources, computational capabilities, and wireless channel conditions; II, B - upload time will be longer if a client is under a severely poor channel condition; III, A - assume that the modulation and coding scheme of radio communications for each client are determined appropriately while considering its channel state so that packet-loss rate is negligible; III, B - asks random clients to inform the MEC operator of their resource information such as wireless channel states, computational capacities (e.g., if they can spare CPUs or GPUs for updating models), and the size of data resources relevant to the current training task. The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps.).
Tuor further teaches or suggests also based on a quota (see para. 0044 - may determine that the currently available contributors have a sufficient usefulness to perform the federated learning process when a predetermined number of contributors satisfy the local usefulness threshold; para. 0045 - when the contribution program 132 determines that the currently available contributors have a sufficient usefulness to perform the federated learning process, the contribution program 132 may determine that the modelling program 122 should perform the federated learning process based on the datasets of the currently available contributors.). Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Nishio, to include also based on a quota for the purpose of efficiently managing waiting durations in a federated learning framework to avoid undue waiting durations and reduce wasted processing, as taught by Tuor (0041 and 0046).

Claim 8: Nishio further teaches or suggests prioritizing the first user equipment prioritized based on their time information also based on channel conditions associated with the first user equipment (see III, B - Using the information, the MEC operator determines which of the clients go to the subsequent steps to complete the steps within a certain deadline. The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps; III, C - Our goal in the Client Selection step is to allow the server to aggregate as many client updates as possible within a specified deadline. This criterion is based on the result from [5] that a larger fraction of clients used in each round saves the time required for global models to achieve a desired performance. Based on the criterion, the MEC operator selects clients who can complete the Distribution and Scheduled Update and Upload steps within a deadline; Abstract - when some clients are with limited computational resources (i.e., requiring longer update time) or under poor wireless channel conditions (longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions; I - heterogeneous mobile devices with different data resources, computational capabilities, and wireless channel conditions; II, B - upload time will be longer if a client is under a severely poor channel condition; III, A - assume that the modulation and coding scheme of radio communications for each client are determined appropriately while considering its channel state so that packet-loss rate is negligible; III, B - asks random clients to inform the MEC operator of their resource information such as wireless channel states, computational capacities (e.g., if they can spare CPUs or GPUs for updating models), and the size of data resources relevant to the current training task. The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps.).

Claim 9: Nishio further teaches or suggests prioritizing, for being selected as second user equipment, the first user equipment which have been prioritized based on their time information and are associated with channel conditions meeting a predetermined threshold (see III, B - Using the information, the MEC operator determines which of the clients go to the subsequent steps to complete the steps within a certain deadline. The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps; III, C - Our goal in the Client Selection step is to allow the server to aggregate as many client updates as possible within a specified deadline. This criterion is based on the result from [5] that a larger fraction of clients used in each round saves the time required for global models to achieve a desired performance. Based on the criterion, the MEC operator selects clients who can complete the Distribution and Scheduled Update and Upload steps within a deadline; Abstract - when some clients are with limited computational resources (i.e., requiring longer update time) or under poor wireless channel conditions (longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions; I - heterogeneous mobile devices with different data resources, computational capabilities, and wireless channel conditions; II, B - upload time will be longer if a client is under a severely poor channel condition; III, A - assume that the modulation and coding scheme of radio communications for each client are determined appropriately while considering its channel state so that packet-loss rate is negligible; III, B - asks random clients to inform the MEC operator of their resource information such as wireless channel states, computational capacities (e.g., if they can spare CPUs or GPUs for updating models), and the size of data resources relevant to the current training task. The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps.).
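Claims 8 and 9 as mapped combine time-information priority with a channel-condition threshold. The two-stage prioritization can be sketched minimally as below; the dictionary keys and the SNR threshold are hypothetical names introduced for illustration, not terms from the claims or references.

```python
# Minimal sketch of the claims 8-9 mapping: rank candidate UEs by their time
# information (here, how long a ready model has been waiting), keeping only
# those whose channel quality meets a predetermined threshold. The dict keys
# and the SNR threshold are hypothetical.
def prioritize_ues(ues, snr_threshold_db):
    """ues: list of dicts with 'id', 'wait_s', and 'snr_db' keys."""
    eligible = [u for u in ues if u["snr_db"] >= snr_threshold_db]
    return sorted(eligible, key=lambda u: u["wait_s"], reverse=True)
```

A UE with a long wait but poor channel is filtered out before ranking, reflecting the claim 9 requirement that prioritized UEs also meet the channel threshold.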
Claim 10: Nishio further teaches or suggests selecting the second user equipment out of the first user equipment which have been prioritized based on their time information and channel conditions, based on ... uplink resources available for acquiring ready partial machine-learning models (see III, B - Using the information, the MEC operator determines which of the clients go to the subsequent steps to complete the steps within a certain deadline. The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps; III, C - Our goal in the Client Selection step is to allow the server to aggregate as many client updates as possible within a specified deadline. This criterion is based on the result from [5] that a larger fraction of clients used in each round saves the time required for global models to achieve a desired performance. Based on the criterion, the MEC operator selects clients who can complete the Distribution and Scheduled Update and Upload steps within a deadline; Abstract - when some clients are with limited computational resources (i.e., requiring longer update time) or under poor wireless channel conditions (longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions; I - heterogeneous mobile devices with different data resources, computational capabilities, and wireless channel conditions; II, B - upload time will be longer if a client is under a severely poor channel condition; III, A - assume that the modulation and coding scheme of radio communications for each client are determined appropriately while considering its channel state so that packet-loss rate is negligible; III, B - asks random clients to inform the MEC operator of their resource information such as wireless channel states, computational capacities (e.g., if they can spare CPUs or GPUs for updating models), and the size of data resources relevant to the current training task. The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps.). Tuor further teaches or suggests based on a quota of (see para. 0044 - may determine that the currently available contributors have a sufficient usefulness to perform the federated learning process when a predetermined number of contributors satisfy the local usefulness threshold; para. 0045 - when the contribution program 132 determines that the currently available contributors have a sufficient usefulness to perform the federated learning process, the contribution program 132 may determine that the modelling program 122 should perform the federated learning process based on the datasets of the currently available contributors.). Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Nishio, to include based on a quota of for the purpose of efficiently managing waiting durations in a federated learning framework to avoid undue waiting durations and reduce wasted processing, as taught by Tuor (0041 and 0046).

Claim(s) 12, 13, 15, 16, 30, and 33 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nishio, and further in view of Zhang et al., US Publication 2022/0044162 ("Zhang").

Claim 12: Zhang further teaches or suggests wherein the acquiring the ready partial machine-learning models respectively generated by the second user equipment comprises at least one of: requesting the second user equipment to transmit the ready partial machine-learning model by using uplink resources available for acquiring the ready partial machine-learning models; transmitting updated time information to the first user equipment not selected as second user equipment (see para. 0014 - implementing a blockchain of published metadata associated with a global machine-learning model and local datasets each corresponding to a respective client to facilitate federated learning for the global machine-learning model; para. 0033 - metadata published to the blockchain 130 may include one or more metadata fields, such as a training task identifier, a training round identifier, a client identifier, a number of training samples (e.g., data entries) included in a given local dataset, a training accuracy of a locally trained machine-learning model and/or a global machine-learning model, a testing accuracy of a locally trained machine-learning model and/or a global machine-learning model; para. 0044 - central server 202 may read the blockchain 206 and identify which of the clients 204 published metadata relating to their respective local model updates to the blockchain 206.
In some embodiments, the central server 202 may read the metadata of the local model updates corresponding to the clients 204. the central server 202 may send requests to each of the clients 204 that published metadata indicating their local model updates are ready to transfer their local model updates to the central server 202 at operations 224. At operations 226, the clients 204 may transfer the local model updates to the central server 202 such that the central server 202 may select and obtain at least the threshold number of local model updates from the clients 204 to update the global machine-learning model.). Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Nishio, to include wherein the acquiring the ready partial machine-learning models respectively generated by the second user equipment comprises at least one of: requesting the second user equipment to transmit the ready partial machine-learning model by using uplink resources available for acquiring the ready partial machine-learning models; transmitting updated time information to the first user equipment not selected as second user equipment for the purpose of efficiently indicating metadata pertaining to local machine-learning model training, improving client selection process within a federated learning framework, as taught by Zhang (0014, 0033, and 0044). Claim 13: Nishio further teaches or suggests wherein detecting the first user equipment comprises at least one of: requesting the plurality of user equipment of the cellular communication system to indicate a status ...; requesting the plurality of user equipment of the cellular communication system ... to indicate the time information (see III, B - Using the information, the MEC operator determines which of the clients go to the subsequent steps to complete the steps within a certain deadline. 
The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps; III, C - Our goal in the Client Selection step is to allow the server to aggregate as many client updates as possible within a specified deadline. This criterion is based on the result from [5] that a larger fraction of clients used in each round saves the time required for global models to achieve a desired performance. Based on the criterion, the MEC operator selects clients who can complete the Distribution and Scheduled Update and Upload steps within a deadline; Abstract - when some clients are with limited computational resources (i.e., requiring longer update time) or under poor wireless channel conditions (longer upload time). Our new FL protocol, which we refer to as FedCS, mitigates this problem and performs FL efficiently while actively managing clients based on their resource conditions; I - heterogeneous mobile devices with different data resources, computational capabilities, and wireless channel conditions; II, B - upload time will be longer if a client is under a severely poor channel condition; III, A - assume that the modulation and coding scheme of radio communications for each client are determined appropriately while considering its channel state so that packet-loss rate is negligible; III, B - asks random clients to inform the MEC operator of their resource information such as wireless channel states, computational capacities (e.g., if they can spare CPUs or GPUs for updating models), and the size of data resources relevant to the current training task. The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps.). 
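To make the cited mechanism concrete, the deadline-based client selection the Examiner attributes to Nishio (estimate each client's update and upload time from reported resource information, then keep clients that can finish within the round deadline) can be sketched roughly as follows. This is a simplified illustrative Python sketch, not code from Nishio: the class, function names, and the greedy heuristic are assumptions, and FedCS's actual scheduling (overlapping computation with sequential uploads) is more involved.

```python
# Hypothetical sketch of FedCS-style deadline-based client selection.
# Each client reports resource information; the operator estimates its
# update (local training) and upload times and greedily admits the
# fastest clients whose cumulative cost still fits the deadline.
from dataclasses import dataclass


@dataclass
class ClientInfo:
    client_id: str
    update_time_s: float   # estimated local-training time from compute capacity
    upload_time_s: float   # estimated upload time from wireless channel state


def select_clients(clients: list[ClientInfo], deadline_s: float) -> list[str]:
    """Greedily select clients whose estimated round time fits the deadline,
    preferring faster clients so as many updates as possible are aggregated."""
    selected: list[str] = []
    elapsed = 0.0
    # Sort by total estimated cost so cheap clients are considered first.
    for c in sorted(clients, key=lambda c: c.update_time_s + c.upload_time_s):
        cost = c.update_time_s + c.upload_time_s
        if elapsed + cost <= deadline_s:
            selected.append(c.client_id)
            elapsed += cost
    return selected
```

The greedy ordering reflects the paper's stated criterion that aggregating a larger fraction of client updates per round shortens overall training; clients that would blow the deadline are simply deferred to a later round.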
Zhang further teaches or suggests status of the partial machine learning models, wherein the status indicates whether or not the partial machine learning models are ready; ... which comprise a ready partial machine-learning model (see para. 0014 - implementing a blockchain of published metadata associated with a global machine-learning model and local datasets each corresponding to a respective client to facilitate federated learning for the global machine-learning model; para. 0033 - metadata published to the blockchain 130 may include one or more metadata fields, such as a training task identifier, a training round identifier, a client identifier, a number of training samples (e.g., data entries) included in a given local dataset, a training accuracy of a locally trained machine-learning model and/or a global machine-learning model, a testing accuracy of a locally trained machine-learning model and/or a global machine learning model; para. 0044 - central server 202 may read the blockchain 206 and identify which of the clients 204 published metadata relating to their respective local model updates to the blockchain 206. In some embodiments, the central server 202 may read the metadata of the local model updates corresponding to the clients 204. The central server 202 may send requests to each of the clients 204 that published metadata indicating their local model updates are ready to transfer their local model updates to the central server 202 at operations 224. At operations 226, the clients 204 may transfer the local model updates to the central server 202 such that the central server 202 may select and obtain at least the threshold number of local model updates from the clients 204 to update the global machine-learning model.). 
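The readiness-metadata flow the Examiner cites from Zhang (the server reads published metadata, requests transfers only from clients that reported a ready local update, and stops once a threshold number of updates is collected) can be illustrated by the following minimal Python sketch. All names and field keys here are hypothetical assumptions for illustration; Zhang's disclosure is framed around a blockchain of published metadata, which this sketch abstracts to a plain list of records.

```python
# Hypothetical sketch of threshold-gated collection of ready local updates,
# loosely following the workflow cited from Zhang paras. 0033 and 0044.
# metadata_records: one dict per client, as if read from the metadata ledger.
# fetch_update: callable that requests a client's local model update.

def collect_ready_updates(metadata_records: list[dict],
                          fetch_update,
                          threshold: int) -> dict:
    """Request updates from clients whose metadata marks them ready,
    returning as soon as the threshold number of updates is gathered."""
    updates: dict = {}
    for record in metadata_records:
        if not record.get("update_ready"):
            continue  # skip clients whose local model is not ready yet
        client_id = record["client_id"]
        updates[client_id] = fetch_update(client_id)  # request the transfer
        if len(updates) >= threshold:
            break  # enough local updates to refresh the global model
    return updates
```

Gating the global aggregation on a threshold of ready updates, rather than polling every client, is the efficiency rationale the rejection invokes for the combination.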
Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Nishio, to include status of the partial machine learning models, wherein the status indicates whether or not the partial machine learning models are ready; ... which comprise a ready partial machine-learning model for the purpose of efficiently indicating metadata pertaining to local machine-learning model training, improving client selection process within a federated learning framework, as taught by Zhang (0014, 0033, and 0044). Claim 15: Nishio teaches or suggests a user equipment of a plurality of user equipment for use in a cellular communication system, the user equipment comprising: at least one processor; and at least one memory including computer program code configured to, with the at least one processor, cause the user equipment to perform: indicating ... as a distributed node of a federated machine-learning concept ... time information associated with the ready partial machine-learning model (see Fig. 1; Protocol 1, 2; I - sets a certain deadline FedCS for clients to download, update, and upload ML models in the FL protocol. Then, the MEC operator selects clients such that the server can aggregate as many client updates as possible in limited time frames, which makes the overall training process efficient and reduces a required time for training ML models. This is technically formulated by a client selection problem that determines which clients participate in the training process and when each client has to complete the process while considering the computation and communication resource constraints imposed by the client; III, B - Using the information, the MEC operator determines which of the clients go to the subsequent steps to complete the steps within a certain deadline. 
The operator refers to this information in the subsequent Client Selection step to estimate the time required for the Distribution and Scheduled Update and Upload steps and to determine which clients go to these steps; III, C - Our goal in the Client Selection step is to allow the server to aggregate as many client updates as possible within a specified deadline. This criterion is based on the result from [5] that a larger fraction of clients used in each round saves the time required for global models to achieve a desired performance. Based on the criterion, the MEC operator selects clients who can complete the Distribution and Scheduled Update and Upload steps within a deadline.); wherein partial machine-learning models generated by the plurality of user equipment are to be used to update a global machine-learning model at a network side of the cellular communication system (see Fig. 1; Protocol 1, 2; I - FL iteratively asks random clients to 1) download parameters of a trainable model from a certain server, 2) update the model with their own data; III, B - clients update global models; III, B - The clients update global models and upload the new parameters using the RBs allocated by the MEC operator.); the at least one memory and computer program code being further configured to, with the at least one processor, cause the apparatus to perform transmitting the ready partial machine-learning model using uplink resources available for acquiring the ready partial machine-learning models (see Fig. 1; Protocol 1, 2; I - FL iteratively asks random clients to 1) download parameters of a trainable model from a certain server, 2) update the model with their own data and uploads the updated model parameters to the server; III, B - clients update global models; III, B - The clients update global models and upload the new parameters using the RBs allocated by the MEC operator.). Nishio does not explicitly disclose a status whether or not the user equipment ... 
has a ready partial machine-learning model ...; ... upon a corresponding request from the network side. Zhang teaches or suggests a status whether or not the user equipment ... has a ready partial machine-learning model ...; ... upon a corresponding request from the network side (see para. 0014 - implementing a blockchain of published metadata associated with a global machine-learning model and local datasets each corresponding to a respective client to facilitate federated learning for the global machine-learning model; para. 0033 - metadata published to the blockchain 130 may include one or more metadata fields, such as a training task identifier, a training round identifier, a client identifier, a number of training samples (e.g., data entries) included in a given local dataset, a training accuracy of a locally trained machine-learning model and/or a global machine-learning model, a testing accuracy of a locally trained machine-learning model and/or a global machine learning model; para. 0044 - central server 202 may read the blockchain 206 and identify which of the clients 204 published metadata relating to their respective local model updates to the blockchain 206. In some embodiments, the central server 202 may read the metadata of the local model updates corresponding to the clients 204. The central server 202 may send requests to each of the clients 204 that published metadata indicating their local model updates are ready to transfer their local model updates to the central server 202 at operations 224. At operations 226, the clients 204 may transfer the local model updates to the central server 202 such that the central server 202 may select and obtain at least the threshold number of local model updates from the clients 204 to update the global machine-learning model.). 
Accordingly, it would have been obvious to one having ordinary skill before the effective filing date of the claimed invention to modify the system and method, taught in Nishio, to include a status whether or not the user equipment ... has a ready partial machine-learning model ...; ... upon a corresponding request from the network side for the purpose of efficiently indicating metadata pertaining to local machine-learning model training, improving the client selection process within a federated learning framework, as taught by Zhang (0014, 0033, and 0044). Claim(s) 30 and 33: Claim(s) 30 and 33 correspond to Claim 15, and thus, Nishio and Zhang teach or suggest the limitations of claim(s) 30 and 33 as well. Claim 16: Nishio further teaches or suggests wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the user equipment to further perform: resetting the time information after transmitting the ready partial machine-learning model; or updating the time information upon receiving corresponding signaling from the network side (see III, B - iterated for multiple rounds until the global model achieves a desired performance. Until the model achieves a certain desired performance (e.g., a classification accuracy of 90%) or the final deadline arrives, all steps but Initialization are iterated for multiple rounds.). Allowable Subject Matter Claim 11 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew T McIntosh whose telephone number is (571)270-7790. The examiner can normally be reached M-Th 8:00am-5:30pm. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle, can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANDREW T MCINTOSH/Primary Examiner, Art Unit 2144

Prosecution Timeline

Apr 12, 2023
Application Filed
Jan 07, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602534
Method and System to Display Content from a PDF Document on a Small Screen
2y 5m to grant Granted Apr 14, 2026
Patent 12596757
NATIVE INTEGRATION OF ARBITRARY DATA SOURCES
2y 5m to grant Granted Apr 07, 2026
Patent 12572617
SYSTEM AND METHOD FOR THE GENERATION AND EDITING OF TEXT CONTENT IN WEBSITE BUILDING SYSTEMS
2y 5m to grant Granted Mar 10, 2026
Patent 12561191
TRAINING METHOD AND APPARATUS FOR FAULT RECOGNITION MODEL, FAULT RECOGNITION METHOD AND APPARATUS, AND ELECTRONIC DEVICE
2y 5m to grant Granted Feb 24, 2026
Patent 12547874
DEPLOYING PARALLELIZABLE DEEP LEARNING MODELS BY ADAPTING TO THE COMPUTING DEVICES
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
95%
With Interview (+18.0%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 511 resolved cases by this examiner. Grant probability derived from career allow rate.
