Prosecution Insights
Last updated: April 19, 2026
Application No. 17/784,570

METHODS FOR CASCADE FEDERATED LEARNING FOR TELECOMMUNICATIONS NETWORK PERFORMANCE AND RELATED APPARATUS

Status: Non-Final Office Action (§103)
Filed: Jun 10, 2022
Examiner: LIN, SHERMAN L
Art Unit: 2447
Tech Center: 2400 — Computer Networks
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 7 (Non-Final)
Grant Probability: 29% (At Risk)
Projected OA Rounds: 7-8
Projected Time to Grant: 6y 3m
Grant Probability with Interview: 66%

Examiner Intelligence

Career Allow Rate: 29% (75 granted / 255 resolved; -28.6% vs TC avg) — grants only 29% of cases
Interview Lift: +36.9% among resolved cases with an interview
Avg Prosecution: 6y 3m typical timeline, with 42 applications currently pending
Total Applications: 297 across all art units (career history)
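
The headline figures reconcile arithmetically: the career allow rate is granted over resolved, and the with-interview probability shown in the header is that rate plus the interview lift in percentage points. A quick check (all values copied from the panel above; nothing below is new data):

```python
# Quick arithmetic check of the examiner statistics shown above.
granted, resolved = 75, 255
allow_rate = granted / resolved                  # career allow rate
interview_lift = 0.369                           # +36.9 percentage points

print(f"allow rate: {allow_rate:.1%}")                       # -> 29.4%
print(f"with interview: {allow_rate + interview_lift:.0%}")  # -> 66%
```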

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 73.2% (+33.2% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 3.9% (-36.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 255 resolved cases.
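
One way to read the deltas: each is the examiner's figure minus the Tech Center baseline, so subtracting the delta recovers the baseline estimate behind each row. Notably, all four rows imply the same 40.0% baseline, consistent with a single average line on the original chart. A short check with the figures above:

```python
# baseline = examiner figure - delta (both in percentage points).
rows = {"§101": (11.2, -28.8), "§103": (73.2, +33.2),
        "§102": (9.5, -30.5), "§112": (3.9, -36.1)}
for statute, (figure, delta) in rows.items():
    print(f"{statute}: implied TC average = {figure - delta:.1f}%")  # 40.0% each
```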

Office Action — Non-Final, §103 (mailed Jan 09, 2026)
DETAILED ACTION

In a communication received on 5 November 2025, Applicant amended claim 1. Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2021/0042628 A1) in view of Pezzillo et al. (US 2019/0370686 A1) and Flanagan et al. (US 2016/0212633 A1), and further in view of Frederiksen et al. (US 2008/0026744 A1).

With respect to claim 1, Zhou discloses: a method performed by a network computing device (i.e., computer system for use with an AI platform in Zhou, ¶0003) in a telecommunications network for adaptively deploying an aggregated machine learning model (i.e., federated learning framework with a primary MLM and local MLMs established in a hierarchy in Zhou, ¶0001) and an output parameter (i.e., secondary local MLM includes model updates to the secondary MLM in Zhou, ¶0002), the method comprising:

aggregating a plurality of client machine learning models received from a plurality of client computing devices (i.e., local-data secondary hierarchy MLM providing model updates to the primary node in Zhou, ¶0037) to obtain an aggregated machine learning model (i.e., aggregating query responses which include model updates based on local data in Zhou, ¶0026, ¶0037), wherein the aggregating of the output performance metric comprises determining variations between a type of output performance metric output by the network machine learning model and the type of the output performance metric output by each of the client machine learning models (i.e., an F1 score represents a performance metric of the client model compared to an average of previous iterations of the client model, which correspond to the synchronized model of the primary model in Zhou, ¶0051);

training a network machine learning model with inputs (i.e., training the upper-hierarchy MLM using the collected replies and model updates of the secondary MLMs on the client devices in Zhou, ¶0032) comprising 1) the aggregated output performance metric (i.e., the training of the model is based on a performance metric that must indicate a sufficient improvement over previous performance in Zhou, ¶0051) and 2) at least one measurement (i.e., local data from the tier-1 node used as the basis for generating gradients with the tier-1 local MLMs in Zhou, ¶0036) to obtain an output parameter of the network machine learning model (i.e., communicating gradients, model parameters, or weight adjustments to the upper-tier model in Zhou, ¶0036);

sending to the plurality of client computing devices the aggregated machine learning model and the output parameter of the network machine learning model (i.e., synchronizing the local models with the global model, serving to add privacy and local pattern learning in Zhou, ¶0024, ¶0031); and

exchanging models and/or outputs with the plurality of client computing devices (i.e., global models synchronize model weights with the local models to perform local training on local data in Zhou, ¶0024, ¶0031), the exchanging of models and/or outputs of the models with the plurality of client computing devices comprising receiving the output performance metric of the plurality of the client machine learning models received from the plurality of client computing devices (i.e., the global model receives an aggregate or average of gradients from the local-tier MLMs trained on local data using the synchronized local models from the primary global model in Zhou, ¶0036-0037).

Zhou discloses that the global model receives an aggregate or average of gradients from the local-tier MLMs trained on local data using the synchronized local models from the primary global model (¶0036-0037). Zhou does not explicitly disclose the following.

Pezzillo, in order to improve versatility of machine learning models for lower latency, less cloud dependency, privacy, and model customization per device or audience (¶0003), discloses: aggregating an output performance metric (i.e., performance and confidence scores corresponding to the level of confidence of an observation of data, and performance scores as an accuracy of the labeled observation, in Pezzillo, ¶0015) of the plurality of the client machine learning models received from the plurality of client computing devices to obtain an aggregated output performance metric (i.e., comparing the performance score against the performance scores of other ML models to determine the best performing model in Pezzillo, ¶0052, ¶0072), and aggregating the variations to obtain the aggregated output performance metric (i.e., aggregating performance scores corresponding to historical and other ML model performance scores in Pezzillo, ¶0052).

Based on Zhou in view of Pezzillo, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Pezzillo to improve upon those of Zhou in order to improve versatility of machine learning models for lower latency, less cloud dependency, privacy, and model customization per device or audience.

Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou and Pezzillo do not explicitly disclose the following.

Flanagan, in order to scale network systems to meet demand by optimizing network resources through a self-optimizing network (¶0002), discloses: in the telecommunications network to control an operation (i.e., a self-optimizing network to automatically alter various network parameters to improve network performance in Flanagan, ¶0012) in the telecommunications network (i.e., a self-organizing network system comprising base stations to adjust and make changes for a better user experience in Flanagan, ¶0012, ¶0051) of a network parameter (i.e., measurements of RF, speed of travel, and other metrics corresponding to dropped calls in Flanagan, ¶0012, ¶0051).

Based on Zhou in view of Pezzillo, and further in view of Flanagan, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Flanagan to improve upon those of Zhou in order to scale network systems to meet demand by optimizing network resources through a self-optimizing network.

Zhou discloses controlled training of the second model based on whether a performance metric of the model reflects sufficient improvement suggesting convergence (¶0051). Zhou, Pezzillo, and Flanagan do not explicitly disclose the following.

Frederiksen, in order to control and manage control signaling bandwidth for data reporting (¶0078), discloses: wherein a signal type for each of the receiving and/or sending (i.e., CQI reporting mode selected corresponding to the physical transmission method for the channel quality indicator (CQI) in Frederiksen, ¶0080) and a frequency of the exchanging (i.e., modifying the signaling rate based on the rate of change in CQI slowing in Frederiksen, ¶0125) is determined based on at least one of a target rate that the at least one of the plurality of client computing devices sets for reaching a convergence for the aggregated machine learning model and a rate of change of at least one change in a speed of a network parameter of the telecommunications network (i.e., CQI reporting mode based on physical transmission method, and controlling signaling rate based on changes or variance in CQI, in Frederiksen, ¶0080, ¶0125).

Based on Zhou in view of Pezzillo and Flanagan, and further in view of Frederiksen, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Frederiksen to improve upon those of Zhou in order to control and manage control signaling bandwidth for data reporting.

With respect to claim 3, Zhou discloses: the method of claim 1, wherein the network machine learning model comprises a neural network (i.e., machine learning utilizing neural networks demonstrating learned behavior in Zhou, ¶0020).

With respect to claim 4, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou and Pezzillo do not explicitly disclose the following.

Flanagan, in order to scale network systems to meet demand by optimizing network resources through a self-optimizing network (¶0002), discloses: the method of claim 1, wherein the at least one measurement of a network parameter comprises at least one measurement of a parameter of a cell of the telecommunications network (i.e., measurements of signal strength, propagation delay, and transmitter power levels in Flanagan, ¶0036, ¶0052).

Based on Zhou in view of Pezzillo, and further in view of Flanagan, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Flanagan to improve upon those of Zhou in order to scale network systems to meet demand by optimizing network resources through a self-optimizing network.
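
For readers less familiar with the claimed architecture, the claim 1 mapping above describes one round of a cascade federated-learning loop: average the client models, aggregate the clients' performance metrics as variations against the network model's metric of the same type, train a network-level model on that aggregated metric plus a network measurement, then push the aggregated model and resulting output parameter back to the clients. The following is a minimal illustrative sketch only, assuming linear weight vectors, FedAvg-style mean aggregation, and a stand-in update rule; none of the names or values come from the application or the cited references.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_models(client_weights):
    # FedAvg-style aggregation: element-wise mean of the client model weights.
    return np.mean(client_weights, axis=0)

def aggregate_metric(client_metrics, network_metric):
    # Aggregate per-client metrics as variations against the network model's
    # metric of the same type (per the claim language), then average them.
    return float(np.mean([m - network_metric for m in client_metrics]))

# Toy state: three clients, each holding a small weight vector and reporting
# an F1-like performance metric (all values made up for illustration).
client_weights = [rng.normal(size=4) for _ in range(3)]
client_metrics = [0.81, 0.78, 0.84]
network_metric = 0.80          # same metric type, produced by the network model
cell_measurement = 0.62        # e.g. a normalized per-cell KPI

for round_idx in range(3):
    # 1) aggregate the received client models into one aggregated model
    aggregated_model = aggregate_models(client_weights)
    # 2) aggregate the clients' output performance metrics (as variations)
    agg_metric = aggregate_metric(client_metrics, network_metric)
    # 3) "train" the network model on the aggregated metric plus a network
    #    measurement; this stand-in update yields the output parameter
    output_parameter = 0.5 * agg_metric + 0.5 * cell_measurement
    # 4) send the aggregated model and output parameter back to the clients,
    #    which locally retrain and report fresh metrics (simulated here)
    client_weights = [aggregated_model + rng.normal(scale=0.1, size=4)
                      for _ in client_weights]
    client_metrics = [min(m + 0.01, 1.0) for m in client_metrics]
    print(f"round {round_idx}: output_parameter={output_parameter:.3f}")
```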
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2021/0042628 A1) in view of Pezzillo et al. (US 2019/0370686 A1), Flanagan et al. (US 2016/0212633 A1), and Frederiksen et al. (US 2008/0026744 A1), and further in view of Sanketi et al. (US 2019/0050749 A1).

With respect to claim 2, Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.

Sanketi, in order to achieve higher accuracy in inferences by retraining the model on data centralized by data collection (¶0044), discloses: the method of claim 1, wherein the output performance metric of the plurality of the client machine learning models comprises a gradient of the variations determined between the type of output performance metric output by the network machine learning model and the type of the output performance metric output by each of the client machine learning models (i.e., generating an update to the global model based on aggregating the gradient, that is, changes to parameters of the model based on locally stored data, of the computing devices utilizing the global model; the gradient is determined based on changes to the parameters or metrics of the model to retrain and improve accuracy in Sanketi, ¶0018).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of Sanketi, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Sanketi to improve upon those of Zhou in order to achieve higher accuracy in inferences by retraining the model on data centralized by data collection.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2021/0042628 A1) in view of Pezzillo et al. (US 2019/0370686 A1), Flanagan et al. (US 2016/0212633 A1), and Frederiksen et al. (US 2008/0026744 A1), and further in view of Bhalla et al. (US 2015/0135012 A1) and McMahan et al. (US 2019/0340534 A1).

With respect to claim 5, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.

Bhalla, in order to predict by extracting performance metrics for the network nodes from a plurality of data sources (abstract), discloses: the method of claim 3, wherein the training the network machine learning model with the inputs comprising 1) the aggregated output performance metric and 2) at least one measurement of a network parameter to obtain the output parameter of the network machine learning model comprises: providing to input nodes of a neural network the aggregated output performance metric (i.e., section 0024 teaches training a model based on the aggregated performance metrics; section 0038 teaches aggregating performance metrics from network nodes to create an NPAR; section 0045 teaches at least one identified input variable during training, which teaches a measured network parameter used to train a model; section 0015 teaches that the input variable includes network telemetry data, which ties the input variable of section 0045 to network parameters; section 0021 teaches that the input variable is from the most current and updated data).
Bhalla further teaches continuing to perform the training of the neural network to obtain a trained network machine learning model based on a further output parameter of the at least one output layer of the neural network, the at least one output layer providing the further output responsive to processing through the input nodes of the neural network a stream of 1) the aggregated output performance metric and 2) at least one measurement of the network parameter (i.e., section 0024 teaches training a model based on the aggregated performance metrics; section 0038 teaches aggregating performance metrics from network nodes to create an NPAR; section 0045 teaches at least one identified input variable during training, which teaches a measured network parameter used to train a model; section 0015 teaches that the input variable includes network telemetry data, which ties the input variable of section 0045 to network parameters; section 0021 teaches that the input variable is from the most current and updated data).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of Bhalla, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Bhalla to improve upon those of Zhou in order to predict by extracting performance metrics for the network nodes from a plurality of data sources.

Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, Frederiksen, and Bhalla do not explicitly disclose the following.

McMahan, in order to scale machine learning corresponding to growing datasets and models by distributing the optimization (¶0002), discloses: adapting weights that are used by at least the input nodes of the neural network with a weight vector responsive to a reward value or a loss value of the output parameter of at least one output layer of the neural network (i.e., section 0046 teaches quantizing the weights; section 0042 teaches the quantization technique is related to loss values).

Based on Zhou in view of Pezzillo, Flanagan, Frederiksen, and Bhalla, and further in view of McMahan, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of McMahan to improve upon those of Zhou in order to scale machine learning corresponding to growing datasets and models by distributing the optimization.

With respect to claim 15, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.
Bhalla, in order to predict by extracting performance metrics for the network nodes from a plurality of data sources (abstract), discloses: the method of claim 1, further comprising: wherein the output parameter of the network machine learning model is an input to the aggregated machine learning model (i.e., section 0024 teaches training a model based on the aggregated performance metrics; section 0038 teaches aggregating performance metrics from network nodes to create an NPAR; section 0045 teaches at least one identified input variable during training, which teaches a measured network parameter used to train a model, and identifying input variables during the training process; section 0015 teaches that the input variable includes network telemetry data, which ties the input variable of section 0045 to network parameters; section 0021 teaches that the input variable is from the most current and updated data); and deciding an action to control the operation in the telecommunications network based on an output of the aggregated machine learning model (i.e., section 0023 teaches remotely configuring network nodes).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of Bhalla, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Bhalla to improve upon those of Zhou in order to predict by extracting performance metrics for the network nodes from a plurality of data sources.

Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, Frederiksen, and Bhalla do not explicitly disclose the following.

McMahan, in order to scale machine learning corresponding to growing datasets and models by distributing the optimization (¶0002), discloses: running the aggregated machine learning model after the training (i.e., section 0022 teaches aggregating the model updates from each client in the subset of clients to improve the global model and redistribute the updated global model to clients).

Based on Zhou in view of Pezzillo, Flanagan, Frederiksen, and Bhalla, and further in view of McMahan, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of McMahan to improve upon those of Zhou in order to scale machine learning corresponding to growing datasets and models by distributing the optimization.
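
Stepping back from the mapping for a moment: claim 5's limitation feeds the aggregated metric and a network measurement into a neural network's input nodes and adapts the input weight vector responsive to a loss (or reward) value on the output parameter. A minimal sketch of that single adaptation step, assuming a linear layer and squared-error loss (both assumptions for brevity, not details from the application):

```python
import numpy as np

# One illustrative weight-adaptation step: weights on the input nodes are
# updated responsive to a loss value computed on the output parameter.
w = np.array([0.2, -0.1])          # weight vector on the two input nodes
x = np.array([0.03, 0.62])         # inputs: [aggregated metric, measurement]
target = 0.5                       # desired output parameter

y = w @ x                          # output layer (linear stand-in)
loss = (y - target) ** 2           # loss value of the output parameter
grad = 2 * (y - target) * x        # dLoss/dw for squared error
w -= 0.1 * grad                    # adapt the weight vector (lr = 0.1)
print(f"loss={loss:.4f}, updated weights={w}")
```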
Claims 6, 12-14, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2021/0042628 A1) in view of Pezzillo et al. (US 2019/0370686 A1), Flanagan et al. (US 2016/0212633 A1), and Frederiksen et al. (US 2008/0026744 A1), and further in view of McMahan et al. (US 2019/0340534 A1).

With respect to claim 6, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.

McMahan, in order to scale machine learning corresponding to growing datasets and models by distributing the optimization (¶0002), discloses: receiving a decision from a client computing device running the aggregated machine learning model to control the operation in the telecommunications network; and performing an action on the decision to control the operation in the telecommunications network (i.e., section 0057 teaches providing the user with controls enabling control of communications on the network and performing actions, including selecting what data should or should not be transmitted).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of McMahan, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of McMahan to improve upon those of Zhou in order to scale machine learning corresponding to growing datasets and models by distributing the optimization.

With respect to claim 12, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.

McMahan, in order to scale machine learning corresponding to growing datasets and models by distributing the optimization (¶0002), discloses: the method of claim 1, further comprising: dynamically deciding on a machine learning model to predict an output parameter to control the operation in the telecommunications network, wherein the machine learning model is chosen from 1) a machine learning model accessible to the network computing device, 2) the aggregated machine learning model, and 3) the aggregated machine learning model and the network machine learning model (i.e., section 0022 teaches aggregating the model updates from each client in the subset of clients to improve the global model and redistribute the updated global model to clients).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of McMahan, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of McMahan to improve upon those of Zhou in order to scale machine learning corresponding to growing datasets and models by distributing the optimization.

With respect to claim 13, Zhou discloses: the method of claim 12, wherein the dynamically deciding on a machine learning model comprises a decision based on at least one change in a network parameter (i.e., when the model predicts with a closer delta score on an active channel in Zhou, ¶0002). Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.
McMahan, in order to scale machine learning corresponding to growing datasets and models by distributing the optimization (¶0002), discloses: the method of claim 12, wherein the dynamically deciding on a machine learning model comprises a decision based on at least one change in a network parameter of the telecommunications network and one of: 1) local information of at least one of the plurality of client computing devices is used to predict the parameter; 2) a measurement by the network computing device of at least one change in the network parameter is used to predict the parameter; and 3) both the local information of at least one of the plurality of client computing devices and the measurement by the network computing device of at least one change in the network parameter are used to predict the parameter (i.e., section 0004 teaches a locally stored local dataset is used).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of McMahan, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of McMahan to improve upon those of Zhou in order to scale machine learning corresponding to growing datasets and models by distributing the optimization.

With respect to claim 14, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.

McMahan, in order to scale machine learning corresponding to growing datasets and models by distributing the optimization (¶0002), discloses: the method of claim 13, further comprising: communicating a signal to at least one client computing device corresponding to the decision (i.e., section 0028 teaches sending a signal with the current model to a subset of clients and having them independently update the model based on the local data).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of McMahan, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of McMahan to improve upon those of Zhou in order to scale machine learning corresponding to growing datasets and models by distributing the optimization.

With respect to claim 16, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.

McMahan, in order to scale machine learning corresponding to growing datasets and models by distributing the optimization (¶0002), discloses: the method of claim 1, further comprising: iterating on the network machine learning model during the training until the output parameter of the network machine learning model has a defined accuracy (i.e., section 0022 teaches iteratively performing a plurality of rounds until the global model improves; section 0116 teaches improvement based on accuracy).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of McMahan, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of McMahan to improve upon those of Zhou in order to scale machine learning corresponding to growing datasets and models by distributing the optimization.
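
Claim 16's limitation is essentially a stopping rule: repeat training rounds until the output parameter reaches a defined accuracy. A toy sketch, where the per-round improvement model (20% of the remaining gap) is purely an assumption for illustration:

```python
# Iterate training rounds until a defined accuracy is reached (claim 16).
target_accuracy = 0.95
accuracy, rounds = 0.70, 0
while accuracy < target_accuracy and rounds < 100:
    accuracy += (1.0 - accuracy) * 0.2   # stand-in for one training iteration
    rounds += 1
print(f"reached {accuracy:.3f} after {rounds} rounds")
```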
With respect to claim 17, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.

McMahan, in order to scale machine learning corresponding to growing datasets and models by distributing the optimization (¶0002), discloses: the method of claim 1, wherein the output parameter of the network machine learning model comprises at least one of: an aggregated weight of the aggregated machine learning model; a gradient of a variation between the output performance metric and the output parameter over a defined time period; and a loss metric indicating an accuracy of the network machine learning model (i.e., section 0022 teaches iteratively performing a plurality of rounds until the global model improves; section 0116 teaches improvement based on accuracy).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of McMahan, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of McMahan to improve upon those of Zhou in order to scale machine learning corresponding to growing datasets and models by distributing the optimization.

Claims 7-11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2021/0042628 A1) in view of Pezzillo et al. (US 2019/0370686 A1), Flanagan et al. (US 2016/0212633 A1), and Frederiksen et al. (US 2008/0026744 A1), and further in view of Bhalla et al. (US 2015/0135012 A1).

With respect to claim 7, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.

Bhalla, in order to predict by extracting performance metrics for the network nodes from a plurality of data sources (abstract), discloses: receiving, from a client computing device, a confidence value for a first decision by the client computing device running the aggregated machine learning model to control the operation in the telecommunications network; running the network machine learning model to obtain a second decision to control the operation of the telecommunications network; and determining a third decision to control the operation in the telecommunications network based on combining the first decision and the second decision (i.e., section 0016 teaches rating each model based on accuracy (confidence) and efficiency, among other considerations, and selecting the best model based on multiple considerations).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of Bhalla, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Bhalla to improve upon those of Zhou in order to predict by extracting performance metrics for the network nodes from a plurality of data sources.

With respect to claim 8, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.
Bhalla, in order to predict by extracting performance metrics for the network nodes from a plurality of data sources (abstract), discloses: deciding an action to control the operation in the telecommunications network based on the output parameter of the network machine learning model after the network machine learning model is trained (i.e., section 0023 teaches remotely configuring network nodes).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of Bhalla, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Bhalla to improve upon those of Zhou in order to predict by extracting performance metrics for the network nodes from a plurality of data sources.

With respect to claim 9, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.

Bhalla, in order to predict by extracting performance metrics for the network nodes from a plurality of data sources (abstract), discloses: at least one of: receiving at least one of the plurality of client machine learning models from a client computing device while iterating on the network machine learning model during the training; and receiving at least one of the output performance metric and at least one of the plurality of client machine learning models from the client computing device while iterating on the network machine learning model during the training (i.e., section 0024 teaches training a model based on the aggregated performance metrics; section 0038 teaches aggregating performance metrics from network nodes to create an NPAR; section 0045 teaches at least one identified input variable during training, which teaches a measured network parameter used to train a model, and identifying input variables during the training process; section 0015 teaches that the input variable includes network telemetry data, which ties the input variable of section 0045 to network parameters; section 0021 teaches that the input variable is from the most current and updated data).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of Bhalla, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Bhalla to improve upon those of Zhou in order to predict by extracting performance metrics for the network nodes from a plurality of data sources.

With respect to claim 10, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.
Bhalla, in order to predict by extracting performance metrics for the network nodes from a plurality of data sources (abstract), discloses: the method of claim 1, wherein the sending to the plurality of client computing devices the aggregated machine learning model and the output parameter of the network machine learning model comprises at least one of: sending the aggregated machine learning model to the plurality of client computing devices while iterating on the network machine learning model during the training; and sending the output parameter of the network machine learning model and the aggregated machine learning model to the plurality of client computing devices while iterating on the network machine learning model during the training (i.e., section 0024 teaches training a model based on the aggregated performance metrics; section 0038 teaches aggregating performance metrics from network nodes to create an NPAR; section 0045 teaches at least one identified input variable during training, which teaches a measured network parameter used to train a model, and identifying input variables during the training process; section 0015 teaches that the input variable includes network telemetry data, which ties the input variable of section 0045 to network parameters; section 0021 teaches that the input variable is from the most current and updated data).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of Bhalla, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Bhalla to improve upon those of Zhou in order to predict by extracting performance metrics for the network nodes from a plurality of data sources.

With respect to claim 11, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.

Bhalla, in order to predict by extracting performance metrics for the network nodes from a plurality of data sources (abstract), discloses: the method of claim 1, wherein the aggregated output performance metric further comprises adapting the aggregated output performance metric to a number of client computing devices that report the output performance metric to the network computing device based on one of: a weighted average of the output performance metric of the plurality of the client machine learning models; a statistical combination of the output performance metric of the plurality of the client machine learning models; and a minimum and a maximum of the output performance metric of the plurality of the client machine learning models (i.e., section 0042 teaches a minimum, maximum, and average value).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of Bhalla, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Bhalla to improve upon those of Zhou in order to predict by extracting performance metrics for the network nodes from a plurality of data sources.
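
Claim 11 recites three alternative ways to adapt the aggregated metric to however many clients report in a round: a weighted average, a statistical combination, or the minimum and maximum. A compact sketch; the example weights (e.g., proportional to each client's data volume) are assumptions, not claim details:

```python
import numpy as np

# Three alternative aggregations of the clients' reported performance metrics.
metrics = np.array([0.81, 0.78, 0.84])   # one metric per reporting client
weights = np.array([0.5, 0.3, 0.2])      # illustrative per-client weights

weighted_average = float(weights @ metrics)              # weighted average
statistical_combination = float(np.median(metrics))      # one statistical combination
minimum, maximum = float(metrics.min()), float(metrics.max())
print(weighted_average, statistical_combination, minimum, maximum)
```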
With respect to claim 18, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.

Bhalla, in order to predict by extracting performance metrics for the network nodes from a plurality of data sources (abstract), discloses: the method of claim 1, updating the aggregated machine learning model after the training, wherein the updating is performed based on one of: an environmental change in the telecommunications network; an event in a neighboring cell of the telecommunications network; a fluctuation in a channel of the telecommunications network; a fluctuation in a load of a target cell and a neighbor cell, respectively; and an event in the telecommunications network (i.e., section 0046 teaches event outcomes including failed conditions, which teaches an event in the network).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of Bhalla, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Bhalla to improve upon those of Zhou in order to predict by extracting performance metrics for the network nodes from a plurality of data sources.

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2021/0042628 A1) in view of Pezzillo et al. (US 2019/0370686 A1), Flanagan et al. (US 2016/0212633 A1), and Frederiksen et al. (US 2008/0026744 A1), and further in view of Bhalla et al. (US 2015/0135012 A1) and Reynolds (US 2021/0099552 A1).

With respect to claim 19, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, Frederiksen, and Bhalla do not explicitly disclose the following.

Reynolds, in order to provide an artificial neural network that may, in some embodiments, be configured to determine and detect a base station networking device's proper response to network traffic (¶0055), discloses: the method of claim 18, wherein the updating the aggregated machine learning model after the training is sent to at least one of the plurality of the client computing devices based on one of: enabling a physical (PHY) layer, a medium access control (MAC) layer, a radio resource control (RRC) layer, a packet data convergence protocol (PDCP) layer, and an application layer for sending the aggregated machine learning model to the plurality of client computing devices; enabling a PHY layer with a mini slot for sending the aggregated machine learning model to the plurality of client computing devices; and enabling an application layer for sending the aggregated machine learning model to the plurality of client computing devices (i.e., section 0069 teaches RRC; section 0070 teaches MAC, the physical layer, and PDCP).

Based on Zhou in view of Pezzillo, Flanagan, Frederiksen, and Bhalla, and further in view of Reynolds, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Reynolds to improve upon those of Zhou in order to provide an artificial neural network that may, in some embodiments, be configured to determine and detect a base station networking device's proper response to network traffic.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (US 2021/0042628 A1) in view of Pezzillo et al. (US 2019/0370686 A1), Flanagan et al. (US 2016/0212633 A1), and Frederiksen et al. (US 2008/0026744 A1), and further in view of Reynolds (US 2021/0099552 A1).
With respect to claim 20, Zhou discloses a federated learning framework with a primary MLM and local MLMs established in a hierarchy (¶0001). Zhou, Pezzillo, Flanagan, and Frederiksen do not explicitly disclose the following.

Reynolds, in order to provide an artificial neural network that may, in some embodiments, be configured to determine and detect a base station networking device's proper response to network traffic (¶0055), discloses: the method of claim 1, wherein the plurality of client machine learning models received from the plurality of client computing devices and the sending to the plurality of client computing devices the aggregated machine learning model comprise the receiving and/or the sending, respectively, performed via a first message received and/or sent using the signal type as follows: a radio resource control (RRC) configuration signal; a physical downlink control channel (PDCCH) signal from the network computing device; a physical uplink control channel (PUCCH) signal from at least one client computing device; and a medium access control (MAC) control element signal (i.e., section 0069 teaches RRC; section 0070 teaches MAC, the physical layer, and PDCP).

Based on Zhou in view of Pezzillo, Flanagan, and Frederiksen, and further in view of Reynolds, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Reynolds to improve upon those of Zhou in order to provide an artificial neural network that may, in some embodiments, be configured to determine and detect a base station networking device's proper response to network traffic.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHERMAN L LIN, whose telephone number is (571) 270-7446. The examiner can normally be reached Monday through Friday, 9:00 AM - 5:00 PM (Eastern).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joon Hwang, can be reached at 571-272-4036. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Sherman Lin
1/9/2026
/S. L./ Examiner, Art Unit 2447
/JOON H HWANG/ Supervisory Patent Examiner, Art Unit 2447

Prosecution Timeline

Jun 10, 2022 — Application Filed
Jul 01, 2023 — Non-Final Rejection (§103)
Oct 09, 2023 — Response Filed
Oct 20, 2023 — Final Rejection (§103)
Jan 31, 2024 — Request for Continued Examination
Feb 06, 2024 — Response after Non-Final Action
Mar 19, 2024 — Non-Final Rejection (§103)
Jun 28, 2024 — Response Filed
Oct 15, 2024 — Final Rejection (§103)
Dec 20, 2024 — Response after Non-Final Action
Mar 10, 2025 — Request for Continued Examination
Mar 19, 2025 — Response after Non-Final Action
Apr 20, 2025 — Non-Final Rejection (§103)
Jul 25, 2025 — Response Filed
Aug 28, 2025 — Final Rejection (§103)
Nov 05, 2025 — Response after Non-Final Action
Dec 04, 2025 — Request for Continued Examination
Dec 18, 2025 — Response after Non-Final Action
Jan 09, 2026 — Non-Final Rejection (§103, current)

Precedent Cases

Applications in similar technology granted by this same examiner

Patent 12494926
QUIC TRANSPORT PROTOCOL-BASED COMMUNICATION METHOD AND SYSTEM
Granted Dec 09, 2025 (2y 5m to grant)
Patent 12445523
DISCOVERY AND CONFIGURATION OF IOT DEVICES
Granted Oct 14, 2025 (2y 5m to grant)
Patent 12267257
VIRTUAL MACHINE MIGRATION IN CLOUD INFRASTRUCTURE NETWORKS
Granted Apr 01, 2025 (2y 5m to grant)
Patent 12206751
METHODS AND SYSTEMS FOR CONTENT DISTRIBUTION
Granted Jan 21, 2025 (2y 5m to grant)
Patent 12058057
SCHEDULING OF DATA TRAFFIC
Granted Aug 06, 2024 (2y 5m to grant)
Based on this examiner's 5 most recent grants; studying what changed in those files before allowance suggests how to get past this examiner.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 29%
With Interview: 66% (+36.9%)
Median Time to Grant: 6y 3m
PTA Risk: High

Based on 255 resolved cases by this examiner. Grant probability is derived from the career allow rate.
