Prosecution Insights
Last updated: April 19, 2026
Application No. 17/286,287

HANDLING OF MACHINE LEARNING TO IMPROVE PERFORMANCE OF A WIRELESS COMMUNICATIONS NETWORK

Status: Final Rejection (§103)
Filed: Apr 16, 2021
Examiner: WELCH, JENNIFER N
Art Unit: 2143
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 4 (Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 5-6
Estimated Time to Grant: 4y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (249 granted / 334 resolved), +19.6% vs Tech Center average (above average)
Interview Lift: +29.1% higher allowance rate among resolved cases with an interview
Typical Timeline: 4y 8m average prosecution; 24 applications currently pending
Career History: 358 total applications across all art units

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      16.8%    -23.2%
§103      40.6%    +0.6%
§102      16.3%    -23.7%
§112      18.5%    -21.5%

Tech Center average is an estimate. Based on career data from 334 resolved cases.
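The headline figures above are internally consistent and can be checked with a few lines of arithmetic. This is a quick sanity-check sketch; the variable names are ours, not from any analytics vendor's API:

```python
# Recompute the dashboard's headline statistics from the raw counts shown
# above. Pure arithmetic on the displayed figures; no USPTO data is queried.
granted, resolved = 249, 334
career_allow_rate = 100 * granted / resolved
print(round(career_allow_rate))  # 75, matching the "Career Allow Rate" card

# Each per-statute delta implies the same Tech Center average, which is
# consistent with the chart's single "Tech Center average estimate" line.
statute_rates = {"101": (16.8, -23.2), "103": (40.6, +0.6),
                 "102": (16.3, -23.7), "112": (18.5, -21.5)}
implied_tc_avg = {s: round(rate - delta, 1)
                  for s, (rate, delta) in statute_rates.items()}
print(implied_tc_avg)  # every statute implies a TC average of 40.0
```

Note that all four statute deltas back out to the same ~40% Tech Center baseline, so the "vs TC avg" figures appear to be measured against one pooled estimate rather than per-statute averages.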

Office Action

§103
DETAILED ACTION

Remarks

Claims 1-7, 10-14, 16-18, and 25-27 have been examined and rejected. This Office action is responsive to the amendment filed on 09/02/2025, which has been entered in the above-identified application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 5, 10, 13, 14, 16, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Johnsson et al. (WO 2018101862 A1, published 06/07/2018), hereinafter Johnsson, in view of Li et al. (US 11075929 B1, issued 07/27/2021), hereinafter Li.
Regarding claim 1, Johnsson teaches the claim comprising: A method performed in a wireless communications system for handling of machine learning to improve performance of a wireless communications network operating in the wireless communications system, the wireless communications system comprising a central network node and one or more intermediate network nodes arranged between the central network node and one or more leaf network nodes operating in the wireless communications network, at least one out of: the central network node, the one or more intermediate network nodes or the one or more leaf network nodes comprising a machine learning unit, the method comprising (Johnsson Figs. 1-10; [0043], Briefly described, a master node, a local node, a service assurance system, and a respective method performed thereby for predicting one or more metrics associated with a communication network are provided. Nodes in the distributed machine learning scenario that do not have noticeable contribution in improving the overall accuracy of the distributed learning activity are identified and their participation in the distributed learning is deactivated or alternatively limited, via signalling. This allows sparing there local resources for other important system activities/services or simply reduce their bandwidth/energy consumption; [00088], The master node and the local nodes are comprised in (or are part of) the communication network. As described above, the communication network may comprise system with servers which are interconnected via a network, but the system is a data network within the communication network. Still further, the communication network may be a wired communication network, e.g. a landline communication network such as a Public Switched Telephone Network, PSTN, or a radio/wireless communication network, where e.g. 
the master node may be a base station and the local node(s) may be sensors, stations, wireless devices etc; [0091], many eNodeBs may potentially be or comprising a local node e.g. by locally installing an Analytics Agent, AA. A Central Analytics Agent, CAA, can be installed e.g. on a master node e.g. on Evolved Packet Core, EPC, or virtual EPC, vEPC; [0092], A typical architecture of an Evolved UMTS Terrestrial Radio Access Network (E-UTRAN) and interaction with EPC in an LTE network is shown in the figure 4a. Functional modules of the eNodeB and EPC are shown in figure 4b. An exemplifying implementation of the solution comprising the methods described above is illustrated in figure 4c and 4d. In this exemplifying implementation of the solution, the eNodeB and EPC with installed AA and CAA agents are illustrated in the figures. In these networks (e.g. LTE) an eNodeB works as a base-station responsible for the communication with the mobiles (wireless devices) in one or multiple cells; [0117], the master node 500 comprising a processor 521; [0133], the local node 700 comprising a processor 721): by means of the machine learning unit and a machine learning model relating to at least one network node out of the one or more intermediate network nodes or the one or more leaf network nodes, determining a prediction of a performance of the at least one network node based on input data relating to the at least one network node; based on the determined prediction, performing one or more operations relating to the at least one network node (Johnsson Figs. 1-10; [0049], Figure 1a illustrates the method 100 comprising receiving 120 prediction(s) based on training data from local nodes in the communication network; [0054], The metrics associated with the communication network may relate to performance, anomalies and other information relative to current circumstances of the communication network. The training data may consists of both measurement data (X) from e.g. 
sensors associated with local nodes and actual, true or measured values (Y) that the local model shall be learned to predict later. In the prediction phase there are only X, while the Y will be predicted. The predictions may further relate to Operation, Administration and Management, OAM, data; [0077], Based on the received local reporting policy from the master node, the method comprises building 230 a local model based on locally available data; performing 240 a prediction based on the local model; and transmitting 250 the prediction to the master node in accordance with the received local reporting policy; [0079], Once the local node has built its local model based on the locally available data, the local node may perform 240 the prediction based on the local model. The prediction may comprise an indication of a likely value of one or more metrics based on the part of the communication network represented by the local node. Once the local node has performed the prediction, the local node may send the prediction to the master node in accordance with the received local reporting policy; [0078], The master node will use the prediction from the local node, possible together with prediction(s) from other local node(s), in order to determine one or more metrics associated with the communication network; [0084-0085], The local node may send also the weight parameter to the master node, wherein the master node may adjust the local reporting policy for the local node based on either one of or both the sent prediction and the sent determined weight parameter; [0087], the master node receiving 330 the prediction(s) from local nodes in the communication network, determining weight parameter(s) associated with the local nodes based on the received prediction(s) and previously received predictions, and adjusting a respective local reporting policy for one or more local nodes based on the determined weight parameter(s). 
The method 300 also comprises the one or more local nodes receiving 340 the local reporting policy from the master node in the communication network, the local reporting policy informing the local node of how to send prediction(s) to the master node. Based on the received reporting policy from the master node, the method comprises the one or more local nodes building 350 a local model based on locally available data, performing a prediction based on the local model, and transmitting the prediction to the master node in accordance with the received local reporting policy; [0124], the master node 500, 600 is configured for determining a global model for predicting one or more metrics associated with the communication network based on the received prediction(s) from the local nodes and the determined weight parameter(s); [0126], the master node 500, 600 is configured for adjusting the respective local reporting policy by deactivating a local node having a determined weight parameter not meeting a first threshold, wherein the local node will stop to send predictions to the master node; [0128-0129], adjusting the respective local reporting policy by changing the frequency with which a local node sends predictions to the master node depending on (i) prediction accuracy of the received prediction(s) from the local nodes, and/or (ii) the determined weight parameter(s); [0139], According to an embodiment, the received reporting policy comprises information instructing the local node to activate its predictions, deactivate its predictions or changing the frequency with which the local node transmits predictions to the master node; see also [0096], [0098-0100]). 
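The master/local reporting-policy mechanism quoted above ([0052], [0126], [0128-0129]) lends itself to a short sketch. Everything below is our illustration: Johnsson specifies behavior (weight parameters derived from prediction accuracy, deactivation of nodes whose weight misses a threshold), not code, and all class, method, and threshold names are invented for this example.

```python
# Sketch of the Johnsson-style reporting-policy loop quoted above: local
# nodes send predictions, the master scores them against ground truth, and
# nodes whose weight parameter falls below a threshold are deactivated.
# Names and the concrete accuracy formula are illustrative assumptions.

DEACTIVATE_THRESHOLD = 0.2  # hypothetical "first threshold" of [0126]

class MasterNode:
    def __init__(self):
        self.weights = {}   # node_id -> weight parameter
        self.policies = {}  # node_id -> local reporting policy

    def receive_prediction(self, node_id, prediction, ground_truth):
        # Accuracy based on how far the prediction is from the GT value
        # ([0052]); smaller error means accuracy closer to 1.
        accuracy = 1.0 / (1.0 + abs(prediction - ground_truth))
        # Blend with the previous weight so earlier predictions still count.
        prev = self.weights.get(node_id, accuracy)
        self.weights[node_id] = 0.5 * prev + 0.5 * accuracy
        # Adjust the local reporting policy: deactivate weak contributors.
        active = self.weights[node_id] >= DEACTIVATE_THRESHOLD
        self.policies[node_id] = "active" if active else "inactive"
        return self.policies[node_id]

master = MasterNode()
print(master.receive_prediction("eNB-1", prediction=9.5, ground_truth=10.0))
print(master.receive_prediction("eNB-2", prediction=50.0, ground_truth=10.0))
# eNB-1 predicts accurately and stays active; eNB-2 is far off and is
# deactivated, sparing its local resources as described in [0043].
```

A fuller model would also let the master reactivate dormant nodes when global accuracy drops ([0068], [0106]) and vary each node's reporting frequency rather than toggling it ([0128-0129]).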
by means one of the at least one network node and of another network node comprising the machine learning unit, training the machine learning model by using an input parameter relating to the performance of the at least one network node in order to choose one or more operations relating to the performance of the at least one network node, evaluating the machine learning model after performing the one or more operations relating to the performance of the at least one network node, and updating the machine learning model based on the one or more operations relating to the performance of the at least one network node (Johnsson Figs. 1-10; [0052], Once the master node has received the prediction(s) from the one or more local nodes, the master node may determine 130 weight parameter(s) associated with the local nodes based on the received prediction(s) and previously received predictions; the master node may calculate accuracy values for received predictions, which accuracy values may be based on the extent of which respective received prediction from respective local node differs from those respective local nodes' GT value and optionally also previously received one or more predictions. Then the master node may use these accuracy values when determining the weight parameter(s) for respective local nodes; [0054], The metrics associated with the communication network may relate to performance, anomalies and other information relative to current circumstances of the communication network. The training data may consists of both measurement data (X) from e.g. sensors associated with local nodes and actual, true or measured values (Y) that the local model shall be learned to predict later. In the prediction phase there are only X, while the Y will be predicted. 
The predictions may further relate to Operation, Administration and Management, OAM, data; [0068], overall accuracy at the master node goes below a certain threshold (or some other trigger), wherein the master node may activate a previously deactivated local node in order to achieve higher accuracy of the global prediction done by the master node (global model) based on the local predictions; [0084], Different predictions may be more or less accurate. In an example, the local node may itself determine the accuracy of a newly performed prediction. The local node may itself also determine if a newly performed prediction deviates more than to a certain extent from previously performed predictions. Depending on the determined accuracy, the local node may determine a weight parameter associated with itself; [0085], The local node may send also the weight parameter to the master node, wherein the master node may adjust the local reporting policy for the local node based on either one of or both the sent prediction and the sent determined weight parameter; [0096-0097], At the high level, CAA (at M0) uses the Winnow algorithm to make the predictions and the CAA works as a data fusion module based on the prediction inputs of all AAs (see figure 4e). The algorithm also assigns weights to all participating AA nodes. These weights may have to be calculated/updated e.g. using the training data gathered from local nodes (usually at periodic intervals) participating the distributed learning; [0103], One phase of the solution is model building; If the global model is outdated because of system state changes or concept drift; An update cycle can also be triggered periodically if new training data is available; [0104-0105], During each update cycle collect new training data at the CAA arriving from AAs of different node; Updating the global model at CAA involves updating the weight parameters for the Winnow algorithm. 
Compute/update the weights when the new training data becomes available; [0106], IF there is concept-drift detection or accuracy at CAA falls below a certain threshold value: 1 . Trigger signalling and send all the N local nodes: an ACTIVATE signal; [0124], the master node 500, 600 is configured for determining a global model for predicting one or more metrics associated with the communication network based on the received prediction(s) from the local nodes and the determined weight parameter(s); see also [0078], [0087], [0098-0100], [0126], [0128-0129], [0139]). the training of the machine learning model comprising training the machine learning model by using the received input parameter and a state relating to an environment of the at least one network node to choose one or more actions relating to the performance of the at least one network node; the updating of the machine learning model based on the one or more operations comprising updating the machine learning model based on the one or more operations and based on the state relating to the environment of the at least one network node (Johnsson Figs. 1-10; [0052], Once the master node has received the prediction(s) from the one or more local nodes, the master node may determine 130 weight parameter(s) associated with the local nodes based on the received prediction(s) and previously received predictions; the master node may calculate accuracy values for received predictions, which accuracy values may be based on the extent of which respective received prediction from respective local node differs from those respective local nodes' GT value and optionally also previously received one or more predictions. Then the master node may use these accuracy values when determining the weight parameter(s) for respective local nodes; [0054], The metrics associated with the communication network may relate to performance, anomalies and other information relative to current circumstances of the communication network. 
The training data may consists of both measurement data (X) from e.g. sensors associated with local nodes and actual, true or measured values (Y) that the local model shall be learned to predict later. In the prediction phase there are only X, while the Y will be predicted; [0068], overall accuracy at the master node goes below a certain threshold (or some other trigger), wherein the master node may activate a previously deactivated local node in order to achieve higher accuracy of the global prediction done by the master node (global model) based on the local predictions; [0084], Different predictions may be more or less accurate. In an example, the local node may itself determine the accuracy of a newly performed prediction. The local node may itself also determine if a newly performed prediction deviates more than to a certain extent from previously performed predictions. Depending on the determined accuracy, the local node may determine a weight parameter associated with itself; [0085], The local node may send also the weight parameter to the master node, wherein the master node may adjust the local reporting policy for the local node based on either one of or both the sent prediction and the sent determined weight parameter; [0096-0097], At the high level, CAA (at M0) uses the Winnow algorithm to make the predictions and the CAA works as a data fusion module based on the prediction inputs of all AAs (see figure 4e). The algorithm also assigns weights to all participating AA nodes. These weights may have to be calculated/updated e.g. 
using the training data gathered from local nodes (usually at periodic intervals) participating the distributed learning; [0103], One phase of the solution is model building; If the global model is outdated because of system state changes or concept drift; An update cycle can also be triggered periodically if new training data is available; [0104-0105], During each update cycle collect new training data at the CAA arriving from AAs of different node; Updating the global model at CAA involves updating the weight parameters for the Winnow algorithm. Compute/update the weights when the new training data becomes available; [0106], IF there is concept-drift detection or accuracy at CAA falls below a certain threshold value: 1 . Trigger signalling and send all the N local nodes: an ACTIVATE signal; [0124], the master node 500, 600 is configured for determining a global model for predicting one or more metrics associated with the communication network based on the received prediction(s) from the local nodes and the determined weight parameter(s); see also [0078], [0087], [0098-0100], [0126], [0128-0129], [0139]). and transmitting information relating to the machine learning model to one or more other network nodes (Johnsson Figs. 1-10; [0078], The master node will use the prediction from the local node, possible together with prediction(s) from other local node(s), in order to determine one or more metrics associated with the communication network; [0084], Different predictions may be more or less accurate. In an example, the local node may itself determine the accuracy of a newly performed prediction. The local node may itself also determine if a newly performed prediction deviates more than to a certain extent from previously performed predictions. 
Depending on the determined accuracy, the local node may determine a weight parameter associated with itself; [0085], The local node may send also the weight parameter to the master node, wherein the master node may adjust the local reporting policy for the local node based on either one of or both the sent prediction and the sent determined weight parameter; [0087], the master node receiving 330 the prediction(s) from local nodes in the communication network, determining weight parameter(s) associated with the local nodes based on the received prediction(s) and previously received predictions, and adjusting a respective local reporting policy for one or more local nodes based on the determined weight parameter(s). The method 300 also comprises the one or more local nodes receiving 340 the local reporting policy from the master node in the communication network, the local reporting policy informing the local node of how to send prediction(s) to the master node. Based on the received reporting policy from the master node, the method comprises the one or more local nodes building 350 a local model based on locally available data, performing a prediction based on the local model, and transmitting the prediction to the master node in accordance with the received local reporting policy; [0096], The way the typical distributed learning algorithm operates (e.g. Winnow) will now be described. From figure 4e, local service predictions from AAs at these machines are first executed and local prediction results are then sent to the CAA (at Mo). CAA updates the associated weights for the different server machines or nodes based on the prediction results (previous steps need to be executed whenever weights needs to be updated at CAA). After the previous steps, the fusion step is executed by the CAA (e.g. 
weighted majority algorithm) so as to compute the final prediction; [0098-0100], Defined control signals are sent from the CAA (at the master node) to the AAs (at the local nodes) to control their participation behaviour in the distributed learning algorithm dynamically under changing workload and resource availability; [0124], the master node 500, 600 is configured for determining a global model for predicting one or more metrics associated with the communication network based on the received prediction(s) from the local nodes and the determined weight parameter(s); [0126], According to a further embodiment, the master node 500, 600 is configured for adjusting the respective local reporting policy by deactivating a local node having a determined weight parameter not meeting a first threshold, wherein the local node will stop to send predictions to the master node; [0128], According to an embodiment, the master node 500, 600 is configured for adjusting the respective local reporting policy by changing the frequency with which a local node sends predictions to the master node depending on the determined weight parameter(s) of the local node; [0129], According to yet an embodiment, the master node 500, 600 is configured for adjusting the respective local reporting policy by changing the frequency with which a local node sends predictions to the master node depending on (i) prediction accuracy of the received prediction(s) from the local nodes, and/or (ii) the determined weight parameter(s); [0139], According to an embodiment, the received reporting policy comprises information instructing the local node to activate its predictions, deactivate its predictions or changing the frequency with which the local node transmits predictions to the master node) However, Johnsson fails to expressly disclose the prediction comprising one or more of a modulation and coding scheme (MCS) to use, which transmitter beam to use, and which receiver beam to use; the one or more 
operations comprising at least one of a change of transmit beam, a change of receive beam, and a change of MCS selection operation, based on the determined prediction of the performance of the at least one network node. In the same field of endeavor, Li teaches: the prediction comprising one or more of a modulation and coding scheme (MCS) to use, which transmitter beam to use, and which receiver beam to use; the one or more operations comprising at least one of a change of transmit beam, a change of receive beam, and a change of MCS selection operation, based on the determined prediction of the performance of the at least one network node (Li Figs. 1-19; abs. inputting one or more features derived from the time series data into a machine learning module; generating a report comprising an indication that the anomaly exists and a description of the anomaly type, and determining one or more treatments for the determined anomaly; col. 1 [line 53], An anomaly may be any problem that occurs within the network; col. 7 [line 52], transmission beam index indicator, receiving beam index indicator; col. 8 [line 10], The treatment may include one or more of the following: fast link adaptation (e.g., lower the initial modulation and coding scheme (MCS) configuration, increase the initial transmitting power configuration, increase step sizes related to link recovery, increase the transmitting power modification step sizes, increase MCS modification step sizes, and increase the rate of link adaptation. The treatments may further include fast link recovery, such as switching to other beam pairs quickly; a treatment may be to fast-switch beam pairs; Table 1 lower the initial MCS unvarying configuration, increase MCS modification step sizes, increase MCS modification step sizes; col. 
17 [line 17], the anomaly detection system may recommend using special adaptation (e.g., starting MCS being lower than in general), or very fast link recovery (e.g., large step size), or even skip certain adaptation and jump to the process of using alternative links. If these treatments are labeled, then the machine learning algorithm may be used to learn the model where the characteristics of the channel related measurements as the input of the model, where the output can be the way of link adaption, or the treatment) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the prediction comprising one or more of a modulation and coding scheme (MCS) to use, which transmitter beam to use, and which receiver beam to use; the one or more operations comprising at least one of a change of transmit beam, a change of receive beam, and a change of MCS selection operation, based on the determined prediction of the performance of the at least one network node as suggested in Li into Johnsson. Doing so would be desirable because the anomaly detection system may use data analytics and machine learning to automatically detect various anomalies that occur on the millimeter wave communication network. To detect anomalies automatically, machine learning algorithms can help enhance the accuracy of determining when an anomaly exists and determining a categorization for the anomaly. The anomaly detection system may enable automatic detection and treatment of anomalies, which may reduce the need for engineers to make field trips and may also lower operational costs. Furthermore, automatically detecting and eliminating anomalies may improve the reliability and speed of the communication network (see Li col. 6 [line 23]). The anomaly detection system may not need to rely on interference measurement which may need to take resources to do that and which may hurt the system's efficiency. 
Rather, the machine-learning based approach can tell whether a link suffers interference via the waveform of the dynamic signal such as path loss, signal to interference and noise ratio, etc. One of the advantages is to increase the system efficiency by reducing the interference measurement necessity (see Li col. 20 [line 52]). As disclosed in Johnsson, systems are needed to detect and resolve problems that impact service quality (see Johnsson background). Li would improve the system of Johnsson by enhancing the ability to detect and eradicate anomalies, which can include any problem that occurs within the network (see Li col. 1 [line 53]). Li contemplates using any suitable network environment including any suitable number of any suitable systems and components arranged in any suitable manner (see Li col. 2 [line 40]), using any suitable frequency (see Li col. 1 [line 53]).

Regarding claims 10, 16, and 25, claims 10, 16, and 25 contain substantially similar limitations to those found in claim 1. Claims 16 and 25 further recite by means of the at least one network node to which the machine learning model relates, receive an input parameter relating to a performance of the at least one network node (Johnsson Figs. 1-10; [0054], The metrics associated with the communication network may relate to performance, anomalies and other information relative to current circumstances of the communication network. The training data may consists of both measurement data (X) from e.g. sensors associated with local nodes and actual, true or measured values (Y) that the local model shall be learned to predict later. In the prediction phase there are only X, while the Y will be predicted. 
The predictions may further relate to Operation, Administration and Management, OAM, data; [0077], Based on the received local reporting policy from the master node, the method comprises building 230 a local model based on locally available data; performing 240 a prediction based on the local model; and transmitting 250 the prediction to the master node in accordance with the received local reporting policy; [0079], Once the local node has built its local model based on the locally available data, the local node may perform 240 the prediction based on the local model. The prediction may comprise an indication of a likely value of one or more metrics based on the part of the communication network represented by the local node. Once the local node has performed the prediction, the local node may send the prediction to the master node in accordance with the received local reporting policy; [0078], The master node will use the prediction from the local node, possible together with prediction(s) from other local node(s), in order to determine one or more metrics associated with the communication network; [0084-0085], The local node may send also the weight parameter to the master node, wherein the master node may adjust the local reporting policy for the local node based on either one of or both the sent prediction and the sent determined weight parameter; [0087], the master node receiving 330 the prediction(s) from local nodes in the communication network, determining weight parameter(s) associated with the local nodes based on the received prediction(s) and previously received predictions, and adjusting a respective local reporting policy for one or more local nodes based on the determined weight parameter(s). The method 300 also comprises the one or more local nodes receiving 340 the local reporting policy from the master node in the communication network, the local reporting policy informing the local node of how to send prediction(s) to the master node. 
Based on the received reporting policy from the master node, the method comprises the one or more local nodes building 350 a local model based on locally available data, performing a prediction based on the local model, and transmitting the prediction to the master node in accordance with the received local reporting policy; see also [0049], [0096], [0098-0100], [0124-0129], [0139]) Consequently, claims 10, 16, and 25 are rejected for the same reasons. Regarding claim 4, Johnsson in view of Li teaches all the limitations of claim 1, further comprising: wherein the determining of the prediction of the performance of the at least one network node comprises: by means of the at least one network node, performing one or more measurements; and by means of the machine learning unit, using information relating to the performed one or more measurements as input data to the machine learning model in order to determine the prediction of the performance of the at least one network node, wherein the prediction is based on output data from the machine learning model (Johnsson Figs. 1-10; [0078], The master node will use the prediction from the local node, possible together with prediction(s) from other local node(s), in order to determine one or more metrics associated with the communication network; [0084], Different predictions may be more or less accurate. In an example, the local node may itself determine the accuracy of a newly performed prediction. The local node may itself also determine if a newly performed prediction deviates more than to a certain extent from previously performed predictions. 
Depending on the determined accuracy, the local node may determine a weight parameter associated with itself; [0085], The local node may send also the weight parameter to the master node, wherein the master node may adjust the local reporting policy for the local node based on either one of or both the sent prediction and the sent determined weight parameter; [0087], the master node receiving 330 the prediction(s) from local nodes in the communication network, determining weight parameter(s) associated with the local nodes based on the received prediction(s) and previously received predictions, and adjusting a respective local reporting policy for one or more local nodes based on the determined weight parameter(s); [0096], The way the typical distributed learning algorithm operates (e.g. Winnow) will now be described. From figure 4e, local service predictions from AAs at these machines are first executed and local prediction results are then sent to the CAA (at Mo). CAA updates the associated weights for the different server machines or nodes based on the prediction results (previous steps need to be executed whenever weights needs to be updated at CAA). After the previous steps, the fusion step is executed by the CAA (e.g. weighted majority algorithm) so as to compute the final prediction; [0124], the master node 500, 600 is configured for determining a global model for predicting one or more metrics associated with the communication network based on the received prediction(s) from the local nodes and the determined weight parameter(s); see also [0098-0100], [0126], [0128-0129], [0139]). Regarding claim 13, claim 13 contains substantially similar limitations to those found in claim 4. Consequently, claim 13 is rejected for the same reasons. 
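The distributed Winnow / weighted-majority scheme quoted above (local predictions forwarded from the AAs to the CAA, fusion by weighted vote, weights updated when new training data arrives) can be illustrated with a minimal sketch. The node names, the penalty factor `BETA`, and the binary-label framing are illustrative assumptions for exposition, not details taken from Johnsson.

```python
# Hedged sketch of the weighted-majority fusion step described in
# Johnsson [0096]: local predictions arrive at the CAA, which fuses them
# by weighted vote and multiplicatively penalizes nodes whose prediction
# disagreed with the eventual ground truth. BETA and the 0/1 labels are
# assumed for illustration.

BETA = 0.5  # multiplicative penalty for a wrong local prediction (assumed)

def fuse(weights, local_predictions):
    """Weighted-majority vote over binary (0/1) local predictions."""
    vote_1 = sum(w for node, w in weights.items() if local_predictions[node] == 1)
    vote_0 = sum(w for node, w in weights.items() if local_predictions[node] == 0)
    return 1 if vote_1 >= vote_0 else 0

def update_weights(weights, local_predictions, ground_truth):
    """Winnow-style update: shrink the weight of every node that was wrong."""
    return {node: (w * BETA if local_predictions[node] != ground_truth else w)
            for node, w in weights.items()}

# One fusion/update cycle at the CAA (hypothetical node names).
weights = {"AA1": 1.0, "AA2": 1.0, "AA3": 1.0}
preds = {"AA1": 1, "AA2": 0, "AA3": 1}
final = fuse(weights, preds)                      # global prediction
weights = update_weights(weights, preds, ground_truth=1)
```

Over repeated cycles this concentrates weight on the local nodes whose predictions have historically agreed with ground truth, which is what lets the master node later adjust each node's reporting policy from its weight.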
Regarding claim 5, Johnsson in view of Li teaches all the limitations of claim 1, further comprising: evaluating the machine learning model after the performing of the one or more operations relating to the at least one network node based on the determined prediction; and updating the machine learning model based on an evaluation (Johnsson Figs. 1-10; [0052] Once the master node has received the prediction(s) from the one or more local nodes, the master node may determine 130 weight parameter(s) associated with the local nodes based on the received prediction(s) and previously received predictions; the master node may calculate accuracy values for received predictions, which accuracy values may be based on the extent of which respective received prediction from respective local node differs from those respective local nodes' GT value and optionally also previously received one or more predictions. Then the master node may use these accuracy values when determining the weight parameter(s) for respective local nodes; [0068], overall accuracy at the master node goes below a certain threshold (or some other trigger), wherein the master node may activate a previously deactivated local node in order to achieve higher accuracy of the global prediction done by the master node (global model) based on the local predictions; [0084], Different predictions may be more or less accurate. In an example, the local node may itself determine the accuracy of a newly performed prediction. The local node may itself also determine if a newly performed prediction deviates more than to a certain extent from previously performed predictions.
Depending on the determined accuracy, the local node may determine a weight parameter associated with itself; [0085], The local node may send also the weight parameter to the master node, wherein the master node may adjust the local reporting policy for the local node based on either one of or both the sent prediction and the sent determined weight parameter; [0096-0097], At the high level, CAA (at M0) uses the Winnow algorithm to make the predictions and the CAA works as a data fusion module based on the prediction inputs of all AAs (see figure 4e). The algorithm also assigns weights to all participating AA nodes. These weights may have to be calculated/updated e.g. using the training data gathered from local nodes (usually at periodic intervals) participating the distributed learning; [0103], One phase of the solution is model building; An update cycle can also be triggered periodically if new training data is available; [0104-0105], During each update cycle collect new training data at the CAA arriving from AAs of different node; Updating the global model at CAA involves updating the weight parameters for the Winnow algorithm. Compute/update the weights when the new training data becomes available; [0106], IF there is concept-drift detection or accuracy at CAA falls below a certain threshold value: 1 . Trigger signalling and send all the N local nodes: an ACTIVATE signal; [0124], the master node 500, 600 is configured for determining a global model for predicting one or more metrics associated with the communication network based on the received prediction(s) from the local nodes and the determined weight parameter(s); see also [0078], [0087], [0098-0100], [0126], [0128-0129], [0139]). Regarding claim 14, claim 14 contains substantially similar limitations to those found in claim 5. Consequently, claim 14 is rejected for the same reasons. Claims 2, 3, 11, 12, 17, 18, 26, and 27 are rejected under 35 U.S.C. 
103 as being unpatentable over Johnsson in view of Li in view of Kasaragod et al. (US 20190037040 A1, published 01/31/2019), hereinafter Kasaragod. Regarding claim 26, Johnsson in view of Li teaches all the limitations of claim 25, further comprising: wherein the network node is a radio network node, wherein the processor is further configured to: receive from the communications device, information relating to one or more objectives of the communications device when a leaf network node being a communications device connects to the radio network node; transmit, to the communications device, a request to collect data to be used as input data for training of a machine learning model relating to the communications device; receive, from the communications device, the collected data; based on the received collected data, update the machine learning model suitable for the communications device's one or more objectives (Johnsson Figs. 1-10; [0054], in case a local node is not powerful enough to process its own data, then it can share its data with a neighbouring local node so that neighbouring node may act and process this data on the behalf of the less powerful node (i.e. building local model, doing local predictions and transmitting these predictions to the master node); [0078], The master node will use the prediction from the local node, possible together with prediction(s) from other local node(s), in order to determine one or more metrics associated with the communication network; [0084], Different predictions may be more or less accurate. In an example, the local node may itself determine the accuracy of a newly performed prediction. The local node may itself also determine if a newly performed prediction deviates more than to a certain extent from previously performed predictions. 
Depending on the determined accuracy, the local node may determine a weight parameter associated with itself; [0085], The local node may send also the weight parameter to the master node, wherein the master node may adjust the local reporting policy for the local node based on either one of or both the sent prediction and the sent determined weight parameter; [0087], the master node receiving 330 the prediction(s) from local nodes in the communication network, determining weight parameter(s) associated with the local nodes based on the received prediction(s) and previously received predictions, and adjusting a respective local reporting policy for one or more local nodes based on the determined weight parameter(s). The method 300 also comprises the one or more local nodes receiving 340 the local reporting policy from the master node in the communication network, the local reporting policy informing the local node of how to send prediction(s) to the master node. Based on the received reporting policy from the master node, the method comprises the one or more local nodes building 350 a local model based on locally available data, performing a prediction based on the local model, and transmitting the prediction to the master node in accordance with the received local reporting policy; [0096], The way the typical distributed learning algorithm operates (e.g. Winnow) will now be described. From figure 4e, local service predictions from AAs at these machines are first executed and local prediction results are then sent to the CAA (at Mo). CAA updates the associated weights for the different server machines or nodes based on the prediction results (previous steps need to be executed whenever weights needs to be updated at CAA). After the previous steps, the fusion step is executed by the CAA (e.g. 
weighted majority algorithm) so as to compute the final prediction; [0098-0100], Defined control signals are sent from the CAA (at the master node) to the AAs (at the local nodes) to control their participation behaviour in the distributed learning algorithm dynamically under changing workload and resource availability; [0124], the master node 500, 600 is configured for determining a global model for predicting one or more metrics associated with the communication network based on the received prediction(s) from the local nodes and the determined weight parameter(s); [0126], According to a further embodiment, the master node 500, 600 is configured for adjusting the respective local reporting policy by deactivating a local node having a determined weight parameter not meeting a first threshold, wherein the local node will stop to send predictions to the master node; [0128], According to an embodiment, the master node 500, 600 is configured for adjusting the respective local reporting policy by changing the frequency with which a local node sends predictions to the master node depending on the determined weight parameter(s) of the local node; [0129], According to yet an embodiment, the master node 500, 600 is configured for adjusting the respective local reporting policy by changing the frequency with which a local node sends predictions to the master node depending on (i) prediction accuracy of the received prediction(s) from the local nodes, and/or (ii) the determined weight parameter(s); [0139], According to an embodiment, the received reporting policy comprises information instructing the local node to activate its predictions, deactivate its predictions or changing the frequency with which the local node transmits predictions to the master node; [0117], the master node 500 comprising a processor 521; [0133], the local node 700 comprising a processor 721). 
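The reporting-policy adjustments cited above (Johnsson [0068], [0126], [0128]-[0129], [0139]) amount to a simple control rule at the master node: deactivate a local node whose weight falls below a first threshold, scale each active node's reporting frequency with its weight, and keep low-weight nodes active when global accuracy degrades. A hedged sketch follows; the threshold values and the frequency formula are assumptions for illustration, not values from the reference.

```python
# Minimal sketch of the master node's policy-adjustment logic
# (Johnsson [0068], [0126], [0128]-[0129]). All constants are assumed.

WEIGHT_FLOOR = 0.2     # "first threshold" below which a node is deactivated (assumed)
ACCURACY_FLOOR = 0.8   # global-accuracy trigger for keeping/reactivating nodes (assumed)
BASE_PERIOD_S = 60     # baseline reporting period in seconds (assumed)

def adjust_policy(weights, global_accuracy):
    """Return a local reporting policy per node: None = DEACTIVATE
    (stop sending predictions), otherwise a reporting period in seconds."""
    policy = {}
    for node, w in weights.items():
        if w < WEIGHT_FLOOR and global_accuracy >= ACCURACY_FLOOR:
            # Accuracy is adequate without this low-weight node.
            policy[node] = None
        else:
            # Higher-weight nodes report more often (shorter period);
            # low-weight nodes are retained when global accuracy is poor.
            policy[node] = BASE_PERIOD_S / max(w, WEIGHT_FLOOR)
    return policy
```

For example, with weights `{"A": 1.0, "B": 0.1}` and global accuracy 0.9, node B is deactivated; if accuracy later drops to 0.7, B is retained with a long reporting period, mirroring the activate-on-accuracy-drop trigger in [0068].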
However, Johnsson in view of Li fails to expressly disclose receive from the communications device, information relating to one or more objectives of the communications device when a leaf network node being a communications device connects to the radio network node; transmit, to the communications device, a machine learning model suitable for the communications device's one or more objectives; transmit, to the communications device, a request to collect data to be used as input data for training of a machine learning model relating to the communications device; receive, from the communications device, the collected data; based on the received collected data, update the machine learning model suitable for the communications device's one or more objectives; and transmit the updated machine learning model to the communications device. In the same field of endeavor, Kasaragod teaches: receive from the communications device, information relating to one or more objectives of the communications device when a leaf network node being a communications device connects to the radio network node; transmit, to the communications device, a machine learning model suitable for the communications device's one or more objectives; transmit, to the communications device, a request to collect data to be used as input data for training of a machine learning model relating to the communications device; receive, from the communications device, the collected data; based on the received collected data, update the machine learning model suitable for the communications device's one or more objectives; and transmit the updated machine learning model to the communications device (Kasaragod Figs. 1-19; [0027], a remote provider network and a local network device (e.g., hub device or edge device) may be used to generate a split prediction. 
For example, a hub device of a local network may receive data from sensors and process the data using a local model (e.g., data processing model) to generate a local prediction. The sensor data may also be transmitted to a remote provider network, which returns another prediction. If the returned prediction from the provider network is more accurate, then the local prediction may be corrected by the returned prediction; [0028] In some embodiments, a provider network and/or a hub device may update local data models of edge devices based on data collected by the edge devices. For example, a provider network and/or a hub device may periodically receive data from edge devices and generate new updates to local models based on the new data. The provider network and/or a hub device may then deploy the updates to the respective edge devices. In embodiments, entirely new versions of the local models are deployed to replace current models of the respective edge devices; [0029-0030], multiple models may be implemented across multiple respective edge devices (e.g., tier devices) of a network; [0035], A hub device 100, a provider network 102, local network 108, edge devices 106; [0036], multiple hub devices may be used as redundant hub devices; [0038], the hub device 100 includes a local model 108 that may receive data from one or more edge devices 106 and process the received data; [0039], the provider network 102 includes a data processing service 116 that includes a model 118 that receives the data from the hub device 112 and processes the received data; [0049], data collector 122 may be a sensor or other device that detects performance and/or other operational aspects of the network (e.g., network bandwidth or traffic, power consumption of a data source device, etc.) and generates data based on the detected performance. Thus, the generated data may indicate performance or other operational aspects of the local network and/or the edge device 106. 
In embodiments, the generated data may be sent to the hub device 100; [0063], FIG. 4 illustrates a system for implementing a split prediction based on a local model of an edge device and a provider network, according to some embodiments. The edge device 400 includes a result manager 112, a local model 108, and one or more data collectors 122; [0065], the data processing service 116 and/or the model 118 receives the data sent from the hub device 112 and processes the received data; [0088], FIG. 6 illustrates a system for updating models for edge devices by a provider network, according to some embodiments. In the depicted embodiment, the edge devices 600 are connected to a local network 104; [0090], The model training service 604 may then generate a local model update 606a to the local model 602a based on the analysis of the data 608a and generate a local model update 606n to the local model 602n based on the analysis of the data 608n; [0091], the local model update 606a is configured to update the local model 602a and the local model update 606n is configured to update the local model 602n; [0092], the model training service 604 may deploy the local model updates 606a, 606n to the local network 104; [0093], instead of just modifying an existing local model, in some cases it is replaced by a different model that is a more recent version; [0095], the edge device 600 may also send the data received from the data collector 122 to the model training service 604. The edge device 600 may then receive another local model update 606 from the model training service, wherein the local model update 606 is based on the data received from the data collector 122; [0114], FIG. 
9 illustrates a system for updating models for edge devices by a hub device; [0115-0116], the model trainer 902 of the hub device 100 may receive the data 608 from one or more of the edge devices 600; [0117-0118], In response to the generating of the local model updates 606a, 606n, the model trainer 902 may deploy the local model updates 606a, 606n to the respective edge device 600a, 600n; [0119], The model trainer may then receive one or more local model updates 606 from the provider network 102. The local model updates 606 may then be deployed to one or more respective edge devices 600; [0121], the model trainer may generate a given local model based on topology data or any other data received from a corresponding edge device that will be implementing the local model; [0130], the model training service of the provider network and/or the model trainer may obtain one or more indications of the state of one or more edge devices. The indications may be used to optimize and/or generate respective local model updates that are sent to the edge device and applied to the local model to update the local model; the indications may include reliability of a connection for an edge device, an amount of free memory available at an edge device, an amount of non-volatile storage available at an edge device, a health of an edge device (e.g., with respect to a previous health state or with respect to other edge devices of the local network), and any other suitable indication of state of an edge device, where the state may affect how the local model update is optimized and/or generated; [0132], the edge device may receive updates to one or more of its local models 602 and/or may receive the local models 602 as deployed models in the same way or similar way as described for the figures above; [0133], the local models 602 are different from each other (e.g., perform one or more different operations than each other for given input data) and the different local models 602 are configured 
to process different data received at different times by the at least one edge device) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated receive from the communications device, information relating to one or more objectives of the communications device when a leaf network node being a communications device

Prosecution Timeline

Apr 16, 2021: Application Filed
Apr 16, 2021: Response after Non-Final Action
Apr 19, 2024: Non-Final Rejection (§103)
Jul 17, 2024: Response Filed
Aug 08, 2024: Final Rejection (§103)
Nov 13, 2024: Request for Continued Examination
Nov 16, 2024: Response after Non-Final Action
May 31, 2025: Non-Final Rejection (§103)
Sep 02, 2025: Response Filed
Sep 26, 2025: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585984: POINT-OF-INTEREST RECOMMENDATION (2y 5m to grant; granted Mar 24, 2026)
Patent 12585929: Layered Gradient Accumulation and Modular Pipeline Parallelism for Improved Training of Machine Learning Models (2y 5m to grant; granted Mar 24, 2026)
Patent 12581159: METHOD AND APPARATUS FOR OPTIMIZING VIDEO PLAYBACK START, DEVICE AND STORAGE MEDIUM (2y 5m to grant; granted Mar 17, 2026)
Patent 12541282: LEARNING USER INTERFACE (2y 5m to grant; granted Feb 03, 2026)
Patent 12530106: STACKED MEDIA ELEMENTS WITH SELECTIVE PARALLAX EFFECTS (2y 5m to grant; granted Jan 20, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 75%
With Interview: 99% (+29.1%)
Median Time to Grant: 4y 8m
PTA Risk: High

Based on 334 resolved cases by this examiner. Grant probability derived from career allow rate.
