Prosecution Insights
Last updated: April 18, 2026
Application No. 18/015,520

FEDERATED LEARNING FOR DEEP NEURAL NETWORKS IN A WIRELESS COMMUNICATION SYSTEM

Non-Final OA (§102, §103)
Filed: Jan 10, 2023
Examiner: VANWORMER, SKYLAR K
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 39% (At Risk)
OA Rounds: 1-2
To Grant: 4y 4m
With Interview: 62%

Examiner Intelligence

Grants only 39% of cases
Career Allow Rate: 39% (11 granted / 28 resolved; -15.7% vs TC avg)
Strong +22% interview lift
Interview Lift: +22.5% for resolved cases with interview (with vs. without)
Typical timeline
Avg Prosecution: 4y 4m; 29 currently pending
Career history
Total Applications: 57 across all art units
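As a sanity check, the headline figures above can be reproduced from the raw counts shown in this section (a minimal sketch; the display rounding and the assumption that the interview lift is measured in percentage points are ours, not taken from the tool):

```python
# Reproduce the examiner's headline statistics from the raw counts above.
granted = 11
resolved = 28

# Career allow rate: granted cases as a share of resolved cases.
allow_rate = 100 * granted / resolved          # ~39.3%, displayed as 39%

# Assumption: the +22.5% interview lift is in percentage points, so the
# with-interview grant probability is the base rate plus the lift.
interview_lift = 22.5
with_interview = allow_rate + interview_lift   # ~61.8%, displayed as 62%

print(f"allow rate: {allow_rate:.1f}%")
print(f"with interview: {with_interview:.1f}%")
```

Both values round to the dashboard's displayed 39% and 62%, which suggests the "With Interview" figure is simply the career rate plus the lift.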

Statute-Specific Performance

§101: 27.7% (-12.3% vs TC avg)
§103: 61.4% (+21.4% vs TC avg)
§102: 2.8% (-37.2% vs TC avg)
§112: 8.1% (-31.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 28 resolved cases
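Subtracting each reported delta from its statute-specific rate recovers the Tech Center average the deltas were measured against (a quick consistency check on the figures above; the single-average interpretation matches the "black line" note):

```python
# Recover the implied Tech Center average from each statute-specific
# overcome rate and its reported delta: rate - delta = TC average.
stats = {
    "101": (27.7, -12.3),
    "103": (61.4, +21.4),
    "102": (2.8, -37.2),
    "112": (8.1, -31.9),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: implied TC average = {tc_avg:.1f}%")
```

All four statutes imply the same 40.0% Tech Center average, consistent with a single estimated baseline rather than per-statute averages.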

Office Action

§102 §103
DETAILED ACTION

Claims 1-20 are pending. Claims 1, 9 and 15 are independent.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/30/2025, 05/07/2025 and 06/30/2025 have been filed. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6, 9, 13, 15 and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Prasad (US Published Patent Application No. 20200053591).
In regard to claim 1, Prasad teaches A method performed by a network entity for determining at least one machine-learning (ML) configuration using distributed training in a wireless network, the method comprising: directing each user equipment (UE) in a set of user equipments (UEs) to form, using an initial ML configuration, a respective deep neural network (DNN) that processes wireless network communications, each DNN performing some or all of a transmitter and/or receiver processing chain functionality; (Prasad, paragraph 0016, “Implementations described herein relate to systems and method for a wireless low latency traffic scheduler that learns and adapts to radio channel conditions. The traffic scheduler may, in conjunction with a physical layer scheduler, schedule application layer traffic to be transmitted over the air interface to a UE device based on radio channel conditions associated with the UE device.” and paragraph 0020, “The machine learning model may be selected from a set of machine learning models (e.g., a logistic regression classifier, a linear discriminant analysis (LDA) classifier, a quadratic linear discriminant analysis (QDA) classifier, a decision tree classifier, a naive Bayes classifier, a K-nearest neighbors classifier, a support vector machine (SVM) classifier, tree based (e.g., a random forest) classifier using Euclidian and/or cosine distance methods, a maximum entropy classifier, a kernel density estimation classifier, a principal component analysis (PCA) classifier, an artificial neural network classifier [a respective deep neural network (DNN)], etc.)”), and paragraph 0089, “Moreover, base station 125 may send a request to update the ML model to channel modeling system 140 (signal 1050) based on the information obtained from UE device 110-A and/or UE device 110-B and/or the determined channel quality classes for UE device 110-A and 110-B.
Channel modeling system 140 may update the ML model and may provide an updated ML model to base station 125 (signal 1052) [each DNN performing some or all of a transmitter and/or receiver processing chain functionality].”) requesting, from each UE in the set of UEs, a report of updated ML information about the respective DNN of the UE, the updated ML information generated by the UE using a training procedure and input data local to the UE; (Prasad, paragraph 0045, “Base station interface 310 may be configured to communicate with base station 125. As an example, base station interface 310 may provide one or more trained machine learning models to base station 125 and/or may update a particular machine learning model used by base station 125. As another example, base station interface 310 may obtain data (e.g., locations of UE devices [using a training procedure and input data local to the UE] 110, radio signal quality parameters, application performance parameters, etc.) [a report of updated ML information about the respective DNN of the UE] from base station 125 to train and/or update a particular machine learning model.”) receiving, from at least some UEs in the set of UEs, respective updated ML information determined by the UE and one or more respective link or signal quality parameters; (Prasad, paragraph 0045, “Base station interface 310 may be configured to communicate with base station 125. As an example, base station interface 310 may provide one or more trained machine learning models to base station 125 and/or may update a particular machine learning model used by base station 125. As another example, base station interface 310 may obtain data (e.g., locations of UE devices 110, radio signal quality parameters [one or more respective link or signal quality parameters], application performance parameters, etc.)
from base station 125 to train and/or update a particular machine learning model [respective updated ML information determined by the UE].”) identifying, by using the one or more respective link or signal quality parameters, a subset of UEs in the set of UEs with one or more commensurate link or signal quality parameters, the subset of UEs having one or more common characteristics or common channel conditions; (Prasad, paragraph 0065, “The process of FIG. 6 may include obtaining radio signal quality parameter values (block 610), obtaining application performance parameter values (block 620), and obtaining location data associated with the obtained radio signal quality parameter values [the set of UEs with one or more commensurate link or signal quality parameters] and application performance parameter values (block 630). For example, modeling manager 320 of channel modeling system 140 may generate one or more training sets to train a particular machine learning model. A training set may be based on a set of radio signal quality parameter values, application performance parameter values, and/or UE device 110 location data obtained from a set of base stations [one or more common characteristics or common channel conditions; Examiner would like to point out that the base station is the common characteristic as this is the same for the UEs] 125.”) determining, using the respective updated ML information from each UE in the subset of UEs, a common ML configuration; and (Prasad, paragraph 0067, “The process of FIG. 6 may include training a first set of machine learning models to determine a channel quality class based on the obtained radio signal quality parameter values [using the respective updated ML information] and/or application performance parameter values (block 640).
For example, modeling manager 320 may instruct channel quality classification manager 330 to train models 335-A to 335-N using one or more training sets as a logistic regression classifier, a linear discriminant analysis classifier, a decision tree classifier, a naive Bayes classifier, a K-nearest neighbors classifier, an SVM classifier, a random forest classifier, a maximum entropy classifier, a kernel density estimation classifier, an artificial neural network classifier, and/or another type of machine learning model [a common ML configuration], classifier, and/or algorithm to determine a channel quality class based on a set of radio signal quality parameter values and/or application performance parameter values.”) directing each UE in the subset of UEs to form an updated DNN that processes the wireless network communications using the common ML configuration. (Prasad, paragraph 0089, “Moreover, base station 125 may send a request to update the ML model to channel modeling system 140 (signal 1050) based on the information obtained from UE device 110-A and/or UE device 110-B and/or the determined channel quality classes for UE device 110-A and 110-B. Channel modeling system 140 may update the ML model and may provide an updated ML model to base station [an updated DNN that processes the wireless network communications using the common ML configuration] 125 (signal 1052).”) In regard to claim 2, Prasad teaches the method of claim 1. Prasad further teaches determining, at the network entity and based on the common ML configuration, a complementary ML architecture for a network-side DNN at the network entity that performs complementary processing of the wireless network communications to processing performed by the updated DNN.
(Prasad, paragraph 0020, “The machine learning model may be selected from a set of machine learning models [based on the common ML configuration] (e.g., a logistic regression classifier, a linear discriminant analysis (LDA) classifier, a quadratic linear discriminant analysis (QDA) classifier, a decision tree classifier, a naive Bayes classifier, a K-nearest neighbors classifier, a support vector machine (SVM) classifier, tree based (e.g., a random forest) classifier using Euclidian and/or cosine distance methods, a maximum entropy classifier, a kernel density estimation classifier, a principal component analysis (PCA) classifier, an artificial neural network classifier, etc.)”) [a complementary ML architecture for a network-side DNN at the network entity that performs complementary processing of the wireless network communications; Examiner would like to point out that with this set of machine learning models being able to be used by being selected, it is being interpreted as selecting one of them as the complementary ML configuration.], and paragraph 0089, “Moreover, base station 125 may send a request to update the ML model to channel modeling system 140 (signal 1050) based on the information obtained from UE device 110-A and/or UE device 110-B and/or the determined channel quality classes for UE device 110-A and 110-B. Channel modeling system 140 may update the ML model and may provide an updated ML model to base station 125 (signal 1052) [performed by the updated DNN].”) In regard to claim 3, Prasad teaches the method of claim 1. Prasad further teaches determining the common ML configuration based on the one or more commensurate link or signal quality parameters.
(Prasad, paragraph 0047, “Channel quality classification manager 330 may manage one or more models 335-A to 335-N trained to determine a channel quality class based on one or more parameters, such as, for example, radio signal quality parameters [signal quality parameters] and/or application performance parameters. Each model 335 may correspond to a particular machine learning model, classifier, and/or algorithm. For example, model 335 may correspond to a logistic regression classifier, a linear discriminant analysis classifier, a decision tree classifier, a naive Bayes classifier, a K-nearest neighbors classifier, a SVM classifier, a random forest classifier, a maximum entropy classifier, a kernel density estimation classifier, an artificial neural network classifier, and/or another type of machine learning model, classifier, and/or algorithm.”) In regard to claim 4, Prasad teaches the method of claim 3. Prasad further teaches determining the common ML configuration using uplink link or signal quality parameters generated by the network entity; or determining the common ML configuration using downlink link or signal quality parameters received from one or more UEs in the set of UEs. (Prasad, paragraph 0017, “Once the data is transmitted to a UE device over the physical radio resources, the base station may receive an acknowledgement (ACK) or negative ACK (NACK) from the UE device over an uplink (UL) control channel. [determining the common ML configuration for an uplink DNN]” and paragraph 0018, “The radio signal quality parameter values may be used to estimate a channel quality class for the channel associated with a UE device and the channel quality class may be used to select an application bandwidth for sending application data to the UE device. The channel quality class may be determined based on one or more radio signal quality parameter values using one or more machine learning (ML) models.”) In regard to claim 5, Prasad teaches the method of claim 1.
Prasad further teaches selecting at least two UEs, from the set of UEs, with commensurate UE-locations. (Prasad, Fig. 10; Examiner would like to point out that two UE devices are being used, being interpreted as selecting the UE devices, and paragraph 0032, “Channel modeling system 140 may provide a trained machine learning model to base station 125 and/or may enable base station 125 to access a trained machine learning model to determine channel quality classes for UE devices 110 associated with base station 125.”) In regard to claim 6, Prasad teaches the method of claim 1. Prasad further teaches determining the common ML configuration for a downlink DNN that processes downlink wireless communications, or determining the common ML configuration for an uplink DNN that processes uplink wireless communications. (Prasad, paragraph 0017, “Once the data is transmitted to a UE device over the physical radio resources, the base station may receive an acknowledgement (ACK) or negative ACK (NACK) from the UE device over an uplink (UL) control channel. [determining the common ML configuration for an uplink DNN]” and paragraph 0018, “The radio signal quality parameter values may be used to estimate a channel quality class for the channel associated with a UE device and the channel quality class may be used to select an application bandwidth for sending application data to the UE device.
The channel quality class may be determined based on one or more radio signal quality parameter values using one or more machine learning (ML) models.”) In regard to claim 9, Prasad teaches A method performed by a user equipment (UE) for participating in distributed training of a machine-learning (ML) algorithm in a wireless network, the method comprising: receiving directions from a network entity to form, using an initial ML configuration, a deep neural network (DNN) that processes wireless network communications, the DNN performing some or all of a transmitter and/or receiver processing chain functionality; (Prasad, paragraph 0016, “Implementations described herein relate to systems and method for a wireless low latency traffic scheduler that learns and adapts to radio channel conditions. The traffic scheduler may, in conjunction with a physical layer scheduler, schedule application layer traffic to be transmitted over the air interface to a UE device based on radio channel conditions associated with the UE device.” and paragraph 0020, “The machine learning model may be selected from a set of machine learning models (e.g., a logistic regression classifier, a linear discriminant analysis (LDA) classifier, a quadratic linear discriminant analysis (QDA) classifier, a decision tree classifier, a naive Bayes classifier, a K-nearest neighbors classifier, a support vector machine (SVM) classifier, tree based (e.g., a random forest) classifier using Euclidian and/or cosine distance methods, a maximum entropy classifier, a kernel density estimation classifier, a principal component analysis (PCA) classifier, an artificial neural network classifier [a deep neural network (DNN)], etc.)”) receiving, from the network entity, a request to report updated ML information for the DNN based on a training process; (Prasad, paragraph 0045, “Base station interface 310 may be configured to communicate with base station 125.
As an example, base station interface 310 may provide one or more trained machine learning models to base station 125 and/or may update a particular machine learning model used by base station 125. As another example, base station interface 310 may obtain data (e.g., locations of UE devices 110, radio signal quality parameters, application performance parameters, etc.) from base station 125 to train and/or update a particular machine learning model [a request to report updated ML information].”) generating the updated ML information by performing the training process using data local to the UE; (Prasad, paragraph 0046, “Modeling manager 320 may further manage training and/or updating of particular machine learning models. Modeling manager 320 may manage channel quality classification manager 330, channel quality prediction manager 340, and/or channel quality map manager 350.”) transmitting, to the network entity, a first indication of the updated ML information and one or more signal or link quality parameters observed by the UE as part of generating the updated ML information; (Prasad, paragraph 0053, “Parameters monitor 420 may monitor parameters associated with UE devices 110. For example, parameters monitor 420 may monitor one or more radio signal quality parameters associated with a particular UE device 110, such as, for example, CQI, SNR, SINR, BLER, RSSI, RSRQ, RSRP, a throughput value, and/or another indication of radio signal quality [a first indication of the updated ML information]. Radio signal quality parameter values, associated with a particular UE device 110, may be reported by the particular UE device 110 to base station 125 via radio access network interface 490.”) receiving, from the network entity, a second indication to update the DNN using a common ML configuration; and (Prasad, paragraph 0071, “The process of FIG. 7 may include obtaining radio signal quality parameter values for UE device 110 (block 710).
For example, parameters monitor 420 may obtain, via radio access network interface 490, one or more radio signal quality parameters associated with a particular UE device 110, such as, for example, CQI, SNR, SINR, BLER, RSSI, RSRQ, RSRP, a throughput value, and/or another indication of radio signal quality [a second indication to update the DNN].” and paragraph 0018, “The radio signal quality parameter values may be used to estimate a channel quality class for the channel associated with a UE device and the channel quality class may be used to select an application bandwidth for sending application data to the UE device. The channel quality class may be determined based on one or more radio signal quality parameter values using one or more machine learning (ML) models. [using a common ML configuration]”) updating the DNN using the common ML configuration. (Prasad, paragraph 0089, “Moreover, base station 125 may send a request to update the ML model to channel modeling system 140 (signal 1050) based on the information obtained from UE device 110-A and/or UE device 110-B and/or the determined channel quality classes for UE device 110-A and 110-B. Channel modeling system 140 may update the ML model and may provide an updated ML model to base station 125 (signal 1052).”) In regard to claim 15, the claim recites similar limitations as corresponding claim 9, and is rejected for similar reasons using similar teachings and rationale. Prasad further teaches A user equipment (UE) comprising a wireless transceiver; (Prasad, paragraph 0016, “Implementations described herein relate to systems and method for a wireless low latency traffic scheduler that learns and adapts to radio channel conditions.
The traffic scheduler may, in conjunction with a physical layer scheduler, schedule application layer traffic to be transmitted over the air interface to a UE device based on radio channel conditions associated with the UE device.”) a processor; and (Prasad, paragraph 0037, “Memory 230 may include any type of dynamic storage device that may store information and/or instructions, for execution by processor 220, and/or any type of non-volatile storage device that may store information for use by processor 220.”) computer-readable storage media comprising instructions, responsive to execution by the processor, cause the UE to: (Prasad, paragraph 0037, “Memory 230 may include any type of dynamic storage device that may store information and/or instructions, for execution by processor 220, and/or any type of non-volatile storage device that may store information for use by processor 220.”) In regard to claim 13 and analogous claim 19, Prasad teaches the method of claim 9. Prasad further teaches transmitting, to the network entity, information usable by the network entity to select a subset of UEs for participating in the distributed training of the ML algorithm, the subset of UEs having one or more common characteristics or common channel conditions, the information comprising at least one of: (Prasad, paragraph 0065, “For example, modeling manager 320 of channel modeling system 140 may generate one or more training sets to train a particular machine learning model. A training set may be based on a set of radio signal quality parameter values, application performance parameter values, and/or UE device 110 location data obtained from a set of base stations 125 [information usable by the network entity to select a subset of UEs for participating in the distributed training of the ML algorithm]. 
A training set may be based on standards based channel models and/or based on a live network scenario in which data is collected and a channel quality class manually determined by an operator.”) an estimated UE-location; or a UE ML capability. (Prasad, paragraph 0023, “In some implementations, the computer device may be configured to obtain location [an estimated UE-location] information associated with the UE device and determining the channel quality class associated with the UE device using the machine learning model may be further based on the obtained location information.”)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 7-8, 10-12, 14, 16-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Prasad, in view of Ellinikos et al. (US Published Patent Application No. 20170188363, "Ellinikos").

In regard to claim 7, Prasad teaches the method of claim 1. However, Prasad does not explicitly teach wherein requesting the report of updated ML information includes indicating one or more update conditions that specify when to report the updated ML information, the one or more update conditions comprising at least one of: a first signal or link quality parameter changing by more than a first threshold value; or a UE-location changing by at least a second threshold value. Ellinikos teaches wherein requesting the report of updated ML information includes indicating one or more update conditions that specify when to report the updated ML information, the one or more update conditions comprising at least one of: a first signal or link quality parameter changing by more than a first threshold value; or a UE-location changing by at least a second threshold value. (Ellinikos, paragraph 0114, “For example, channels can be selected that have channel quality values indicating they are available. In some examples, channels having noise below a selected threshold, or SNR above a selected threshold [changing by more than a first threshold value], can be selected. In some examples, thresholds for noise or SNR can be adjusted depending on load. For example, the noise threshold may be increased, or the SNR threshold decreased, as second-network utilization increases.
In some examples, channels not in use by the second network can be selected, or channels not in use by the second network can be selected only when first-network utilization rises above a selected threshold.”) Prasad and Ellinikos are related to the same field of endeavor (i.e., wireless networks). In view of the teachings of Ellinikos, it would have been obvious for a person with ordinary skill in the art to apply the teachings of Ellinikos to Prasad before the effective filing date of the claimed invention in order to improve efficiency for the networks. (Ellinikos, paragraph 0068, “This will also permit interspersing first-network channels and second-network channels within a band, improving efficiency of usage of that band compared to allocating separate, spaced-apart bands for each network.”) In regard to claim 8, Prasad and Ellinikos teach the method of claim 7. Prasad further teaches received signal strength indicator, RSSI; reference signal receive quality, RSRQ; reference signal receive power, RSRP; signal-to-interference-plus-noise ratio, SINR; channel quality indicator, CQI; a number of acknowledgements/negative-acknowledgements, ACK/NACKs; channel delay spread; or Doppler spread. (Prasad, paragraph 0017, “Once the data is transmitted to a UE device over the physical radio resources, the base station may receive an acknowledgement (ACK) or negative ACK (NACK) from the UE device over an uplink (UL) control channel [report the updated ML information based on the first signal].
A channel estimation block may collect a time series data that includes radio signal quality parameter values such as, for example, SNR, a signal-to-interference-plus-noise ratio (SINR), Received Signal Strength Indication (RSSI), Reference Signal Received Quality (RSRQ), Reference Signal Received Power (RSRP), and/or other types of radio signal quality parameter values.” and paragraph 0019, “Selecting an application bandwidth based on a determined channel quality class enables a traffic scheduler to adapt to changing radio channel conditions [the first signal or link quality parameter changing], resulting in more efficient delivery of application data over the air interface.”) However, Prasad does not explicitly teach indicating, to each UE in the set of UEs, to report the updated ML information based on the first signal or link quality parameter changing by more than the first threshold value, the first signal or link quality parameter comprising: Ellinikos further teaches indicating, to each UE in the set of UEs, to report the updated ML information based on the first signal or link quality parameter changing by more than the first threshold value, the first signal or link quality parameter comprising: (Ellinikos, paragraph 0114, “For example, channels can be selected that have channel quality values indicating they are available. In some examples, channels having noise below a selected threshold, or SNR above a selected threshold [changing by more than a first threshold value], can be selected. In some examples, thresholds for noise or SNR can be adjusted depending on load. For example, the noise threshold may be increased, or the SNR threshold decreased, as second-network utilization increases.
In some examples, channels not in use by the second network can be selected, or channels not in use by the second network can be selected only when first-network utilization rises above a selected threshold.”) Prasad and Ellinikos are combinable for the same rationale as set forth above with respect to claim 7. In regard to claim 10 and analogous claim 16, Prasad teaches the method of claim 9. However, Prasad does not explicitly teach receiving instructions to report the updated ML information in response to detecting an update condition, the update condition comprising at least one of: a first signal or link quality parameter changing by more than a first threshold value; or a UE-location changing by at least a second threshold value. Ellinikos teaches receiving instructions to report the updated ML information in response to detecting an update condition, the update condition comprising at least one of: a first signal or link quality parameter changing by more than a first threshold value; or a UE-location changing by at least a second threshold value. (Ellinikos, paragraph 0114, “For example, channels can be selected that have channel quality values indicating they are available. In some examples, channels having noise below a selected threshold, or SNR above a selected threshold [changing by more than a first threshold value], can be selected. In some examples, thresholds for noise or SNR can be adjusted depending on load. For example, the noise threshold may be increased, or the SNR threshold decreased, as second-network utilization increases. In some examples, channels not in use by the second network can be selected, or channels not in use by the second network can be selected only when first-network utilization rises above a selected threshold.”) Prasad and Ellinikos are combinable for the same rationale as set forth above with respect to claim 7. In regard to claim 11 and analogous claim 17, Prasad and Ellinikos teach the method of claim 10. 
Prasad further teaches detecting the update condition; and (Prasad, paragraph 0089, “Moreover, base station 125 may send a request to update the ML model to channel modeling system 140 (signal 1050) based on the information obtained from UE device 110-A and/or UE device 110-B and/or the determined channel quality classes for UE device 110-A and 110-B. Channel modeling system 140 may update the ML model and may provide an updated ML model to base station 125 (signal 1052).”) performing an online training procedure or an offline training procedure in response to detecting the update condition. (Prasad, paragraph 0065, “For example, modeling manager 320 of channel modeling system 140 may generate one or more training sets to train a particular machine learning model. A training set may be based on a set of radio signal quality parameter values, application performance parameter values, and/or UE device 110 location data obtained from a set of base stations 125. A training set may be based on standards based channel models and/or based on a live network scenario in which data is collected and a channel quality class manually determined by an operator.”) In regard to claim 12, Prasad and Ellinikos teach the method of claim 11. Prasad further teaches received signal strength indicator, RSSI; reference signal receive quality, RSRQ; reference signal receive power, RSRP; signal-to-interference-plus-noise ratio, SINR; channel quality indicator, CQI; a number of acknowledgements/negative-acknowledgements, ACK/NACKs; channel delay spread; or Doppler spread. (Prasad, paragraph 0017, “Once the data is transmitted to a UE device over the physical radio resources, the base station may receive an acknowledgement (ACK) or negative ACK (NACK) from the UE device over an uplink (UL) control channel [report the updated ML information based on the first signal].
A channel estimation block may collect a time series data that includes radio signal quality parameter values such as, for example, SNR, a signal-to-interference-plus-noise ratio (SINR), Received Signal Strength Indication (RSSI), Reference Signal Received Quality (RSRQ), Reference Signal Received Power (RSRP), and/or other types of radio signal quality parameter values.” And paragraph 0019, “Selecting an application bandwidth based on a determined channel quality class enables a traffic scheduler to adapt to changing radio channel conditions [a first signal or link quality parameter changing], resulting in more efficient delivery of application data over the air interface.”) However, Prasad does not explicitly teach wherein detecting the update condition comprises: detecting that the first signal or link quality parameter has changed by more than the first threshold value, the first signal or link quality parameter comprising: Ellinikos further teaches wherein detecting the update condition comprises: detecting that the first signal or link quality parameter has changed by more than the first threshold value, the first signal or link quality parameter comprising: (Ellinikos, paragraph 0114, ““For example, channels can be selected that have channel quality values indicating they are available. In some examples, channels having noise below a selected threshold, or SNR above a selected threshold [changing by more than a first threshold value], can be selected. In some examples, thresholds for noise or SNR can be adjusted depending on load. For example, the noise threshold may be increased, or the SNR threshold decreased, as second-network utilization increases. 
In some examples, channels not in use by the second network can be selected, or channels not in use by the second network can be selected only when first-network utilization rises above a selected threshold.") Prasad and Ellinikos are combinable for the same rationale as set forth above with respect to claim 7.

In regard to claim 14 and analogous claim 20, Prasad teaches the method of claim 13. However, Prasad does not explicitly teach transmitting the information with the first indication. Ellinikos teaches transmitting the information with the first indication. (Ellinikos, paragraph 0020, "transmit media information of the first network via first and second first-network channels spaced apart in frequency within a third frequency sub-band;") Prasad and Ellinikos are combinable for the same rationale as set forth above with respect to claim 7.

In regard to claim 18, Prasad and Ellinikos teach the method of claim 17. Prasad further teaches: received signal strength indicator, RSSI; reference signal receive quality, RSRQ; reference signal receive power, RSRP; signal-to-interference-plus-noise ratio, SINR; channel quality indicator, CQI; a number of acknowledgements/negative-acknowledgements, ACK/NACKs; channel delay spread; or Doppler spread. (Prasad, paragraph 0017, "Once the data is transmitted to a UE device over the physical radio resources, the base station may receive an acknowledgement (ACK) or negative ACK (NACK) from the UE device over an uplink (UL) control channel [report the updated ML information based on the first signal]. A channel estimation block may collect a time series data that includes radio signal quality parameter values such as, for example, SNR, a signal-to-interference-plus-noise ratio (SINR), Received Signal Strength Indication (RSSI), Reference Signal Received Quality (RSRQ), Reference Signal Received Power (RSRP), and/or other types of radio signal quality parameter values." And paragraph 0019, "Selecting an application bandwidth based on a determined channel quality class enables a traffic scheduler to adapt to changing radio channel conditions [a first signal or link quality parameter changing], resulting in more efficient delivery of application data over the air interface.")

However, Prasad does not explicitly teach wherein to detect the update condition, the UE to: detect that the first signal or link quality parameter has changed by more than the first threshold value, the first signal or link quality parameter comprising: Ellinikos further teaches wherein to detect the update condition, the UE to: detect that the first signal or link quality parameter has changed by more than the first threshold value, the first signal or link quality parameter comprising: (Ellinikos, paragraph 0114, "For example, channels can be selected that have channel quality values indicating they are available. In some examples, channels having noise below a selected threshold, or SNR above a selected threshold [changing by more than a first threshold value], can be selected. In some examples, thresholds for noise or SNR can be adjusted depending on load. For example, the noise threshold may be increased, or the SNR threshold decreased, as second-network utilization increases.
In some examples, channels not in use by the second network can be selected, or channels not in use by the second network can be selected only when first-network utilization rises above a selected threshold.") Prasad and Ellinikos are combinable for the same rationale as set forth above with respect to claim 7.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SKYLAR K VANWORMER whose telephone number is (703) 756-1571. The examiner can normally be reached M-F, 6:00 am to 3:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Usmaan Saeed, can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.K.V./
Examiner, Art Unit 2146

/USMAAN SAEED/
Supervisory Patent Examiner, Art Unit 2146
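For orientation, the update-condition trigger at issue in claims 10-12 and 16-18 (report updated ML information when a signal or link quality parameter changes by more than a first threshold, or the UE location changes by at least a second threshold) can be sketched in code. This is an illustrative sketch of the claimed behavior only, not an implementation from Prasad or Ellinikos; the `Measurement` type, the choice of RSRP as the monitored parameter, and the threshold values are all hypothetical.

```python
# Hypothetical sketch of the claimed update-condition check.
# The parameter choice (RSRP) and thresholds are assumptions for illustration.
import math
from dataclasses import dataclass


@dataclass
class Measurement:
    rsrp_dbm: float                  # reference signal received power, in dBm
    location: tuple[float, float]    # UE position (x, y) in meters, placeholder


def update_condition_met(prev: Measurement, curr: Measurement,
                         rsrp_threshold_db: float = 6.0,
                         distance_threshold_m: float = 50.0) -> bool:
    """True when either claimed trigger fires: the quality parameter changed
    by more than the first threshold, or the UE moved at least the second."""
    quality_changed = abs(curr.rsrp_dbm - prev.rsrp_dbm) > rsrp_threshold_db
    moved_enough = math.dist(prev.location, curr.location) >= distance_threshold_m
    return quality_changed or moved_enough


prev = Measurement(rsrp_dbm=-95.0, location=(0.0, 0.0))
curr = Measurement(rsrp_dbm=-88.0, location=(10.0, 5.0))
print(update_condition_met(prev, curr))  # RSRP changed by 7 dB > 6 dB, so True
```

In a real UE this check would run on each measurement cycle, and a True result would prompt the report of updated ML information to the network.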

Prosecution Timeline

Jan 10, 2023
Application Filed
Feb 19, 2026
Non-Final Rejection — §102, §103
Mar 12, 2026
Interview Requested
Mar 25, 2026
Applicant Interview (Telephonic)
Mar 25, 2026
Examiner Interview Summary
Apr 01, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591789
Knowledge distillation in multi-arm bandit, neural network models for real-time online optimization
2y 5m to grant · Granted Mar 31, 2026
Patent 12541680
REDUCED COMPUTATION REAL TIME RECURRENT LEARNING
2y 5m to grant · Granted Feb 03, 2026
Patent 12524655
ARTIFICIAL NEURAL NETWORK PROCESSING METHODS AND SYSTEM
2y 5m to grant · Granted Jan 13, 2026
Patent 12511554
Complex System for End-to-End Causal Inference
2y 5m to grant · Granted Dec 30, 2025
Patent 12505358
Methods and Systems for Approximating Embeddings of Out-Of-Knowledge-Graph Entities for Link Prediction in Knowledge Graph
2y 5m to grant · Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
39%
Grant Probability
62%
With Interview (+22.5%)
4y 4m
Median Time to Grant
Low
PTA Risk
Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
