Prosecution Insights
Last updated: April 19, 2026
Application No. 17/555,560

METHODS AND DEVICES FOR MANAGEMENT OF THE RADIO RESOURCES

Status: Non-Final OA (§103)
Filed: Dec 20, 2021
Examiner: LANGER, PAUL ANTHONY
Art Unit: 2419
Tech Center: 2400 — Computer Networks
Assignee: Intel Corporation
OA Round: 3 (Non-Final)
Grant Probability: 0% (At Risk)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 6 resolved; -58.0% vs TC avg); grants only 0% of cases
Interview Lift: +0.0% (minimal lift; allow rate with vs. without interview, among resolved cases with interview)
Avg Prosecution (typical timeline): 3y 1m
Total Applications (career history): 61 across all art units; 55 currently pending

Statute-Specific Performance

§101: 5.0% (-35.0% vs TC avg)
§103: 51.2% (+11.2% vs TC avg)
§102: 28.2% (-11.8% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 6 resolved cases
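For readers who want to sanity-check these figures, a minimal sketch follows. It assumes each "vs TC avg" delta is simply the examiner's per-statute rate minus a single Tech Center estimate; that interpretation is an assumption, not something stated on this page.

```python
# Assumed relationship (not stated on this page): delta = examiner_rate - tc_average,
# so the implied Tech Center average for each statute is examiner_rate - delta.
examiner_rates = {"§101": 5.0, "§103": 51.2, "§102": 28.2, "§112": 13.6}    # percent
deltas_vs_tc = {"§101": -35.0, "§103": 11.2, "§102": -11.8, "§112": -26.4}  # percent

for statute, rate in examiner_rates.items():
    implied_tc_avg = rate - deltas_vs_tc[statute]
    print(f"{statute}: examiner {rate:.1f}%, implied TC average {implied_tc_avg:.1f}%")
```

Under that assumption every statute backs out to the same 40.0% baseline, which is consistent with a single Tech Center average estimate being used for all four comparisons.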

Office Action — §103

DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . This office action is in response to remarks filed 09/16/2025. Claims 1-9 and 13-25 are pending and presented for examination. Claims 1, 13, and 24 are currently amended. Claims 10-12 have been cancelled. Continued Examination Under 37 CFR 1.114 A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/16/2025 has been entered. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1-6, 8, and 9 are rejected under 35 U.S.C. 103 as being unpatentable by Vankayala et al. (US 20220278728 A1, hereinafter “Vankayala”), in view of Chavva et al. (US 20210351885 A1, hereinafter “Chavva”), in view of Rácz et al. (US 20230155705 A1, hereinafter “Rácz”). RE Claim 1, Vankayala discloses: A device (¶0015, Fig. 9) comprising: a memory (¶0015, Fig. 
9: 904) configured to store channel quality data comprising information indicating a quality of a communication channel (Processor, 902, extracts data with support data processing device, 908, for a plurality of CQI parameters. Extracted data, stored CQI data, is used by the CQS, Channel Quality Status, estimation controller, 912. ¶0124, Fig. 9; ¶0064, Fig. 1) between a base station (BS) and a user equipment (UE) (One or more UEs transmit one or more CQI reports to the base station. ¶0131, Fig. 10: 1102); a processor (¶0015, Fig. 9: 902) configured to: control a measurement circuit to perform a first channel quality indicator (CQI) measurement at a first instance of time and perform a second CQI measurement at a second instance of time (UE measures or monitors CQS information (CQI) at a periodic or aperiodic time, instances of time. ¶0066; Information can be used in real time or near real time basis, instances of time. Some parameters are available on a non-real time basis with different periodicity. ¶0079; Measurement of a 1st and a 2nd CQI measurement at an instance of time.), wherein the second CQI measurement is the next periodic CQI measurement performed after the first CQI measurement (UE measures or monitors CQS information (CQI) at a periodic or aperiodic time, instances of time and transmits reports to BS. ¶0066; Information can be used in real time or near real time basis, instances of time. Some parameters are available on a non-real time basis with different periodicity. ¶0079; Measurement of a 1st and a 2nd CQI measurement at an instance of time.); provide an input comprising the channel quality data comprising the first CQI measurement to a machine learning model (One or more CQI reports are input to a Neural Network, machine learning. ¶0075, Fig. 2: 202; CQS estimation controller of the UE utilizes a Neural Network, machine learning model. ¶0124, Fig. 9: 912, 912a; Each extracted network parameter is an input element of the neural network, machine learning. ¶0079; Neural network, learning machine, may be deployed at both the BS and UE for predicting the CQS information. ¶0074), wherein the machine learning model is configured to predict CQI for a third instance of time based on the input and machine learning model parameters (RL based trained neural network deployed at BS or UE for predicting CQS information. Optimal weights, ML model parameters, are assigned to each layer of the neural network. ¶0074, Fig.2: 206; CQS estimation controller of the UE predicts CQI based on CQI reports. ¶0124, Fig. 9: 912, 912a); encode a channel quality information based on the predicted CQI (Encoder, 910, sends encoded predicted CQI information to the BS. ¶0126, Fig. 9: 910) for a transmission to the BS (UE reports CQS, predicted CQI, information to the BS. ¶0087, Fig. 5: 502b, 502c; Predicted CQI is provided to the BS via communication interface of UE. ¶0127, Fig. 9: 906);. Vankayala does not explicitly disclose: wherein the third instance of time is between the first instance of time and a second instance of time; adjust the machine learning model parameters by performing a comparison between the predicted CQI for the third instance of time and the second CQI measurement However, Chavva discloses: wherein the third instance of time is between the first instance of time and a second instance of time(UE generates CSI reports that include computed values of CSI feedback parameters, measured, and predicted values of the CSI feedback parameters. ¶¶0033, 0040-0042. 
The generated CSI report by the UE based on correlation between data of previous CSI report and data of the generated CSI report. ¶0043; UE sends a CSI report at time interval t1. UE predicts the probable values of the feedback parameters at t2 = t1 + time delta. gNB adjusts transmission values based on CSI report at transmits at t2. UE sends CSI report at time interval t3 which is the CSI report for new transmission parameters by the gNB during interval t2-t3. UE also sends probable values of the feedback parameters for the next prediction at t4 = t3 + time delta. ¶¶0148-0149, Fig. 8); adjust the machine learning model parameters by performing a comparison between the predicted CQI for the third instance of time and the second CQI measurement (The neural network (602c) is trained to minimize a cost function, for fitting the generated CSI report between periodic CSI reporting slots, wherein the cost function is minimized if the predicted values of the feedback parameters at the future time instance match actual values of the feedback parameters at the future time instance in one of the UE. ¶¶0038, 0047-0048, Fig. 6; Matching the predicted values and the actual values comprises updating at least one weight associated with at least one activation element of at least one layer of the neural network (602c), wherein the at least one weight is updated based on at least one of channel metrics, sensor measurements, a difference between the predicted values and the actual values, and PDSCH transmission error statistics. ¶0039). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Vankayala, prediction of CQI based on last CQI measurements, with the teachings of Chavva, prediction of CQI values at a time in the future based on the gNB delay in transmission and channel state time variations since last CQI report. Further, Chavva teaches that predictive models are updated based on measured and predicted data. The motivation in doing so would be to provide a prediction of a future CQI value in time from last CQI report such that the gNB can adjust transmission parameters at the actual transmission time in order to adjust for time varying network channel conditions. This improves the accuracy of transmission parameters to improve network efficiency, such as signal quality, between gNB and a UE. Feedback delay of channel quality reporting in a time varying channel resulting in mismatched transmission parameters is a known challenge, Vankayala ¶0005, Chavva ¶0103. (Vankayala: Abstract, ¶¶0005, 0007, 0009; Chavva: Abstract, ¶¶0007, 0033-0040, 0057, 0103, 0168-0169, Fig. 11) RE Claim 2, Vankayala discloses: The device, wherein the channel quality data comprises a plurality of measurement results performed on received radio communication signals via the communication channel (UE performs various measurements, a plurality, on downlink signals such as RSSI, RSRQ, RSRP, channel quality data. ¶0125); wherein each measurement result is configured to represent an estimated quality of the communication channel for an instance of time (UE measures or monitors CQS information (CQI) at a periodic or aperiodic time, an instance of time. ¶0066); wherein the plurality of measurement results comprises one of a plurality of in-phase and quadrature samples based on the received radio communication signals (UEs measures received reference signals from a 5G system. 
¶0044; CQI Index Modulation code comprises QPSK, 16QAM, and 64QAM signals, quadrature modulation. ¶0069; Table 1, pg. 6; Fig. 1) , a plurality of Fast Fourier Transform (FFT) samples based on the received radio communication signals (Element is optional), signal measurements comprising at least one of a reference signal received power (RSRP), a received signal strength indicator (RSSI), or reference signal received quality (RSRQ) (UE performs various measurements, a plurality, on downlink signals such as RSSI, RSRQ, RSRP, channel quality data. ¶0125). RE Claim 3, Vankayala discloses: The device, wherein the memory is further configured to store context information (Neural net, machine learning, is configured to extract different input network parameters, context information as stored data. ¶0075) comprising information indicating at least one of a mobility of the UE (Element is optional), a location of the UE (Neural net, machine learning, is configured to extract different input network parameters, context information as stored data, including but not limited to a location of the UE. ¶0075), a velocity of the UE (Element is optional), or a moving direction of the UE relative to the BS (Element is optional); wherein the input of the machine learning model comprises the context information (Neural net, machine learning, is configured to extract different input network parameters, context information as stored data, including but not limited to a location of the UE. ¶0075; Each extracted network parameter is an input element of the neural network, machine learning. ¶0079; Neural network, learning machine, may be deployed at both the BS and UE for predicting the CQS information. ¶0074); wherein the context information further comprises information (Neural net, machine learning, is configured to extract different input network parameters, context information as stored data, ¶0075) indicating at least one of a time (time of day, ¶0075), a velocity of the UE (Element is optional), an identifier for a network operator operating through the BS (Element is optional), an identifier of the BS (Element is optional), a network mode (mode of operation of the BS. ¶0075), a measured downlink or uplink rate for a period of time (Element is optional), a modulation level (BS allocates resource blocks and MCS value, a modulation level. ¶0103), a past power level (UE measures or monitors CQS information (CQI) at a periodic or aperiodic time, an instance of time. ¶0066; Network parameter of maximum transmitting power in a CQI report. ¶0075.), a number of resource blocks allocated for the UE (BS allocates resource blocks to individual UEs. ¶0067), a number of retransmissions to transmit communication signals to the BS (Element is optional). RE Claim 4, Vankayala discloses: The device, wherein the context information (Neural net, machine learning, is configured to extract different input network parameters, context information as stored data. ¶0075) further comprises an indication (CQS information indicators includes CQI, PMI and RI reported by UE to BS. ¶0066) for at least one of a first instance of time of a generation of at least one previous CQI (UE measures or monitors CQS information (CQI) at a periodic or aperiodic time, an instance of time. ¶0066; Information can be used in real time or near real time basis, an instance of time. Some parameters are available on a non-real time basis with different periodicity. 
¶0079), a second instance of time of a transmission of information comprising an indication of the at least one previous CQI (UE measures or monitors CQS information (CQI) at a periodic or aperiodic time, an instance of time. ¶0066; One or more UEs transmit one or more CQI reports to the base station. ¶0131, Fig. 10: 1102), or a third instance of time of a downlink communication scheduled in response to the at least one previous CQI (BS performs scheduling based on CQSs reported by the UEs to determine allocation of resource blocks to individual UEs. ¶0067; Scheduler of BS allocates resource to UE for uplink, downlink, and grants. Encoder generates encoded control and data signals sent to the UE. ¶0103, Fig. 7: 708, 712), or a predetermined time gap information representing a period of time between a generation of the channel quality information and a reception of the generated channel quality information by the BS (Element is optional). RE Claim 5, Vankayala and Chavva do not explicitly disclose: The device, wherein the processor is configured to determine the time gap information based on at least one of the first instance of time, the second instance of time, or the third instance of time using a second machine learning model; wherein the machine learning model is configured to predict the CQI for an instance of time after a period of time comprising the period of time indicated by the determined time gap information. However, Rácz discloses: The device (¶0049, Fig. 3: 302), wherein the processor (¶0071, Fig. 6: 602; ¶0076, Fig. 7) is configured to determine the time gap information (Predictive model based on time-series data for channel quality. n is a specific time-based channel quality data sample. Output channel quality prediction provided at the (n+k)th data value. Time gap is the time between the time-series data samples and reports. k is when to provide predicted channel quality. k=1 results in predicted channel quality output immediately succeeding the channel quality value. k>1 results in predicted channel quality output a value at k time periods in future. ¶0043) based on at least one of the first instance of time (Channel quality information for first time period. ¶0029; First instance of time is at the nth sample. ¶0043), the second instance of time (If first instance is n then second instance is n+1. ¶0043), or the third instance of time using a second machine learning model (Predicted channel quality for second time period. ¶0029; A time period for the channel quality report depends on k where k is offset of time periods. ¶0043); wherein the machine learning model is configured to predict the CQI for an instance of time after a period of time comprising the period of time indicated by the determined time gap information (Predictive model based on time-series data for channel quality. n is a specific time-based channel quality data sample. Output channel quality prediction provided at the (n+k)th data value. Time gap is the time between the time-series data samples and reports. k is when to provide predicted channel quality. k=1 results in predicted channel quality output immediately succeeding the channel quality value. k>1 results in predicted channel quality output a value at k time periods in future. ¶0043). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Vankayala, prediction of CQI between last and the next CQI measurements, with the teachings of Rácz, use of numbered time-series samples of data and provide CQI prediction at n+k where k is an offset for a time gap in reporting . The motivation in doing so would be to provide a dynamic adjusting the specific reporting time of CQI depending on UE status, BS status, and network channel conditions. RE Claim 6, Vankayala discloses: The device, further comprising: the measurement circuit to perform CQI measurements on a plurality of resource blocks (Processor, 902, extracts data with support data processing device, 908, and communication interface, 906, for a plurality of CQI parameters. ¶0124, Fig. 9; ¶0064, Fig. 1; Scheduler of BS allocates resource blocks to UE for uplink, downlink, and grants. ¶0103, Fig. 7: 708; UE performs various measurements, a plurality, on downlink signals such as RSSI, RSRQ, RSRP, channel quality data on allocated resources. ¶0125); RE Claim 8, Vankayala discloses: The device, wherein the processor is configured to perform application layer functions for an application layer of a communication reference model, and lower layer functions for a lower layer of the communication reference model that is lower than the application layer (Machine learning module will convey estimated CQI values to MAC/Physical/L3 layers, layers of a communication reference model. ¶0154; Neural network, learning machine, may be deployed at both the BS and UE for predicting the CQS information. ¶0074); wherein the processor is configured to provide the predicted CQI via a cross-layer information from the lower layer to the application layer using the lower layer functions (Machine learning module will convey estimated CQI values to MAC/Physical/L3 layers, layers of a communication reference model. ¶0154); wherein the processor is configured to perform the application layer functions based on the predicted CQI (Neural network, machine learning, predicts CQI for a channel and shares information with MAC/RRC layers. MAC layer uses information to determine number of resources and MCS values, radio settings. ¶0168, Fig. 16). RE Claim 9, Vankayala discloses: The device, wherein the processor is configured to adjust at least one of quality of service (QoS) parameters for applications running in the application layer (Neural network, machine learning, configured to extract different network parameters from a CQI report. Network parameters consist of QoS. ¶0075), uplink communication requests for the applications running in the application layer (Scheduler allocates uplink, downlink and grants based on predicted CQI. Scheduler determines appropriate number of resource blocks allocated and a MCS value. ¶0103, Fig. 7: 708; Machine learning based on various or sub-set of network parameters such as Voice Application. ¶0061), or a limit of scheduled uplink traffic for the applications running in the application layer (Scheduler allocates uplink, downlink and grants based on predicted CQI. Scheduler determines appropriate number of resource blocks allocated and a MCS value. ¶0103, Fig. 7: 708; Machine learning based on various or sub-set of network parameters such as Voice Application. ¶0061), based on the predicted CQI (Predicted CQI can be used for resource allocation or interference estimation or QoS/QCI management. 
¶0079; Machine learning based on various or sub-set of network parameters such as Voice Application. ¶0061). Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Vankayala, in view of Chavva, in view of Wang et al. (US 20210158151 A1, hereinafter “Wang”) RE Claim 7, Vankayala discloses: The device, wherein the memory comprises a plurality of the machine learning model parameters (Neural network, machine learning, trained by input plurality of network parameters, determining an optimal weight of each node, training of model. ¶0049, ¶¶0090-0091, Fig. 6, Fig. 9); with the configured radio settings (Neural network, machine learning, predicts CQI for a channel and shares information with MAC/RRC layers. MAC layer uses information to determine number of resources and MCS values, radio settings. ¶0168, Fig. 16), a received hybrid automatic repeat request (HARQ) feedback (Element is optional), or buffer lengths (Element is optional). Vankayala and Chavva do not explicitly disclose: wherein the processor is further configured to adjust the machine learning model parameters based on the determined CQI and a number of retransmissions; However, Wang discloses: wherein the processor is further configured to adjust the machine learning model parameters based on the determined CQI and a number of retransmissions; (Neural network, machine learning, stores multiple NN configuration elements and/or configurations including input characteristics of channel quality indicator (CQI) and hybrid automatic repeat request (HARQ) information, e.g. maximum retransmissions. ¶0046, Fig. 2; UE neural network manager, machine learning, forms a deep neural network with the NN configuration elements. ¶0039) It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Vankayala, machine learning based on a plurality of network parameters, with the teachings of Wang, use another network parameter, HARQ, for machine learning. The motivation in doing so would be to provide another dimension, retransmissions, to the CQI prediction models to further optimize radio setting configurations for best performance. Claims 13 is rejected under 35 U.S.C. 103 as being unpatentable over Vankayala in view of Dalmiya et al. (US-20220038945-A1, hereinafter “Dalmiya”, in view of Azizi et al. (US 20190364492 A1, hereinafter “Azizi”) RE Claim 13, Vankayala does not explicitly disclose: A device comprising: a memory configured to store uplink buffer data comprising information indicating one or more past states of an uplink buffer of a user equipment (UE) used for transmissions to a base station (BS) and context information comprising an amount of data predicted by a running application, wherein the context information is received from an application layer via cross-layer information exchange; a processor configured to: provide an input comprising the uplink buffer data and the context information to a machine learning model configured to predict an amount of data to be scheduled for the upcoming uplink transmission to be transmitted to the BS based on the input; in response to the predicted amount of data, encode a message comprising a buffer status report (BSR) to be transmitted to the BS, wherein the BSR comprises information representing the amount of data to be scheduled for the upcoming uplink transmission. However, Dalmiya discloses: A device (UE. ¶¶0029, 0032-0034 , Fig. 2: 120a) comprising: a processor (UE. ¶¶0029, 0032-0034, Fig. 
2: 120a, 248, 264, 280, 281) configured to: provide an input comprising the uplink buffer data and the context information to a machine learning model configured to predict an amount of data to be scheduled for the upcoming uplink transmission to be transmitted to the BS based on the input (UE determines the amount of additional uplink data arriving, based on context of a new data arrival rate from an application processor (AP) of the UE, to determine the amount of UL data to report in the BSR. ¶¶0072, 0074; UE prediction of a parameter related to transmission of uplink data by use of ML includes information current buffer status, UL grant resources, and new data arrival from an application processor. ¶¶0084, 0094;); in response to the predicted amount of data, encode a message comprising a buffer status report (BSR) to be transmitted to the BS), wherein the BSR comprises information representing the amount of data to be scheduled for the upcoming uplink transmission (UE determines the amount of additional uplink data arriving, based on context of a new data arrival rate from an application processor (AP) of the UE, to determine the amount of UL data to report in the BSR. ¶¶0072, 0074; UE prediction of a parameter related to transmission of uplink data by use of ML includes information current buffer status, UL grant resources, and new data arrival rate from an application processor. ¶¶0084, 0094). Vankayala and Dalmiya do not explicitly disclose: a memory configured to store uplink buffer data comprising information indicating one or more past states of an uplink buffer of a user equipment (UE) used for transmissions to a base station (BS) and context information comprising an amount of data predicted by a running application, wherein the context information is received from an application layer via cross-layer information exchange; However, Azizi discloses: a memory (Terminal device with memory, ¶0313, Fig. 3; Local terminal device prediction engine, ¶1051, Fig. 103) configured to store uplink buffer data comprising information indicating one or more past states of an uplink buffer of a user equipment (UE) used for transmissions to a base station (BS) and context information comprising an amount of data predicted by a running application (Local learning module looks at current and recent data traffic demands and requirements at terminal device including overall throughput demands, QoS demands, data speed demands, and reliability demands. Local learning module predicts upcoming data service requirements based on terminal demands. ¶1052. One of ordinary skill in the art before the effective filing date of the claimed invention would understand that ‘throughput demands’ and other ‘demands’ are a request for a future or next set of data, a prediction.), wherein the context information is received from an application layer via cross-layer information exchange (Preprocessing module of terminal local device prediction receives context information. The module preprocesses context info and provides to local repository for storage and local learning module for learning and prediction. ¶1051, Fig. 103; Terminal devices utilize application-layer context information to optimize operation of an application program. Operating system, executed on the application processor, utilizes local context information to adapt operation. ¶¶1002-1003, Fig. 
94; Prediction and decision engine configured to process and evaluate context information obtained an application layer of the device and apply the context information to influence radio activity at baseband modem. ¶¶1008, 1010, Fig. 96); It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Vankayala, machine learning based on a plurality of network parameters, with the teachings of Dalmiya, present and historical data of buffer status and application traffic applied to machine learning, with the teachings of Azizi, use of data service parameters demanded, requested for future time, provided at the application layer for the prediction engine. The motivation in doing so would be to provide additional granularity and improvements to the prediction models based on context awareness to further optimize radio activity or battery life configurations for best performance in varying signal conditions. (Vankayala: Abstract, ¶¶0002, 0007-0016; Dalmiya: Abstract, ¶¶0001, 0006-0009, 0072, 0084, 0094; Azizi: Abstract, ¶¶1001-1003, 1007-1009) Claims 14-22 are rejected under 35 U.S.C. 103 as being unpatentable over Vankayala in view of Dalmiya, in view of Azizi, in view of Fu et al. (US 20170202000 A1, hereinafter “Fu”) RE Claim 14, Vankayala discloses: The device, wherein the memory (¶0015, Fig. 9: 904) is configured to store context information (Processor, 902, extracts data with support data processing device, 908, for a plurality of CQI parameters, context information. Extracted data, stored CQI data, is used by the CQS, Channel Quality Status, estimation controller, 912. ¶0124, Fig. 9; ¶0064, Fig. 1); wherein the context information comprises at least one of running applications (Machine learning based on various or sub-set of network parameters such as Voice Application. ¶0061), types of the running applications (Machine learning based on various or sub-set of network parameters such as Voice Application. ¶0061), quality of service (QoS) requirements of the running applications (Neural network, machine learning, configured to extract different network parameters from a CQI report. Network parameters consist of QoS. ¶0075), an amount of received downlink data at a period of time for a plurality of periods of time (Element is optional), predicted network traffic received from an application layer for the running applications (Element is optional); wherein the input of the machine learning model further comprises the context information (Scheduler allocates uplink, downlink and grants based on predicted CQI. Scheduler determines appropriate number of resource blocks allocated and a MCS value. ¶0103, Fig. 7: 708; Predicted CQI can be used for resource allocation or interference estimation or QoS/QCI management. ¶0079; Neural network, learning machine, may be deployed at both the BS and UE for predicting the CQS information. ¶0074). RE Claim 15, Vankayala discloses: The device, wherein the processor is configured to obtain at least a portion of the context information (Processor, 902, extracts data with support data processing device, 908, for a plurality of CQI parameters, context information. Extracted data, stored CQI data, is used by the CQS, Channel Quality Status, estimation controller, 912. ¶0124, Fig. 9; ¶0064, Fig. 1) from the application layer entity (Machine learning based on various or sub-set of network parameters such as Voice Application. 
¶0061) via a cross layer information (Machine learning module will convey estimated CQI values to MAC/Physical/L3 layers, layers of a communication reference model. ¶0154); Vankayala, Azizi, and Dalmiya do not explicitly disclose: wherein the processor is configured to obtain at least the predicted network traffic and the QoS requirements of the running applications from the application layer entity via a cross layer information. However, Fu discloses: wherein the processor is configured to obtain at least the predicted network traffic UE optimizes QoS based on predicted available throughput and predicted load, predicted network traffic. ¶0129.) and the QoS requirements of the running applications (Optimization of QoS may adapt based on application such as Skype by adjusting video resolution. ¶0130.) from the application layer entity via a cross layer information. (Scheduler identifies additional, predicted, required resources including uplink buffer status and send MAC messages over PUSCH or L1/L2 control signaling over PUCCH, encoded messages for UE to obtain needed resources. ¶0043) It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Vankayala, context information from application layer for machine learning, with the teachings of Fu, combining application layer information with predicted network traffic and QoS requirements. The motivation in doing so would be to provide another dimension, application traffic and QoS dependency, to the prediction models to further optimize radio setting configurations for best performance. RE Claim 16, Vankayala, Azizi, and Dalmiya do not explicitly disclose: The device, wherein the encoded message comprises a medium access layer control element (MAC CE) indicating the predicted amount of data to be scheduled for the uplink transmission. However, Fu discloses: The device, wherein the encoded message comprises a medium access layer control element (MAC CE) indicating the predicted amount of data to be scheduled for the uplink transmission. (Scheduler identifies additional, predicted, required resources including uplink buffer status and send MAC messages over PUSCH or L1/L2 control signaling over PUCCH, encoded messages for UE to obtain needed resources. ¶0043) It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Vankayala, machine learning based on a plurality of network parameters, with the teachings of Fu, use additional network parameters of data buffer, amount of data, and scheduling needs, for machine learning. The motivation in doing so would be to provide additional dimensions, buffer status, of data buffer, amount of data, and scheduling needs to the prediction models to further optimize radio setting configurations for best performance. RE Claim 17, Vankayala discloses: The device, wherein the memory comprises a plurality of machine learning model parameters (Neural network, machine learning, trained by input plurality of network parameters, determining an optimal weight of each node, training of model. ¶0049, ¶¶0090-0091, Fig. 6, Fig. 9); wherein the machine learning model is configured to provide the output based on the machine learning model parameters (Prediction, output, using a machine learning mode. ¶0071, ¶¶0090-0091, Fig. 6, Fig. 
9); Vankayala, Azizi, and Dalmiya do not explicitly disclose: wherein the processor is further configured to adjust the machine learning model parameters based on the output of the machine learning model and the amount of data scheduled for the uplink transmission. However, Fu discloses: wherein the processor is further configured to adjust the machine learning model parameters based on the output of the machine learning model (¶0058) and the amount of data scheduled for the uplink transmission (Throughput prediction module in the UE predicts available throughput based on past scheduling and past throughput of uplink data sent. ¶0146, Fig. 7:740; UE predicts throughput by past scheduling and throughput of uplink for prior UE uplink data transmissions. ¶0146; Information about the buffer status indicates an amount of uplink data. ¶0070). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Vankayala, machine learning based on a plurality of network parameters, with the teachings of Fu, use additional network parameter, amount of data scheduled for uplink, for machine learning. The motivation in doing so would be to provide additional dimension, amount of data scheduled for uplink, to the prediction models to further optimize radio setting configurations for best performance. RE Claim 18, Vankayala, Azizi, and Dalmiya do not explicitly disclose: The device, wherein the processor is configured to adjust the machine learning model parameters based on the amount of data scheduled for the uplink transmission and the predicted amount of data to be scheduled for the uplink transmission. However, Fu discloses: The device, wherein the processor is configured to adjust the machine learning model parameters (¶0058) based on the amount of data scheduled for the uplink transmission and the predicted amount of data to be scheduled for the uplink transmission. (UE predicts throughput by past scheduling and throughput of uplink for prior UE uplink data transmissions. ¶0146; Throughput prediction module in the UE predicts available throughput based on past scheduling and past throughput of uplink data sent. ¶0146, Fig. 7:740; Information about the buffer status indicates an amount of uplink data. ¶0070) It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Vankayala, machine learning based on a plurality of network parameters, with the teachings of Fu, use additional network parameter, data scheduled for uplink, for machine learning. The motivation in doing so would be to provide additional dimension, data scheduled for uplink, to further optimize radio setting configurations for best performance. RE Claim 19, Vankayala discloses: The device, wherein the machine learning model comprises a recursive neural network long short-term memory (LSTM) (System can use recurrent neural network, RNN, and hybrid architectures. ¶0062. LSTM neural networks are a type of neural network that is in the class of RNN networks.); wherein the processor is configured to provide the input in a time-series data configuration to the LSTM (UE measures or monitors CQS information (CQI) at a periodic or aperiodic time, an instance of time. ¶0066; Information can be used in real time or near real time basis, an instance of time. Some parameters are available on a non-real time basis with different periodicity. 
¶0079; Neural net, machine learning, is configured to extract different input network parameters. ¶0075; Each extracted network parameter is an input element of the neural network, machine learning. ¶0079). RE Claim 20, Vankayala discloses: The device, wherein the LSTM (System can use recurrent neural network, RNN, and hybrid architectures. ¶0062. LSTM neural networks are a type of neural network that is in the class of RNN networks.) is configured to provide the output (Prediction, output, using a machine learning mode. ¶0071, ¶¶0090-0091, Fig. 6, Fig. 9) based on input features of a time window comprising a plurality of consecutive instances of time (UE measures or monitors CQS information (CQI) at a periodic or aperiodic time, an instance of time. ¶0066; Information can be used in real time or near real time basis, an instance of time. Some parameters are available on a non-real time basis with different periodicity. ¶0079; Neural net, machine learning, is configured to extract different input network parameters. ¶0075; Each extracted network parameter is an input element of the neural network, machine learning. ¶0079). RE Claim 21, Vankayala discloses: The device, wherein the machine learning model comprises a reinforcement learning model (¶0010; ¶0089, Fig. 6; ¶0120, Fig. 9: 912, 912a; Neural network, learning machine, may be deployed at both the BS and UE for predicting the CQS information. ¶0074); wherein the processor is further configured to determine a first output parameter (¶0090, Fig. 6: 612) based on a first state indicated by the input (¶0090, Fig. 6: 602) at a first instance of time (UE measures or monitors CQS information (CQI) at a periodic or aperiodic time, control of a measurement at an instance of time. ¶0066; Prediction, output, using a machine learning mode. ¶0071, ¶¶0090-0091, Fig. 6, Fig. 9); wherein the processor is further configured to determine a reward for an observation state (For every interaction with environment, the RL agent, 618a, generates an output. Output is given to error function controller, 614. Output of error function controller is given to feedback function controller, 616 which provides feedback to the RL model as a form of rewards based on performance. ¶0092; Fig. 6) in which the UE communicates according to the configured radio resources according to the first output parameter (Neural network, machine learning, predicts CQI for a channel and shares information with MAC/RRC layers. MAC layer uses information to determine number of resources and MCS values, radio settings. ¶0168, Fig. 16); wherein the processor is further configured to determine a second output parameter ¶0090, Fig. 6: 612 based on the determined reward and a second state (¶0090, Fig. 6: 602) indicated by the input at a second instance of time (UE measures or monitors CQS information (CQI) at a periodic or aperiodic time, control of a measurement at an instance of time. ¶0066; For every interaction with environment, a second instance, the RL agent 618a generates an output. Output is given to error function controller, 614. Output of error function controller is given to feedback function controller, 616 which provides feedback to the RL model as a form of rewards based on performance. Process is repeated until error function of the output is less than threshold error value. ¶0092; Fig. 6). 
RE Claim 22, Vankayala, Azizi, and Dalmiya do not explicitly disclose: The device, wherein the processor is configured to determine the reward for the observation state based on the amount of data scheduled for the uplink transmission. However, Fu discloses: The device, wherein the processor is configured to determine the reward for the observation state (As new data, observations, are obtained they model may be updated by machine learning. ¶0058; Throughput and scheduling module estimates correctness of the model. For an estimation error, feedback, a reward to update a model, is provided to the throughput module. ¶0121, Fig. 6) based on the amount of data scheduled for the uplink transmission. (UE predicts throughput by past scheduling and throughput of uplink for prior UE uplink data transmissions. ¶0146; Scheduler identifies required resources and send MAC messages over PUSCH or L1/L2 control signaling over PUCCH, encoded messages for UE to obtain needed resources. ¶0043) Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Vankayala, in view of Dalmiya, in view of Azizi, and in further view of Tayamon. (US 20240205956 A1, hereinafter “Tayamon”) RE Claim 23, Vankayala, Azizi, and Dalmiya do not explicitly disclose: The device of claim, wherein the reinforcement learning model comprises a multi-armed bandit reinforcement learning model. However, Tayamon discloses: The device of claim, wherein the reinforcement learning model comprises a multi-armed bandit reinforcement learning model (¶0087). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to substitute the method of Vankayala, machine learning based using neural network and reinforcement learning , with the teachings of Tayamon, use of multi-armed bandit reinforcement for machine learning. The motivation in doing so would be to provide another form of machine learning models to the prediction models to further optimize radio setting configurations for best performance. Claims 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Vankayala, in view of Azizi, in view of Singh et al. (WO 2021255107 A1, hereinafter “Singh”) RE Claim 24, Vankayala discloses: A device (Base station. ¶0098, Fig. 7) comprising: a memory (Base station. ¶0098, Fig. 7: 704) configured to store uplink communication activity data (Extract network parameters by the BS from a plurality of network parameters from CQI, PMI, and RI reports. ¶0048. Different network parameters extracted include transmission mode of the UE, number of antennas of UE, full or half duplex modes, type of wireless link, QoS and QCI, uplink communication activities. ¶0075; Fig. 11: 1104 ) comprising information indicating uplink communication activities between one or more user equipments (UEs) and a base station (BS) (Different network parameters extracted include transmission mode of the UE, number of antennas of UE, full or half duplex modes, type of wireless link, QoS and QCI, uplink communication activities. ¶0075; User classification of various parameters used in neural network include QCI/QOS, MCS, location, or other UE parameters, uplink communication activities. ¶0088; Fig. 11: 1102); a processor (Base station. ¶0098, Fig. 7: 702) configured to: provide an input comprising the uplink communication activity data to a machine learning model (Extract network parameters by the BS from a plurality of network parameters from CQI, PMI, and RI reports. ¶0048. 
Different network parameters extracted include transmission mode of the UE, number of antennas of UE, full or half duplex modes, type of wireless link, QoS and QCI, uplink communication activities. ¶0075; Fig. 11: 1104; One or more CQI, PMI, and RI reports provided to input of neural network. ¶0140, Fig. 6: 602; Fig. 11: 1102; Neural network, learning machine, may be deployed at both the BS and UE for predicting the CQS information. ¶0074) configured to predict an uplink communication activity of the UE based on the input (Extract network parameters by the BS from a plurality of network parameters from CQI, PMI, and RI reports. ¶0048. Different network parameters extracted include transmission mode of the UE, number of antennas of UE, full or half duplex modes, type of wireless link, QoS and QCI, uplink communication activities. ¶0075; Fig. 11: 1104; Predicting CQI, PMI, and RI, uplink communication activity, by the BS. ¶0141; Fig. 11: 1106), Vankayala does not explicitly disclose: wherein the information represents at least one of one or more past buffer status reports (BSRs) received from a UE of the one or more UEs or past amounts of data received from the UE following a grant of uplink resources; wherein the predicted uplink communication activity comprises at least one of an anticipated amount of data expected to be reported in an upcoming BSR from the UE or an anticipated amount of data the UE is expected to have available for an upcoming uplink transmission; determine a load of a cell served by the BS; if the determined load of the cell is below a predefined threshold, configure uplink channel radio resources for the UE based on the anticipated amount of amount of data. However, Singh discloses: wherein the information represents at least one of one or more past buffer status reports (BSRs) received from a UE of the one or more UEs or past amounts of data received from the UE following a grant of uplink resources (gNB estimating, predicting, data volume by receiving BSRs from the UE with timestamp , time slot. Scheduler grant resources for the amount of data signaled by BSR plus extra amount of data estimated by scheduling based on history information. ¶0481); wherein the predicted uplink communication activity comprises at least one of an anticipated amount of data expected to be reported in an upcoming BSR from the UE or an anticipated amount of data the UE is expected to have available for an upcoming uplink transmission (Instead of the UE estimating data volume, gNB can perform that job. Assuming UE reports BSR with timestamp, gNB scheduler grants resources for amount of data signaled by the BSR plus extra amount of data estimated by the scheduler based on history information. ¶0481. Additional data volume estimation at scheduler (network side) after receiving estimate from UE may be further performed. ¶¶0482-0487); configure uplink channel radio resources for the UE based on the anticipated amount of amount of data. (Prediction of traffic data is based on at least one of : a size, a type, a content, or a required transmission rate with machine learning models. ¶¶0006, 0015-0016. UE reports to the gNB about new or change in traffic volume via BSR/SR/Multi-Bit SR (traffic reports/patterns). gNB can estimate the traffic based on past statistics, e.g. how frequent is SR transmitted by the UE. gNB performs traffic/data estimation and based on the analysis, gNB sends a new grant or updated grant. 
gNB can also update the existing resource allocation to change or allocate new parameters. ¶0455. ) Vankayala and Singh do not explicitly disclose: determine a load of a cell served by the BS; if the determined load of the cell is below a predefined threshold, However, Azizi discloses: determine a load of a cell served by the BS (Prediction engine 9600 may also receive information such as congestion levels, cell load. ¶1035, Fig. 96;The BS prediction engine may therefore be able to predict network conditions such as expected network traffic, expected load, expected congestion, expected latency, expected spectrum usage, and expected traffic types based on the predicted routes and predicted data requirements of each terminal device. ¶1047; A terminal device may utilize context information to optimize power consumption and/or data throughput during movement through areas of varying radio coverage. In particular, a terminal device may predict when or where the poor and strong radio coverage will occur and schedule radio activity such as cell scans and/or data transfers based on the predictions, which may enable the terminal device to conserve power by avoiding unnecessary failed cell scans and to optimize data transfer by executing transfers in high throughput conditions. ¶1100); if the determined load of the cell is below a predefined threshold, (Control module 11306 may adapt the configuration of wireless network 11200 based on estimated density and/or collision conditions, load, in order to improve the performance of wireless network 11200. Control module 11306 may therefore be configured to react to the instantaneous operating conditions and/or any changes to the operation of wireless network 11200. For example, control module 11306 may estimate network density and/or contention levels of wireless network 11200 and compare the estimated network density and/or contention level to a predefined threshold and, when the estimated network density and/or contention levels exceed the predefined threshold, trigger a reconfiguration of wireless network 11200 to reduce contention. ¶1103; Therefore, Azizi discloses that the cell load conditions and network conditions are monitored to react to the instantaneous conditions and/or changes and to determine estimated network density/contention levels, load, for both above and below cell threshold conditions.); It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Vankayala, apply uplink communication activity data to a ML model to predict uplink activity, with the teachings of Singh, use of specific traffic activity data details including BSR present and past reports with amount of data traffic to be transmitted to configure uplink resources based on predictions, with the teachings of Azizi, determination of the load of a cell and comparison to a threshold to make adjustments to device and network configurations accordingly.. The motivation in doing so would be to provide a dynamic adjustment of the device and network operations based on network conditions and device contexts to optimize network traffic activity to improve efficiency and scheduling of resources for each device. 
(Vankayala: Abstract, ¶¶0005, 0007, 0009, 0079; Singh: Abstract, ¶¶0006-0017; Azizi: Abstract, ¶¶0980, 0998-1000, 1001-1002, 0146-0147) RE Claim 25, Vankayala and Azizi do not explicitly disclose: The device, wherein the predicted communication activity comprises a predicted buffer status report; wherein the processor is configured to allocate resources for the UE based on the predicted buffer status report; wherein the processor is configured to encode a message indicating configured uplink radio resources to be transmitted to the respective UE. However Singh discloses: The device (Network node. ¶0537, Fig. 8A), wherein the predicted communication activity (Network node receives a scheduling request or buffer status report. ¶0073, Fig. 5A) comprises a predicted buffer status report (SR or BSR from UE indicates a predicted buffer size associated with data to be received. ¶0073); wherein the processor is configured to allocate resources for the UE (gNB, after receiving BSR, decides whether to allocate resources to the UE. ¶0331) based on the predicted buffer status report (SR or BSR from UE indicates a predicted buffer size associated with data to be received. ¶0073); wherein the processor is configured to encode a message (predicted buffer size is included, encoded, in a MAC message, control element, in UCI, or in configured grant uplink control CG-UCI. ¶0096) indicating configured uplink radio resources (BS transmits a UL grant for the data according to received SR or BSR. ¶0339, Fig. 5A) to be transmitted to the respective UE (BSR prediction configuration is specific for a terminal. ¶03311). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Vankayala, apply uplink communication activity data to a ML model to predict uplink activity, with the teachings of Singh, use of specific traffic activity data details including BSR present and past reports with amount of data traffic to be transmitted included in a predicted BSR report to configure uplink resources, with the teachings of Azizi, determination of the load of a cell and comparison to a threshold to make adjustments to device and network configurations accordingly.. The motivation in doing so would be to provide a dynamic adjustment of the device and network operations based on network conditions and device contexts to optimize network traffic activity to improve efficiency and scheduling of resources for each device. (Vankayala: Abstract, ¶¶0005, 0007, 0009, 0079; Singh: Abstract, ¶¶0006-0017; Azizi: Abstract, ¶¶0980, 0998-1000, 1001-1002, 0146-0147) Response to Arguments Applicant’s arguments with respect to claim(s) 1, 13, and 24 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant’s argument with respect to claim 13 and the limitation ‘in response to the predicted amount of data’ is directed towards a ‘proactive trigger’ and ‘UE to initiate a resource request before the data arrives’. Applicant further argues that ‘the predicted amount of data’ is the predicted data received from the application. Examiner respectfully disagrees. 
The limitation in question is preceded by “provide an input comprising the uplink buffer data and the context information to a machine learning model configured to predict an amount of data to be scheduled for the upcoming uplink transmission to be transmitted to the BS based on the input”, examiner emphasis. A processor is configured to perform these two limitations. Thus, ‘the predicted amount of data’ of the 2nd processor limitation is referring to the ‘a machine learning model configured to predict an amount of predicted data’ from the 1st processor limitation. Therefore, ‘the predicted amount of data’ is the prediction of the machine learning model and not the application. See Dalmiya, ¶¶0072, 0074, 0084, 0094. Applicant’s argument with respect to amended claim 24 is that the base station, BS, employs a specific condition for activating its machine learning prediction model. Further, applicant argues that the BS initiates prediction of UE uplink data volume ‘only when the current cell is determined to be below a predefined threshold.’ Examiner respectfully disagrees. Under the examiner’s interpretation, the amended claim language, as written, does not feature a limitation directed towards active and non-active states of the machine learning prediction model. Claim 24, as amended, is directed to determining a load of a cell and a contingent claim ‘if the determined load of the cell is below a predefined threshold’ followed by providing an input to the machine learning model. Dependent claim 25 does not provide further limitations to the active state of the ML model. Referring to the specification of the instant application, the reference to cell load thresholds and input to the ML model, ¶0180, does not disclose an activity state or change of activity state of the ML model.
Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20050227697 A1 Borst et al. US 20210091838 A1 Bai et al. US 20210266247 A1 Sivaraj et al. US 20120188882 A1 Wilkinson et al. US 20210168841 A1 Vankayala et al. Liao, Run-Fa, Wen, Hong, Wu, Jinsong, Song, Huanhuan, Pan, Fei, Dong, Lian, The Rayleigh Fading Channel Prediction via Deep Learning, Wireless Communications and Mobile Computing, 2018, 6497340, 11 pages, 2018. https://doi.org/10.1155/2018/6497340 The above references disclose various aspects of channel state information measurement and prediction methods with time varying CSI states to improve network efficiency. Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL A. LANGER whose telephone number is (703)756-1780. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm, Eastern. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nishant B. Divecha can be reached at 1 (571) 270-3125. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PAUL A. LANGER/Examiner, Art Unit 2419 /Nishant Divecha/Supervisory Patent Examiner, Art Unit 2419
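To make the claim 1 mechanism at the center of the §103 rejection easier to follow, here is a minimal, hypothetical sketch: predict CQI for a third instance of time lying between two periodic measurements, then adjust the model parameters by comparing that prediction against the second measurement. The class name, the linear predictor, and the gradient-style update are illustrative assumptions only; they are not taken from the application or from Vankayala, Chavva, or Rácz.

```python
# Illustrative sketch only: a toy "machine learning model" standing in for the neural
# networks discussed in the rejection. Names and update rule are hypothetical.

class ToyCqiPredictor:
    """Predicts CQI at an intermediate time t3 (t1 < t3 < t2) from the CQI measured at t1."""

    def __init__(self, weight: float = 1.0, bias: float = 0.0, lr: float = 0.05):
        self.weight = weight   # the "machine learning model parameters"
        self.bias = bias
        self.lr = lr

    def predict(self, cqi_t1: float) -> float:
        # Predicted CQI for the third instance of time, based on the first measurement.
        return self.weight * cqi_t1 + self.bias

    def update(self, predicted_t3: float, measured_t2: float, cqi_t1: float) -> None:
        # Adjust the parameters by comparing the t3 prediction with the second (t2)
        # measurement -- here a one-step gradient update on the squared error.
        error = predicted_t3 - measured_t2
        self.weight -= self.lr * error * cqi_t1
        self.bias -= self.lr * error


model = ToyCqiPredictor()
cqi_t1, cqi_t2 = 9.0, 11.0                    # periodic CQI measurements at t1 and t2
predicted_t3 = model.predict(cqi_t1)          # CQI estimate for t3, between t1 and t2
model.update(predicted_t3, cqi_t2, cqi_t1)    # online parameter adjustment
print(predicted_t3, model.weight, model.bias)
```

The disputed limitation tracks the `update` step: the claim ties the parameter adjustment to the comparison between the t3 prediction and the t2 measurement, which the rejection maps to Chavva's cost-function training and weight updates (¶¶0038-0039, 0047-0048).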

Prosecution Timeline

Dec 20, 2021: Application Filed
Feb 11, 2022: Response after Non-Final Action
Feb 19, 2025: Non-Final Rejection — §103
May 22, 2025: Response Filed
Jul 17, 2025: Final Rejection — §103
Aug 12, 2025: Interview Requested
Aug 27, 2025: Examiner Interview Summary
Aug 27, 2025: Applicant Interview (Telephonic)
Sep 16, 2025: Response after Non-Final Action
Oct 21, 2025: Request for Continued Examination
Oct 30, 2025: Response after Non-Final Action
Jan 07, 2026: Non-Final Rejection — §103
Mar 19, 2026: Interview Requested

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
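As a worked illustration of that last sentence (assuming the tool simply divides granted cases by resolved cases and adds the reported interview lift; the exact formula is not disclosed here):

```python
# Assumed derivation; the page does not disclose the actual formula.
granted, resolved = 0, 6        # "0 granted / 6 resolved"
interview_lift_pct = 0.0        # "+0.0%" interview lift

grant_probability = granted / resolved                        # 0.0 -> 0%
with_interview = grant_probability + interview_lift_pct / 100

print(f"Grant probability: {grant_probability:.0%}")          # 0%
print(f"With interview: {with_interview:.0%}")                 # 0%
```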
