Prosecution Insights
Last updated: April 19, 2026
Application No. 17/919,573

Improving Random Access Based on Artificial Intelligence / Machine Learning (AI/ML)

Status: Final Rejection (§103)
Filed: Oct 18, 2022
Examiner: SIXTO, NANCY
Art Unit: 2465
Tech Center: 2400 — Computer Networks
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 2 (Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% (above average; 5 granted / 7 resolved; +13.4% vs TC avg)
Interview Lift: +40.0% (strong; measured over resolved cases with interview)
Avg Prosecution: 2y 1m (fast prosecutor); 38 applications currently pending
Career History: 45 total applications across all art units

Statute-Specific Performance

§101: 0.9% (-39.1% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 27.5% (-12.5% vs TC avg)
§112: 5.1% (-34.9% vs TC avg)
Deltas are relative to the estimated Tech Center average • Based on career data from 7 resolved cases
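For readers who want to sanity-check the dashboard arithmetic, the headline examiner figures reduce to a few ratios. A minimal sketch; the function names are illustrative, and deriving the TC average by subtracting the stated delta is an assumption, not how the dashboard necessarily computes it:

```python
# Illustrative recomputation of the examiner stats shown above.
# Inputs (5 granted, 7 resolved, +13.4% delta) come from the dashboard;
# everything else here is an assumption for demonstration.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def delta_vs_tc(examiner_rate: float, tc_avg: float) -> float:
    """Signed difference between the examiner's rate and the TC average."""
    return examiner_rate - tc_avg

rate = allow_rate(5, 7)     # 5 granted out of 7 resolved
tc_avg = rate - 13.4        # TC average implied by the +13.4% delta
print(round(rate, 1))       # 71.4, displayed as 71%
print(round(delta_vs_tc(rate, tc_avg), 1))  # 13.4
```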

Office Action

§103
DETAILED ACTION

Claims 38-56 are presented for examination. Claims 38, 51 and 57 are amended.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to claims 38, 51 and 57 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Regarding the dependent claims 39-48, 49-50, and 52-56, Applicant has not made specific arguments as to why the cited references do not teach the recited claims, other than their dependency on claims 38, 51 and 57. Therefore, for at least the reasons presented above for claims 38, 51 and 57, the dependent claims are rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 38, 49, 51, 57 are rejected under 35 U.S.C. 103 as being unpatentable over Shah (US 20200374926 A1) in view of Lee (US 20220104276 A1); further in view of Yan (US 20200186308 A1).

Regarding claim 38, Shah teaches a method for a network node to manage random access by one or more user equipment (UEs) to a cell of a wireless network, the method comprising: providing one of the following to one or more UEs operating in the cell: an artificial intelligence/machine learning (AI/ML) predictive model that includes one or more input parameters and corresponding one or more output parameters that are associated with random-access configurations for the cell; or one or more random-access configurations for the cell (Fig. 7, [0110] “…the base station proceeds step S204, wherein the determined random access configuration parameters are transmitted to the UE”.), with each random-access configuration being associated with one or more values of output parameters ([0111] “…the determined random access configuration parameters comprise an initial configuration of power ramping and back-off parameter” (output parameters).); and detecting a random access to the cell by a particular UE, the random access according to a particular random-access configuration associated with particular values of the output parameters (Fig. 6, [0088] “The UE then performs in step S105 a prioritized random access procedure with the base station using the determined random access parameters”.).

Shah does not teach the output parameters are of the AI/ML predictive model and the output parameters comprising one or more of the following: an initial power level to be used by UE per beam; an initial power level to be used by UE per beam per measurement threshold; and a power ramping step per beam.

However, Lee, in the same field of endeavor of wireless communications, teaches the output parameters are of the AI/ML predictive model (If an AI/ML predictive model is used to determine the random access configuration of Shah, then Figures 1 and 2 show AI devices that could be used for the UE and an AI server respectively and Fig. 3 shows an AI system with an AI server and UEs. Fig. 1, [0100] “The learning model is used to deduce a result value of new input data” and Fig. 2, [0123] “The processor 260 may deduce a result value of new input data using the learning model”.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of determining random access configuration parameters of Shah with the AI/ML predictive model of Lee. The motivation to do so would have been to improve the performance of a task, in this case determining a random access configuration, through continuous experiences for the task (Lee; [0071]).
Modified Shah teaches the output parameters comprising one or more of the following: an initial power level to be used by UE ([0111] “In a further variation, the random access configuration parameters comprise an initial power value”); an initial power level to be used by UE per measurement threshold ([0111] “According to an example of the embodiment, the determined random access configuration parameters comprise an initial configuration of power ramping and back-off parameter”); and a power ramping step ([0080] For example, the random access configuration parameters may include one or more of the following: [0084] one or a plurality of power ramping step size values each indicating a power increment to the previously-used transmit power value to determine a new transmit power to be used for a subsequent prioritized random access procedure performed within a secondary time interval).

Modified Shah does not teach the output parameters are per beam.

Yan, in the same field of endeavor of wireless communications, teaches the output parameters are per beam: an initial power level to be used by UE per beam; an initial power level to be used by UE per beam per measurement threshold; and a power ramping step per beam ([0092] Optionally, during random access, the network device may configure a parameter related to each downlink signal (per beam). Specifically, the parameter may include at least one of the following parameters: a maximum quantity of random access preamble transmission times, a maximum quantity of terminal beam switching times, a maximum quantity of base station beam switching times, a maximum quantity of beam pair switching times, a threshold parameter of a quantity of transmission times, downlink reference signal transmit power, a power ramping step (power ramping step per beam), preamble initial received target power (initial power level to be used by UE per beam), a preamble format, a maximum power ramping level, and maximum transmit power P_CMAX.
The second configuration parameter may be used to determine preamble transmit power. [0329] In this embodiment, in a random access procedure, a terminal transmit beam is fixedly used. However, at each power ramping level, preamble retransmission is performed by using (N is configured by a network device, and may be a single value, or may be a set or a value range) random access resources associated with N different downlink signals, where N is a positive integer, and N is not greater than a maximum quantity of base station beam switching times.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of determining random access configuration parameters of Shah and the AI/ML predictive model of Lee with the per-beam parameters of Yan. The motivation to do so would have been to provide a signal transmission method, a related apparatus, and a system, to improve a success rate of preamble retransmission and reduce latency (Yan; [0071]).

Regarding claim 49, Shah teaches the method of claim 38, wherein the output parameters of the AI/ML predictive model include any of the following: one or more power levels for an initial transmission of a random-access preamble ([0111] “In a further variation, the random access configuration parameters comprise an initial power value”), one or more measurement thresholds corresponding to the power levels, one or more power ramping steps for retransmissions of the random-access preamble ([0111] “According to an example of the embodiment, the determined random access configuration parameters comprise an initial configuration of power ramping and back-off parameter”), maximum number of preamble retransmissions before declaring random access failure, and set of downlink (DL) beams to be used for random access.
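The per-beam parameters at issue in claims 38 and 49 (an initial preamble power and a power ramping step, configured per downlink beam and capped by a maximum transmit power) can be sketched as a simple ramping rule. This is an illustrative sketch only; the names and values are assumptions, not taken from Shah, Lee, or Yan:

```python
# Hypothetical per-beam random-access power ramping: each retransmission
# attempt adds one ramping step to the beam's initial power, clipped at
# a P_CMAX-style maximum. Values are illustrative.

from dataclasses import dataclass

@dataclass
class BeamRaConfig:
    initial_power_dbm: float  # preamble initial received target power (per beam)
    ramp_step_db: float       # power ramping step (per beam)
    max_power_dbm: float      # maximum transmit power cap

def preamble_tx_power(cfg: BeamRaConfig, attempt: int) -> float:
    """Transmit power for the given retransmission attempt (0-based),
    ramped per beam and clipped at the configured maximum."""
    return min(cfg.initial_power_dbm + attempt * cfg.ramp_step_db,
               cfg.max_power_dbm)

cfg = BeamRaConfig(initial_power_dbm=-100.0, ramp_step_db=2.0, max_power_dbm=-90.0)
# Ramps -100, -98, ... in 2 dB steps until the -90 dBm cap is reached.
print([preamble_tx_power(cfg, a) for a in range(7)])
```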
Regarding claim 51, Shah teaches a network node arranged to manage random access by one or more user equipment (UEs) to a cell of a wireless network, the network node comprising: radio network interface circuitry configured to communicate with the UEs via the cell (Fig. 5, transceiver); and processing circuitry operatively coupled to the radio network interface circuitry (Fig. 5, processing circuitry), whereby the processing circuitry and the radio network interface circuitry are configured to: provide one of the following to one or more UEs operating in the cell: an artificial intelligence/machine learning (AI/ML) predictive model that includes one or more input parameters and corresponding one or more output parameters that are associated with random-access configurations for the cell, or one or more random-access configurations for the cell (Fig. 7, [0110] “…the base station proceeds step S204, wherein the determined random access configuration parameters are transmitted to the UE”.), with each random-access configuration being associated with one or more values of output parameters ([0111] “…the determined random access configuration parameters comprise an initial configuration of power ramping and back-off parameter” (output parameters).); and detect a random access to the cell by a particular UE, the random access according to a particular random-access configuration associated with particular values of the output parameters (Fig. 6, [0088] “The UE then performs in step S105 a prioritized random access procedure with the base station using the determined random access parameters”.).

Shah does not teach the output parameters are of the AI/ML predictive model and the output parameters comprising one or more of the following: an initial power level to be used by UE per beam; an initial power level to be used by UE per beam per measurement threshold; and a power ramping step per beam.
However, Lee, in the same field of endeavor of wireless communications, teaches the output parameters are of the AI/ML predictive model (If an AI/ML predictive model is used to determine the random access configuration of Shah, then Figures 1 and 2 show AI devices that could be used for the UE and an AI server respectively and Fig. 3 shows an AI system with an AI server and UEs. Fig. 1, [0100] “The learning model is used to deduce a result value of new input data” and Fig. 2, [0123] “The processor 260 may deduce a result value of new input data using the learning model”.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of determining random access configuration parameters of Shah with the AI/ML predictive model of Lee. The motivation to do so would have been to improve the performance of a task, in this case determining a random access configuration, through continuous experiences for the task (Lee; [0071]).

Lee does not teach the output parameters comprising one or more of the following: an initial power level to be used by UE per beam; an initial power level to be used by UE per beam per measurement threshold; and a power ramping step per beam.

Yan, in the same field of endeavor of wireless communications, teaches the output parameters comprising one or more of the following: an initial power level to be used by UE per beam; an initial power level to be used by UE per beam per measurement threshold; and a power ramping step per beam ([0092] Optionally, during random access, the network device may configure a parameter related to each downlink signal (which may be referred to as a second configuration parameter).
Specifically, the parameter may include at least one of the following parameters: a maximum quantity of random access preamble transmission times, a maximum quantity of terminal beam switching times, a maximum quantity of base station beam switching times, a maximum quantity of beam pair switching times, a threshold parameter of a quantity of transmission times, downlink reference signal transmit power, a power ramping step (power ramping step per beam), preamble initial received target power (initial power level to be used by UE per beam), a preamble format, a maximum power ramping level, and maximum transmit power P_CMAX. The second configuration parameter may be used to determine preamble transmit power.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of determining random access configuration parameters of Shah and the AI/ML predictive model of Lee with the second configuration parameter of Yan. The motivation to do so would have been to provide a signal transmission method, a related apparatus, and a system, to improve a success rate of preamble retransmission and reduce latency (Yan; [0071]).

Regarding claim 57, Shah teaches a non-transitory, computer-readable medium ([0012] “It should be noted that general or specific embodiments may be implemented as a system, a method, an integrated circuit, a computer program, a storage medium, or any selective combination thereof”.)
storing computer-executable instructions that, when executed by processing circuitry of a network node arranged to manage random access by one or more user equipment (UEs) to a cell of a wireless network, configure the network node to provide one of the following to one or more UEs operating in the cell: an artificial intelligence/machine learning (AI/ML) predictive model that includes one or more input parameters and corresponding one or more output parameters that are associated with random-access configurations for the cell; or one or more random-access configurations for the cell (Fig. 7, [0110] “…the base station proceeds step S204, wherein the determined random access configuration parameters are transmitted to the UE”.), with each random-access configuration being associated with one or more values of output parameters ([0111] “…the determined random access configuration parameters comprise an initial configuration of power ramping and back-off parameter” (output parameters).); and detect a random access to the cell by a particular UE, the random access according to a particular random-access configuration associated with particular values of the output parameters (Fig. 6, [] “The UE then performs in step S105 a prioritized random access procedure with the base station using the determined random access parameters”.).

Shah does not teach the output parameters are of the AI/ML predictive model and the output parameters comprising one or more of the following: an initial power level to be used by UE per beam; an initial power level to be used by UE per beam per measurement threshold; and a power ramping step per beam.

However, Lee, in the same field of endeavor of wireless communications, teaches the output parameters are of the AI/ML predictive model (If an AI/ML predictive model is used to determine the random access configuration of Shah, then Figures 1 and 2 show AI devices that could be used for the UE and an AI server respectively and Fig. 3 shows an AI system with an AI server and UEs. Fig. 1, [0100] “The learning model is used to deduce a result value of new input data” and Fig. 2, [0123] “The processor 260 may deduce a result value of new input data using the learning model”.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of determining random access configuration parameters of Shah with the AI/ML predictive model of Lee. The motivation to do so would have been to improve the performance of a task, in this case determining a random access configuration, through continuous experiences for the task (Lee; [0071]).

Lee does not teach the output parameters comprising one or more of the following: an initial power level to be used by UE per beam; an initial power level to be used by UE per beam per measurement threshold; and a power ramping step per beam.

Yan, in the same field of endeavor of wireless communications, teaches the output parameters comprising one or more of the following: an initial power level to be used by UE per beam; an initial power level to be used by UE per beam per measurement threshold; and a power ramping step per beam ([0092] Optionally, during random access, the network device may configure a parameter related to each downlink signal (which may be referred to as a second configuration parameter).
Specifically, the parameter may include at least one of the following parameters: a maximum quantity of random access preamble transmission times, a maximum quantity of terminal beam switching times, a maximum quantity of base station beam switching times, a maximum quantity of beam pair switching times, a threshold parameter of a quantity of transmission times, downlink reference signal transmit power, a power ramping step (power ramping step per beam), preamble initial received target power (initial power level to be used by UE per beam), a preamble format, a maximum power ramping level, and maximum transmit power P_CMAX. The second configuration parameter may be used to determine preamble transmit power.). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of determining random access configuration parameters of Shah and the AI/ML predictive model of Lee with the second configuration parameter of Yan. The motivation to do so would have been to provide a signal transmission method, a related apparatus, and a system, to improve a success rate of preamble retransmission and reduce latency (Yan; [0071]).

Claim Rejections - 35 USC § 103

Claims 39, 40, 41, 42, 43, 44, 52, 53, 54, 55 are rejected under 35 U.S.C. 103 as being unpatentable over Shah (US 20200374926 A1) in view of Lee (US 20220104276 A1) and Yan (US 20200186308 A1), further in view of Pezeshki (US 20210243073 A1).

Regarding claim 39, Shah, Lee and Yan teach the method of claim 38, further comprising collecting a dataset, wherein the dataset includes input parameter values and corresponding output parameter values (Input parameter values are the CSI reports and measurement reports the base station receives from the UE as shown in Fig. 7, S202.
[0098] “In general, a CSI (Channel State Information) report as well as the measurement report are generated by the UE and include information on the quality of UE channels, such as downlink channels with current serving gNB and/or neighbor gNBs”. Output parameter values are the random access configuration parameters based on the CSI/measurement reports as shown in Fig. 7, S203. [0111] “According to an example of the embodiment, the determined random access configuration parameters comprise an initial configuration of power ramping and back-off parameter”.) However, Pezeshki in the same field of endeavor of wireless communications teaches a training dataset with a plurality of training dataset entries ([0069] “The predictive model training manager 532 may use the information in the training repository 515 (the training dataset) to determine the predictive model 524. The training repository 515 may receive training information (input parameter values and corresponding output parameter values as defined in Shah above) from the node 520, entities in the network 505 (e.g., BSs or UEs in the network 505), the cloud, or other sources” [0075] In some examples, when using a machine learning algorithm, the training system 530 generates vectors from the information in the training repository 515. In some examples, the training repository 515 stores vectors. In some examples, the vectors map one or more features to a label. The label may correspond to the predicted channel characteristics of the second band”. The vectors are the dataset entries, the features are the input, and the labels are the output.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shah, Lee and Yan with the training system 530 and the training repository 515 and the ML techniques involved in training a predictive model of Pezeshki. 
The motivation to do so would have been to predict channel characteristics in a second radio frequency band based on measurements in a first radio frequency band (Pezeshki; [0038]).

Regarding claim 40, Shah teaches the method of claim 39, wherein the training dataset includes one or more of the following: measurements of downlink (DL) signals made by UEs operating in the cell ([0098] “…the measurement report are generated by the UE and include information on the quality of UE channels, such as downlink channels with current serving gNB”); measurements of uplink (UL) signals transmitted by UEs operating in the cell; measurements from one or more network nodes serving neighbor cells; random access reports by UEs operating in the cell; connection establishment failure reports by UEs operating in the cell; location information for UEs operating in the cell; and timing advance for UEs operating in the cell.

Regarding claim 41, Shah teaches the method of claim 39, wherein each training dataset entry includes: one or more of the following input parameter values: one or more measurements made by a UE on neighboring cells or frequencies ([0098] “In general, a CSI (Channel State Information) report as well as the measurement report are generated by the UE and include information on the quality of UE channels, such as downlink channels with current serving gNB and/or neighbor gNBs.”), and one or more beam measurements made by a UE in the cell; and one or more of the following corresponding output parameter values: an indication of failed or successful random access to the cell, and an indication of failed or successful connection establishment ([0090] “According to a further example of the embodiment, the UE informs the base station about a beam failure recovery failure or handover failure, if the primary time interval has lapsed without having completed a RACH procedure successfully”.).
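The training-dataset structure described for claims 39-41 (entries pairing input parameter values, such as neighbor-cell and beam measurements, with output parameter values, such as a random-access success or failure indication) amounts to a feature-vector-to-label repository. A minimal sketch under assumed field names, not taken from any cited reference:

```python
# Hypothetical training repository: each entry maps measurement features
# to a random-access outcome label, mirroring the vectors-to-labels
# description attributed to Pezeshki above. All names are illustrative.

from typing import NamedTuple, List

class DatasetEntry(NamedTuple):
    features: List[float]  # e.g. serving/neighbor RSRP and beam measurements
    label: int             # 1 = random access succeeded, 0 = failed

repository: List[DatasetEntry] = []

def record_attempt(measurements: List[float], succeeded: bool) -> None:
    """Append one (features, label) entry to the training repository."""
    repository.append(DatasetEntry(measurements, int(succeeded)))

record_attempt([-95.2, -101.7, -88.4], succeeded=True)
record_attempt([-110.0, -112.3, -105.1], succeeded=False)
print(len(repository), repository[0].label, repository[1].label)  # 2 1 0
```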
Regarding claim 42, Shah, Lee, Yan and Pezeshki teach the method of claim 39 and Pezeshki teaches wherein: the provided AI/ML predictive model is untrained ([0065] “In some examples, the ML techniques involve training a model, such as a predictive model”. The model requires training; thus it is untrained.); and the method further comprises sending at least a first portion of the training dataset to the one or more UEs ([0065] “The model may be trained based on training data. [0067] The training system 530 generally includes a predictive model training manager 532 that uses training data to generate the predictive model. The predictive model 524 may be determined based on the information in the training repository 515. [0069] The training system 530 may be located on the node 520 and the training repository 515 may be located on the node 520. [0066] The node 520 may be a UE (e.g., such as the UE 120a in the wireless communication network 100). [0069] The training repository 515 may receive training information from the node 520, entities in the network 505 (e.g., BSs or UEs in the network 505), the cloud, or other sources”. Therefore, the UE receives at least a first portion of the training dataset). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shah, Lee and Yan with the training system 530 and the training repository 515 and the ML techniques involved in training a predictive model of Pezeshki. The motivation to do so would have been to predict channel characteristics in a second radio frequency band based on measurements in a first radio frequency band (Pezeshki; [0038]).

Regarding claim 43, Shah teaches the method of claim 39 and a first portion of the training dataset ([0110] “…the base station may receive both, a CSI report and a measurement report from the UE”.
A portion of the training dataset comprises the input values to the AI/ML predictive model, the learning data.)

Shah does not teach training the AI/ML predictive model based on at least a first portion of the training dataset.

However, Lee, in the same field of endeavor of wireless communications, teaches training the AI/ML predictive model based on at least a first portion of the training dataset ([0100] “The learning processor 130 may be trained by a model configured with an artificial neural network using learning data” (a first portion of the training dataset)). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of determining random access configuration parameters of Shah with the AI/ML predictive model of Lee. The motivation to do so would have been to reduce the PUSCH decoding overhead of the base station (Lee; [0024]).

Regarding claim 44, Shah does not teach wherein the trained AI/ML predictive model is provided to the one or more UEs. However, Lee, in the same field of endeavor of wireless communications, teaches wherein the trained AI/ML predictive model is provided to the one or more UEs ([0130] “In this case, the AI server 200 may train an artificial neural network based on a machine learning algorithm in place of the AI devices 100a to 100e, may directly store a learning model or may transmit the learning model to the AI devices 100a to 100e.” The trained AI/ML predictive model is provided to the one or more UEs.) It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of determining random access configuration parameters of Shah with the AI/ML predictive model of Lee. The motivation to do so would have been to reduce the PUSCH decoding overhead of the base station (Lee; [0024]).
Regarding claim 52, Shah, Lee and Yan teach the network node of claim 51, wherein the processing circuitry and the radio network interface circuitry are further configured to collect a dataset, wherein each dataset entry includes input parameter values and corresponding output parameter values (Input parameter values are the CSI reports and measurement reports the base station receives from the UE as shown in Fig. 7, S202. [0098] “In general, a CSI (Channel State Information) report as well as the measurement report are generated by the UE and include information on the quality of UE channels, such as downlink channels with current serving gNB and/or neighbor gNBs”. Output parameter values are the random access configuration parameters based on the CSI/measurement reports as shown in Fig. 7, S203. [0111] “According to an example of the embodiment, the determined random access configuration parameters comprise an initial configuration of power ramping and back-off parameter”.)

However, Pezeshki, in the same field of endeavor of wireless communications, teaches a training dataset with a plurality of training dataset entries ([0069] “The predictive model training manager 532 may use the information in the training repository 515 (the training dataset) to determine the predictive model 524. The training repository 515 may receive training information (input parameter values and corresponding output parameter values as defined in Shah above) from the node 520, entities in the network 505 (e.g., BSs or UEs in the network 505), the cloud, or other sources” [0075] “In some examples, when using a machine learning algorithm, the training system 530 generates vectors from the information in the training repository 515. In some examples, the training repository 515 stores vectors. In some examples, the vectors map one or more features to a label. The label may correspond to the predicted channel characteristics of the second band”.
The vectors are the dataset entries, the features are the input, and the labels are the output.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shah, Lee and Yan with the training system 530 and the training repository 515 and the ML techniques involved in training a predictive model of Pezeshki. The motivation to do so would have been to predict channel characteristics in a second radio frequency band based on measurements in a first radio frequency band (Pezeshki; [0038]).

Regarding claim 53, Shah teaches the network node of claim 52, wherein the training dataset includes one or more of the following: measurements of downlink (DL) signals made by UEs operating in the cell ([0098] “…the measurement report are generated by the UE and include information on the quality of UE channels, such as downlink channels with current serving gNB”); measurements of uplink (UL) signals transmitted by UEs operating in the cell; measurements from one or more network nodes serving neighbor cells; random access reports by UEs operating in the cell; connection establishment failure reports by UEs operating in the cell; location information for UEs operating in the cell; and timing advance for UEs operating in the cell.
Regarding claim 54, Shah teaches the network node of claim 52, wherein each training dataset entry includes: one or more of the following input parameter values: one or more measurements made by a UE on neighboring cells or frequencies ([0098] “In general, a CSI (Channel State Information) report as well as the measurement report are generated by the UE and include information on the quality of UE channels, such as downlink channels with current serving gNB and/or neighbor gNBs.”), and one or more beam measurements made by a UE in the cell; and one or more of the following corresponding output parameter values: an indication of failed or successful random access to the cell, and an indication of failed or successful connection establishment ([0090] “According to a further example of the embodiment, the UE informs the base station about a beam failure recovery failure or handover failure, if the primary time interval has lapsed without having completed a RACH procedure successfully”.).

Regarding claim 55, Shah, Lee, Yan and Pezeshki teach the network node of claim 52, wherein: the provided AI/ML predictive model is untrained ([0065] “In some examples, the ML techniques involve training a model, such as a predictive model”. The model requires training; thus it is untrained.); and the method further comprises sending at least a first portion of the training dataset to the one or more UEs ([0065] “The model may be trained based on training data. [0067] The training system 530 generally includes a predictive model training manager 532 that uses training data to generate the predictive model. The predictive model 524 may be determined based on the information in the training repository 515. [0069] The training system 530 may be located on the node 520 and the training repository 515 may be located on the node 520. [0066] The node 520 may be a UE (e.g., such as the UE 120a in the wireless communication network 100).
[0069] The training repository 515 may receive training information from the node 520, entities in the network 505 (e.g., BSs or UEs in the network 505), the cloud, or other sources”. Therefore, the UE receives at least a first portion of the training dataset). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shah, Lee and Yan with the training system 530 and the training repository 515 and the ML techniques involved in training a predictive model of Pezeshki. The motivation to do so would have been to predict channel characteristics in a second radio frequency band based on measurements in a first radio frequency band (Pezeshki; [0038]). Claim Rejections - 35 USC § 103 Claims 45 and 56 are rejected under 35 U.S.C. 103 as being unpatentable over Shah in view of Lee and Yan, in view of Pezeshki, further in view of Ma (US 20210160149 A1). Regarding claim 45, Shah in view of Lee, Yan and Pezeshki does not teach receiving one or more of the following from a particular UE operating in the cell: an indication that the provided AI/ML predictive model needs to be retrained, and a request for a further training dataset for retraining the model; and performing one of the following: sending a second portion of the training dataset to the particular UE, or retraining the AI/ML predictive model based on a second portion of the training dataset and sending the retrained AI/ML predictive model to the particular UE. 
However, Ma, in the same field of endeavor of wireless communications, teaches receiving one or more of the following from a particular UE operating in the cell: an indication that the provided AI/ML predictive model needs to be retrained ([0159] “In this embodiment the re-training phase may also or instead be triggered by the UE, as indicated at 1312.”), and a request for a further training dataset for retraining the model; and performing one of the following: sending a second portion of the training dataset to the particular UE ([0159] “…during the re-training phase 1350 the UE and BS exchange re-training signaling as indicated at 1314 in order to facilitate re-training of AI/ML components in the network and/or at the UE. For example, in some embodiments the re-training signaling may include information exchanges and signaling such as that indicated at 1016, 1018 and 1020 in FIG. 12” [0122] “At 1016 the BS starts the training phase 1050 by sending a training signal that includes a training sequence or training data (a second portion of the training dataset) to the UE.”), or retraining the AI/ML predictive model based on a second portion of the training dataset and sending the retrained AI/ML predictive model to the particular UE. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shah, Lee, Yan and Pezeshki with the over-the-air information exchange procedure for a re-training phase of machine learning components of Ma. The motivation to do so would have been to facilitate training (and re-training) of ML components of communicating devices (Ma; [0110]). 
Regarding claim 56, Shah in view of Lee, Yan and Pezeshki does not teach wherein the processing circuitry and the radio network interface circuitry are further configured to: receive one or more of the following from a particular UE operating in the cell: an indication that the provided AI/ML predictive model needs to be retrained, and a request for a further training dataset for retraining the model; and perform one of the following: send a second portion of the training dataset to the particular UE, or retrain the AI/ML predictive model based on a second portion of the training dataset and send the retrained AI/ML predictive model to the particular UE. However, Ma, in the same field of endeavor of wireless communications, teaches receiving one or more of the following from a particular UE operating in the cell: an indication that the provided AI/ML predictive model needs to be retrained ([0159] “In this embodiment the re-training phase may also or instead be triggered by the UE, as indicated at 1312.”), and a request for a further training dataset for retraining the model; and performing one of the following: sending a second portion of the training dataset to the particular UE ([0159] “…during the re-training phase 1350 the UE and BS exchange re-training signaling as indicated at 1314 in order to facilitate re-training of AI/ML components in the network and/or at the UE. For example, in some embodiments the re-training signaling may include information exchanges and signaling such as that indicated at 1016, 1018 and 1020 in FIG. 12” [0122] “At 1016 the BS starts the training phase 1050 by sending a training signal that includes a training sequence or training data (a second portion of the training dataset) to the UE.”), or retraining the AI/ML predictive model based on a second portion of the training dataset and sending the retrained AI/ML predictive model to the particular UE. 
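To make the claimed exchange concrete, the retraining flow recited in claims 45 and 56 (which the rejection maps onto Ma's re-training phase) can be sketched as below; the handler, message names, and toy "retraining" function are hypothetical illustrations, not anything disclosed in Ma:

```python
# Hypothetical sketch of the claimed retraining exchange: the UE either
# indicates that the provided model needs retraining or requests a further
# training dataset, and the network node responds with one of the two claimed
# options. All names and message formats are illustrative assumptions.
def handle_retraining_request(indication, second_portion, retrain):
    if indication == "model_needs_retraining":
        # claimed option: retrain at the network node, send the updated model
        return ("retrained_model", retrain(second_portion))
    if indication == "request_training_dataset":
        # claimed option: send a second portion of the training dataset
        return ("training_data", second_portion)
    return ("no_action", None)

# Toy stand-in for retraining: averaging the portion represents a model update.
msg, payload = handle_retraining_request(
    "request_training_dataset", [0.2, 0.4, 0.6], retrain=lambda d: sum(d) / len(d)
)
```

The two branches correspond one-to-one with the "performing one of the following" alternatives in the claim language.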
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shah, Lee, Yan and Pezeshki with the over-the-air information exchange procedure for a re-training phase of machine learning components of Ma. The motivation to do so would have been to facilitate training (and re-training) of ML components of communicating devices (Ma; [0110]). Claim Rejections - 35 USC § 103 Claim 46 is rejected under 35 U.S.C. 103 as being unpatentable over Shah in view of Lee, Yan, and Pezeshki, further in view of Wang (US 20220322107 A1). Regarding claim 46, Shah, Lee, Yan and Pezeshki teach the method of claim 43, but do not teach obtaining the one or more random-access configurations for the cell based on the trained AI/ML predictive model, wherein the obtained random-access configurations are provided to the one or more UEs via broadcast in the cell. However, Wang, in the same field of endeavor of optimizing a cellular network using machine learning, teaches obtaining the one or more random-access configurations for the cell based on the trained AI/ML predictive model, wherein the obtained random-access configurations are provided to the one or more UEs via broadcast in the cell ([0065] “At 550, the base stations 120 pass the optimization message 470 to the UEs 110. Like the gradient-request message 430, each base station 120 may individually send optimization messages 470 to different UEs 110 or may broadcast or multicast a single optimization message to the UEs 110. [0052] The optimization-message generator 370 generates an optimization message 470, which includes the optimized network-configuration parameter 460. 
[0054] The at least one first network-configuration parameter 422 can include a first uplink transmit power configuration, a first time-multiplexed pilot pattern, a first data tone power, a first uplink slot allocation percentage, a first subframe configuration, a first multi-user scheduling configuration, a first random-access configuration, or some combination thereof”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shah, Lee, Yan and Pezeshki with the network-optimization controller of Wang. The motivation to do so would have been to evaluate gradients from a group of entities within the cellular network to determine the optimized network-configuration parameter that optimizes performance for these entities as a group (Wang; [0004]). Claim Rejections - 35 USC § 103 Claim 47 is rejected under 35 U.S.C. 103 as being unpatentable over Shah in view of Lee and Yan, further in view of Ma (US 20210160149 A1). Regarding claim 47, Shah, Lee and Yan teach the method of claim 38 but do not teach further comprising selecting the AI/ML predictive model from a plurality of available model types based on one or more of the following criteria: wireless network capabilities, UE capabilities, model size and/or complexity, severity of random access problems in the cell, available inputs, necessary and/or desirable outputs, and need for retraining the model. However, Ma, in the same field of endeavor of wireless communications, teaches selecting the AI/ML predictive model from a plurality of available model types based on one or more of the following criteria: wireless network capabilities, UE capabilities, model size and/or complexity, severity of random access problems in the cell, available inputs, necessary and/or desirable outputs, need for retraining the model ([0113] “The information indicating an AI/ML capability of the UE may indicate whether or not the UE supports AI/ML. If the UE is capable of supporting AI/ML optimization, the information may also or instead indicate what type and/or level of complexity of AI/ML the UE is capable of supporting, e.g., which function/operation AI/ML can be supported, what kind of AI/ML algorithm can be supported (for example, autoencoder, reinforcement learning, neural network (NN), deep neural network (DNN)), how many layers of NN can be supported, etc. In some embodiments, the information indicating an AI/ML capability of the UE may also or instead include information indicating whether the UE can assist with training”.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shah, Lee and Yan with the over-the-air information exchange procedure for a re-training phase of machine learning components of Ma. The motivation to do so would have been to facilitate training (and re-training) of ML components of communicating devices (Ma; [0110]). Claim Rejections - 35 USC § 103 Claim 48 is rejected under 35 U.S.C. 103 as being unpatentable over Shah in view of Lee and Yan, further in view of Shi (US 20200275402 A1). 
Regarding claim 48, Shah teaches the method of claim 38, wherein the input parameters to the AI/ML predictive model include any of the following: cell- and beam-level link quality of the cell ([0098] “…a CSI (Channel State Information) report as well as the measurement report are generated by the UE and include information on the quality of UE channels, such as downlink channels with current serving gNB”); cell- and beam-level link quality of the one or more neighbor cells ([0098] “…and/or neighbor gNBs”); relations between beams of the cell and the neighbor cells; UE timing advance; UE location; UE precoding matrix indicator, PMI ([0108] “CSI report may consist of…precoding matrix indicator (PMI)”); strength or quality of uplink (UL) reference signals received from UEs; random access collisions reported by UEs. Shah does not teach one or more of the following UE-related information: model, class, type, manufacturer, receiver type, and number of antennas. However, Shi, in the same field of endeavor of wireless communications that use sensing systems, teaches one or more of the following UE-related information: model, class, type, manufacturer, receiver type, and number of antennas ([0145] “In the example table shown in FIG. 16, an entry may associate a device type (e.g., vehicle, IoT equipment or communication device, among others), global parameters (e.g., weather, time, date, temperature, or other environmental data), coordinate information (there may be multiple sets of 3D coordinate information for a mobile ED 110) and channel information (e.g., channel conditions on different bands, including non-mmWave band, for both UL and DL channels). The associated data may be used as input to a machine learning system 640 to output channel conditions”.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Shah, Lee and Yan with the machine learning system of Shi. 
The motivation to do so would have been so that the BS 170 may know predicted information about expected future movement and/or activity of the ED 110. This information may be useful for beam management (Shi; [0142]). Claim Rejections - 35 USC § 103 Claim 50 is rejected under 35 U.S.C. 103 as being unpatentable over Shah in view of Lee and Yan, further in view of Chai (US 20190223117 A1). Regarding claim 50, Shah teaches a power level for an initial transmission of a random-access preamble ([0080] “For example, the random access configuration parameters may include one or more of the following: [0081] a random access preamble sequence, transmitted with the random access message; [0082] time and frequency of the radio channel resources that are to be used by the UE when transmitting the random access preamble message to the gNB; [0083] an initial transmit power value, to be used by the UE when transmitting the initial random access preamble message to the gNB during a random access attempt”). Shah does not teach wherein the AI/ML predictive model includes, for each beam of one or more downlink (DL) beams of the cell, one or more relations between a measurement range of a reference signal of the beam and a corresponding power level. Lee, in the same field of endeavor of wireless communications, teaches the AI/ML predictive model ([0100] “The learning processor 130 may be trained by a model configured with an artificial neural network using learning data. In this case, the trained artificial neural network may be called a learning model. The learning model is used to deduce a result value of new input data not learning data. The deduced value may be used as a base for performing a given operation.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of determining random access configuration parameters of Shah with the AI/ML predictive model of Lee. 
The motivation to do so would have been to improve the performance of a task, in this case determining a random access configuration, through continuous experiences for the task (Lee; [0071]). Lee does not teach wherein the AI/ML predictive model includes, for each beam of one or more downlink (DL) beams of the cell, one or more relations between a measurement range of a reference signal of the beam and a corresponding power level. However, Chai, in the same field of mobile communications, teaches wherein the AI/ML predictive model includes, for each beam of one or more downlink (DL) beams of the cell, one or more relations between a measurement range of a reference signal of the beam and a corresponding power level ([0128] “… the configuration information needs to include a beam identifier or label, or an identifier or a label of a signal used for identifying a beam. If each beam in the beam set is used as a granularity, it indicates that a set of configuration parameters is configured for each beam in the beam set, and includes the uplink power control parameter and the parameter information corresponding to the condition of triggering a power headroom report. [0141] estimating, according to a reference signal power and a reference signal received power that correspond to each beam in the beam set, the downlink path loss corresponding to each beam in the beam set, where the reference signal power is configured by the base station, and the reference signal received power is measured based on a channel state information-reference signal corresponding to each beam in the beam set and/or based on a beam reference signal corresponding to each beam in the beam set”. Thus, Chai teaches a relationship between a measurement of a reference signal of each DL beam and a corresponding power level). 
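Purely as an illustration of the claim 50 limitation (per-beam relations between a reference-signal measurement range and a corresponding power level), a lookup of the kind the claim describes might look like the sketch below; the beam IDs, RSRP thresholds, and power values are invented for this example and appear in no cited reference:

```python
# Hypothetical per-beam table: for each DL beam, ordered (RSRP lower bound in
# dBm, initial preamble power in dBm) pairs. A weaker measured reference signal
# maps to a higher transmit power. All numbers are illustrative assumptions.
BEAM_POWER_TABLE = {
    0: [(-100.0, 20.0), (-90.0, 14.0), (-80.0, 8.0)],
    1: [(-100.0, 22.0), (-85.0, 12.0)],
}

def initial_preamble_power(beam_id: int, rsrp_dbm: float) -> float:
    """Return the power for the highest range whose lower bound the RSRP meets."""
    rows = BEAM_POWER_TABLE[beam_id]
    power = max(p for _, p in rows)  # below every range: fall back to max power
    for lower_bound, p in rows:
        if rsrp_dbm >= lower_bound:
            power = p
    return power
```

Each dictionary entry is one claimed "relation" for one DL beam: a measurement range of the beam's reference signal paired with a power level.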
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method of determining random access configuration parameters of Shah, Lee and Yan with the teachings of calculating a power headroom of a beam set of Chai. The motivation to do so would have been to enable an eNodeB to dynamically allocate an appropriate resource to a terminal (Chai; [0003]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kim (US 20190394805 A1) discloses a beam switch method for RACH preamble transmission/retransmission. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY SIXTO whose telephone number is (571)272-3295. The examiner can normally be reached Mon - Friday 9AM-5PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gary Mui can be reached at 571-270-1420. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /NANCY SIXTO/Examiner, Art Unit 2465 /GARY MUI/Supervisory Patent Examiner, Art Unit 2465

Prosecution Timeline

Oct 18, 2022
Application Filed
Sep 25, 2025
Non-Final Rejection — §103
Dec 30, 2025
Response Filed
Mar 19, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12457594
RAN APPLICATIONS FOR INTER-CELL INTERFERENCE MITIGATION FOR MASSIVE MIMO IN A RAN
2y 5m to grant Granted Oct 28, 2025
Patent 12363587
METHOD AND APPARATUS FOR DUPLICATE PDU DISCARDING FOR MULTI-PATH TRANSMISSION IN A WIRELESS COMMUNICATION SYSTEM
2y 5m to grant Granted Jul 15, 2025
Based on the 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
71%
Grant Probability
99%
With Interview (+40.0%)
2y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
