Prosecution Insights
Last updated: April 19, 2026
Application No. 18/266,006

PREDICTING RANDOM ACCESS PROCEDURE PERFORMANCE BASED ON AI/ML MODELS

Status: Final Rejection (§102, §103, §112)
Filed: Jun 08, 2023
Examiner: NGUYEN, CHUONG M
Art Unit: 2411
Tech Center: 2400 (Computer Networks)
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 2 (Final)

Grant Probability: 72% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 2m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 72% (330 granted / 457 resolved; +14.2% vs Tech Center average, above average)
Interview Lift: +19.3% on resolved cases with interview
Typical Timeline: 3y 2m average prosecution; 61 applications currently pending
Career History: 518 total applications across all art units

Statute-Specific Performance

§101: 2.6% (-37.4% vs TC avg)
§103: 65.0% (+25.0% vs TC avg)
§102: 9.2% (-30.8% vs TC avg)
§112: 15.7% (-24.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 457 resolved cases.
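As a sanity check, the headline examiner figures above can be reproduced from the raw counts. The following is a minimal sketch assuming the dashboard computes simple ratios; the tool's actual methodology is not stated on this page.

```python
# Reproduce the dashboard's headline statistics from the raw counts shown above.
# Assumption (not stated by the tool): allow rate = granted / resolved.

granted = 330    # from "330 granted / 457 resolved"
resolved = 457

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")   # 72.2%, displayed as 72%

# The "+14.2% vs TC avg" delta implies a Tech Center 2400 baseline of roughly:
tc_avg = allow_rate - 14.2
print(f"Implied TC average: {tc_avg:.1f}%")      # about 58.0%
```

The same arithmetic applies to the interview-lift figure, which compares allow rates across the subsets of resolved cases with and without an interview.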

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

a. Claims 1-17, 20, 26, and 37 are pending in the present application:
- claims 1, 7, 8, 20, 26, and 37 are amended
- claim 10 is canceled

b. This is a final action on the merits based on Applicant's claims submitted on 11/26/2025.

Response to Arguments

Regarding claims 1, 7, 10, and 20, previously objected to for informalities: claim 10 has been cancelled without prejudice and claims 1, 7, and 20 have been amended according to the examiner's recommendation; the previous objection has therefore been withdrawn.

Regarding claims 7, 8, and 10, previously rejected under 35 U.S.C. § 112(b): claim 10 has been cancelled without prejudice and claims 7 and 8 have been amended according to the examiner's recommendation; the previous rejection has therefore been withdrawn.

Regarding independent claims 1, 20, 26, and 37, previously rejected under 35 U.S.C. § 102(a)(2): Applicant's arguments (see "What Ottersten completely fails to disclose or suggest is 'the one or more RA parameters comprising an initial power level to be used by the WCD to transmit an uplink signal including an initial preamble transmission for the RA procedure, the initial power level comprising at least one of a per beam initial power level and a per cell initial power level; and a power ramping step per beam' as recited in amended independent Claims 1, 20, 26 and 37. Applicant respectfully submits that Ottersten was not cited as disclosing, and does not disclose these features." on pages 16-17, filed on 11/26/2025) with respect to Ottersten et al. (U.S. Patent Application Publication No. 2021/0345134, hereinafter "Ottersten") have been fully considered but are moot with respect to the limitations of "the one or more RA parameters comprising an initial power level to be used by the WCD to transmit an uplink signal including an initial preamble transmission for the RA procedure, the initial power level comprising at least one of a per beam initial power level and a per cell initial power level; and a power ramping step per beam". Said limitations are newly added to amended Claims 1, 20, 26, and 37 and have been addressed in the instant office action, as shown in the 35 U.S.C. § 103 rejection below, with newly identified prior art teachings from newly found references in combination with previously applied reference Ottersten, thus rendering Applicant's arguments moot.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-6, 8-9, 11-17, 20, 26, and 37 are rejected under 35 U.S.C.
103 as being unpatentable over Ottersten et al. US Pub 2021/0345134 (hereinafter "Ottersten"), in view of Dinan US Pub 2013/0259008 (hereinafter "Dinan"), and further in view of Decarreau et al. US Pub 2022/0217781 (hereinafter "Decarreau").

Regarding claim 1 (Currently Amended)

Ottersten discloses a computer implemented method ("FIG. 6 shows an example where the wireless device 120, 122, referred to as UE in FIG. 6, has limited ML capabilities and attaches to the radio network node 110, 111 and ML message exchange takes place." [0215]) performed by a Wireless Communication Device (WCD) (e.g. "UE 120, 122" in Fig. 6), the method comprising: receiving information (step "ML model trans" in Fig. 6; "Thus, the BS requests the device to start collecting training data [Training data collection request] for later processing in the BS." [0218]) from a network node (i.e. "base station" in Fig. 6), the information comprising: information about or that characterizes an Artificial Intelligence Machine Learning (AI/ML) model ("ML model" in Fig. 6) that enables the WCD to build the AI/ML model that outputs a set of output parameters ("Initially, the objective function may be more limited, e.g., relate to data rates, acceptable latencies, error rates. It may also comprise ML-related objectives, e.g., error function, training stopping criteria." [0217]) that represent whether a Random Access (RA) procedure to be performed by the WCD will be successful ("The prediction of the performance may for example be which modulation and coding scheme (MCS) to use (i.e. MCS being used in a random access procedure), which transmitter beam and receiver beam to use, and user traffic needs, just to give some examples. Thus, the determined prediction of performance may for example be efficient utilization of radio resources, or scheduling of users, or user movement patterns." [0101]) based on a set of input parameters ("train the machine learning model by using an input parameter relating to a performance of the at least one network node 110, 111, 120, 122, 130 in order to choose one or more operations relating to the performance of the at least one network node 110, 111, 120, 122, 130." [0118]); and adapting one or more RA parameters for the RA procedure based on the AI/ML model ("evaluate the machine learning model after performing the one or more operations relating to the performance of the at least one network node 110, 111, 120, 122, 130, and update the machine learning model based on the one or more operations. Furthermore, the wireless communications system 10 may train the machine learning model by using the received input parameter and a state relating to a network environment of the at least one network node 110, 111, 120, 122, 130 to choose one or more actions relating to the performance of the at least one network node 110, 111, 120, 122, 130." [0118]).

Ottersten does not specifically teach a per cell initial power level. In an analogous art, Dinan discloses a per cell initial power level ("a power ramping step value for each cell in a first plurality of cells having configured random access resources" [0066]; Fig. 10). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ottersten's method for handling of machine learning to include Dinan's transmission method, in which increased transmission power depends on a cell-specific power ramping step value (Dinan [Abstract]), in order to predict, adjust, and improve the overall random access procedure.

Ottersten and Dinan do not specifically teach the one or more RA parameters comprising an initial power level to be used by the WCD to transmit an uplink signal including an initial preamble transmission for the RA procedure, the initial power level comprising: at least one of a per beam initial power level; and a power ramping step per beam. In an analogous art, Decarreau discloses the one or more RA parameters comprising an initial power level (i.e. "the initial random access preamble power") to be used by the WCD to transmit an uplink signal including an initial preamble transmission for the RA procedure, the initial power level comprising: at least one of a per beam initial power level ("associated with beams to be used"; "According to an example embodiment, the RACH procedure involves several parameters which are given to the UE by the network (or BS). Such parameters include (among others) the RACH (or PRACH) configuration index (which may identify a RACH preamble format, a subframe number, a slot number, a starting symbol, etc., and thus specifies the available set of PRACH occasions), the initial random access preamble power, the power ramping step in case of RACH failure, the scaling factor for prioritized random access procedure, the random access (RACH) preamble index, the thresholds for selecting synchronization signal blocks (SSBs) and channel state information reference signals (CSI-RS), associated with beams to be used (e.g., which may be useful especially during beam failure recovery procedures) as well as a reference signal received power (RSRP) threshold to select between the Supplementary Uplink Carrier versus the Normal Uplink Carrier. Those example RACH parameters may be given to the UE through system information (SIB1)." [0047-0048]); and a power ramping step per beam ("the power ramping step in case of RACH failure, the scaling factor for prioritized random access procedure, the random access (RACH) preamble index, the thresholds for selecting synchronization signal blocks (SSBs) and channel state information reference signals (CSI-RS), associated with beams to be used (e.g., which may be useful especially during beam failure recovery procedures)" [0047]).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ottersten's method for handling of machine learning, as modified by Dinan, to include Decarreau's machine learning/AI methods using a variety of relevant inputs/parameters (Decarreau [0055]), in order to predict, adjust, and improve the overall random access procedure. Thus, a person of ordinary skill would have appreciated the ability to incorporate Decarreau's machine learning/AI methods into Ottersten's method for handling of machine learning, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 2

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 1. Ottersten further discloses performing the RA procedure based on the one or more adapted RA parameters ("The prediction of the performance may for example be which modulation and coding scheme (MCS) to use (i.e. MCS being used in a random access procedure), which transmitter beam and receiver beam to use, and user traffic needs, just to give some examples.
Thus, the determined prediction of performance may for example be efficient utilization of radio resources, or scheduling of users, or user movement patterns." [0101]; and furthermore "For example, the wireless communications system 10 may evaluate the machine learning model by determining a block error rate after performing a change of an MCS operation. The block error rate is a ratio of the number of erroneous blocks to the total number of blocks transmitted" [0112]).

Regarding claim 3

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 2. Ottersten further discloses providing feedback about the AI/ML model to the network node ("In some embodiments, the network node 110, 111, 120, 122, 130 evaluates the machine learning model after the performing of the one or more operations relating to the at least one network node 110, 111, 120, 122, 130" [0137]).

Regarding claim 4

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 3. Ottersten further discloses wherein the feedback comprises an output of the AI/ML model ("The prediction of the performance may for example be which modulation and coding scheme (MCS) to use, which transmitter beam and receiver beam to use, and user traffic needs, just to give some examples. Thus, the determined prediction of performance may for example be efficient utilization of radio resources, or scheduling of users, or user movement patterns." [0101]) and/or information that indicates an accuracy of the AI/ML model ("Reliable ML models are maintained during prediction by constantly updating them based on the accuracy of the prediction." [0244]).

Regarding claim 5

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 3, wherein providing the feedback about the AI/ML model to the network node comprises: Ottersten further discloses training the AI/ML model based at least in part on the RA procedure ("The BS then updates the ML model based on the received training data [ML model re-training]." [0221]) and the one or more adapted parameters to obtain an updated version of the AI/ML model ("As will be described in Action 305 below, the machine learning model may be updated based on the evaluation. For example, the wireless communications system 10 may evaluate the machine learning model by determining a block error rate after performing a change of an MCS operation. The block error rate is a ratio of the number of erroneous blocks to the total number of blocks transmitted." [0112]).

Regarding claim 6

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 5. Ottersten further discloses wherein providing the feedback about the AI/ML model to the network node further comprises: providing, to the network node: (a) the updated version of the AI/ML model ("The wireless device 120, 122 may then acquire the estimates and update the model before sending it back to the external node 201, the cloud node 202 or to one of the intermediate nodes 110, 111, 130." [0242]); (b) data descriptive of updates to the AI/ML model included in the updated version of the AI/ML model ("Refined and reinforcement learning may be used to continuously update the one or more machine learning models based on new inputs.
This provides flexibility if something in the network environment changes." [0058]); (c) the set of input parameters ("train the machine learning model based on one or more known input data and on one or more known output data relating to a result of an operation of the one network node 110, 111, 120, 122, 130 with the known input data." [0115]); or (d) instructions to perform the updates to the AI/ML model included in the updated version of the AI/ML model ("Refined and reinforcement learning may be used to continuously update the one or more machine learning models based on new inputs. This provides flexibility if something in the network environment changes." [0058]).

Regarding claim 8 (Currently Amended)

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 1, wherein the one or more output parameters of the AI/ML model comprises at least one of: Ottersten further discloses i) an estimated success or failure of the RA procedure given values for the set of input parameters ("When possible, depending on e.g. load and traffic in the communications network, the intermediate network nodes 110, 111, 130, may propagate information, e.g. data such as measurement data and information relating to the machine learning model, to the global network node 130, 201, 202. This makes the intelligent system robust against node failures, at which node failures the information otherwise may be lost." [0090]); ii) a success probability of the RA procedure given the values of the set of input parameters ("train the machine learning model based on one or more known input data and on one or more known output data relating to a result of an operation of the one network node 110, 111, 120, 122, 130 with the known input data. Each one of the one or more known output data may correspond to a respective one of the one or more known input data. This is done to train the machine learning model to perform correct or improved predictions of the performance of the network node 110, 111, 120, 122, 130." [0115]); iii) a failure probability of the RA procedure given the values of the set of input parameters ("Thus, if the prediction determined based on the machine learning model and the one or more operations performed based on the prediction do not achieve a desired result, the machine learning model may be updated." [0115]); iv) a probability of having a successful random access on a first random access attempt of the RA procedure ("The prediction of the performance may for example be which modulation and coding scheme (MCS) to use, which transmitter beam and receiver beam to use, and user traffic needs, just to give some examples. Thus, the determined prediction may for example be efficient utilization of radio resources, or scheduling of users, or user movement patterns." [0058]); v) a probability of having a successful random access in multiple attempts of the RA procedure ("the machine learning unit 300, evaluate the machine learning model after performing the one or more operations relating to the performance of the at least one network node 110, 111, 120, 122, 130, and update the machine learning model based on the one or more operations. Furthermore, the wireless communications system 10 may train the machine learning model by using the received input parameter and a state relating to a network environment of the at least one network node 110, 111, 120, 122, 130 to choose one or more actions relating to the performance of the at least one network node 110, 111, 120, 122, 130." [0118]).
Dinan further discloses ix) an actual Random Access Channel (RACH) transmission power to be used for the RA procedure ("a preamble transmission power P_PRACH may be determined; 4) a preamble sequence may be selected from the preamble sequence set using the preamble index; 5) a single preamble may be transmitted using selected preamble sequence(s) with transmission power P_PRACH on the indicated PRACH resource;" [0187]).

Regarding claim 9

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 1. Ottersten further discloses wherein the one or more output parameters are either per beam ("For example, the wireless communications system 10 may perform a change of transmit beam and/or receive beam, change of MCS selection operation based on the determined prediction of the performance of the at least one network node 110, 111, 120, 122, 130. This may for example be the case when the angle of arrival or the received signal strength changes." [0109]) or per cell ("Several different ML models may be provided for several different sites. Information gathered from the wireless device 120, 122, such as cell id and location information may be used to determine which site the wireless device 120, 122 currently occupies. Therefore, it is known which inputs to send to the correct ML learning model." [0231]).
Regarding claim 11

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 1. Ottersten further discloses wherein adapting the one or more RA parameters for the RA procedure based on the AI/ML model comprises: obtaining a first set of values for the set of input parameters based on a first set of values for the one or more RA parameters ("In Action 301, the wireless communications system 10 determines, by means of the machine learning unit 300 and a machine learning model relating to at least one network node 110, 111, 120, 122, 130 out of the one or more intermediate network nodes 110, 111, 130 or the one or more leaf network nodes 120, 122, a prediction of a performance of the at least one network node 110, 111, 120, 122, 130 based on input data relating to the at least one network node 110, 111, 120, 122, 130." [0102]); feeding the first set of values for the set of input parameters into the AI/ML model ("by means of the machine learning unit 300, using information relating to the performed one or more measurements as input data to the machine learning model in order to determine the prediction of the performance of the one network node 110, 111, 120, 122, 130" [0104]); obtaining a set of values for the set of output parameters output by the AI/ML model responsive to the first set of values for the set of input parameters ("the wireless communications system 10 may, by means of the machine learning unit 300, train the machine learning model based on one or more known input data and on one or more known output data relating to a result of an operation of the one network node 110, 111, 120, 122, 130 with the known input data. Each one of the one or more known output data may correspond to a respective one of the one or more known input data." [0115]); determining whether adaptation of at least one of the one or more RA parameters is needed based on the set of values for the set of output parameters output by the AI/ML model ("This is done to train the machine learning model to perform correct or improved predictions of the performance of the network node 110, 111, 120, 122, 130." [0115]); and upon determining that adaptation is needed, changing at least one of the first set of values for the one or more RA parameters ("the wireless communications system 10, e.g. by means of the machine learning unit 300, trains the machine learning model by adjusting weighting coefficients and biases for one or more of the artificial neurons until the known output data is given as an output from the machine learning model when the corresponding known input data is given as an input to the machine learning model." [0117]) to provide a second set of values for the one or more RA parameters ("Refined and reinforcement learning may be used to continuously update the one or more machine learning models based on new inputs. This provides flexibility if something in the network environment changes." [0058]).

Regarding claim 12

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 11, wherein adapting the one or more RA parameters for the RA procedure based on the AI/ML model further comprises: Ottersten further discloses obtaining a second set of values for the set of input parameters based on the second set of values for the one or more RA parameters ("the wireless communications system 10, e.g. by means of the machine learning unit 300, trains the machine learning model by adjusting weighting coefficients and biases for one or more of the artificial neurons until the known output data is given as an output from the machine learning model when the corresponding known input data is given as an input to the machine learning model." [0117]); feeding the second set of values for the set of input parameters into the AI/ML model ("Refined and reinforcement learning may be used to continuously update the one or more machine learning models based on new inputs. This provides flexibility if something in the network environment changes." [0058]); obtaining a second set of values for the set of output parameters output by the AI/ML model responsive to the second set of values for the set of input parameters ("until the known output data is given as an output from the machine learning model when the corresponding known input data is given as an input to the machine learning model." [0117]); determining whether adaptation of at least one of the one or more RA parameters is needed based on the second set of values for the set of output parameters output by the AI/ML model ("Refined and reinforcement learning may be used to continuously update the one or more machine learning models based on new inputs. This provides flexibility if something in the network environment changes." [0058]); and upon determining that adaptation is needed, changing at least one of the second set of values for the one or more RA parameters to provide a third set of values for the one or more RA parameters ("For example, the network node 110, 111, 120, 122, 130 may update the parameters of the machine learning model, e.g. update one or more weights in a neural network, after evaluating that the MCS selection is too conservative and thus not fully utilizes the channel." [0143]).
Regarding claim 13

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 1. Ottersten further discloses further comprising receiving, from the network node, information that defines a validity area for the AI/ML model ("For example, the location of the wireless device may be used to determine which of the machine learning models to use for the relevant predictions." [0058]), wherein adapting the one or more RA parameters for the RA procedure based on the AI/ML model comprises adapting the one or more RA parameters for the RA procedure based on the AI/ML model while the WCD is within the validity area defined for the AI/ML model ("For example, the one or more measurements performed by the at least one network node 110, 111, 120, 122, 133 may be measurement of received signal strength, noise levels, angle of arrival, location and/or orientation. Thus, the information relating to the performed one or more measurements may be measurement data relating to measurements of received signal strength, noise levels, angle of arrival, location and/or orientation." [0105]).

Regarding claim 14

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 1. Ottersten further discloses further comprising sending (step "ML cap response" in Fig. 6), to the network node (i.e. "base station" in Fig. 6), information that indicates a capability of the WCD to execute the AI/ML model ("First, the device, e.g. the wireless device 120, 122, attaches to the radio network node 110, 111, referred to as BS in FIG. 6, [connection]. This may either be through the existing protocols or included in the presented protocol by addition of signalling messages and/or signalling capabilities. If the attachment procedure is a part of the Intelligent RAN protocol, the ML capabilities may be signalled in the attachment procedures, similar to 3GPP UE category signalling [3GPP TS 36.310 and 3GPP TS 36.331]. If the attachment procedure is not included, then a separate message exchange may take place to determine the wireless device's/UE's ML capabilities. The BS queries the UE/device about its ML capabilities [ML capability query] and the UE/device responds [ML capability response]." [0216]).

Regarding claim 15

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 1. Ottersten further discloses wherein the AI/ML model is previously trained based at least in part on previously obtained WCD capability information ("When the ML capabilities have been determined, the BS, e.g. the radio network node 110, 111, queries the UE, e.g. the wireless device 120, 122, for its objective function(s) [Objective function query]. In the mature intelligent RAN, this objective function may be quite complex and describe a multi-faceted desire and/or purpose of the user/device. Initially, the objective function may be more limited, e.g., relate to data rates, acceptable latencies, error rates. It may also comprise ML-related objectives, e.g., error function, training stopping criteria. The UE responds with its objective [Objective function response]. This may include transmitting the UE's digital twin if this is not already available on the network side, e.g. at the BS." [0217]).

Regarding claim 16

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 1, further comprising: Ottersten further discloses receiving, from the network node (i.e. "BS"), instructions to execute the AI/ML model using a certain configuration to obtain an additional set of output parameters ("When the ML capabilities have been determined, the BS, e.g. the radio network node 110, 111, queries the UE, e.g. the wireless device 120, 122, for its objective function(s) [Objective function query]… Initially, the objective function may be more limited, e.g., relate to data rates, acceptable latencies, error rates. It may also comprise ML-related objectives, e.g., error function, training stopping criteria." [0217]); executing the AI/ML model using the certain configuration to obtain the additional set of output parameters ("The BS, e.g. the radio network node 110, 111, then transmits a ML model suitable for the device's objective function and capabilities [ML model transmission]." [0219]; see Fig. 6); and providing, to the network node, the additional set of output parameters ("After some period of time, the wireless device has collected a suitable amount of training data, and this is transmitted to the BS [Training data transmission]. The BS then updates the ML model based on the received training data [ML model re-training]. After the refinement of the ML model(s), the BS transmits the updated model to the device [ML model transmission] and to nodes concerned with clustered/global models related to the current device type and objective function(s) [ML model transmission]. When the global model(s) has been refined, then the updated global model is distributed [Global ML model update]. If relevant, the global model may be sent to the wireless device 120, 122 (not shown in FIG. 6)." [0220-0221]).

Regarding claim 17

Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 1, wherein receiving the information from the network node comprises: Ottersten further discloses receiving the information about or that characterizes the AI/ML model that enables the WCD to build the AI/ML model that outputs the set of output parameters that represent whether the RA procedure to be performed by the WCD will be successful based on the set of input parameters ("The BS, e.g. the radio network node 110, 111, then transmits a ML model suitable for the device's objective function and capabilities [ML model transmission]." [0219]; see Fig.
6); and building the AI/ML model based at least in part on the information (“After some period of time, the wireless device has collected a suitable amount of training data, and this is transmitted to the BS [Training data transmission]. The BS then updates the ML model based on the received training data [ML model re-training].” [0220-0221]). Regarding claim 20 (Currently Amended) Ottersten discloses A Wireless Communication Device (WCD) (e.g. “UE 120, 122” In Fig. 6), comprising: one or more transmitters (“transmitting unit 412” [0181]); one or more receivers (“receiving unit 411” [0181]); and processing circuitry (“processor 419” [0179]) associated with the one or more transmitters and the one or more receivers, the processing circuitry configured to cause the WCD to: receive information from a network node, the information comprising: information about or that characterizes an Artificial Intelligence Machine Learning (AI/ML) model that enables the WCD to build the AI/ML model that outputs a set of output parameters that represent whether a Random Access (RA) procedure to be performed by the WCD will be successful based on a set of input parameters; and adapt one or more RA parameters for the RA procedure based on the AI/ML model, the one or more RA parameters comprising an initial power level to be used by the WCD to transmit an uplink signal including an initial preamble transmission for the RA procedure, the initial power level comprising: at least one of a per beam initial power level and a per cell initial power level; and a power ramping step per beam. The scope and subject matter of apparatus claim 20 is drawn to the apparatus of using the corresponding method claimed in claim 1. Therefore apparatus claim 20 corresponds to method claim 1 and is rejected for the same reasons of anticipation as used in claim 1 rejection above. 
Regarding claim 26 (Currently Amended), Ottersten discloses a computer implemented method performed by a network node, the method comprising: obtaining an Artificial Intelligence Machine Learning (AI/ML) model that outputs a set of output parameters that represent whether a Random Access (RA) procedure to be performed by a Wireless Communication Device (WCD) will be successful based on a set of input parameters, the WCD using the AI/ML model to determine one or more RA parameters, the one or more RA parameters comprising an initial power level to be used by the WCD to transmit an uplink signal including an initial preamble transmission for the RA procedure, the initial power level comprising: at least one of a per beam initial power level and a per cell initial power level; and a power ramping step per beam; and sending information to another node, the information comprising: information about or that characterizes the AI/ML model. The scope and subject matter of method claim 26 are reciprocal to the scope and subject matter as claimed in method claim 1. Therefore, method claim 26 corresponds to method claim 1 and is rejected for the same reasons of anticipation as used in the claim 1 rejection above.

Regarding claim 37 (Currently Amended), Ottersten discloses a network node (i.e. “base station” in Fig. 6), comprising: one or more transmitters (“transmitting unit 412” [0181]); one or more receivers (“receiving unit 411” [0181]); and processing circuitry (“processor 419” [0179]), associated with the one or more transmitters and the one or more receivers, the processing circuitry configured to cause the network node to: obtain an Artificial Intelligence Machine Learning (AI/ML) model that outputs a set of output parameters that represent whether a Random Access (RA) procedure to be performed by a Wireless Communication Device (WCD) will be successful based on a set of input parameters, the WCD using the AI/ML model to determine one or more RA parameters, the one or more RA parameters comprising an initial power level to be used by the WCD to transmit an uplink signal including an initial preamble transmission for the RA procedure, the initial power level comprising: at least one of a per beam initial power level and a per cell initial power level; and a power ramping step per beam; and send information to another node, the information comprising: information about or that characterizes the AI/ML model. The scope and subject matter of apparatus claim 37 are reciprocal to the scope and subject matter as claimed in apparatus claim 20. Therefore, apparatus claim 37 corresponds to apparatus claim 20 and is rejected for the same reasons of anticipation as used in the claim 20 rejection above.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Ottersten, in view of Dinan and Decarreau, and further in view of Lei, US Pub 2023/0292369, claiming foreign application priority to 2020-08-05 (hereinafter “Lei”), and of Lee et al., US Pub 2021/0153061 (hereinafter “Lee”).
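For context on the power-ramping limitation recited in the amended independent claims, NR random access raises the preamble target power by a configured step on each failed attempt, capped at the device's maximum power (the ramping rule of 3GPP TS 38.321). A per-beam sketch in which the beam table, dBm values, and function name are assumptions for illustration only:

```python
# Hypothetical per-beam RA power configuration; the beam indices and dBm
# values are illustrative, not taken from the application or the references.
beam_config = {
    0: {"initial_dbm": -10.0, "step_db": 2.0},
    1: {"initial_dbm": -6.0, "step_db": 4.0},
}

def preamble_target_power(beam: int, attempt: int, p_max_dbm: float = 23.0) -> float:
    """Preamble target power (dBm) for a given beam and 1-based attempt.

    Mirrors the NR ramping rule: each retry raises the target by one
    per-beam ramping step, capped at the device's maximum power.
    """
    cfg = beam_config[beam]
    return min(cfg["initial_dbm"] + (attempt - 1) * cfg["step_db"], p_max_dbm)
```

A per-cell initial power level, the other alternative the claims recite, would simply replace the per-beam table with a single cell-wide entry.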
Regarding claim 7 (Currently Amended), Ottersten, as modified by Dinan and Decarreau, previously discloses the method of claim 1. Ottersten further discloses wherein the set of input parameters of the AI/ML model comprises at least one of: b) a cell Identifier (ID) of the cell on which the RA procedure is to be performed (“Information gathered from the wireless device 120, 122, such as cell id and location information may be used to determine which site the wireless device 120, 122 currently occupies.” [0231]); e) a set of the beams to be used to perform the RA procedure (“The prediction of the performance may for example be which modulation and coding scheme (MCS) to use, which transmitter beam and receiver beam to use” [0058]); f) cell and/or beam level measurements of a serving cell of the WCD (“The input data may comprise one or more input parameters such as received signal strength, angle of arrival, measured or estimated UE speed, target block or bit error rates, just to give some examples.” [0103]); g) cell and/or beam level measurements of the cell in which the RA procedure is to be performed (“the network node 110, 111, 120, 122, 130 may perform a change of a beam-steering operation or change where to execute a network function based on the determined prediction.” [0135]); j) measurement of uplink resources used by the WCD (“Said one or more communication devices may provide uplink beams, respectively, e.g. the wireless device 120 may provide an uplink beam 117 for communication with the wireless communication network 100.” [0066]); m) location information for the WCD (“Information gathered from the wireless device 120, 122, such as cell id and location information may be used to determine which site the wireless device 120, 122 currently occupies.” [0231]); n) absolute time information or relative time information for the WCD (“The UE measurements I, comprising measurements from e.g. UE.sub.1, UE.sub.2, . . . UE.sub.M, may be split into relevant subsections to prepare the inputs, for a point of time t, e.g. input data I.sub.1, I.sub.2, I.sub.3, for their respective ML model, e.g. ML.sub.1, ML.sub.2, . . . ML.sub.N. The system is to be run normally to acquire the target data (output) for a point of time t+1. The target data is the data desired to predict, e.g. p.sub.1, p.sub.2, p.sub.3, at the point of time t+1.” [0231]); q) any or all possible RACH configuration parameters in different Radio Access Technologies, RATs, that are available (“Each beam may be associated with a particular Radio Access Technology (RAT).” [0067]); or s) a combination of any two or more of (a)-(r).

Decarreau further discloses an AI/ML method (“According to an example embodiment, the RACH optimization (or random access procedure improvement) procedure (or process) may be implemented via a machine learning (ML) algorithm.” [0055]) wherein the set of input parameters of the AI/ML model comprises: h) a Random Access Channel (RACH) transmission power level to be used for the RA procedure (“Some example RACH parameters that may be set or adjusted via a RACH optimization process may include, e.g., a RACH configuration (resource unit allocation), a RACH preamble split (among dedicated, group A, group B), a RACH backoff parameter value, RACH transmission power control parameters, and/or other RACH related parameters.” [0048]); i) cell and/or beam level measurements of one or more inter-frequency neighboring cells and/or one or more intra-frequency neighboring cells of the WCD (“The parameters 1, 2 and 3 can be used by the trigger type—specific ML submodules to estimate cn(k)(t) (it gives background information on the RACH allocations of other BSs which can e.g., help the algorithm compute the inter-RACH interference). A RACH optimization coordinator may receive RACH information and RACH trigger-specific costs reported by one or more neighbor BSs. Thus, for example, a BS or RACH optimization coordinator may also adjust RACH parameters and/or resource allocation to assist (e.g., decrease cost of) one or more RACH trigger types of a neighbor cell, e.g., to decrease conflicts or interference from current cell/BS to one or more high cost RACH trigger types reported by a neighboring cell.” [0109]); k) interference measurement(s) performed by a radio unit of a serving cell of the WCD and/or by a radio unit of one or more neighboring cells of the WCD (“The parameters 1, 2 and 3 can be used by the trigger type—specific ML submodules to estimate cn(k)(t) (it gives background information on the RACH allocations of other BSs which can e.g., help the algorithm compute the inter-RACH interference). A RACH optimization coordinator may receive RACH information and RACH trigger-specific costs reported by one or more neighbor BSs. Thus, for example, a BS or RACH optimization coordinator may also adjust RACH parameters and/or resource allocation to assist (e.g., decrease cost of) one or more RACH trigger types of a neighbor cell, e.g., to decrease conflicts or interference from current cell/BS to one or more high cost RACH trigger types reported by a neighboring cell.” [0109]); p) a power ramping value associated with the WCD (“Some example RACH parameters that may be set or adjusted via a RACH optimization process may include, e.g., a RACH configuration (resource unit allocation), a RACH preamble split (among dedicated, group A, group B), a RACH backoff parameter value, RACH transmission power control parameters, and/or other RACH related parameters.” [0048]).

Ottersten, Dinan, and Decarreau do not specifically teach items a), c), d), and r).
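The input parameters enumerated in items a) through s) above amount to a feature vector consumed by the AI/ML model. A toy stand-in, with hypothetical feature names and a simple rule-based score in place of a trained model:

```python
def ra_success_probability(features: dict) -> float:
    """Toy predictor: stronger serving-cell RSRP raises the estimated RA
    success probability; interference above -100 dBm lowers it. Purely
    illustrative; a real model would be trained on inputs like items (a)-(r)."""
    rsrp = features["serving_rsrp_dbm"]          # cf. item f)
    interference = features["interference_dbm"]  # cf. item k)
    # Map RSRP in [-120, -70] dBm onto [0, 1]; penalize strong interference.
    score = (rsrp + 120.0) / 50.0 - max(0.0, (interference + 100.0) / 50.0)
    return min(1.0, max(0.0, score))

# Hypothetical feature vector echoing some of the enumerated inputs.
sample = {
    "cell_id": 42,              # cf. item b)
    "beam_ids": [3, 7],         # cf. item e)
    "serving_rsrp_dbm": -95.0,  # cf. item f)
    "interference_dbm": -110.0, # cf. item k)
}
```

The claimed WCD would then adapt its RA parameters (initial power level, ramping step) when such a score predicts a likely failure.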
In an analogous art, Lei discloses an AI/ML method (“Embodiments of the present disclosure enable a base station to learn relevant parameters of an AI network model at the UE.” [0005]) wherein the set of input parameters of the AI/ML model comprises: a) a frequency of a cell on which the RA procedure is to be performed (“In some embodiments, S102 may include: reporting the capability of supporting the AI network model using a time-frequency resource for transmitting a preamble.” [0033]); r) a RACH report from the WCD (“That is, the UE may report the capability of supporting the AI network model through Msg1 or Msg3.” [0031]).

In an analogous art, Lee discloses an AI/ML method (“The input part 1220 can acquire input data to be used when acquiring an output using learning data and a learning model for model learning.” [0156]) wherein the set of input parameters of the AI/ML model comprises: c) a Tracking Area Code (TAC) of the cell on which the RA procedure is to be performed (Lee: “The UE shall have been allocated an identifier (ID) which uniquely identifies the UE in a tracking area.” [0086]); d) a Public Land Mobile Network (PLMN) ID of a PLMN of the cell on which the RA procedure is to be performed (Lee: “A predetermined operation may be performed according to the RRC state. In RRC_IDLE, public land mobile network (PLMN) selection, broadcast of system information (SI), cell re-selection mobility, core network (CN) paging and discontinuous reception (DRX) configured by NAS may be performed. The UE shall have been allocated an identifier (ID) which uniquely identifies the UE in a tracking area.” [0086]).

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ottersten’s method for handling of machine learning, as modified by Dinan and Decarreau, to include the Lei/Lee machine learning/AI methods using a variety of relevant inputs/parameters, in order to predict, adjust, and improve the overall random access procedure. Thus, a person of ordinary skill would have appreciated the ability to incorporate the Lei/Lee machine learning/AI methods into Ottersten’s method for handling of machine learning, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHUONG M NGUYEN, whose telephone number is (571) 272-8184. The examiner can normally be reached M-F, 10:00am - 6:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Derrick Ferris, can be reached at 571-272-3123. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHUONG M NGUYEN/
Primary Examiner, Art Unit 2411

Prosecution Timeline

Jun 08, 2023
Application Filed
Aug 25, 2025
Non-Final Rejection — §102, §103, §112
Nov 26, 2025
Response Filed
Jan 13, 2026
Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598653
METHOD FOR NODE USED FOR WIRELESS COMMUNICATION AND APPARATUS
2y 5m to grant Granted Apr 07, 2026
Patent 12587820
FREQUENCY RANGE 2 (FR2) NON-STANDALONE SIDELINK DISCOVERY
2y 5m to grant Granted Mar 24, 2026
Patent 12587920
DETECTING PHYSICAL CELL IDENTIFIER (PCI) CONFUSION DURING SECONDARY NODE (SN) CHANGE PROCEDURE IN WIRELESS NETWORKS
2y 5m to grant Granted Mar 24, 2026
Patent 12581480
USER EQUIPMENTS, BASE STATIONS AND METHODS FOR UPLINK TRANSMISSION IN INTERRUPTED TRANSMISSION INDICATION
2y 5m to grant Granted Mar 17, 2026
Patent 12538248
Expiry of Time Alignment Timer
2y 5m to grant Granted Jan 27, 2026
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
92%
With Interview (+19.3%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 457 resolved cases by this examiner. Grant probability derived from career allow rate.
