Prosecution Insights
Last updated: April 19, 2026
Application No. 18/292,141

Configuring a Radio Access Node to Use One or More Radio Access Network Functions

Non-Final OA: §102, §103
Filed: Jan 25, 2024
Examiner: IQBAL, KHAWAR
Art Unit: 2643
Tech Center: 2600 — Communications
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
OA Rounds: 1-2
To Grant: 3y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73%, above average (466 granted / 639 resolved; +10.9% vs TC avg)
Interview Lift: +28.8% (resolved cases with vs. without an interview)
Typical Timeline: 3y 6m avg prosecution; 34 applications currently pending
Career History: 673 total applications across all art units

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 52.9% (+12.9% vs TC avg)
§102: 30.8% (-9.2% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 639 resolved cases

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 28-32, 34-41 and 42-50 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by PENG et al. (US 2020/0195506).

Regarding claim 28, PENG et al. discloses a method of configuring a Radio Access Network (RAN, fig. 3) node to use one or more RAN functions (abstract, fig. 3), the method comprising: obtaining input data for the RAN node (¶ 0066, with reference to figure 1, "a central computing logic module receives reported data which may include: measurement report data from user terminals, wireless transmission data from base stations, and operation and maintenance data from a radio access network"), the input data comprising configuration information and performance information for the RAN node (¶ 0070, the operation and maintenance data from the radio access network may include, but is not limited to, service attributes, user mobility and social relationship attributes, service and user relationship attributes, and other historical data stored in the core network and related to the user terminals and the radio access network); based on the input data, one or more target performance indicators and a constraint on one or more resources for the RAN node, using an optimization process (¶ 0073, with reference to step 120 in figure 1, Based on the reported data obtained during a cycle T1 and a proper machine learning algorithm, the central computing logic module configures an operating mode of the radio access network that matches the user behavior information, the service attributes, and the radio access network performance indicators) to select one or more RAN functions from a plurality of RAN functions for activation by the RAN node (¶ 0082, the operating mode may include one of the following, but is not limited to: a wide-area seamless coverage mode, a hot spot high capacity mode, a massive-connection low power mode, and a low-latency high-reliability mode), the optimization process being configured to select the one or more RAN functions using one or more models for predicting, based on the input data and a selection of RAN functions, usage of the one or more resources and a value of the one or more target performance indicators (¶ 0115-0121, each operating mode is associated with different aspects to be optimized, i.e. different RAN functions are activated depending on the mode); and configuring the RAN node to use the one or more selected RAN functions (¶ 0149, "Step 37: When the timing reaches the cycle T2 (T2=T3*M), the edge communication entity enters the cycle T2 trigger mode and performs the configuration optimization of edge communication entity according to data related to user terminals and access nodes covered by the edge communication entity.", as also implied by ¶ 0116, 0118, 0120 and 0121).

Regarding claim 29, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein the one or more models are developed using training data for a plurality of RAN nodes, the training data comprising configuration information, performance information and RAN function activation information for the plurality of RAN nodes (¶ 0099-0100 and 0125, using training data for a plurality of RAN nodes).
Regarding claim 30, PENG et al. discloses the limitations of claims 28-29; further, PENG et al. discloses wherein the one or more models are developed by inputting the training data into a machine learning process (¶ 0068, 0099, the machine learning algorithms may include, but are not limited to, deep reinforcement learning algorithms. The embodiments of the present disclosure are described by taking a deep reinforcement learning algorithm as an example. The machine learning algorithms may include representation learning and deep reinforcement learning algorithms, but are not limited thereto, and other machine learning algorithms like multi-task learning algorithms, migration learning algorithms, and deep unsupervised learning algorithms may also be adopted. Specific deep reinforcement learning algorithms may be selected according to networking requirements and available data).

Regarding claim 31, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein the machine-learning process comprises a Bayesian learning process and wherein one or more priors for the Bayesian learning process are based on domain knowledge (¶ 0121, when the operating mode is the low-latency high-reliability mode, the edge computing logic module utilizes a deep Bayesian learning algorithm to optimize the edge communication entities. According to historical information on the access nodes of user terminals, the future access node selection of a user terminal is predicted, and mobility-related parameters of the access node are optimized. Moreover, the deep reinforcement learning algorithm is also adopted here to perform distributed resource allocation. The state of the deep reinforcement learning algorithm herein may include, but is not limited to, the delay and remaining data of the user terminals, the interference distribution of the access nodes, and the link information between the access nodes. The action of the deep reinforcement learning algorithm may refer to the transmission power of the access nodes; the reward function of the deep reinforcement learning algorithm may refer to the weighted sum of the average throughput and delay; the deep reinforcement learning algorithm can realize distributed transmission power optimization with lower computational complexity).

Regarding claim 32, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein the one or more models comprise a first model for predicting the usage of the one or more resources and a second model for predicting a value of the one or more target performance indicators (¶ 0121, when the operating mode is the low-latency high-reliability mode, the edge computing logic module utilizes a deep Bayesian learning algorithm to optimize the edge communication entities. According to historical information on the access nodes of user terminals, the future access node selection of a user terminal is predicted, and mobility-related parameters of the access node are optimized. Moreover, the deep reinforcement learning algorithm is also adopted here to perform distributed resource allocation. The state of the deep reinforcement learning algorithm herein may include, but is not limited to, the delay and remaining data of the user terminals, the interference distribution of the access nodes, and the link information between the access nodes. The action of the deep reinforcement learning algorithm may refer to the transmission power of the access nodes; the reward function of the deep reinforcement learning algorithm may refer to the weighted sum of the average throughput and delay; the deep reinforcement learning algorithm can realize distributed transmission power optimization with lower computational complexity).
Regarding claim 34, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein the optimization process is configured to select the one or more RAN functions that are predicted to cause the constraint on the one or more resources to be satisfied when activated at the RAN node (¶ 0115-0121, each operating mode is associated with different aspects to be optimized, i.e. different RAN functions are activated depending on the mode).

Regarding claim 35, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein the one or more target performance indicators comprise a first performance indicator and the optimization process is configured to select the one or more RAN functions that are predicted to maximise or minimize the first performance indicator when activated at the RAN node (see above; ¶ 0140, the edge computing logic module enters the trigger state to optimize the resource allocation of the edge communication entity. Here, taking the deep reinforcement learning as an example, referring to FIG. 7, the edge computing logic module performs actions according to the rewards brought by different actions in the current state. The choice according to the resource allocation strategy obtained by deep reinforcement learning maximizes the benefits in a continuous time; and ¶ 0108, minimize the variation of target performance).

Regarding claim 36, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein the one or more target performance indicators comprise a second performance indicator and the optimization process is configured to select the one or more RAN functions that are predicted to cause the second performance indicator to satisfy a constraint on the second performance indicator when activated at the RAN node (¶ 0007-0008, 0042-0045, 0070, etc., performance indicators).
Regarding claim 37, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses further comprising: in response to determining that a number of selected RAN functions exceeds a threshold value, configuring the RAN node to use only a subset of the selected RAN functions (¶ 0106-0109, 0116, the preset threshold may be a fluctuation range value adapted to the actual situation, which is obtained by using deep reinforcement learning algorithms. If the obtained quality of service and the number of active user terminals do not exceed the preset thresholds, it is indicated that the network fluctuation is not obvious, the radio access network state is stable during the preset time period, the service demands do not change much, and the quality of service and the number of requests from user terminals are satisfied. Hence, the current configuration of the edge communication entity meets the networking aim).

Regarding claim 38, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein the optimization process is further configured to select the one or more RAN functions for activation at the RAN node based on any RAN functions currently active at the RAN node (¶ 0070, 0099, 0115-0121, each operating mode is associated with different aspects to be optimized, i.e. different RAN functions are activated depending on the mode).

Regarding claim 39, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein the method is repeated at predetermined time intervals (¶ 0019, 0113, if the configuration optimization performed by the edge computing logic module fails to meet the networking aim, there is a need to re-configure the operating mode to realize the networking aim. If it meets, the above steps 1 to 3 are repeated for a next cycle T2).
Regarding claim 40, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein the RAN node is in a Distributed RAN (¶ 0115-0121, each operating mode is associated with different aspects to be optimized, i.e. different RAN functions are activated depending on the mode).

Regarding claim 41, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein configuring the RAN node to use the selected RAN functions comprising configuring a RAN Intelligent Controller (RIC) in the RAN node to use the selected RAN functions (¶ 0036, 0115-0121, each operating mode is associated with different aspects to be optimized, i.e. different RAN functions are activated depending on the mode).

Regarding claim 43, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein configuring the RAN node to use the selected RAN functions comprises sending an indication of the selected RAN functions to the RAN node (¶ 0036, 0115-0121, 0127, each operating mode is associated with different aspects to be optimized, i.e. different RAN functions are activated depending on the mode).

Regarding claim 44, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein configuring the RAN node to use the selected RAN functions further comprises instructing the RAN node to request at least one of the selected RAN functions from a RAN function repository (¶ 0036, 0115-0121, 0127, each operating mode is associated with different aspects to be optimized, i.e. different RAN functions are activated depending on the mode).
Regarding claim 45, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein the method is performed by the RAN node and configuring the RAN node to use the selected RAN functions comprises one or more of the following: activating one or more first RAN functions of the selected RAN functions; and deactivating one or more second RAN functions of the plurality of RAN functions (¶ 0036, 0115-0121, 0127, each operating mode is associated with different aspects to be optimized, i.e. different RAN functions are activated depending on the mode).

Regarding claim 46, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses further comprising: requesting, from a RAN function repository, at least one of the one or more first RAN functions (¶ 0036, 0115-0121, 0127, each operating mode is associated with different aspects to be optimized, i.e. different RAN functions are activated depending on the mode).

Regarding claim 47, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses further comprising: determining the one or more first RAN functions and/or the one or more second RAN functions based on a comparison of the selected RAN functions with any RAN functions currently active at the RAN node (¶ 0036, 0115-0121, 0127, each operating mode is associated with different aspects to be optimized, i.e. different RAN functions are activated depending on the mode).

Regarding claim 48, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses wherein each of the plurality of RAN functions comprises one or more executable software programs (¶ 0190, the present disclosure provides a computer readable storage medium, in which the storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the artificial intelligence-based networking method are implemented. Embodiments of the present disclosure provide a computer program that, when run on a computer, causes the computer to perform the steps of the above-described artificial intelligence-based networking method).

Regarding claim 49, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses a non-transitory computer-readable medium comprising, stored thereupon, a computer program comprising instructions configured so that, when executed on at least one processor, the instructions cause the at least one processor to carry out the method of claim 28 (see claim 28; further, ¶ 0190, the present disclosure provides a computer readable storage medium, in which the storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the artificial intelligence-based networking method are implemented. Embodiments of the present disclosure provide a computer program that, when run on a computer, causes the computer to perform the steps of the above-described artificial intelligence-based networking method).

Regarding claim 50, PENG et al. discloses the limitations of claim 28; further, PENG et al. discloses an apparatus for configuring a Radio Access Network (RAN) node to use one or more RAN functions, the apparatus comprising processing circuitry and a machine-readable medium, the machine-readable medium containing instructions executable by the processing circuitry such that the apparatus is operable to (¶ 0190, the present disclosure provides a computer readable storage medium, in which the storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the artificial intelligence-based networking method are implemented. Embodiments of the present disclosure provide a computer program that, when run on a computer, causes the computer to perform the steps of the above-described artificial intelligence-based networking method): obtain input data for the RAN node (¶ 0066, with reference to figure 1, "a central computing logic module receives reported data which may include: measurement report data from user terminals, wireless transmission data from base stations, and operation and maintenance data from a radio access network"), the input data comprising configuration information and performance information for the RAN node (¶ 0070, the operation and maintenance data from the radio access network may include, but is not limited to, service attributes, user mobility and social relationship attributes, service and user relationship attributes, and other historical data stored in the core network and related to the user terminals and the radio access network); based on the input data, one or more target performance indicators and a constraint on one or more resources for the RAN node, use an optimization process (¶ 0073, with reference to step 120 in figure 1, Based on the reported data obtained during a cycle T1 and a proper machine learning algorithm, the central computing logic module configures an operating mode of the radio access network that matches the user behavior information, the service attributes, and the radio access network performance indicators) to select one or more RAN functions from a plurality of RAN functions for activation by the RAN node (¶ 0082, the operating mode may include one of the following, but is not limited to: a wide-area seamless coverage mode, a hot spot high capacity mode, a massive-connection low power mode, and a low-latency high-reliability mode), the optimization process being configured to select the one or more RAN functions using one or more models for predicting, based on the input data and a selection of RAN functions, usage of the one or more resources and a value of the one or more target performance indicators (¶ 0115-0121, each operating mode is associated with different aspects to be optimized, i.e. different RAN functions are activated depending on the mode); and configure the RAN node to use the one or more selected RAN functions (¶ 0149, "Step 37: When the timing reaches the cycle T2 (T2=T3*M), the edge communication entity enters the cycle T2 trigger mode and performs the configuration optimization of edge communication entity according to data related to user terminals and access nodes covered by the edge communication entity.", as also implied by ¶ 0116, 0118, 0120 and 0121).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 33 and 42 are rejected under 35 U.S.C. 103 as being unpatentable over PENG et al. (US 2020/0195506) in view of GUTIERREZ-ESTEVEZ DAVID M ET AL (XP011752556) [cited by applicant].
Regarding claims 33 and 42, PENG et al. discloses the limitations of claim 28; however, PENG et al. does not specifically disclose a contextual bandit process or a genetic process, or MANO. In the same field of endeavor, GUTIERREZ-ESTEVEZ DAVID M ET AL discloses a contextual bandit process or a genetic process (page 138) and MANO (page 134). Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the device of PENG et al. by specifically adding this feature in order to enhance system performance, since the design of an efficient multi-service, multi-slice, and multi-tenant MANO entails challenges on both the architectural and algorithmic levels, as taught by GUTIERREZ-ESTEVEZ DAVID M ET AL.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHAWAR IQBAL, whose telephone number is (571) 272-7909. The examiner can normally be reached M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jinsong Hu, can be reached at (571) 272-3965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KHAWAR IQBAL/
Primary Examiner, Art Unit 2643

Prosecution Timeline

Jan 25, 2024
Application Filed
Feb 28, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by the same examiner in similar technology

Patent 12597354
SYSTEMS AND METHODS FOR IDENTIFYING A VEHICLE ASSET PAIRING
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12587826
Roaming for UE of a NPN
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12580881
METHODS AND SYSTEMS FOR DELAYING MESSAGE NOTIFICATIONS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12568373
FEDERATED MULTI-ACCESS EDGE COMPUTING AVAILABILITY NOTIFICATIONS
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12563386
METHOD AND APPARATUS FOR SECURITY REALIZATION OF CONNECTIONS OVER HETEROGENEOUS ACCESS NETWORKS
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview (+28.8%): 99%
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 639 resolved cases by this examiner. Grant probability derived from career allow rate.
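The footnote states that the grant probability is derived from the career allow rate, and the 73% figure does follow directly from the counts shown in the Examiner Intelligence section. A minimal sketch, assuming the allow rate is simply granted over resolved cases rounded to a whole percent, and assuming the "+10.9% vs TC avg" delta is a plain percentage-point subtraction (both rounding and delta conventions are assumptions, not confirmed by the page):

```python
# Counts taken from the Examiner Intelligence section above.
granted = 466
resolved = 639

# Career allow rate: granted / resolved, displayed as a whole percentage.
allow_rate = granted / resolved            # ~0.729
print(round(allow_rate * 100))             # 73 -> the "73% Grant Probability"

# Back-calculating the Tech Center average from the "+10.9% vs TC avg"
# delta, assuming it is a percentage-point difference (hypothetical).
tc_avg = allow_rate - 0.109
print(round(tc_avg * 100))                 # ~62% implied TC average
```

Note that the "99% With Interview" figure is not reproducible as a simple addition of the +28.8% interview lift (73% + 28.8% exceeds 100%), so it presumably involves a capped or conditional calculation the page does not spell out.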
