Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5-16, and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 2021/0241090) (hereinafter Chen) in view of Chan et al. (US 2014/0310714) (hereinafter Chan).
Regarding claim 1, Chen discloses a method for dynamically controlling a cell, comprising:
obtaining first usage statistics of the cell (see Chen, p. [0047], e.g., raw operational data, and Fig. 4, p. [0055-0065], e.g., at step 410, the processing system obtains operational data from a radio access network (RAN));
generating a cell-specific predictive model (e.g., different sub-agents of a reinforcement learning agent) based on the first usage statistics (see Chen, Fig. 3, p. [0054], and Fig. 4, p. [0055-0065], e.g., at step 420, the processing system formats the operational data into state information and reward information for a reinforcement learning agent (RLA));
obtaining second usage statistics of the cell (see Chen, Fig. 4, p. [0056], e.g., the operational data may be obtained from base stations, baseband units, and so forth of the RAN, and p. [0055-0065], e.g., at step 430, the processing system processes the state information and the reward information via the reinforcement learning agent);
predicting third usage statistics of the cell, based on the second usage statistics and the cell-specific predictive model (see Chen, Fig. 4, p. [0055-0065], e.g., at step 440, the processing system determines a plurality of settings (e.g., one or more settings) for one or more parameters of the radio access network via the reinforcement learning agent, wherein the reinforcement learning agent determines the one or more settings in accordance with a plurality of selections for the one or more settings via the plurality of sub-agents); and
selecting an operational mode of the cell based on the third usage statistics (see Chen, p. [0055-0065], e.g., at step 450, the processing system applies the plurality of settings to the radio access network, and see Fig. 3, p. [0054]).
However, Chen does not expressly disclose the method comprising: obtaining first usage statistics of the cell for a first interval; obtaining second usage statistics of the cell corresponding to a second time interval; predicting third usage statistics of the cell for a third time interval; and
selecting an operational mode of the cell for the third time interval.
Chan discloses the above recited limitations (see Chan, p. [0069], e.g., the raw data or the trend information in a time series might show one behavior during a first interval of time, and the raw data might then show a different behavior during a successive second interval of time. The information change in the raw data at the point in between the first interval and the second interval can indicate that the memory leak was fixed by some change made to the system. During a third interval following the second interval, the variance in heap usage might decrease significantly, indicating that yet another change was made to the system in between the second and third intervals of time, and p. [0071], e.g., the raw data show a change in the system, producing the distinctive first, second, and third intervals).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Chan’s teachings into Chen. The suggestion/motivation would have been to provide for the issuance of a directive as the output of a resolution action, as suggested by Chan.
Regarding claim 2, the combined teachings of Chen and Chan disclose the method of claim 1, where the cell-specific predictive model comprises a plurality of predictive models trained to estimate a plurality of physical resource block utilizations for a corresponding plurality of different time intervals (see Chen, Fig. 3, p. [0054], e.g., different sub-agents of a reinforcement learning agent, and Fig. 4, p. [0058-0059], e.g., each of the plurality of sub-agents is assigned a respective value function and a respective plurality of permitted actions, where the plurality of permitted actions comprises a plurality of allowable settings for a plurality of parameters of the radio access network).
Regarding claim 5, the combined teachings of Chen and Chan disclose the method of claim 1, where the second usage statistics comprise real-time statistics and the method further comprises sending the operational mode to the cell via a non-real-time advisory message (see Chen, p. [0010], e.g., real-time data streaming, and p. [0015], e.g., the reinforcement learning explores different configurations in real-time and updates policies based upon the outcomes of the changes to determine the optimal configurations (e.g., parameter settings) of the network, and p. [0040], [0054], [0059], e.g., the plurality of permitted actions).
Regarding claim 6, the combined teachings of Chen and Chan disclose the method of claim 1, where the operational mode is selected from an energy-saving mode and a capacity mode (see Chen, Fig. 3, p. [0054], e.g., power on/off and scheduler options, handover offset configurations).
Regarding claim 7, the combined teachings of Chen and Chan disclose the method of claim 1, where the cell-specific predictive model comprises a deep-reinforcement learning model trained to select the operational mode based on at least one of a network traffic, a resource utilization, and a previous operational mode (see Chen, p. [0040], e.g., a reinforcement learning agent (RLA) may comprise a plurality of sub-agents, which may have different rewards, and which may select different actions with respect to a given state. A selected action of one sub-agent may affect the reward that accrues to a different sub-agent, and vice versa, and p. [0054], [0055-0065]).
Regarding claim 8, Chen discloses a near-real-time radio access network controller (e.g., SON/SDN controller 102), comprising: a non-real-time network interface configured to transact non-real-time advisory messages with a non-real-time network management entity via a best-effort network (see Chen, Fig. 1, p. [0027], e.g., EPC network 105 is an Internet Protocol (IP) packet core network that supports both real-time and non-real-time service delivery across a LTE network); a real-time control interface configured to control a cell according to a schedule constraint (see Chen, Fig. 2, e.g., raw configuration and control interface 260);
a processor (see Chen, Fig. 5); and a non-transitory computer readable medium comprising instructions, which when executed by the processor, cause the near-real-time radio access network controller to: obtain real-time usage statistics of the cell (see Chen, p. [0047], e.g., raw operational data, and Fig. 4, p. [0055-0065], e.g., at step 410, the processing system obtains operational data from a radio access network (RAN));
provide a first real-time usage statistic to the non-real-time network management entity (e.g., a reinforcement learning agent) (see Chen, Fig. 3, p. [0054], and Fig. 4, p. [0055-0065], e.g., at step 420, the processing system formats the operational data into state information and reward information for a reinforcement learning agent (RLA));
obtain an advisory operational mode for the cell from the non-real-time network management entity (see Chen, Fig. 4, p. [0055-0065], e.g., at step 440, the processing system determines a plurality of settings (e.g., one or more settings) for one or more parameters of the radio access network via the reinforcement learning agent, wherein the reinforcement learning agent determines the one or more settings in accordance with a plurality of selections for the one or more settings via the plurality of sub-agents); and
select a real-time operational mode of the cell based on the advisory operational mode (see Chen, p. [0055-0065], e.g., at step 450, the processing system applies the plurality of settings to the radio access network, and see Fig. 3, chart 300, p. [0054], e.g., action: power on/off different cells).
However, Chen does not expressly disclose the near-real-time radio access network controller to: provide a first real-time usage statistic corresponding to a first time interval to the non-real-time network management entity; obtain an advisory operational mode for the cell from the non-real-time network management entity, where the advisory operational mode corresponds to a second time interval subsequent to the first time interval.
Chan discloses the above recited limitations (see Chan, p. [0069], e.g., the raw data or the trend information in a time series might show one behavior during a first interval of time, and the raw data might then show a different behavior during a successive second interval of time. The information change in the raw data at the point in between the first interval and the second interval can indicate that the memory leak was fixed by some change made to the system, and p. [0071], e.g., the raw data show a change in the system, producing the distinctive first, second, and third intervals).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Chan’s teachings into Chen. The suggestion/motivation would have been to provide for the issuance of a directive as the output of a resolution action, as suggested by Chan.
Regarding claim 9, the combined teachings of Chen and Chan disclose the near-real-time radio access network controller of claim 8, where the real-time usage statistics of the cell comprise physical resource block utilization measured for each transmission time interval (see Chen, p. [0057], e.g., the state information comprises a plurality of performance indicators (e.g., KPIs) such as: a physical resource block (PRB) utilization, a number of active endpoint devices (e.g., at a particular cell), a handover frequency, an average endpoint device bandwidth, a geographic distribution of endpoint devices, a radio frequency distribution, and a traffic volume, and p. [0049], e.g., KPI calculation intervals).
Regarding claim 10, the combined teachings of Chen and Chan disclose the near-real-time radio access network controller of claim 9, where the first real-time usage statistic corresponds to a first portion of the first time interval, and where the real-time usage statistics comprise a mean physical resource block utilization, a maximum physical resource block utilization, or a minimum physical resource block utilization (see Chen, p. [0057], e.g., the state information comprises a plurality of performance indicators (e.g., KPIs) such as: a physical resource block (PRB) utilization, and p. [0049], e.g., KPI calculation intervals).
Regarding claim 11, the combined teachings of Chen and Chan disclose the near-real-time radio access network controller of claim 8, where the instructions further cause the near-real-time radio access network controller to determine whether the advisory operational mode may be enabled according to the schedule constraint (see Chen, Fig. 3, table 300, e.g., action: scheduler options).
Regarding claim 12, the combined teachings of Chen and Chan disclose the near-real-time radio access network controller of claim 11, where the real-time operational mode is selected from an energy-saving mode and a capacity mode (see Chen, Fig. 3, table 300, e.g., energy saving and load balancing).
Regarding claim 13, the combined teachings of Chen and Chan disclose the near-real-time radio access network controller of claim 11, where the real-time control interface is further configured to control the cell according to a power consumption constraint and where the instructions further cause the near-real-time radio access network controller to determine whether the advisory operational mode may be enabled according to the power consumption constraint (see Chen, Fig. 3, table 300, e.g., power on/off of different cells).
Regarding claim 14, the combined teachings of Chen and Chan disclose the near-real-time radio access network controller of claim 11, where the real-time control interface is further configured to control the cell according to a capacity hysteresis constraint and where the instructions further cause the near-real-time radio access network controller to determine whether the advisory operational mode may be enabled according to the capacity hysteresis constraint (see Chen, p. [0042], e.g., a network operator may give priority ratings to different sub-agents such that a goal of coverage optimization (implemented via a first sub-agent) is relatively more important (and provides greater impact to the selection of a parameter setting) as compared to a goal of load balancing (implemented via a second sub-agent)).
Regarding claim 15, Chen discloses a non-real-time network management entity (see Chen, Fig. 2, e.g., a reinforcement learning agent (RLA)), comprising: a non-real-time network interface configured to transact non-real-time advisory messages with a near-real-time radio access network controller via a best-effort network (see Chen, Fig. 2); a processor (see Chen, Fig. 5); and a non-transitory computer readable medium comprising instructions, which when executed by the processor, cause the non-real-time network management entity to:
obtain first cell-specific usage statistics of a cell, via a first non-real-time advisory message (see Chen, p. [0047], e.g., raw operational data, and Fig. 4, p. [0055-0065], e.g., at step 410, the processing system obtains operational data from a radio access network (RAN));
predict second cell-specific usage statistics of the cell, based on the first cell-specific usage statistics (see Chen, Fig. 3, p. [0054], and Fig. 4, p. [0055-0065], e.g., at step 420, the processing system formats the operational data into state information and reward information for a reinforcement learning agent (RLA)) and a predictive model trained on historic real-time usage statistics that are specific to the cell (see Chen, Fig. 4, p. [0055-0065], e.g., at step 440, the processing system determines a plurality of settings (e.g., one or more settings) for one or more parameters of the radio access network via the reinforcement learning agent, wherein the reinforcement learning agent determines the one or more settings in accordance with a plurality of selections for the one or more settings via the plurality of sub-agents);
select an operational mode of the cell based on the second cell-specific usage statistics (see Chen, p. [0055-0065], e.g., at step 450, the processing system applies the plurality of settings to the radio access network, and see Fig. 3, p. [0054]); and transmit the operational mode via a second non-real-time advisory message (see Chen, p. [0050], e.g., each of the sub-agents 241-243 may output, via the respective neural networks 247-249, respective selection(s) for respective setting(s) of respective parameter(s) of RAN 270, and p. [0053], e.g., the plurality of settings for the plurality of parameters (e.g., as determined by RLA 240 in accordance with a plurality of selections for the plurality of settings via the plurality of sub-agents), may be provided to RAN configuration and control interface 260 as one or more actions 250).
However, Chen does not expressly disclose the non-real-time network management entity to: obtain first cell-specific usage statistics of a cell corresponding to a first time interval; predict second cell-specific usage statistics of the cell for a second time interval; and select an operational mode of the cell for the second time interval.
Chan discloses the above recited limitations (see Chan, p. [0069], e.g., the raw data or the trend information in a time series might show one behavior during a first interval of time, and the raw data might then show a different behavior during a successive second interval of time. The information change in the raw data at the point in between the first interval and the second interval can indicate that the memory leak was fixed by some change made to the system, and p. [0071], e.g., the raw data show a change in the system, producing the distinctive first, second, and third intervals).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Chan’s teachings into Chen. The suggestion/motivation would have been to provide for the issuance of a directive as the output of a resolution action, as suggested by Chan.
Regarding claim 16, the combined teachings of Chen and Chan disclose the non-real-time network management entity of claim 15, where the predictive model comprises a plurality of predictive models trained to estimate physical resource block utilization for a corresponding plurality of different time intervals (see Chen, Fig. 3, p. [0054], e.g., different sub-agents of a reinforcement learning agent, and each of the use cases may be associated with a respective agent (or sub-agent) with a neural network approximating the Q-function and having different reward-action sets).
Regarding claim 19, the combined teachings of Chen and Chan disclose the non-real-time network management entity of claim 15, where the second cell-specific usage statistics comprise a binary classification that characterizes whether a future traffic load exceeds a threshold (see Chen, p. [0011], e.g., another RAN configuration is the handover (HO) parameters for optimal load balancing. By changing thresholds based on observed traffic imbalance, the network may offload traffic from overloaded cells to under-utilized cells to improve overall throughput).
Regarding claim 20, the combined teachings of Chen and Chan disclose the non-real-time network management entity of claim 15, where the instructions further cause the non-real-time network management entity to obtain another cell-specific usage statistics of another cell corresponding to the first time interval and where the second cell-specific usage statistics are based on the other cell-specific usage statistics (see Chen, p. [0056], e.g., the processing system obtains operational data from a radio access network (RAN). For instance, the operational data may be obtained from base stations, baseband units, and so forth of the RAN).
Regarding claim 21, the combined teachings of Chen and Chan disclose the non-real-time network management entity of claim 15, where the instructions further cause the non-real-time network management entity to transmit the operational mode directly to a cell of the radio access network (see Chen, p. [0050], e.g., each of the sub-agents 241-243 may output, via the respective neural networks 247-249, respective selection(s) for respective setting(s) of respective parameter(s) of RAN 270, and p. [0053], e.g., the plurality of settings for the plurality of parameters (e.g., as determined by RLA 240 in accordance with a plurality of selections for the plurality of settings via the plurality of sub-agents), may be provided to RAN configuration and control interface 260 as one or more actions 250).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over the combined teachings of Chen and Chan in view of Vasseur et al. (US 2019/0306023) (hereinafter Vasseur).
Regarding claim 18, the combined teachings of Chen and Chan do not expressly disclose the non-real-time network management entity of claim 15, where the second cell-specific usage statistics comprise a quantile regression that characterizes a plurality of likelihoods for a corresponding plurality of future traffic loads.
Vasseur discloses the above recited limitations (see Vasseur, p. [0048], e.g., analyzer 312 may be configured to build predictive models for the joining/roaming time by taking into account a large plurality of parameters/observations (e.g., RF variables, time of day, number of clients, traffic load, DHCP/DNS/Radius time, AP/WLC loads, etc.). From this, analyzer 312 can detect potential network issues before they happen, and p. [0072], e.g., CDE 506 may employ statistical models to predict certain moments (e.g., a quantile regressor, etc.) of the distribution P(M | S) by using S as an input vector).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Vasseur’s teachings into the combined teachings of Chen and Chan. The suggestion/motivation would have been to provide predictive analytics models to predict user experiences, which is a significant paradigm shift from reactive approaches to network health, as suggested by Vasseur.
Allowable Subject Matter
Claims 3-4 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MINH TRANG T NGUYEN whose telephone number is (571)270-5248. The examiner can normally be reached M-F 8:30am-6:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chirag C Shah can be reached at 571-272-3144. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MINH TRANG T NGUYEN/Primary Examiner, Art Unit 2477