DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This action is responsive to the Remarks filed on 5/30/2025.
Claims 1-4 were amended.
Claims 1-4 are presented for examination.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 are rejected under 35 U.S.C. 103 as being unpatentable over Celozzi, U.S. Pub. No. US 2021/0288827 A1, in view of Smith, U.S. Pub. No. US 2009/0201815 A1, and further in view of Fukuda, U.S. Pub. No. US 2017/0026879 A1.
As to claim 1 (currently amended): A management apparatus that manages a virtualized Distributed Unit (DU), constructed from one or more virtualized components, in a base station forming a radio communication cell, wherein the management apparatus comprises:
at least one memory configured to store program code; and
at least one processor configured to read the program code and operate as instructed by the program code, wherein the program code includes:
monitoring code configured to cause the at least one processor to monitor, as a usage status value, a radio terminal accommodation rate in the virtualized DU or a traffic accommodation rate in the virtualized DU (Celozzi, page 5, paragraphs 78-79; i.e., [0078] In a preferred embodiment the capacity parameter comprises a second parameter applicable in case of scaling-in and scaling-out, wherein scaling-in comprises adding an instance of a Virtual Network Function Component and scaling-out comprises removing an instance of a Virtual Network Function Component. In short, this means that there may be more than one VNFC created based on one VDU and if the traffic/workload demand increases it is possible to increase the actual capacity handled by increasing the number of instantiated VNFCs (scaling-in). It is also possible to achieve the opposite; if the traffic/workload demand drops it is possible to reduce the number of instantiated VNFCs (scaling-out); [0079] In yet another preferred embodiment the capacity parameter comprises a third parameter applicable in case of scaling-up and scaling-down. Scaling-up comprises increasing at least one of CPU power, memory size and other characteristics of virtual resources available for the instantiated Virtual Network Function Component. Scaling-down comprises the opposite-reducing at least one of CPU power, memory size and other characteristics of virtual resources available for the instantiated Virtual Network Function Component [NOTE: according to the Detailed Description, paragraphs 86-90, the deletion allowance requirement in the case in which there is one VNFC included in the vDU 121 includes: a state in which the usage status value is less than the first threshold value continuing for a prescribed time period]);
virtualized DU setting determination code configured to cause the at least one processor to determine a setting in the virtualized DU based on the usage status value and a deletion allowance requirement (Celozzi, page 5, paragraphs 78-79, quoted above); and
control code configured to cause the at least one processor to control the virtualized DU in accordance with the setting that has been determined (Celozzi, page 5, paragraphs 78-79, quoted above),
wherein the virtualized DU setting determining code is further configured to cause the at least one processor to: determine that the virtualized DU is to be deleted based on determining that the usage status value is less than a first threshold value and the deletion allowance requirement is satisfied (Celozzi, page 5, paragraphs 78-79, quoted above), and
determine that the virtualized DU is to be scaled in based on determining that the usage status value is less than the first threshold value and the deletion allowance requirement is not satisfied (Celozzi, page 5, paragraphs 78-79, quoted above).
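For purposes of illustration only (this sketch is not part of the prosecution record, and the function and parameter names are hypothetical, not drawn verbatim from Celozzi or the claims), the delete/scale-in decision rule recited above can be expressed as follows:

```python
# Hypothetical sketch of the claim 1 decision rule: below the first
# threshold, the virtualized DU is deleted only when the deletion
# allowance requirement is satisfied; otherwise it is scaled in.
def determine_setting(usage_status_value: float,
                      first_threshold: float,
                      deletion_allowed: bool) -> str:
    """Return the setting determined for the virtualized DU."""
    if usage_status_value < first_threshold:
        return "delete" if deletion_allowed else "scale-in"
    # At or above the first threshold, no deletion/scale-in applies here.
    return "maintain"
```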
Celozzi, however, fails to teach the claim limitations of determining a setting in the virtualized DU based on the usage status value and a deletion allowance requirement; and determining that the virtualized DU is to be scaled in based on determining that the usage status value is less than the first threshold value and the deletion allowance requirement is not satisfied, wherein the deletion allowance requirement specifies that the virtualized DU is deleted based on determining that a predetermined condition is satisfied, and wherein the predetermined condition specifies one of (i) a state in which the usage status value is less than the first threshold value is greater than or equal to a first predetermined amount of time, (ii) a state in which the usage status value is less than the first threshold occurs at least a number of times within a second predetermined amount of time, (iii) the virtualized DU is active during a time period in which the base station is not used, or (iv) a heterogeneous network is formed in a radio communication cell formed by the base station.
However, Smith teaches determining a setting in the virtualized DU based on the usage status value and a deletion allowance requirement, and determining that the virtualized DU is to be scaled in based on determining that the usage status value is less than the first threshold value and the deletion allowance requirement is not satisfied (Smith, page 1, paragraph 20; page 3, paragraphs 56-58; i.e., [0020] In the first mode, frames received at any of a set of accepted data rates may be acknowledged according to the radio protocol, and in the second mode, one or more data rates may be removed from the set of accepted data rates to create a revised set of accepted data rates and frames received at any of the revised set of accepted data rates are acknowledged according to the radio protocol; [0056] One or more of these parameters may also be used in determining which data rates are removed from the accepted rates list and hence will be dropped (in block 303); [0057] Data rate in and data rate out may be used as described above with reference to FIG. 2, such that the behavior of the receiver is modified when the data rate in exceeds the data rate out. In an example, data rates may be removed from the accepted rates list (and hence frames dropped) which would result in the data rate exceeding the data rate out; [0058] Where available buffer space is used to define the criteria, one or more thresholds may be used. In the single threshold case, where the available buffer space exceeds the threshold, the receiver operates normally (i.e. it acknowledges those frames received in the standard manner for the particular radio protocol used) and where the available buffer space falls below the threshold ('Yes' in block 302), one or more of the higher transmission rates are dropped by failing to acknowledge frames received, i.e. these one or more higher transmission rates are removed from the accepted rates list. 
Where multiple thresholds are used, different data rates may be dropped according to the amount of available buffer space. For example, once the available buffer space falls below a first threshold, all the optional rates may be dropped (and hence removed from the accepted rates list) and once the available buffer space falls below a second threshold, all the mandatory rates which exceed the rate at which the beacons are transmitted (typically the lowest mandatory rate) will be dropped. ).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Celozzi to substitute the buffer overflow condition of Smith for the traffic load of Celozzi, in order to provide a flow control mechanism, as it is expected that the receiving device will be able to cope with the maximum possible data throughput (Smith, page 1, paragraph 1).
However, Fukuda teaches the limitation wherein the deletion allowance requirement specifies that the virtualized DU is deleted based on determining that a predetermined condition is satisfied, and wherein the predetermined condition specifies one of (i) a state in which the usage status value is less than the first threshold value is greater than or equal to a first predetermined amount of time, (ii) a state in which the usage status value is less than the first threshold occurs at least a number of times within a second predetermined amount of time, (iii) the virtualized DU is active during a time period in which the base station is not used, or (iv) a heterogeneous network is formed in a radio communication cell formed by the base station (Fukuda, page 1, paragraph 11; page 2, paragraph 25; page 5, paragraph 64-65; page 6, paragraph 67; page 7, paragraph 83; i.e., [0011] in deleting information on a base station included in the neighboring cell list in the HetNet, there may arise a case where information on the small base station with a low handover frequency is to be a deletion target; [0025] The radio communication unit 13 is a radio interface circuit which includes a function of communicating with the terminal 4 and is adapted for LTE. The radio communication unit 13 may further include a function of communicating with radio communication units of the other base stations 2 and 3. The base station 1 wirelessly communicates with the terminal 4 adapted for LTE via the radio communication unit 13; [0065] the neighboring cell list is deleted at the predetermined frequency from the neighboring cell list in accordance with a criterion; [0067] The base station including such a minimum configuration uses the different criteria in deleting the base station from the neighboring cell list. 
This makes it possible to set different ratios for deleting the small base station and the non-small base station from the neighboring cell list; [0083] the base station included in the neighboring cell list is deleted from the neighboring cell list in accordance with a criterion different depending on whether the base station included in the neighboring cell list. The criterion for deleting the base station is set based on the cell type information and the communication quality of the base station).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Celozzi to substitute the macro cell of Fukuda for the NFVI resource of Celozzi, in order to prevent the traffic of the macro base station from becoming tight. Accordingly, an improvement in the quality of a mobile communication service is expected (Fukuda, page 1, paragraph 4).
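For purposes of illustration only (this sketch is not part of the prosecution record, and all names and inputs are hypothetical), the four alternative predetermined conditions (i)-(iv) of the deletion allowance requirement discussed above can be sketched as a single disjunctive check:

```python
# Hypothetical sketch: the deletion allowance requirement is satisfied
# when any one of the four alternative predetermined conditions holds.
def deletion_allowed(below_threshold_duration: float,
                     first_time_limit: float,
                     below_threshold_count: int,
                     required_count: int,
                     active_while_base_station_unused: bool,
                     heterogeneous_network_formed: bool) -> bool:
    return (below_threshold_duration >= first_time_limit       # (i) duration
            or below_threshold_count >= required_count         # (ii) occurrences in window
            or active_while_base_station_unused                # (iii) active while unused
            or heterogeneous_network_formed)                   # (iv) HetNet formed
```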
As to claim 2, Celozzi-Smith-Fukuda teaches the management apparatus as recited in claim 1, wherein the virtualized DU setting determining code is further configured to cause the at least one processor to
determine that the virtualized DU is to be scaled in based on determining that the usage status value is equal to or higher than the first threshold value and less than a second threshold value higher than the first threshold value, and determine that the virtualized DU is to be maintained if the usage status value is equal to or higher than the second threshold value and less than a third threshold value higher than the second threshold value (Celozzi, page 5, paragraphs 78-79, quoted above),
determine that the virtualized DU is to be scaled out based on determining that the usage status value is equal to or higher than the third threshold value and less than a fourth threshold value higher than the third threshold value (Celozzi, page 5, paragraphs 78-79, quoted above), and
determine that a virtualized component is to be added to the virtualized DU based on determining that the usage status value is equal to or higher than the fourth threshold value (Celozzi, page 5, paragraphs 78-79, quoted above).
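For purposes of illustration only (this sketch is not part of the prosecution record, and the names are hypothetical), the four threshold bands recited in claim 2, with t1 < t2 < t3 < t4, can be sketched as follows:

```python
# Hypothetical sketch of the claim 2 threshold bands (t1 < t2 < t3 < t4).
def determine_setting_claim2(v: float, t1: float, t2: float,
                             t3: float, t4: float) -> str:
    if t1 <= v < t2:
        return "scale-in"
    if t2 <= v < t3:
        return "maintain"
    if t3 <= v < t4:
        return "scale-out"
    if v >= t4:
        return "add virtualized component"
    # Below t1 the claim 1 delete/scale-in rule applies instead.
    return "below first threshold"
```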
As to claim 3, Celozzi-Smith-Fukuda teaches the management apparatus as recited in claim 2, wherein
based on a determination that the virtualized DU is to be maintained in a case in which the usage status value is equal to or higher than the second threshold value and less than the third threshold value higher than the second threshold value (Celozzi, page 5, paragraphs 78-79, quoted above),
the virtualized DU setting determining code is further configured to cause the at least one processor to:
determine that the virtualized DU is to be scaled in if a decrease rate in the usage status value is equal to or higher than a fifth threshold value (Celozzi, page 5, paragraphs 78-79, quoted above), and
determine that the virtualized DU is to be scaled out if an increase rate in the usage status value is equal to or higher than a sixth threshold value (Celozzi, page 5, paragraphs 78-79, quoted above).
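For purposes of illustration only (this sketch is not part of the prosecution record, and the names are hypothetical), the claim 3 refinement, which re-examines the rate of change of the usage status value within the "maintain" band against the fifth and sixth threshold values, can be sketched as:

```python
# Hypothetical sketch of the claim 3 refinement: within the maintain band,
# a fast decrease triggers scale-in and a fast increase triggers scale-out.
def refine_maintain(decrease_rate: float, increase_rate: float,
                    fifth_threshold: float, sixth_threshold: float) -> str:
    if decrease_rate >= fifth_threshold:
        return "scale-in"
    if increase_rate >= sixth_threshold:
        return "scale-out"
    return "maintain"
```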
Claim 4 is directed to a method/computer-readable-medium claim and does not teach or further define over the limitations recited in claim 1. Therefore, claim 4 is also rejected for the reasons set forth for claim 1.
Response to Arguments
Applicant's arguments with respect to claims 1-4 have been considered but are moot in view of the new grounds of rejection.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Listing of Relevant Arts
Ouchi, U.S. Pub. No. US 2018/0220458 A1, discloses a heterogeneous network, radio communication, and a condition for deleting.
Zhang, U.S. Pub. No. US 2016/0323809 A1, discloses a condition for removing, a heterogeneous network, and radio communication.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THUONG NGUYEN, whose telephone number is (571) 272-3864. The examiner can normally be reached Monday-Friday, 9:00-6:00.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Noel Beharry, can be reached at 571-270-5630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THUONG NGUYEN/Primary Examiner, Art Unit 2416