Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4, 6, 7, 9, 10, 12, 14, 15, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Vasamsetti et al. (US 11,843,545, hereinafter Vasamsetti) in view of Kumar et al. (US 2024/0095079, hereinafter Kumar).

Regarding claim 1, Vasamsetti discloses A computing system having a vDAS compute node implementing at least one virtual network function (NF) in a virtualized distributed antenna system (vDAS) having a plurality of radio units (RUs), the computing system comprising (fig. 1-5, col. 3, line 67-col. 4, line 6: access device 107 may include … a radio unit (RU) … a distributed unit (DU); col.
4, lines 44-46: external devices 117 may include virtual network devices (e.g., virtualized network functions (VNFs), servers; col. 5, lines 26-29: at least a portion of external devices 117 may include CNF management service logic and an interface (e.g., an API) that supports the CNF management service): at least one server having at least one processor (col. 6, lines 32-35: external devices 117 may include an orchestrator 205, resource controllers 220-1 through 220-S (also referred to as controllers 220, and individually or generally as controller 220)); at least one vDAS compute node having at least one central processing unit …, wherein the at least one vDAS compute node includes at least one vDAS container running … (col. 9, lines 2-3: Host 250 provides various physical resources (e.g., processors; col. 9, lines 15-20: hosts 250 may include CNFs 255-1 through 255-Y; col. 9, line 21: CNFs 255 may be implemented as containers); wherein the at least one server is configured to: receive periodic capacity usage reports from the at least one vDAS compute node (col. 9, line 67-col. 10, line 5: Orchestrator 205 may identify one or multiple resources of relevance to CNF 255 to analyze and determine a state or a triggering event, such as a planned/unplanned spike in usage of one or multiple application services and/or network demand, including failures of an active site and traffic roll-over to a backup site; col. 11, lines 24-26: the CNF management service may perform another iteration of accelerated HPA to instantaneously add pods to provide the application service); compare scaling metric data derived from the periodic capacity usage reports to threshold limits to determine if any of the threshold limits have been reached by any of the scaling metric data for the at least one vDAS compute node (col. 11, lines 11-17: At block 330, one or multiple CNF resource values may be compared to corresponding CNF resource value thresholds.
For example, CNF management service logic 215 of orchestrator 205 may compare a CNF resource value (e.g., a KPI value) included in or calculated from the updated CNF management service information to corresponding CNF value thresholds; col. 11, lines 24-26: the CNF management service may perform another iteration of accelerated HPA to instantaneously add pods to provide the application service); when any of the threshold limits have been reached by any of the scaling metric data for the at least one vDAS compute node: cause the at least one vDAS compute node to scale capacity by either instantiating or deleting at least one additional vDAS container … of the at least one vDAS compute node (col. 9, line 21: CNFs 255 may be implemented as containers; col. 1, line 66-col. 2, line 5: The scaling of resources for an application service includes vertical auto-scaling (e.g., modifying an amount of a resource allocated to a server device, etc.) and/or horizontal scaling (e.g., adding or removing containerized NF resources in the form of pods, etc.). Auto-scaling mechanisms such as horizontal pod autoscaling (HPA) incrementally adjust the number of worker nodes to support the application service based on auto-scaling rules that define various threshold values for triggering of the HPA. For example, the threshold values may pertain to central processing unit (CPU) and/or memory utilization/capacity, application service key performance indicators (KPIs), etc., associated with application service layer resources; col. 7, lines 27-34: auto-scaling may include an HPA infrastructure for adjusting the number of instances (e.g., VMs, pods, containers, host devices, etc.) in response to the amount of usage of available resources (e.g., memory, disk space, processor, communication interface, port, etc.) relative to their capacities. HPA rules may be based on, for example, properties and/or events.
For example, the parameters configured to trigger the respective HPA rule may include minimum, maximum; col. 11, lines 18-26: Based upon a determination that the resource value threshold is satisfied (block 335—YES), process 300 may return to block 305. Alternatively, or in parallel, process 300 may return to block 325. Based upon a determination that the resource value threshold is not satisfied (block 335—NO), accelerated HPA may be enabled (block 315). For example, the CNF management service may perform another iteration of accelerated HPA to instantaneously add pods to provide the application service).

Vasamsetti does not disclose at least one vDAS compute node having at least one central processing unit with a plurality of cores, wherein the at least one vDAS compute node includes at least one vDAS container running on a first subset of the plurality of cores … at least one additional vDAS container on a second subset of the plurality of cores.

Kumar discloses at least one central processing unit with a plurality of cores … container running on a first subset of the plurality of cores … at least one additional … container on a second subset of the plurality of cores (fig. 1-10, paragraph [0037]: the initial allocation might allocate 100 cores to a first container 220-1, 200 cores to a second container 220-2, 300 cores to a third container 220-3, and 400 cores to a fourth container 220-4. The initial core allocation (block 500) might be based on a division of all available cores, or might be based on the static core allocation values 310, or might be determined using another process depending on the implementation).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Vasamsetti’s CNF autoscaling to adopt Kumar’s core allocation techniques to guarantee CPU resource assignment for newly instantiated containers and to reassign cores when containers are deleted.
This modification would have been made to optimize utilization of host resources by allowing the host resources to be moved between the containers (see Kumar, paragraph [0005]).

Regarding claim 9, referring to claim 1, Vasamsetti discloses A method implemented in a virtualized distributed antenna system (vDAS) including at least one server and at least one vDAS compute node having a plurality of cores and implementing at least one virtual network function (NF) for at least one radio unit (RU) using at least one vDAS container running on a first subset of the plurality of cores, the method comprising: … (See the rejection for claim 1).

Regarding claim 17, referring to claim 1, Vasamsetti discloses A non-transitory processor-readable medium on which program instructions, configured to be executed by at least one processor, are embodied, wherein when executed by the at least one processor, the program instructions cause the at least one processor to: … (Fig. 4).

Regarding claims 2 and 10, Vasamsetti discloses wherein the threshold limits include upper limits that, when exceeded, cause the at least one vDAS compute node to increase the capacity of the at least one vDAS compute node by instantiating the at least one additional vDAS container … of the at least one vDAS compute node (fig. 1-5, col. 7, lines 27-34: auto-scaling may include an HPA infrastructure for adjusting the number of instances (e.g., VMs, pods, containers, host devices, etc.) in response to the amount of usage of available resources (e.g., memory, disk space, processor, communication interface, port, etc.) relative to their capacities. HPA rules may be based on, for example, properties and/or events. For example, the parameters configured to trigger the respective HPA rule may include minimum, maximum; col. 11, lines 18-26: Based upon a determination that the resource value threshold is satisfied (block 335—YES), process 300 may return to block 305.
Alternatively, or in parallel, process 300 may return to block 325. Based upon a determination that the resource value threshold is not satisfied (block 335—NO), accelerated HPA may be enabled (block 315). For example, the CNF management service may perform another iteration of accelerated HPA to instantaneously add pods to provide the application service).

Vasamsetti does not disclose at least one additional vDAS container on the second subset of the plurality of cores.

Kumar discloses at least one additional … container on the second subset of the plurality of cores (fig. 1-10, paragraph [0037]: the initial allocation might allocate 100 cores to a first container 220-1, 200 cores to a second container 220-2, 300 cores to a third container 220-3, and 400 cores to a fourth container 220-4. The initial core allocation (block 500) might be based on a division of all available cores, or might be based on the static core allocation values 310, or might be determined using another process depending on the implementation).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Vasamsetti’s CNF autoscaling to adopt Kumar’s core allocation techniques to guarantee CPU resource assignment for newly instantiated containers and to reassign cores when containers are deleted. This modification would have been made to optimize utilization of host resources by allowing the host resources to be moved between the containers (see Kumar, paragraph [0005]).

Regarding claims 4 and 12, Vasamsetti discloses wherein the threshold limits include lower limits that, when not met, cause the at least one vDAS compute node to decrease the capacity of the at least one vDAS compute node by deleting the at least one additional vDAS container on the second subset of the plurality of cores of the at least one vDAS compute node (fig. 1-5, col. 1, line 66-col.
2, line 5: The scaling of resources for an application service includes vertical auto-scaling (e.g., modifying an amount of a resource allocated to a server device, etc.) and/or horizontal scaling (e.g., adding or removing containerized NF resources in the form of pods, etc.). Auto-scaling mechanisms such as horizontal pod autoscaling (HPA) incrementally adjust the number of worker nodes to support the application service based on auto-scaling rules that define various threshold values for triggering of the HPA. For example, the threshold values may pertain to central processing unit (CPU) and/or memory utilization/capacity, application service key performance indicators (KPIs), etc., associated with application service layer resources; col. 7, lines 27-34: auto-scaling may include an HPA infrastructure for adjusting the number of instances (e.g., VMs, pods, containers, host devices, etc.) in response to the amount of usage of available resources (e.g., memory, disk space, processor, communication interface, port, etc.) relative to their capacities. HPA rules may be based on, for example, properties and/or events. For example, the parameters configured to trigger the respective HPA rule may include minimum, maximum; col. 11, lines 18-26: Based upon a determination that the resource value threshold is satisfied (block 335—YES), process 300 may return to block 305. Alternatively, or in parallel, process 300 may return to block 325. Based upon a determination that the resource value threshold is not satisfied (block 335—NO), accelerated HPA may be enabled (block 315). For example, the CNF management service may perform another iteration of accelerated HPA to instantaneously add pods to provide the application service).

Regarding claims 6, 14, and 20, Vasamsetti discloses wherein the at least one server is configured to cause the at least one vDAS compute node to scale the capacity of the at least one vDAS compute node through at least one of (fig. 1-5, col.
11, lines 18-26: Based upon a determination that the resource value threshold is satisfied (block 335—YES), process 300 may return to block 305. Alternatively, or in parallel, process 300 may return to block 325. Based upon a determination that the resource value threshold is not satisfied (block 335—NO), accelerated HPA may be enabled (block 315). For example, the CNF management service may perform another iteration of accelerated HPA to instantaneously add pods to provide the application service) scaling using a monolithic service architecture or scaling using micro-services architecture (col. 9, lines 21-34: CNFs 255 may be implemented as containers, hypervisor-based (e.g., bare-metal hypervisor, hosted hypervisor) (also known as a VM), or other known (e.g., proprietary, hybrid, etc.) network function virtualization (NFV), or future generation virtualization. CNFs 255 may include hosted application services (Apps) 260-1 through 260-Z (also referred to as applications 260, and individually or generally as application 260). Application 260 may include software, firmware, and/or another form of executable code for an application service. Applications 260 may include one or multiple instances of the same or different application services. An application service may include a monolithic application, a microservice, a composite application (e.g., including multiple microservices)).

Regarding claim 7, Vasamsetti discloses wherein the at least one server is configured to cause the at least one vDAS compute node to increase the capacity of the at least one vDAS compute node by instantiating the at least one additional vDAS container on the at least one vDAS compute node at least in part by being configured to: replicate the at least one vDAS compute node to create at least a second vDAS container (fig. 1-5, col. 7, lines 27-34: auto-scaling may include an HPA infrastructure for adjusting the number of instances (e.g., VMs, pods, containers, host devices, etc.)
in response to the amount of usage of available resources (e.g., memory, disk space, processor, communication interface, port, etc.) relative to their capacities. HPA rules may be based on, for example, properties and/or events. For example, the parameters configured to trigger the respective HPA rule may include minimum, maximum; col. 9, lines 21-34: CNFs 255 may be implemented as containers, hypervisor-based (e.g., bare-metal hypervisor, hosted hypervisor) (also known as a VM), or other known (e.g., proprietary, hybrid, etc.) network function virtualization (NFV), or future generation virtualization. CNFs 255 may include hosted application services (Apps) 260-1 through 260-Z (also referred to as applications 260, and individually or generally as application 260). Application 260 may include software, firmware, and/or another form of executable code for an application service. Applications 260 may include one or multiple instances of the same or different application services. An application service may include a monolithic application; col. 11, lines 18-26: Based upon a determination that the resource value threshold is satisfied (block 335—YES), process 300 may return to block 305. Alternatively, or in parallel, process 300 may return to block 325. Based upon a determination that the resource value threshold is not satisfied (block 335—NO), accelerated HPA may be enabled (block 315). For example, the CNF management service may perform another iteration of accelerated HPA to instantaneously add pods to provide the application service; Note: Vasamsetti expressly contemplates monolithic application instances and HPA-driven instantiation of CNF instances, as cited above. Accordingly, the replication of a vDAS compute node to create an additional vDAS container, i.e., creating another monolithic instance to handle increased load, is a direct application of Vasamsetti’s teaching).
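For illustration only, the combined mechanism relied upon above (Vasamsetti's threshold-triggered horizontal scaling and Kumar's per-container core allocation) can be sketched as follows. This is a minimal, hypothetical model, not code from either reference; all names (Autoscaler, on_usage_report, etc.) are invented for the sketch:

```python
# Hypothetical sketch: threshold-driven horizontal scaling with
# per-container core-subset allocation. Not from either cited reference.
from dataclasses import dataclass, field

@dataclass
class Autoscaler:
    total_cores: int            # cores available on the compute node
    cores_per_container: int    # cores assigned to each container
    upper: float = 0.8          # upper utilization threshold (scale out)
    lower: float = 0.2          # lower utilization threshold (scale in)
    containers: list = field(default_factory=list)  # core subsets in use

    def __post_init__(self):
        # start with one container on the first subset of cores
        self.containers.append(list(range(self.cores_per_container)))

    def free_cores(self):
        # cores not currently assigned to any container
        used = {c for subset in self.containers for c in subset}
        return [c for c in range(self.total_cores) if c not in used]

    def on_usage_report(self, utilization):
        """React to a periodic capacity usage report."""
        if utilization >= self.upper and len(self.free_cores()) >= self.cores_per_container:
            # instantiate an additional container on a second core subset
            self.containers.append(self.free_cores()[:self.cores_per_container])
        elif utilization <= self.lower and len(self.containers) > 1:
            # delete the additional container, returning its cores
            self.containers.pop()

scaler = Autoscaler(total_cores=8, cores_per_container=4)
scaler.on_usage_report(0.9)          # upper threshold reached: scale out
print(scaler.containers)             # [[0, 1, 2, 3], [4, 5, 6, 7]]
scaler.on_usage_report(0.1)          # lower threshold reached: scale in
print(len(scaler.containers))        # 1
```

The sketch simply combines the two cited behaviors: utilization compared against threshold limits drives container instantiation/deletion, and each container is pinned to a disjoint subset of cores.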
Regarding claim 15, Vasamsetti discloses wherein causing the at least one vDAS compute node to increase the capacity of the at least one vDAS compute node by instantiating the at least one additional vDAS container on the second subset of the plurality of cores of the at least one vDAS compute node includes: replicating the at least one vDAS compute node (fig. 1-5, col. 7, lines 27-34: auto-scaling may include an HPA infrastructure for adjusting the number of instances (e.g., VMs, pods, containers, host devices, etc.) in response to the amount of usage of available resources (e.g., memory, disk space, processor, communication interface, port, etc.) relative to their capacities. HPA rules may be based on, for example, properties and/or events. For example, the parameters configured to trigger the respective HPA rule may include minimum, maximum; col. 11, lines 18-26: Based upon a determination that the resource value threshold is satisfied (block 335—YES), process 300 may return to block 305. Alternatively, or in parallel, process 300 may return to block 325. Based upon a determination that the resource value threshold is not satisfied (block 335—NO), accelerated HPA may be enabled (block 315). For example, the CNF management service may perform another iteration of accelerated HPA to instantaneously add pods to provide the application service; Note: Vasamsetti expressly contemplates monolithic application instances and HPA-driven instantiation of CNF instances, as cited above. Accordingly, the replication of a vDAS compute node, i.e., creating another monolithic instance to handle increased load, is a direct application of Vasamsetti’s teaching).

Vasamsetti does not disclose at least one additional vDAS container on the second subset of the plurality of cores.
Kumar discloses at least one additional … container on the second subset of the plurality of cores (paragraph [0037]: the initial allocation might allocate 100 cores to a first container 220-1, 200 cores to a second container 220-2, 300 cores to a third container 220-3, and 400 cores to a fourth container 220-4. The initial core allocation (block 500) might be based on a division of all available cores, or might be based on the static core allocation values 310, or might be determined using another process depending on the implementation).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Vasamsetti’s CNF autoscaling to adopt Kumar’s core allocation techniques to guarantee CPU resource assignment for newly instantiated containers and to reassign cores when containers are deleted. This modification would have been made to optimize utilization of host resources by allowing the host resources to be moved between the containers (see Kumar, paragraph [0005]).

Allowable Subject Matter

Claims 3, 5, 8, 11, 13, 16, 18, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Fu et al. (US 2024/0172097) discloses “The network node 902 includes hardware 940 comprising a set of one or more processors 942 (which are typically COTS processors or processor cores or ASICs) ….
The multiple software containers (also called virtualization engines, virtual private servers, or jails) are user spaces (typically a virtual memory space)” (paragraph [0127]) and “A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS)” (paragraph [0136]).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SISLEY N. KIM whose telephone number is (571)270-7832. The examiner can normally be reached M-F 11:30 AM-7:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Y. Blair, can be reached at (571)270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SISLEY N KIM/
Primary Examiner, Art Unit 2196
3/15/2026