DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s submission filed on 9/30/2025 has been entered. Claims 1-20 are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 1 recites a system comprising a server and three routers. Initially, the claim fails to fall within any of the four enumerated categories of 35 U.S.C. 101 set forth above. Although the claim recites “a system comprising a server, a first router, a third router, and a second router”, the claim lacks the physical components necessary to constitute a machine or a manufacture within the meaning of 35 U.S.C. 101. The claimed components are clearly not a series of steps or acts so as to be a process, nor are they a combination of chemical compounds so as to be a composition of matter. Moreover, the server may be a computer program, as suggested in Applicant’s Specification at least at par. [0121], which supports that the server may refer to “a software process”. The first router, third router, and second router may likewise be computer programs, as suggested in Applicant’s Specification at least at pars. [0054]-[0055], which support that the routers may be “network elements” which may be “a virtualized component” such as “a virtual router”. As such, the claim fails to fall within a statutory category. Hence, independent claim 1 and corresponding dependent claims 2-9 are not patent eligible.
Claim 10 recites a system comprising a second virtual router, a server, and a third virtual router. Initially, the claim fails to fall within any of the four enumerated categories of 35 U.S.C. 101 set forth above. Although the claim recites “a system comprising a second virtual router, a server, and a third virtual router”, the claim lacks the physical components necessary to constitute a machine or a manufacture within the meaning of 35 U.S.C. 101. The claimed components are clearly not a series of steps or acts so as to be a process, nor are they a combination of chemical compounds so as to be a composition of matter. Moreover, the server may be a computer program, as suggested in Applicant’s Specification at least at par. [0121], which supports that the server may refer to “a software process”. The second virtual router and the third virtual router may likewise be computer programs, as suggested in Applicant’s Specification at least at pars. [0054]-[0055], which support that the routers may be “network elements” which may be “a virtualized component” such as “a virtual router”. As such, the claim fails to fall within a statutory category. Hence, independent claim 10 and corresponding dependent claims 11-14 are not patent eligible.
Claim 15 recites a system comprising a first virtual router, a first server, a second virtual router, and a third virtual router. Initially, the claim fails to fall within any of the four enumerated categories of 35 U.S.C. 101 set forth above. Although the claim recites “a system comprising a first virtual router, a first server, a second virtual router, and a third virtual router”, the claim lacks the physical components necessary to constitute a machine or a manufacture within the meaning of 35 U.S.C. 101. The claimed components are clearly not a series of steps or acts so as to be a process, nor are they a combination of chemical compounds so as to be a composition of matter. Moreover, the first server may be a computer program, as suggested in Applicant’s Specification at least at par. [0121], which supports that the server may refer to “a software process”. The first virtual router, the second virtual router, and the third virtual router may likewise be computer programs, as suggested in Applicant’s Specification at least at pars. [0054]-[0055], which support that the routers may be “network elements” which may be “a virtualized component” such as “a virtual router”. As such, the claim fails to fall within a statutory category. Hence, independent claim 15 and corresponding dependent claims 16-20 are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-6 and 9-17 are rejected under 35 U.S.C. 103 as being unpatentable over Akkipeddi et al. (US 2025/0097821), hereinafter "Akkipeddi", in view of Chu et al. (US 2004/0255028), hereinafter "Chu".
Regarding claim 1, Akkipeddi teaches:
A system, comprising:
a server residing at a cell site of a radio access network, the server is configured to run a virtualized distributed unit (see Akkipeddi, Figs. 1, 2, par. [0056]: Mobile network system 100 includes radio access networks 9, and see par. [0080], lines 1-15: Each of the VRFs 212A-212K has a corresponding virtual network interface to DU 22A. Each of the virtual network interfaces of DU 22A may thus be mapped into a different L3VPN in vCSR 20A in order to, e.g., support a different one of multiple network slices. As described in further detail below, a CNI of server 12A, when triggered by pod events from orchestrator 50, dynamically adds or deletes virtual network interfaces between the pod (here deployed with DU 22A) and the vRouter 206A, which may also be deployed as container in some examples. The CNI also dynamically updates cRPD 24A (the control plane of vCSR 20A) with host routes for each DU 22A/pod virtual network interface and corresponding Layer 3 VPN mappings, in the form of Route Distinguishers and Route Targets; in this case, the server is deployed with DU 22A and executes it (i.e., runs a distributed unit)), and the virtualized distributed unit is configured to perform radio link control layer operations (see Akkipeddi, par. [0057], lines 8-10: DUs 22 may implement the Radio Link Control (RLC), Media Access Control (MAC), and the HI PHY layer);
a first router, the first router is separate from the server (see Akkipeddi, Fig. 2, item 204A, par. [0079], lines 9-11: Each of routers 204A-204K may be a gateway router for a data center having one or more servers to execute any one or more of CUs 213A-213K; in this case, router 204A corresponds to a first router which is separate from the server 12A) and is configured to receive data from the virtualized distributed unit running on the server (see Akkipeddi, Fig. 2, par. [0079]: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above. Each of routers 204A-204K may be a gateway router for a data center having one or more servers to execute any one or more of CUs 213A-213K; in this case, the first router may be in communication with the DU on the server via another router 206A);
a third router residing within a breakout edge data center (see Akkipeddi, Fig. 2, item 204B, par. [0079], lines 9-11: Each of routers 204A-204K may be a gateway router for a data center having one or more servers to execute any one or more of CUs 213A-213K; in this case, router 204B corresponds to a third router. The router being a gateway router for a data center corresponds to being within an edge data center),
a second router residing within a passthrough edge data center that is separate from the breakout edge data center (see Akkipeddi, Fig. 2, item 204K, par. [0079], lines 9-11: Each of routers 204A-204K may be a gateway router for a data center having one or more servers to execute any one or more of CUs 213A-213K; in this case, router 204K corresponds to a second router. The router being a gateway router for a data center corresponds to being within an edge data center. Each router may be a router for its own (i.e. separate) data center).
However, Akkipeddi does not teach:
wherein the first router is residing at the cell site,
wherein the third router is in communication with the first router via a direct private networking connection,
wherein the second router is in communication with the third router via a second direct private networking connection, and the second router is configured to be in communication with the first router.
Chu, in the same field of endeavor, teaches:
wherein the first router is residing at the cell site (see Chu, Fig. 3, par. [0038]: each PE router 308 is coupled to one or more customer edge (CE) devices 3221 through 322p (collectively CE devices 322) respectively at various customer sites 3201 through 320p (collectively sites 320)),
wherein the third router is in communication with the first router via a direct private networking connection (see Chu, Fig. 5, pars. [0051-0052]: FIG. 5 depicts a schematic diagram of an exemplary VPN network 500 having two full VPN components. In particular, an exemplary customer VPN 502 including thirteen (13) sites (A to M) 520 comprises two full mesh VPN components 5041 and 5042. VPN component 1 5041 includes sites A through H, while VPN component 2 5042 includes of sites F through M. It is noted that sites F, G, and H are in both VPN components 5041 and 5042. Each site illustratively has an associated CE router (not shown). Sites in VPN component 1 5041 can converse with each other, and similarly sites in VPN component 2 5042 can converse with each other. Further, sites F, G, and H can converse with all other sites by virtue of being members of both VPN components 5041 and 5042; in this case, routers in various locations are in communication with each other in a mesh system)
wherein the second router is in communication with the third router via a second direct private networking connection, and the second router is configured to be in communication with the first router (see Chu, Fig. 5, pars. [0051-0052]: FIG. 5 depicts a schematic diagram of an exemplary VPN network 500 having two full VPN components. In particular, an exemplary customer VPN 502 including thirteen (13) sites (A to M) 520 comprises two full mesh VPN components 5041 and 5042. VPN component 1 5041 includes sites A through H, while VPN component 2 5042 includes of sites F through M. It is noted that sites F, G, and H are in both VPN components 5041 and 5042. Each site illustratively has an associated CE router (not shown). Sites in VPN component 1 5041 can converse with each other, and similarly sites in VPN component 2 5042 can converse with each other. Further, sites F, G, and H can converse with all other sites by virtue of being members of both VPN components 5041 and 5042; in this case, routers in various locations are in communication with each other in a mesh system)
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the routers of Akkipeddi with the routers being in communication of Chu with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of improving performance and reducing cost of routers (see Chu, par. [0097]).
Regarding claim 2, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
wherein:
the first router comprises a first virtual router (see Akkipeddi, Fig. 2, par. [0079]: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B, and see par. [0047]: The Layer 3 VPNs may be implemented using virtual routing and forwarding instances (VRFs); in this case, routers have virtual components for virtual routing and forwarding (corresponding to a virtual router));
the second router comprises a second virtual router (see Akkipeddi, Fig. 2, par. [0079]: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B, and see par. [0047]: The Layer 3 VPNs may be implemented using virtual routing and forwarding instances (VRFs); in this case, routers have virtual components for virtual routing and forwarding (corresponding to a virtual router)); and
the third router comprises a third virtual router (see Akkipeddi, Fig. 2, par. [0079]: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B, and see par. [0047]: The Layer 3 VPNs may be implemented using virtual routing and forwarding instances (VRFs); in this case, routers have virtual components for virtual routing and forwarding (corresponding to a virtual router)).
Regarding claim 3, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
wherein:
the third router configured to receive data from the virtualized distributed unit via the direct private networking connection (see Akkipeddi, Fig. 2, par. [0080], lines 1-15: Each of the VRFs 212A-212K has a corresponding virtual network interface to DU 22A. Each of the virtual network interfaces of DU 22A may thus be mapped into a different L3VPN in vCSR 20A in order to, e.g., support a different one of multiple network slices. As described in further detail below, a CNI of server 12A, when triggered by pod events from orchestrator 50, dynamically adds or deletes virtual network interfaces between the pod (here deployed with DU 22A) and the vRouter 206A, which may also be deployed as container in some examples. The CNI also dynamically updates cRPD 24A (the control plane of vCSR 20A) with host routes for each DU 22A/pod virtual network interface and corresponding Layer 3 VPN mappings, in the form of Route Distinguishers and Route Targets, and see Akkipeddi, Fig. 2, par. [0079], lines 1-9: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above; in this case, communication, including data reception, between the DU and the third router may be through another router and its associated tunnels, corresponding to a direct private networking connection).
Akkipeddi does not teach, but Chu teaches:
the third virtual router configured to receive data via the second direct private networking connection (see Chu, Fig. 5, pars. [0051-0052]: FIG. 5 depicts a schematic diagram of an exemplary VPN network 500 having two full VPN components. In particular, an exemplary customer VPN 502 including thirteen (13) sites (A to M) 520 comprises two full mesh VPN components 5041 and 5042. VPN component 1 5041 includes sites A through H, while VPN component 2 5042 includes of sites F through M. It is noted that sites F, G, and H are in both VPN components 5041 and 5042. Each site illustratively has an associated CE router (not shown). Sites in VPN component 1 5041 can converse with each other, and similarly sites in VPN component 2 5042 can converse with each other. Further, sites F, G, and H can converse with all other sites by virtue of being members of both VPN components 5041 and 5042; in this case, routers in various locations are in communication with each other in a mesh system).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the routers of Akkipeddi with the routers being in communication of Chu with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of improving performance and reducing cost of routers (see Chu, par. [0097]).
Regarding claim 4, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
wherein:
the third router configured to exchange data with the virtualized distributed unit via the direct private networking connection (see Akkipeddi, Fig. 2, par. [0080], lines 1-15: Each of the VRFs 212A-212K has a corresponding virtual network interface to DU 22A. Each of the virtual network interfaces of DU 22A may thus be mapped into a different L3VPN in vCSR 20A in order to, e.g., support a different one of multiple network slices. As described in further detail below, a CNI of server 12A, when triggered by pod events from orchestrator 50, dynamically adds or deletes virtual network interfaces between the pod (here deployed with DU 22A) and the vRouter 206A, which may also be deployed as container in some examples. The CNI also dynamically updates cRPD 24A (the control plane of vCSR 20A) with host routes for each DU 22A/pod virtual network interface and corresponding Layer 3 VPN mappings, in the form of Route Distinguishers and Route Targets, and see Akkipeddi, Fig. 2, par. [0079], lines 1-9: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above; in this case, communication, including data exchange, between the DU and the third router may be through the first router and its associated tunnels, corresponding to a direct private networking connection).
Akkipeddi does not teach, but Chu teaches:
the third router configured to exchange data via both the second direct private networking connection and the direct private networking connection (see Chu, Fig. 5, pars. [0051-0052]: FIG. 5 depicts a schematic diagram of an exemplary VPN network 500 having two full VPN components. In particular, an exemplary customer VPN 502 including thirteen (13) sites (A to M) 520 comprises two full mesh VPN components 5041 and 5042. VPN component 1 5041 includes sites A through H, while VPN component 2 5042 includes of sites F through M. It is noted that sites F, G, and H are in both VPN components 5041 and 5042. Each site illustratively has an associated CE router (not shown). Sites in VPN component 1 5041 can converse with each other, and similarly sites in VPN component 2 5042 can converse with each other. Further, sites F, G, and H can converse with all other sites by virtue of being members of both VPN components 5041 and 5042; in this case, routers in various locations are in communication with each other in a mesh system).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the routers of Akkipeddi with the routers being in communication of Chu with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of improving performance and reducing cost of routers (see Chu, par. [0097]).
Regarding claim 5, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
wherein:
the third router configured to concurrently receive data from the virtualized distributed unit via the direct private networking connection (see Akkipeddi, Fig. 2, par. [0080], lines 1-15: Each of the VRFs 212A-212K has a corresponding virtual network interface to DU 22A. Each of the virtual network interfaces of DU 22A may thus be mapped into a different L3VPN in vCSR 20A in order to, e.g., support a different one of multiple network slices. As described in further detail below, a CNI of server 12A, when triggered by pod events from orchestrator 50, dynamically adds or deletes virtual network interfaces between the pod (here deployed with DU 22A) and the vRouter 206A, which may also be deployed as container in some examples. The CNI also dynamically updates cRPD 24A (the control plane of vCSR 20A) with host routes for each DU 22A/pod virtual network interface and corresponding Layer 3 VPN mappings, in the form of Route Distinguishers and Route Targets, and see Akkipeddi, Fig. 2, par. [0079], lines 1-9: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above; in this case, communication, including data reception, between the DU and the third router may be through the first router and its associated tunnels, corresponding to a direct private networking connection).
Akkipeddi does not teach, but Chu teaches:
the third router configured to concurrently receive data via the second direct private networking connection and the direct private networking connection (see Chu, Fig. 5, pars. [0051-0052]: FIG. 5 depicts a schematic diagram of an exemplary VPN network 500 having two full VPN components. In particular, an exemplary customer VPN 502 including thirteen (13) sites (A to M) 520 comprises two full mesh VPN components 5041 and 5042. VPN component 1 5041 includes sites A through H, while VPN component 2 5042 includes of sites F through M. It is noted that sites F, G, and H are in both VPN components 5041 and 5042. Each site illustratively has an associated CE router (not shown). Sites in VPN component 1 5041 can converse with each other, and similarly sites in VPN component 2 5042 can converse with each other. Further, sites F, G, and H can converse with all other sites by virtue of being members of both VPN components 5041 and 5042; in this case, routers in various locations are in communication with each other in a mesh system).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the routers of Akkipeddi with the routers being in communication of Chu with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of improving performance and reducing cost of routers (see Chu, par. [0097]).
Regarding claim 6, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
wherein:
the server is configured to run the virtualized distributed unit as a containerized application (see Akkipeddi, par. [0066], lines 1-14: vCSR 20A executed by server 12A includes cRPD 24A and a forwarding plane of server 12A (e.g., a SmartNIC, kernel-based forwarding plane, or Data Plane Development Kit (DPDK)-based forwarding plane). cRPD 24A provides one or more of the above routing functions to program a forwarding plane of vCSR 20A in order to, among other tasks, advertise a layer 3 route for DU 22A outside of the cluster—including across the midhaul network to CU 13A—and forward layer 3 packets between DU 22A and CU 13A. In this way, the techniques realize cloud-native, containerized cell site routers 20 executing on the same servers 12 as containerized DUs 22, thus significantly reducing latency on the midhaul between DUs 22 and CUs 13, and see Akkipeddi, par. [0068], lines 8-10: Application workloads can be containerized network functions (CNFs), such as DUs).
Regarding claim 9, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
wherein:
the server comprises a virtual machine (see Akkipeddi, par. [0062], lines 1-6: Mobile network system 100 includes multiple servers 12A-12X to execute DUs 22. Each of servers 12 may be a real or virtual server that hosts/executes software that implements DUs 22. Such software may include one or more applications deployed as, e.g., virtual machine or containers, to servers 12).
Regarding claim 10, Akkipeddi teaches:
A system, comprising:
a second virtual router within a passthrough edge data center (see Akkipeddi, Fig. 2, item 204K, par. [0079]: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above. Each of routers 204A-204K may be a gateway router for a data center having one or more servers to execute any one or more of CUs 213A-213K; in this case, router 204K corresponds to a second router. The router being a gateway router for a data center corresponds to being within an edge data center. Routers have virtual components for virtual routing and forwarding (corresponding to a virtual router)), the second virtual router configured to receive data from a virtualized distributed unit (see Akkipeddi, Fig. 2, par. [0079], lines 1-9: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above, and see Akkipeddi, Fig. 2, par. [0080], lines 1-15: Each of the VRFs 212A-212K has a corresponding virtual network interface to DU 22A. Each of the virtual network interfaces of DU 22A may thus be mapped into a different L3VPN in vCSR 20A in order to, e.g., support a different one of multiple network slices. As described in further detail below, a CNI of server 12A, when triggered by pod events from orchestrator 50, dynamically adds or deletes virtual network interfaces between the pod (here deployed with DU 22A) and the vRouter 206A, which may also be deployed as container in some examples. The CNI also dynamically updates cRPD 24A (the control plane of vCSR 20A) with host routes for each DU 22A/pod virtual network interface and corresponding Layer 3 VPN mappings, in the form of Route Distinguishers and Route Targets; in this case, communication, including data reception, between the DU and the second router may be through another router and its associated tunnels, corresponding to a direct private networking connection. Routers have virtual components for virtual routing and forwarding (corresponding to a virtual router));
a server that is configured to run the virtualized distributed unit (see Akkipeddi, Fig. 2, par. [0080], lines 1-15: Each of the VRFs 212A-212K has a corresponding virtual network interface to DU 22A. Each of the virtual network interfaces of DU 22A may thus be mapped into a different L3VPN in vCSR 20A in order to, e.g., support a different one of multiple network slices. As described in further detail below, a CNI of server 12A, when triggered by pod events from orchestrator 50, dynamically adds or deletes virtual network interfaces between the pod (here deployed with DU 22A) and the vRouter 206A, which may also be deployed as container in some examples. The CNI also dynamically updates cRPD 24A (the control plane of vCSR 20A) with host routes for each DU 22A/pod virtual network interface and corresponding Layer 3 VPN mappings, in the form of Route Distinguishers and Route Targets; in this case, the server is deployed with DU 22A and executes it (i.e., runs the virtualized distributed unit)), the virtualized distributed unit is configured to perform radio link control layer operations (see Akkipeddi, par. [0057], lines 8-10: DUs 22 may implement the Radio Link Control (RLC), Media Access Control (MAC), and the HI PHY layer); and
a third virtual router within a breakout edge data center that is separate from the passthrough edge data center (see Akkipeddi, Fig. 2, item 204B, par. [0079]: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above. Each of routers 204A-204K may be a gateway router for a data center having one or more servers to execute any one or more of CUs 213A-213K; in this case, router 204B corresponds to a third router. The router being a gateway router for a data center corresponds to being within an edge data center. Each router may be a router for its own (i.e. separate) data center. Routers have virtual components for virtual routing and forwarding (corresponding to a virtual router))
However, Akkipeddi does not teach:
the third router in communication with the second router via a direct private networking connection.
Chu, in the same field of endeavor, teaches:
the third router in communication with the second router via a direct private networking connection (see Chu, Fig. 5, pars. [0051-0052]: FIG. 5 depicts a schematic diagram of an exemplary VPN network 500 having two full VPN components. In particular, an exemplary customer VPN 502 including thirteen (13) sites (A to M) 520 comprises two full mesh VPN components 5041 and 5042. VPN component 1 5041 includes sites A through H, while VPN component 2 5042 includes sites F through M. It is noted that sites F, G, and H are in both VPN components 5041 and 5042. Each site illustratively has an associated CE router (not shown). Sites in VPN component 1 5041 can converse with each other, and similarly sites in VPN component 2 5042 can converse with each other. Further, sites F, G, and H can converse with all other sites by virtue of being members of both VPN components 5041 and 5042; in this case, routers in various locations are in communication with each other in a mesh system).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the routers of Akkipeddi with the routers being in communication of Chu with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of improving performance and reducing cost of routers (see Chu, par. [0097]).
Regarding claim 11, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
wherein:
the server resides at the passthrough edge data center (see Akkipeddi, par. [0064], lines 2-7: virtualized cell site routers 24A-24X (“vCSRs 20A-20X” and collectively, “vCSRs 20”) provide layer 3 routing functionality between DUs 22 and CUs 13. These vCSR 24 may be executed on the same server 12 as one or more DUs 22 to provide provider edge router functionality to such DUs 22, and see Akkipeddi, par. [0080], lines 17-20: vCSR 20A is introduced as a cloud-native router into the data path to, e.g., support the F1 interfaces to CUs 213A-213K that may be executing in edge or regional data center sites).
Regarding claim 12, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
wherein:
the server is configured to run the virtualized distributed unit as a containerized application (see Akkipeddi, par. [0066], lines 1-14: vCSR 20A executed by server 12A includes cRPD 24A and a forwarding plane of server 12A (e.g., a SmartNIC, kernel-based forwarding plane, or Data Plane Development Kit (DPDK)-based forwarding plane). cRPD 24A provides one or more of the above routing functions to program a forwarding plane of vCSR 20A in order to, among other tasks, advertise a layer 3 route for DU 22A outside of the cluster—including across the midhaul network to CU 13A—and forward layer 3 packets between DU 22A and CU 13A. In this way, the techniques realize cloud-native, containerized cell site routers 20 executing on the same servers 12 as containerized DUs 22, thus significantly reducing latency on the midhaul between DUs 22 and CUs 13, and see Akkipeddi, par. [0068], lines 8-10: Application workloads can be containerized network functions (CNFs), such as DUs).
Regarding claim 13, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
further comprising:
a first virtual router (see Akkipeddi, Fig. 2, item 204A, par. [0079]: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above. Each of routers 204A-204K may be a gateway router for a data center having one or more servers to execute any one or more of CUs 213A-213K; in this case, router 204A corresponds to a first router which is separate from the server 12A. Routers have virtual components for virtual routing and forwarding (corresponding to a virtual router)), the third virtual router in communication with the virtualized distributed unit via the second direct private networking connection (see Akkipeddi, Fig. 2, par. [0080], lines 1-15: Each of the VRFs 212A-212K has a corresponding virtual network interface to DU 22A. Each of the virtual network interfaces of DU 22A may thus be mapped into a different L3VPN in vCSR 20A in order to, e.g., support a different one of multiple network slices. As described in further detail below, a CNI of server 12A, when triggered by pod events from orchestrator 50, dynamically adds or deletes virtual network interfaces between the pod (here deployed with DU 22A) and the vRouter 206A, which may also be deployed as container in some examples. The CNI also dynamically updates cRPD 24A (the control plane of vCSR 20A) with host routes for each DU 22A/pod virtual network interface and corresponding Layer 3 VPN mappings, in the form of Route Distinguishers and Route Targets, and see Akkipeddi, Fig. 2, par. 
[0079], lines 1-9: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above; in this case, communication between the DU and the third router may be through another router and its associated tunnels, corresponding to a direct private networking connection).
Akkipeddi does not teach, but Chu teaches:
wherein the first router residing at a cell site (see Chu, Fig. 3, par. [0038]: each PE router 308 is coupled to one or more customer edge (CE) devices 3221 through 322p (collectively CE devices 322) respectively at various customer sites 3201 through 320p (collectively sites 320))
the first router in communication with the third router via a second direct private networking connection (see Chu, Fig. 5, pars. [0051-0052]: FIG. 5 depicts a schematic diagram of an exemplary VPN network 500 having two full VPN components. In particular, an exemplary customer VPN 502 including thirteen (13) sites (A to M) 520 comprises two full mesh VPN components 5041 and 5042. VPN component 1 5041 includes sites A through H, while VPN component 2 5042 includes sites F through M. It is noted that sites F, G, and H are in both VPN components 5041 and 5042. Each site illustratively has an associated CE router (not shown). Sites in VPN component 1 5041 can converse with each other, and similarly sites in VPN component 2 5042 can converse with each other. Further, sites F, G, and H can converse with all other sites by virtue of being members of both VPN components 5041 and 5042; in this case, routers in various locations are in communication with each other in a mesh system),
the third router in communication via the direct private networking connection and the second direct private networking connection (see Chu, Fig. 5, pars. [0051-0052], as reproduced above; in this case, routers in various locations are in communication with each other in a mesh system).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the routers of Akkipeddi with the routers being in communication of Chu with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of improving performance and reducing cost of routers (see Chu, par. [0097]).
Regarding claim 14, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
wherein:
the server is configured to host a containerized environment that runs the virtualized distributed unit (see Akkipeddi, par. [0062], lines 1-6: Mobile network system 100 includes multiple servers 12A-12X to execute DUs 22. Each of servers 12 may be a real or virtual server that hosts/executes software that implements DUs 22. Such software may include one or more applications deployed as, e.g., virtual machine or containers, to servers 12, and see Akkipeddi, par. [0066], lines 1-14: vCSR 20A executed by server 12A includes cRPD 24A and a forwarding plane of server 12A (e.g., a SmartNIC, kernel-based forwarding plane, or Data Plane Development Kit (DPDK)-based forwarding plane). cRPD 24A provides one or more of the above routing functions to program a forwarding plane of vCSR 20A in order to, among other tasks, advertise a layer 3 route for DU 22A outside of the cluster—including across the midhaul network to CU 13A—and forward layer 3 packets between DU 22A and CU 13A. In this way, the techniques realize cloud-native, containerized cell site routers 20 executing on the same servers 12 as containerized DUs 22, thus significantly reducing latency on the midhaul between DUs 22 and CUs 13, and see Akkipeddi, par. [0068], lines 8-10: Application workloads can be containerized network functions (CNFs), such as DUs) within an isolated runtime environment (see Akkipeddi, pars. [0179-0180]: Pod 202A includes one or more application containers 229A. Pod 202B includes an instance of cRPD 324. Container platform 804 includes container runtime 208, orchestration agent 310, service proxy 211, and CNI 312. Container engine 208 includes code executable by microprocessor 810. Container runtime 208 may be one or more computer processes. Container engine 208 runs containerized applications in the form of containers 229A-229B; in this case, the DU may have containers which have a runtime).
Regarding claim 15, Akkipeddi teaches:
A system, comprising:
a first virtual router of a radio access network (see Akkipeddi, Fig. 2, item 204A, par. [0079]: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above. Each of routers 204A-204K may be a gateway router for a data center having one or more servers to execute any one or more of CUs 213A-213K; in this case, router 204A corresponds to a first router which is separate from the server 12A. Routers have virtual components for virtual routing and forwarding (corresponding to a virtual router)), the first router configured to receive data from a virtualized distributed unit (see Akkipeddi, Fig. 2, par. [0079], as reproduced above; in this case, the first router may be in communication with the DU on the server via the vRouter 206A);
a first server residing at the cell site, the first server is configured to run the virtualized distributed unit (see Akkipeddi, Fig. 2, par. [0080], lines 1-15: Each of the VRFs 212A-212K has a corresponding virtual network interface to DU 22A. Each of the virtual network interfaces of DU 22A may thus be mapped into a different L3VPN in vCSR 20A in order to, e.g., support a different one of multiple network slices. As described in further detail below, a CNI of server 12A, when triggered by pod events from orchestrator 50, dynamically adds or deletes virtual network interfaces between the pod (here deployed with DU 22A) and the vRouter 206A, which may also be deployed as container in some examples. The CNI also dynamically updates cRPD 24A (the control plane of vCSR 20A) with host routes for each DU 22A/pod virtual network interface and corresponding Layer 3 VPN mappings, in the form of Route Distinguishers and Route Targets; in this case, the server performs actions deployed with a DU (i.e. runs a distributed unit)) at a first point in time (see Akkipeddi, par. [0062], lines 1-6: Mobile network system 100 includes multiple servers 12A-12X to execute DUs 22. Each of servers 12 may be a real or virtual server that hosts/executes software that implements DUs 22. Such software may include one or more applications deployed as, e.g., virtual machine or containers, to servers 12, and see Akkipeddi, pars. [0179-0180]: Pod 202A includes one or more application containers 229A. Pod 202B includes an instance of cRPD 324. Container platform 804 includes container runtime 208, orchestration agent 310, service proxy 211, and CNI 312. Container engine 208 includes code executable by microprocessor 810. Container runtime 208 may be one or more computer processes. Container engine 208 runs containerized applications in the form of containers 229A-229B; in this case, the DU may have containers which have a runtime);
a second virtual router residing within a first data center (see Akkipeddi, Fig. 2, item 204B, par. [0079]: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above. Each of routers 204A-204K may be a gateway router for a data center having one or more servers to execute any one or more of CUs 213A-213K; in this case, router 204B corresponds to a second router. The router being a gateway router for a data center corresponds to being within an edge data center. Routers have virtual components for virtual routing and forwarding (corresponding to a virtual router)); and
a third virtual router residing within a second data center that is separate from the first data center (see Akkipeddi, Fig. 2, item 204K, par. [0079]: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above. Each of routers 204A-204K may be a gateway router for a data center having one or more servers to execute any one or more of CUs 213A-213K; in this case, router 204K corresponds to a third router. The router being a gateway router for a data center corresponds to being within an edge data center. Each router may be a router for its own (i.e. separate) data center. Routers have virtual components for virtual routing and forwarding (corresponding to a virtual router))
However, Akkipeddi does not teach:
wherein the first router is residing at the cell site
the second router is in communication with the first virtual router via a direct private networking connection
the third router is in communication with the second router via a second private networking connection.
Chu, in the same field of endeavor, teaches:
wherein the first router is residing at the cell site (see Chu, Fig. 3, par. [0038]: each PE router 308 is coupled to one or more customer edge (CE) devices 3221 through 322p (collectively CE devices 322) respectively at various customer sites 3201 through 320p (collectively sites 320))
the second router is in communication with the first router via a direct private networking connection (see Chu, Fig. 5, pars. [0051-0052]: FIG. 5 depicts a schematic diagram of an exemplary VPN network 500 having two full VPN components. In particular, an exemplary customer VPN 502 including thirteen (13) sites (A to M) 520 comprises two full mesh VPN components 5041 and 5042. VPN component 1 5041 includes sites A through H, while VPN component 2 5042 includes sites F through M. It is noted that sites F, G, and H are in both VPN components 5041 and 5042. Each site illustratively has an associated CE router (not shown). Sites in VPN component 1 5041 can converse with each other, and similarly sites in VPN component 2 5042 can converse with each other. Further, sites F, G, and H can converse with all other sites by virtue of being members of both VPN components 5041 and 5042; in this case, routers in various locations are in communication with each other in a mesh system)
the third router is in communication with the second router via a second private networking connection (see Chu, Fig. 5, pars. [0051-0052], as reproduced above; in this case, routers in various locations are in communication with each other in a mesh system).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the routers of Akkipeddi with the routers being in communication of Chu with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of improving performance and reducing cost of routers (see Chu, par. [0097]).
Regarding claim 16, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
wherein:
the second virtual router configured to receive data from the virtualized distributed unit via the direct private networking connection (see Akkipeddi, Fig. 2, par. [0080], lines 1-15: Each of the VRFs 212A-212K has a corresponding virtual network interface to DU 22A. Each of the virtual network interfaces of DU 22A may thus be mapped into a different L3VPN in vCSR 20A in order to, e.g., support a different one of multiple network slices. As described in further detail below, a CNI of server 12A, when triggered by pod events from orchestrator 50, dynamically adds or deletes virtual network interfaces between the pod (here deployed with DU 22A) and the vRouter 206A, which may also be deployed as container in some examples. The CNI also dynamically updates cRPD 24A (the control plane of vCSR 20A) with host routes for each DU 22A/pod virtual network interface and corresponding Layer 3 VPN mappings, in the form of Route Distinguishers and Route Targets, and see Akkipeddi, Fig. 2, par. [0079], lines 1-9: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above; in this case, communication, including data reception, between the DU and the second router may be through another router and its associated tunnels, corresponding to a direct private networking connection)
Akkipeddi does not teach, but Chu teaches:
the second router configured to receive data via the direct private networking connection and the second direct private networking connection (see Chu, Fig. 5, pars. [0051-0052]: FIG. 5 depicts a schematic diagram of an exemplary VPN network 500 having two full VPN components. In particular, an exemplary customer VPN 502 including thirteen (13) sites (A to M) 520 comprises two full mesh VPN components 5041 and 5042. VPN component 1 5041 includes sites A through H, while VPN component 2 5042 includes sites F through M. It is noted that sites F, G, and H are in both VPN components 5041 and 5042. Each site illustratively has an associated CE router (not shown). Sites in VPN component 1 5041 can converse with each other, and similarly sites in VPN component 2 5042 can converse with each other. Further, sites F, G, and H can converse with all other sites by virtue of being members of both VPN components 5041 and 5042; in this case, routers in various locations are in communication with each other in a mesh system).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the routers of Akkipeddi with the routers being in communication of Chu with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of improving performance and reducing cost of routers (see Chu, par. [0097]).
Regarding claim 17, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
wherein:
the second virtual router configured to concurrently exchange data with the virtualized distributed unit via the direct private networking connection (see Akkipeddi, Fig. 2, par. [0080], lines 1-15: Each of the VRFs 212A-212K has a corresponding virtual network interface to DU 22A. Each of the virtual network interfaces of DU 22A may thus be mapped into a different L3VPN in vCSR 20A in order to, e.g., support a different one of multiple network slices. As described in further detail below, a CNI of server 12A, when triggered by pod events from orchestrator 50, dynamically adds or deletes virtual network interfaces between the pod (here deployed with DU 22A) and the vRouter 206A, which may also be deployed as container in some examples. The CNI also dynamically updates cRPD 24A (the control plane of vCSR 20A) with host routes for each DU 22A/pod virtual network interface and corresponding Layer 3 VPN mappings, in the form of Route Distinguishers and Route Targets, and see Akkipeddi, Fig. 2, par. [0079], lines 1-9: Virtualized cell site router 20A includes a virtual router forwarding plane (vRouter) 206A configured with VRFs 212A-212K (collectively, “VRFs 212”) for respective network slices implemented with respective L3VPNs, which vCSR 20A and routers 204A-204B implement using tunnels 231A-231K connecting VRFs 212 to VRFs 210A-210K on routers 204A-204B. Each of tunnels 231A-231K may represent a SR-MPLSoIPv6 or other type of tunnel mentioned above; in this case, communication, including data exchange, between the DU and the second router may be through the first router and its associated tunnels, corresponding to a direct private networking connection).
Akkipeddi does not teach, but Chu teaches:
the second router configured to concurrently exchange data via the second direct private networking connection and the direct private networking connection (see Chu, Fig. 5, pars. [0051-0052]: FIG. 5 depicts a schematic diagram of an exemplary VPN network 500 having two full VPN components. In particular, an exemplary customer VPN 502 including thirteen (13) sites (A to M) 520 comprises two full mesh VPN components 5041 and 5042. VPN component 1 5041 includes sites A through H, while VPN component 2 5042 includes sites F through M. It is noted that sites F, G, and H are in both VPN components 5041 and 5042. Each site illustratively has an associated CE router (not shown). Sites in VPN component 1 5041 can converse with each other, and similarly sites in VPN component 2 5042 can converse with each other. Further, sites F, G, and H can converse with all other sites by virtue of being members of both VPN components 5041 and 5042; in this case, routers in various locations are in communication with each other in a mesh system).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the routers of Akkipeddi with the routers being in communication of Chu with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of improving performance and reducing cost of routers (see Chu, par. [0097]).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Akkipeddi in view of Chu as applied to claims 1-6 and 9-17 above, and further in view of Lu et al. (WO 2020/253347), published December 24, 2020, hereinafter “Lu” (see “WO2020253347_Translation.pdf” for citations).
Regarding claim 7, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
wherein:
the server is configured to run a replication controller for the virtualized distributed unit (see Akkipeddi, Fig. 3B, par. [0084], lines 3-6: Containerized networking interface 312 may be a CNI plugin that configures the interfaces of the container workloads (DUs 22A-1 to 22A-N in this example) with the DPDK-based vRouter 206A; in this case, the containerized networking interface corresponds to a replication controller).
However, the combination of Akkipeddi in view of Chu does not teach:
a replication controller that regulates a number of containerized applications
Lu, in the same field of endeavor, teaches:
a replication controller that regulates a number of containerized applications (see Lu, par. [0066]: in Figure 1, a Kubernetes container cluster includes two container clusters, IDC1 and IDC2. Since a container contains an application, Kubernetes' management of containers is equivalent to the management of application deployment, and see Lu, par. [0122]: The container cluster management device receives an application processing request sent by the tool platform, the application processing request includes a program file of the application to be processed, resource requirement information of the program file, an application identifier to be processed, and a container cluster identifier and a corresponding server identifier corresponding to the application to be processed, wherein a readable format of the container cluster corresponding to the container cluster identifier is different from a readable format of the tool platform, and the program file includes files for upgrading, expanding, and deleting the application to be processed; in this case, the device may expand or delete applications (corresponding to regulating a number of applications))
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the controller of the combination of Akkipeddi in view of Chu with the controller regulating a number of containerized applications of Lu with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of reducing the management cost of the container cluster (see Lu, par. [0051]).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Akkipeddi in view of Chu as applied to claims 1-6 and 9-17 above, and further in view of Lu, and further in view of Atwal et al. (US 12,212,491), hereinafter “Atwal”.
Regarding claim 8, the combination of Akkipeddi in view of Chu teaches the system.
However, the combination of Akkipeddi in view of Chu does not teach:
wherein:
the server is configured to decrease the number of containerized applications for the virtualized distributed unit in response to a change in a power requirement for the virtualized distributed unit.
Lu, in the same field of endeavor, teaches:
wherein:
the server is configured to decrease the number of containerized applications for the virtualized distributed unit (see Lu, par. [0066]: in Figure 1, a Kubernetes container cluster includes two container clusters, IDC1 and IDC2. Since a container contains an application, Kubernetes' management of containers is equivalent to the management of application deployment, and see Lu, par. [0122]: The container cluster management device receives an application processing request sent by the tool platform, the application processing request includes a program file of the application to be processed, resource requirement information of the program file, an application identifier to be processed, and a container cluster identifier and a corresponding server identifier corresponding to the application to be processed, wherein a readable format of the container cluster corresponding to the container cluster identifier is different from a readable format of the tool platform, and the program file includes files for upgrading, expanding, and deleting the application to be processed; in this case, the device may delete the applications (corresponding to decreasing the number of applications))
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the controller of the combination of Akkipeddi in view of Chu with the controller decreasing a number of containerized applications of Lu with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of reducing the management cost of the container cluster (see Lu, par. [0051]).
However, the combination of Akkipeddi in view of Chu, and further in view of Lu, does not teach:
in response to a change in a power requirement for the virtualized distributed unit.
Atwal, in the same field of endeavor, teaches:
in response to a change in a power requirement for the virtualized distributed unit (see Atwal, col. 76, lines 14-23: Control plane algorithms may only need to be changed when a new application may be introduced or a new call type. To change the control plane algorithms, there may be a need to be able to refresh LEO software such that a new LEO software may be uploaded to replace previous LEO software instead of reprogramming. Volume of data may not affect the control plane except the control plane may need more power because it is handling more connections per second or per hour, and see Atwal, col. 76, lines 37-46: entire applications or at least a majority portion of the applications may be run on the LEO system to avoid security issues with running these applications across one or more terrestrial systems. To do this, the LEO system may need sufficient compute power which may be based on hardware. This may be sufficient compute power (and related hardware) to accommodate execution of at least control portions of applications, majority portions of applications, and/or entire applications (e.g., as needed based on security standards for each application), and see Atwal, col. 77, lines 6-11: Kubernetes servers may be used to provide control plane-related software applications that may decide when and where to run pods, manage traffic routing, and scale the pods based on the utilization or other metrics that may be defined by the administrator of the LEO system, and see Atwal, col. 81, lines 10-14: The LEO system or more generally the platform may utilize other technologies. For example, the platform may use open RAN (O-RAN) specific items for a distributed unit/central unit (DU/CU) split and may introduce some specific security language; in this case, modifications to the servers and algorithms may be based on a power necessary for applications).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the decrease of applications of the combination of Akkipeddi in view of Chu, and further in view of Lu, with the modification in response to a change in a power requirement of Atwal with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of customized control for an application’s specific needs (see Atwal, col. 14, lines 6-24).
Claims 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Akkipeddi in view of Chu, as applied to claims 1-6 and 9-17 above, and further in view of Sabella et al. (US 12,047,986), hereinafter “Sabella”.
Regarding claim 18, the combination of Akkipeddi in view of Chu teaches the system. Akkipeddi further teaches:
further comprising:
a second server residing at the second data center configured to run the virtualized distributed unit (see Akkipeddi, Fig. 1, par. [0062], lines 1-6: Mobile network system 100 includes multiple servers 12A-12X to execute DUs 22. Each of servers 12 may be a real or virtual server that hosts/executes software that implements DUs 22. Such software may include one or more applications deployed as, e.g., virtual machine or containers, to servers 12; in this case, multiple servers may perform actions deployed with a DU (i.e., run a distributed unit))
However, the combination of Akkipeddi in view of Chu does not teach:
at a second point in time subsequent to the first point in time.
Sabella, in the same field of endeavor, teaches:
at a second point in time subsequent to the first point in time (see Sabella, col. 50, lines 55-63: in the beginning (i.e., at time instant t0), an Ultra-Reliable Low Latency Communication (URLLC) slice is active. Examples involve automation process/machinery control, robotic arm operation and others. However, at a later time instant (i.e., at time instant t1), a different slice is activated (e.g., an eMBB service), which creates the need to instantiate a new MEC app for local video processing, for instance, for teleoperated trouble-shooting, and see Sabella, Fig. 2B, col. 12, lines 7-14: The SCEF 220 can be configured to expose the 3GPP network services and capabilities to one or more applications running on one or more service capability server (SCS)/application server (AS), such as SCS/AS 254A, 254B, . . . , 254N. Each of the SCS/AS 254A-254N can communicate with the SCEF 220 via application programming interfaces (APIs) 252A, 252B, 252C, . . . , 252N, as illustrated in FIG. 2B; in this case, different applications and servers may run at a later point in time).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the second server of the combination of Akkipeddi in view of Chu with the server running at a later point in time of Sabella with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of full integration of MEC in 5G systems (see Sabella, col. 45, lines 13-21).
Regarding claim 19, the combination of Akkipeddi in view of Chu, and further in view of Sabella, teaches the system. Akkipeddi further teaches:
wherein:
the system is configured to redeploy the virtualized distributed unit to the second server (see Akkipeddi, Fig. 1, par. [0074], lines 1-5: orchestrator 50 controls the deployment, scaling, and operations of containers across clusters of servers 12 and the providing of computing infrastructure, which may include container-centric computing infrastructure, and see Akkipeddi, par. [0064], lines 5-13: These vCSR 24 may be executed on the same server 12 as one or more DUs 22 to provide provider edge router functionality to such DUs 22. Although each of vCSRs 20 is termed a “cell site” router, any of vCSRs 20 may be deployed to a local data center together with one or more DUs 22 for which the vCSR provides IP services, as shown with respect to vCSRs 20A-20N, i.e., where the local data center includes servers 12 that execute DUs 22 for one or more cell sites, and see Akkipeddi, Fig. 1, par. [0062], lines 1-6: Mobile network system 100 includes multiple servers 12A-12X to execute DUs 22. Each of servers 12 may be a real or virtual server that hosts/executes software that implements DUs 22. Such software may include one or more applications deployed as, e.g., virtual machine or containers, to servers 12; in this case, the DU applications may be executed on different servers)
The combination of Akkipeddi in view of Chu does not teach, but Sabella teaches:
based on a latency requirement for a network slice (see Sabella, col. 42, lines 4-13: Each network slice instance can be specifically configured to support performance related to a QoS flow of a UE, including capacity, security levels, geographical coverage, and latency. Network slice instances may include partitioning of RAN functionalities, core infrastructure including the Evolved Packet Core (EPC), as well as the switches and Data Center Servers where the 5G mobile applications and content may be hosted (e.g., as VNFs provided by MEC applications executing on resources of a MEC system within the 5G communication network); in this case, servers may be configured based on network slice instances configured to support latency requirements).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the use of the second server of the combination of Akkipeddi in view of Chu with the use of the server based on a latency requirement of Sabella with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of full integration of MEC in 5G systems (see Sabella, col. 45, lines 13-21).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Akkipeddi in view of Chu, and further in view of Sabella, as applied to claims 18-19 above, and further in view of Chen et al. (CN 113381892), published September 10, 2021, hereinafter “Chen” (see “CN113381892_Translation.pdf” for citations).
Regarding claim 20, the combination of Akkipeddi in view of Chu, and further in view of Sabella, teaches the system. Akkipeddi further teaches:
wherein:
the system is configured to redeploy the virtualized distributed unit to the second server (see Akkipeddi, Fig. 1, par. [0074], lines 1-5: orchestrator 50 controls the deployment, scaling, and operations of containers across clusters of servers 12 and the providing of computing infrastructure, which may include container-centric computing infrastructure, and see Akkipeddi, par. [0064], lines 5-13: These vCSR 24 may be executed on the same server 12 as one or more DUs 22 to provide provider edge router functionality to such DUs 22. Although each of vCSRs 20 is termed a “cell site” router, any of vCSRs 20 may be deployed to a local data center together with one or more DUs 22 for which the vCSR provides IP services, as shown with respect to vCSRs 20A-20N, i.e., where the local data center includes servers 12 that execute DUs 22 for one or more cell sites, and see Akkipeddi, Fig. 1, par. [0062], lines 1-6: Mobile network system 100 includes multiple servers 12A-12X to execute DUs 22. Each of servers 12 may be a real or virtual server that hosts/executes software that implements DUs 22. Such software may include one or more applications deployed as, e.g., virtual machine or containers, to servers 12; in this case, the DU applications may be executed on different servers)
However, the combination of Akkipeddi in view of Chu, and further in view of Sabella, does not teach:
based on a power requirement for a network slice.
Chen, in the same field of endeavor, teaches:
based on a power requirement for a network slice (see Chen, par. [0115]: The modules coordinate their work, reasonably allocate slice resources, and respond to various failures to ensure the low latency and high reliability required by new power services to the greatest extent possible, ensure business continuity, and improve the disaster recovery and redundant backup capabilities of the slices. The local power grid business module can execute the distribution of network slices, monitor the operating status of each user's power equipment in the corresponding area, and provide fault warning information in a timely manner. The deep reinforcement learning model in the local data processing module processes the slice allocation information to be selected and the parameter information to be processed, obtains the slice allocation method and the model parameter information to be used, and sends the slice allocation method to the edge slice processing module, and encrypts the model parameter information to be used and sends it to the edge data processing module. Based on the federated learning framework, the edge data processing module can receive and decrypt the model parameter information to be used, select a suitable aggregation method (such as weighted average) to process the model parameter information to be used, obtain the model parameter information to be updated, and then send the model parameter information to be updated to the local data processing module, so that the deep reinforcement learning model in the local data processing module can be updated based on the model parameter information to be updated. The edge slice processing module receives the slice allocation method and allocates network slices to each user in the corresponding area based on the slice allocation method, and monitors the operating status of the edge server, and see Chen, par. [0052]: the edge network system is composed of multiple edge servers and can process the target network slice allocation method. It should be noted that at least one edge server (such as an edge MEC server) is set up in each area. The edge network system receives the network slice allocation method of all areas, and allocates network slices to each user in the corresponding area based on the network slicing method; in this case, multiple servers allocate network slices which may be targeted based on a power requirement).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the use of the second server of the combination of Akkipeddi in view of Chu, and further in view of Sabella, with the use of the server based on a power requirement of Chen with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification for the benefit of making the allocation of network slices more flexible (see Chen, par. [0115]).
Response to Arguments
Applicant’s arguments, filed 09/30/2025, with respect to the rejection(s) of claims 1, 10, and 15 under 35 USC § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made under 35 USC § 103 over Akkipeddi in view of Chu.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Akkipeddi et al. (US 2022/0279420) teaches techniques for a containerized router operating within a cloud native orchestration framework.
Barbir et al. (US 2005/0265308) teaches selection of a logical grouping of one or more virtual private network tunnels.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CALEB J BALLOWE whose telephone number is (571)270-0410. The examiner can normally be reached MON-FRI 7:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nishant B. Divecha can be reached at (571) 270-3125. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.J.B./Examiner, Art Unit 2419
/Nishant Divecha/Supervisory Patent Examiner, Art Unit 2419