DETAILED ACTION
This action is responsive to communications filed 17 February 2026.
Claims 1-20 are subject to examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 17 December 2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-6, 8-9, and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Leary et al. (US-12380282-B2), hereinafter Leary, in view of Lee et al. (US-20250140245-A1), hereinafter Lee, further in view of Yu et al. (US-20200403922-A1), hereinafter Yu.
Regarding claim 1, Leary discloses:
A system ([3:35-63] system 100), comprising:
a cloud provider network comprising at least one computing device configured to execute a container hosting a large language model (LLM) ([3:35-63] cloud computing environment or multi-tenant resource provider environment, etc. … shared resources are used to host one or more large language models 104 [27:41-54] models may be executed as part of execution of containerized instantiations of applications [37:15-26] container may be used to serve different models);
utilize the LLM to provide a functionality for the edge CPE device ([12:50-13:36] content server 720 (e.g., a cloud server or edge server) may initiate a session associated with at least client device 702 (i.e. edge CPE with the edge server) … NLP-related processing, which can be passed to a large language model 730 for processing).
Leary does not explicitly disclose:
the at least one computing device being in a region of the cloud provider network;
a customer premises network of a customer that uses private network addresses and is separated from a public network by a gateway; and
an edge customer premises equipment (CPE) device on the customer premises network, wherein the edge CPE device is configured to at least:
establish a layer-3 virtual private network between the cloud provider network and the customer premises network;
establish a layer-2 virtual interface for the container using a tunnel to encapsulate layer-2 traffic over the layer-3 virtual private network; and
provide a functionality for the edge CPE device via the layer-2 virtual interface.
However, Lee discloses:
the at least one computing device being in a region of the cloud provider network ([0069] user could keep their personal LLM … installed on their own private server at a cloud provider);
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary in view of Lee to have the at least one computing device be in a region of the cloud provider network. One of ordinary skill in the art would have been motivated to do so to keep a personal LLM installed on their own private server at a cloud provider to use various access controls to ensure that they have control over who is able to use that personal LLM (Lee, [0069]).
Leary-Lee do not explicitly disclose:
a customer premises network of a customer that uses private network addresses and is separated from a public network by a gateway; and
an edge customer premises equipment (CPE) device on the customer premises network, wherein the edge CPE device is configured to at least:
establish a layer-3 virtual private network between the cloud provider network and the customer premises network;
establish a layer-2 virtual interface for the container using a tunnel to encapsulate layer-2 traffic over the layer-3 virtual private network; and
provide a functionality for the edge CPE device via the layer-2 virtual interface.
However, Yu discloses:
a customer premises network of a customer that uses private network addresses and is separated from a public network by a gateway ([0034] gateway 110 extends a private network of datacenter 11 across a public network of cloud 12 and enables the VPN users of the private networks to send and receive data across the public network as if the resources of cloud 12 were connected to the private network of datacenter 11 (i.e. gateway separates the public and private network and allows connection)); and
an edge customer premises equipment (CPE) device on the customer premises network ([0031] gateways 110-110a may be implemented as edge gateways [0034] gateway 110 extends a private network of datacenter 11 across a public network of cloud 12 and enables the VPN users of the private networks to send and receive data across the public network as if the resources of cloud 12 were connected to the private network of datacenter 11 (i.e. gateway separates the public and private network and allows connection and customers of the edge gateway are edge customers)), wherein the edge CPE device is configured to at least:
establish a layer-3 virtual private network between the cloud provider network and the customer premises network ([0034] gateway 110 supports a plurality of site-to-site IPsec VPN tunnels … use the IPsec to secure the IP packets (i.e. layer-3) communicated within IP communications sessions (i.e. established communication network to communicate packets) … VPN users of the private networks to send and receive data);
establish a layer-2 virtual interface for the container using a tunnel to encapsulate layer-2 traffic over the layer-3 virtual private network ([0059] L2VPN API is extended to provide the capabilities for establishing multiple VPN sessions for L2VPN session configurations created for edge gateways, see [0066] load-balancing L2VPN traffic over multiple IPsec VPN tunnels (i.e. layer-2 traffic over tunnels to encapsulate layer-2 traffic over the layer-3 VPN, see [0057] packet is encapsulated at least with the VPN address … communicated via IPsec VPN tunnel [0059] configuring multiple VTI interfaces for L2VPN traffic)); and
provide a functionality for the edge CPE device via the layer-2 virtual interface ([0034] enables the VPN users of the private networks to send and receive data across the public network as if the resources of cloud 12 were connected to the private network of datacenter 11, see [0029] virtualized computing instances or workloads (i.e. functionality) … addressable data compute node or an isolated user space instance … name space container, see also [FIG. 1] e.g. VM on cloud).
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary-Lee in view of Yu to have a customer premises network use private network addresses and be separated from the public network by a gateway, where a customer premises equipment establishes an L3 VPN and an L2 virtual interface using a tunnel to encapsulate L2 traffic over the L3 VPN to provide a functionality for the edge CPE device via the layer-2 virtual interface. One of ordinary skill in the art would have been motivated to do so to enable the VPN users of the private networks to send and receive data across the public network as if the resources of the cloud were connected to the private network of the datacenter (Yu, [0034]).
Regarding claim 4, Leary-Lee-Yu disclose:
The system of claim 1, set forth above,
Leary discloses:
wherein the LLM is trained based at least in part on data obtained from the edge CPE device ([13:57-67] system can be used for … performing deep learning operations … implemented using an edge device, see [27:27-40] trained at facilities … or another facilities).
Regarding claim 5, Leary-Lee-Yu disclose:
The system of claim 1, set forth above,
Leary discloses:
wherein the functionality comprises natural language processing and voice recognition of audio captured by the edge CPE device ([2:26-3:3] LLM for a variety of different natural language processing (NLP)-related inferencing tasks [19:10-18] speech recognition).
Regarding claim 6, Leary-Lee-Yu disclose:
The system of claim 1, set forth above,
Leary discloses:
wherein the functionality comprises personal automation for the customer via the edge CPE device ([3:4-30] system that automatically generates code to perform a given task).
Regarding claim 8, Leary-Lee-Yu disclose:
The system of claim 1, set forth above,
Leary does not explicitly disclose:
wherein the LLM is specific to the customer.
However, Lee discloses:
wherein the LLM is specific to the customer ([0069] user could keep their personal LLM … installed on their own private server at a cloud provider).
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary in view of Lee to have the LLM specific to the customer. One of ordinary skill in the art would have been motivated to do so to keep a personal LLM installed on their own private server at a cloud provider to use various access controls to ensure that they have control over who is able to use that personal LLM (Lee, [0069]).
Regarding claim 9, Leary discloses:
A computer-implemented method ([43:47-61] method), comprising:
a cloud-based artificial intelligence (AI) engine executed on the cloud provider network ([3:35-63] cloud computing environment or multi-tenant resource provider environment, etc. … shared resources are used to host one or more large language models 104 [27:41-54] models may be executed as part of execution of containerized instantiations of applications [37:15-26] container may be used to serve different models, see also [27:7-26] implementation of machine learning models … call upon services (AI, i.e. artificial intelligence, etc.)); and
using the cloud-based AI engine to provide a functionality for an edge device on the customer premises network ([12:50-13:36] content server 720 (e.g., a cloud server or edge server) may initiate a session associated with at least client device 702 (i.e. edge CPE with the edge server) … NLP-related processing, which can be passed to a large language model 730 for processing, see also [27:7-26] machine learning models … services (AI, inference, visualization, compute, etc.)).
Leary does not explicitly disclose:
a cloud-based artificial intelligence (AI) engine executed on the cloud provider network, in a region of the cloud provider network;
establishing a layer-3 virtual private network between a cloud provider network and a customer premises network of a customer;
establishing a layer-2 virtual interface using a tunnel to encapsulate layer-2 traffic over the layer-3 virtual private network; and
provide a functionality for an edge device on the customer premises network via the layer-2 virtual interface.
However, Lee discloses:
a cloud-based artificial intelligence (AI) engine executed on the cloud provider network, in a region of the cloud provider network ([0069] user could keep their personal LLM … installed on their own private server at a cloud provider, see [0015] use artificial intelligence (“AI”) functionality to generate content [0016] LLM);
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary in view of Lee to have the cloud-based AI engine executed on the cloud provider network, in a region of the cloud provider network. One of ordinary skill in the art would have been motivated to do so to keep a personal LLM installed on their own private server at a cloud provider to use various access controls to ensure that they have control over who is able to use that personal LLM (Lee, [0069]).
Leary-Lee do not explicitly disclose:
establishing a layer-3 virtual private network between a cloud provider network and a customer premises network of a customer;
establishing a layer-2 virtual interface using a tunnel to encapsulate layer-2 traffic over the layer-3 virtual private network; and
provide a functionality for an edge device on the customer premises network via the layer-2 virtual interface.
However, Yu discloses:
establishing a layer-3 virtual private network between a cloud provider network and a customer premises network of a customer ([0034] gateway 110 extends a private network of datacenter 11 across a public network of cloud 12 and enables the VPN users of the private networks to send and receive data across the public network as if the resources of cloud 12 were connected to the private network of datacenter 11 (i.e. gateway separates the public and private network and allows connection) … gateway 110 supports a plurality of site-to-site IPsec VPN tunnels … use the IPsec to secure the IP packets (i.e. layer-3) communicated within IP communications sessions (i.e. established communication network to communicate packets) … VPN users of the private networks to send and receive data);
establishing a layer-2 virtual interface using a tunnel to encapsulate layer-2 traffic over the layer-3 virtual private network ([0059] L2VPN API is extended to provide the capabilities for establishing multiple VPN sessions for L2VPN session configurations created for edge gateways, see [0066] load-balancing L2VPN traffic over multiple IPsec VPN tunnels (i.e. layer-2 traffic over tunnels to encapsulate layer-2 traffic over the layer-3 VPN, see [0057] packet is encapsulated at least with the VPN address … communicated via IPsec VPN tunnel [0059] configuring multiple VTI interfaces for L2VPN traffic)); and
provide a functionality for an edge device on the customer premises network via the layer-2 virtual interface ([0031] gateways 110-110a may be implemented as edge gateways [0034] gateway 110 extends a private network of datacenter 11 across a public network of cloud 12 and enables the VPN users of the private networks to send and receive data across the public network as if the resources of cloud 12 were connected to the private network of datacenter 11 (i.e. gateway separates the public and private network and allows connection and customers of the edge gateway are edge customers), see [0029] virtualized computing instances or workloads (i.e. functionality) … addressable data compute node or an isolated user space instance … name space container, see also [FIG. 1] e.g. VM on cloud).
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary-Lee in view of Yu to have a customer premises network use private network addresses and be separated from the public network by a gateway, where a customer premises equipment establishes an L3 VPN and an L2 virtual interface using a tunnel to encapsulate L2 traffic over the L3 VPN to provide a functionality for an edge device via the layer-2 virtual interface. One of ordinary skill in the art would have been motivated to do so to enable the VPN users of the private networks to send and receive data across the public network as if the resources of the cloud were connected to the private network of the datacenter (Yu, [0034]).
Regarding claim 11, Leary-Lee-Yu disclose:
The computer-implemented method of claim 9, set forth above,
Leary discloses:
further comprising encrypting data exchanged between the cloud-based AI engine and the edge device ([33:29-24:8] system 1500 may be communicatively coupled to (e.g., via encrypted links) … software layer may be implemented as a secure, encrypted … through which applications or containers may be invoked … from external environments … execute one or more services 1420 for performing computer, AI, or visualization tasks (i.e. encrypted links and an encrypted layer require the data to be encrypted), see also [27:7-26] machine learning models … services (AI, inference, visualization, compute, etc.)).
Regarding claim 12, Leary-Lee-Yu disclose:
The computer-implemented method of claim 9, set forth above,
Leary discloses:
further comprising executing the cloud-based AI engine in at least one of: a container ([3:35-63] cloud computing environment or multi-tenant resource provider environment, etc. … shared resources are used to host one or more large language models 104 [27:41-54] models may be executed as part of execution of containerized instantiations of applications [37:15-26] container may be used to serve different models, see also [27:7-26] machine learning models … services (AI, inference, visualization, compute, etc.)) or a virtual machine instance.
Regarding claim 13, Leary-Lee-Yu disclose:
The computer-implemented method of claim 9, set forth above, further comprising:
Leary discloses:
receiving data generated by the cloud-based AI engine ([12:50-13:36] content server 720 (e.g., a cloud server or edge server) may initiate a session associated with at least client device 702 (i.e. edge CPE with the edge server) … NLP-related processing, which can be passed to a large language model 730 for processing, see also [27:7-26] machine learning models … services (AI, inference, visualization, compute, etc.)); and
Leary does not explicitly disclose:
sending the data to the edge device via the layer-2 virtual interface.
However, Yu discloses:
sending the data to the edge device via the layer-2 virtual interface ([0034] enables the VPN users of the private networks to send and receive data across the public network as if the resources of cloud 12 were connected to the private network of datacenter 11, see [0029] virtualized computing instances or workloads (i.e. functionality) … addressable data compute node or an isolated user space instance … name space container, see also [FIG. 1] e.g. VM on cloud).
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary-Lee in view of Yu to send data to the edge device via the layer-2 virtual interface. One of ordinary skill in the art would have been motivated to do so to enable the VPN users of the private networks to send and receive data across the public network as if the resources of the cloud were connected to the private network of the datacenter (Yu, [0034]).
Regarding claim 14, Leary-Lee-Yu disclose:
The computer-implemented method of claim 9, set forth above,
Leary discloses:
further comprising training the cloud-based AI engine based at least in part on data received from the edge device ([13:57-67] system can be used for … performing deep learning operations … implemented using an edge device, see [27:27-40] trained at facilities … or another facilities).
Regarding claim 15, Leary-Lee-Yu disclose:
The computer-implemented method of claim 9, set forth above,
Leary discloses:
wherein the cloud-based AI engine comprises a large language model (LLM) ([3:35-63] cloud computing environment or multi-tenant resource provider environment, etc. … shared resources are used to host one or more large language models 104 [27:41-54] models may be executed as part of execution of containerized instantiations of applications [37:15-26] container may be used to serve different models, see also [27:7-26] machine learning models … services (AI, inference, visualization, compute, etc.)).
Regarding claim 16, Leary-Lee-Yu disclose:
The computer-implemented method of claim 9, set forth above,
Leary does not explicitly disclose:
wherein the cloud-based AI engine is an instance specific to the customer.
However, Lee discloses:
wherein the cloud-based AI engine is an instance specific to the customer ([0069] user could keep their personal LLM … installed on their own private server at a cloud provider, see [0015] use artificial intelligence (“AI”) functionality to generate content [0016] LLM).
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary in view of Lee to have the cloud-based AI engine be an instance specific to the customer. One of ordinary skill in the art would have been motivated to do so to keep a personal LLM installed on their own private server at a cloud provider to use various access controls to ensure that they have control over who is able to use that personal LLM (Lee, [0069]).
Regarding claim 17, Leary discloses:
A computer-implemented method ([43:47-61] method), comprising:
a cloud-based artificial intelligence (AI) engine executed on the cloud provider network ([3:35-63] cloud computing environment or multi-tenant resource provider environment, etc. … shared resources are used to host one or more large language models 104 [27:41-54] models may be executed as part of execution of containerized instantiations of applications [37:15-26] container may be used to serve different models, see also [27:7-26] implementation of machine learning models … call upon services (AI, i.e. artificial intelligence, etc.)); and
training the cloud-based AI engine based at least in part on data received from an edge device on the customer premises network ([13:57-67] system can be used for … performing deep learning operations … implemented using an edge device, see [27:27-40] trained at facilities … or another facilities).
Leary does not explicitly disclose:
a cloud-based artificial intelligence (AI) engine executed on the cloud provider network, in a region of the cloud provider network;
establishing a layer-3 virtual private network between a cloud provider network and a customer premises network of a customer;
establishing a layer-2 virtual interface for a cloud-based engine executed on the cloud provider network using a tunnel to encapsulate layer-2 traffic over the layer-3 virtual private network; and
data received from an edge device on the customer premises network via the layer-2 virtual interface.
However, Lee discloses:
a cloud-based artificial intelligence (AI) engine executed on the cloud provider network, in a region of the cloud provider network ([0069] user could keep their personal LLM … installed on their own private server at a cloud provider, see [0015] use artificial intelligence (“AI”) functionality to generate content [0016] LLM);
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary in view of Lee to have the cloud-based AI engine executed on the cloud provider network, in a region of the cloud provider network. One of ordinary skill in the art would have been motivated to do so to keep a personal LLM installed on their own private server at a cloud provider to use various access controls to ensure that they have control over who is able to use that personal LLM (Lee, [0069]).
Leary-Lee do not explicitly disclose:
establishing a layer-3 virtual private network between a cloud provider network and a customer premises network of a customer;
establishing a layer-2 virtual interface for a cloud-based engine executed on the cloud provider network using a tunnel to encapsulate layer-2 traffic over the layer-3 virtual private network; and
data received from an edge device on the customer premises network via the layer-2 virtual interface.
However, Yu discloses:
establishing a layer-3 virtual private network between a cloud provider network and a customer premises network of a customer ([0034] gateway 110 extends a private network of datacenter 11 across a public network of cloud 12 and enables the VPN users of the private networks to send and receive data across the public network as if the resources of cloud 12 were connected to the private network of datacenter 11 (i.e. gateway separates the public and private network and allows connection) … gateway 110 supports a plurality of site-to-site IPsec VPN tunnels … use the IPsec to secure the IP packets (i.e. layer-3) communicated within IP communications sessions (i.e. established communication network to communicate packets) … VPN users of the private networks to send and receive data);
establishing a layer-2 virtual interface for a cloud-based engine executed on the cloud provider network using a tunnel to encapsulate layer-2 traffic over the layer-3 virtual private network ([0059] L2VPN API is extended to provide the capabilities for establishing multiple VPN sessions for L2VPN session configurations created for edge gateways, see [0066] load-balancing L2VPN traffic over multiple IPsec VPN tunnels (i.e. layer-2 traffic over tunnels to encapsulate layer-2 traffic over the layer-3 VPN, see [0057] packet is encapsulated at least with the VPN address … communicated via IPsec VPN tunnel [0059] configuring multiple VTI interfaces for L2VPN traffic)); and
data received from an edge device on the customer premises network via the layer-2 virtual interface ([0031] gateways 110-110a may be implemented as edge gateways [0034] gateway 110 extends a private network of datacenter 11 across a public network of cloud 12 and enables the VPN users of the private networks to send and receive data across the public network as if the resources of cloud 12 were connected to the private network of datacenter 11 (i.e. gateway separates the public and private network and allows connection and customers of the edge gateway are edge customers), see [0029] virtualized computing instances or workloads (i.e. functionality) … addressable data compute node or an isolated user space instance … name space container, see also [FIG. 1] e.g. VM on cloud).
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary-Lee in view of Yu to have a customer premises network use private network addresses and be separated from the public network by a gateway, where a customer premises equipment establishes an L3 VPN and an L2 virtual interface using a tunnel to encapsulate L2 traffic over the L3 VPN to receive data from an edge device via the layer-2 virtual interface. One of ordinary skill in the art would have been motivated to do so to enable the VPN users of the private networks to send and receive data across the public network as if the resources of the cloud were connected to the private network of the datacenter (Yu, [0034]).
Regarding claim 18, Leary-Lee-Yu disclose:
The computer-implemented method of claim 17, set forth above,
Leary does not explicitly disclose:
wherein the cloud-based AI engine is specific to the customer.
However, Lee discloses:
wherein the cloud-based AI engine is specific to the customer ([0069] user could keep their personal LLM … installed on their own private server at a cloud provider, see [0015] use artificial intelligence (“AI”) functionality to generate content [0016] LLM).
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary in view of Lee to have the cloud-based AI engine specific to the customer. One of ordinary skill in the art would have been motivated to do so to keep a personal LLM installed on their own private server at a cloud provider to use various access controls to ensure that they have control over who is able to use that personal LLM (Lee, [0069]).
Regarding claim 19, Leary-Lee-Yu disclose:
The computer-implemented method of claim 17, set forth above,
Leary discloses:
wherein training the cloud-based AI engine ([13:57-67] system can be used for … performing deep learning operations … implemented using an edge device, see [27:27-40] trained at facilities … or another facilities, see also [27:7-26] machine learning models … services (AI, inference, visualization, compute, etc.)) further comprises training the cloud-based AI engine to provide a functionality for the edge device ([12:50-13:36] content server 720 (e.g., a cloud server or edge server) may initiate a session associated with at least client device 702 (i.e. edge CPE with the edge server) … NLP-related processing, which can be passed to a large language model 730 for processing, see also [27:7-26] machine learning models … services (AI, inference, visualization, compute, etc.)).
Regarding claim 20, Leary-Lee-Yu disclose:
The computer-implemented method of claim 17, set forth above,
Leary discloses:
wherein the data comprises environmental data captured by the edge device ([5:6-62] temperature, see [22:18-52] thermal sensor … accelerometer … ambient light sensor … compass … gyroscope (i.e. environmental data)).
Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Leary et al. (US-12380282-B2), hereinafter Leary, in view of Lee et al. (US-20250140245-A1), hereinafter Lee, further in view of Yu et al. (US-20200403922-A1), hereinafter Yu, further in view of Hu et al. (US-11689388-B2), hereinafter Hu.
Regarding claim 2, Leary-Lee-Yu disclose:
The system of claim 1, set forth above,
Leary-Lee-Yu do not explicitly disclose:
wherein the customer premises network comprises a home network.
However, Hu discloses:
wherein the customer premises network comprises a home network ([5:51-6:10] home office applications … first geographic location can for example correspond to the home or personal residence of a business owner or employee (i.e. home network)).
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary-Lee-Yu in view of Hu to have the customer premises network comprise a home network. One of ordinary skill in the art would have been motivated to do so to dynamically configure VNFs at an edge cloud such as at a home location (Hu, [4:19-45]).
Regarding claim 10, Leary-Lee-Yu disclose:
The computer-implemented method of claim 9, set forth above,
Leary-Lee-Yu do not explicitly disclose:
wherein the edge device is different from another edge device that functions as an endpoint to the tunnel.
However, Hu discloses:
wherein the edge device is different from another edge device that functions as an endpoint to the tunnel ([FIG. 1] e.g. multiple endpoints for multiple tunnels, wherein the endpoints are different from the originating device on the edge).
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary-Lee-Yu in view of Hu to have the edge device different from another edge device that functions as an endpoint to the tunnel. One of ordinary skill in the art would have been motivated to do so to connect one or more LANs via VPN tunnels enabled by VNFs at one or more chosen physical locations (Hu, [5:26-42]).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Leary et al. (US-12380282-B2), hereinafter Leary, in view of Lee et al. (US-20250140245-A1), hereinafter Lee, further in view of Yu et al. (US-20200403922-A1), hereinafter Yu, further in view of Nallamothu et al. (US-20230079209-A1), hereinafter Nallamothu.
Regarding claim 3, Leary-Lee-Yu disclose:
The system of claim 1, set forth above,
Leary-Lee-Yu do not explicitly disclose:
wherein the container is assigned a layer-3 network address on the customer premises network.
However, Nallamothu discloses:
wherein the container is assigned a layer-3 network address on the customer premises network ([0047] containers may share an IP address and port space … containers in different pods have different IP addresses … can use IP networking to communicate (i.e. layer-3)).
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary-Lee-Yu in view of Nallamothu to have the container assigned a layer-3 network address on the customer premises network. One of ordinary skill in the art would have been motivated to do so to use IP networking to communicate with the containers (Nallamothu, [0047]).
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Leary et al. (US-12380282-B2) hereinafter Leary in view of Lee et al. (US-20250140245-A1) hereinafter Lee further in view of Yu et al. (US-20200403922-A1) hereinafter Yu further in view of Nair et al. (US-12333462-B2) hereinafter Nair.
Regarding claim 7, Leary-Lee-Yu disclose:
The system of claim 1, set forth above,
Leary-Lee-Yu do not explicitly disclose:
wherein the functionality comprises optimizing energy usage of Internet-of-Things (IoT) devices of the customer premises network.
However, Nair discloses:
wherein the functionality comprises optimizing energy usage of Internet-of-Things (IoT) devices of the customer premises network ([20:36-43] [Table 1] AI/ML applications in the context of edge computing … real-time energy usage monitoring, energy management, distributed power generation and storage, automated controls, predictive maintenance, etc. (e.g., for energy monitors, infrared cameras, temperature and current sensors, flow meters)).
It would have been obvious to one of ordinary skill in the pertinent art before the effective filing date of the claimed invention to modify the invention of Leary-Lee-Yu in view of Nair to have the functionality comprise optimizing energy usage of IoT devices of the customer premises network. One of ordinary skill in the art would have been motivated to do so to have various AI and ML applications at the edge such as for energy usage monitoring, management, etc. (Nair, [20:36-43] [TABLE 1]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Mammoliti et al. (US-8532095-B2) TECHNIQUES CONFIGURING CUSTOMER EQUIPMENT FOR NETWORK OPERATIONS FROM PROVIDER EDGE;
Farinacci et al. (US-8645576-B2) OVERLAY TRANSPORT VIRTUALIZATION;
Sung et al. (US-10044841-B2) METHODS AND SYSTEMS FOR CREATING PROTOCOL HEADER FOR EMBEDDED LAYER TWO PACKETS;
Guo et al. (US-12368745-B1) Using Natural Language Queries To Conduct An Investigation Of A Monitored System;
Galvin et al. (US-12387050-B1) MULTI-STAGE LLM WITH UNLIMITED CONTEXT.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alex Tran whose telephone number is (571)272-8173. The examiner can normally be reached Monday-Friday 10AM-6PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamal Divecha, can be reached at (571)272-5863. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Alex Tran/Primary Examiner, Art Unit 2453