DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/27/2026 has been entered.
Claims 1, 22-29 and 32-39 are pending and are presented for examination.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 22-29 and 32-39 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 (and similarly claims 29 and 32) recites: “transmitting, to a management and orchestration entity, an indication to transfer”. After a careful search of the instant application, the examiner was unable to find any disclosure of transmitting an indication to transfer. The closest disclosure, in PGPub paragraph 18, states: “Preferably, the at least one second node in the container cluster instance is released, and the method further comprises transmitting, to a management and orchestration entity, an indication of transferring at least one network service or at least one virtualized network function instance running in the at least one second node to other nodes in the container cluster instance.”
Transmitting an indication of transferring (emphasis added) is an indication that a transfer of a workload is currently/presently occurring, and NOT an indication to transfer (i.e., a future transfer).
Claims 22-28 and 33-39 are rejected based on the rejection of the claims from which they depend.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 22-29 and 32-39 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 (and similarly claims 29 and 32) recites: “transmitting, to a management and orchestration entity, an indication to transfer”. Based on the disclosure of the instant application, it is unclear to the examiner whether an indication to transfer is the same as an indication of transferring (i.e., a transfer that is currently occurring or being initiated) or is an indication of a future transfer. The closest disclosure, in PGPub paragraph 18, states: “Preferably, the at least one second node in the container cluster instance is released, and the method further comprises transmitting, to a management and orchestration entity, an indication of transferring at least one network service or at least one virtualized network function instance running in the at least one second node to other nodes in the container cluster instance.”
Claims 22-28 and 33-39 are rejected based on the rejection of the claims from which they depend.
Response to Amendment
Applicant's arguments with respect to claims 1, 22-29 and 32-39 have been considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 22-29 and 32-39 are rejected under 35 U.S.C. 103 as being unpatentable over Cherunni (Pub 20200412596) in view of Patel et al. (Pub 20160205519) (hereafter Patel).
As per claim 1, Cherunni teaches:
A method for use in a container cluster management entity, the method comprising: ([Paragraph 23], Kubernetes can include an open-source container orchestration system for automating application deployment, scaling, and management. Kubernetes can provide a platform for automating deployment, scaling, and operations of application containers across clusters of hosts. A pod in Kubernetes can include a scheduling unit.)
in response to at least one triggering condition for scaling down, performing a container cluster scaling operation on cluster instance resources in a container cluster instance, the container cluster scaling comprising:
transmitting, to a management and orchestration entity, an indication to transfer one or more workloads running on at least one second node to other nodes in the container cluster instance, wherein the one or more workloads comprises at least one network service or at least one virtualized network function instance; and
subsequently releasing the at least one second node from the container cluster instance. ([Paragraph 10], In some examples, the container management platform can generate at least one microservice based on an occurrence of a predetermined event defined by the VNF descriptor. The events can refer to actions or occurrences recognized by software, which may originate asynchronously from the external environment, that may be handled by the software. The policy and mediation engine may instantiate such microservices by the VIM based on certain triggers defined by the VNF descriptors. The generation of the VNF can include determining at least one of a VNF placement (for example, installation and configuration), an addition of other VNFs, or a VNF teardown. The disclosed systems may perform additional activities as part of the generation of the VNF, including providing VNF scaling based on dynamic requirements from the network, monitoring and analyzing VNFs for errors, capacity management, and performance, and upgrading and updating VNFs for applying new releases and patches. [Paragraph 35], With automation, the disclosed systems can spin up or destroy VNFs (as VMs or containers) to elastically scale network functions to match dynamic demand)
Cherunni teaches scaling operations (in/out) of VNF(s) (container(s)/VM(s)) and subsequently releasing node(s) based on scaling operations.
However, Cherunni does not explicitly disclose transmitting, to a management and orchestration entity, an indication to transfer one or more workloads running on at least one second node to other nodes in the container cluster instance.
Patel teaches transmitting, to a management and orchestration entity, an indication to transfer one or more workloads running on at least one second node to other nodes in the container cluster instance. ([Paragraph 2], This patent application is related to the following co-pending and commonly assigned patent application filed on the same date: “System and Method for Elastic Scaling using a Container-Based Platform” (Attorney Docket No. KOD-013), which application is hereby incorporated by reference herein as if reproduced in its entirety. [Paragraph 25], Various embodiments provide mechanisms to persist and recover PTT pre-established sessions, mechanisms to dynamically scale-up and scale-down the load handling capacity of the system… [Paragraph 35], Service orchestration layer 202 is the highest layer of abstraction in infrastructure management architecture 200. Service orchestration layer 202 is a layer on top of which various service components that constitute the PTT System operate. A service orchestrator in service orchestration layer 202 uses service metrics to scale service clusters 210 (e.g., groups of containers may be referred to collectively as a container cluster) for each service component (e.g., the various service components illustrated in FIG. 3, below). Scaling service clusters 210 may include transmitting scaling triggers to lower layers (e.g., container management layer 204). In some embodiments, the scaling of service clusters 210 may be in real time. These scaling triggers may be based on service metrics transmitted to service orchestration layer 202 from lower layers (e.g., container management layer 204). Embodiment service metrics for a PTT platform may include, for example, number of PTT pre-established sessions, PTT call setup rate, PTT call leg setup rate (e.g., latency), number of concurrently active PTT calls, number of concurrently active PTT call legs, number of media codec instances in active use, combinations thereof, and the like. Service orchestration layer 202 may also create new container instances to replace failed container instances, for example, based on faults transmitted to service orchestration layer 202 from lower layers (e.g., container management layer 204). [Paragraph 36], Container management layer 204 operates on top of a pool of virtual machines (e.g., compute nodes 212 in virtual infrastructure management layer 206) to manage the distribution of services clusters 210 across various compute nodes 212. For example, container management layer 204 may manifest container instances for each service cluster 210 across compute nodes 212… Container management layer 204 may instantiate new compute nodes to scale the system when needed based on the platform metrics. For example, container management layer 204 may transmit scaling triggers to virtual infrastructure management layer 206 to instantiate new compute nodes or to remove compute nodes as desired. In some embodiments, container management layer 204 may also transmit desired compute node profiles with the scaling triggers to virtual infrastructure management layer 206. [Paragraph 46], FIG. 4 illustrates a block diagram 400 of interactions between various service orchestration and container management modules/layers in an embodiment PTT system platform (e.g., PTT platform 106). In some embodiments, service cluster management is function of service orchestration. As part of service cluster management, PTT platform 106 may perform one or more of the following non-limiting functions: service instantiation and configuration, automatically scaling the system based on one or more capacity indicators, automatically updating a load balancer pool when new pool members are added or removed, and migrating containers from one host (e.g., a virtual compute node) to another when a host is overloaded.)
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Cherunni, wherein, in response to a dynamic scaling condition (scale down), a container cluster scaling operation is performed on the workload(s) (network service/virtualized network function) in a container cluster instance by monitoring and analyzing the dynamic requirements, capacity, and performance of the workload(s), and a node is released from the container cluster instance based on the dynamic scaling operation, with the teachings of Patel, wherein the dynamic scaling operation transmits a request/indication to migrate the workload(s) from one node to other nodes. Doing so would enhance the teachings of Cherunni because transmitting the request/indication to a management and orchestration entity to migrate/transfer the workload(s) to other nodes of the container cluster instance allows the management and orchestration entity to participate in performing automatic/dynamic scaling operations based on various indicators such as capacity, performance, service availability requirements, etc.
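For illustration only, the following is a minimal, hypothetical sketch (not taken from Cherunni, Patel, or the instant application) of the scale-down sequence discussed above: the container cluster management entity first transmits an indication to transfer the workloads on a node to other nodes, and only subsequently releases that node. All names and interfaces below (Workload, ManoClient, ClusterClient, scale_down) are assumptions made solely for this illustration.

    # Hypothetical Python sketch of the scale-down sequence; for illustration only.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Workload:
        name: str   # e.g., a network service or VNF instance identifier
        node: str   # node currently hosting the workload

    class ManoClient:
        """Stand-in for the management and orchestration (MANO) entity."""
        def indicate_transfer(self, workloads: List[Workload], targets: List[str]) -> None:
            # The indication to transfer is transmitted BEFORE the node is released.
            print(f"MANO indicated to transfer {[w.name for w in workloads]} to {targets}")

    class ClusterClient:
        """Stand-in for the container cluster instance (e.g., a Kubernetes-like API)."""
        def workloads_on(self, node: str) -> List[Workload]:
            return [Workload(name="vnf-instance-1", node=node)]
        def other_nodes(self, excluding: str) -> List[str]:
            return ["node-a", "node-b"]
        def release_node(self, node: str) -> None:
            print(f"Node {node} released from the container cluster instance")

    def scale_down(cluster: ClusterClient, mano: ManoClient, node_to_release: str) -> None:
        workloads = cluster.workloads_on(node_to_release)
        targets = cluster.other_nodes(excluding=node_to_release)
        mano.indicate_transfer(workloads, targets)   # indication to transfer
        cluster.release_node(node_to_release)        # subsequent release of the node

    scale_down(ClusterClient(), ManoClient(), node_to_release="node-c")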
As per claim 22, rejection of claim 1 is incorporated:
Cherunni teaches wherein the at least one triggering condition comprises at least one of:
updating a container cluster descriptor template corresponding to the container cluster instance,
receiving, from a management entity, a request of performing the container cluster scaling operation on the cluster instance resources in the container cluster instance,
determining that available cluster instance resources of the container cluster instance are below a scaling threshold, or
determining that available cluster instance resources of the container cluster instance are insufficient for a cluster instance resource requirement of a container cluster instance requested by a network service or a virtualized network function instantiation operation. ([Paragraph 10], In some examples, the container management platform can generate at least one microservice based on an occurrence of a predetermined event defined by the VNF descriptor. The events can refer to actions or occurrences recognized by software, which may originate asynchronously from the external environment, that may be handled by the software. The policy and mediation engine may instantiate such microservices by the VIM based on certain triggers defined by the VNF descriptors. The generation of the VNF can include determining at least one of a VNF placement (for example, installation and configuration), an addition of other VNFs, or a VNF teardown. The disclosed systems may perform additional activities as part of the generation of the VNF, including providing VNF scaling based on dynamic requirements from the network, monitoring and analyzing VNFs for errors, capacity management, and performance, and upgrading and updating VNFs for applying new releases and patches.)
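For illustration only, the following is a minimal sketch (not taken from the cited references or the instant application) of the triggering condition recited above in which available cluster instance resources are determined to be below a scaling threshold. The function name and the example threshold value are assumptions made solely for this illustration.

    # Hypothetical Python sketch of a threshold-based scaling trigger; for illustration only.
    def scaling_triggered(available_cpu: float, total_cpu: float, threshold: float = 0.2) -> bool:
        """Return True when available cluster CPU falls below the scaling threshold."""
        if total_cpu <= 0:
            raise ValueError("total_cpu must be positive")
        return (available_cpu / total_cpu) < threshold

    # Example: 1 vCPU available out of 10 is below a 20% threshold, so scaling is triggered.
    print(scaling_triggered(available_cpu=1.0, total_cpu=10.0))  # True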
As per claim 23, rejection of claim 22 is incorporated:
Cherunni teaches wherein the scaling threshold is associated with a container cluster manager scaling policy pre-configured in the container cluster management entity. ([Paragraph 10], In some examples, the container management platform can generate at least one microservice based on an occurrence of a predetermined event defined by the VNF descriptor. The events can refer to actions or occurrences recognized by software, which may originate asynchronously from the external environment, that may be handled by the software. The policy and mediation engine may instantiate such microservices by the VIM based on certain triggers defined by the VNF descriptors. The generation of the VNF can include determining at least one of a VNF placement (for example, installation and configuration), an addition of other VNFs, or a VNF teardown. The disclosed systems may perform additional activities as part of the generation of the VNF, including providing VNF scaling based on dynamic requirements from the network, monitoring and analyzing VNFs for errors, capacity management, and performance, and upgrading and updating VNFs for applying new releases and patches. [Paragraph 20], SLAs can be output-based in that their purpose is specifically to define what the end user can receive. For instance, an SLA can define performance characteristics of a network sliced being leased to a tenant. The SLA can specify VNF performance thresholds for one or more VNFs in a service chain for the slice. [Paragraph 42], At stage 104, the method may include selectively generating at least one container based on the VNF descriptor. Further, the disclosed systems may configure the VIM to handle policy-driven (for example, SLA-based) attributes defined in the descriptors (for example, VVLDs, VNFDs, and PNFDs).)
As per claim 24, rejection of claim 22 is incorporated:
Cherunni teaches transmitting, to a container infrastructure service management entity, a request of determining a container resource requirement of the network service or the virtualized network function instantiation operation, and
receiving, from the container infrastructure service management entity, the cluster instance resource requirement of the container cluster instance requested by the network service or the virtualized network function instantiation operation. ([Paragraph 2], Network function virtualization orchestrators (“NFVOs”) may use underlying virtual infrastructure managers (“VIMs”) for implementing the VNF services according to service requirements. For example, a relatively complex VNF service may be constructed of a master VNF and multiple dependent VNFs. Such a VNF may have an instruction set that defines a descriptor such as a virtual VNF link descriptor (“VVLD”) and a physical network function descriptor (“PNFD”). The VVLD and the PNFD may describe the virtual and physical network requirements for the VNF. The instruction set may conform with a service level agreement (“SLA”) and can describe links to use or avoid in the physical network based on the descriptor. The SLA can govern performance requirements for various tenants. The orchestration may be critical to ensure the execution of appropriate SLAs for tenants and end users that subscribe to the VNF service. For instance, a virtual evolved packet core (“EPC”) service may be allotted to a set of subscribers as part of a 5G network slice. An example SLA may describe that the total end-to-end latency for traffic flows within the various components of a corresponding multi-part VNF may not exceed about 10 milliseconds. [Paragraph 9], The stages can further include selectively generating at least one container on the physical network based on the VNF descriptor. The stages can include determining, by the VIM, an integrated network requirement based on state information associated with the integrated network. The stages can also include providing, by the VIM, to a container management platform, the integrated network requirement, and generating, by the container management platform, a VNF in the container to fulfill the integrated network requirement. In some examples, the VNF can include a first subordinate VNF that includes at least one microservice, and the stages can further include: generating a parent VNF on the integrated network, the parent VNF including the first subordinate VNF. The stages can also include causing a second subordinate VNF including a virtual machine to be generated on the parent VNF based on the VNF descriptor to fulfill the integrated network requirement. The stages can further include generating, based on the VNF descriptor, a VM to fulfill the integrated network requirement.)
As per claim 25, rejection of claim 1 is incorporated:
Cherunni teaches wherein the container cluster scaling operation on the cluster instance resource in the container cluster instance comprises
expanding or reducing node resources of at least one third node in the container cluster instance,
wherein the node resources comprise at least one of computing resources, storage resources or network resources, wherein the at least one second node in the container cluster instance is released, and wherein the method further comprises: transmitting, to a management and orchestration entity, an indication of transferring at least one network service or at least one virtualized network function instance running in the at least one second node to other nodes in the container cluster instance. ([Paragraph 10], In some examples, the container management platform can generate at least one microservice based on an occurrence of a predetermined event defined by the VNF descriptor. The events can refer to actions or occurrences recognized by software, which may originate asynchronously from the external environment, that may be handled by the software. The policy and mediation engine may instantiate such microservices by the VIM based on certain triggers defined by the VNF descriptors. The generation of the VNF can include determining at least one of a VNF placement (for example, installation and configuration), an addition of other VNFs, or a VNF teardown. The disclosed systems may perform additional activities as part of the generation of the VNF, including providing VNF scaling based on dynamic requirements from the network, monitoring and analyzing VNFs for errors, capacity management, and performance, and upgrading and updating VNFs for applying new releases and patches. [Paragraph 32], The VIM may instantiate a container by maintaining state information regarding the containers and pods, related IP and MAC information, and node information. The VIM may identify, during workload creation, corresponding resource requirements for executing the workload based on descriptors of the VNF. [Paragraph 35], Automation and orchestration can facilitate programmatically controlling, monitoring, and repairing or replacing networking components without direct human involvement. With automation, the disclosed systems can spin up or destroy VNFs (as VMs or containers) to elastically scale network functions to match dynamic demand. [Paragraph 71], In some examples, the resources depicted in FIG. 4 may include clusters 410 for the VIM 408 or compute nodes 422 for the container management platform 420. The clusters 410 of the VIM 408 may include representative cluster A 411 and cluster B 412. The compute nodes in the container management platform 420 may be referred to as a pod, such as example pod A 423 and pod B 424. [Paragraph 37], VNFs can move individual network functions out of dedicated hardware devices into software that runs on commodity hardware. [Paragraph 35], With automation, the disclosed systems can spin up or destroy VNFs (as VMs or containers) to elastically scale network functions to match dynamic demand)
As per claim 26, rejection of claim 1 is incorporated:
Cherunni teaches wherein the performing the container cluster scaling operation on the cluster instance resources in the container cluster instance in response to the at least one triggering condition comprises:
interacting with a virtualized infrastructure manager (VIM) for performing the container cluster scaling operation on the cluster instance resources in the container cluster instance, or
transmitting, to the VIM, a request of performing the container cluster scaling operation on the cluster instance resources in the container cluster instance. ([Paragraph 10], In some examples, the container management platform can generate at least one microservice based on an occurrence of a predetermined event defined by the VNF descriptor. The events can refer to actions or occurrences recognized by software, which may originate asynchronously from the external environment, that may be handled by the software. The policy and mediation engine may instantiate such microservices by the VIM based on certain triggers defined by the VNF descriptors. The generation of the VNF can include determining at least one of a VNF placement (for example, installation and configuration), an addition of other VNFs, or a VNF teardown. The disclosed systems may perform additional activities as part of the generation of the VNF, including providing VNF scaling based on dynamic requirements from the network, monitoring and analyzing VNFs for errors, capacity management, and performance, and upgrading and updating VNFs for applying new releases and patches. [Paragraph 32], The VIM may instantiate a container by maintaining state information regarding the containers and pods, related IP and MAC information, and node information. The VIM may identify, during workload creation, corresponding resource requirements for executing the workload based on descriptors of the VNF. [Paragraph 71], In some examples, the resources depicted in FIG. 4 may include clusters 410 for the VIM 408 or compute nodes 422 for the container management platform 420. The clusters 410 of the VIM 408 may include representative cluster A 411 and cluster B 412. The compute nodes in the container management platform 420 may be referred to as a pod, such as example pod A 423 and pod B 424.)
As per claim 27, rejection of claim 1 is incorporated:
Cherunni teaches updating the container cluster instance based on the scaled cluster instance resources of the container cluster instance, wherein at least one of:
the updating the container cluster instance based on the scaled cluster instance resources of the container cluster instance comprises at least one of: adding at least one first node to the container cluster instance, deleting at least one second node from the container cluster instance, or updating node resources allocated to at least one third node in the container cluster instance, or
the method further comprises: updating runtime information associated with the scaled cluster instance resources of the updated container cluster instance, and transmitting, to a management entity, at least one of a notification indicating that the container cluster instance is updated or the runtime information of the updated container cluster instance. ([Paragraph 10], The disclosed systems may perform additional activities as part of the generation of the VNF, including providing VNF scaling based on dynamic requirements from the network, monitoring and analyzing VNFs for errors, capacity management, and performance, and upgrading and updating VNFs for applying new releases and patches. [Paragraph 30], Having a container in place to host microservices can allow active schedule and management to optimize resource utilization. A container orchestration engine can enable the provisioning of hosts resources to containers, the assignment of containers to hosts, and the instantiation and rescheduling of containers. [Paragraph 32], The VIM may communicate the state information to the container management platform to make decisions regarding VNF placement, scale-out, and tear-down. [Paragraph 71], The compute nodes in the container management platform 420 may be referred to as a pod, such as example pod A 423 and pod B 424. [Paragraph 23], Kubernetes can include an open-source container orchestration system for automating application deployment, scaling, and management. Kubernetes can provide a platform for automating deployment, scaling, and operations of application containers across clusters of hosts. A pod in Kubernetes can include a scheduling unit.)
As per claim 28, rejection of claim 1 is incorporated:
Cherunni teaches wherein the cluster instance resources are associated with at least one of the number of nodes in the container cluster instance or node resources of the nodes in the container cluster instance, and wherein the node resources comprise at least one of computing resources, storage resources or network resources. ([Paragraph 71], The compute nodes in the container management platform 420 may be referred to as a pod, such as example pod A 423 and pod B 424. [Paragraph 23], Kubernetes can include an open-source container orchestration system for automating application deployment, scaling, and management. Kubernetes can provide a platform for automating deployment, scaling, and operations of application containers across clusters of hosts. A pod in Kubernetes can include a scheduling unit. A pod in Kubernetes can include a scheduling unit. The pod can add a higher level of abstraction by grouping containerized components. A pod can include containers that can be co-located on the host machine and can share resources. Openstack can include an open-source cloud operating system that automates the management of the compute, storage, and networking components of a cloud environment. [Paragraph 24], Conventional systems may use an orchestrator distinguish between which VNFs to be directed towards the container management platform or the VIM. The orchestrator may include a network functions virtualization (“NFV”) management and orchestrator (“MANO”). An example of network functions virtualization management and network orchestration (“NFV MANO”) can refer to a framework developed by a working group of the same name within the European Telecommunications Standards Institute (“ETSI”) for NFV. The NFV MANO can include the ETSI-defined framework for the management and orchestration of all resources in a virtualized data center including compute, networking, storage, and VM resources.)
As per claim 29, Cherunni teaches:
A method for use in a management and orchestration entity, the method comprising:
receiving, from a container cluster management entity, an indication to transfer one or more workloads running on at least one second node to other nodes in a container cluster instance, wherein the one or more workloads comprise at least one network service or at least one virtualized network function instance; and
in response to receiving the indication, transferring the one or more workloads to the other nodes in the container cluster instance to enable subsequent release of the at least one second node. ([Paragraph 2], An example SLA may describe that the total end-to-end latency for traffic flows within the various components of a corresponding multi-part VNF may not exceed about 10 milliseconds. [Paragraph 10], In some examples, the container management platform can generate at least one microservice based on an occurrence of a predetermined event defined by the VNF descriptor. The events can refer to actions or occurrences recognized by software, which may originate asynchronously from the external environment, that may be handled by the software. The policy and mediation engine may instantiate such microservices by the VIM based on certain triggers defined by the VNF descriptors. The generation of the VNF can include determining at least one of a VNF placement (for example, installation and configuration), an addition of other VNFs, or a VNF teardown. The disclosed systems may perform additional activities as part of the generation of the VNF, including providing VNF scaling based on dynamic requirements from the network, monitoring and analyzing VNFs for errors, capacity management, and performance, and upgrading and updating VNFs for applying new releases and patches. [Paragraph 20], SLAs can be output-based in that their purpose is specifically to define what the end user can receive. For instance, an SLA can define performance characteristics of a network sliced being leased to a tenant. The SLA can specify VNF performance thresholds for one or more VNFs in a service chain for the slice. [Paragraph 42], At stage 104, the method may include selectively generating at least one container based on the VNF descriptor. Further, the disclosed systems may configure the VIM to handle policy-driven (for example, SLA-based) attributes defined in the descriptors (for example, VVLDs, VNFDs, and PNFDs). [Paragraph 33], Further, the VIM may be configured to handle policy-driven (for example, SLA-based) attributes defined in descriptions (for example, VVLDs, VNFDs, and PNFDs). [Paragraph 40], The VNF manager may communicate the VNF's service attributes to the MANO to perform and adhere to a given SLA. [Paragraph 35], With automation, the disclosed systems can spin up or destroy VNFs (as VMs or containers) to elastically scale network functions to match dynamic demand)
Cherunni teaches scaling operations (in/out) of VNF(s) (container(s)/VM(s)) and subsequently releasing node(s) based on scaling operations.
However, Cherunni does not explicitly disclose receiving, from a container cluster management entity, an indication to transfer one or more workloads running on at least one second node to other nodes in a container cluster instance; and in response to receiving the indication, transferring the one or more workloads to the other nodes in the container cluster instance.
Patel teaches receiving, from a container cluster management entity, an indication to transfer one or more workloads running on at least one second node to other nodes in a container cluster instance; and in response to receiving the indication, transferring the one or more workloads to the other nodes in the container cluster instance. ([Paragraph 2], This patent application is related to the following co-pending and commonly assigned patent application filed on the same date: “System and Method for Elastic Scaling using a Container-Based Platform” (Attorney Docket No. KOD-013), which application is hereby incorporated by reference herein as if reproduced in its entirety. [Paragraph 25], Various embodiments provide mechanisms to persist and recover PTT pre-established sessions, mechanisms to dynamically scale-up and scale-down the load handling capacity of the system… [Paragraph 35], Service orchestration layer 202 is the highest layer of abstraction in infrastructure management architecture 200. Service orchestration layer 202 is a layer on top of which various service components that constitute the PTT System operate. A service orchestrator in service orchestration layer 202 uses service metrics to scale service clusters 210 (e.g., groups of containers may be referred to collectively as a container cluster) for each service component (e.g., the various service components illustrated in FIG. 3, below). Scaling service clusters 210 may include transmitting scaling triggers to lower layers (e.g., container management layer 204). In some embodiments, the scaling of service clusters 210 may be in real time. These scaling triggers may be based on service metrics transmitted to service orchestration layer 202 from lower layers (e.g., container management layer 204). Embodiment service metrics for a PTT platform may include, for example, number of PTT pre-established sessions, PTT call setup rate, PTT call leg setup rate (e.g., latency), number of concurrently active PTT calls, number of concurrently active PTT call legs, number of media codec instances in active use, combinations thereof, and the like. Service orchestration layer 202 may also create new container instances to replace failed container instances, for example, based on faults transmitted to service orchestration layer 202 from lower layers (e.g., container management layer 204). [Paragraph 36], Container management layer 204 operates on top of a pool of virtual machines (e.g., compute nodes 212 in virtual infrastructure management layer 206) to manage the distribution of services clusters 210 across various compute nodes 212. For example, container management layer 204 may manifest container instances for each service cluster 210 across compute nodes 212… Container management layer 204 may instantiate new compute nodes to scale the system when needed based on the platform metrics. For example, container management layer 204 may transmit scaling triggers to virtual infrastructure management layer 206 to instantiate new compute nodes or to remove compute nodes as desired. In some embodiments, container management layer 204 may also transmit desired compute node profiles with the scaling triggers to virtual infrastructure management layer 206. [Paragraph 46], FIG. 4 illustrates a block diagram 400 of interactions between various service orchestration and container management modules/layers in an embodiment PTT system platform (e.g., PTT platform 106). In some embodiments, service cluster management is function of service orchestration. As part of service cluster management, PTT platform 106 may perform one or more of the following non-limiting functions: service instantiation and configuration, automatically scaling the system based on one or more capacity indicators, automatically updating a load balancer pool when new pool members are added or removed, and migrating containers from one host (e.g., a virtual compute node) to another when a host is overloaded.)
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Cherunni, wherein, in response to a dynamic scaling condition (scale down), a container cluster scaling operation is performed on the workload(s) (network service/virtualized network function) in a container cluster instance by monitoring and analyzing the dynamic requirements, capacity, and performance of the workload(s), and a node is released from the container cluster instance based on the dynamic scaling operation, with the teachings of Patel, wherein the dynamic scaling operation transmits a request/indication to migrate the workload(s) from one node to other nodes. Doing so would enhance the teachings of Cherunni because, upon receiving the request/indication to migrate/transfer the workload(s) to other nodes of the container cluster instance, the management and orchestration entity can participate in performing automatic/dynamic scaling operations based on various indicators such as capacity, performance, service availability requirements, etc.
As per claims 32-39, these are device claims corresponding to method claims 1 and 22-28. Therefore, they are rejected based on a similar rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONG U KIM, whose telephone number is (571) 270-1313. The examiner can normally be reached from 9:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets, can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DONG U KIM/Primary Examiner, Art Unit 2197