Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1, 3-10, and 12-18 are pending. Claims 2, 11, and 19-25 are cancelled.
Response to Arguments
Regarding 35 U.S.C. 101:
Applicant’s amendments and arguments regarding the rejection of claims 1, 3-10, and 12-18 under 35 U.S.C. 101 have been fully considered and are found to be persuasive. The rejections of claims 1, 3-10, and 12-18 under 35 U.S.C. 101 are withdrawn as the claims are found to integrate the judicial exception of allocation into a practical application through the introduced utilization of the newly allocated resources.
Regarding Prior Art Rejections:
Applicant’s amendments and arguments regarding the rejection of claims 1, 3-10, and 12-18 under 35 U.S.C. 103 have been fully considered but are not found persuasive. The rejections of claims 1, 3-10, and 12-18 under 35 U.S.C. 103 are maintained.
Applicant’s amendment, which introduces the identification of underutilized hosts and their reassignment to achieve better workload balance, is taught by Gan. Gan teaches the concept of dynamic container grouping and rebalancing of resource assignments to improve performance and efficiency. Gan references memory and CPU utilization as metrics used in determining the rebalancing of assignments ([0009] "Automatic re-distribution is supported by numerous platforms based on simple metrics, i.e., memory or central processing unit (CPU) utilization of a particular worker node").
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 7-9, 10, 13, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Gan et al., US 20210158083 A1 (“Gan”), in view of Mukhopadhyay et al., US 20200409691 A1 (“Mukhopadhyay”), and further in view of Pan et al., US 20170048163 A1 (“Pan”).
Gan, Mukhopadhyay, and Pan were cited in a previous Office action.
Regarding claim 1, Gan teaches the invention substantially as claimed including:
An apparatus comprising:
at least one memory; instructions in the apparatus; and processor circuitry to execute the instructions ([0038] Programs may be stored in persistent storage 308 and in memory 306 for execution and/or access by one or more of the respective computer processors 304 via cache 316) to:
obtain a request to perform a service on a logical workload domain (Server 120 (logical workload domain) operates to run container orchestration system 122, run network monitoring system 124, and store and/or send data using database 128. [0019] "In general, a container orchestration system controls and automates tasks (i.e., services) including, but not limited to, provisioning and deployment of containers, redundancy and availability of containers, allocation of resources between containers, movement of containers across a host infrastructure, and load balancing between containers. In an embodiment, container orchestration system 122 is in communication with network monitoring system 124 and receives container groupings from network monitoring system 124"; Examiner notes: each task constitutes a service), the logical workload domain logically grouping at least two or more workload domains based on a criterion ([0020] "Network monitoring system 124 operates to group containers (i.e., workload domains) for optimized efficiency and application performance by using a reinforcement learning module and a KNN"; [0023] "KNN 126 is used by network monitoring system 124 to initially group containers, e.g., containers 130, based on a centroid of the containers' distance in properties from each other, the type of task the containers are running, and the difference in network interaction between the containers based on network monitoring"; Examiner notes: network monitoring system 124 is part of server 120 and is considered part of the logical workload domain.);
identify the at least two or more workload domains grouped in the logical workload domain ([0024] "Data received, used, and/or generated by container orchestration system 122 may include, but is not limited to, unique identifiers of containers, IP addresses of containers");
identifying underutilized hosts associated with a workload and assigning the underutilized hosts to a workload domain associated with an overutilized host ([0009] Automatic re-distribution is supported by numerous platforms based on simple metrics, i.e., memory or central processing unit (CPU) utilization of a particular worker node. These known automatic re-distribution algorithms work to re-balance loads across worker nodes to reduce costs and latency but do so with no regard for application performance and energy costs. Embodiments of the present invention recognize the usefulness for a cloud service provider of grouping containers that are executing at the same time. The group of containers can be allocated or re-balanced across CPUs to improve performance, efficiency, or both. Container orchestration platforms may group containers from multiple customers in order to improve efficiency; Examiner notes: assigning underutilized hosts with workload domain with overutilized host represents balancing allocations of resources to achieve better system performance, latency, efficiency which involves identification of problematic resources and reassignment);
causing a utilization of hardware resource at the underutilized hosts to perform the service on the logical workload domain ([0026] a container orchestration system that re-organizes the containers on worker nodes based on the groupings; [0034] container orchestration system 122 moves containers 130 between servers and/or VMs based on the container groupings output by reinforcement learning module 125; Examiner notes: the services running on the containers are redistributed to different resources to run);
wherein the hardware resources, including processors and memories, are allocated for the orchestrated service ([0009] Automatic re-distribution is supported by numerous platforms based on simple metrics, i.e., memory or central processing unit (CPU) utilization of a particular worker node … The group of containers can be allocated or re-balanced across CPUs to improve performance, efficiency, or both; [0025] containers 130 are running on hardware, i.e., servers, and/or VMs and can be moved by container orchestration system 122 between hardware and/or VMs based on the groupings output by network monitoring system 124).
Gan does not specifically teach obtaining a request to perform a service and concurrently orchestrate the service on the at least two or more workload domains.
However, Mukhopadhyay teaches obtaining a request to perform a service ([0005] "receiving, at an SDDC manager, a super bundle that includes multiple upgrade bundles"; [0007] "The super bundle received by the SDDC manager can identify multiple SDDC elements and corresponding versions for installation"; Examiner notes: upgrades are services.) and concurrently orchestrate the service on the at least two or more workload domains ([0050] "the SDDC manager can instruct the relevant orchestrators to perform upgrades (i.e., services) according to the upgrade sequence. In one example, the SDDC manager instructs the second orchestrator to upgrade the second SDDC element using the relevant upgrade bundle for that element. After the second orchestrator confirms successful installation, the SDDC manager can instruct the first orchestrator to upgrade the first SDDC element using the relevant upgrade bundle for that element"; [0067] "Though some of the described methods have been presented as a series of steps, it should be appreciated that one or more steps can occur simultaneously").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Mukhopadhyay's simultaneous orchestration of upgrade services on multiple elements with the container orchestration and network monitoring server system of Gan, resulting in an orchestrator that is able to concurrently manage services for multiple recipients. A person of ordinary skill in the art would have been motivated to make this combination to increase component management efficiency in an SDDC (Mukhopadhyay [0003] A need therefore exists for an efficient mechanism for managing component dependencies and applying upgrades to an SDDC in a manner that retains functionality of the SDDC without requiring much, if any, work from an SDDC administrator).
Gan and Mukhopadhyay do not explicitly teach the request being processed by a proxy server for directing to an intended receiving server, and the request is processed by a proxy server configured to direct the request to a management circuit or an operator circuit.
However, Pan teaches the request being processed by a proxy server for directing to an intended receiving server ([0035] FIG. 1, the proxy server 104 is responsible for forwarding the application requests from user-side devices to the server for processing), and the request is processed by a proxy server configured to direct the request to a management circuit or an operator circuit (Fig. 1; [0036] More particularly, when monitoring the blocking status of the application requests to be processed by the server 102, the scheduling system does not acquire the blocking status directly from the server, but instead indirectly obtains the blocking status of the application requests to be processed by the server 102 by collecting data from the proxy server 104 and then analyzing the same).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Pan’s request-forwarding proxy server with the container orchestration and network monitoring server system of Gan and Mukhopadhyay, resulting in an upstream proxy server that routes service requests to a scheduling system and further to servers in the SDDC. A person of ordinary skill in the art would have been motivated to make this combination to provide Gan and Mukhopadhyay’s system with the advantage of monitoring and managing incoming workload requests for improved task scheduling and system scalability (see Pan [0005] a flexible scheduling mechanism has been proposed to address the problem mentioned above, that is, when there is a high number of hits, computing resources will be created automatically to expand the processing capacity of the system, and when there is a low number of hits and the system is idle, computing resources will be reduced automatically to save costs; [0041] Since the proxy server is responsible for forwarding the application requests to the server and receiving responses returned from the server after the server processes the application requests, the number of the requests sent to the server to be processed by the server can be known by the number of the application requests that have been forwarded by the proxy server to the server and the number of the requests that correspond to the responses received by the proxy server from the server).
Regarding claim 4, Gan, Mukhopadhyay, and Pan teach the apparatus of claim 1.
Gan further teaches wherein the criterion is a user criterion defined at deployment of the at least two or more workload domains ([0008] "container orchestration platforms ... allow users to configure specific rulesets around how containers are scheduled and placed on underlying worker nodes (i.e., servers)"; [0023] "in some embodiments, a user's predicted inputs, input through a user interface (not shown), to curate the grouping of said containers in an optimal fashion").
Regarding claim 7, Gan, Mukhopadhyay, and Pan teach the apparatus of claim 1.
Mukhopadhyay further teaches wherein the processor circuitry is to execute the instructions to identify the at least two or more workload domains based on the service to be performed ([0057] "The SDDC manager can receive the listing of elements (i.e., workload domains) and corresponding versions and, using that information, determine whether any upgrades are needed. Upgrade needs can be determined automatically by the SDDC manager, such that all SDDC elements are maintained in the most recent version that maintains compatibility across the SDDC").
Regarding claim 8, Gan, Mukhopadhyay, and Pan teach the apparatus of claim 1.
Mukhopadhyay further teaches wherein the processor circuitry is to identify the at least two or more workload domains by:
accessing identifying information in the request ([0051] "The example manifest includes a super bundle ID"; [0054] "The SDDC manager can then determine if the super bundle stored in the software depot has not yet been received. To make this determination, the SDDC manager can determine whether a super bundle exists within a persistent storage location of the SDDC manager, and if so, whether that super bundle includes an ID that matches the ID of the super bundle in the software depot");
submitting a query to a datastore based on the identifying information ([0054] "If the super bundle in the software depot has a new ID number, for example, the SDDC manager can download the super bundle at stage 525. In some examples, this stage can include downloading the individual upgrade bundles referenced by the super bundle"; Examiner notes: downloading bundles involves submitting a query to where the bundles are served out.); and
based on the query, identifying the logical workload domain as a target logical workload domain to perform the service ([0056] "the SDDC manager can make API calls to the first and second orchestrators to determine upgrade needs for the SDDC components managed by those orchestrators. The API calls can each request a listing of SDDC elements managed by the respective orchestrator, along with the current versions of each of those SDDC elements"; [0057] "The SDDC manager can receive the listing of elements and corresponding versions and, using that information, determine whether any upgrades are needed. Upgrade needs can be determined automatically by the SDDC manager, such that all SDDC elements are maintained in the most recent version that maintains compatibility across the SDDC"; [0059] "the SDDC manager can instruct the first and second orchestrator, respectively, to upgrade relevant SDDC elements").
Regarding claim 9, Gan, Mukhopadhyay, and Pan teach the apparatus of claim 1.
Mukhopadhyay further teaches wherein to concurrently orchestrate the service on the at least two or more workload domains, the processor circuitry is to execute the instructions to at least one of configure, coordinate, or manage the service on the at least two or more workload domains ([0050] "the SDDC manager can manage each step of the installation process"; [0059] "At stages 550 and 555, the SDDC manager can instruct the first and second orchestrator, respectively, to upgrade relevant SDDC elements. The instruction can include an identification of a storage location that includes the relevant upgrade bundles. In another example, the instruction can include further instructions for retrieving the upgrade bundles from the software depot").
Regarding claim 10, it is the non-transitory computer readable storage medium corresponding to the apparatus of claim 1. Therefore, it is rejected for the same reasons as claim 1 above.
Mukhopadhyay further teaches the non-transitory computer readable storage medium comprising instructions that, when executed, cause one or more processors to at least: ([0011] "a non-transitory, computer-readable medium having instructions that, when executed by a processor associated with a computing device, cause the processor to perform the stages described").
Regarding claims 13 and 16-18, they are the non-transitory computer readable storage media corresponding to claims 4 and 7-9, respectively. Therefore, claims 13 and 16-18 are rejected for the same reasons as claims 4 and 7-9.
Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Gan et al., US 20210158083 A1, in view of Mukhopadhyay et al., US 20200409691 A1, in view of Pan et al., US 20170048163 A1, and further in view of Cherny et al., US 20200082071 A1 (“Cherny”).
Cherny is cited in a previous office action.
Regarding claim 3, Gan, Mukhopadhyay, and Pan teach the apparatus of claim 1.
Gan, Mukhopadhyay, and Pan do not explicitly teach wherein the criterion is an application criterion, the at least two or more workload domains executing a same application.
However, Cherny teaches wherein the criterion is an application criterion, the at least two or more workload domains executing a same application (Container groups 26 can be defined in a variety of different ways, including, but not limited to, any of the following; [0050] "grouping based on a same application type (e.g., web server)"; [0051] "grouping based on a same application (e.g., APACHE)"; [0052] "grouping based on related microservices that are part of a same larger application").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Cherny's container grouping method with the container orchestration system of Gan, Mukhopadhyay, and Pan, resulting in the system being able to group containers based on the executed application. A person of ordinary skill in the art would have been motivated to make this combination to increase management efficiency by gaining insight into the system as a whole (Cherny [0069] "the credential safety criteria indicates that a credential is unsafe if it is predicted to be used by software containers 20 from different container groups 26. If a credential is used by other container groups 26, that is a good indicator that the credential in question is a default credential and therefore not suitable for production use").
Regarding claim 12, it is the non-transitory computer readable storage medium corresponding to the apparatus of claim 3. Therefore, it is rejected for the same reasons as claim 3 above.
Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Gan et al., US 20210158083 A1, in view of Mukhopadhyay et al., US 20200409691 A1, in view of Pan et al., US 20170048163 A1, and further in view of Krishnan et al., US 20190324820 A1 (“Krishnan”).
Krishnan is cited in a previous office action.
Regarding claim 5, Gan, Mukhopadhyay, and Pan teach the apparatus of claim 1.
Mukhopadhyay further teaches wherein the service is a first service ([0059] "the SDDC manager can instruct the first and second orchestrator, respectively, to upgrade relevant SDDC elements"), the at least two or more workload domains are two or more first workload domains ([0050] "the SDDC manager instructs the second orchestrator to upgrade the second SDDC element (i.e., a workload domain) using the relevant upgrade bundle for that element. After the second orchestrator confirms successful installation, the SDDC manager can instruct the first orchestrator to upgrade the first SDDC element (i.e., another workload domain) using the relevant upgrade bundle for that element"), and the request is a first request to perform the first service to upgrade the at least two or more workload domains ([0005] "receiving, at an SDDC manager, a super bundle that includes multiple upgrade bundles"; [0007] "The super bundle received by the SDDC manager can identify multiple SDDC elements and corresponding versions for installation"), and the processor circuitry is to execute the instructions to obtain a third request to perform a third service, the third service to create a second workload domain that is to be grouped in the logical workload domain ([0006] "The SDDC manager can create and delete additional SDDC workload instances"; Examiner notes: creation of additional workloads occurs in response to some user/system request for more resources).
Gan, Mukhopadhyay, and Pan do not explicitly teach a second request to perform a second service, the second service to apply a security policy to the two or more first workload domains;
However, Krishnan teaches a second request to perform a second service, the second service to apply a security policy to the two or more first workload domains ([0031] "Examples disclosed herein improve workload domain management in virtualized server systems by dynamically adjusting resources associated with a workload domain. In some disclosed examples, the SDDC manager populates and manages a free pool of resources such as virtualized servers based on requirements, end user specifications"; [0030] "more resources are required for a workload domain as the user-selected requirements increase (e.g., higher ... security ... options require more resources than lower ... security ... options)"; [0061] "the network virtualizer 212 also provides network and security services to VMs (i.e., workload domains) with a policy driven approach. The example network virtualizer 212 includes a number of components to deploy and manage virtualized network resources across servers, switches, and clients"; Examiner notes: the user increasing their security requirement is a request to perform the second service).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Krishnan's workload domain security policy management system with the container orchestration system of Gan, Mukhopadhyay, and Pan resulting in a system that is able to receive and execute security policies to its containers. A person of ordinary skill in the art would have been motivated to make this combination to improve workload domain management efficiency (Krishnan [0018] An SDDC manager can provide automation of workflows for life cycle management and operations of a self-contained private cloud instance).
Regarding claim 14, it is the non-transitory computer readable storage medium corresponding to the apparatus of claim 5. Therefore, it is rejected for the same reasons as claim 5 above.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Gan et al., US 20210158083 A1, in view of Mukhopadhyay et al., US 20200409691 A1, in view of Pan et al., US 20170048163 A1, in view of Krishnan et al., US 20190324820 A1, and further in view of Tembey et al., US 20190327144 A1 (“Tembey”).
Tembey is cited in a previous office action.
Regarding claim 6, Gan, Mukhopadhyay, Pan, and Krishnan teach the apparatus of claim 5.
Gan, Mukhopadhyay, Pan, and Krishnan do not explicitly teach wherein the processor circuitry is to execute the instructions to invoke a reference configuration template to create the second workload domain that is to be grouped in the logical workload domain, the reference configuration template to provide pre-defined configuration settings for the second workload domain.
However, Tembey teaches wherein the processor circuitry is to execute the instructions to invoke a reference configuration template to create the second workload domain that is to be grouped in the logical workload domain ([0034] "the SDDC manager selects a template from a database (e.g., a template catalog) to deploy one or more resources to a virtualized server system"), the reference configuration template to provide pre-defined configuration settings for the second workload domain ([0034] "the template may include pre-determined selections for a plurality of configurations in addition to availability, capacity, and performance").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Tembey's template-based resource creation system with the container orchestration system of Gan, Mukhopadhyay, Pan, and Krishnan, resulting in a system that creates workload domains from pre-defined configuration templates. A person of ordinary skill in the art would have been motivated to make this combination to improve resource allocation efficiency (Tembey [0033] "Examples disclosed herein describe template driven infrastructure in virtualized server systems to improve infrastructure provisioning flexibility and resource allocation").
Regarding claim 15, it is the non-transitory computer readable storage medium corresponding to the apparatus of claim 6. Therefore, it is rejected for the same reasons as claim 6 above.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARRISON LI whose telephone number is (703) 756-1469. The examiner can normally be reached Monday-Friday, 9:00am-5:30pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li, can be reached at (571) 272-4169.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.L./
Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195