DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-2, 5-7, and 9-23 are presented for examination.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5, 10-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 2022/0004410 A1) in view of Asthana et al. (hereinafter Asthana) (US 2020/0110638 A1).
Chen was cited in the IDS on 7/23/24.
As to claim 1, Chen teaches a method for unified virtual infrastructure and containerized workload deployment via a deployment platform (unified orchestration function entity 80 that deploys both CIM 82 and VIM 81) (Figs. 2,3, and 10), the method comprising:
receiving, at the deployment platform (unified orchestration function entity 80 receives a deployment request of an application instance), definitions (deployment request includes an identifier ID and orchestration information of the application, and the orchestration information is used to indicate resources required for running the instance of the application; VIM or a CIM to deploy a VM or a container that is able to run the instance of the application) of the virtual infrastructure (VMs within VIM 81) and the containerized workload (containers within CIM 82 that may be Docker-based) (Fig. 3, Step 301) ([0008]; Figs. 2,3, and 10);
sending, by the deployment platform (unified orchestration function entity 80), first information comprising the definition (VM-related resource allocation requests) of the virtual infrastructure (VMs within VIM 81) to an infrastructure manager (VIM 81 controlled via VM scheduling management module 803) configured to deploy the virtual infrastructure including a container orchestrator (CIM 82 controlled via Container orchestration management module 802) (Figs. 2, 3, 8, and 10); and
sending, by the deployment platform (unified orchestration function entity 80), second information comprising the definition of the containerized workload to the container orchestrator (CIM 82 with Docker and container image repository, etc.) configured to deploy the containerized workload on the deployed virtual infrastructure (container orchestration management module 802 sends container-related allocation requests to the CIM 82) (Figs. 2,3, and 10).
Chen does not disclose receiving a (single) definition that includes both the virtual infrastructure and the containerized workload.
However, Asthana teaches using a blueprint that is a single declarative representation of a workload for input to an orchestration engine that includes both virtual infrastructure resources (VMs, networks, storage, etc.) and application/container workload resources (containers, databases, services, etc.) in one declarative specification. The blueprint is then parsed by the orchestration engine to provision both the infrastructure and the workload together ([0004]; [0020]-[0021]; [0055]-[0057]; [0060]; [0064]; [0096]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chen to use a blueprint containing a single definition that includes both the virtual infrastructure and the containerized workload, as taught in Asthana. The suggestion/motivation for doing so would have been to provide the predicted result of taking advantage of a unified blueprint, which allows orchestration engines to automatically validate and execute deployments without manual step-by-step programming, thereby reducing errors, improving reusability, and enabling predictable, repeatable deployments. Furthermore, it would provide a unified, portable definition for infrastructure and workload resources across heterogeneous orchestration engines (Asthana: [0020]-[0021]; [0060]).
Furthermore, under the broadest reasonable interpretation (BRI), a “container cluster” reasonably reads on a set of container-capable compute nodes/resources that could be managed by a container infrastructure manager or orchestrator. This is consistent with the specification’s use of container cluster (Kubernetes cluster or K8S cluster – see paragraph [0005]) and that one or more namespaces may be defined in the container cluster ([0005]). Thus, under BRI, a “namespace of the container cluster” could include a logical partition within the cluster used to isolate and manage groups of resources/workloads. Chen’s VI/CI implemented across multiple edge hosts under centralized container management (CIM/VIM) constitutes, under BRI, a container cluster.
Chen teaches that the orchestration information for container deployment may be a Kubernetes-based yaml type ([0077]), and one of ordinary skill in the art would understand that namespaces are fundamental in Kubernetes clusters and are known to partition and isolate groups of resources for the purposes of resource allocation, access control, and workload management.
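For illustration only (the following manifest is a hypothetical example constructed by the examiner, not taken from Chen or Asthana), a Kubernetes-based YAML definition of the kind referenced in Chen ([0077]) conventionally specifies a namespace in its metadata, partitioning the workload within the cluster:

```yaml
# Hypothetical Kubernetes Deployment manifest (illustrative only):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # hypothetical application name
  namespace: example-ns      # logical partition isolating this workload within the cluster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-container
        image: registry.example/app:1.0   # image address, cf. Chen's orchestration information (Table 1)
```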
As to claim 2, Chen teaches the method of claim 1, wherein sending the first information comprises sending a first request (first resource allocation request) to a provider application programming interface (API) (Mm4, Mm6, etc.) of the virtual infrastructure ([0139]-[0140]; [0138]; Fig. 2), and wherein sending the second information comprises sending a second request (another resource allocation request) to a second API (Mm10, Mm11, etc.) of the container orchestrator (Container infrastructure manager) ([0141]-[0142]; [0138]; Fig. 2).
As to claim 5, Chen teaches the method of claim 1, wherein the definition of the containerized workload includes a definition (orchestration information with resource definitions or requirements such as image address, disk type, storage volume, disk size, etc.) of one or more application resources (computing, storage, network, disk image address, etc.) ([0077]-[0078]; [0138]-[0144]; Table 1).
As to claim 10, Chen teaches the method of claim 1, wherein the definition of the virtual infrastructure identifies one or more virtual components of the virtual infrastructure ([0077]; [0109]-[0119]).
As to claim 11, Chen teaches the method of claim 10, wherein the one or more virtual components comprise one or more controllers (MEO, CIM, MEPM, etc.) or one or more virtual machines ([0008]-[0009]; [0153]-[0158]; [0235]-[0239]; Figs 1, 2, and 5b).
As to claim 12, Chen teaches the method of claim 11, wherein the definition of the virtual infrastructure identifies configuration parameters (an image address of the application, a disk type, a stored volume name, and a disk size, etc.) for the one or more controllers or the one or more virtual machines ([0077]-[0078]; [0120]-[0122]; [0130]; [0154]-[0157]; Table 1).
As to claim 13, Chen teaches the method of claim 1, wherein the definition of the containerized workload identifies one or more applications (ID of an instance of an application) of the containerized workload ([0008]-[0009]; [0124]; [0130]; Table 1).
As to claim 14, Chen teaches the method of claim 13, wherein the definition of the containerized workload identifies one or more Kubernetes-based containers for running the one or more applications ([0077]). Chen does not explicitly disclose that the containers are grouped into one or more pods for running the one or more applications. However, it is well known in Kubernetes that pods are the fundamental execution unit and that containers are grouped into pods to run applications. One of ordinary skill in the art would understand that the definition of a containerized workload would group the containers into one or more pods, based on Chen’s explicit disclosure of Kubernetes.
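For illustration only (a hypothetical example constructed by the examiner, not drawn from Chen), the conventional Kubernetes pod specification groups one or more containers into a single pod and attaches configuration parameters for the resources that support the pod:

```yaml
# Hypothetical Kubernetes Pod manifest (illustrative only):
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app-container                  # containers grouped into the pod run the application
    image: registry.example/app:1.0
    resources:
      requests:
        cpu: "250m"                      # configuration parameters for resources supporting the pod
        memory: "128Mi"
```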
As to claim 15, Chen ([0077]-[0078]; [0138]-[0144]) in view of Asthana ([0056]-[0057]) teaches the method of claim 14, wherein the definition of the containerized workload identifies configuration parameters for one or more resources that support the one or more pods.
As to claim 16, it is rejected for the same reasons as stated in the rejection of claim 1.
As to claim 17, it is rejected for the same reasons as stated in the rejection of claim 2.
As to claim 19, it is rejected for the same reasons as stated in the rejection of claim 1.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Asthana, and further in view of Khakare et al. (hereinafter Khakare) (US 2021/0055917 A1).
As to claim 8, Chen in view of Asthana does not explicitly teach the method of claim 1, wherein receiving the definition of the virtual infrastructure and the containerized workload comprises receiving user input on a graphical user interface (GUI) of the deployment platform, the GUI including a project canvas configured to display one or more icons corresponding to one or more components or resources of the virtual infrastructure and the containerized workload.
However, Khakare discloses creating a blueprint that defines cloud infrastructure and workloads. The blueprint definition includes infrastructure (VPC, servers, etc.) and workloads (application packages, containers, etc.) ([0007]-[0009]; [0019]). A graphical user interface (GUI) and blueprint component icons are used for blueprint creation and modification, with drag-and-drop functionality allowing the user to add and/or remove components on a canvas ([0007]; [0015]; [0019]; [0072]-[0074]; [0093]; Fig. 5). Finally, Khakare discloses that the icons represent infrastructure resources (virtual infrastructure) and workloads (application packages, including containers) ([0089]-[0090]; [0072]-[0074]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Chen in view of Asthana to include wherein receiving the definition of the virtual infrastructure and the containerized workload comprises receiving user input on a graphical user interface (GUI) of the deployment platform, the GUI including a project canvas configured to display one or more icons corresponding to one or more components or resources of the virtual infrastructure and the containerized workload, as taught in Khakare. The suggestion/motivation for doing so would have been to provide the predicted result of a visual, drag-and-drop GUI that allows users to easily define infrastructure and workloads without manually coding scripts (Khakare: [0063]). This simplifies cloud deployments and reduces human error from manual work (Khakare: [0004]; [0063]).
Claims 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Asthana, and further in view of Fernandes et al. (hereinafter Fernandes) (US 11,711,315 B1).
As to claim 21, the limitations are not explicitly taught by Chen in view of Asthana. However, Fernandes teaches the non-transitory computer readable medium of claim 16, wherein: the definition of the virtual infrastructure identifies one or more virtual components of the virtual infrastructure (URI is parsed to extract the resource information such as group, version, kind, namespace, name, and possibly other attributes) (col. 7, lines 40-47); and the one or more virtual components comprise one or more controllers or one or more virtual machines (Abstract; col. 3, line 53 through col. 4, line 3; Figs 1 and 2). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Chen using Fernandes’s standard Kubernetes features in order to better organize and manage containerized workloads.
As to claim 22, Fernandes teaches the non-transitory computer readable medium of claim 21, wherein the definition of the virtual infrastructure identifies configuration parameters (content in YAML) for the one or more controllers or the one or more virtual machines (col. 3, line 53 through col. 4, line 3; col. 5, line 65 through col. 6, line 20; Figs 1 and 2).
As to claim 23, the limitations are not explicitly taught by Chen in view of Asthana. However, Fernandes teaches the non-transitory computer readable medium of claim 16, wherein the definition of the containerized workload identifies: one or more applications of the containerized workload (pod may be a collection of one or more containers configured to execute a workload) (col. 4, lines 25-50); one or more containers grouped into one or more pods for running the one or more applications (pod 122A may include one or more containers 124A-B) (col. 4, lines 25-50); and configuration parameters (via YAML) for one or more resources that support the one or more pods (Pod 122) (Figs 1 and 2). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Chen using Fernandes’s standard Kubernetes features in order to better organize and manage containerized workloads.
Response to Arguments
Applicant argues that there is no reference that discloses a “container cluster” and no reference that discloses a namespace of a container cluster.
In response, Chen teaches that the orchestration information for container deployment may be a Kubernetes-based yaml type ([0077]), and one of ordinary skill in the art would understand that namespaces are fundamental in Kubernetes clusters. Asthana teaches that orchestration blueprints are declarative resource specifications typically expressed in YAML/JSON. It would have been obvious to one of ordinary skill in the art for Chen’s Kubernetes/YAML orchestration to employ Kubernetes’ conventional namespace mechanism. Fernandes et al. (US 11,711,315 B1) is cited to confirm this by disclosing that a Kubernetes namespace provides a scope for names so that different users/projects can share a cluster, and it prevents resources in one namespace from interacting with resources in other namespaces (col. 5, line 52 through col. 6, line 21; Figs 1, 2, 3). Therefore, Fernandes corroborates what Kubernetes-based YAML entails in practice: Kubernetes resources are namespace-scoped, and YAML commonly specifies a namespace.
Allowable Subject Matter
Claims 6-7, 9, 18, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH TANG whose telephone number is (571)272-3772. The examiner can normally be reached Monday-Friday 7AM-3PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KENNETH TANG/Primary Examiner, Art Unit 2197