DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-21 are pending in this Office action.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. § 119(a)-(d). The certified copy of Indian Application No. IN202341047631, filed on 04/14/2023, has been received.
Drawings
The drawings filed on 09/07/2023 have been acknowledged.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4-5, 9, 11-12, 16 and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US Patent Application Publication US 2021/0271506 A1 to Ganguly et al. (hereinafter “GANGULY”).
Regarding claim 1, GANGULY teaches a method comprising:
generating, during a first boot of a collector appliance, an adapter instance on the collector appliance (GANGULY Para. [0024]: “…, centralized cloud infrastructure provisioning and management are provided. …, a first virtual machine executing on a centralized management node provides a first image file to a first computing entity arranged within a first point of delivery, wherein the first image file comprises at least one of a first boot configuration file ...”,
the examiner notes that the reference’s centralized management node corresponds to the claimed collector appliance, and that its provision of an image file corresponds to generating an adapter instance);
receiving a request to bootstrap a first virtual computing instance (VCI) running in a first virtual infrastructure managed by a first virtual infrastructure manager (VIM) and a second VCI running in a second virtual infrastructure managed by a second VIM (GANGULY Para. [0024]: “…, centralized cloud infrastructure provisioning and management are provided. …, a first virtual machine executing on a centralized management node provides a first image file to a first computing entity arranged within a first point of delivery, wherein the first image file comprises at least one of a first boot configuration file or a first ramdisk file. A second virtual machine executing on the centralized management node provides a second image file to a second computing entity arranged within a second point of delivery different from the first point of delivery. The second image file comprises at least one of a second boot configuration file or a second ramdisk file. The first virtual machine executing on the centralized management node provides a third image file to the first computing entity arranged within the first point of delivery.”,
the examiner notes that the reference’s centralized management node corresponds to the claimed collector appliance, and that its provision of an image file corresponds to generating an adapter instance);
upon receiving the request, performing, using the adapter instance (GANGULY Fig. 6, Para. [0044]: “Upon receipt of DHCP PXE boot request 652, a VM associated with central management node 305 may send DHCP boot response 653”), a bootstrapping process of:
the first VCI to map the adapter instance to the first virtual infrastructure and to install a first monitoring agent (GANGULY Fig. 10, Para. [0078]: “FIG. 10 is a block diagram illustrating a system that includes a virtual infrastructure manager (VIM)-monitoring (MON) architecture 1000 with a management node 1010 and a VIM POD 1020 configuration. That is, to deploy a telecommunication cloud mobile core network, mobile network operators may use open source software, such as OpenStack. A VIM POD 1020 may employ OpenStack for distributed TelcoCloud deployments and enterprise clouds. As such, included in VIM POD 1020 are compute nodes 1022a-n, storage nodes 1024a-c, and controller nodes 1026a-c. The actual number of nodes depends on a particular deployment. The VIM POD 1020 may use a monitoring mechanism at the cloud/NFVI POD level. The monitoring mechanism may employ an event monitoring server 1011 that may be embodied as a Prometheus server, and that is hosted on management node 1010.”; and
Fig. 10, Para. [0079]: “Specifically, VIM monitoring is performed using a lightweight POD-level monitoring solution called VIM-MON that is based on an open source Prometheus, Telegraf, Grafana (PTG) stack. The VIM-MON architecture 1000 employs infrastructure-level metric collection based on metric collection and reporting agents installed on all nodes in the VIM POD 1020.”,
the examiner notes that the reference discloses in Fig. 10 a centralized Management Node (1010) that employs a connection to a VIM POD-level monitoring mechanism, with reporting agents installed on all nodes, corresponding to the claimed mapping of the adapter instance and monitoring of VCIs); and
the second VCI to map the adapter instance to the second virtual infrastructure and to install a second monitoring agent (GANGULY Fig. 10, Para. [0078]: “FIG. 10 is a block diagram illustrating a system that includes a virtual infrastructure manager (VIM)-monitoring (MON) architecture 1000 with a management node 1010 and a VIM POD 1020 configuration. That is, to deploy a telecommunication cloud mobile core network, mobile network operators may use open source software, such as OpenStack. A VIM POD 1020 may employ OpenStack for distributed TelcoCloud deployments and enterprise clouds. As such, included in VIM POD 1020 are compute nodes 1022a-n, storage nodes 1024a-c, and controller nodes 1026a-c. The actual number of nodes depends on a particular deployment. The VIM POD 1020 may use a monitoring mechanism at the cloud/NFVI POD level. The monitoring mechanism may employ an event monitoring server 1011 that may be embodied as a Prometheus server, and that is hosted on management node 1010.”; and
Fig. 10, Para. [0079]: “Specifically, VIM monitoring is performed using a lightweight POD-level monitoring solution called VIM-MON that is based on an open source Prometheus, Telegraf, Grafana (PTG) stack. The VIM-MON architecture 1000 employs infrastructure-level metric collection based on metric collection and reporting agents installed on all nodes in the VIM POD 1020.”); and
collecting, using the adapter instance, performance metrics associated with the first VCI and the second VCI from the first monitoring agent and the second monitoring agent, respectively (GANGULY Fig. 10, Para. [0078]: “…, included in VIM POD 1020 are compute nodes 1022a-n, storage nodes 1024a-c, and controller nodes 1026a-c. The actual number of nodes depends on a particular deployment. The VIM POD 1020 may use a monitoring mechanism at the cloud/NFVI POD level. The monitoring mechanism may employ an event monitoring server 1011 that may be embodied as a Prometheus server, and that is hosted on management node 1010.”; and
Fig. 10, Para. [0080]: “…, some metrics are collected using remote APIs (e.g. the metric collecting and reporting agent 1030a on the management node 1010 collects OpenStack metrics using the OpenStack API). These metrics are then read (“scraped”) by the event monitoring server 1011 running on the management node 1010 at regular intervals (e.g., a default scraping interval of 15 seconds). Metrics that are scraped are received on the management node 1010 and are then stored in the local TSDB 1012.”).
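As illustrative context only (forming no part of the rejection), the scraping mechanism quoted above, in which an event monitoring server on the management node reads metrics from per-node reporting agents at a regular interval, may be sketched as follows. All class and attribute names are hypothetical and do not appear in the claims or in GANGULY:

```python
class MonitoringAgent:
    """Stands in for a per-VCI metric collecting and reporting agent."""

    def __init__(self, vci_name):
        self.vci_name = vci_name
        self._tick = 0

    def read_metrics(self):
        # A real agent would expose metrics over an API; a value is
        # fabricated here purely for illustration.
        self._tick += 1
        return {"vci": self.vci_name, "cpu_percent": 10 * self._tick}


class AdapterInstance:
    """Scrapes all registered agents at a regular interval (cf. the
    default 15-second scraping interval in GANGULY Para. [0080])."""

    def __init__(self, agents, interval_s=15):
        self.agents = agents
        self.interval_s = interval_s
        self.store = []  # stands in for the local TSDB of Para. [0080]

    def scrape_once(self):
        # One scrape cycle: pull current metrics from every agent.
        for agent in self.agents:
            self.store.append(agent.read_metrics())


adapter = AdapterInstance([MonitoringAgent("vci-1"), MonitoringAgent("vci-2")])
adapter.scrape_once()
print([m["vci"] for m in adapter.store])  # ['vci-1', 'vci-2']
```

The sketch only mirrors the pull-based collection pattern described in the cited paragraphs; the reference itself relies on a Prometheus/Telegraf/Grafana stack.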
Regarding claims (9 and 16), the aforementioned claims recite similar limitations to claim 1, and are therefore rejected for similar reasons as discussed above.
Regarding claim 2, GANGULY teaches the limitations of claim 1. Further, GANGULY teaches sending the performance metrics associated with the first VCI and the second VCI to a monitoring application for monitoring and troubleshooting the first VCI and the second VCI (GANGULY Fig. 10, Para. [0080]: “VIM-MON architecture 1000 may be implemented through a number of containers and processes deployed by a VIM installer when an option for VIM-MON architecture 1000 is enabled. As noted above, metrics may be collected by metric collecting and reporting agents 1030a-j. There is one metric collecting and reporting agent 1030a-j per node in VIM POD 1020. Most metrics are local; some metrics are collected using remote APIs (e.g. the metric collecting and reporting agent 1030a on the management node 1010 collects OpenStack metrics using the OpenStack API). These metrics are then read (“scraped”) by the event monitoring server 1011 running on the management node 1010 at regular intervals (e.g., a default scraping interval of 15 seconds).”; and
Fig. 10, Para. [0081]: “These incoming metrics are also used by the event monitoring server 1011 to evaluate alerting rules (which are rules based on the value of metrics using programmable expressions and that are stored in configuration and alert rules database 1015). When an alerting rule becomes active, an alert is created in the pending state. When a pending alert remains pending for a certain duration, it will become firing.”,
the examiner notes that the metrics collected and read by the central event monitoring server running on the management node correspond to sending the performance metrics to a monitoring application. Further, the reference discloses that these incoming metrics are also used by the event monitoring server to evaluate alerting rules, corresponding to monitoring and troubleshooting).
Regarding claim 4, GANGULY teaches the limitations of claim 1. Further, GANGULY teaches wherein collecting the performance metrics comprises:
generating a global queue corresponding to the first virtual infrastructure and the second virtual infrastructure in the adapter instance (GANGULY Fig. 13/Fig. 14, Para. [0094]: “Each of controller nodes 1325a-c includes an HA proxy 1320a-c, and a metric collecting and reporting agent 1310a-c. Included in each of metric collecting and reporting agents 1310a-c are a metric collecting and reporting proxy (T-proxy) 1330a-c and an accompanying T-proxy port 1332a-c, an event monitoring output plugin 1335a-c with accompanying ports 1337a-c, and caches 1350a-c. Each metric collecting and reporting agent 1310a-c also includes one or more input plugins 1355a-i. Non-controller node 1325d includes an event monitoring output plugin 1335d with an accompanying port 1337d and cache 1350d. Metric collecting and reporting agent 1310d includes one or more input plugins 1355j and 1335k, but lacks a T-proxy and its accompanying port. Non-controller node 1325d also lacks an HA proxy.”, the examiner notes that the reference’s proxy port for metric collecting and reporting corresponds to the claimed global queue); and
collecting, using the adapter instance, the performance metrics associated with the first VCI and the second VCI from the first monitoring agent and the second monitoring agent, respectively, into the global queue (GANGULY Fig. 13/Fig. 14, Para. [0094]: “Each of controller nodes 1325a-c includes an HA proxy 1320a-c, and a metric collecting and reporting agent 1310a-c. Included in each of metric collecting and reporting agents 1310a-c are a metric collecting and reporting proxy (T-proxy) 1330a-c and an accompanying T-proxy port 1332a-c, an event monitoring output plugin 1335a-c with accompanying ports 1337a-c, and caches 1350a-c. Each metric collecting and reporting agent 1310a-c also includes one or more input plugins 1355a-i. Non-controller node 1325d includes an event monitoring output plugin 1335d with an accompanying port 1337d and cache 1350d. Metric collecting and reporting agent 1310d includes one or more input plugins 1355j and 1335k, but lacks a T-proxy and its accompanying port. Non-controller node 1325d also lacks an HA proxy.”,
the examiner notes that the reference’s proxy port for metric collecting and reporting corresponds to the claimed global queue).
Regarding claims (11 and 18), the aforementioned claims recite similar limitations to claim 4, and are therefore rejected for similar reasons as discussed above.
Regarding claim 5, GANGULY teaches the limitations of claim 1. Further, GANGULY teaches wherein collecting the performance metrics comprises:
generating a first queue and a second queue corresponding to the first virtual infrastructure and the second virtual infrastructure, respectively, in the adapter instance (GANGULY Fig. 13/Fig. 14, Para. [0094]: “Each of controller nodes 1325a-c includes an HA proxy 1320a-c, and a metric collecting and reporting agent 1310a-c. Included in each of metric collecting and reporting agents 1310a-c are a metric collecting and reporting proxy (T-proxy) 1330a-c and an accompanying T-proxy port 1332a-c, an event monitoring output plugin 1335a-c with accompanying ports 1337a-c, and caches 1350a-c. Each metric collecting and reporting agent 1310a-c also includes one or more input plugins 1355a-i.”);
collecting, using the adapter instance, the performance metrics associated with the first VCI from the first monitoring agent into the first queue (GANGULY Fig. 13/Fig. 14, Para. [0094]: “Each of controller nodes 1325a-c includes an HA proxy 1320a-c, and a metric collecting and reporting agent 1310a-c. Included in each of metric collecting and reporting agents 1310a-c are a metric collecting and reporting proxy (T-proxy) 1330a-c and an accompanying T-proxy port 1332a-c, an event monitoring output plugin 1335a-c with accompanying ports 1337a-c, and caches 1350a-c. Each metric collecting and reporting agent 1310a-c also includes one or more input plugins 1355a-i. Non-controller node 1325d includes an event monitoring output plugin 1335d with an accompanying port 1337d and cache 1350d. Metric collecting and reporting agent 1310d includes one or more input plugins 1355j and 1335k, but lacks a T-proxy and its accompanying port. Non-controller node 1325d also lacks an HA proxy.”,
the examiner notes that the reference’s output plugins (1335a-c) with accompanying ports (1337a-c) for metric collecting and reporting correspond to the claimed performance metric queues); and
collecting, using the adapter instance, the performance metrics associated with the second VCI from the second monitoring agent into the second queue (GANGULY Fig. 13/Fig. 14, Para. [0094]: “Each of controller nodes 1325a-c includes an HA proxy 1320a-c, and a metric collecting and reporting agent 1310a-c. Included in each of metric collecting and reporting agents 1310a-c are a metric collecting and reporting proxy (T-proxy) 1330a-c and an accompanying T-proxy port 1332a-c, an event monitoring output plugin 1335a-c with accompanying ports 1337a-c, and caches 1350a-c. Each metric collecting and reporting agent 1310a-c also includes one or more input plugins 1355a-i. Non-controller node 1325d includes an event monitoring output plugin 1335d with an accompanying port 1337d and cache 1350d. Metric collecting and reporting agent 1310d includes one or more input plugins 1355j and 1335k, but lacks a T-proxy and its accompanying port. Non-controller node 1325d also lacks an HA proxy.”,
the examiner notes that the reference’s output plugins (1335a-c) with accompanying ports (1337a-c) for metric collecting and reporting correspond to the claimed performance metric queues).
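As illustrative context only (forming no part of the rejection), the distinction between claim 4 (a single global queue for both virtual infrastructures) and claim 5 (a separate queue per virtual infrastructure) may be sketched as follows. The identifiers are hypothetical and not drawn from the claims or from GANGULY:

```python
from collections import deque

# Claim 4 style: one global queue shared by both virtual infrastructures.
global_queue = deque()

# Claim 5 style: one dedicated queue per virtual infrastructure.
queues = {"infra-1": deque(), "infra-2": deque()}


def collect(sample, infra, use_global=False):
    """Route a metric sample either into the shared global queue or into
    the queue dedicated to its virtual infrastructure."""
    if use_global:
        global_queue.append((infra, sample))
    else:
        queues[infra].append(sample)


collect({"cpu": 12}, "infra-1")                  # per-infrastructure queue
collect({"cpu": 34}, "infra-2")                  # per-infrastructure queue
collect({"cpu": 56}, "infra-1", use_global=True)  # shared global queue
print(len(queues["infra-1"]), len(queues["infra-2"]), len(global_queue))  # 1 1 1
```

The sketch is offered solely to make the claimed queue topologies concrete; GANGULY's cited disclosure instead describes proxy ports and output plugins on the reporting agents.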
Regarding claim 12, the aforementioned claim recites similar limitations to claim 5, and is therefore rejected for similar reasons as discussed above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6-8, 13-15 and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication US 2021/0271506 A1 to Ganguly et al. (hereinafter “GANGULY”) in view of US Patent Application Publication US 2017/0222981 A1 to Srivastav et al. (hereinafter “SRIVASTAV”).
Regarding claim 6, GANGULY teaches the limitations of claim 1.
However, GANGULY does not explicitly teach wherein performing the bootstrapping process of the first VCI comprises: in response to initiating the bootstrapping process of the first VCI based on the request, retrieving a first security certificate of the first virtual infrastructure from a virtual infrastructure management adapter deployed in the first VIM; and performing, using the first security certificate, the bootstrapping process of the first VCI running in the first virtual infrastructure.
But SRIVASTAV teaches wherein performing the bootstrapping process of the first VCI comprises: in response to initiating the bootstrapping process of the first VCI based on the request, retrieving a first security certificate of the first virtual infrastructure from a virtual infrastructure management adapter deployed in the first VIM (SRIVASTAV Fig 3, Para. [0025]: “…, FIG. 3 shows the configuration of client and server instances of virtual machines. The client virtual machine instance 330 stores a generated private key 334 (e.g., generated according to a Rivest, Shamir, Adleman (RSA) algorithm) and a signed digital signature certificate 336, which is obtained through dynamic enrollment with EST server 310. Additionally, the trust store 332 may be provisioned during the virtual machine security bootstrapping. The trust store 332 in the client virtual machine instance 330 stores the root CA certificate 322 used to validate communications channels using a mutual certificate based Transport Layer Security (TLS) session. In particular, the client virtual machine 330 may validate a TLS connection to the server virtual machine 340 using the mutual certificate based TLS session.”; and
performing, using the first security certificate, the bootstrapping process of the first VCI running in the first virtual infrastructure (SRIVASTAV Fig 3, Para. [0026]: “The server virtual machine instance 340 similarly stores a generated RSA private key 344 and a signed digital certificate 346, which is obtained through dynamic enrollment with the EST server 310. In addition to the techniques presented herein, the server virtual machine 340 may be provisioned during a virtual machine bootstrap with appropriate keys in the server trust store 342.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of GANGULY (disclosing methods for centralized management, provisioning and monitoring of cloud infrastructure) to include the teachings of SRIVASTAV (disclosing methods for security authentication key management in a distributed network environment) and arrive at a method that provides dynamic secure authentication of virtual machines ahead of deploying or bootstrapping those virtual machines. One of ordinary skill in the art would have been motivated to make this combination because virtual machines may be transient and instances may be booted to scale with demand for the service provided by a virtual machine; thus, rather than manually provisioning each instance with an individual digital certificate, each instance may be efficiently provided with the required authentication keys and certificate in the virtual machine image used to create each instance, as recognized by SRIVASTAV (Abstract, Para. [0002]-[0003], [0010]). In addition, GANGULY and SRIVASTAV are analogous art directed to the same field of endeavor of cloud resource allocation and management.
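As illustrative context only (forming no part of the rejection), the certificate-based bootstrapping flow discussed for claim 6, retrieving an infrastructure's security certificate and then bootstrapping a VCI with it, may be sketched as follows. All function and key names are hypothetical; SRIVASTAV's actual mechanism performs dynamic EST enrollment against a certificate authority:

```python
def retrieve_certificate(vim_adapter, infrastructure):
    """Stand-in for fetching the infrastructure's security certificate
    from the management adapter deployed in the VIM."""
    return vim_adapter["certificates"][infrastructure]


def bootstrap_vci(vci, certificate):
    """Stand-in for bootstrapping a VCI over a channel validated with the
    retrieved certificate (cf. the mutual-TLS validation described in
    SRIVASTAV Para. [0025])."""
    return {"vci": vci, "trust_store": [certificate], "bootstrapped": True}


vim_adapter = {"certificates": {"infra-1": "root-ca-cert-infra-1"}}
cert = retrieve_certificate(vim_adapter, "infra-1")
result = bootstrap_vci("vci-1", cert)
print(result["bootstrapped"], result["trust_store"][0])
# True root-ca-cert-infra-1
```

Because the same certificate is retrieved once per infrastructure, the flow also covers claim 7's reuse of the first security certificate when bootstrapping a further VCI in the same infrastructure.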
Regarding claims (13 and 19), the aforementioned claims recite similar limitations to claim 6, and are therefore rejected for similar reasons as discussed above.
Regarding claim 7, the combination of GANGULY and SRIVASTAV teaches the limitations of claim 6. Further, SRIVASTAV teaches receiving a request to bootstrap a third VCI running in the first virtual infrastructure (SRIVASTAV Fig 1, Para. [0027]: “…, a third party client may obtain a certificate from the registration authority 150 via a Web User Interface (WUI) signed by a Public API certificate authority. Each public interface may require its own chain of trust, i.e., a Public API certificate authority.”);
in response to initiating the bootstrapping process of the third VCI based on the request to bootstrap the third VCI, retrieving the first security certificate of the first virtual infrastructure from the virtual infrastructure management adapter deployed in the first VIM (SRIVASTAV Para. [0013]: “The techniques presented herein leverage the EST protocol to bootstrap virtual machines hosting client and service processes with a set of instance-specific PKI keys and certificates. These instance-specific key and certificates enable clients and services to communicate with each other in a secure fashion. Also presented herein are mechanisms to generate authorizations to specific service instances for third parties.”; and
Fig 3, Para. [0028]: “…, the certificates may be distributed throughout a customer's enterprise cloud network according to the following steps. Initially, an operator installs a certificate authority (e.g. CA 320) in the customer's enterprise cloud network. The certificate authority will provide the basis for dynamic certificate enrollment for bootstrapping operator-provided solution services as well as client virtual machine instances.”); and
performing, using the first security certificate, the bootstrapping process of the third VCI running in the first virtual infrastructure to install the third monitoring agent (SRIVASTAV Fig. 1, Para. [0016]: “The controller 110 selects appropriate computing resources 130 (e.g., processors, memory, network interfaces, etc.) to run the virtual machine. The computing resources 130 may run a plurality of virtual machines, such as virtual machines 132 and 134. A virtual machine 132 may be based on a virtual machine image 140, which may be stored in an image database in the cloud network system 100. The image database may be stored in memory of the controller 110 or in a separate database in the cloud network 100. Each of the virtual machines 132 and 134 can communicate with a registration authority (RA) 150 to obtain cryptographic material, such as keys and certificates, used in secure communication channels. The registration authority 150 may be running on another virtual machine within the cloud network system 100.”; and
Para. [0029]: “…, a bootstrapping utility (e.g. OpenStack Heat Template) operating on an administrative node in the customer's cloud network enables a newly instantiated virtual machine to perform certificate enrollment using the EST protocol. The bootstrapping utility provides queries to obtain the Private Root CA certificate and API-specific CA root certificates via an API.”).
Regarding claims (14 and 20), the aforementioned claims recite similar limitations to claim 7, and are therefore rejected for similar reasons as discussed above.
Regarding claim 8, GANGULY teaches the limitations of claim 1.
However, GANGULY does not explicitly teach wherein performing the bootstrapping process of the second VCI comprises: in response to initiating the bootstrapping process of the second VCI based on the request, retrieving a second security certificate of the second virtual infrastructure from a virtual infrastructure management adapter deployed in the second VIM; and performing, using the second security certificate, the bootstrapping process of the second VCI running in the second virtual infrastructure.
But SRIVASTAV teaches in response to initiating the bootstrapping process of the second VCI based on the request, retrieving a second security certificate of the second virtual infrastructure from a virtual infrastructure management adapter deployed in the second VIM (SRIVASTAV Para. [0013]: “The techniques presented herein leverage the EST protocol to bootstrap virtual machines hosting client and service processes with a set of instance-specific PKI keys and certificates. These instance-specific key and certificates enable clients and services to communicate with each other in a secure fashion. Also presented herein are mechanisms to generate authorizations to specific service instances for third parties.”; and
Fig 3, Para. [0028]: “…, the certificates may be distributed throughout a customer's enterprise cloud network according to the following steps. Initially, an operator installs a certificate authority (e.g. CA 320) in the customer's enterprise cloud network. The certificate authority will provide the basis for dynamic certificate enrollment for bootstrapping operator-provided solution services as well as client virtual machine instances.”); and
performing, using the second security certificate, the bootstrapping process of the second VCI running in the second virtual infrastructure (SRIVASTAV Fig. 1, Para. [0016]: “The controller 110 selects appropriate computing resources 130 (e.g., processors, memory, network interfaces, etc.) to run the virtual machine. The computing resources 130 may run a plurality of virtual machines, such as virtual machines 132 and 134. A virtual machine 132 may be based on a virtual machine image 140, which may be stored in an image database in the cloud network system 100. The image database may be stored in memory of the controller 110 or in a separate database in the cloud network 100. Each of the virtual machines 132 and 134 can communicate with a registration authority (RA) 150 to obtain cryptographic material, such as keys and certificates, used in secure communication channels. The registration authority 150 may be running on another virtual machine within the cloud network system 100.”; and
Para. [0029]: “…, a bootstrapping utility (e.g. OpenStack Heat Template) operating on an administrative node in the customer's cloud network enables a newly instantiated virtual machine to perform certificate enrollment using the EST protocol. The bootstrapping utility provides queries to obtain the Private Root CA certificate and API-specific CA root certificates via an API.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of GANGULY (disclosing methods for centralized management, provisioning and monitoring of cloud infrastructure) to include the teachings of SRIVASTAV (disclosing methods for security authentication key management in a distributed network environment) and arrive at a method that provides dynamic secure authentication of virtual machines ahead of deploying or bootstrapping those virtual machines. One of ordinary skill in the art would have been motivated to make this combination because virtual machines may be transient and instances may be booted to scale with demand for the service provided by a virtual machine; thus, rather than manually provisioning each instance with an individual digital certificate, each instance may be efficiently provided with the required authentication keys and certificate in the virtual machine image used to create each instance, as recognized by SRIVASTAV (Abstract, Para. [0002]-[0003], [0010]). In addition, GANGULY and SRIVASTAV are analogous art directed to the same field of endeavor of cloud resource allocation and management.
Regarding claims (15 and 21), the aforementioned claims recite similar limitations to claim 8, and are therefore rejected for similar reasons as discussed above.
Claims 3, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication US 2021/0271506 A1 to Ganguly et al. (hereinafter “GANGULY”) in view of Shelke et al. (hereinafter “SHELKE”).
Regarding claim 3, GANGULY teaches the limitations of claim 1.
However, GANGULY does not explicitly teach wherein performing the bootstrapping process of the first VCI and the second VCI to map the adapter instance to the first virtual infrastructure and the second virtual infrastructure comprises: determining whether the first virtual infrastructure and the second virtual infrastructure are mapped to the adapter instance; and in response to determining that the first virtual infrastructure, the second virtual infrastructure, or both are not mapped to the adapter instance, adding the first virtual infrastructure, the second virtual infrastructure, or both to the adapter instance.
But SHELKE teaches determining whether the first virtual infrastructure and the second virtual infrastructure are mapped to the adapter instance (SHELKE Fig. 4, Para. [0073]: “The example certification engine 412 examines the configuration of the virtual network system within the data center 302 and processes rules to determine if the virtual network deployment meets certification rules. The example certification rules may be policies and/or rules that confirm that the deployment. For example, the certification engine 412 may utilize test automation tools to validate the deployment of the virtual networking system. The rules and/or policies to be tested may be provided by the example administrator 306. For example, example certification rules include: [0074] Checking if all deployed appliances have retained configurations (e.g., IP addresses) [0075] Checking that there are no errors in logs [0076] Checking if all services from a Management Plane and/or Control Plane are up and running without any failures [0077] Checking if a Management Plane Cluster and/or Control Plane Cluster are stable [0078] Checking connectivity between the Management/Policy Plane …”; and
Fig. 4, Para. [0083]: “The example certification engine 412 may additionally or alternatively validate connectivity among components of the virtual network system. For example, the certification engine 412 may verify that a virtual network manager (e.g., NSX Manager, NSX Management Pack (MP)) can communicate with a central control plane (e.g., the NSX Central Control Plane (CCP)), may verify that the central control plane can communicate with hypervisors, may verify that the virtual network manager can communicate with hypervisors, may verify that the virtual network manager can communicate with edges devices of the network, etc.”); and
in response to determining that the first virtual infrastructure, the second virtual infrastructure, or both are not mapped to the adapter instance, adding the first virtual infrastructure, the second virtual infrastructure, or both to the adapter instance (SHELKE Fig. 4, Para. [0082]: “The certification engine 412 may additionally validate each appliance of the virtual network system. For example, the certification engine 412 query and/or otherwise check for proper operation of virtual network system services. The certification engine 412 may check versions of components of the virtual network system to confirm that the versions are up to date and/or that all versions are compatible based on a compatibility table. The certification engine 412 may check that resulting hardware and/or network configurations meet requested hardware and/or network configurations. The certification engine 412 may additionally analyze logs to ensure that there were no errors, may verify that log rotation is operating, may verify that command line interface command execute successfully, etc.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of GANGULY (disclosing methods for centralized management, provisioning, and monitoring of cloud infrastructure) to include the teachings of SHELKE (disclosing methods for successful deployment of virtual machines in a data center) and arrive at a method that provides a mechanism for the successful deployment of virtual machines. One of ordinary skill in the art would have been motivated to make this combination because providing a single interface that may be utilized for deploying and configuring various aspects of a virtual network system may reduce errors/mistakes when deploying virtual networking, thereby increasing the reliability of network system deployment, as recognized by SHELKE (Abstract, Para. [0003]-[0005], [0105]). In addition, GANGULY and SHELKE are analogous art directed to the same field of endeavor of virtual cloud resource management.
Regarding claims 10 and 17, the aforementioned claims recite limitations similar to those of claim 3, and are therefore rejected for similar reasons as discussed above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
YANG et al.; (US 20170116014 A1); "Methods for policy-based application monitoring in a virtualized environment, wherein an agent in a virtual machine (VM) and an application policy manager implement application monitoring and remediation, providing a user interface for defining policies for applications running on VMs, where each policy includes monitoring conditions and remediation for the monitoring conditions."
Thiyagarajah et al.; (US 20170177377 A1); "Methods for starting application processors of a virtual machine, wherein a bootstrap processor startup module bootstraps a virtual machine while the virtual machine is being started up."
SINGH et al.; (US 20150378765 A1); “Methods to scale application deployments in cloud computing environments using virtual machine pools, wherein a VM deployment monitor monitors deployment environments and determines whether to initiate a scaling operation and to what extent.”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Zuheir A Mheir whose telephone number is (571)272-4151. The examiner can normally be reached on Monday - Friday 9:00 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ajay Bhatia can be reached on (571)272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
2/6/2026
/ZUHEIR A MHEIR/Patent Examiner, Art Unit 2156
/PIERRE VITAL/Supervisory Patent Examiner, Art Unit 2198