DETAILED ACTION
This Office Action is in response to the claims filed on 01/22/2026.
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see page 10 of Applicant’s Remarks filed 01/22/2026, with respect to the objection to the specification have been fully considered and are persuasive. The objection set forth in the Office Action of 10/22/2025 has been withdrawn.
Applicant’s arguments filed 01/22/2026 have been fully considered but they are not persuasive. Applicant argues in substance:
(a) Claims 2, 4, 5, and 7 are provisionally rejected on the ground of non-statutory double patenting as allegedly being unpatentable over U.S. Application No. 18/308,718. Withdrawal of this rejection is respectfully requested for at least the following reasons.
Applicant respectfully requests that the provisional non-statutory double patenting rejection be held in abeyance because the instant application and U.S. Application No. 18/308,718 are both pending. Also, the independent claims of the instant application are not provisionally rejected on the ground of non-statutory double patenting, and thus the independent claims and the claims depending therefrom are believed to be patentably distinct from the claims of U.S. Application No. 18/308,718.
Therefore, withdrawal of the rejection is respectfully requested.
With regard to point (a), Examiner has considered Applicant’s remarks. However, with respect to the asserted patentable distinction of the independent and dependent claims, the claims identified in the previous Office Action, mailed 10/22/2025, have been found not to be patentably distinct from the reference claims, and the rejection is therefore maintained. Additionally, because the basis for the rejection remains unchanged and no terminal disclaimer has been filed, the non-statutory double patenting rejection raised in the previous Office Action, mailed 10/22/2025, continues to stand and is reiterated below.
Applicant’s arguments with respect to claims 1, 8, and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 2, 4, 5, and 7 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 3, 5, 7, and 11 of co-pending Application No. 18/308,718 (hereinafter ’718) in view of CaraDonna et al., Pub. No. US 2018/0173554 A1 (hereinafter CaraDonna). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of ’718 are narrower in scope and, in view of CaraDonna, would be recognized by a person of ordinary skill in the art as an obvious variant. Further, instant claim 7 recites “implementing a failback procedure for hosting a new instance” whereas claim 7 of ’718 recites “executing a failback operation for hosting.” As used herein, the terms “procedure” and “operation” are used interchangeably to refer to a sequence of steps, and the terms “implementing” and “executing” are synonymous in describing the performance of such steps. Additionally, it is implicit that “hosting” encompasses the initiation of a new virtual machine instance, and thus the scope is commensurate.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Instant Application:
2. The method of claim 1, comprising:
converting the snapshot from a backup format, used by a cloud backup process to back up the snapshot to the storage bucket, to a virtual machine disk file format; and
invoking the cloud import API to convert the boot virtual machine disk from the virtual machine disk format to the second virtual machine format.

Co-pending Application No. 18/308,718:
11. The method of claim 1, wherein the disaster recovery orchestration process comprises:
converting the snapshot from a backup format, used by a cloud backup process to back up the snapshot to the storage bucket, to a virtual machine disk format.

Instant Application:
4. The method of claim 1, comprising:
mounting the destination data virtual machine disk from a cloud volume hosted within the cloud storage environment for access by the destination virtual machine.

Co-pending Application No. 18/308,718:
3. The method of claim 1, comprising:
mounting the destination data virtual machine disk from a cloud volume hosted within the cloud storage environment for access by the destination virtual machine.

Instant Application:
5. The method of claim 1, comprising:
during failover operation of the destination virtual machine, creating and storing incremental snapshots of the destination virtual machine into the storage bucket to capture changes made to the destination data virtual disk by the destination virtual machine.

Co-pending Application No. 18/308,718:
5. The method of claim 1, comprising:
during failover operation of the destination virtual machine, storing incremental snapshots of the destination virtual machine into the storage bucket, wherein the incremental snapshots capture changes made to the destination data virtual disk by the destination virtual machine.

Instant Application:
7. The method of claim 1, comprising:
in response to implementing a failback procedure for hosting a new instance of the primary virtual machine using a restored boot virtual machine disk for providing access to a restore data virtual machine disk, deleting the destination virtual machine to free resources consumed by the destination virtual machine.

Co-pending Application No. 18/308,718:
7. The method of claim 1, comprising:
in response to executing a failback operation for hosting the primary virtual machine using a restored boot virtual machine disk for providing access to a restore data virtual machine disk, deleting the destination virtual machine that free resources consumed by the destination virtual machine.
’718 does not explicitly disclose “invoking the cloud import API to convert the boot virtual machine disk from the virtual machine disk format to the second virtual machine format.”
However, CaraDonna teaches invoking the cloud import API to convert the boot virtual machine disk ([0014], With application programming interface function calls and/or scripted task automation and configuration management commands, different applications and tools are coordinated to convert the boot disks into widely adopted storage representations, to instantiate VMs in the destination environment; [0024], Returning to FIG. 1, the orchestrator 120 invokes calls to a function(s) defined by an API of the cloud service provider of the cloud site 113 to instantiate the virtual machines 127A-127D in the virtual environment Y) from the virtual machine disk format to the second virtual machine format ([0047], At block 507, orchestrator uses sub-file cloning on each virtual disk of the select virtual disk set to convert each virtual disk into a LUN with the boot data. The orchestrator creates the LUN without virtual disk metadata that can be incompatible in the foreign VM environment).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of CaraDonna to the teachings of ’718 in order to provide a method that teaches conversion of a snapshot to a second virtual machine disk format. The modification would have been motivated by the desire to simplify the process of migrating virtual machines between different virtualized environments, as is commonly done in disaster recovery operations.
’718 does not explicitly disclose “creating and storing incremental snapshots.”
However, CaraDonna further teaches creating and storing incremental snapshots ([0019], For instance, replication relationships can be created with incremental snapshots to capture updates).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of CaraDonna to the teachings of ’718 in order to provide a method that teaches creating and storing incremental snapshots. The modification would have been motivated by the desire to maintain the latest version of a virtual machine for use in migration.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 8, 15, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over CaraDonna et al., Pub. No. US 2018/0173554 A1 (hereinafter CaraDonna), in view of Naidu et al., Pub. No. US 2020/0409803 A1 (hereinafter Naidu), and further in view of Vatnikov et al., Pub. No. US 2015/0341221 A1 (hereinafter Vatnikov).
With regard to claim 1, CaraDonna teaches a method comprising ([0011], The description that follows includes example systems, methods, techniques, and program flows that embody embodiments of the disclosure):
[replicating storage volumes] backing up snapshots, capturing states ([0014], A copy of a source logical storage container with multiple virtual disk of VMs can be created in a public cloud destination) of a boot virtual machine disk and a data virtual disk of a primary virtual machine ([0022], a virtual machine can be associated with multiple virtual disks including a boot disk and one or more data disks) hosted on premise by a first hypervisor that supports a first virtual machine format ([0014], The application leverages the underlying data of the virtualized system that has been replicated at the storage layer. This avoids the inefficiencies of migrating or exporting virtual machines at the guest operating system layer and allows for enterprise scale shift of VMs at a primary site (e.g., thousands of VMs) to a cloud site despite differences in virtualization technology), to a storage bucket of a cloud storage environment ([0019], To create the supporting data for the virtualized system in the destination virtual environment (i.e., cloud site 113) at a storage layer, data of the logical storage containers 103, 105 are replicated into the cloud site 113; [0028], virtual disks are replicated from a primary storage cluster to a secondary storage cluster. The primary storage cluster can include on-premise devices and/or private cloud devices. The secondary storage cluster can include private cloud resources and/or public cloud resources (Examiner notes: storage bucket of cloud storage environment)); and
in response to the primary virtual machine experiencing a failure triggering a disaster recovery orchestration process that includes ([0030], At block 401, an orchestrator detects a trigger. A trigger can be detection of a failure at a primary site, request to test a failover plan, moving a virtualized system into a different virtualization environment, etc.):
converting the boot virtual machine disk ([0014], With application programming interface function calls and/or scripted task automation and configuration management commands, different applications and tools are coordinated to convert the boot disks into widely adopted storage representations, to instantiate VMs in the destination environment; [0024], Returning to FIG. 1, the orchestrator 120 invokes calls to a function(s) defined by an API of the cloud service provider of the cloud site 113 to instantiate the virtual machines 127A-127D in the virtual environment Y), captured by a snapshot of the primary virtual machine, from the first virtual machine format to a second virtual machine format supported by a second hypervisor of the cloud storage environment for a destination boot virtual machine disk ([0033], At block 405, the orchestrator begins processing each of the virtual disks in the selected logical storage container for conversion. The orchestrator processes the virtual disks based on virtual disk type), …
performing a restore operation to utilize the snapshot of the primary virtual machine to create a destination data virtual machine disk for use by the destination virtual machine ([0036], At block 409, the orchestrator deploys a VM in the destination VM environment based on a VM template or pre-configured/prepopulated cloud compute instance. Prior to this deployment, VMs do not need to be powered on. This avoids incurring the costs of maintaining powered on VMs until wanted or needed. The VM template has been previously configured with minimal configuration information … The orchestrator can select an appropriate cloud compute instance or VM template based on configuration information of the source VM); and
booting the destination virtual machine through the second hypervisor using the modified network details and the destination boot virtual machine disk for providing access to data within the destination data virtual machine disk ([0037], At block 413, the orchestrator attaches the logical storage target(s) converted from virtual disk(s) to the deployed VM and chain loads the boot logical storage target into the VM. The deployed VM includes a boot loader programmed to boot the logical storage target).
CaraDonna reasonably teaches the creation and replication of virtual machines comprising virtual machine disks onto storage volumes for use in heterogeneous virtualized environments. However, CaraDonna does not explicitly characterize the storage volumes of the virtual machine as snapshots, or the replication of those storage volumes as backups.
Naidu teaches creating snapshots ([0017] Accordingly, as provided herein, an agent component, a virtual machine agent (e.g., a virtual machine proxy), and a storage agent (e.g., a storage proxy) are implemented in order to provide the computing environment with data protection and storage functionality; [0018], the virtual machine agent can interact with APIs of the virtual machine management platform in order to invoke the virtual machine management platform to create a snapshot of the virtual machine hosted by a hypervisor within the computing environment; [0019], The agent components uses the storage agent to write the metafile from the virtual machine agent into storage of the storage environment, such as within a volume of the storage environment. Also, the storage agent receives the snapshot data, destined for the storage environment) …;
backing up the snapshots ([0019], In this way, snapshot data of the snapshot is transferred to the storage environment in a format understood and interpretable by the snapshot management service and the storage of the storage environment; [0025], The first node 130 may replicate the data and/or operations to other computing devices, such as to the second node 132, the third node 136, a storage virtual machine executing within the distributed computing platform 102, etc., so that the one or more replicas of the data are maintained. For example, the third node 136 may host a destination storage volume that is maintained as a replica of a source storage volume (Examiner notes: snapshot of virtual machine) of the first node 130. Such replicas can be used for disaster recovery and failover).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Naidu to the teachings of CaraDonna in order to provide a method that teaches storage volumes as snapshots and replication of volumes as snapshot backups. The motivation for combining Naidu with CaraDonna is the simple substitution of the storage volumes of CaraDonna with the snapshots of Naidu, because both are known elements for preserving the state of virtual machines, such that the substitution would reasonably be expected to yield predictable results. The combination likewise substitutes backups for replication of virtual machine volumes, as both are known methods for storing and maintaining virtual machine data, again with a reasonable expectation of predictable results. CaraDonna and Naidu are analogous art directed towards hypervisor-specific management and virtual machine handling. Therefore, it would have been obvious to one of ordinary skill in the art to combine Naidu with CaraDonna to teach the claimed invention in order to apply the well-known practice of preserving the state of virtual machines through snapshot backups.
However, CaraDonna and Naidu do not explicitly teach modification of network details and booting of a virtual machine in association with the modified network details.
Vatnikov teaches wherein the converting modifies network details enabling a host operating system to be discovered during boot to create modified network details ([0014], Illustratively, in IP mapper object 120 the IP mapping rule may be defined as “each statically configured vNIC found at protected site’s Network-P and IP subnet 10.17.186.0/24 and failing over to recovery site Network-R shall be placed at subnet 10.17.187.0/24 and use gateway 10.17.187.1 and DNS server 10.17.187.3”, where the predicate for this rule defines the conditions that the network attachment and subnet/parameters need to match, namely the vNIC being connected to protected site’s Network-P and configured at 10.17.186.0/24 and failing over to Network-R … If this predicate is satisfied, then the site recovery application resolves new parameter value for reconfiguring the VM, according to the rule’s resolver function … After determining these parameter values, the site recover application generates (or modifies) a configuration script with the determined parameter values, and this script is then executed on the VM to apply the IP customization);
booting the destination virtual machine through the second hypervisor (FIG. 2, VM 111B replicated on recovery site 102; [0019], FIG. 3 illustrates a method 300 for recovering a virtual machine with IP customization, according to an embodiment. As shown, method 300 begins at step 302, where the VM has been recovered at the recovery site. At this point, files associated with the VM have been replicated to the recovery site, and other recovery steps, such as restoring the virtual disks of the VMs, may have been taken. In one embodiment, the VM may be ready to be powered on at the recovery site.) using the modified network details ([0021], At step 314, the site recovery application determines whether an IP mapper object exists for the VM’s networks … If one or more IP mapper objects are identified, then the recovered VM is powered on at step 316 and at step 318, the site recovery application retrieves guest OS TCP/IP settings for IP customization purposes).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Vatnikov to the teachings of CaraDonna and Naidu in order to provide a method that teaches modification of network details following a failover of a virtual machine from a source to a destination virtualization environment, and booting that virtual machine using the modified network details. The motivation for combining Vatnikov with CaraDonna and Naidu is to provide a method that allows dynamic resolution of network functionality for virtual machines undergoing disaster recovery, thereby enabling streamlined network reconfiguration that improves manageability and reduces downtime (Vatnikov, [0007]). CaraDonna, Naidu, and Vatnikov are analogous art directed towards hypervisor-specific management and networking arrangement. Therefore, it would have been obvious to one of ordinary skill in the art to combine Vatnikov with CaraDonna and Naidu to teach the claimed invention in order to provide a method enabling resolution, modification, and deployment of network configuration on virtual machines undergoing disaster recovery from a source to a destination virtualization environment.
With regard to claim 2, CaraDonna teaches converting the snapshot from a backup format, used by a cloud backup process to back up the snapshot to the storage bucket, to a virtual machine disk file format ([0017], Logical storage containers 103, 105 contain the data underling the virtual stacks 109-112. The logical storage container 103 (e.g., a volume) contains virtual disks 105A-105N. Each of the virtual disks 105A-105N includes boot data for corresponding virtual stacks (Examiner notes: such that the virtual machine disks can be extracted from the snapshot backup volume format upon invocation));
invoking the cloud import API to convert the boot virtual machine disk ([0014], With application programming interface function calls and/or scripted task automation and configuration management commands, different applications and tools are coordinated to convert the boot disks into widely adopted storage representations, to instantiate VMs in the destination environment) from the virtual machine disk file format to the second virtual machine format (Fig. 2, 507 Use sub-file cloning on each virtual disk of the set to create a LUN for the virtual disk without the metadata of the virtual disk; [0047], At block 507, orchestrator uses sub-file cloning on each virtual disk of the select virtual disk set to convert each virtual disk into a LUN with the boot data. The orchestrator creates the LUN without virtual disk metadata that can be incompatible in the foreign VM environment).
With regard to claim 3, CaraDonna teaches in response to performing a backup operation to back up a new snapshot of the primary virtual machine to the storage bucket ([0019], At stage 1a, the logical storage container 103 is replicated to the logical storage container 115 … These replication relationships can be created directly with data management software or via a cloud DR orchestrator 120), converting a virtual machine disk file of the new snapshot to a new destination virtual machine disk having the second virtual machine format, wherein the new destination virtual machine disk is available for performing a subsequent disaster recovery orchestration process ([0021], Based on detection of a disaster recovery trigger, the cloud DR orchestrator 120 orchestrates operations that carry out the failover from the storage layer into the foreign virtualization environment of cloud site 113. Stages 3-5 encompass the orchestration of failover operations by the cloud DR orchestrator 120; [0022], Over stages 3a-3d, the orchestrator 120 converts virtual disks from the primary site 101 into LUNs the cloud site 113); and
CaraDonna reasonably teaches deletion of VM metadata after conversion (CaraDonna, [0022]). However, CaraDonna does not explicitly teach deletion of the prior virtual machine disk snapshot used for disaster recovery.
Naidu teaches deleting a prior destination virtual machine disk ([0022], Once the restore is finished, the snapshot at the computing environment (the common snapshot at the computing environment) may be deleted).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Naidu to the teachings of CaraDonna and Vatnikov in order to provide a method that teaches deletion of a prior destination virtual machine disk. The motivation for combining Naidu with CaraDonna and Vatnikov is to provide a method that allows for resource and cost savings by reducing the maintenance overhead of storing obsolete snapshots (Naidu, [0022]). CaraDonna, Vatnikov, and Naidu are analogous art directed towards hypervisor-specific management and virtual machine handling. Therefore, it would have been obvious to one of ordinary skill in the art to combine Naidu with CaraDonna and Vatnikov to teach the claimed invention in order to provide cost and performance benefits by removing prior snapshot maintenance overhead.
With regard to claim 4, CaraDonna teaches mounting the destination data virtual machine disk from a cloud volume hosted within the cloud storage environment for access by the destination virtual machine ([0041], FIG. 5 is a flowchart of example operations of failing over a virtualized system to a foreign virtualization environment of a cloud site; [0049], At block 511, the orchestrator attaches the LUN(s) to the deployed VM. The orchestrator also calls a function to mount the boot LUN (i.e., the Lun converted from the virtual boot disk). This allows the deployed VM to issue I/O commands, such as small computer system interface (SCSI) commands to the LUN).
With regard to claim 5, CaraDonna teaches during failover operation of the destination virtual machine ([0021], Based on detection of a disaster recovery trigger, the cloud DR orchestrator 120 orchestrates operations that carry out the failover from the storage layer into the foreign virtualization environment of cloud site 113).
CaraDonna reasonably teaches that replication relationship of storage volumes between the primary site and cloud site can include incremental snapshots to capture updates (CaraDonna, [0019]). However, CaraDonna does not explicitly describe the method of creating and storing incremental snapshots through capturing updated virtual disk data in a cloud storage environment.
Naidu teaches creating and storing incremental snapshots of the destination virtual machine into the storage bucket to capture changes made to the destination data virtual disk by the destination virtual machine ([0023], Performing the incremental restore of the virtual machine conserves computing resources and network bandwidth by transferring data of the incremental snapshot from the computing environment to the storage environment in order to establish the common snapshots (e.g., the transferred data of the snapshot may merely comprises incremental backup data corresponding to changes to the virtual machine since a last backup) and then transferring the data difference from the storage environment to the computing environment).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Naidu to the teachings of CaraDonna and Vatnikov in order to provide a method that teaches the creation and storage of incremental snapshots. The motivation for combining Naidu with CaraDonna and Vatnikov is to provide a method that allows for transferring incremental updates applied to virtual machines, such that the increments conserve computing resources and network bandwidth relative to full backups containing redundant data (Naidu, [0023]). CaraDonna, Vatnikov, and Naidu are analogous art directed towards hypervisor-specific management and virtual machine handling. Therefore, it would have been obvious to one of ordinary skill in the art to combine Naidu with CaraDonna and Vatnikov to teach the claimed invention in order to provide more efficient use of computing resources and network bandwidth by performing incremental backups of virtual machines.
With regard to claim 6, CaraDonna teaches in response to determining that the primary virtual machine has recovered, implementing a failback procedure that includes ([0054], At some point, the virtualized system likely fails back to the source site. The management layer will determine that the source site has recovered and/or become available for hosting the virtualized system):
invoking a cloud export API to convert an incremental snapshot of the destination virtual machine from the second virtual machine format to the first virtual machine format as a virtual machine disk file ([0054], Since the logical storage targets refer to the data ranges (blocks or extents) of the logical storage containers in the cloud DR site, changes made to the underlying data during failover can be replicated back to the source site at the storage layer. Orchestrator calls storage application defined functions to reverse the replication relationship between the logical storage containers of the different sites. Thus, the source and destinations are reversed and copies of the logical storage container in the cloud DR site are created at the source site (Examiner notes: such that the method of claim 1 is substantially applied here in the reverse of source and destination environments)); and
initializing the primary virtual machine using the virtual machine disk file ([0054], Since operating at the storage layer, the orchestrator can failback to the source/primary site by batch (e.g., an entire storage array supporting the logical storage containers) instead of by individual VMs. With the virtual disks back in the source VM environment, VMs can be instantiated from the virtual disks).
With regard to claim 8, CaraDonna teaches A computing device comprising ([0011], The description that follows includes example systems, methods, techniques, and program flows that embody embodiments of the disclosure; [0062], FIG. 6 depicts an example computer system with a storage layer based virtualization portability orchestrator):
a memory comprising instructions ([0057], A machine-readable storage medium may … store program code … the machine-readable medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM) … or any suitable combination of the foregoing; [0062], The computer system includes memory 607); and
a processor coupled to the memory ([0062], Although illustrated as being coupled to the bus 603, the memory 607 may be coupled to the processor 601), the processor configured to execute the instructions to cause the processor to perform operations comprising ([0060], Computer program code for carrying out aspects of the disclosure … may execute on a stand-alone machine, may execute in a distributed manner across multiple machines, and may execute on one machine while providing results and or accepting input on another machine). Claim 8 is a computing device having similar limitations as claim 1. Thus, claim 8 is rejected for the same rationale as applied to claim 1.
With regard to claim 15, CaraDonna teaches A non-transitory machine readable medium comprising instructions for performing a method, which when executed by a machine, causes the machine to perform operations comprising ([0061], The program code/instructions may also be stored in a machine-readable medium that can direct a machine to function in a particular manner such that the instructions stored in the machine-readable medium produce an article of manufacture including instructions which implement the function/act specified). Claim 15 is a non-transitory machine readable medium having similar limitations to claim 1. Thus, claim 15 is rejected for the same rationale as applied to claim 1.
With regard to claim 16, CaraDonna teaches wherein the operations comprise:
in response to the primary virtual machine experiencing a failure ([0021], Based on detection of a disaster recovery trigger), triggering a disaster recovery orchestration process that utilizes the destination machine disk for performing the restore operation to create a new destination virtual machine hosted by the second hypervisor of the cloud storage environment ([0021], the cloud DR orchestrator 120 orchestrates operations that carry out the failover from the storage layer into the foreign virtualization environment of the cloud site 113. Stages 3-5 encompass the orchestration of failover operations by the cloud DR orchestrator 120; [0016], To avoid overcomplicating the illustration, virtual machine monitors are not depicted (Examiner notes: it is implicit that a disaster recovery orchestration process restores a virtual machine upon a second hypervisor hosted in a cloud environment)).
With regard to claim 20, CaraDonna teaches wherein the operations comprise:
triggering the orchestration process based upon a request to migrate data to the cloud storage environment ([0026], Further disclosed technique can be used to port or migrate virtualized system into a foreign environment, for example from on-premise devices to public cloud, without a DR motivation; [0030], At block 401, an orchestrator detects a trigger. A trigger can be detection of a failure at a primary site, request to test a failover plan, moving a virtualized system into a different virtualization environment, etc.).
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over CaraDonna in view of Naidu in view of Vatnikov as applied to claims 1 and 16 above, and further in view of Srikantan et al. Patent No. US 11,036,419 B1 (hereinafter Srikantan).
With regard to claim 7, CaraDonna teaches in response to implementing a failback procedure for hosting a new instance of the primary virtual machine using a restored boot virtual machine disk for providing access to a restore data virtual machine disk ([0054], At some point, the virtualized system likely fails back to the source site. The management layer will determine that the source site has recovered and/or become available for hosting the virtualized system. The cloning used for converting virtual disks into logical storage targets for rapid failover also allows rapid failback),
CaraDonna reasonably teaches the implementation of a failback procedure to return and restore the virtual machine disk on a new instance located at the primary site. However, the combination does not explicitly teach deleting the original destination virtual machine in response to the failback procedure.
Srikantan teaches deleting the destination virtual machine to free resources consumed by the destination virtual machine (Col. 15, lines 29-32, In some embodiments, after failback virtual machine 705 is instantiated at the first cluster 702, the second cluster 704 is configured to delete the failover virtual machine 732 to conserve resources).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Srikantan with the teachings of CaraDonna, Naidu, and Vatnikov in order to provide a method that teaches deletion of the destination virtual machine to free resources consumed by the destination virtual machine. The motivation for applying Srikantan teaching with CaraDonna, Naidu, and Vatnikov teaching is to provide a method that allows for cost and resource savings through efficient and automatic management of virtual machines in the recovery event after a disaster (Srikantan, Col. 15). CaraDonna, Naidu, Vatnikov, and Srikantan are analogous art directed towards hypervisor-specific management incorporating the creating, deleting, and cloning of virtual machines. Therefore, it would have been obvious for one of ordinary skill in the art to combine Srikantan with CaraDonna, Naidu, and Vatnikov to teach the claimed invention in order to provide resource and cost saving measures through removal of the destination virtual machine.
With regard to claim 17, it is a non-transitory machine-readable medium having similar limitations as claim 7. Thus, claim 17 is rejected for the same rationale as applied to claim 7.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over CaraDonna in view of Naidu in view of Vatnikov as applied to claim 8 above, and further in view of Dailianas et al. Pub. No. US 2020/0314175 A1 (hereinafter Dailianas).
With regard to claim 9, Dailianas teaches wherein the request is a migration request to migrate client data, and wherein the operations comprise:
in response to receiving the request, evaluating one or more cloud provider charging models of available one or more cloud storage environments to select the cloud storage environment ([0011], Accordingly, in a first aspect, the invention relates to a computer-implemented method comprising … determining, after a predetermined period of time has passed since the selection or after the cost for running the workload has increased by at least a predetermined amount, a cost for running the workload on a second computational resource provider, wherein the first computational resource provider is an in-house resource provider and the second computational resource provider is a cloud-based service provider offering a plurality of templates each of the templates specifying a quantity of one or more resources selected from CPUs, memory, databases, network bandwidth, and/or input-output capacity) as a migration destination for migrating the client data hosted by the primary virtual machine to the destination virtual machine hosted within the cloud storage environment ([0011], moving the workload to the second provider if the utilization value exceeds a utilization value of continuing to host the workload on the first provider).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Dailianas with the teachings of CaraDonna, Naidu, and Vatnikov in order to provide a system that teaches evaluating cloud provider models to select the cloud environment for migration. The motivation for applying Dailianas teaching with CaraDonna, Naidu, and Vatnikov teaching is to provide a system that allows for improvements in availability and robustness of applications through the capability of shifting workloads to the cloud in fail-over situations (Dailianas, [0004]). CaraDonna, Naidu, Vatnikov, and Dailianas are analogous art directed towards distribution of virtual machines through migration and load balancing. Therefore, it would have been obvious for one of ordinary skill in the art to combine Dailianas with CaraDonna, Naidu, and Vatnikov to teach the claimed invention in order to provide cloud-based redundancy.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over CaraDonna in view of Naidu in view of Vatnikov as applied to claim 8 above, and further in view of Lu et al. Pub. No. US 2023/0315503 A1 (hereinafter Lu) in view of Ashok et al. Pub. No. US 2015/0373093 A1 (hereinafter Ashok).
With regard to claim 10, CaraDonna reasonably teaches the migration of virtual machines managed under virtual machine monitors between different virtual environments hosted by different providers (CaraDonna, [0016]). However, the combination does not explicitly teach a selection of hypervisors for virtual machine migration.
Lu teaches wherein the request is a migration request to migrate data, and wherein the operations comprise:
in response to receiving the request, … select the second hypervisor as a migration destination for migrating the client data hosted by the primary virtual machine to the destination virtual machine hosted by the second hypervisor ([0084], In some aspects, the data management system 410 may restore a virtual machine to a different hypervisor platform based on a request from a user … In the example of FIG. 4, the data management system 401 may display a list of target hypervisor platforms 405 including the hypervisor platform 405-b and the hypervisor platform 405-c via the user interface 420, and the user may select the hypervisor platform 405-b from the list, which may indicate a request to restore or backup a first virtual machine on the hypervisor platform 405-a to the hypervisor platform 405-b).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Lu with the teachings of CaraDonna, Naidu, and Vatnikov in order to provide a method that teaches the selection of a hypervisor for migration. The motivation for applying Lu teaching with CaraDonna, Naidu, and Vatnikov teaching is to provide a method that allows for restoration of virtual machines to reduce costs, provide diversity for improved security and reliability, or transfer computing resources between hypervisors (Lu, [0085]). CaraDonna, Naidu, Vatnikov, and Lu are analogous art directed towards hypervisor-specific management. Therefore, it would have been obvious for one of ordinary skill in the art to combine Lu with CaraDonna, Naidu, and Vatnikov to teach the claimed invention in order to provide a mechanism for selecting a particular hypervisor.
However, Lu does not explicitly teach the evaluation of hypervisor charging models.
Ashok teaches evaluating one or more hypervisor charging models of available one or more hypervisors ([0017], A ranking algorithm is then applied to a list of pools of compute nodes that are best suited for satisfying the attribute requirements of the application workload by comparing hypervisor characteristics of the pools of compute nodes with the attribute requirements of the application workload (Examiner notes: evaluation of the hypervisors). Each pool of compute nodes runs on a particular hypervisor platform which has unique combination of characteristics that correspond to a combination of a set of attribute requirements; [0067], Examples of the attributes of the application workload include, but not limited to, Central Processing Unit (CPU) capacity, memory capacity, disk capacity, network capacity, CPU performance, memory performance, disk performance, network performance, power consumption, cost of service (Examiner notes: charging model for the hypervisor platform)).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Ashok with the teachings of CaraDonna, Naidu, Vatnikov, and Lu in order to provide a system that incorporates a process for evaluating available hypervisor models. The motivation for applying Ashok teaching with CaraDonna, Naidu, Vatnikov, and Lu teaching is to provide a system that allows for a particular set of attributes to be associated with a hypervisor environment, thereby streamlining the selection of a hypervisor best suited for the specified requirements and its associated cost (Ashok, [0006]). CaraDonna, Naidu, Vatnikov, Lu, and Ashok are analogous art directed towards hypervisor-specific management. Therefore, it would have been obvious for one of ordinary skill in the art to combine Ashok with CaraDonna, Naidu, Vatnikov, and Lu to teach the claimed invention in order to provide a mechanism in which hypervisor models and their associated costs can be evaluated for a given workload.
Claims 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over CaraDonna in view of Naidu in view of Vatnikov as applied to claim 8 above, and further in view of Paraschiv et al. Patent No. US 10,725,885 B1 (hereinafter Paraschiv) in view of Lu.
With regard to claim 11, Paraschiv teaches wherein the operations comprise:
determining a current load of the primary virtual machine (Col. 3, lines 1-7, The VM load monitor obtains the VM load data via the dedicated communications channel, analyzes the VM load data, and makes VM management decisions (e.g., launch VM and/or migrate VM decisions) based on the analysis. The VM load data obtained by the VM load monitor is input to one or more analysis engines that execute on the VM load monitor);
in response to the current load reaching a high load threshold (Col. 15, lines 58-66, The load information received from VMLMs 952A-952n may be input to a host selection module 908 of the control plane process 906 that analyzes the load information and specific load requirements of VM(s) to be launched or migrated to select a host device 940 with a current or predicted load capacity that can support the workload of the VM(s) that need to be launched or migrated. As a non-limiting example, if a VM to be launched or migrated is computation-intensive), generating the request to specify the second hypervisor as a destination hypervisor for hosting the destination virtual machine (Col. 15-Col. 16, lines 66-67 and 1-4, then a host device 940 may be selected that has a low (or the lowest) current computation workload. As shown in FIG. 9B, at (4), once a target host device 940 is selected … the VM(s) may be launched on or migrated to the host device 940A; Fig. 12, Col. 18, lines 53-56, A hypervisor, or virtual machine monitor (VMM) 4122, on a host 412 presents the VMs 4124 on the host with a virtual platform (Examiner notes: host 4120A and 4120B of Fig. 12 contain separate hypervisors of VMM 4122A and 4122B wherein it is implicit that the migration request to a destination host device further includes the corresponding hypervisor)).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Paraschiv with the teachings of CaraDonna, Naidu, and Vatnikov in order to provide a system that teaches cross-site hypervisor load balancing of high load workloads. The motivation for applying Paraschiv teaching with CaraDonna, Naidu, and Vatnikov teaching is to provide a system that allows for migration of high-load workloads in order to effectively fulfill service-level agreements (Paraschiv, Col. 5). CaraDonna, Naidu, Vatnikov, and Paraschiv are analogous art directed towards distribution of virtual machines through migration and load balancing. Therefore, it would have been obvious for one of ordinary skill in the art to combine Paraschiv with CaraDonna, Naidu, and Vatnikov to teach the claimed invention in order to provide virtual machine load balancing across hypervisor environments upon high load thresholds.
Paraschiv teaches migration of virtual machine from a first to a second hypervisor based on a service level agreement between a customer and provider (Col. 11). However, Paraschiv does not explicitly recite the rationale of the second hypervisor providing better performance than the first hypervisor.
Lu teaches based upon the second hypervisor providing better performance than the first hypervisor ([0106], the data management system 610 may detect a condition of the first hypervisor platform 615, and the data management system 610 may determine to transfer the data and the metadata to the second virtual machine executing on the second hypervisor platform 620 based on detecting the condition … the first hypervisor platform 615 being subject to a ransomware attack, a disaster recovery condition for the first hypervisor platform 615, a malware infection of one or more virtual machines on the first hypervisor platform 615, some other condition indicative of a degraded performance of the first hypervisor, or any combination thereof (Examiner notes: wherein the second hypervisor provides better performance over the first)).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Lu with the teachings of CaraDonna, Naidu, Vatnikov, and Paraschiv in order to provide a system that teaches the destination hypervisor providing better performance than the source hypervisor. The motivation for applying Lu teaching with CaraDonna, Naidu, Vatnikov, and Paraschiv teaching is to provide a system that allows for improved reliability of the system such that the system would maintain performance quality and availability (Lu, [0074]). CaraDonna, Naidu, Vatnikov, Paraschiv, and Lu are analogous art directed towards distribution of virtual machines through migration and load balancing. Therefore, it would have been obvious for one of ordinary skill in the art to combine Lu with CaraDonna, Naidu, Vatnikov, and Paraschiv to teach the claimed invention in order to provide high reliability through maintaining performance quality and system availability.
With regard to claim 12, Paraschiv teaches wherein the operations comprise:
determining a current load of the primary virtual machine (Col. 3, lines 1-7, The VM load monitor obtains the VM load data via the dedicated communications channel, analyzes the VM load data, and makes VM management decisions (e.g., launch VM and/or migrate VM decisions) based on the analysis. The VM load data obtained by the VM load monitor is input to one or more analysis engines that execute on the VM load monitor);
in response to the current load being less than a low load threshold (Col. 13, lines 56-59, In some embodiments, the VMLM 752A may select VM(s) for migration that exhibited relatively low load statistics for a period of time when compared to other VMs executing on the host device 740A), generating the request to specify the second hypervisor as a destination hypervisor for hosting the destination virtual machine (Col. 13, lines 59-60, The relatively low load VM(s) 758 can be migrated; Col. 14, lines 15-18, Upon locating a suitable destination host device 740, at (3a) the VM migration process 706 may respond to the VM migration proposal indicating that the VM(s) 748 can be migrated. At (3b), the VM migration process 706 may perform the migration of the VM(s) 748 to the destination host device 740; Fig. 12, Col. 18, lines 53-56, A hypervisor, or virtual machine monitor (VMM) 4122, on a host 412 presents the VMs 4124 on the host with a virtual platform (Examiner notes: host 4120A and 4120B of Fig. 12 contain separate hypervisors of VMM 4122A and 4122B wherein it is implicit that the migration request to a destination host device further includes the corresponding hypervisor)).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Paraschiv with the teachings of CaraDonna, Naidu, and Vatnikov in order to provide a system that teaches cross-site hypervisor load balancing of low load workloads. The motivation for applying Paraschiv teaching with CaraDonna, Naidu, and Vatnikov teaching is to provide a system that allows for migration of low-load workloads in order to relinquish resources to high VM workloads and mitigate against customer impact (Paraschiv, Col. 12-Col. 13). CaraDonna, Naidu, Vatnikov, and Paraschiv are analogous art directed towards distribution of virtual machines through migration and load balancing. Therefore, it would have been obvious for one of ordinary skill in the art to combine Paraschiv with CaraDonna, Naidu, and Vatnikov to teach the claimed invention in order to provide virtual machine load balancing across hypervisor environments upon low load thresholds.
Paraschiv teaches migration of virtual machine from a first to a second hypervisor based on a service level agreement between a customer and provider (Col. 11). However, Paraschiv does not explicitly recite the rationale of the second hypervisor being less costly than the first hypervisor.
Lu teaches based upon the second hypervisor being less costly than the first hypervisor ([0086], In some aspects, primary instances of software and other applications for the user may run on a first hypervisor platform 405 that may be more expensive than other hypervisor platforms 405. To reduce costs, the user may request to utilize one or more cheaper hypervisor platforms as recovery sites. In such cases, the user may pay for less capacity on the first hypervisor platform 405 while maintaining backed-up or replicated copies of the virtual machine on the other hypervisor platforms 405).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Lu with the teachings of CaraDonna, Naidu, Vatnikov, and Paraschiv in order to provide a system that teaches a destination hypervisor being cheaper than the source hypervisor. The motivation for applying Lu teaching with CaraDonna, Naidu, Vatnikov, and Paraschiv teaching is to provide a system that allows for a customer to quickly recover from a disaster by maintaining an environment with sufficient resource capacity for virtual machine execution at a reduced expense (Lu, [0017]). CaraDonna, Naidu, Vatnikov, Paraschiv, and Lu are analogous art directed towards distribution of virtual machines through migration and load balancing. Therefore, it would have been obvious for one of ordinary skill in the art to combine Lu with CaraDonna, Naidu, Vatnikov, and Paraschiv to teach the claimed invention in order to provide a migration approach for adequate recovery that maintains sufficient resource capacity for virtual machine execution while minimizing expense.
With regard to claim 13, Paraschiv teaches wherein the operations comprise:
monitoring load of the primary virtual machine to identify load trends (Col. 5, lines 22-27, In some embodiments, the VM load monitor system may employ machine learning algorithms to detect patterns of load behavior for VMs or groups of VMs on the host devices over time. These patterns may be used to more accurately predict future VM load behaviors based on past VM load behaviors);
utilizing the load trends to predict an upcoming period of high load (Col. 11, lines 43-50, The analysis of the current load behavior in light of the previous load behavior may be used by the VM load monitor system to predict future behavior of the VMs 348 or VM group 349 in host device 340 in addition to determining current load behavior so that at least some VM management decisions can be made proactively; Col. 7, lines 37-40, Upon detecting that current or predicted load may exceed a threshold for the respective host device 14, the VMLM 152 may send a migration request to a control plane process 106); and
generating the request to specify the second hypervisor as a destination hypervisor for hosting the destination virtual machine during the upcoming period (Col. 13, lines 34-37, The VM load monitor system may proactively initiate live migrations of VMs from a source host device to a destination host device based on analysis of current or predicted VM load behavior on the source host device; Col. 14, lines 15-18, Upon locating a suitable destination host device 740, at (3a) the VM migration process 706 may respond to the VM migration proposal indicating that the VM(s) 748 can be migrated).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Paraschiv with the teachings of CaraDonna, Naidu, and Vatnikov in order to provide a system that teaches forecasting of high load workloads for cross-site hypervisor load balancing. The motivation for applying Paraschiv teaching with CaraDonna, Naidu, and Vatnikov teaching is to provide a system that allows for VM management decisions to be made proactively before the load behavior of a VM becomes problematic (Paraschiv, Col. 10). CaraDonna, Naidu, Vatnikov, and Paraschiv are analogous art directed towards distribution of virtual machines through migration and load balancing. Therefore, it would have been obvious for one of ordinary skill in the art to combine Paraschiv with CaraDonna, Naidu, and Vatnikov to teach the claimed invention in order to provide preemptive control over load balancing to mitigate against high load issues, such as over-utilization.
Paraschiv teaches migration of virtual machine from a first to a second hypervisor based on a service level agreement between a customer and provider (Paraschiv, Col. 11). However, Paraschiv does not explicitly recite the rationale of the second hypervisor providing better performance than the first hypervisor.
Lu teaches based upon the second hypervisor providing better performance than the first hypervisor ([0106], the data management system 610 may detect a condition of the first hypervisor platform 615, and the data management system 610 may determine to transfer the data and the metadata to the second virtual machine executing on the second hypervisor platform 620 based on detecting the condition … the first hypervisor platform 615 being subject to a ransomware attack, a disaster recovery condition for the first hypervisor platform 615, a malware infection of one or more virtual machines on the first hypervisor platform 615, some other condition indicative of a degraded performance of the first hypervisor, or any combination thereof (Examiner notes: wherein the second hypervisor provides better performance over the first)), which is substantially similar to claim 11 and is therefore rejected with similar rationale.
Examiner notes: It would be obvious for one of ordinary skill in the art to recognize that the system of claim 11 is being substantially recited again as limitations for the system of claim 13.
With regard to claim 14, Paraschiv teaches wherein the operations comprise:
monitoring load of the primary virtual machine to identify load trends (Col. 5, lines 22-27, In some embodiments, the VM load monitor system may employ machine learning algorithms to detect patterns of load behavior for VMs or groups of VMs on the host devices over time. These patterns may be used to more accurately predict future VM load behaviors based on past VM load behaviors);
utilizing the load trends to predict an upcoming period of low load (Col. 11, lines 43-50, The analysis of the current load behavior in light of the previous load behavior may be used by the VM load monitor system to predict future behavior of the VMs 348 or VM group 349 in host device 340 in addition to determining current load behavior so that at least some VM management decisions can be made proactively; Col. 11, lines 54-60, Example VM management decisions include, but are not limited to, whether current load is low enough (or whether predicted load should remain low enough) to accept a request to launch … or whether a VM 348 or VM group 349 needs to be migrated from the host device 240 due to current (or predicted) load behavior); and
generating the request to specify the second hypervisor as a destination hypervisor for hosting the destination virtual machine during the upcoming period (Col. 13, lines 34-37, The VM load monitor system may proactively initiate live migrations of VMs from a source host device to a destination host device based on analysis of current or predicted VM load behavior on the source host device; Col. 14, lines 15-18, Upon locating a suitable destination host device 740, at (3a) the VM migration process 706 may respond to the VM migration proposal indicating that the VM(s) 748 can be migrated).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Paraschiv with the teachings of CaraDonna, Naidu, and Vatnikov in order to provide a system that teaches forecasting of low load workloads for cross-site hypervisor load balancing. The motivation for applying Paraschiv teaching with CaraDonna, Naidu, and Vatnikov teaching is to provide a system that allows for VM management decisions to be made proactively before the load behavior of a VM becomes problematic (Paraschiv, Col. 10). CaraDonna, Naidu, Vatnikov, and Paraschiv are analogous art directed towards distribution of virtual machines through migration and load balancing. Therefore, it would have been obvious for one of ordinary skill in the art to combine Paraschiv with CaraDonna, Naidu, and Vatnikov to teach the claimed invention in order to provide preemptive control over load balancing to mitigate against low load issues, such as idle resources.
Paraschiv teaches migration of virtual machine from a first to a second hypervisor based on a service level agreement between a customer and provider (Col. 11). However, Paraschiv does not explicitly recite the rationale of the second hypervisor being less costly than the first hypervisor.
Lu teaches based upon the second hypervisor being less costly than the first hypervisor ([0086], In some aspects, primary instances of software and other applications for the user may run on a first hypervisor platform 405 that may be more expensive than other hypervisor platforms 405. To reduce costs, the user may request to utilize one or more cheaper hypervisor platforms as recovery sites. In such cases, the user may pay for less capacity on the first hypervisor platform 405 while maintaining backed-up or replicated copies of the virtual machine on the other hypervisor platforms 405) which is substantially similar to claim 12 and therefore rejected with similar rationale.
Examiner notes: It would be obvious for one of ordinary skill in the art to recognize that the system of claim 12 is being substantially recited again as limitations for the system of claim 14.
Claims 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over CaraDonna in view of Naidu in view of Vatnikov as applied to claim 16 above, and further in view of Lu.
With regard to claim 18, Lu teaches wherein the operations comprise:
selecting the second hypervisor for hosting the destination virtual machine based upon a cost comparison between the first hypervisor and the second hypervisor ([0086], In some aspects, primary instances of software and other applications for the user may run on a first hypervisor platform 405 that may be more expensive than other hypervisor platforms 405. To reduce costs, the user may request to utilize one or more cheaper hypervisor platforms as recovery sites. In such cases, the user may pay for less capacity on the first hypervisor platform 405 while maintaining backed-up or replicated copies of the virtual machine on the other hypervisor platforms 405).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Lu to the teachings of CaraDonna, Naidu, and Vatnikov in order to provide a computer program product that teaches selection of a hypervisor environment based on cost comparisons. The motivation for applying Lu's teaching to the teachings of CaraDonna, Naidu, and Vatnikov is to provide a computer program product that allows for resource allocation strategies that include provisioning a main site hypervisor with reduced capacity while maintaining a cost-effective hypervisor platform with redundant backup storage, thereby reducing overall infrastructure costs (Lu, [0086]). CaraDonna, Naidu, Vatnikov, and Lu are analogous art directed towards hypervisor-specific management. Therefore, it would have been obvious to one of ordinary skill in the art to combine Lu with CaraDonna, Naidu, and Vatnikov to teach the claimed invention in order to provide a cost-effective strategy for allocating resources between a primary site and a recovery site.
With regard to claim 19, Lu teaches wherein the operations comprise:
selecting the second hypervisor for hosting the destination virtual machine based upon a performance comparison between the first hypervisor and the second hypervisor ([0106], Additionally or alternatively, the data management system 61 may detect a condition of the first hypervisor platform 615, and the data management system 61 may determine to transfer the data and the metadata to the second virtual machine executing in the second hypervisor platform based on detecting the condition. The condition may be, for example, a storage capacity of the first hypervisor platform 615 being less than a threshold capacity … some other condition indicative of a degraded performance (Examiner notes: where migration occurs upon the determination of a difference in performance) of the first hypervisor platform 615).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Lu to the teachings of CaraDonna, Naidu, and Vatnikov in order to provide a computer program product that teaches selection of a hypervisor environment based on performance comparisons. The motivation for applying Lu's teaching to the teachings of CaraDonna, Naidu, and Vatnikov is to provide a computer program product that allows for the restoration of virtual machine functions in a manner that improves the reliability and availability of functions and data on a system (Lu, [0092]). CaraDonna, Naidu, Vatnikov, and Lu are analogous art directed towards hypervisor-specific management. Therefore, it would have been obvious to one of ordinary skill in the art to combine Lu with CaraDonna, Naidu, and Vatnikov to teach the claimed invention in order to provide an efficient redundancy site.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 2023/0222041 A1 teaches Techniques for Package Injection for Virtual Machine Configuration.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVAN A CASTANEDA whose telephone number is (571)272-0465. The examiner can normally be reached Monday-Friday 9:30AM-5:30PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/I.A.C./Examiner, Art Unit 2195 /Aimee Li/Supervisory Patent Examiner, Art Unit 2195