DETAILED ACTION
Claims 1-3, 5-10, and 12-23 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-6, 8-10, 12-13, 15-17, and 19-23 are rejected under 35 U.S.C. 103 as being unpatentable over Ulatoski (US 2017/0344392 A1) in view of Desai et al. (US 2013/0054890 A1), in further view of Gupta et al. (US 9,886,443 B1).
Ulatoski and Gupta were cited in the previous Office Action.
Regarding claim 1, Ulatoski teaches a method of providing a common volume (cVol) datastore for virtual machines (VMs) managed by a hypervisor in a cloud computing system, ([0016] To provide each of the end users with the required applications, and prevent access to unnecessary applications, one or more application storage volumes are made available in the virtual environment that are capable of being attached to the individual virtual machines. These application storage volumes may include, but are not limited to, virtual machine disks (VMDKs), virtual hard disks (VHDs), or some other virtual disk file capable of storing applications for execution on the virtual machines.; [0071]) the method comprising:
mounting, by the hypervisor in cooperation with a network file system server, a network file system share of a common volume (cVol), the network file system share storing the metadata for a first VM, the metadata including a VM configuration file for the first VM ([0019] To attach the application volumes to the virtual machine, the virtual computing service may initiate a process to mount the volumes to the allocated virtual machine for the end user, and overlay the contents of the volumes to make the one or more applications within the volumes executable by the virtual machine.; [0021]; [0031] […] Once attached, the contents of the volume may be overlaid in the virtual machine. This overlaying may include modifying the registry keys of the virtual machine to include the registry keys for the application and may further include, in some examples, making the application files appear as though they have been locally installed within the virtual machine. Thus, using a Microsoft Windows virtual machine as an example, the files and directories for the application may appear within the “C:\Program Files” directory, despite the location of the executables remaining in the application storage volume.; [0032] This attach process may include mounting, via a hypervisor for the virtual machine, the required storage volumes, and modifying any registry keys to make application files in the storage volumes executable via the virtual machine.; [0058] Turning to FIG. 7B, at step 3, the attach process is completed for application storage volumes 740 to virtual machines 731-733. In some implementations, virtual machines 731-733 may share one or more volumes; [0069] Virtual computing service 120 may comprise one or more server computers);
routing file operations targeting the metadata to the file system ([0031] Thus, using a Microsoft Windows virtual machine as an example, the files and directories for the application may appear within the “C:\Program Files” directory, despite the location of the executables remaining in the application storage volume. As a result of the overlaying, when the user selects an application within the C: drive, the selection may be identified in the virtual machine, and the proper executable files will be executed from the application storage volume attached to the virtual machine.);
attaching a volume as a device on a host of the hypervisor, the volume referenced as the provider by the descriptor in the metadata ([0019] To attach the application volumes to the virtual machine, the virtual computing service may initiate a process to mount the volumes to the allocated virtual machine for the end user, and overlay the contents of the volumes to make the one or more applications within the volumes executable by the virtual machine.; [0032] This attach process may include mounting, via a hypervisor for the virtual machine, the required storage volumes, and modifying any registry keys to make application files in the storage volumes executable via the virtual machine.), the volume storing a virtual disk, which is attached to the first VM, as the instance of the object ([0016] These application storage volumes may include, but are not limited to, virtual machine disks (VMDKs), virtual hard disks (VHDs), or some other virtual disk file capable of storing applications for execution on the virtual machine.; [0020] In some implementations, administrators may manage and perform installation processes to store the applications in the application storage volumes. These installation processes may extract the necessary files and registry keys from an installer, and store the files and registry keys to an appropriate application storage volume. In some examples, the administrator may define application stacks, or groups of applications that are commonly assigned, and store these applications within a single application storage volume. For example, a first application storage volume may include productivity applications to be supplied to a first set of end users, and a second application storage volume may include video and image editing software to be provided to a second set of end users. 
Once the applications are stored within the application volumes, the administrator may define which of the applications or volumes are associated with each individual end user of the virtual environment.; [0023]; each volume has a specific set of application types (i.e., stored according to a policy) which are then correlated to the credentials of users in incoming user requests, Fig. 5); and
routing file operations targeting virtual disks of the VMs to the devices ([0031] when the user selects an application within the C: drive, the selection may be identified in the virtual machine, and the proper executable files will be executed from the application storage volume attached to the virtual machine.).
Ulatoski does not expressly teach the metadata including a VM configuration file for the first VM and a descriptor file, the descriptor file including identifiers for the cVol, a provider of the cVol for storing an object for the first VM, and an instance of the object stored by the provider;
creating a file system container backed by the network file system (NFS) share; and
wherein the attached volume is a cloud volume, the metadata for the first VM, stored in the network file system share, separated from the virtual disk, stored in the cloud volume.
However, Desai teaches the metadata including a VM configuration file for the first VM and a descriptor file, the descriptor file including identifiers for the cVol, a provider of the cVol for storing an object for the first VM, and an instance of the object stored by the provider ([0009]; [0094] The VM also has metadata files that describe the configurations of the VM. The metadata files include VM configuration file, VM log files, disk descriptor files, one for each of the virtual disks for the VM, a VM swap file, etc. A disk descriptor file for a virtual disk contains information relating to the virtual disk such as its vvol ID, its size, whether the virtual disk is thinly provisioned (i.e., provider), and identification of one or more snapshots (i.e., instance objects) created for the virtual disk, etc. The VM swap file provides a swap space of the VM on the storage system. In one embodiment, these VM configuration files are stored in a vvol, and this vvol is referred to herein as a metadata vvol.; [0051] Also, in the example of FIG. 3, distributed storage system manager 135 has provisioned (on behalf of requesting computer systems 100) multiple vvols, each from a different storage container. In general, vvols may have a fixed physical size or may be thinly provisioned, and each vvol has a vvol ID, which is a universally unique identifier that is given to the vvol when the vvol is created. For each vvol, a vvol database 314 stores for each vvol, its vvol ID, the container ID of the storage container in which the vvol is created, and an ordered list of <offset, length> values within that storage container that comprise the address space of the vvol.; Claim 2);
creating a file system container backed by the network file system (NFS) share ([0009] According to embodiments of the invention, the storage system exports logical storage volumes, referred to herein as "virtual volumes," that are provisioned as storage objects on a per-workload basis, out of a logical storage capacity assignment, referred to herein as "storage containers." For a VM, a virtual volume may be created for each of the virtual disks and snapshots of the VM. In one embodiment, the virtual volumes are accessed on demand by connected computer systems using standard protocols, such as SCSI and NFS, through logical endpoints for the protocol traffic, known as "protocol endpoints," that are configured in the storage system.; [0045]; [0067]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Desai with the teachings of Ulatoski to utilize a metadata file to store a correlation of a VM with its allocated volume and data, and further to create a datastore of the file system for a shared storage system. The modification would have been motivated by the desire to combine known elements to yield predictable results.
Neither Ulatoski nor Desai expressly teaches wherein the attached volumes are cloud volumes, the metadata for the VMs, stored in the network file system share, separated from the virtual disks, stored in the cloud volumes.
However, Gupta teaches wherein the attached volumes are cloud volumes (Col. 3, lines 41-58: The multiple tiers of storage may include storage that is accessible through a network 140, such as cloud storage 126 or networked storage 128 (e.g., a SAN or “storage area network”). Unlike the prior art, the present embodiment also permits local storage 122/124 that is within or directly attached to the server and/or appliance to be managed as part of the storage pool 160. Examples of such storage include Solid State Drives (henceforth “SSDs”) 125 or Hard Disk Drives (henceforth “HDDs” or “spindle drives”) 127. These collected storage devices, both local and networked, form a storage pool 160. Virtual disks (or “vDisks”) can be structured from the storage devices in the storage pool 160, as described in more detail below. As used herein, the term vDisk refers to the storage abstraction that is exposed by a Service VM to be used by a user VM. In some embodiments, the vDisk is exposed via iSCSI (“internet small computer system interface”) or NFS (“network file system”) and is mounted as a virtual disk on the user VM.), the metadata for the VMs, stored in the network file system share, separated from the virtual disks, stored in the cloud volumes (Fig. 3, shows NFS metadata 208 separated from data storage 206 which as shown in Fig. 1 and Col. 3 as cloud storage).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Gupta with the teachings of Ulatoski and Desai to utilize cloud disks as devices attached to virtual machines. The modification would have been motivated by the desire to allow the virtual machine to utilize cloud resources as if they were local.
Regarding claim 2, Ulatoski teaches wherein directories of the NFS share are inserted in the hierarchy of the file system ([0031] […] Once attached, the contents of the volume may be overlaid in the virtual machine. This overlaying may include modifying the registry keys of the virtual machine to include the registry keys for the application and may further include, in some examples, making the application files appear as though they have been locally installed within the virtual machine. Thus, using a Microsoft Windows virtual machine as an example, the files and directories for the application may appear within the “C:\Program Files” directory, despite the location of the executables remaining in the application storage volume; [0047] FIG. 5 illustrates file system views of a virtual machine according to one implementation. In particular, FIG. 5 illustrates file system view 500 of a virtual machine prior to the attachment of an application storage volume, and file system view 501 of the virtual machine after the attachment of an application storage volume.);
In addition, Cashman teaches wherein the network file system share is mounted external to the file system container (Fig. 1, Storage System 130 (external); [0040] The term “datastore” may broadly refer to a logical container that hides specifics of the underlying storage resources to provide a uniform model for storing virtual machine data. The datastores 132-n may each represent a formatted file system that physical servers 110 mount and share. The file system may be a cluster file system that supports virtualization, for example, such as Virtual Machine File System (VMFS) and Network File System (NFS) provided by network attached storage (NAS). The term “datastore” may also refer to one or more storage pools that each represents a logical slice of datastore 132. In the example in FIG. 1, datastores 132-1 to 132-3 are illustrated as each including one storage pool, labeled “SP1”, “SP2” and “SP3” respectively.).
Regarding claim 3, Ulatoski teaches wherein each of the directories stores a portion of the metadata for a namespace of the first VM ([0019] For example, when an application storage volume is attached to a virtual machine, the files and directories for the application may appear in the “C:\Program Files” directory; [0048] As depicted prior to the attachment of an application storage volume, a virtual machine may have a user files directory 515, and three locally installed applications 520-522 within application directory 510. These applications may provide various operations including productivity operations, image or video editing operations, or any other similar application operation on a virtual machine.; [0050]; [0066]).
Regarding claim 5, Ulatoski teaches wherein the file operations targeting the virtual disk are routed to the devices by a data plane of the hypervisor ([0030] Once the configuration time is determined, method 200 includes configuring a virtual machine at the configuration time for each user in the subset of users by attaching at least one storage volume associated with the user to the virtual machine (204). These storage volumes may include user volumes that can be read and written to, and may further include application storage volumes that are read only, and capable of being attached to multiple virtual machines at any given instance.; [0066]).
Regarding claim 6, Ulatoski teaches wherein the file operations targeting the metadata are routed to the file system container by a control plane of the hypervisor ([0032] This attach process may include mounting, via a hypervisor for the virtual machine, the required storage volumes, and modifying any registry keys to make application files in the storage volumes executable via the virtual machine.).
Regarding claim 8, it is a media/product type claim having similar limitations as claim 1 above. Therefore, it is rejected under the same rationale above.
Regarding claim 9, it is a media/product type claim having similar limitations as claim 2 above. Therefore, it is rejected under the same rationale above.
Regarding claim 10, it is a media/product type claim having similar limitations as claim 3 above. Therefore, it is rejected under the same rationale above.
Regarding claim 12, it is a media/product type claim having similar limitations as claim 5 above. Therefore, it is rejected under the same rationale above.
Regarding claim 13, it is a media/product type claim having similar limitations as claim 6 above. Therefore, it is rejected under the same rationale above.
Regarding claim 15, it is a system type claim having similar limitations as claim 1 above. Therefore, it is rejected under the same rationale above.
Regarding claim 16, it is a system type claim having similar limitations as claim 2 above. Therefore, it is rejected under the same rationale above.
Regarding claim 17, it is a system type claim having similar limitations as claim 3 above. Therefore, it is rejected under the same rationale above.
Regarding claim 19, it is a system type claim having similar limitations as claim 5 above. Therefore, it is rejected under the same rationale above.
Regarding claim 20, it is a system type claim having similar limitations as claim 6 above. Therefore, it is rejected under the same rationale above.
Regarding claim 21, Gupta teaches wherein the cloud volume is an object datastore configured to store objects, the objects including the instance of the object storing the virtual disk, and wherein the NFS share is a file-based datastore configured to store files, the files including the metadata (Fig. 3, shows NFS metadata 208 separated from data storage 206 which as shown in Fig. 1 and Col. 3 as cloud storage; Abstract; Col. 3, line 65 through Col. 4, line 24: Each Service VM 110a-b exports one or more block devices or NFS server targets that appear as disks to the client VMs 102a-d. These disks are virtual, since they are implemented by the software running inside the Service VMs 110a-b. Thus, to the user VMs 102a-d, the Service VMs 110a-b appear to be exporting a clustered storage appliance that contains some disks. All user data (including the operating system) in the client VMs 102a-d resides on these virtual disks.).
Regarding claim 22, it is a media/product type claim having similar limitations as claim 21 above. Therefore, it is rejected under the same rationale above.
Regarding claim 23, it is a system type claim having similar limitations as claim 21 above. Therefore, it is rejected under the same rationale above.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Ulatoski, Desai and Gupta, as applied to claim 1, in further view of Chen et al. (US 9,852,203 B1).
Chen was cited in the previous Office Action.
Regarding claim 7, neither Ulatoski, Desai, nor Gupta expressly teaches wherein the step of attaching comprises:
invoking, by the hypervisor, an application programming interface (API) of a cloud control plane configured to manage a cloud storage pool having the cloud volume, the API configured to cooperate with hardware of the host to attach the cloud volume as the device.
However, Chen teaches wherein the step of attaching comprises:
invoking, by the hypervisor, an application programming interface (API) of a cloud control plane configured to manage a cloud storage pool having the cloud volumes, the API configured to cooperate with hardware of the host to attach the cloud volumes as the devices (Col. 4, lines 15-29; Col. 4, lines 60-61: In an example, the hypervisor 140 may directly access the cloud storage 180 by invoking an API.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chen with the teachings of Ulatoski, Desai and Gupta to use API calls from the hypervisor to access cloud storage. The modification would have been motivated by the desire to allow VMs to access cloud resources as if they were local.
Regarding claim 14, it is a media/product type claim having similar limitations as claim 7 above. Therefore, it is rejected under the same rationale above.
Response to Arguments
Applicant’s arguments with respect to claims 1-3, 5-10, and 12-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORGE A CHU JOY-DAVILA whose telephone number is (571)270-0692. The examiner can normally be reached Monday-Friday, 6:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J Li can be reached at (571)272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JORGE A CHU JOY-DAVILA/Primary Examiner, Art Unit 2195