DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to applicant’s amendment filed on 02/23/2026.
Claims 1-20 are pending and examined.
Response to Arguments
Applicant's arguments filed 02/23/2026 with respect to 35 U.S.C. 112(b) have been fully considered and are persuasive. The 35 U.S.C. 112(b) rejection of claim 4 has been withdrawn.
Applicant's arguments filed 02/23/2026 with respect to 35 U.S.C. 102 and 103 have been fully considered, but they are not persuasive. Applicant argued that the “Office Action does not establish that Rajagopal discloses a hypervisor [and] a datastore pipeline [and] a tiered configuration” and that “No other reference has been shown to cure the deficiencies of the rejection of claim 1… Neither Camargos nor Dai remedies the deficiencies of the rejection of claim 1 discussed above.” Examiner respectfully disagrees; see the 35 U.S.C. 103 rejections below for a detailed analysis. In view of the cited paragraphs of the present application’s specification, the examiner agrees that Rajagopal does not explicitly disclose a hypervisor, a hypervisor with a datastore pipeline, or a tiered configuration explicitly including a hypervisor. However, the examiner interprets the Dai reference to remedy the deficiencies above, in addition to disclosing the newly amended features. For example, Dai’s hypervisor includes a VSAN module which executes a process including a step of adding a namespace object to a logical datastore, which correlates to a datastore pipeline of a hypervisor configuring the first virtual storage object into a first logical container datastore. The underlying storage pool used for the two or more logical datastores correlates to the virtual datastore. The hypervisor executing a process to create and mount a new logical datastore under the same path as the default datastore, where both are under the same storage pool volumes as seen in Fig. 5, correlates to connecting the virtual datastore and the first logical container datastore to the hypervisor in a tiered configuration, with the virtual datastore between the hypervisor and the first logical container datastore.
With regards to the newly added limitation, Dai’s hypervisor, including a VSAN module which can execute a method comprising a step to create or generate a new logical datastore, correlates to provisioning, by the hypervisor, the first logical container datastore. Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Rajagopal with Dai because grouping different namespace objects, such as virtual machines which were conventionally associated with a default datastore, under different logical datastores allows different access privileges to be defined for each logical datastore. This allows one or more users that have full access to the objects of one logical datastore to be denied access to the objects of another logical datastore, while other particular users, such as administrative users, may be granted full access to objects of different groups of namespace objects grouped under different logical datastores.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 4-5, 7, 10-12, 14, 16-17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Rajagopal et al. (U.S. Patent Application Publication No. US 2014/0095826 A1), hereinafter “Rajagopal,” in view of Dai et al. (U.S. Patent Application Publication No. US 2021/0303530 A1), hereinafter “Dai.”
With regards to Claim 1, Rajagopal teaches:
A computer-implemented method comprising: generating a virtual datastore (Paragraph 29, “In one embodiment, the abstraction algorithm 100 generates a volume, volume 1a, with the identified physical storage entities that satisfy the capability and quota requirements of a VM, such as VM2, at a given time. To begin with, a virtual datastore, VDS1, is generated using volume 1a.” The virtual datastore being generated correlates to generating a virtual datastore);
generating a first virtual storage object having a first storage policy (Paragraphs 28 and 32, “The abstraction layer algorithm 100 includes logic to analyze the storage requirements of the respective VMs in order to identify a set of capabilities and quota required for storage by the VMs. Capabilities, as used in this application, include a large number of attributes that together describe storage requirements of a VM. Some of the attributes may include data mirroring, frequency of data backup, storage vendor type, etc., that are associated with storage or provided by other services… The volume is generated internally by an abstraction algorithm during a creation of a virtual datastore and is a logical representation of one or more physical storage entities. The one or more physical storage entities are associated with one or more capabilities and specific quota. In one embodiment, each volume includes one or more physical logical unit numbers (LUNs), each LUN is mapped to one array of a array network and each array is a combination of one or more disks. In one embodiment, an array in the array network may be mapped to more than one LUNs in a LUN network. The volume can include one or more LUNs, one or more arrays, one or more disks or any combination thereof. When an abstraction algorithm needs to provision storage for a VM, the abstraction algorithm analyzes the storage requirements of the VM, traverses through the different hierarchical levels of a data storage tree and selects one or more of the physical storage entities that match the capability and quota requirements of the VM.” The one or more physical storage entities, LUNs, arrays, disks, or a combination thereof making up a volume correlate to a first virtual storage object. The volume being generated internally by an abstraction algorithm to represent one or more physical storage entities correlates to generating a first virtual storage object. 
The physical storage entities being associated with one or more capabilities and a specific quota including storage vendor type requirements correlates to the first virtual storage object having a first storage policy);
and storing data in the first logical container datastore according to the first storage policy (Paragraphs 7 and 10, “At least one volume of physical storage in the physical storage system having physical storage available to satisfy the request to allocate the datastore is identified and the server maintains a mapping of the unique identifier to the at least one volume of physical storage and provides the mapping to the host computer upon running the virtual machine, thereby enabling the host computer to store data for the datastore in the at least one volume of physical storage… Each volume has specific capability and quota that is reflective of the combined capabilities and quota of the underlying storage entities.” The data being stored in the at least one volume of physical storage correlates to storing data in the first logical container datastore. Each volume having a specific capability and quota reflective of the combined underlying storage entities correlates to the data being stored in the first logical container datastore according to the first storage policy).
Rajagopal does not explicitly teach:
configuring, by a datastore pipeline of a hypervisor, the first virtual storage object into a first logical container datastore;
provisioning, by the hypervisor, the first logical container datastore;
connecting the virtual datastore and the first logical container datastore to the hypervisor in a tiered configuration, with the virtual datastore between the hypervisor and first logical container datastore;
However, Dai teaches:
configuring, by a datastore pipeline of a hypervisor, the first virtual storage object into a first logical container datastore (Fig. 6, Paragraphs 18, 23, 62, and 64 “In one embodiment, VSAN module 114 may be implemented as a “VSAN” device driver within hypervisor 113. In such an embodiment, VSAN module 114 may provide access to a conceptual “VSAN” 115 through which an administrator can create a number of top-level “device” or namespace objects that are backed by object store 116… The method 600 may be performed by a module such as VSAN module 114, as described in FIGS. 1-3 in some embodiments. In some other embodiments, the method may be performed by some other modules that reside in the hypervisor or outside of the hypervisor. In some embodiments, the method 600 may be performed when a user (e.g., an administrator, or a client) defines (or creates) a name space object to be added to a datastore... When process 600 determines that a logical datastore with matching security policies defined for the namespace object exists, the process may add, at 630, the namespace object to the logical datastore that covers (or matches) the security policies defined for the namespace object.” The hypervisor including a VSAN module which executes a process which includes a step of adding a namespace object to a logical datastore correlates to a datastore pipeline of a hypervisor configuring the first virtual storage object into a first logical container datastore);
provisioning, by the hypervisor, the first logical container datastore (Paragraphs 18, 62 and 65, “In one embodiment, VSAN module 114 may be implemented as a “VSAN” device driver within hypervisor 113… The method 600 may be performed by a module such as VSAN module 114, as described in FIGS. 1-3 in some embodiments. In some other embodiments, the method may be performed by some other modules that reside in the hypervisor or outside of the hypervisor… On the other hand, when process 600 determines that no logical datastore with matching security policies defined for the namespace object exists, the process may create/generate, at 640, a new logical datastore and assign at least the security policies that are defined for the namespace object to the newly generated logical datastore.” The hypervisor including a VSAN module which can execute a method comprising a step to create or generate a new logical datastore correlates to provisioning, by the hypervisor, the first logical container datastore);
connecting the virtual datastore and the first logical container datastore to the hypervisor in a tiered configuration, with the virtual datastore between the hypervisor and first logical container datastore (Fig. 5, paragraphs 47, 62, and 65, “In some embodiments, instead of a single default datastore for a single underlying storage pool, two or more datastores (e.g., logical datastores) may be created that share the same underlying storage pool… The method 600 may be performed by a module such as VSAN module 114, as described in FIGS. 1-3 in some embodiments. In some other embodiments, the method may be performed by some other modules that reside in the hypervisor or outside of the hypervisor… the process may create/generate, at 640, a new logical datastore and assign at least the security policies that are defined for the namespace object to the newly generated logical datastore. The process may mount the new logical datastore under the same path that the default datastore is mounted in some embodiments. The logical datastore may be mounted (e.g., by an OSFS submodule of process 600) and formatted with a specific file system (e.g., VMFS) similar to the default datastore.” The underlying storage pool used for the two or more logical datastores correlates to the virtual datastore. The hypervisor executing a process to create and mount a new logical datastore under the same path as the default datastore, which are under the same storage pool volumes as seen in Fig. 5, correlates to connecting the virtual datastore and the first logical container datastore to the hypervisor in a tiered configuration, with the virtual datastore between the hypervisor and first logical container datastore).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Rajagopal with configuring, by a datastore pipeline of a hypervisor, the first virtual storage object into a first logical container datastore; provisioning, by the hypervisor, the first logical container datastore; and connecting the virtual datastore and the first logical container datastore to the hypervisor in a tiered configuration, with the virtual datastore between the hypervisor and first logical container datastore, as taught by Dai, because grouping different namespace objects, such as virtual machines which were conventionally associated with a default datastore, under different logical datastores allows different access privileges to be defined for each logical datastore. This allows one or more users that have full access to the objects of one logical datastore to be denied access to the objects of another logical datastore, while other particular users, such as administrative users, may be granted full access to objects of different groups of namespace objects grouped under different logical datastores (Dai: paragraph 17).
With regards to Claims 10 and 16, the method of Claim 1 performs the same steps as the machine and manufacture of Claims 10 and 16 respectively, and Claims 10 and 16 are therefore rejected using the same rationale set forth above in the rejection of Claim 1.
With regards to Claim 2, Rajagopal in view of Dai teaches the method of Claim 1 above. Rajagopal further teaches:
generating a second virtual storage object having a second storage policy different than the first storage policy (Paragraphs 28 and 32, “The abstraction layer algorithm 100 includes logic to analyze the storage requirements of the respective VMs in order to identify a set of capabilities and quota required for storage by the VMs. Capabilities, as used in this application, include a large number of attributes that together describe storage requirements of a VM. Some of the attributes may include data mirroring, frequency of data backup, storage vendor type, etc., that are associated with storage or provided by other services… The volume is generated internally by an abstraction algorithm during a creation of a virtual datastore and is a logical representation of one or more physical storage entities. The one or more physical storage entities are associated with one or more capabilities and specific quota. In one embodiment, each volume includes one or more physical logical unit numbers (LUNs), each LUN is mapped to one array of a array network and each array is a combination of one or more disks. In one embodiment, an array in the array network may be mapped to more than one LUNs in a LUN network. The volume can include one or more LUNs, one or more arrays, one or more disks or any combination thereof. When an abstraction algorithm needs to provision storage for a VM, the abstraction algorithm analyzes the storage requirements of the VM, traverses through the different hierarchical levels of a data storage tree and selects one or more of the physical storage entities that match the capability and quota requirements of the VM.” The one or more physical storage entities, LUNs, arrays, disks, or a combination thereof making up a volume correlate to a second virtual storage object. The volume being generated internally by an abstraction algorithm to represent one or more physical storage entities correlates to generating a second virtual storage object. 
The physical storage entities each being associated with one or more capabilities and a specific quota including storage vendor type requirements correlates to the second virtual storage object having a second storage policy different from the first storage policy);
configuring the second virtual storage object into a second logical container datastore (Fig. 2, paragraphs 10 and 32, “The method includes identifying one or more physical storage entities and generating one or more volumes using the physical storage entities… In one embodiment, physical storage entities are distributed in a hierarchical manner and are represented using a data storage tree. At the top of data storage tree is a volume. The volume is generated internally by an abstraction algorithm during a creation of a virtual datastore and is a logical representation of one or more physical storage entities. The one or more physical storage entities are associated with one or more capabilities and specific quota. In one embodiment, each volume includes one or more physical logical unit numbers (LUNs), each LUN is mapped to one array of a array network and each array is a combination of one or more disks. In one embodiment, an array in the array network may be mapped to more than one LUNs in a LUN network. The volume can include one or more LUNs, one or more arrays, one or more disks or any combination thereof. When an abstraction algorithm needs to provision storage for a VM, the abstraction algorithm analyzes the storage requirements of the VM, traverses through the different hierarchical levels of a data storage tree and selects one or more of the physical storage entities that match the capability and quota requirements of the VM. The identified physical storage entities are then used to generate a virtual datastore. During the creation of a virtual datastore, the abstraction algorithm creates a volume internally that is a logical representation of the underlying identified one or more physical storage entities.” Each physical storage entity, LUN, array, or disk correlates to a virtual storage object. The one or more volumes, which each comprise one or more physical storage entities, LUNs, arrays, or disks correlates to a second logical container datastore. 
The physical storage entities distributed in a hierarchical manner through a data storage tree, where a volume is created using a logical representation of the underlying physical storage entities, correlates to configuring the second virtual storage object into a second logical container datastore);
and storing data in the second logical container datastore according to the second storage policy, such that data stored in the second logical container datastore is isolated from data stored in the first logical container datastore (Paragraphs 7, 10 and 29, “At least one volume of physical storage in the physical storage system having physical storage available to satisfy the request to allocate the datastore is identified and the server maintains a mapping of the unique identifier to the at least one volume of physical storage and provides the mapping to the host computer upon running the virtual machine, thereby enabling the host computer to store data for the datastore in the at least one volume of physical storage… Each volume has specific capability and quota that is reflective of the combined capabilities and quota of the underlying storage entities... Currently, in the embodiment illustrated in FIG. 1, VDS1 is currently associated with volumes 1a, 1b, 1c and 1d. Similarly, VDSn is associated with volumes n1, n2, and n3, respectively.” The data being stored in the at least one volume of physical storage correlates to storing data in the second logical container datastore. Each volume having a specific capability and quota reflective of the combined underlying storage entities correlates to the data being stored in the second logical container datastore according to the second storage policy. VDS1 being associated with volumes 1a-1d and VDSn being associated with volumes n1-n3 correlate to the data stored in the second logical container datastore being isolated from the data stored in the first logical container datastore).
Dai further teaches:
connecting the second logical container datastore to the hypervisor in the tiered configuration, with the virtual datastore in-between the hypervisor and the second logical container datastore (Fig. 5, paragraphs 47, 62, and 65, “In some embodiments, instead of a single default datastore for a single underlying storage pool, two or more datastores (e.g., logical datastores) may be created that share the same underlying storage pool… The method 600 may be performed by a module such as VSAN module 114, as described in FIGS. 1-3 in some embodiments. In some other embodiments, the method may be performed by some other modules that reside in the hypervisor or outside of the hypervisor… the process may create/generate, at 640, a new logical datastore and assign at least the security policies that are defined for the namespace object to the newly generated logical datastore. The process may mount the new logical datastore under the same path that the default datastore is mounted in some embodiments. The logical datastore may be mounted (e.g., by an OSFS submodule of process 600) and formatted with a specific file system (e.g., VMFS) similar to the default datastore.” The underlying storage pool used for the two or more logical datastores correlates to the virtual datastore. The hypervisor executing a process to create and mount a new logical datastore under the same path as the default datastore, which are under the same storage pool volumes as seen in Fig. 5, correlates to connecting the second logical container datastore to the hypervisor in the tiered configuration, with the virtual datastore in-between the hypervisor and the second logical container datastore).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Rajagopal with connecting the second logical container datastore to the hypervisor in the tiered configuration, with the virtual datastore in-between the hypervisor and the second logical container datastore, as taught by Dai, because grouping different namespace objects, such as virtual machines which were conventionally associated with a default datastore, under different logical datastores allows different access privileges to be defined for each logical datastore. This allows one or more users that have full access to the objects of one logical datastore to be denied access to the objects of another logical datastore, while other particular users, such as administrative users, may be granted full access to objects of different groups of namespace objects grouped under different logical datastores (Dai: paragraph 17).
With regards to Claim 11, the method of Claim 2 performs the same steps as the machine of Claim 11, and Claim 11 is therefore rejected using the same rationale set forth above in the rejection of Claim 2.
With regards to Claim 4, Rajagopal in view of Dai teaches the method of Claim 2 above. Dai further teaches:
wherein only a first VM has write access to the first logical container datastore and only a second VM has write access to the second logical container datastore (Paragraphs 17, 48 and 67, “The VSAN module 114 of some embodiments may also group different name space objects (e.g., virtual machines) that were conventionally associated with a datastore (e.g., a default VSAN datastore) under different logical datastores, and define different access privileges (e.g., read, write, or read/write privileges) for each logical datastore. This way, one or more users that have full access to the objects of one logical datastore may be denied access (or may be granted limited access) to the objects of another logical datastore… For example, when a user creates several VMs, a first logical datastore may be created that may include a first group of the VMs, and a second logical datastore may be created that may include a second group of the VMs... The namespace objects (e.g., VMs) and their related objects (e.g., storage objects) may share the same access permissions (read, write, read/write) as the logical datastore with which they are associated (or to which the namespace objects are added).” The different logical datastores, which include a first and second logical datastore, correlate to a first and second logical container datastore. The first logical datastore including a first group of VMs and the second logical datastore including a second group of VMs, where each group can comprise a single VM, correlates to only a first VM being associated with the first logical datastore and only a second VM being associated with the second logical datastore. The different virtual machines being associated with different write access privileges for each logical datastore correlates to only the first VM having write access to the first logical datastore and only the second VM having write access to the second logical datastore).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Rajagopal with wherein only a first VM has write access to the first logical container datastore and only a second VM has write access to the second logical container datastore, as taught by Dai, because assigning different security permissions to different namespace objects such as virtual machines makes it possible for users to have different types of access to the objects belonging to different namespace objects (Dai: paragraph 68).
With regards to Claim 5, Rajagopal in view of Dai teaches the method of Claim 1 above. Rajagopal further teaches:
generating a third virtual storage object (Paragraph 32, “The volume is generated internally by an abstraction algorithm during a creation of a virtual datastore and is a logical representation of one or more physical storage entities. The one or more physical storage entities are associated with one or more capabilities and specific quota. In one embodiment, each volume includes one or more physical logical unit numbers (LUNs), each LUN is mapped to one array of a array network and each array is a combination of one or more disks. In one embodiment, an array in the array network may be mapped to more than one LUNs in a LUN network. The volume can include one or more LUNs, one or more arrays, one or more disks or any combination thereof.” The one or more physical storage entities, LUNs, arrays, disks, or a combination thereof making up a volume correlate to a third virtual storage object. The volume being generated internally by an abstraction algorithm to represent one or more physical storage entities correlates to generating a third virtual storage object);
and provisioning, by a computing entity other than the hypervisor, the third virtual storage object (Paragraphs 32-33, “When an abstraction algorithm needs to provision storage for a VM, the abstraction algorithm analyzes the storage requirements of the VM, traverses through the different hierarchical levels of a data storage tree and selects one or more of the physical storage entities that match the capability and quota requirements of the VM. The identified physical storage entities are then used to generate a virtual datastore… FIG. 3 illustrates a simplified schematic representation of the various modules of an abstraction algorithm running on a server that are involved in the virtualization of storage, in one embodiment of the invention.” The server which runs the abstraction algorithm provisioning physical storage entities for a VM correlates to a computing entity other than the hypervisor provisioning the third virtual storage object).
Dai further teaches:
connecting the third virtual storage object to the hypervisor in the tiered configuration as a virtual storage object datastore, with the third virtual storage object beneath the virtual datastore (Fig. 5, paragraphs 47, 62, and 65, “In some embodiments, instead of a single default datastore for a single underlying storage pool, two or more datastores (e.g., logical datastores) may be created that share the same underlying storage pool… The method 600 may be performed by a module such as VSAN module 114, as described in FIGS. 1-3 in some embodiments. In some other embodiments, the method may be performed by some other modules that reside in the hypervisor or outside of the hypervisor… the process may create/generate, at 640, a new logical datastore and assign at least the security policies that are defined for the namespace object to the newly generated logical datastore. The process may mount the new logical datastore under the same path that the default datastore is mounted in some embodiments. The logical datastore may be mounted (e.g., by an OSFS submodule of process 600) and formatted with a specific file system (e.g., VMFS) similar to the default datastore.” The underlying storage pool used for the two or more logical datastores correlates to the virtual datastore. The hypervisor executing a process to create and mount a new logical datastore under the same path as the default datastore, which are under the same storage pool volumes as seen in Fig. 5, correlates to connecting the third virtual storage object to the hypervisor in the tiered configuration as a virtual storage object datastore, with the third virtual storage object beneath the virtual datastore).
provisioning, by the hypervisor, the first logical container datastore (Fig. 1, paragraph 16, “As depicted in the embodiment of FIG. 1, each node 111 includes a virtualization layer or hypervisor 113, a VSAN module 114, and hardware 119 (which includes the SSDs 117 and magnetic disks 118 of a node 111). Through hypervisor 113, a node 111 is able to launch and run multiple VMs 112. Hypervisor 113, in part, manages hardware 119 to properly allocate computing resources (e.g., processing power, random access memory, etc.) for each VM 112. Furthermore, as described below, each hypervisor 113, through its corresponding VSAN module 114, may provide access to storage resources located in hardware 119 (e.g., SSDs 117 and magnetic disks 118) for use as storage for storage objects, such as virtual disks (or portions thereof) and other related files that may be accessed by any VM 112 residing in any of nodes 111 in cluster 110.” The hypervisor providing access to storage resources for use as storage for storage objects including virtual disks for access by a VM correlates to the hypervisor provisioning the first logical container datastore).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Rajagopal with connecting the third virtual storage object to the hypervisor in the tiered configuration as a virtual storage object datastore, with the third virtual storage object beneath the virtual datastore, and provisioning, by the hypervisor, the first logical container datastore, as taught by Dai, because hypervisors allow nodes to launch and run multiple VMs while managing the allocation of computing resources for each VM. Hypervisors can also utilize VSAN modules to provide access to a variety of storage resources and other related files which can be accessed by any VM in any of the nodes of a given cluster. Grouping different namespace objects, such as virtual machines which were conventionally associated with a default datastore, under different logical datastores allows different access privileges to be defined for each logical datastore. This allows one or more users that have full access to the objects of one logical datastore to be denied access to the objects of another logical datastore, while other particular users, such as administrative users, may be granted full access to objects of different groups of namespace objects grouped under different logical datastores (Dai: paragraphs 16-17).
With regards to Claims 12 and 19, the method of Claim 5 performs the same steps as the machine and manufacture of Claims 12 and 19 respectively, and Claims 12 and 19 are therefore rejected using the same rationale set forth above in the rejection of Claim 5.
With regards to Claim 7, Rajagopal in view of Dai teaches the method of Claim 1 above. Rajagopal further teaches:
migrating the first logical container datastore to a new storage location (Paragraphs 25-26, Claim 4, “The virtualization also enables vendor-agnostic non-disruptive data migration. As the physical storage is isolated from the virtual machines due to the virtualization which introduces an abstraction layer, data can be migrated without any downtime to the virtual machines (VMs)… Datastore migration is eased by allowing virtual datastore to move to a different host while physical data is moved to a different backing that is different from the current physical storage but having the same capabilities as the current physical storage... The method of claim 1, further comprising the steps of: migrating the contents of the at least one volume of physical storage to a different volume of physical storage.” Migrating the contents of one volume of physical storage to a different volume of physical storage, where the physical storage is virtualized, correlates to migrating the first logical container datastore to a new storage location), wherein the migration comprises moving the first virtual storage object as a single moved object (Paragraph 5, “If a volume needs to be retired, then all the data in the volume has to be moved to a new volume and all references to the volume has to be updated to reflect the new volume. Such updates are either done manually or by running a program script. The program script or manual updates need to ensure that any policies associated with resource allocation of the virtual machines are not violated. 
Special care has to be taken to ensure that the maintenance and provisioning of the physical storage does not disrupt or, otherwise, severely affect the virtual infrastructure management.” All of the data in the volume being moved to a new volume through a program script following policies or a manual update which would utilize a user interface correlates to the migration comprising moving the first virtual storage object as a single moved object).
With regards to Claim 14, the method of Claim 7 performs the same steps as the machine of Claim 14, and Claim 14 is therefore rejected using the same rationale set forth above in the rejection of Claim 7.
With regards to Claim 17, Rajagopal in view of Dai teaches the method of Claim 16 above. Rajagopal further teaches:
generating a second virtual storage object having a second storage policy different than the first storage policy (Paragraphs 28 and 32, “The abstraction layer algorithm 100 includes logic to analyze the storage requirements of the respective VMs in order to identify a set of capabilities and quota required for storage by the VMs. Capabilities, as used in this application, include a large number of attributes that together describe storage requirements of a VM. Some of the attributes may include data mirroring, frequency of data backup, storage vendor type, etc., that are associated with storage or provided by other services… The volume is generated internally by an abstraction algorithm during a creation of a virtual datastore and is a logical representation of one or more physical storage entities. The one or more physical storage entities are associated with one or more capabilities and specific quota. In one embodiment, each volume includes one or more physical logical unit numbers (LUNs), each LUN is mapped to one array of a array network and each array is a combination of one or more disks. In one embodiment, an array in the array network may be mapped to more than one LUNs in a LUN network. The volume can include one or more LUNs, one or more arrays, one or more disks or any combination thereof. When an abstraction algorithm needs to provision storage for a VM, the abstraction algorithm analyzes the storage requirements of the VM, traverses through the different hierarchical levels of a data storage tree and selects one or more of the physical storage entities that match the capability and quota requirements of the VM.” The one or more physical storage entities, LUNs, arrays, disks, or a combination thereof making up a volume correlate to a second virtual storage object. The volume being generated internally by an abstraction algorithm to represent one or more physical storage entities correlates to generating a second virtual storage object. 
The physical storage entities each being associated with one or more capabilities and a specific quota including storage vendor type requirements correlates to the second virtual storage object having a second storage policy different from the first storage policy);
configuring the second virtual storage object into a second logical container datastore (Fig. 2, paragraphs 10 and 32, “The method includes identifying one or more physical storage entities and generating one or more volumes using the physical storage entities… In one embodiment, physical storage entities are distributed in a hierarchical manner and are represented using a data storage tree. At the top of data storage tree is a volume. The volume is generated internally by an abstraction algorithm during a creation of a virtual datastore and is a logical representation of one or more physical storage entities. The one or more physical storage entities are associated with one or more capabilities and specific quota. In one embodiment, each volume includes one or more physical logical unit numbers (LUNs), each LUN is mapped to one array of a array network and each array is a combination of one or more disks. In one embodiment, an array in the array network may be mapped to more than one LUNs in a LUN network. The volume can include one or more LUNs, one or more arrays, one or more disks or any combination thereof. When an abstraction algorithm needs to provision storage for a VM, the abstraction algorithm analyzes the storage requirements of the VM, traverses through the different hierarchical levels of a data storage tree and selects one or more of the physical storage entities that match the capability and quota requirements of the VM. The identified physical storage entities are then used to generate a virtual datastore. During the creation of a virtual datastore, the abstraction algorithm creates a volume internally that is a logical representation of the underlying identified one or more physical storage entities.” Each physical storage entity, LUN, array, or disk correlates to a virtual storage object. The one or more volumes, which each comprise one or more physical storage entities, LUNs, arrays, or disks correlates to a second logical container datastore. 
The physical storage entities distributed in a hierarchical manner through a data storage tree, where a volume is created using a logical representation of the underlying physical storage entities, correlates to configuring the second virtual storage object into a second logical container datastore);
and storing data in the second logical container datastore according to the second storage policy, such that data stored in the second logical container datastore is isolated from data stored in the first logical container datastore (Paragraphs 7, 10 and 29, “At least one volume of physical storage in the physical storage system having physical storage available to satisfy the request to allocate the datastore is identified and the server maintains a mapping of the unique identifier to the at least one volume of physical storage and provides the mapping to the host computer upon running the virtual machine, thereby enabling the host computer to store data for the datastore in the at least one volume of physical storage… Each volume has specific capability and quota that is reflective of the combined capabilities and quota of the underlying storage entities... Currently, in the embodiment illustrated in FIG. 1, VDS1 is currently associated with volumes 1a, 1b, 1c and 1d. Similarly, VDSn is associated with volumes n1, n2, and n3, respectively.” The data being stored in the at least one volume of physical storage correlates to storing data in the second logical container datastore. Each volume having a specific capability and quota reflective of the combined underlying storage entities correlates to the data being stored in the second logical container datastore according to the second storage policy. VDS1 being associated with volumes 1a-1d and VDSn being associated with volumes n1-n3 correlate to the data stored in the second logical container datastore being isolated from the data stored in the first logical container datastore),
wherein each logical container datastore is accessed by a non-overlapping set of VMs that each operate beneath the virtual datastore (Paragraphs 28-29, “FIG. 1 illustrates an overview of an abstraction layer that provides virtualization of physical storage, in one embodiment of the invention. As illustrated, a plurality of virtual machines (VMs, such as VM1, VM2, VM3, . . . VMn) are registered on one or more hosts running applications… The abstraction layer algorithm or abstraction algorithm 100 then traverses a storage farm over the network to identify one or more physical storage entities or services that satisfy the requirements of a particular VM and generates a virtual datastore (VDS) for each of the VM using the identified storage entities…. At this time, the VDS1 includes two volumes that together satisfy the capability and quota requirements of VM2… Currently, in the embodiment illustrated in FIG. 1, VDS1 is currently associated with volumes 1a, 1b, 1c and 1d. Similarly, VDSn is associated with volumes n1, n2, and n3, respectively.” A virtual datastore being generated for each VM correlates to one virtual datastore for each VM. Therefore, VDSn, which is associated with volumes n1-n3, is also associated with a different VM such as VMn. VDS1 being associated with volumes 1a-1d and VM2, and VDSn being associated with a different VM, correlate to each logical container datastore being accessed by a non-overlapping set of VMs which each operate beneath the virtual datastore).
Dai further teaches:
connecting the second logical container datastore to the hypervisor in the tiered configuration, with the second logical container datastore beneath the virtual datastore (Fig. 5, paragraphs 47, 62, and 65, “In some embodiments, instead of a single default datastore for a single underlying storage pool, two or more datastores (e.g., logical datastores) may be created that share the same underlying storage pool… The method 600 may be performed by a module such as VSAN module 114, as described in FIGS. 1-3 in some embodiments. In some other embodiments, the method may be performed by some other modules that reside in the hypervisor or outside of the hypervisor… the process may create/generate, at 640, a new logical datastore and assign at least the security policies that are defined for the namespace object to the newly generated logical datastore. The process may mount the new logical datastore under the same path that the default datastore is mounted in some embodiments. The logical datastore may be mounted (e.g., by an OSFS submodule of process 600) and formatted with a specific file system (e.g., VMFS) similar to the default datastore.” The underlying storage pool used for the two or more logical datastores correlates to the virtual datastore. The hypervisor executing a process to create and mount a new logical datastore under the same path as the default datastore, which are under the same storage pool volumes as seen in Fig. 5, correlates to connecting the second logical container datastore to the hypervisor in the tiered configuration, with the second logical container datastore beneath the virtual datastore).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Rajagopal with connecting the second logical container datastore to the hypervisor in the tiered configuration, with the second logical container datastore beneath the virtual datastore as taught by Dai because grouping different namespace objects such as virtual machines which were conventionally associated with a default datastore under different logical datastores allows different access privileges to be defined for each logical datastore. This allows one or more users that have full access to the objects of one logical datastore to be denied access to the objects of another logical datastore, and other particular users such as administrative users may be granted full access to objects of different groups of namespace objects grouped under different logical datastores (Dai: paragraph 17).
Claim(s) 3 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Rajagopal in view of Dai and Camargos et al. (U.S. Patent Application Publication No. US 2022/0103622 A1), hereinafter “Camargos.”
With regards to Claim 3, Rajagopal in view of Dai teaches the method of Claim 2 above. Rajagopal in view of Dai does not explicitly teach:
wherein a vSphere platform provisions the first logical container datastore and the second logical container datastore.
However, Camargos teaches:
wherein a vSphere platform provisions the first logical container datastore and the second logical container datastore (Paragraphs 67 and 80, “Every virtual disk 170 provisioned on the system is partitioned into fixed size chunks, each of which is called a storage container… The illustrative system features a vCenter plug-in that enables provisioning, management, snapshotting, and cloning of virtual disks 170 directly from the vSphere Web Client. Additionally, the system incorporates support for the VMware vSphere Storage APIs Array Integration (VAAI).” Each virtual disk being partitioned into fixed size chunks called storage containers correlates to a first and second logical container datastore. The illustrative system using a vSphere web client and vCenter plug-in to enable provisioning of virtual disks correlates to the vSphere platform provisioning the first and second logical container datastore).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Rajagopal with wherein a vSphere platform provisions the first logical container datastore and the second logical container datastore as taught by Camargos because a variety of data-generating platforms include systems that generate primary production data and systems that generate backup data from primary sources. Virtual disks can be backed by volumes for deduplication, compression, replication factor, or block size operations before being attached to a host (Camargos: paragraph 80).
With regards to Claim 8, Rajagopal in view of Dai teaches the method of Claim 1 above. Rajagopal in view of Dai does not explicitly teach:
generating a snapshot of the first virtual storage object.
However, Camargos teaches:
generating a snapshot of the first virtual storage object (Paragraphs 67 and 78, “Every virtual disk 170 provisioned on the system is partitioned into fixed size chunks, each of which is called a storage container… Snapshots And Clones. In addition to replication policies, data management tasks include taking snapshots and making “zero-copy” clones of virtual disks. There is no limit to the number of snapshots or clones that can be created.” Each storage container in a virtual disk partition correlates to a first virtual storage object. Taking a snapshot of a virtual disk includes its partitions and therefore correlates to generating a snapshot of the first virtual storage object).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Rajagopal with generating a snapshot of the first virtual storage object as taught by Camargos because snapshots and clones are a space-efficient data management method and only require capacity for changed blocks (Camargos: paragraph 78).
Claim(s) 6, 13, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Rajagopal in view of Dai and Dickmann et al. (U.S. Patent Application Publication No. US 2021/0224097 A1), hereinafter “Dickmann.”
With regards to Claim 6, Rajagopal in view of Dai teaches the method of Claim 1 above. Rajagopal in view of Dai does not explicitly teach:
resizing the first logical container datastore while its VM is executing.
However, Dickmann teaches:
resizing the first logical container datastore while its VM is executing (Paragraphs 64-65 and 98, “In contrast, host computing devices dedicated to storage resource pool 240 may include larger disks 258, 278, and 298 and faster data transmission busses to increase the storage space and data throughput performance of a virtualized datastore, such as, but not limited to, vSAN 230… Each of the host computing devices may include a virtualization layer (e.g., a virtual machine monitor (VMM) or a hypervisor), which may host or otherwise implement one or more VMs. As shown in the non-limiting embodiment of FIG. 3A, each of the visualization layers is hosting P VMs, where P is any positive integer… At block 536, the additional storage resource of the additional host computing device and a first virtualized datastore of the first storage domain may be aggregated. Aggregating the additional storage resource and the first virtualized datastore may increase a storage capacity of the first virtualized datastore of the first storage domain.” The host computing devices currently hosting or implementing one or more VMs correlates to a VM executing. The virtualized datastore of a host computing device correlates to the first logical container datastore. The storage space of the virtualized datastore for a host computing device being increased or aggregated with additional storage resources correlates to resizing the first logical container datastore while its VM is executing).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Rajagopal with resizing the first logical container datastore while its VM is executing as taught by Dickmann because host computing devices dedicated to processing resource pools may include specialized processors with specialized processing pipelines to increase the performance of virtualization layers. Host computing devices dedicated to storage resource pools can also include larger disks and faster data transmission busses to increase the storage space and data throughput performance of a virtualized datastore (Dickmann: paragraph 16).
With regards to Claim 13, the method of Claim 6 performs the same steps as the machine of Claim 13, and Claim 13 is therefore rejected using the same rationale set forth above in the rejection of Claim 6.
With regards to Claim 18, Rajagopal in view of Dai teaches the method of Claim 17 above. Rajagopal in view of Dai does not explicitly teach:
resizing the first logical container datastore while its VM is executing; and
resizing the second logical container datastore while its VM is executing.
However, Dickmann teaches:
resizing the first logical container datastore while its VM is executing (Paragraphs 64-65 and 98, “In contrast, host computing devices dedicated to storage resource pool 240 may include larger disks 258, 278, and 298 and faster data transmission busses to increase the storage space and data throughput performance of a virtualized datastore, such as, but not limited to, vSAN 230… Each of the host computing devices may include a virtualization layer (e.g., a virtual machine monitor (VMM) or a hypervisor), which may host or otherwise implement one or more VMs. As shown in the non-limiting embodiment of FIG. 3A, each of the visualization layers is hosting P VMs, where P is any positive integer… At block 536, the additional storage resource of the additional host computing device and a first virtualized datastore of the first storage domain may be aggregated. Aggregating the additional storage resource and the first virtualized datastore may increase a storage capacity of the first virtualized datastore of the first storage domain.” The host computing devices currently hosting or implementing one or more VMs correlates to a VM executing. The virtualized datastore of a host computing device correlates to the first logical container datastore. The storage space of the virtualized datastore for a host computing device being increased or aggregated with additional storage resources correlates to resizing the first logical container datastore while its VM is executing); and
resizing the second logical container datastore while its VM is executing (Paragraphs 64-65 and 98, “In contrast, host computing devices dedicated to storage resource pool 240 may include larger disks 258, 278, and 298 and faster data transmission busses to increase the storage space and data throughput performance of a virtualized datastore, such as, but not limited to, vSAN 230… Storage domain 300 may include M host computing devices, where M is a positive integer. As shown in FIG. 3A, storage domain 300 includes host computing device_1 310, host computing device_2 320, . . . , host computing device_M 330. Each of the host computing devices may include a virtualization layer (e.g., a virtual machine monitor (VMM) or a hypervisor), which may host or otherwise implement one or more VMs. As shown in the non-limiting embodiment of FIG. 3A, each of the visualization layers is hosting P VMs, where P is any positive integer… At block 536, the additional storage resource of the additional host computing device and a first virtualized datastore of the first storage domain may be aggregated. Aggregating the additional storage resource and the first virtualized datastore may increase a storage capacity of the first virtualized datastore of the first storage domain.” The host computing devices currently hosting or implementing one or more VMs correlates to a VM executing. The virtualized datastore of one of many host computing devices correlates to the second logical container datastore. The storage space of the virtualized datastore for a host computing device being increased or aggregated with additional storage resources correlates to resizing the second logical container datastore while its VM is executing).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Rajagopal with resizing the first logical container datastore while its VM is executing and resizing the second logical container datastore while its VM is executing as taught by Dickmann because host computing devices dedicated to processing resource pools may include specialized processors with specialized processing pipelines to increase the performance of virtualization layers. Host computing devices dedicated to storage resource pools can also include larger disks and faster data transmission busses to increase the storage space and data throughput performance of a virtualized datastore (Dickmann: paragraph 16).
Claim(s) 9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Rajagopal in view of Dai, Camargos and Neelakantam et al. (U.S. Patent No. US 11184233 B1), hereinafter “Neelakantam.”
With regards to Claim 9, Rajagopal in view of Dai and Camargos teaches the method of Claim 8 above. Rajagopal in view of Dai and Camargos does not explicitly teach:
monitoring input/output (I/O) traffic for the first logical container datastore;
detecting a malicious logic trigger event during the monitoring; and
based on at least detecting the malicious logic trigger event, restoring the first logical container datastore from the snapshot.
However, Neelakantam teaches:
monitoring input/output (I/O) traffic for the first storage system (Col. 40, lines 24-32, “In such an example, the presence of ransomware may be explicitly detected through the use of software tools utilized by the system, through the use of a key (e.g., a USB drive) that is inserted into the storage system, or in a similar way. Likewise, the presence of ransomware may be inferred in response to system activity meeting a predetermined fingerprint such as, for example, no reads or writes coming into the system for a predetermined period of time.” The system activity showing no reads or writes coming into the system for a predetermined period of time correlates to monitoring I/O traffic for the first storage system);
detecting a malicious logic trigger event during the monitoring (Col. 40, lines 16-32, “Consider an example in which the storage system is infected with ransomware that locks users out of the storage system. In such an example, software resources 314 within the storage system may be configured to detect the presence of ransomware and may be further configured to restore the storage system to a point-in-time, using the retained backups, prior to the point-in-time at which the ransomware infected the storage system. In such an example, the presence of ransomware may be explicitly detected through the use of software tools utilized by the system, through the use of a key (e.g., a USB drive) that is inserted into the storage system, or in a similar way. Likewise, the presence of ransomware may be inferred in response to system activity meeting a predetermined fingerprint such as, for example, no reads or writes coming into the system for a predetermined period of time.” The presence of ransomware being explicitly detected or inferred in response to system activity correlates to detecting a malicious logic trigger event during the monitoring);
and based on at least detecting the malicious logic trigger event, restoring the first storage system from the snapshot (Col. 40, lines 14-24, “Readers will further appreciate that the backups (often in the form of one or more snapshots) may also be utilized to perform rapid recovery of the storage system. Consider an example in which the storage system is infected with ransomware that locks users out of the storage system. In such an example, software resources 314 within the storage system may be configured to detect the presence of ransomware and may be further configured to restore the storage system to a point-in-time, using the retained backups, prior to the point-in-time at which the ransomware infected the storage system.” The storage system being configured to restore its state to a point in time prior to the ransomware infection through a backup or snapshot after detecting it is infected correlates to restoring the first storage system from a snapshot based on detecting the malicious logic trigger event).
Neelakantam does not explicitly teach that the first storage system is a first logical container datastore. However, logical container datastores are a popular storage unit in virtualized computing environments as evidenced by Rajagopal above (paragraphs 29 and 31).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Rajagopal with monitoring input/output (I/O) traffic for the first storage system; detecting a malicious logic trigger event during the monitoring; and based on at least detecting the malicious logic trigger event, restoring the first storage system from the snapshot as taught by Neelakantam because backups or snapshots can be used to perform rapid recovery of a storage system in the event it is infected with ransomware. Monitoring using software tools, keys, or system activity to detect ransomware infections can reduce the time that a user is locked out of a storage system (Neelakantam: Col. 40, lines 14-36).
With regards to Claim 15, the methods of Claims 8 and 9 perform the same steps as the machine of Claim 15, and Claim 15 is therefore rejected using the same rationale set forth above in the rejections of Claims 8 and 9.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Rajagopal in view of Dai, Camargos, Neelakantam and Hong et al. (U.S. Patent Application Publication No. US 2022/0255817 A1), hereinafter “Hong.”
With regards to Claim 20, Rajagopal in view of Dai teaches the manufacture of Claim 16 above. Rajagopal in view of Dai does not explicitly teach:
generating a snapshot of the first virtual storage object;
monitoring, by a machine learning (ML) model, input/output (I/O) traffic for the first logical container datastore;
detecting a malicious logic trigger event during the monitoring; and
based on at least detecting the malicious logic trigger event, restoring the first logical container datastore from the snapshot.
However, Camargos teaches:
generating a snapshot of the first virtual storage object (Paragraphs 67 and 78, “Every virtual disk 170 provisioned on the system is partitioned into fixed size chunks, each of which is called a storage container… Snapshots And Clones. In addition to replication policies, data management tasks include taking snapshots and making “zero-copy” clones of virtual disks. There is no limit to the number of snapshots or clones that can be created.” Each storage container in a virtual disk partition correlates to a first virtual storage object. Taking a snapshot of a virtual disk includes its partitions and therefore correlates to generating a snapshot of the first virtual storage object).
Additionally, Neelakantam teaches:
monitoring input/output (I/O) traffic for the first storage system (Col. 40, lines 24-32, “In such an example, the presence of ransomware may be explicitly detected through the use of software tools utilized by the system, through the use of a key (e.g., a USB drive) that is inserted into the storage system, or in a similar way. Likewise, the presence of ransomware may be inferred in response to system activity meeting a predetermined fingerprint such as, for example, no reads or writes coming into the system for a predetermined period of time.” The system activity showing no reads or writes coming into the system for a predetermined period of time correlates to monitoring I/O traffic for the first storage system);
detecting a malicious logic trigger event during the monitoring (Col. 40, lines 16-32, “Consider an example in which the storage system is infected with ransomware that locks users out of the storage system. In such an example, software resources 314 within the storage system may be configured to detect the presence of ransomware and may be further configured to restore the storage system to a point-in-time, using the retained backups, prior to the point-in-time at which the ransomware infected the storage system. In such an example, the presence of ransomware may be explicitly detected through the use of software tools utilized by the system, through the use of a key (e.g., a USB drive) that is inserted into the storage system, or in a similar way. Likewise, the presence of ransomware may be inferred in response to system activity meeting a predetermined fingerprint such as, for example, no reads or writes coming into the system for a predetermined period of time.” The presence of ransomware being explicitly detected or inferred in response to system activity correlates to detecting a malicious logic trigger event during the monitoring);
and based on at least detecting the malicious logic trigger event, restoring the first storage system from the snapshot (Col. 40, lines 14-24, “Readers will further appreciate that the backups (often in the form of one or more snapshots) may also be utilized to perform rapid recovery of the storage system. Consider an example in which the storage system is infected with ransomware that locks users out of the storage system. In such an example, software resources 314 within the storage system may be configured to detect the presence of ransomware and may be further configured to restore the storage system to a point-in-time, using the retained backups, prior to the point-in-time at which the ransomware infected the storage system.” The storage system being configured to restore its state to a point in time prior to the ransomware infection through a backup or snapshot after detecting it is infected correlates to restoring the first storage system from a snapshot based on detecting the malicious logic trigger event).
Neelakantam does not explicitly teach that the first storage system is a first logical container datastore and that the monitoring is done by a machine learning (ML) model. However, logical container datastores are popular storage units in virtualized computing environments as evidenced by Rajagopal above (paragraphs 29 and 31). Additionally, the use of machine learning models for monitoring I/O traffic is a popular method of monitoring as evidenced by Hong (Paragraphs 45, 51, and 58, “FIG. 1 is a configuration diagram illustrating an example of a virtual network management-specific machine learning-based virtualized network function (VNF) anomaly detection system 100 according to the present disclosure… The monitoring measurements collected by the monitoring agent include a total of 73 items, including sub-items such as CPU utilization, memory usage, and network traffic load. The monitoring agent sends time-series monitoring data, which includes the collected measures, to the monitoring module 111… Table 1 is a list of features selected for abnormal-state detection learning… Feature Description Time Measurement time instance VNF instance name… CPU-I/O standby time… disk_read Disk-read I/O disk_write Disk-write I/O disk_Io_time Disk-I/O execution time.” The machine learning-based VNF anomaly detection system detecting anomalies based on monitored data including I/O read and write operations correlates to the monitoring of I/O traffic done through machine learning).
Therefore, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Rajagopal with generating a snapshot of the first virtual storage object as taught by Camargos because snapshots and clones are a space-efficient data management method and only require capacity for changed blocks (Camargos: paragraph 78).
Additionally, it would have been obvious to one of ordinary skill in the art to which said subject matter pertains before the effective filing date of the claimed invention to combine Rajagopal with monitoring input/output (I/O) traffic for the first logical container datastore; detecting a malicious logic trigger event during the monitoring; and based on at least detecting the malicious logic trigger event, restoring the first storage system from the snapshot as taught by Neelakantam because backups or snapshots can be used to perform rapid recovery of a storage system in the event it is infected with ransomware. Monitoring using software tools, keys, or system activity to detect ransomware infections can reduce the time that a user is locked out of a storage system (Neelakantam: Col. 40, lines 14-36).
Prior Art Made of Record
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Pabón et al. (U.S. Patent Application Publication No. US 20240037229 A1); teaching a method of monitoring for security threats in a container system. The container storage management system is configured to manage storage resources for containerized applications deployed on one or more nodes within a container system and to monitor activity related to the container system. Anomalies associated with the monitored activity are detected, and recovery operations on a potentially infected container system are executed in response to the detection.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SELINA HU whose telephone number is (571)272-5428. The examiner can normally be reached Monday-Friday 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chat Do, can be reached at (571) 272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
The Public PAIR and Private PAIR systems are no longer available. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SELINA ELISA HU/Examiner, Art Unit 2193
/Chat C Do/Supervisory Patent Examiner, Art Unit 2193