DETAILED ACTION
Claims 1, 3, 5-7, 9, 11, 13-15, 17, 19, and 21-24 are pending.
Claims 1, 9, 17, and 23 are amended, of which claims 1, 9, and 17 are independent.
Claims 1, 3, 5-7, 9, 11, 13-15, 17, 19, 21-22, and 24 are rejected.
Claim 23 is objected to.
Notice of AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Statutory Review under 35 USC § 101
Claims 1, 3, 5-7, and 21-24 are directed toward a method and have been reviewed.
Claims 1, 3, 5-7, and 21-24 remain patent-eligible because claim 1 recites a method directed to significantly more than an abstract idea under the currently known judicial exceptions, namely in that a stripe is appended to existing data in a cloud-based store in response to a determination that a stripe in disk-based data storage is closed. This improves the functioning of the computer itself and provides a technological solution to a technological problem, a consideration under Eligibility Step 2B in determining whether a claim amounts to significantly more (see MPEP 2106.05(a)).
Claims 9, 11, and 13-15 are directed toward a system and have been reviewed.
Claims 9, 11, and 13-15 remain patent-eligible because claim 9 recites operations directed to significantly more than an abstract idea under the currently known judicial exceptions, namely in that a stripe is appended to existing data in a cloud-based store in response to a determination that a stripe in disk-based data storage is closed. This improves the functioning of the computer itself and provides a technological solution to a technological problem, a consideration under Eligibility Step 2B, determining whether a claim amounts to significantly more (see MPEP 2106.05(a)).
Claims 17 and 19 are directed toward an article of manufacture and have been reviewed.
Claims 17 and 19 are statutory on their face, as the claimed article of manufacture is expressly recited as “non-transitory” and therefore excludes transitory signals.
Claims 17 and 19 remain patent-eligible because claim 17 recites instructions directed to significantly more than an abstract idea under the currently known judicial exceptions, namely in that a stripe is appended to existing data in a cloud-based store in response to a determination that a stripe in disk-based data storage is closed. This improves the functioning of the computer itself and provides a technological solution to a technological problem, a consideration under Eligibility Step 2B, determining whether a claim amounts to significantly more (see MPEP 2106.05(a)).
Response to Arguments
Applicant’s arguments, see pages 9-10, filed 12/18/2025, with respect to the rejection(s) of claim(s) 1, 9, and 17 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made under 35 U.S.C. 103 as being unpatentable over Maybee in view of Macko, further in view of Chang, further in view of Dornemann, and further in view of newly cited reference Saito.
Dependent claims 3, 5-7, 11, 13-15, 19, 21-22, and 24 remain rejected at least by virtue of their dependence on rejected base claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 5-7, 9, 11, 13-15, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Maybee et al., U.S. Patent Application Publication No. 2018/0196832 (hereinafter Maybee), in view of Macko et al., U.S. Patent Application Publication No. 2017/0123944 (hereinafter Macko), further in view of Chang et al., U.S. Patent Application Publication No. 2020/0351347 (published November 5, 2020; hereinafter Chang), further in view of Dornemann, U.S. Patent Application Publication No. 2017/0262350 (hereinafter Dornemann), and further in view of Saito, U.S. Patent Application Publication No. 2022/0083273 (hereinafter Saito).
Regarding claim 1, Maybee teaches:
A computer-implemented method performed by a computer system having a memory and at least one hardware processor, the computer-implemented method comprising: (Maybee FIGs. 1-3, ¶ 0069: Steps of various methods may be performed by the processors, memory devices, interfaces, and/or circuitry of the system in FIGS. 1-2)
receiving, by a user space file system hosted on the computer system and from an application that is hosted on the computer system, a first request to write a first set of … data; (Maybee FIG. 4, ¶ 0086 shows an application hosted on the computer system as claimed: requests to perform one or more transactions with respect to one or more files may be received from the application 202 at an application layer of the file system 200-1, and through the system call interface 208 of the interface layer of the file system 200-1; Maybee ¶ 0087 shows this involving a file: the data objects and metadata corresponding to the one or more files may be stored as a logical tree 300; Maybee ¶ 0098 shows the requests being received and involving "writes" as claimed: As indicated by block 702, POSIX-compliant request(s) to perform particular operation(s) (i.e., a transaction(s)) may be received from the application 202. Such an operation may correspond to writing and/or modifying data ... transactions effected through the DMU 218 may include a series of operations that are committed to one or both of the system storage pool 416 and the cloud object store 404 as a group ... As indicated by block 706, the DMU 218 may translate requests to perform operations on data objects directly to requests to perform write operations (i.e., I/O requests) directed to a physical location within the system storage pool 416 and/or the cloud object store 404)
based on the receiving of the first request to write the first set of … data to the file from the application hosted on the computer system… writing, by the user space file system, the first set of … data… (Maybee FIGs. 4-5, ¶ 0086: The requests may be POSIX-compliant and may be converted by one or more components of the file system 200-1 into one or more object interface requests to perform one or more operations with respect to a cloud-based instantiation 300A of the logical tree 300 stored in the cloud object store 404; Maybee ¶ 0087 shows this involving a file: the data objects and metadata corresponding to the one or more files may be stored as a logical tree 300; see also Maybee ¶ 0074: The hybrid cloud storage system 400 may allow the local file system 200 to use the cloud object storage 404 as a “drive.” || Maybee ¶ 0098 shows the requests involving "writes" as claimed: As indicated by block 702, POSIX-compliant request(s) to perform particular operation(s) (i.e., a transaction(s)) may be received from the application 202. Such an operation may correspond to writing and/or modifying data)
receiving, by the user space file system, a second request to read … a second set of … data from the file; (Maybee FIG. 4, ¶ 0086 shows an application hosted on the computer system as claimed: requests to perform one or more transactions with respect to one or more files may be received from the application 202 at an application layer of the file system 200-1, and through the system call interface 208 of the interface layer of the file system 200-1; Maybee ¶ 0087 shows this involving a file: the data objects and metadata corresponding to the one or more files may be stored as a logical tree 300; Maybee FIG. 9, steps 902-908, ¶ 0120 shows the requests being received and involving "reads" as claimed: As indicated by block 904, the POSIX-compliant request may be forwarded from the operating system, via the system call interface 208, to the DMU 218. As indicated by block 906, the DMU 218 may translate requests to perform operations on data objects directly to requests to perform one or more read operations (i.e., one or more I/O requests) directed to the cloud object store 404)
based on the receiving of the second request to read the second set of … data from the file, fetching, by the user space file system, the second set of … data from the file in the cloud-based … object store. (Maybee FIGs. 4-5, ¶ 0086: The requests may be POSIX-compliant and may be converted by one or more components of the file system 200-1 into one or more object interface requests to perform one or more operations with respect to a cloud-based instantiation 300A of the logical tree 300 stored in the cloud object store 404; Maybee ¶ 0087 shows this involving a file: the data objects and metadata corresponding to the one or more files may be stored as a logical tree 300; Maybee ¶ 0088 shows this file is at a cloud-based object store: the file system 200-1 may cause storage of the data objects and the corresponding metadata of the logical tree 300 in the cloud object store 404; see also Maybee ¶ 0074: The hybrid cloud storage system 400 may allow the local file system 200 to use the cloud object storage 404 as a “drive.” || Maybee FIG. 9, steps 902-908, ¶ 0120 shows this involving fetching data as claimed: As indicated by block 910, the cloud interface appliance 402 may receive the I/O request(s) and may send corresponding cloud interface request(s) to the cloud object store 404 ... As indicated by block 912, the cloud interface appliance 402 may receive data object(s) responsive to the object interface requests)
Maybee does not expressly disclose a first request to write a first set of snapshot data associated with a previously generated snapshot of a virtual machine accessible by the computer system, wherein the first request is directed to a file of the user space file system, wherein the previously generated snapshot of the virtual machine captures a point-in-time version of the virtual machine.
Maybee further does not expressly disclose writing the first set of snapshot data to the file.
Maybee further does not expressly disclose, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.”
Maybee further does not expressly disclose determining that a stripe in the disk-based data storage is closed as a result of the stripe being full.
Maybee further does not expressly disclose, in response to determining that the stripe in the disk-based data storage is closed, appending the stripe containing the first set of snapshot data to the file in a cloud-based key-value object store that is separate from the computer system that hosts the user space file system and the application.
Maybee further does not expressly disclose a cloud-based key-value object store.
Maybee further does not expressly disclose reading, for recovery of the virtual machine, a second set of snapshot data from the file.
However, Macko addresses some of this by teaching the following:
writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, (Macko FIG. 1, ¶ 0043: if array 150 were configured in RAID-4, one of the three disks would be redundant and used for parity information, meaning that a data stripe 136 would consist of two data blocks 132. In some aspects, the SMR array subsystem 130 waits until a full data stripe 136 of data blocks 132 is present in a buffer 145 before writing it to array zone 134. The term ‘block’ is used herein for simplicity and can refer to logical blocks on the disks, a partial logical block of data, or any other data subdivision; Macko shows this can be snapshot data in ¶ 0016, "shingled drives can, for many operational environments, effectively replace tape for backup and archival data because of the largely-sequential, rather than random, nature of writes to backup and archival storage" and in FIG. 4, ¶ 0065, "In RAID-1, data blocks are written to Disk 1 and mirrored on the corresponding zone on Disk 2 for use as a redundant backup copy")
determining that a stripe in the disk-based data storage is closed as a result of the stripe being full; (Macko ¶ 0043: the SMR array subsystem 130 waits until a full data stripe 136 of data blocks 132 is present in a buffer 145 before writing it to array zone 134. The term ‘block’ is used herein for simplicity and can refer to logical blocks on the disks, a partial logical block of data, or any other data subdivision)
in response to determining that the stripe in the disk-based data storage is closed, appending the stripe containing the first set of snapshot data to … object store… (Macko ¶ 0019: a storage system can operate to organize or identify collections of disk zones on an array of shingled drives as an array zone and maintain a non-volatile buffer for each array zone that contains recently appended data until all of its blocks are written to the drives; ¶ 0043: the SMR array subsystem 130 waits until a full data stripe 136 of data blocks 132 is present in a buffer 145 before writing it to array zone 134. The term ‘block’ is used herein for simplicity and can refer to logical blocks on the disks, a partial logical block of data, or any other data subdivision; see Macko FIG. 1, ¶ 0037: Array zones 134 correspond to a set of disk zones 160 on the SMR drives, illustrated as Disk 1, Disk 2, and Disk 3, which comprise array 150 ... Shingled drives store the bulk of their data in disk zones 160, which are collections of adjacent tracks in which data can only be appended; see Macko FIG. 3, ¶ 0056-0058: the SMR array subsystem 130 can stripe and buffer the data 102 in a manner that minimizes the amount of non-volatile storage 140 required to ensure data integrity during the writing process (310) ... The SMR array subsystem 130 can then write each of the data blocks 132 that comprise the data stripe 136 and its corresponding redundant information to the corresponding disk zones on the array 150 (330))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data migration of Maybee with the data striping of Macko.
In addition, both references (Maybee and Macko) are analogous art, as they are directed to the same field of endeavor, such as management of data reads and writes.
Motivation to do so would be the teaching, suggestion, or motivation for a person of ordinary skill in the art to improve storage density (Macko ¶ 0003) and to implement improved performance, efficiency, and reliability of computer systems used for data storage, as seen in Macko ¶ 0022.
Maybee in view of Macko does not expressly disclose:
a first request to write a first set of snapshot data associated with a previously generated snapshot of a virtual machine accessible by the computer system, wherein the first request is directed to a file of the user space file system, wherein the previously generated snapshot of the virtual machine captures a point-in-time version of the virtual machine.
Maybee in view of Macko further does not expressly disclose, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.”
Maybee in view of Macko further does not expressly disclose performing its appending to the file in a cloud-based key-value object store.
Maybee in view of Macko further does not expressly disclose reading, for recovery of the virtual machine, a second set of snapshot data from the file.
However, Chang addresses this by teaching the following:
receiving … a first request to write a first set of snapshot data associated with a previously generated snapshot of a virtual machine accessible by the computer system, wherein the first request is directed to a file of the user space file system, (Chang shows snapshots of virtual machines as claimed in ¶ 0118: perform a backup of a virtual machine from the data center of an organization; Chang shows receiving a first request as claimed in FIG. 9, ¶ 0120-0123; see first ¶ 0120: The fingerprint service 98 may receive a fingerprint query from the backup agent 84 (reference numeral 172); ¶ 0122: the backup service 80 may receive the L0, L0MD, and L1 object IDs (or the file ID for the corresponding virtual disk) from the backup agent 84 (reference numeral 178); ¶ 0123: The backup postprocessing may include updating the fingerprint database 100 with the fingerprints of the blocks captured in the backup. The backup service 80 may get the L1 from the object storage 90 using the L1 object ID (reference numeral 182); Chang shows this involving a previously generated snapshot as claimed in ¶ 0124-0126: The backup postprocessing may include merging the L1 data from a previous backup with the partially-populated L1 provided by the backup agent 84 to provide a complete L1 for the backup ... The backup service 80 may communicate with the catalog service 78 to obtain a backup ID for the most recent backup of the same virtual machine [shows snapshots of virtual machines as claimed] ... The backup service 80 may replace each invalid fingerprint in the partially-populated L1 for the current backup with the fingerprint from the corresponding offset in the previous L1, merging the fingerprints for the unchanged blocks to create the complete L1 for the current backup (reference numeral 186))
Chang teaches appending to the file. (Chang ¶ 0064: The contents of the virtual disk file 40A-40C may be the blocks of data stored on the virtual disk. Logically, the blocks may be stored in order from offset zero at the beginning of the virtual disk file to the last offset on the virtual disk at the end of the file. For example, if the virtual disk is 100 megabytes (MB), the virtual disk file is 100 MB in size with the byte at offset 0 logically located at the beginning of the file and the byte at offset 100 MB at the end of the file)
Chang further teaches a cloud-based key-value object store. (Chang ¶ 0109: The backup agent 84 may put the L0, L0MD, and L1 data in the object storage 90 of the public cloud 12 (reference numeral 138) ... the object ID may be referred to as a key in the AWS public cloud, and the object itself is the value associated with the key)
Chang further teaches:
receiving … a second request to read, for recovery of the virtual machine, a second set of snapshot data from the file; (Chang FIG. 10, ¶ 0129-0131: Once the spot or on-demand VM instance is started, the backup service 80 may establish a block storage for the VM instance that is large enough to accommodate the backed-up virtual disk (reference numeral 208). The backup service 80 may load code into the VM instance to perform the restore and verification process ... restoring the backup to the VM instance (and more particularly to the block storage established for the VM instance) (reference numeral 210); Chang shows the claimed recovery of a virtual machine in FIG. 11, ¶ 0134-0135: FIG. 11 illustrates the restore of a backup to a VM instance (reference numeral 210 in FIG. 11, and also reference numeral 210 in FIG. 14))
based on the receiving of the second request to read the second set of snapshot data from the file, fetching … the second set of snapshot data from the file in the cloud-based key-value object store. (Chang ¶ 0134-0136, see ¶ 0134: FIG. 11 illustrates the restore of a backup to a VM instance (reference numeral 210 in FIG. 11, and also reference numeral 210 in FIG. 14); ¶ 0136: The VM instance may read the backup data block from the offset within the L0 (reference numeral 240); see also Chang ¶ 0109 regarding the claimed cloud-based key-value object store: The backup agent 84 may put the L0, L0MD, and L1 data in the object storage 90 of the public cloud 12 (reference numeral 138) ... the object ID may be referred to as a key in the AWS public cloud, and the object itself is the value associated with the key)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cloud-based functionality of Maybee as modified with the cloud-based functionality of Chang.
In addition, both references (Maybee as modified and Chang) are analogous art, as they are directed to the same field of endeavor, such as cloud storage management.
Motivation to do so would be to improve the functioning of Maybee as modified, which allows switching among default local, cached, or cloud storage of files, with the functioning of the similar reference Chang, which also provides remote cloud object storage but adds the ability to rely on a public cloud for its cloud-based data protection service.
Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to implement improved backup efficiency (Chang ¶ 0124) and improved backup performance and reduced backup cost (Chang ¶ 0160).
Maybee in view of Macko and Chang does not expressly disclose wherein the previously generated snapshot of the virtual machine captures a point-in-time version of the virtual machine.
Maybee in view of Macko and Chang further does not expressly disclose, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.”
However, Dornemann addresses this by teaching wherein the previously generated snapshot of the virtual machine captures a point-in-time version of the virtual machine. (Dornemann ¶ 0003: A company might back up critical computing systems such as databases, file servers, web servers, virtual machines, and so on as part of a daily, weekly, or monthly maintenance schedule; Dornemann ¶ 0075: A secondary copy 116 can comprise a separate stored copy of data that is derived from one or more earlier-created stored copies (e.g., derived from primary data 112 or from another secondary copy 116). Secondary copies 116 can include point-in-time data, and may be intended for relatively long-term retention before some or all of the data is moved to other storage or discarded ... a disk array capable of performing hardware snapshots stores primary data 112 and creates and stores hardware snapshots of the primary data 112 as secondary copies 116)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cloud data migration of Maybee as modified with the cloud operations of Dornemann.
In addition, both references (Maybee as modified and Dornemann) are analogous art, as they are directed to the same field of endeavor, such as cloud-based snapshot management.
Motivation to do so would be the teaching, suggestion, or motivation for a person of ordinary skill in the art to greatly improve the speed of performed information management operations and to improve the capacity of the system to handle large numbers of such operations, while reducing the computational load on the production environment of client computing devices as seen in Dornemann ¶ 0080.
Maybee in view of Macko, Chang, and Dornemann does not expressly disclose, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.”
However, Saito addresses this by teaching the following:
based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.” (Saito ¶ 0275: the In-Place Update method is applied when the data is written to the first memory area 32a … the In-Place Update method is a method of writing (overwriting) the data to a physical address [relevant to disk-based data storage] associated with a logical address designated in the write command at the time when the data based on the write command are written to the first memory area 32a as described above; see also Saito ¶ 0499: When the write command is received from the CPU10 (host), the SCM controller 31 determines the write method of writing the data. When the In-Place Update method (first method) is determined as the write method, the SCM controller 31 writes the data to the first memory area 32a in the In-Place Update method; see further support in Saito ¶ 0004, "Such a memory system can be used as a main memory or a storage device since the memory system has middle characteristics between a main memory such as a dynamic random access memory (DRAM) and a storage device such as a solid state drive (SSD)" and Saito ¶ 0058: "The non-volatile memory includes first and second memory areas")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data migration of Maybee as modified with the write-method handling of Saito.
In addition, both references (Maybee as modified and Saito) are analogous art, as they are directed to the same field of endeavor, such as management of data reads and writes.
Motivation to do so would be the teaching, suggestion, or motivation for a person of ordinary skill in the art to improve write performance, as seen in Saito (¶ 0277, ¶ 0374, ¶ 0500-0502).
Regarding claim 9, Maybee teaches:
A computer system comprising: at least one hardware processor of a managed private cloud architecture serving an organization; and (Maybee FIGs. 1-3, ¶ 0069: Steps of various methods may be performed by the processors, memory devices, interfaces, and/or circuitry of the system in FIGS. 1-2; ¶ 0231: services may be provided under a private cloud model in which cloud infrastructure system 2102 is operated solely for a single organization and may provide services for one or more entities within the organization)
a non-transitory computer-readable medium storing executable instructions that, when executed, cause the at least one hardware processor to perform operations comprising: (Maybee FIGs. 1-3, ¶ 0069: Steps of various methods may be performed by the processors, memory devices, interfaces, and/or circuitry of the system in FIGS. 1-2; ¶ 0254: Storage subsystem 2218 may also provide a tangible computer-readable storage medium for storing the basic programming and data constructs that provide the functionality of some embodiments. Software (programs, code modules, instructions) that when executed by a processor provide the functionality described above may be stored in storage subsystem 2218. These software modules or instructions may be executed by processing unit 2204)
receiving, by a user space file system hosted on the computer system and from an application that is hosted on the computer system, a first request to write a first set of … data; (Maybee FIG. 4, ¶ 0086 shows an application hosted on the computer system as claimed: requests to perform one or more transactions with respect to one or more files may be received from the application 202 at an application layer of the file system 200-1, and through the system call interface 208 of the interface layer of the file system 200-1; Maybee ¶ 0087 shows this involving a file: the data objects and metadata corresponding to the one or more files may be stored as a logical tree 300; Maybee ¶ 0098 shows the requests being received and involving "writes" as claimed: As indicated by block 702, POSIX-compliant request(s) to perform particular operation(s) (i.e., a transaction(s)) may be received from the application 202. Such an operation may correspond to writing and/or modifying data ... transactions effected through the DMU 218 may include a series of operations that are committed to one or both of the system storage pool 416 and the cloud object store 404 as a group ... As indicated by block 706, the DMU 218 may translate requests to perform operations on data objects directly to requests to perform write operations (i.e., I/O requests) directed to a physical location within the system storage pool 416 and/or the cloud object store 404)
based on the receiving of the first request to write the first set of … data to the file from the application hosted on the computer system… writing, by the user space file system, the first set of … data… (Maybee FIGs. 4-5, ¶ 0086: The requests may be POSIX-compliant and may be converted by one or more components of the file system 200-1 into one or more object interface requests to perform one or more operations with respect to a cloud-based instantiation 300A of the logical tree 300 stored in the cloud object store 404; Maybee ¶ 0087 shows this involving a file: the data objects and metadata corresponding to the one or more files may be stored as a logical tree 300; see also Maybee ¶ 0074: The hybrid cloud storage system 400 may allow the local file system 200 to use the cloud object storage 404 as a “drive.” || Maybee ¶ 0098 shows the requests involving "writes" as claimed: As indicated by block 702, POSIX-compliant request(s) to perform particular operation(s) (i.e., a transaction(s)) may be received from the application 202. Such an operation may correspond to writing and/or modifying data)
receiving, by the user space file system, a second request to read … a second set of … data from the file; (Maybee FIG. 4, ¶ 0086 shows an application hosted on the computer system as claimed: requests to perform one or more transactions with respect to one or more files may be received from the application 202 at an application layer of the file system 200-1, and through the system call interface 208 of the interface layer of the file system 200-1; Maybee ¶ 0087 shows this involving a file: the data objects and metadata corresponding to the one or more files may be stored as a logical tree 300; Maybee FIG. 9, steps 902-908, ¶ 0120 shows the requests being received and involving "reads" as claimed: As indicated by block 904, the POSIX-compliant request may be forwarded from the operating system, via the system call interface 208, to the DMU 218. As indicated by block 906, the DMU 218 may translate requests to perform operations on data objects directly to requests to perform one or more read operations (i.e., one or more I/O requests) directed to the cloud object store 404)
based on the receiving of the second request to read the second set of … data from the file, fetching, by the user space file system, the second set of … data from the file in the cloud-based … object store. (Maybee FIGs. 4-5, ¶ 0086: The requests may be POSIX-compliant and may be converted by one or more components of the file system 200-1 into one or more object interface requests to perform one or more operations with respect to a cloud-based instantiation 300A of the logical tree 300 stored in the cloud object store 404; Maybee ¶ 0087 shows this involving a file: the data objects and metadata corresponding to the one or more files may be stored as a logical tree 300; Maybee ¶ 0088 shows this file is at a cloud-based object store: the file system 200-1 may cause storage of the data objects and the corresponding metadata of the logical tree 300 in the cloud object store 404; see also Maybee ¶ 0074: The hybrid cloud storage system 400 may allow the local file system 200 to use the cloud object storage 404 as a “drive.” || Maybee FIG. 9, steps 902-908, ¶ 0120 shows this involving fetching data as claimed: As indicated by block 910, the cloud interface appliance 402 may receive the I/O request(s) and may send corresponding cloud interface request(s) to the cloud object store 404 ... As indicated by block 912, the cloud interface appliance 402 may receive data object(s) responsive to the object interface requests)
Maybee does not expressly disclose a first request to write a first set of snapshot data associated with a previously generated snapshot of a virtual machine accessible by the computer system, wherein the first request is directed to a file of the user space file system, wherein the previously generated snapshot of the virtual machine captures a point-in-time version of the virtual machine.
Maybee further does not expressly disclose writing the first set of snapshot data to the file.
Maybee further does not expressly disclose, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.”
Maybee further does not expressly disclose determining that a stripe in the disk-based data storage is closed as a result of the stripe being full.
Maybee further does not expressly disclose, in response to determining that the stripe in the disk-based data storage is closed, appending the stripe containing the first set of snapshot data to the file in a cloud-based key-value object store that is separate from the computer system that hosts the user space file system and the application.
Maybee further does not expressly disclose a cloud-based key-value object store.
Maybee further does not expressly disclose reading, for recovery of the virtual machine, a second set of snapshot data from the file.
However, Macko addresses some of this by teaching the following:
writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, (Macko FIG. 1, ¶ 0043: if array 150 were configured in RAID-4, one of the three disks would be redundant and used for parity information, meaning that a data stripe 136 would consist of two data blocks 132. In some aspects, the SMR array subsystem 130 waits until a full data stripe 136 of data blocks 132 is present in a buffer 145 before writing it to array zone 134. The term ‘block’ is used herein for simplicity and can refer to logical blocks on the disks, a partial logical block of data, or any other data subdivision; Macko shows this can be snapshot data in ¶ 0016, "shingled drives can, for many operational environments, effectively replace tape for backup and archival data because of the largely-sequential, rather than random, nature of writes to backup and archival storage" and in FIG. 4, ¶ 0065, "In RAID-1, data blocks are written to Disk 1 and mirrored on the corresponding zone on Disk 2 for use as a redundant backup copy")
determining that a stripe in the disk-based data storage is closed as a result of the stripe being full; (Macko ¶ 0043: the SMR array subsystem 130 waits until a full data stripe 136 of data blocks 132 is present in a buffer 145 before writing it to array zone 134. The term ‘block’ is used herein for simplicity and can refer to logical blocks on the disks, a partial logical block of data, or any other data subdivision)
in response to determining that the stripe in the disk-based data storage is closed, appending the stripe containing the first set of snapshot data to … object store… (Macko ¶ 0019: a storage system can operate to organize or identify collections of disk zones on an array of shingled drives as an array zone and maintain a non-volatile buffer for each array zone that contains recently appended data until all of its blocks are written to the drives; ¶ 0043: the SMR array subsystem 130 waits until a full data stripe 136 of data blocks 132 is present in a buffer 145 before writing it to array zone 134. The term ‘block’ is used herein for simplicity and can refer to logical blocks on the disks, a partial logical block of data, or any other data subdivision; see Macko FIG. 1, ¶ 0037: Array zones 134 correspond to a set of disk zones 160 on the SMR drives, illustrated as Disk 1, Disk 2, and Disk 3, which comprise array 150 ... Shingled drives store the bulk of their data in disk zones 160, which are collections of adjacent tracks in which data can only be appended; see Macko FIG. 3, ¶ 0056-0058: the SMR array subsystem 130 can stripe and buffer the data 102 in a manner that minimizes the amount of non-volatile storage 140 required to ensure data integrity during the writing process (310) ... The SMR array subsystem 130 can then write each of the data blocks 132 that comprise the data stripe 136 and its corresponding redundant information to the corresponding disk zones on the array 150 (330))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data migration functionality of Maybee with the stripe-buffered writes of Macko.
In addition, both references (Maybee and Macko) are analogous art directed to the same field of endeavor, namely the management of data reads and writes.
Motivation to do so would be the teaching, suggestion, or motivation for a person of ordinary skill in the art to improve storage density (Macko ¶ 0003) and to implement improved performance, efficiency, and reliability of computer systems used for data storage as seen in Macko ¶ 0022.
Maybee in view of Macko does not expressly disclose:
a first request to write a first set of snapshot data associated with a previously generated snapshot of a virtual machine accessible by the computer system, wherein the first request is directed to a file of the user space file system, wherein the previously generated snapshot of the virtual machine captures a point-in-time version of the virtual machine.
Maybee in view of Macko further does not expressly disclose, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.”
Maybee in view of Macko further does not expressly disclose performing the appending to the file in a cloud-based key-value object store.
Maybee in view of Macko further does not expressly disclose reading, for recovery of the virtual machine, a second set of snapshot data from the file.
However, Chang addresses this by teaching the following:
receiving … a first request to write a first set of snapshot data associated with a previously generated snapshot of a virtual machine accessible by the computer system, wherein the first request is directed to a file of the user space file system, (Chang shows snapshots of virtual machines as claimed in ¶ 0118: perform a backup of a virtual machine from the data center of an organization; Chang shows receiving a first request as claimed in FIG. 9, ¶ 0120-0123; see first ¶ 0120: The fingerprint service 98 may receive a fingerprint query from the backup agent 84 (reference numeral 172); ¶ 0122: the backup service 80 may receive the L0, L0MD, and L1 object IDs (or the file ID for the corresponding virtual disk) from the backup agent 84 (reference numeral 178); ¶ 0123: The backup postprocessing may include updating the fingerprint database 100 with the fingerprints of the blocks captured in the backup. The backup service 80 may get the L1 from the object storage 90 using the L1 object ID (reference numeral 182); Chang shows this involving a previously generated snapshot as claimed in ¶ 0124-0126: The backup postprocessing may include merging the L1 data from a previous backup with the partially-populated L1 provided by the backup agent 84 to provide a complete L1 for the backup ... The backup service 80 may communicate with the catalog service 78 to obtain a backup ID for the most recent backup of the same virtual machine [shows snapshots of virtual machines as claimed] ... The backup service 80 may replace each invalid fingerprint in the partially-populated L1 for the current backup with the fingerprint from the corresponding offset in the previous L1, merging the fingerprints for the unchanged blocks to create the complete L1 for the current backup (reference numeral 186))
Chang teaches appending to the file. (Chang ¶ 0064: The contents of the virtual disk file 40A-40C may be the blocks of data stored on the virtual disk. Logically, the blocks may be stored in order from offset zero at the beginning of the virtual disk file to the last offset on the virtual disk at the end of the file. For example, if the virtual disk is 100 megabytes (MB), the virtual disk file is 100 MB in size with the byte at offset 0 logically located at the beginning of the file and the byte at offset 100 MB at the end of the file)
Chang further teaches a cloud-based key-value object store. (Chang ¶ 0109: The backup agent 84 may put the L0, L0MD, and L1 data in the object storage 90 of the public cloud 12 (reference numeral 138) ... the object ID may be referred to as a key in the AWS public cloud, and the object itself is the value associated with the key)
Chang further teaches:
receiving … a second request to read, for recovery of the virtual machine, a second set of snapshot data from the file; (Chang FIG. 10, ¶ 0129-0131: Once the spot or on-demand VM instance is started, the backup service 80 may establish a block storage for the VM instance that is large enough to accommodate the backed-up virtual disk (reference numeral 208). The backup service 80 may load code into the VM instance to perform the restore and verification process ... restoring the backup to the VM instance (and more particularly to the block storage established for the VM instance) (reference numeral 210); Chang shows the claimed recovery of a virtual machine in FIG. 11, ¶ 0134-0135: FIG. 11 illustrates the restore of a backup to a VM instance (reference numeral 210 in FIG. 11, and also reference numeral 210 in FIG. 14))
based on the receiving of the second request to read the second set of snapshot data from the file, fetching … the second set of snapshot data from the file in the cloud-based key-value object store. (Chang ¶ 0134-0136, see ¶ 0134: FIG. 11 illustrates the restore of a backup to a VM instance (reference numeral 210 in FIG. 11, and also reference numeral 210 in FIG. 14); ¶ 0136: The VM instance may read the backup data block from the offset within the L0 (reference numeral 240); see also Chang ¶ 0109 re:claimed cloud-based key-value object store: The backup agent 84 may put the L0, L0MD, and L1 data in the object storage 90 of the public cloud 12 (reference numeral 138) ... the object ID may be referred to as a key in the AWS public cloud, and the object itself is the value associated with the key)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cloud-based functionality of Maybee as modified with the cloud-based functionality of Chang.
In addition, both references (Maybee as modified and Chang) are analogous art directed to the same field of endeavor, namely cloud storage management.
Motivation to do so would be to improve the functioning of Maybee as modified, which allows switching among default local, cached, or cloud storage of files, with the functioning of the similar reference Chang, which likewise provides remote cloud object storage but adds the ability to rely on a public cloud for its cloud-based data protection service.
Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to implement improved backup efficiency (Chang ¶ 0124) and improved backup performance and reduced backup cost (Chang ¶ 0160).
Maybee in view of Macko and Chang does not expressly disclose wherein the previously generated snapshot of the virtual machine captures a point-in-time version of the virtual machine.
Maybee in view of Macko and Chang further does not expressly disclose, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.”
However, Dornemann addresses this by teaching wherein the previously generated snapshot of the virtual machine captures a point-in-time version of the virtual machine. (Dornemann ¶ 0003: A company might back up critical computing systems such as databases, file servers, web servers, virtual machines, and so on as part of a daily, weekly, or monthly maintenance schedule; Dornemann ¶ 0075: A secondary copy 116 can comprise a separate stored copy of data that is derived from one or more earlier-created stored copies (e.g., derived from primary data 112 or from another secondary copy 116). Secondary copies 116 can include point-in-time data, and may be intended for relatively long-term retention before some or all of the data is moved to other storage or discarded ... a disk array capable of performing hardware snapshots stores primary data 112 and creates and stores hardware snapshots of the primary data 112 as secondary copies 116)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cloud data migration functionality of Maybee as modified with the cloud operations of Dornemann.
In addition, both references (Maybee as modified and Dornemann) are analogous art directed to the same field of endeavor, namely cloud-based snapshot management.
Motivation to do so would be the teaching, suggestion, or motivation for a person of ordinary skill in the art to greatly improve the speed of performed information management operations and to improve the capacity of the system to handle large numbers of such operations, while reducing the computational load on the production environment of client computing devices as seen in Dornemann ¶ 0080.
Maybee in view of Macko, Chang, and Dornemann further does not expressly disclose, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.”
However, Saito addresses this by teaching the following:
Saito teaches, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.” (Saito ¶ 0275: the In-Place Update method is applied when the data is written to the first memory area 32a … the In-Place Update method is a method of writing (overwriting) the data to a physical address [relevant to disk-based data storage] associated with a logical address designated in the write command at the time when the data based on the write command are written to the first memory area 32a as described above; see also Saito ¶ 0499: When the write command is received from the CPU10 (host), the SCM controller 31 determines the write method of writing the data. When the In-Place Update method (first method) is determined as the write method, the SCM controller 31 writes the data to the first memory area 32a in the In-Place Update method; see further support in Saito ¶ 0004, "Such a memory system can be used as a main memory or a storage device since the memory system has middle characteristics between a main memory such as a dynamic random access memory (DRAM) and a storage device such as a solid state drive (SSD)" and Saito ¶ 0058: "The non-volatile memory includes first and second memory areas")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data migration functionality of Maybee as modified with the in-place write functionality of Saito.
In addition, both references (Maybee as modified and Saito) are analogous art directed to the same field of endeavor, namely the management of data reads and writes.
Motivation to do so would be the teaching, suggestion, or motivation for a person of ordinary skill in the art to improve write performance as seen in Saito (¶ 0277, ¶ 0374, ¶ 0500-0502).
Regarding claim 17, Maybee teaches:
A non-transitory computer-readable medium tangibly embodying a set of instructions that, when executed by at least one hardware processor of a computer system, cause the at least one hardware processor to perform operations comprising: (Maybee FIGs. 1-3, ¶ 0069: Steps of various methods may be performed by the processors, memory devices, interfaces, and/or circuitry of the system in FIGS. 1-2; ¶ 0254: Storage subsystem 2218 may also provide a tangible computer-readable storage medium for storing the basic programming and data constructs that provide the functionality of some embodiments. Software (programs, code modules, instructions) that when executed by a processor provide the functionality described above may be stored in storage subsystem 2218. These software modules or instructions may be executed by processing unit 2204)
receiving, by a user space file system hosted on the computer system and from an application that is hosted on the computer system, a first request to write a first set of … data; (Maybee FIG. 4, ¶ 0086 shows an application hosted on the computer system as claimed: requests to perform one or more transactions with respect to one or more files may be received from the application 202 at an application layer of the file system 200-1, and through the system call interface 208 of the interface layer of the file system 200-1; Maybee ¶ 0087 shows this involving a file: the data objects and metadata corresponding to the one or more files may be stored as a logical tree 300; Maybee ¶ 0098 shows the requests being received and involving "writes" as claimed: As indicated by block 702, POSIX-compliant request(s) to perform particular operation(s) (i.e., a transaction(s)) may be received from the application 202. Such an operation may correspond to writing and/or modifying data ... transactions effected through the DMU 218 may include a series of operations that are committed to one or both of the system storage pool 416 and the cloud object store 404 as a group ... As indicated by block 706, the DMU 218 may translate requests to perform operations on data objects directly to requests to perform write operations (i.e., I/O requests) directed to a physical location within the system storage pool 416 and/or the cloud object store 404)
based on the receiving of the first request to write the first set of … data to the file from the application hosted on the computer system… writing, by the user space file system, the first set of … data… (Maybee FIGs. 4-5, ¶ 0086: The requests may be POSIX-compliant and may be converted by one or more components of the file system 200-1 into one or more object interface requests to perform one or more operations with respect to a cloud-based instantiation 300A of the logical tree 300 stored in the cloud object store 404; Maybee ¶ 0087 shows this involving a file: the data objects and metadata corresponding to the one or more files may be stored as a logical tree 300; see also Maybee ¶ 0074: The hybrid cloud storage system 400 may allow the local file system 200 to use the cloud object storage 404 as a “drive.” || Maybee ¶ 0098 shows the requests involving "writes" as claimed: As indicated by block 702, POSIX-compliant request(s) to perform particular operation(s) (i.e., a transaction(s)) may be received from the application 202. Such an operation may correspond to writing and/or modifying data)
receiving, by the user space file system, a second request to read … a second set of … data from the file; (Maybee FIG. 4, ¶ 0086 shows an application hosted on the computer system as claimed: requests to perform one or more transactions with respect to one or more files may be received from the application 202 at an application layer of the file system 200-1, and through the system call interface 208 of the interface layer of the file system 200-1; Maybee ¶ 0087 shows this involving a file: the data objects and metadata corresponding to the one or more files may be stored as a logical tree 300; Maybee FIG. 9, steps 902-908, ¶ 0120 shows the requests being received and involving "reads" as claimed: As indicated by block 904, the POSIX-compliant request may be forwarded from the operating system, via the system call interface 208, to the DMU 218. As indicated by block 906, the DMU 218 may translate requests to perform operations on data objects directly to requests to perform one or more read operations (i.e., one or more I/O requests) directed to the cloud object store 404)
based on the receiving of the second request to read the second set of … data from the file, fetching, by the user space file system, the second set of … data from the file in the cloud-based … object store. (Maybee FIGs. 4-5, ¶ 0086: The requests may be POSIX-compliant and may be converted by one or more components of the file system 200-1 into one or more object interface requests to perform one or more operations with respect to a cloud-based instantiation 300A of the logical tree 300 stored in the cloud object store 404; Maybee ¶ 0087 shows this involving a file: the data objects and metadata corresponding to the one or more files may be stored as a logical tree 300; Maybee ¶ 0088 shows this file is at a cloud-based object store: the file system 200-1 may cause storage of the data objects and the corresponding metadata of the logical tree 300 in the cloud object store 404; see also Maybee ¶ 0074: The hybrid cloud storage system 400 may allow the local file system 200 to use the cloud object storage 404 as a “drive.” || Maybee FIG. 9, steps 902-908, ¶ 0120 shows this involving fetching data as claimed: As indicated by block 910, the cloud interface appliance 402 may receive the I/O request(s) and may send corresponding cloud interface request(s) to the cloud object store 404 ... As indicated by block 912, the cloud interface appliance 402 may receive data object(s) responsive to the object interface requests)
Maybee does not expressly disclose a first request to write a first set of snapshot data associated with a previously generated snapshot of a virtual machine accessible by the computer system, wherein the first request is directed to a file of the user space file system, wherein the previously generated snapshot of the virtual machine captures a point-in-time version of the virtual machine.
Maybee further does not expressly disclose writing the first set of snapshot data to the file.
Maybee further does not expressly disclose, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.”
Maybee further does not expressly disclose determining that a stripe in the disk-based data storage is closed as a result of the stripe being full.
Maybee further does not expressly disclose, in response to determining that the stripe in the disk-based data storage is closed, appending the stripe containing the first set of snapshot data to the file in a cloud-based key-value object store that is separate from the computer system that hosts the user space file system and the application.
Maybee further does not expressly disclose a cloud-based key-value object store.
Maybee further does not expressly disclose reading, for recovery of the virtual machine, a second set of snapshot data from the file.
However, Macko addresses some of this by teaching the following:
writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, (Macko FIG. 1, ¶ 0043: if array 150 were configured in RAID-4, one of the three disks would be redundant and used for parity information, meaning that a data stripe 136 would consist of two data blocks 132. In some aspects, the SMR array subsystem 130 waits until a full data stripe 136 of data blocks 132 is present in a buffer 145 before writing it to array zone 134. The term ‘block’ is used herein for simplicity and can refer to logical blocks on the disks, a partial logical block of data, or any other data subdivision; Macko shows this can be snapshot data in ¶ 0016, "shingled drives can, for many operational environments, effectively replace tape for backup and archival data because of the largely-sequential, rather than random, nature of writes to backup and archival storage" and in FIG. 4, ¶ 0065, "In RAID-1, data blocks are written to Disk 1 and mirrored on the corresponding zone on Disk 2 for use as a redundant backup copy")
determining that a stripe in the disk-based data storage is closed as a result of the stripe being full; (Macko ¶ 0043: the SMR array subsystem 130 waits until a full data stripe 136 of data blocks 132 is present in a buffer 145 before writing it to array zone 134. The term ‘block’ is used herein for simplicity and can refer to logical blocks on the disks, a partial logical block of data, or any other data subdivision)
in response to determining that the stripe in the disk-based data storage is closed, appending the stripe containing the first set of snapshot data to … object store… (Macko ¶ 0019: a storage system can operate to organize or identify collections of disk zones on an array of shingled drives as an array zone and maintain a non-volatile buffer for each array zone that contains recently appended data until all of its blocks are written to the drives; ¶ 0043: the SMR array subsystem 130 waits until a full data stripe 136 of data blocks 132 is present in a buffer 145 before writing it to array zone 134. The term ‘block’ is used herein for simplicity and can refer to logical blocks on the disks, a partial logical block of data, or any other data subdivision; see Macko FIG. 1, ¶ 0037: Array zones 134 correspond to a set of disk zones 160 on the SMR drives, illustrated as Disk 1, Disk 2, and Disk 3, which comprise array 150 ... Shingled drives store the bulk of their data in disk zones 160, which are collections of adjacent tracks in which data can only be appended; see Macko FIG. 3, ¶ 0056-0058: the SMR array subsystem 130 can stripe and buffer the data 102 in a manner that minimizes the amount of non-volatile storage 140 required to ensure data integrity during the writing process (310) ... The SMR array subsystem 130 can then write each of the data blocks 132 that comprise the data stripe 136 and its corresponding redundant information to the corresponding disk zones on the array 150 (330))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data migration functionality of Maybee with the stripe-buffered writes of Macko.
In addition, both references (Maybee and Macko) are analogous art directed to the same field of endeavor, namely the management of data reads and writes.
Motivation to do so would be the teaching, suggestion, or motivation for a person of ordinary skill in the art to improve storage density (Macko ¶ 0003) and to implement improved performance, efficiency, and reliability of computer systems used for data storage as seen in Macko ¶ 0022.
Maybee in view of Macko does not expressly disclose:
a first request to write a first set of snapshot data associated with a previously generated snapshot of a virtual machine accessible by the computer system, wherein the first request is directed to a file of the user space file system, wherein the previously generated snapshot of the virtual machine captures a point-in-time version of the virtual machine.
Maybee in view of Macko further does not expressly disclose, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.”
Maybee in view of Macko further does not expressly disclose performing the appending to the file in a cloud-based key-value object store.
Maybee in view of Macko further does not expressly disclose reading, for recovery of the virtual machine, a second set of snapshot data from the file.
However, Chang addresses this by teaching the following:
receiving … a first request to write a first set of snapshot data associated with a previously generated snapshot of a virtual machine accessible by the computer system, wherein the first request is directed to a file of the user space file system, (Chang shows snapshots of virtual machines as claimed in ¶ 0118: perform a backup of a virtual machine from the data center of an organization; Chang shows receiving a first request as claimed in FIG. 9, ¶ 0120-0123; see first ¶ 0120: The fingerprint service 98 may receive a fingerprint query from the backup agent 84 (reference numeral 172); ¶ 0122: the backup service 80 may receive the L0, L0MD, and L1 object IDs (or the file ID for the corresponding virtual disk) from the backup agent 84 (reference numeral 178); ¶ 0123: The backup postprocessing may include updating the fingerprint database 100 with the fingerprints of the blocks captured in the backup. The backup service 80 may get the L1 from the object storage 90 using the L1 object ID (reference numeral 182); Chang shows this involving a previously generated snapshot as claimed in ¶ 0124-0126: The backup postprocessing may include merging the L1 data from a previous backup with the partially-populated L1 provided by the backup agent 84 to provide a complete L1 for the backup ... The backup service 80 may communicate with the catalog service 78 to obtain a backup ID for the most recent backup of the same virtual machine [shows snapshots of virtual machines as claimed] ... The backup service 80 may replace each invalid fingerprint in the partially-populated L1 for the current backup with the fingerprint from the corresponding offset in the previous L1, merging the fingerprints for the unchanged blocks to create the complete L1 for the current backup (reference numeral 186))
Chang teaches appending to the file. (Chang ¶ 0064: The contents of the virtual disk file 40A-40C may be the blocks of data stored on the virtual disk. Logically, the blocks may be stored in order from offset zero at the beginning of the virtual disk file to the last offset on the virtual disk at the end of the file. For example, if the virtual disk is 100 megabytes (MB), the virtual disk file is 100 MB in size with the byte at offset 0 logically located at the beginning of the file and the byte at offset 100 MB at the end of the file)
Chang further teaches a cloud-based key-value object store. (Chang ¶ 0109: The backup agent 84 may put the L0, L0MD, and L1 data in the object storage 90 of the public cloud 12 (reference numeral 138) ... the object ID may be referred to as a key in the AWS public cloud, and the object itself is the value associated with the key)
Chang further teaches:
receiving … a second request to read, for recovery of the virtual machine, a second set of snapshot data from the file; (Chang FIG. 10, ¶ 0129-0131: Once the spot or on-demand VM instance is started, the backup service 80 may establish a block storage for the VM instance that is large enough to accommodate the backed-up virtual disk (reference numeral 208). The backup service 80 may load code into the VM instance to perform the restore and verification process ... restoring the backup to the VM instance (and more particularly to the block storage established for the VM instance) (reference numeral 210); Chang shows the claimed recovery of a virtual machine in FIG. 11, ¶ 0134-0135: FIG. 11 illustrates the restore of a backup to a VM instance (reference numeral 210 in FIG. 11, and also reference numeral 210 in FIG. 14))
based on the receiving of the second request to read the second set of snapshot data from the file, fetching … the second set of snapshot data from the file in the cloud-based key-value object store. (Chang ¶ 0134-0136, see ¶ 0134: FIG. 11 illustrates the restore of a backup to a VM instance (reference numeral 210 in FIG. 11, and also reference numeral 210 in FIG. 14); ¶ 0136: The VM instance may read the backup data block from the offset within the L0 (reference numeral 240); see also Chang ¶ 0109 re: the claimed cloud-based key-value object store: The backup agent 84 may put the L0, L0MD, and L1 data in the object storage 90 of the public cloud 12 (reference numeral 138) ... the object ID may be referred to as a key in the AWS public cloud, and the object itself is the value associated with the key)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cloud-based functionality of Maybee as modified with the cloud-based functionality of Chang.
In addition, both of the references (Maybee as modified and Chang) disclose features that are directed to analogous art, and they are directed to the same field of endeavor, namely cloud storage management.
Motivation to do so would be to improve the functioning of Maybee as modified, which allows for switching among default local, cached, or cloud storage of files, with the functioning of the similar reference Chang, which also provides remote cloud object storage but adds the ability to rely on a public cloud for its cloud-based data protection service.
Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to implement improved backup efficiency (Chang ¶ 0124) and improved backup performance and reduced backup cost (Chang ¶ 0160).
Maybee in view of Macko and Chang does not expressly disclose wherein the previously generated snapshot of the virtual machine captures a point-in-time version of the virtual machine.
Maybee in view of Macko and Chang further does not expressly disclose, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.”
However, Dornemann teaches wherein the previously generated snapshot of the virtual machine captures a point-in-time version of the virtual machine. (Dornemann ¶ 0003: A company might back up critical computing systems such as databases, file servers, web servers, virtual machines, and so on as part of a daily, weekly, or monthly maintenance schedule; Dornemann ¶ 0075: A secondary copy 116 can comprise a separate stored copy of data that is derived from one or more earlier-created stored copies (e.g., derived from primary data 112 or from another secondary copy 116). Secondary copies 116 can include point-in-time data, and may be intended for relatively long-term retention before some or all of the data is moved to other storage or discarded ... a disk array capable of performing hardware snapshots stores primary data 112 and creates and stores hardware snapshots of the primary data 112 as secondary copies 116)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cloud data migration of Maybee as modified with the cloud operations of Dornemann.
In addition, both of the references (Maybee as modified and Dornemann) disclose features that are directed to analogous art, and they are directed to the same field of endeavor, namely cloud-based snapshot management.
Motivation to do so would be the teaching, suggestion, or motivation for a person of ordinary skill in the art to greatly improve the speed of performed information management operations and to improve the capacity of the system to handle large numbers of such operations, while reducing the computational load on the production environment of client computing devices as seen in Dornemann ¶ 0080.
Maybee in view of Macko and Chang and Dornemann further does not expressly disclose, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.”
However, Saito addresses this by teaching the following:
Saito teaches, based “on a write pattern of the first set of snapshot data,” “writing, by the user space file system, the first set of snapshot data to a disk-based data storage on the computer system, wherein the first set of snapshot data is written to the disk-based data storage when the write pattern is in-place.” (Saito ¶ 0275: the In-Place Update method is applied when the data is written to the first memory area 32a … the In-Place Update method is a method of writing (overwriting) the data to a physical address [relevant to disk-based data storage] associated with a logical address designated in the write command at the time when the data based on the write command are written to the first memory area 32a as described above; see also Saito ¶ 0499: When the write command is received from the CPU10 (host), the SCM controller 31 determines the write method of writing the data. When the In-Place Update method (first method) is determined as the write method, the SCM controller 31 writes the data to the first memory area 32a in the In-Place Update method; see further support in Saito ¶ 0004, "Such a memory system can be used as a main memory or a storage device since the memory system has middle characteristics between a main memory such as a dynamic random access memory (DRAM) and a storage device such as a solid state drive (SSD)" and Saito ¶ 0058: "The non-volatile memory includes first and second memory areas")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data migration of Maybee as modified with the data migration of Saito.
In addition, both of the references (Maybee as modified and Saito) disclose features that are directed to analogous art, and they are directed to the same field of endeavor, namely management of data reads and writes.
Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to improve write performance as seen in Saito (¶ 0277, ¶ 0374, ¶ 0500-0502).
Regarding claims 3, 11, and 19, Maybee in view of Macko and Chang and Dornemann and Saito teaches all the features with respect to claims 1, 9, and 17 above, respectively.
Maybee teaches: writing, by the user space file system, the first set of … data … to … the cloud-based … object store based at least in part on a time period after the writing of the first set of … data to the disk-based data storage satisfying a threshold amount of time and being devoid of a request to write data to the file. (Maybee FIG. 13, ¶ 0154-0157, see ¶ 0155: The ARC 222-5 may define a recency attribute for the one or more new objects in order to track recency of access of the one or more objects. The recency attribute may correspond to a time parameter that indicates a last access time corresponding to one or more objects (e.g., by absolute time, system time, time differential, etc.); see ¶ 0156: the transition criteria may include one or more recency thresholds defined in order for objects to qualify for transition from current stages ... the ARC 222-5 may determine if the one or more objects should be transitioned to LFU or LRU stages (or eviction) based at least in part on the value of the recency attribute assigned to the one or more objects; see ¶ 0157: the adjustment of cache staging may include updating one or more frequency attributes. With the particular I/O operation, the ARC 222-5 may increment a frequency attribute defined for the one or more objects in order to track the frequency of access of the one or more objects. The frequency attribute may indicate numbers of accesses over any suitable time period, which could be an absolute time period, an activity-based time period (e.g., a user session, or time since a last amount of access activity that meets a minimum activity threshold), and/or the like [these passages show a threshold amount of time devoid of user object access (which would include writes) as required by the claims])
Macko teaches appending the stripe containing the first set of snapshot data to … object store… (Macko ¶ 0019: a storage system can operate to organize or identify collections of disk zones on an array of shingled drives as an array zone and maintain a non-volatile buffer for each array zone that contains recently appended data until all of its blocks are written to the drives; ¶ 0043: the SMR array subsystem 130 waits until a full data stripe 136 of data blocks 132 is present in a buffer 145 before writing it to array zone 134. The term ‘block’ is used herein for simplicity and can refer to logical blocks on the disks, a partial logical block of data, or any other data subdivision; see Macko FIG. 1, ¶ 0037: Array zones 134 correspond to a set of disk zones 160 on the SMR drives, illustrated as Disk 1, Disk 2, and Disk 3, which comprise array 150 ... Shingled drives store the bulk of their data in disk zones 160, which are collections of adjacent tracks in which data can only be appended; see Macko FIG. 3, ¶ 0056-0058: the SMR array subsystem 130 can stripe and buffer the data 102 in a manner that minimizes the amount of non-volatile storage 140 required to ensure data integrity during the writing process (310) ... The SMR array subsystem 130 can then write each of the data blocks 132 that comprise the data stripe 136 and its corresponding redundant information to the corresponding disk zones on the array 150 (330))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data migration of Maybee with the data streams of Macko.
Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to improve storage density (Macko ¶ 0003) and implement improved performance, efficiency, and reliability of computer systems used for data storage as seen in Macko ¶ 0022.
Chang teaches: wherein the appending … to the file… (Chang ¶ 0064: The contents of the virtual disk file 40A-40C may be the blocks of data stored on the virtual disk. Logically, the blocks may be stored in order from offset zero at the beginning of the virtual disk file to the last offset on the virtual disk at the end of the file. For example, if the virtual disk is 100 megabytes (MB), the virtual disk file is 100 MB in size with the byte at offset 0 logically located at the beginning of the file and the byte at offset 100 MB at the end of the file)
…in the cloud-based key-value object store comprises: writing … the first set of snapshot data … to the file in the cloud-based … object store… [and thus also teaches “the writing of the first set of snapshot data”] (Chang FIG. 9, ¶ 0120-0123; see first ¶ 0120: The fingerprint service 98 may receive a fingerprint query from the backup agent 84 (reference numeral 172); ¶ 0122: the backup service 80 may receive the L0, L0MD, and L1 object IDs (or the file ID for the corresponding virtual disk) from the backup agent 84 (reference numeral 178); ¶ 0123: The backup postprocessing may include updating the fingerprint database 100 with the fingerprints of the blocks captured in the backup. The backup service 80 may get the L1 from the object storage 90 using the L1 object ID (reference numeral 182); Chang ¶ 0124-0126: The backup postprocessing may include merging the L1 data from a previous backup with the partially-populated L1 provided by the backup agent 84 to provide a complete L1 for the backup ... The backup service 80 may communicate with the catalog service 78 to obtain a backup ID for the most recent backup of the same virtual machine ... The backup service 80 may replace each invalid fingerprint in the partially-populated L1 for the current backup with the fingerprint from the corresponding offset in the previous L1, merging the fingerprints for the unchanged blocks to create the complete L1 for the current backup (reference numeral 186))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cloud-based functionality of Maybee as modified with the cloud-based functionality of Chang.
Motivation to do so would be to improve the functioning of Maybee as modified, which allows for switching among default local, cached, or cloud storage of files, with the functioning of the similar reference Chang, which also provides remote cloud object storage but adds the ability to rely on a public cloud for its cloud-based data protection service.
Regarding claims 5 and 13, Maybee in view of Macko and Chang and Dornemann and Saito teaches all the features with respect to claims 1 and 9 above, respectively, including:
wherein the disk-based data storage comprises a solid-state disk. (Macko ¶ 0032: Other examples of computer storage media include portable storage units, such as CD or DVD units, flash or solid state memory (such as carried on many cell phones and consumer electronic devices) and magnetic memory; Macko ¶ 0036: host 105 receives data, such as from network 101, and an I/O controller 110 on the host 105 determines that the data 102 should be written to SMR drives. Host 105 can contain many other possible subsystems connected to I/O controller 110 that are not illustrated, such as regular hard disk drives, solid state drives, optical media drives, etc.)
Regarding claims 6 and 14, Maybee in view of Macko and Chang and Dornemann and Saito teaches all the features with respect to claims 1 and 9 above, respectively, including:
further comprising: storing, by the user space file system, the fetched second set of … data in a cache of the disk-based data storage; (Maybee ¶ 0088: file system 200-1 may cause storage of the data objects and the corresponding metadata of the logical tree 300 in the cloud object store 404 [this passage shows the claimed 'user space file system']; some embodiments may create at least part of the logical tree 300 in cache and then migrate it to the cloud object store 404; see also relevant ¶ 0144: the cloud storage appliance 402 may coordinate the caching and servicing of read and write requests; Maybee FIG. 4 shows disk-based data storage through element 416 and through ¶ 0053: the system 200 may interact with an application 202 through an operating system. The operating system may include functionality to interact with a file system, which in turn interfaces with a storage pool)
receiving, by the user space file system, a third request to read the second set of … data from the file; and based on the receiving of the third request to read the second set of … data from the file, reading, by the user space file system, the second set of … data from the cache of the disk-based data storage. (Maybee ¶ 0210: In block 1912, the I/O pipeline 224 may initiate reading of one or more data objects from the back-end object store determined in block 1910 (e.g., 404a, 404b, 416, etc.). In some embodiments, the ARC 222 may be checked first for a cached version of the one or more data objects; see also relevant ¶ 0144: the cloud storage appliance 402 may coordinate the caching and servicing of read and write requests; Maybee FIG. 4 shows disk-based data storage through element 416 and through ¶ 0053: the system 200 may interact with an application 202 through an operating system. The operating system may include functionality to interact with a file system, which in turn interfaces with a storage pool)
Chang teaches the second set of data being the second set of snapshot data. (Chang shows snapshots as claimed in ¶ 0118: perform a backup of a virtual machine from the data center of an organization; Chang FIG. 9, ¶ 0120-0123; see first ¶ 0122: the backup service 80 may receive the L0, L0MD, and L1 object IDs (or the file ID for the corresponding virtual disk) from the backup agent 84 (reference numeral 178); ¶ 0123: The backup service 80 may get the L1 from the object storage 90 using the L1 object ID (reference numeral 182); Chang ¶ 0124-0126: The backup service 80 may communicate with the catalog service 78 to obtain a backup ID for the most recent backup of the same virtual machine)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cloud-based functionality of Maybee as modified with the cloud-based functionality of Chang.
Motivation to do so would be to improve the functioning of Maybee as modified, which allows for switching among default local, cached, or cloud storage of files, with the functioning of the similar reference Chang, which also provides remote cloud object storage but adds the ability to rely on a public cloud for its cloud-based data protection service.
Regarding claims 7 and 15, Maybee in view of Macko and Chang and Dornemann and Saito teaches all the features with respect to claims 1 and 9 above, respectively.
Maybee teaches:
writing, by the user space file system, … data to … the cloud-based key-value object store in parallel, the … data included in the first set of snapshot data. (Maybee FIG. 19, ¶ 0206: In block 1902, the file system 1600 (or 1700, etc.) may receive a POSIX-compliant request(s) to perform particular operation(s), for example, retrieving file data, writing and/or modifying data, etc. The request in block 1902 may be received from an application, via the application layer or interface layer of the file system 1600; ¶ 0213: Although only two instance of the write operation are shown in this embodiment, the tree hierarchy of file system data may be synchronously mirrored across additional data object stores (e.g., multiple cloud object stores 404), and thus additional instance of the write operation may be initiated in other embodiments)
Macko teaches appending the stripe containing the first set of snapshot data to … object store… (Macko ¶ 0019: a storage system can operate to organize or identify collections of disk zones on an array of shingled drives as an array zone and maintain a non-volatile buffer for each array zone that contains recently appended data until all of its blocks are written to the drives; ¶ 0043: the SMR array subsystem 130 waits until a full data stripe 136 of data blocks 132 is present in a buffer 145 before writing it to array zone 134. The term ‘block’ is used herein for simplicity and can refer to logical blocks on the disks, a partial logical block of data, or any other data subdivision; see Macko FIG. 1, ¶ 0037: Array zones 134 correspond to a set of disk zones 160 on the SMR drives, illustrated as Disk 1, Disk 2, and Disk 3, which comprise array 150 ... Shingled drives store the bulk of their data in disk zones 160, which are collections of adjacent tracks in which data can only be appended; see Macko FIG. 3, ¶ 0056-0058: the SMR array subsystem 130 can stripe and buffer the data 102 in a manner that minimizes the amount of non-volatile storage 140 required to ensure data integrity during the writing process (310) ... The SMR array subsystem 130 can then write each of the data blocks 132 that comprise the data stripe 136 and its corresponding redundant information to the corresponding disk zones on the array 150 (330))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data migration of Maybee with the data streams of Macko.
Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to improve storage density (Macko ¶ 0003) and implement improved performance, efficiency, and reliability of computer systems used for data storage as seen in Macko ¶ 0022.
Chang teaches: wherein the appending … to the file… (Chang ¶ 0064: The contents of the virtual disk file 40A-40C may be the blocks of data stored on the virtual disk. Logically, the blocks may be stored in order from offset zero at the beginning of the virtual disk file to the last offset on the virtual disk at the end of the file. For example, if the virtual disk is 100 megabytes (MB), the virtual disk file is 100 MB in size with the byte at offset 0 logically located at the beginning of the file and the byte at offset 100 MB at the end of the file)
…in the cloud-based key-value object store comprises: writing … data to the file in the cloud-based key-value object store in parallel, (Chang FIG. 9, ¶ 0120-0123; see first ¶ 0120: The fingerprint service 98 may receive a fingerprint query from the backup agent 84 (reference numeral 172); ¶ 0122: the backup service 80 may receive the L0, L0MD, and L1 object IDs (or the file ID for the corresponding virtual disk) from the backup agent 84 (reference numeral 178); ¶ 0123: The backup postprocessing may include updating the fingerprint database 100 with the fingerprints of the blocks captured in the backup. The backup service 80 may get the L1 from the object storage 90 using the L1 object ID (reference numeral 182); Chang ¶ 0109: The backup agent 84 may put the L0, L0MD, and L1 data in the object storage 90 of the public cloud 12 (reference numeral 138) ... the object ID may be referred to as a key in the AWS public cloud, and the object itself is the value associated with the key; Chang shows parallel writing in ¶ 0111: the backup agent 84 may comprise multiple processes operating in parallel to perform the various operations illustrated in FIG. 7. Thus, for example, blocks may be compressed and encrypted in parallel with assembling the previously encrypted data blocks into an L0)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cloud-based functionality of Maybee as modified with the cloud-based functionality of Chang.
Motivation to do so would be to improve the functioning of Maybee as modified, which allows for switching among default local, cached, or cloud storage of files, with the functioning of the similar reference Chang, which also provides remote cloud object storage but adds the ability to rely on a public cloud for its cloud-based data protection service.
Dornemann teaches writing, by the user space file system, chunks of data to … the cloud-based … store. (Dornemann ¶ 0060: storage devices are provided in a cloud storage environment (e.g., a private cloud or one operated by a third-party vendor), whether for primary data or secondary copies or both; Dornemann ¶ 0249: secondary copies 116 are formatted as a series of logical data units or “chunks” (e.g., 512 MB, 1 GB, 2 GB, 4 GB, or 8 GB chunks). This can facilitate efficient communication and writing to secondary storage devices 108, e.g., according to resource availability. For example, a single secondary copy 116 may be written on a chunk-by-chunk basis to one or more secondary storage devices 108 ... during a secondary copy operation, media agent 144, storage manager 140, or other component may divide files into chunks and generate headers for each chunk by processing the files.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cloud data migration of Maybee as modified with the cloud operations of Dornemann.
Motivation to do so would be to improve the functioning of Maybee as modified, which involves writing to cloud stores, with the functioning of the similar reference Dornemann, which also writes to cloud stores but adds the improvement of efficient communication and writing to storage devices (Dornemann ¶ 0249).
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Maybee in view of Macko and Chang and Dornemann and Saito in further view of Wang et al., U.S. Patent Application Publication No. 2017/0262345 (hereinafter Wang).
Regarding claim 21, Maybee in view of Macko and Chang and Dornemann and Saito teaches all the features with respect to claim 1 above, including:
writing, by the user space file system, … data to the file in the cloud-based key-value object store, (Chang FIG. 9, ¶ 0120-0123; see first ¶ 0120: The fingerprint service 98 may receive a fingerprint query from the backup agent 84 (reference numeral 172); ¶ 0122: the backup service 80 may receive the L0, L0MD, and L1 object IDs (or the file ID for the corresponding virtual disk) from the backup agent 84 (reference numeral 178); ¶ 0123: The backup postprocessing may include updating the fingerprint database 100 with the fingerprints of the blocks captured in the backup. The backup service 80 may get the L1 from the object storage 90 using the L1 object ID (reference numeral 182); Chang ¶ 0109: The backup agent 84 may put the L0, L0MD, and L1 data in the object storage 90 of the public cloud 12 (reference numeral 138) ... the object ID may be referred to as a key in the AWS public cloud, and the object itself is the value associated with the key)
Macko teaches the appending the stripe containing the first set of snapshot data to … object store… (Macko ¶ 0019: a storage system can operate to organize or identify collections of disk zones on an array of shingled drives as an array zone and maintain a non-volatile buffer for each array zone that contains recently appended data until all of its blocks are written to the drives; ¶ 0043: the SMR array subsystem 130 waits until a full data stripe 136 of data blocks 132 is present in a buffer 145 before writing it to array zone 134. The term ‘block’ is used herein for simplicity and can refer to logical blocks on the disks, a partial logical block of data, or any other data subdivision; see Macko FIG. 1, ¶ 0037: Array zones 134 correspond to a set of disk zones 160 on the SMR drives, illustrated as Disk 1, Disk 2, and Disk 3, which comprise array 150 ... Shingled drives store the bulk of their data in disk zones 160, which are collections of adjacent tracks in which data can only be appended; see Macko FIG. 3, ¶ 0056-0058: the SMR array subsystem 130 can stripe and buffer the data 102 in a manner that minimizes the amount of non-volatile storage 140 required to ensure data integrity during the writing process (310) ... The SMR array subsystem 130 can then write each of the data blocks 132 that comprise the data stripe 136 and its corresponding redundant information to the corresponding disk zones on the array 150 (330))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the data migration of Maybee with the data streams of Macko.
Motivation to do so would also be the teaching, suggestion, or motivation for a person of ordinary skill in the art to improve storage density (Macko ¶ 0003) and implement improved performance, efficiency, and reliability of computer systems used for data storage as seen in Macko ¶ 0022.
Chang teaches: wherein the appending … to the file… (Chang ¶ 0064: The contents of the virtual disk file 40A-40C may be the blocks of data stored on the virtual disk. Logically, the blocks may be stored in order from offset zero at the beginning of the virtual disk file to the last offset on the virtual disk at the end of the file. For example, if the virtual disk is 100 megabytes (MB), the virtual disk file is 100 MB in size with the byte at offset 0 logically located at the beginning of the file and the byte at offset 100 MB at the end of the file)
…in the cloud-based key-value object store comprises: writing … data to the file in the cloud-based key-value object store, (Chang FIG. 9, ¶ 0120-0123; see first ¶ 0120: The fingerprint service 98 may receive a fingerprint query from the backup agent 84 (reference numeral 172); ¶ 0122: the backup service 80 may receive the L0, L0MD, and L1 object IDs (or the file ID for the corresponding virtual disk) from the backup agent 84 (reference numeral 178); ¶ 0123: The backup postprocessing may include updating the fingerprint database 100 with the fingerprints of the blocks captured in the backup. The backup service 80 may get the L1 from the object storage 90 using the L1 object ID (reference numeral 182); Chang ¶ 0109: The backup agent 84 may put the L0, L0MD, and L1 data in the object storage 90 of the public cloud 12 (reference numeral 138) ... the object ID may be referred to as a key in the AWS public cloud, and the object itself is the value associated with the key; Chang shows parallel writing in ¶ 0111: the backup agent 84 may comprise multiple processes operating in parallel to perform the various operations illustrated in FIG. 7. Thus, for example, blocks may be compressed and encrypted in parallel with assembling the previously encrypted data blocks into an L0)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cloud-based functionality of Maybee as modified with the cloud-based functionality of Chang.
Motivation to do so would be to improve the functioning of Maybee as modified, which allows for switching among default local, cached, or cloud storage of files, with the functioning of the similar reference Chang, which also provides remote cloud object storage but adds the ability to rely on a public cloud for its cloud-based data protection service.
Dornemann teaches writing, by the user space file system, chunks of data to … the cloud-based … store. (Dornemann ¶ 0060: storage devices are provided in a cloud storage environment (e.g., a private cloud or one operated by a third-party vendor), whether for primary data or secondary copies or both; Dornemann ¶ 0249: secondary copies 116 are formatted as a series of logical data units or “chunks” (e.g., 512 MB, 1 GB, 2 GB, 4 GB, or 8 GB chunks). This can facilitate efficient communication and writing to secondary storage devices 108, e.g., according to resource availability. For example, a single secondary copy 116 may be written on a chunk-by-chunk basis to one or more secondary storage devices 108 ... during a secondary copy operation, media agent 144, storage manager 140, or other component may divide files into chunks and generate headers for each chunk by processing the files.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the cloud data migration of Maybee as modified with the cloud operations of Dornemann.
Motivation to do so would be to improve the functioning of Maybee as modified, which involves writing to cloud stores, with the functioning of the similar reference Dornemann, which also writes to cloud stores but adds the improvement of efficient communication and writing to storage devices (Dornemann ¶ 0249).
Maybee in view of Macko and Chang and Dornemann and Saito further does not expressly disclose:
wherein each chunk is referenced by a respective key in the cloud-based key-value object store.
However, Wang addresses this by teaching writing … chunks of data to the file in the cloud-based key-value object store, wherein each chunk is referenced by a respective key in the cloud-based key-value object store. (Wang FIG. 4, ¶ 0029: FIG. 4 describes the key-value data structures of an SG component ... each file is separated into contiguous data chunks ... The keys are ordered according to the offset of data chunks. The first key is associated to the first data chunk, etc., and the last key for the last data chunk; see also FIGs. 1-2, ¶ 0026-0027 describing the object stores being "cloud-based" as claimed: distributed storage over multiple clouds including on-premises, replicated and public clouds ... The on-premises private cloud (2) is the primary data-center/office site for an enterprise while the replicated private cloud (3) is typically located at a remote data-center/office site geographically apart from the primary on-premises site)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the functioning of the cloud-based functionality of Maybee as modified with the cloud-based functionality of Wang.
In addition, both references (Maybee as modified and Wang) constitute analogous art and are directed to the same field of endeavor, namely cloud storage management.
Motivation to do so would be to improve the private- and public-cloud functionality of Maybee as modified by Chang with that of similar reference Wang, which likewise involves private and public clouds but adds the ability to reduce computational cost (Wang ¶ 0029).
Claims 22 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Maybee in view of Macko and Chang and Dornemann and Saito in further view of Farmahini Farahani et al., U.S. Patent Application Publication No. 2018/0165214 (hereinafter Farmahini Farahani).
Regarding claim 22, Maybee in view of Macko and Chang and Dornemann and Saito teaches all the features with respect to claim 1 above including data from the cloud-based key-value object store. (Chang ¶ 0134-0136, see ¶ 0134: FIG. 11 illustrates the restore of a backup to a VM instance (reference numeral 210 in FIG. 11, and also reference numeral 210 in FIG. 14); see also Chang ¶ 0109 re: the claimed cloud-based key-value object store: The backup agent 84 may put the L0, L0MD, and L1 data in the object storage 90 of the public cloud 12 (reference numeral 138) ... the object ID may be referred to as a key in the AWS public cloud, and the object itself is the value associated with the key)
Maybee in view of Macko and Chang and Dornemann and Saito further addresses:
wherein at least a portion of the minimum predefined amount of data comprises the second set of snapshot data. (Dornemann ¶ 0280: Preferably, extents for backup are sized the same as the pages received from cloud-based account 304 [shows claimed 'minimum predefined amount of data'] in order to minimize data processing and reduce delays at proxy server 306. To reduce downloading delays, source pages are pre-fetched from cloud-based account 304 in anticipation of upcoming backup read requests)
Maybee in view of Macko and Chang and Dornemann and Saito does not expressly disclose:
prefetching, by the user space file system and prior to receiving the second request, at least a minimum predefined amount of data … into a read cache area of the disk-based data storage,
However, Farmahini Farahani addresses this by teaching:
prefetching, by the user space file system and prior to receiving the second request, at least a minimum predefined amount of data … into a read cache area of the disk-based data storage, (Farmahini Farahani ¶ 0042: The prefetcher is configured to prefetch cache lines [interpreted as addressing 'minimum predefined amount of data'] into the cache 730. The prefetcher identifies patterns (e.g., requests for a sequence of addresses) that can be used to predict the addresses of subsequent requests [prediction shows that it would occur prior to the claimed 'second request']; see Farmahini Farahani FIG. 1, ¶ 0016 relevant to the claimed 'disk-based data storage': the cache 130 could be composed of fast memory, such as dynamic random access memory (DRAM), and main memory 150 could be composed of slow memory, such as non-volatile memory; see also Farmahini Farahani ¶ 0045: The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the functioning of the data migration of Maybee as modified with the data fetching of Farmahini Farahani.
In addition, both references (Maybee as modified and Farmahini Farahani) constitute analogous art and are directed to the same field of endeavor, namely data migration management.
Motivation to do so would be to combine the prefetching of Maybee as modified by at least Dornemann with that of similar reference Farmahini Farahani, which likewise performs prefetching but adds the improvement of pattern identification and prediction (Farmahini Farahani ¶ 0042).
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Maybee in view of Macko and Chang and Dornemann and Saito in further view of Sosseh et al., U.S. Patent No. 11,275,684 (filed September 15, 2020; hereinafter Sosseh).
Regarding claim 24, Maybee in view of Macko and Chang and Dornemann and Saito teaches all the features with respect to claim 1 above including the cloud-based key-value object store. (Chang ¶ 0109: The backup agent 84 may put the L0, L0MD, and L1 data in the object storage 90 of the public cloud 12 (reference numeral 138) ... the object ID may be referred to as a key in the AWS public cloud, and the object itself is the value associated with the key)
Maybee in view of Macko and Chang and Dornemann and Saito also discloses evicting data. (Saito ¶ 0232: When the sufficient area does not exist, preparation of the sufficient area is awaited or the sufficient area is prepared by evicting and writing the data in the buffer to the SCM 32)
Maybee in view of Macko and Chang and Dornemann and Saito does not expressly disclose:
evicting data from the disk-based data storage to the … object store in response to the disk-based data storage reaching a threshold data capacity, wherein the evicted data devoid of a write request for a threshold amount of time.
However, Sosseh addresses this by teaching:
evicting data from the disk-based data storage to the … object store in response to the disk-based data storage reaching a threshold data capacity, (Sosseh col. 4, lines 6-20: When the VC 106 reaches a capacity threshold, data may need to be removed (e.g. flushed or evicted from the VC 106 to another memory) to make space for data from new reads or writes)
wherein the evicted data devoid of a write request for a threshold amount of time. (Sosseh col. 16, lines 33-55: The copy operation may be implemented to clear space in the volatile cache memory by deleting or making data available to overwrite (e.g. an eviction or flush operation), to commit some data to nonvolatile storage, or to copy data to a MRC so that the data can be removed from the volatile cache at a later point without implementing a disc write. In some embodiments, data may be copied from the volatile cache while the DSD is idle (e.g. there are no pending commands, or the number of pending commands are below a selected threshold) ... in some embodiments the threshold may be different for idle periods (e.g. a low threshold) versus high workload periods (e.g. a higher threshold))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the functioning of the data migration of Maybee as modified with the data eviction of Sosseh.
In addition, both references (Maybee as modified and Sosseh) constitute analogous art and are directed to the same field of endeavor, namely data migration management.
Motivation to do so would be to combine the data migration of Maybee as modified with that of similar reference Sosseh, which likewise performs data movement but adds the improvement of threshold-triggered data movement (Sosseh col. 4, lines 6-20).
Allowable Subject Matter
Amended claim 23, formerly rejected under 35 U.S.C. 103 as being unpatentable over Maybee in view of Macko and Chang and Dornemann in further view of Farmahini Farahani, is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Condit et al., U.S. Patent Application Publication No. 2010/0106895, "Hardware And Operating System Support For Persistent Memory On A Memory Bus"; see Condit FIG. 10, ¶ 0064-0067 and ¶ 0076-0077 describing "in-place updates" and "in-place appends," further describing determining whether the write is limited to appending (¶ 0076 describing step 1012 to step 1014) or not (¶ 0077 describing step 1012 to step 1018), relevant to at least the limitations of independent claims 1, 9, and 17 and dependent claim 23 involving a write pattern being in-place or the write pattern being append-only.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEDIDIAH P FERRER whose telephone number is (571)270-7695. The examiner can normally be reached Monday-Friday 12:00pm-8:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kavita Stanley can be reached at (571)272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.P.F/Examiner, Art Unit 2153 January 9, 2026
/KRIS E MACKES/Primary Examiner, Art Unit 2153