DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 31-51 are pending.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nicklin et al. (US 20130204893) in view of Roth (US 20170242725) and Dam et al. (US 20120310892).
Regarding claim 31, Nicklin teaches a system in a network, comprising:
a first computing device operable to store data ([0068]-[0069], F5A:12, 14); a second computing device operable to maintain metadata for the data ([0039] “manages metadata in data storage 18”, [0051]); and a virtual file system ([0041]) comprising:
a plurality of distributed processors ([0020] “file systems distributed across several independent, network storage devices”, [0039], [0077]), and
a plurality of resiliency nodes (F1:DS1, DS2), wherein:
each distributed processor of the plurality of distributed processors is operable to manage metadata associated with particular data ([0039], [0042], [0048], [0050]-[0051]), each resiliency node is operable to store resiliency information, the resiliency information is generated by the plurality of distributed processors ([0042], [0044] “unique virtual snapshot configuration record is copied to each data storage systems 16(1) and 16(2)”, [0046], [0068], [0073]),
in the event that the particular data is determined to be corrupt (see NOTE below), the resiliency information on one or more resiliency nodes of the plurality of resiliency nodes is used to recover the particular data ([0048], [0069]), and
each of the plurality of distributed processors is operable to determine, according to data-specific metadata ([0072]-[0073] “searches virtualization metadata (cached or persistent) to map the request to a captured physical snapshot and the other method searches the captured physical snapshot for each of the storage systems for the target of the request”, [0039] “tracks the location of files and directories that are distributed across data storage systems 16(1) and 16(2)”, [0042] “metadata storage system that allows an operator or program to locate components, e.g. a file of a virtual snapshot”, [0043] “stored metadata … can locate a particular file or directory. … use that metadata and the snapshot configuration record to locate a file”, [0048] “Using the stored metadata … translates the request … suitable for execution on data storage system 16(1) … in which the file is actually located”) or hash-based logic, the one or more resiliency nodes of the plurality of resiliency nodes that store the resiliency information used to recover the particular data ([0072]-[0073], [0075]).
NOTE: Nicklin does not explicitly teach the event that the particular data is determined to be corrupt. Instead, Nicklin teaches a computer system requesting a restore. However, it is reasonable to conclude that when a system needs a restore, the data on the system is either inaccessible or damaged; in either case, such data can reasonably be considered corrupt. Nevertheless, to fully address this limitation, Roth teaches, in the event that the particular data is determined to be corrupt ([0018] “predetermined events may be … detection of the software or hardware error of the virtual machine”, [0051], [0076] “upon detection of an intrusion/security compromise of the virtual machine, detection of the software or hardware error … upon the occurrence of certain events, such as upon the occurrence of certain execution errors, upon the detection that the virtual machine is in an unauthorized configuration (e.g., unauthorized software is detected being installed on the virtual machine)”), the resiliency information on one or more resiliency nodes of the plurality of resiliency nodes is used to recover the particular data ([0037]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nicklin to include an event in which the particular data is determined to be corrupt, as disclosed by Roth. Doing so would help eliminate evidence of attacks and sources of other issues that may have arisen with the system state (Roth [0003]).
Nicklin does not explicitly teach hash-based logic; however, Dam discloses hash-based logic ([0043], [0091], [0094]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nicklin to include hash-based logic as disclosed by Dam. Doing so would improve storage utilization and storage-management functions such as file mirroring and the detection/compaction of files with the same contents (Dam [0031]).
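NOTE: purely as an illustration of what “hash-based logic” for locating resiliency information may look like (this sketch is not taken from Dam or any other cited reference; the node list, file contents, and function names are hypothetical), a checksum can deterministically map particular data to a resiliency node:

```python
import hashlib

# Hypothetical identifiers for resiliency nodes (illustrative only).
NODES = ["DS1", "DS2", "DS3"]

def md5_checksum(data: bytes) -> str:
    """Compute an MD5 checksum of file contents (cf. Dam's checksum-per-file)."""
    return hashlib.md5(data).hexdigest()

def select_node(checksum: str, nodes=NODES) -> str:
    """Deterministically map a checksum to one node (hash-based logic)."""
    return nodes[int(checksum, 16) % len(nodes)]

data = b"example file contents"
checksum = md5_checksum(data)
node = select_node(checksum)
print(checksum, node)
```

Because the mapping depends only on the checksum, any distributed processor can independently compute the same node for the same data.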
Dependent claims are rejected in further view of Bobbitt et al. (US 2003/0115218), Bone et al. (US 2004/0098415), Bergsten et al. (US 2015/0248366), Roth (US 20170242725), and Goss et al. (US 2014/0281280), as indicated in the primary rejection above.
Claim(s) 31-51 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sen et al. (US 20070220320) in view of Bobbitt et al. (US 20030115218) or Bone et al. (US 20040098415).
Regarding claim 31, Sen teaches a system in a network, comprising:
a first computing device operable to store data ([0022] “servers hold data generated by the relevant client computer system”); a second computing device (F1A:105, 110, 115, [0065]-[0066]) operable to maintain metadata for the data ([0024], [0027], [0064] “settings include … any other instructions or metadata needed to cause each storage node to implement a protection intent”); and a virtual file system (addressed below in view of Bobbitt and Bone) comprising:
a plurality of distributed processors ([0020] “storage nodes can be located geographically close together, or co-located on the same machine”, [0021] “backup system 100 as it may be distributed between two locations”), and
a plurality of resiliency nodes, wherein:
each distributed processor of the plurality of distributed processors is operable to manage metadata associated with particular data ([0066], [0068]), each resiliency node is operable to store resiliency information, the resiliency information is generated by the plurality of distributed processors ([0020] “each storage node in the backup system can be configured to receive and store application-consistent backups”, [0039] “backups stored by each storage node”),
in the event that the particular data is determined to be corrupt ([0039] “restored in the event of an ‘entire site disaster’”), the resiliency information on one or more resiliency nodes of the plurality of resiliency nodes is used to recover the particular data ([0040] “allow each given production server to be restored”, “production server that needs to recover its data can simply contact remote storage node”, [0041], [0068]), and
each of the plurality of distributed processors is operable to determine, according to data-specific metadata ([0032], [0061] “identifies such criteria as write rate, network and geographical positioning”, [0064] “based on any number of factors, such as data redundancy requirements”; “determines an appropriate backup policy for each of the production servers based on … write and read rates, available storage … or metadata needed to cause each storage node to implement a protection intent”, [0068] “Each protection intent may be specifically tailored”)(see NOTE) or hash-based logic, the one or more resiliency nodes of the plurality of resiliency nodes that store the resiliency information used to recover the particular data ([0043] “determination module, wherein DPM server identifies, for example, what storage nodes should be servicing what production servers”, [0045] “select storage nodes to be used in the backup process”, [0049]-[0054]).
Sen does not explicitly teach a virtual file system; however, Bobbitt and Bone disclose a virtual file system (Bobbitt [0008], [0038], Bone [0028]) comprising: a plurality of distributed processors (Bobbitt [0006], Bone [0016]).
NOTE: Bone also discloses determining, according to data-specific metadata, … the one or more resiliency nodes of the plurality of resiliency nodes ([0087] “gather a predefined set of filesystem metadata. The filesystem metadata can include any filesystem metadata associated with the data”, [0098]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sen to include a virtual file system as disclosed by Bobbitt and Bone. Doing so enables file systems to be virtualized (Bobbitt [0008]) and enables storage asset virtualization, in which storage appears to be a single resource from a single server from a client's perspective (Bone [0012]).
Regarding claim 32, Sen as modified teaches the system of claim 31, wherein:
the plurality of resiliency nodes is connected to a plurality of electronically addressed nonvolatile storage devices (Sen [0024], [0049], [0054], Bone [0065], [0104]-[0105], where tape devices are nonvolatile),
the virtual file system comprises one or more instances of a virtual file system front end (Bobbitt Fig.3:11, where client 43 is a front end, [0041], Bone [0111], [0122], see F1, 12 where client devices are front end and virtual file system is back end, which are divided by a middleware), one or more instances of a virtual file system back end (Bobbitt [0043]-[0045]),
a first instance of a virtual file system memory controller (Bobbitt [0040], [0082], [0109], [0110]) configured to control accesses to a first of the plurality of electronically addressed nonvolatile storage devices (Sen [0024], [0049], [0054], Bone [0065], [0104]-[0105] where tape devices are nonvolatile), and a second instance of a virtual file system memory controller (Bobbitt [0042]-[0043], [0110]) configured to control accesses to a second of the plurality of electronically addressed nonvolatile storage devices (Bone [0104]-[0105], [0114], [0126]).
Sen as modified does not explicitly teach this limitation; however, Bone discloses the plurality of resiliency nodes comprises a plurality of electronically addressed nonvolatile storage devices (F1:106a, [0003], [0065], [0104]-[0105], where tape devices are nonvolatile).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sen as modified to include nonvolatile storage devices as disclosed by Bone. Doing so would provide a low-level interface in which individual storage subunits may be individually addressed (Bone [0003]).
Regarding claim 33, Sen as modified teaches the system of claim 32, wherein:
the plurality of distributed processors comprises a plurality of computing devices (Sen [0018], Bobbitt [0038]), each instance of the virtual file system front end is configured to:
receive a file system call from a file system driver residing on the plurality of computing devices (Bobbitt [0082], [0084], Bone [0111], [0112]);
determine which of the one or more instances of the virtual file system back end is responsible for servicing the file system call (Sen [0043], [0049]-[0054], Bobbitt [0082], [0045], Bone [0076], [0086], [0112] see “in response to the original request including determining which server should receive a request and passing the request”, [0123]); and
send one or more file system calls to the determined one or more instances of the plurality of virtual file system back end (Sen [0043], [0045], [0049]-[0054], Bobbitt [0082], [0045]).
Regarding claim 34, Sen as modified teaches the system of claim 32, wherein each instance of the virtual file system back end is configured to:
receive a file system call from the one or more instances of the virtual file system front end (Bobbitt [0082], [0084], Bone [0111], [0122]); and
allocate memory of the plurality of electronically addressed nonvolatile storage devices to achieve a distribution of the data across the plurality of electronically addressed nonvolatile storage devices (Sen [0043], [0057], Bobbitt [0038], [0090], Bone [0016]).
Regarding claim 35, Sen as modified teaches the system of claim 32, wherein each instance of the virtual file system back end is configured to: receive a file system call from the one or more instances of the virtual file system front end (Bobbitt [0082], [0084], Bone [0111], [0122]); and update file system metadata for data affected by the servicing of the file system call (Bobbitt [0082]-[0084], Bone [0074], [0087], [0095], [0097]-[0098]).
Regarding claim 36, Sen as modified teaches the system of claim 32, wherein: the number of instances in the one or more instances of the virtual file system front end is dynamically adjustable based on demand on resources of the plurality of computing devices (Sen [0044], Bobbitt [0084], [0088]-[0090]); and the number of instances in the one or more instances of the virtual file system back end is dynamically adjustable based on demand on resources of the plurality of computing devices (Sen [0049], Bobbitt [0088]-[0090], [0084]).
Regarding claim 37, Sen as modified teaches the system of claim 32, wherein: the number of instances in the one or more instances of the virtual file system front end is dynamically adjustable independent of the number of instances in the one or more instances of the virtual file system back end (Sen [0044], Bobbitt [0088]-[0090], [0084]); and the number of instances in the one or more instances of the virtual file system back end is dynamically adjustable independent of the number of instances in the one or more instances of the virtual file system front end (Bone [0114], [0022] “each filesystem on each server is a separate entity, it is therefore necessary to perform each data collection independently on each server”; [0131] “two "volumes" or independent filesystem directory trees srv1 and srv2”, [0136] “filesystem paths independent from either clients or servers”).
Regarding claim 38, Sen as modified teaches the system of claim 32, wherein: a first one or more of the plurality of electronically addressed nonvolatile storage devices are used for a first tier of storage (Sen [0024], [0075], Bone [0008] “architecture can be characterized as a "two-tier" client-server system”, [0104]-[0105], [0115]); and a second one or more of the plurality of electronically addressed nonvolatile storage devices (Bobbitt [0032]) are used for a second tier of storage (Sen [0041], [0075]-[0076]).
Regarding claim 39, Sen as modified teaches the system of claim 38, wherein:
the first one or more of the plurality of electronically addressed nonvolatile storage devices (Bobbitt [0032]) are characterized by a first value of a latency metric (Sen [0031], [0034], [0041]-[0042], [0051], [0058]); and the second one or more of the plurality of electronically addressed nonvolatile storage devices (Bobbitt [0032]) are characterized by a second value of the latency metric (Sen [0033], [0040], [0054]).
Regarding claim 40, Sen as modified teaches the system of claim 38, wherein: the first one or more of the plurality of electronically addressed nonvolatile storage devices (Bobbitt [0032]) are characterized by a first value of an endurance metric (Sen [0030], [0034], [0046]); and
the second one or more of the plurality of electronically addressed nonvolatile storage devices (Bobbitt [0032]) are characterized by a second value of the endurance metric (Sen [0034]-[0035], [0041], [0047], [0055]).
Regarding claim 41, Sen as modified teaches the system of claim 40, wherein data written to the virtual file system is first stored to the first tier of storage and then migrated to the second tier of storage according to policies of the virtual file system (Sen [0034], [0038]-[0039], [0057], Bobbitt [0042], [0046], [0047]-[0048], [0090], [0097]-[0098]).
Regarding claim 42, Sen as modified teaches the system of claim 31, comprising one or more mechanically addressed nonvolatile storage device, wherein the data stored to the virtual file system is distributed across the plurality of electronically addressed nonvolatile storage devices and one or more mechanically addressed nonvolatile storage devices (Sen [0024], [0058], Bobbitt [0032], [0109]-[0110]).
Regarding claim 43, Sen as modified teaches the system of claim 31, comprising:
a first one or more other nonvolatile storage devices residing on the local area network (Sen [0021], [0024], Bobbitt [0031], Fig.10, note that elements 24 are hard disks, which are nonvolatile [0032], [0110]); and
a second one or more other nonvolatile storage devices residing on one or more other computing devices coupled to the local area network via the Internet (Sen [0021], Bobbitt [0031], Fig.2, Bone [0065], [0104]), wherein:
the plurality of electronically addressed nonvolatile storage devices are used for a first tier of storage and a second tier of storage (Sen [0024], [0075], Bone [0008] “architecture can be characterized as a "two-tier" client-server system”, [0104]-[0105], [0115]);
the first one or more other nonvolatile storage devices residing on the local area network are used for a third tier of storage (Sen [0041], [0075]-[0076], Bobbitt [0032]); and
the second one or more other nonvolatile storage devices residing on one or more other computing devices coupled to the local area network via the Internet are used for a fourth tier of storage (Sen [0041], [0075]-[0076], Bobbitt [0032]).
Note that Sen and Bobbitt teach a local area network (LAN) or wide area network (WAN) implementing the system, and it is well known that the Internet may be considered a WAN. Therefore, the storage devices connected to each other over the network via LAN or WAN (see Bobbitt Fig.2) using CIFS are construed to be analogous to devices coupled to said local area network via the Internet.
Regarding claim 44, Sen as modified teaches the system of claim 31, comprising one or more other nonvolatile storage devices residing on one or more other computing devices coupled to the local area network via the Internet (Sen [0021], Bobbitt [0031], Fig.2, Bone [0065], [0104]).
Note that Sen and Bobbitt teach a local area network (LAN) or wide area network (WAN) implementing the system, and it is well known that the Internet may be considered a WAN. Therefore, the storage devices connected to each other over the network via LAN or WAN (see Bobbitt Fig.2) using CIFS are construed to be analogous to devices coupled to said local area network via the Internet.
Regarding claim 45, Sen as modified teaches the system of claim 44, wherein:
the plurality of electronically addressed nonvolatile storage devices are used for a first tier of storage; and the one or more other storage devices are used for a second tier of storage (Sen [0041], [0075]-[0076], Bobbitt [0032], Bone [0065], [0104]).
Regarding claim 46, Sen as modified teaches the system of claim 45, wherein data written to the virtual file system is first stored to the first tier of storage and then migrated to the second tier of storage according to policies of the virtual file system (Sen [0034], [0038]-[0039], [0057], Bobbitt [0042], [0046], [0047]-[0048], [0090], [0097]-[0098]).
Regarding claim 47, Sen as modified teaches the system of claim 45, wherein the second tier of storage is an object-based storage (Bone [0004]-[0005]).
Regarding claim 48, Sen as modified teaches the system of claim 45, wherein the one or more other nonvolatile storage devices comprises one or more mechanically addressed nonvolatile storage devices (Sen [0024], [0058], Bobbitt [0032], [0109]-[0110]);
wherein optionally file system calls from the client application are handled by a virtual file system front end instance residing on a second one of the plurality of computing devices (Bobbitt [0027], [0034], [0038]).
Regarding claim 49, Sen as modified teaches the system of claim 31, wherein: a client application resides on a first one of the plurality of computing devices (Sen [0022], Bobbitt Fig.3:11, where client 43 is a front end, [0041]); and one or more components of the virtual file system reside on the first one of the plurality of computing devices (Bobbitt [0027], [0034], [0038]).
Regarding claim 50, Sen as modified teaches the system of claim 49, wherein the client application and the one or more components of the virtual file system share resources of a processor of the first one of the plurality of computing devices (Sen [0042], Bobbitt [0027], [0034], [0038]).
NOTE: in analogous art, Bergsten et al. (US 2015/0248366) likewise teaches claim 50 in [0072], [0064], [0111] and further obviates the teachings of Bobbitt.
Regarding claim 51, Sen as modified teaches the system of claim 49, wherein: the client application is implemented by a main processor chipset of the first one of the plurality of computing devices (Bobbitt [0032], [0108]); and the one or more components of the virtual file system are implemented by a processor of a network adaptor of the first one of the plurality of computing devices (Bobbitt [0033], [0108]).
Claim(s) 32, 36, 37 is/are additionally rejected under 35 U.S.C. 103 as being unpatentable over Sen as modified and in further view of Bergsten et al. (US 20150248366) (see IDS filed 12/14/2023) or Roth (US 20170242725).
Regarding claim 32, Sen as modified teaches the system of claim 31, wherein: the plurality of resiliency nodes is connected to a plurality of electronically addressed nonvolatile storage devices (Sen [0024], [0049], [0054], Bone [0065], [0104]-[0105], where tape devices are nonvolatile),
the virtual file system comprises one or more instances of a virtual file system front end (Bobbitt Fig.3:11, where client 43 is a front end, [0041], Bone [0111], [0122], see F1, 12 where client devices are front end and virtual file system is back end, which are divided by a middleware), one or more instances of a virtual file system back end (Bobbitt [0043]-[0045]),
a first instance of a virtual file system memory controller (Bobbitt [0040], [0082], [0109], [0110]) configured to control accesses to a first of the plurality of electronically addressed nonvolatile storage devices (Sen [0024], [0049], [0054], Bone [0065], [0104]-[0105] where tape devices are nonvolatile), and a second instance of a virtual file system memory controller (Bobbitt [0042]-[0043], [0110]) configured to control accesses to a second of the plurality of electronically addressed nonvolatile storage devices (Bone [0104]-[0105], [0114], [0126]).
Sen as modified does not explicitly teach this limitation; however, Bergsten discloses the plurality of resiliency nodes comprises a plurality of electronically addressed nonvolatile storage devices ([0017], [0061], [0097]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sen as modified to include nonvolatile storage devices as disclosed by Bergsten. Doing so would provide an efficient or optimal solution that addresses all the requirements without any trade-offs (Bergsten [0055]).
Regarding claim 36, to the extent Sen as modified does not explicitly teach, Bergsten and Roth disclose the system of claim 32, wherein: the number of instances in the one or more instances of the virtual file system front end is dynamically adjustable based on demand on resources of the plurality of computing devices; and the number of instances in the one or more instances of the virtual file system back end is dynamically adjustable based on demand on resources of the plurality of computing devices (Bergsten [0081], [0084], [0064], [0106], Roth [0028], [0030], [0032], [0040]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sen as modified to include dynamic adjustment of instances as disclosed by Bergsten and Roth. Doing so would greatly benefit the compute server applications by making the performance of the storage devices available to the network (Bergsten [0075]) and would reduce the amount of upfront investment needed for the infrastructure, often resulting in an overall lower cost (Roth [0003]).
Regarding claim 37, to the extent Sen as modified does not explicitly teach, Bergsten and Roth disclose the system of claim 32, wherein: the number of instances in the one or more instances of the virtual file system front end is dynamically adjustable independent of the number of instances in the one or more instances of the virtual file system back end (Bergsten [0081], [0084], [0064], [0106]); and the number of instances in the one or more instances of the virtual file system back end is dynamically adjustable independent of the number of instances in the one or more instances of the virtual file system front end (Bergsten [0081], [0084], [0064], [0106], Roth [0028], [0030], [0032], [0040]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sen as modified to include independent adjustment of instances as disclosed by Bergsten and Roth. Doing so would greatly benefit the compute server applications by making the performance of the storage devices available to the network (Bergsten [0075]) and would reduce the amount of upfront investment needed for the infrastructure, often resulting in an overall lower cost (Roth [0003]).
Claim(s) 39-41, 43, 47 is/are additionally rejected under 35 U.S.C. 103 as being unpatentable over Sen as modified and in further view of Goss et al. (US 2014/0281280).
Regarding claim 39, to the extent Sen as modified does not explicitly teach, Goss discloses the system of claim 38, wherein:
the first one or more of the plurality of electronically addressed nonvolatile storage devices are characterized by a first value of a latency metric; and the second one or more of the plurality of electronically addressed nonvolatile storage devices are characterized by a second value of the latency metric ([0017], [0029]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sen to include such metrics as disclosed by Goss. Doing so would increase the efficiency of both the memory component and the overall data storage device (Goss [0013]).
NOTE: in analogous art, Chou et al. (US 2015/0254088) likewise discloses first and second values of the latency metric in [0073], [0146], [0149], [0154], [0156], [0161] and further obviates the teachings of Sen.
Regarding claim 40, to the extent Sen as modified does not explicitly teach, Goss discloses the system of claim 38, wherein: the first one or more of the plurality of electronically addressed nonvolatile storage devices are characterized by a first value of an endurance metric; and the second one or more of the plurality of electronically addressed nonvolatile storage devices are characterized by a second value of the endurance metric ([0017], [0029]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sen to include such metrics as disclosed by Goss. Doing so would increase the efficiency of both the memory component and the overall data storage device (Goss [0013]).
NOTE: in analogous art, Chou et al. (US 2015/0254088) likewise discloses first and second values of the endurance metric in [0073], [0146], [0149], [0154], [0156], [0161] and further obviates the teachings of Sen.
Regarding claim 41, Sen as modified teaches the system of claim 40, wherein data written to the virtual file system is first stored to the first tier of storage and then migrated to the second tier of storage according to policies of the virtual file system (Sen [0034], [0038]-[0039], [0057], Bobbitt [0042], [0046], [0047]-[0048], [0090], [0097]-[0098]).
Regarding claim 43, Sen as modified teaches the system of claim 31, comprising:
a first one or more other nonvolatile storage devices residing on the local area network (Sen [0021], [0024], Bobbitt [0031], Fig.10, note that elements 24 are hard disks, which are nonvolatile [0032], [0110]); and
a second one or more other nonvolatile storage devices residing on one or more other computing devices coupled to the local area network via the Internet (Sen [0021], Bobbitt [0031], Fig.2, Bone [0065], [0104]), wherein:
the plurality of electronically addressed nonvolatile storage devices are used for a first tier of storage and a second tier of storage (Sen [0024], [0075], Bone [0008] “architecture can be characterized as a "two-tier" client-server system”, [0104]-[0105], [0115]);
the first one or more other nonvolatile storage devices residing on the local area network are used for a third tier of storage (Sen [0041], [0075]-[0076], Bobbitt [0032]); and
the second one or more other nonvolatile storage devices residing on one or more other computing devices coupled to the local area network via the Internet are used for a fourth tier of storage (Sen [0041], [0075]-[0076], Bobbitt [0032]).
To the extent Sen as modified does not explicitly teach, Goss discloses a second tier of storage and a fourth tier of storage ([0045]-[0046]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sen to include multi-tier storage as disclosed by Goss. Doing so would provide wear leveling to the tier and would ensure that repeated writes from frequently incremented state information do not induce resistance drift or other endurance-related effects to an extent sufficient to reduce the ability to reliably recover the data (Goss [0053]).
Regarding claim 47, to the extent Sen as modified does not explicitly teach, Goss discloses the system of claim 45, wherein the second tier of storage is an object-based storage ([0013], [0027], [0029]).
Response to Arguments
Applicant's arguments, filed 02/26/2026, have been fully considered but they are not deemed persuasive.
With respect to the rejection of independent claim 31 under 35 U.S.C. §103 over Nicklin in view of Roth and Dam, the applicant argues –
“The cited combination fails to teach or suggest the distributed resiliency architecture required by claim 31, particularly the limitation that each distributed processor determines, according to data-specific metadata or hash-based logic, which resiliency node or nodes store the resiliency information used to recover particular data. This limitation defines a deterministic, distributed, metadata- or hash-driven resiliency routing framework that is absent from the cited art,” and “Even when considered in combination, the cited references do not teach or suggest the claimed requirement that each distributed processor determine, according to data-specific metadata or hash-based logic, which resiliency node or nodes store resiliency information for particular data. The OA does not identify any disclosure in the references describing deterministic, per-data routing of resiliency information using metadata or hashing.”
Specifically, with respect to Nicklin, the applicant states –
“Nicklin … does not disclose resiliency nodes storing resiliency information generated by distributed processors, nor does it disclose corruption-triggered recovery using such resiliency information. Most importantly, Nicklin does not teach that distributed processors determine, based on data-specific metadata or hash-based logic, which nodes store resiliency information for particular data.”
The arguments are not persuasive. Nicklin clearly teaches generating “a unified virtual snapshot from a plurality of physical snapshots of contents of file systems distributed across several independent, network storage devices” [0020], F1:16, “unified virtual snapshot comprises the captured physical snapshots.” The virtual and physical snapshots are used in a recovery and are the “resiliency information,” which is “written in each of the data storage systems 16(1) and 16(2) [distributed nodes] and metadata storage system 18 that allows an operator or program to locate components, e.g. a file of a virtual snapshot” [0042]. Nicklin further teaches that with “stored metadata on the file virtualization system 14, the file virtualization system 14 can locate a particular file or directory” [0043]; specifically, “Using the stored metadata, the file virtualization system 14 translates the request CL-REQ-1-1 into a file virtualization request FV-REQ-1-1 which is suitable for execution on data storage system 16(1) (also known as DS1) in which the file is actually located” [0048].
To elaborate, each of the distributed systems 16 captures and stores a snapshot (resiliency information) [0073]. “Virtual directories are dynamically created that contain a list of available virtual snapshots” [0046], and the “virtualization metadata from data storage systems 16(1) and 16(2)” is recorded. Further, the “virtualization system 14 manages metadata in data storage 18 that tracks the location of files and directories that are distributed across data storage systems 16(1) and 16(2)” [0039].
It is further noted that the term “data-specific metadata” is very broad and, under the broadest reasonable interpretation, encompasses any metadata about the file, such as the “metadata about the file creation operation” [0050] disclosed by Nicklin. Clearly, each node of the distributed system can produce the snapshot, and the “stored metadata on the file virtualization system” is used to locate a snapshot file (resiliency information) in the distributed system.
Thus, Nicklin fully teaches “the resiliency information on one or more resiliency nodes of the plurality of resiliency nodes is used to recover the particular data, and each of the plurality of distributed processors is operable to determine, according to data-specific metadata or hash-based logic, the one or more resiliency nodes of the plurality of resiliency nodes that store the resiliency information used to recover the particular data” as required by claim 31.
With respect to the reference of Dam, the applicant argues –
“Dam separates metadata management from object storage, it does not disclose resiliency nodes that store resiliency information generated by distributed processors. Nor does Dam disclose per-data, metadata- or hash-based determination of which nodes store recovery information. The MDS/OST architecture of Dam concerns namespace management and object storage distribution, not deterministic resiliency node selection for corruption recovery. Thus, Dam does not supply the missing resiliency-routing logic required by claim 31.”
The arguments are not persuasive. Dam analogously teaches a distributed storage system [0036], which includes a meta-data server [0039] that handles file backup/restore operations and provides for the “computation of hash (e.g., MD5) checksum on each file and save of the checksum in the respective meta-data for the file” [0043]. Further note that the “meta-data of a file is generally expanded to include information about the data objects and the type of data object” [0053]; thus, Dam also discloses “data-specific metadata.” Dam further teaches that the “hash value is generally stored as an attribute of the meta-data for the file in the MDS” [0091]. Such metadata and hash information is used to locate the objects – “The object id (i.e., identifier) and the location (in storage) of the object identify the object itself” [0028]; the system “examines all hash buckets and find files that have … hash value” [0095], [0097].
The reference of Dam is merely used to show that backup files can be located based on a hash value stored with the file. It is further noted that the limitation “data-specific metadata or hash-based logic” is written in the alternative, and thus only one of the recited alternatives need be taught by the prior art.
With respect to the reference of Roth, the applicant argues –
“Roth does not disclose a distributed virtual file system architecture, resiliency nodes, or distributed processors managing metadata for particular data. It also does not disclose hash-based determination of which nodes store resiliency information for recovery. VM snapshot deltas are conceptually and architecturally distinct from distributed resiliency information generated and placed according to metadata or hashing logic within a virtual file system.”
The arguments are not persuasive. Roth analogously teaches obtaining snapshots of virtual machines in a distributed computing environment [0016], [0018] and teaches the ability to “automatically launch replacements for the failed one or more virtual machines” [0032] based on occurrences of events, such as the –
“occurrence of certain execution errors, upon the detection that the virtual machine is in an unauthorized configuration (e.g., unauthorized software is detected as being installed on the virtual machine)” [0076].
The detection of certain execution errors or unauthorized software is analogous to the limitation of determining “if the particular data is determined to be corrupt.” This is the only feature for which Roth is relied upon. Other features of Roth need not be included when this modification takes place.
Therefore, Nicklin in view of Roth and Dam teaches all limitations of claim 31 as required.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to POLINA G PEACH whose telephone number is (571)270-7646. The examiner can normally be reached Monday-Friday, 9:30 - 5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aleksandr Kerzhner can be reached at 571-270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/POLINA G PEACH/ Primary Examiner, Art Unit 2165 March 19, 2026