DETAILED ACTION
Claims 1-20 are pending in this application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 9/5/2025 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-6, 8, 10-12, and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Gladwin et al. (U.S. PGPub No. 2011/0122523) in view of Healey et al. (U.S. PGPub No. 2013/0132800), and further in view of Storer et al. (U.S. PGPub No. 2015/0324123).
Claim 1
Gladwin (2011/0122523) teaches:
A method, comprising:
distributing user data throughout a plurality of storage nodes through erasure coding using a first erasure coding scheme; P. 0071 encoder 77 encodes the pre-manipulated data segment 92 using some type of erasure coding scheme to produce encoded segment 94 (which are transformed into EC data slices, see P. 0073); P. 0079 the DS processing 34 stores the data slices across more than one of DS unit 102 memories
adding a storage node to the plurality of storage nodes; and P. 0102 add the memory to a storage set that requires more capacity; P. 0108 the new memory may have more capacity than the missing memory it is replacing
configuring the plurality of storage nodes, with the added storage node, to support a second erasure coding scheme differing from the first erasure coding scheme held by the plurality of storage nodes prior to inclusion of the added storage node […] P. 0134 operational parameters may be changed based on a new configuration message from the DS managing unit; P. 0058 operational parameters include an error coding algorithm; P. 0132 a configuration change of DS unit storage set includes the addition of DS units assigned as pillars
Gladwin does not explicitly teach the plurality of storage nodes supporting multiple erasure coding schemes.
Healey (2013/0132800) teaches:
configuring the plurality of storage nodes, with the added storage node, to support a second erasure coding scheme differing from the first erasure coding scheme held by the plurality of storage nodes prior to inclusion of the added storage node while maintaining the first erasure coding scheme for at least a portion of the user data, P. 0067 the storage system can dynamically determine which erasure code algorithm to use for coding each respective incoming data; P. 0044 allocating the encoded data objects encoded by the different algorithms to any available allocation units on the same or different devices in a pool of disk storage devices
It would have been obvious to a person with ordinary skill in the art at the effective filing date of the application to include the invention of Gladwin with the plurality of storage nodes supporting multiple erasure coding schemes taught by Healey.
The motivation being that it results in an efficient utilization of storage space and an acceptable number of compute cycles (see Healey P. 0067).
The systems of Gladwin and Healey do not explicitly teach storage nodes including a solid state memory and a zoned storage device storing logical addresses sequentially.
Storer (2015/0324123) teaches:
wherein the storage nodes comprise non-volatile solid state memory and at least one zone storage device where the at least one zone storage device provides a logical block address range that is written sequentially. P. 0026 and FIG. 1 each node 110A, 110B, 110C or 110D includes a number of nonvolatile mass storage devices 165, which may be non-volatile solid-state memory; P. 0042 storage space on each node is divided into zones as logical containers for data objects; P. 0051 Each data object is written sequentially to a data zone on a node
It would have been obvious to a person with ordinary skill in the art at the effective filing date of the application to include the invention of Gladwin and Healey with storage nodes including a solid state memory and a zoned storage device storing logical addresses sequentially taught by Storer.
The motivation being to provide reliability in the face of node failure (see Storer P. 0042).
The systems of Gladwin, Healey and Storer are analogous because they are from the “same field of endeavor” and from the same “problem solving area.” Namely, they are all from the field of memory systems.
Therefore it would have been obvious to combine Gladwin and Healey with Storer to obtain the invention as recited in claims 1-7.
Claim 2
Storer (2015/0324123) teaches:
The method of claim 1, wherein the at least one zone storage device differs from another zone storage device with respect to size of page groups in a zone. P. 0055 data node 310 divides the data object into data chunks (707), the size of the chunk is configurable and can be dynamically determined; P. 0053 different striping rules may be applied to different chunk sizes
Claim 3
Gladwin (2011/0122523) teaches:
The method of claim 1, further comprising:
reading the user data in the plurality of storage nodes according to a first erasure coding scheme; and P. 0067 grid module 82 decodes the slices in accordance with the error coding dispersal storage function [first erasure coding scheme] to reconstruct the data segment
writing the user data to the plurality of storage nodes, including at least one storage node with a differing storage capacity, according to the second erasure coding scheme. P. 0067 access module 80 reconstructs the data object from the data segments and the gateway module 78 formats the data object for transmission to the user device; FIG. 10 and P. 0107 DS processing creates all new slices for every pillar by encoding and slicing data segments in accordance with new operational parameters [second erasure coding scheme]; P. 0108 the new memory may have more capacity than the missing memory it is replacing
Claim 4
Gladwin (2011/0122523) teaches:
The method of claim 1, further comprising: replacing a first storage node, having a first storage capacity, with a second storage node, having a second, differing storage capacity; and P. 0102 add the memory to a storage set that requires more capacity; P. 0108 the new memory may have more capacity than the missing memory it is replacing
configuring the plurality of storage nodes, with the second storage node, to support the second erasure coding scheme differing from the first erasure coding scheme used by the first storage node, wherein the configuring is initiated by the plurality of storage nodes in response to replacing the first storage node. P. 0134 operational parameters may be changed based on a new configuration message from the DS managing unit; P. 0132 a configuration change of DS unit storage set includes the addition of DS units assigned as pillars
Claim 5
Healey (2013/0132800) teaches:
The method of claim 1, further comprising: configuring the plurality of storage nodes, with the added storage node, to accommodate multiple erasure coding schemes. P. 0067 the storage system can dynamically determine which erasure code algorithm to use for coding each respective incoming data; P. 0064 and FIG. 1 incoming data objects 102 are stored in any of disk storages
Claim 6
Gladwin (2011/0122523) teaches:
The method of claim 1, further comprising: recovering the user data from a remainder of the plurality of storage nodes in order to write the user data to the remainder of the plurality of storage nodes plus the added storage node. P. 0117 data may be read and decoded using the initial erasure code parameters, and encoded and written using new erasure code parameters; P. 0117 data is spread over the new storage nodes as a background repair process operation using new erasure code parameters
Claim 8
Gladwin (2011/0122523) teaches:
A method, comprising: distributing user data throughout a plurality of storage nodes through erasure coding, […] P. 0071 encoder 77 encodes the pre-manipulated data segment 92 using some type of erasure coding scheme to produce encoded segment 94 (which are transformed into EC data slices, see P. 0073); P. 0079 the DS processing 34 stores the data slices across more than one of DS unit 102 memories
reading the user data in the plurality of storage nodes according to a first erasure coding scheme maintained across the plurality of storage nodes; and P. 0134 operational parameters may be changed based on a new configuration message from the DS managing unit; P. 0058 operational parameters include an error coding algorithm; P. 0067 grid module 82 decodes the slices in accordance with the error coding dispersal storage function [first erasure coding scheme] to reconstruct the data segment
writing the user data to the plurality of storage nodes, including at least one additional storage node added to the plurality of storage nodes, according to a second erasure coding scheme […] P. 0067 access module 80 reconstructs the data object from the data segments and the gateway module 78 formats the data object for transmission to the user device; FIG. 10 and P. 0107 DS processing creates all new slices for every pillar by encoding and slicing data segments in accordance with new operational parameters [second erasure coding scheme]; P. 0108 the new memory may have more capacity than the missing memory it is replacing
Gladwin does not explicitly teach the plurality of storage nodes supporting multiple erasure coding schemes.
Healey (2013/0132800) teaches:
writing the user data to the plurality of storage nodes, including at least one additional storage node added to the plurality of storage nodes, according to a second erasure coding scheme while maintaining the first erasure coding scheme across the plurality of storage nodes. P. 0067 the storage system can dynamically determine which erasure code algorithm to use for coding each respective incoming data; P. 0044 allocating the encoded data objects encoded by the different algorithms to any available allocation units on the same or different devices in a pool of disk storage devices
It would have been obvious to a person with ordinary skill in the art at the effective filing date of the application to include the invention of Gladwin with the plurality of storage nodes supporting multiple erasure coding schemes taught by Healey.
The motivation being that it results in an efficient utilization of storage space and an acceptable number of compute cycles (see Healey P. 0067).
The systems of Gladwin and Healey do not explicitly teach a storage node comprising a zoned storage device storing data sequentially.
Storer (2015/0324123) teaches:
[…] wherein the plurality of storage nodes are configured to accommodate one or more zone storage devices, wherein the storage nodes comprise non-volatile solid state memory and where a zone storage device of a storage node provides zones with a logical block address range that is written sequentially; P. 0026 and FIG. 1 each node 110A, 110B, 110C or 110D includes a number of nonvolatile mass storage devices 165, which may be non-volatile solid-state memory; P. 0042 storage space on each node is divided into zones as logical containers for data objects; P. 0051 Each data object is written sequentially to a data zone on a node
It would have been obvious to a person with ordinary skill in the art at the effective filing date of the application to include the invention of Gladwin and Healey with storage nodes including a solid state memory and a zoned storage device storing logical addresses sequentially taught by Storer.
The motivation being to provide reliability in the face of node failure (see Storer P. 0042).
The systems of Gladwin, Healey and Storer are analogous because they are from the “same field of endeavor” and from the same “problem solving area.” Namely, they are all from the field of memory systems.
Therefore it would have been obvious to combine Gladwin and Healey with Storer to obtain the invention as recited in claims 8-13.
Claim 10
Gladwin (2011/0122523) teaches:
The method of claim 8, further comprising: adding a storage node, having the additional storage capacity differing from at least one of the plurality of storage nodes; and P. 0102 add the memory to a storage set that requires more capacity; P. 0108 the new memory may have more capacity than the missing memory it is replacing
configuring the plurality of storage nodes, with the added storage node, to support the second erasure coding scheme. P. 0134 operational parameters may be changed based on a new configuration message from the DS managing unit; P. 0132 a configuration change of DS unit storage set includes the addition of DS units assigned as pillars
Claim 11
Gladwin (2011/0122523) teaches:
The method of claim 8, further comprising: replacing a first storage node, having a first storage capacity, with a second storage node, having a second, differing storage capacity; and P. 0102 the new memory may be a replacement memory; P. 0108 the new memory may have more capacity than the missing memory it is replacing
self-configuring the plurality of storage nodes, with the second storage node, to support the second erasure coding scheme, wherein the configuring is initiated by the plurality of storage nodes in response to replacing the first storage node. P. 0134 operational parameters may be changed based on a new configuration message from the DS managing unit; P. 0132 a configuration change of DS unit storage set includes the addition of DS units assigned as pillars
Claim 12
Gladwin (2011/0122523) teaches:
The method of claim 8, further comprising: self-configuring the plurality of storage nodes, with the additional storage node, to accommodate the first erasure coding scheme and the second erasure coding scheme. P. 0130 DS processing 34 may detect that DS unit 2 was added and may move a portion of stored data from the memories of DS unit 208 to DS unit 2 in response
Claim 14
Gladwin (2011/0122523) teaches:
The method of claim 8 wherein the plurality of storage nodes comprise a storage cluster. P. 0071 and FIG. 6 Each RAID group further comprises a plurality of disks 630
Claim 15
Gladwin (2011/0122523) teaches:
A method, comprising: distributing the user data throughout a plurality of storage nodes through erasure coding, P. 0102 data is forward error corrected using Reed-Solomon scheme and sliced into error coded (EC) data slices 42-48; P. 0110 EC data slices are stored in one or more of the memories of multiple DS units [nodes]
wherein the plurality of storage nodes are configured to accommodate uniform and non-uniform storage capacities of the storage nodes and […] P. 0108 a memory may have more capacity than the missing memory it is replacing
replacing a first storage node, having a first storage capacity, with a second storage node; and P. 0102 add the memory to a storage set that requires more capacity; P. 0108 the new memory may have more capacity than the missing memory it is replacing
the storage cluster self-configuring the plurality of storage nodes, with the second storage node utilizing a second erasure coding scheme differing from a first erasure coding scheme utilized by the first storage node, wherein the configuring is initiated by the plurality of storage nodes in response to replacing the first storage node […]. P. 0134 operational parameters may be changed based on a new configuration message from the DS managing unit; P. 0132 a configuration change of DS unit storage set includes the addition of DS units assigned as pillars; P. 0043 the number of pillars affects how data is forward error corrected
Gladwin does not explicitly teach the plurality of storage nodes supporting multiple erasure coding schemes.
Healey (2013/0132800) teaches:
the storage cluster self-configuring the plurality of storage nodes, with the second storage node utilizing a second erasure coding scheme differing from a first erasure coding scheme utilized by the first storage node, wherein the configuring is initiated by the plurality of storage nodes in response to replacing the first storage node wherein the first erasure coding scheme is maintained for at least a portion of the user data. P. 0067 the storage system can dynamically determine which erasure code algorithm to use for coding each respective incoming data; P. 0044 allocating the encoded data objects encoded by the different algorithms to any available allocation units on the same or different devices in a pool of disk storage devices
It would have been obvious to a person with ordinary skill in the art at the effective filing date of the application to include the invention of Gladwin with the plurality of storage nodes supporting multiple erasure coding schemes taught by Healey.
The motivation being that it results in an efficient utilization of storage space and an acceptable number of compute cycles (see Healey P. 0067).
The systems of Gladwin and Healey do not explicitly teach storage nodes including a solid state memory and a zoned storage device storing logical addresses sequentially.
Storer (2015/0324123) teaches:
wherein at least one of the plurality of storage nodes includes a zone storage device, wherein the storage nodes comprise non-volatile solid state memory and where a zone storage device of a storage node provides zones with a logical block address range that is written sequentially; P. 0026 and FIG. 1 each node 110A, 110B, 110C or 110D includes a number of nonvolatile mass storage devices 165, which may be non-volatile solid-state memory; P. 0042 storage space on each node is divided into zones as logical containers for data objects; P. 0051 Each data object is written sequentially to a data zone on a node
It would have been obvious to a person with ordinary skill in the art at the effective filing date of the application to include the invention of Gladwin and Healey with storage nodes including a solid state memory and a zoned storage device storing logical addresses sequentially taught by Storer.
The motivation being to provide reliability in the face of node failure (see Storer P. 0042).
The systems of Gladwin, Healey and Storer are analogous because they are from the “same field of endeavor” and from the same “problem solving area.” Namely, they are all from the field of memory systems.
Therefore it would have been obvious to combine Gladwin and Healey with Storer to obtain the invention as recited in claims 15-20.
Claim 16
Storer (2015/0324123) teaches:
The method of claim 15, wherein a zone of the zone storage device differs from another zone of another zone storage device with respect to size of page groups in a zone. P. 0055 data node 310 divides the data object into data chunks (707), the size of the chunk is configurable and can be dynamically determined; P. 0053 different striping rules may be applied to different chunk sizes
Claim 17
Gladwin (2011/0122523) teaches:
The method of claim 15, further comprising: adding a storage node, having a storage capacity differing from at least one of the plurality of storage nodes, and configuring the plurality of storage nodes, with the added storage node supporting the second erasure coding scheme. P. 0136-0137 DS processing retrieves, de-slices and decodes slices from the current storage set in accordance with the current operational parameters, then creates and writes slices in accordance with the new operational parameters to the pillars of the DS unit; P. 0043 the number of pillars affects how data is forward error corrected
Claim 18
Gladwin (2011/0122523) teaches:
The method of claim 15, further comprising: self-configuring the plurality of storage nodes, with the second storage node to accommodate the first erasure coding scheme and the second erasure coding scheme. P. 0134 operational parameters may be changed based on a new configuration message from the DS managing unit; P. 0132 a configuration change of DS unit storage set includes the addition of DS units assigned as pillars; P. 0043 the number of pillars affects how data is forward error corrected
Claim 19
Gladwin (2011/0122523) teaches:
The method of claim 15, wherein the plurality of storage nodes comprise a storage cluster. P. 0032 DSN memory 22 includes a plurality of distributed storage (DS) units 36
Claims 7, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Gladwin et al. (U.S. PGPub No. 2011/0122523) in view of Healey et al. (U.S. PGPub No. 2013/0132800), in view of Storer et al. (U.S. PGPub No. 2015/0324123), and further in view of Goel et al. (U.S. PGPub No. 2014/0237211).
Claim 7
The systems of Gladwin, Healey and Storer do not explicitly teach dynamically switching between RAID schemes.
Goel (2014/0237211) teaches:
The method of claim 1 wherein the plurality of storage nodes can dynamically switch between redundant array of independent disks (RAID) schemes. FIG. 10 and P. 0088 a parity (re)-balancing algorithm has determined that some parity blocks should be moved to the newly added disk; P. 0086 the mapping parameters 800 for each RAID group include a number of disks 815, which is changed when disks are added to the RAID group
It would have been obvious to a person with ordinary skill in the art at the effective filing date of the application to include the invention of Gladwin, Healey and Storer with dynamically switching between RAID schemes taught by Goel.
The motivation being to ensure a balanced/uniform distribution of parity blocks across all disks even after a disk addition (see Goel P. 0009).
The systems of Gladwin, Healey, Storer and Goel are analogous because they are from the “same field of endeavor” and from the same “problem solving area.” Namely, they are all from the field of memory systems.
Therefore it would have been obvious to combine Gladwin, Healey and Storer with Goel to obtain the invention as recited in claim 7.
Claim 13
The systems of Gladwin, Healey and Storer do not explicitly teach dynamically switching between RAID schemes.
Goel (2014/0237211) teaches:
The method of claim 8 wherein the plurality of storage nodes can dynamically switch between RAID schemes. FIG. 10 and P. 0088 a parity (re)-balancing algorithm has determined that some parity blocks should be moved to the newly added disk; P. 0086 the mapping parameters 800 for each RAID group include a number of disks 815, which is changed when disks are added to the RAID group
It would have been obvious to a person with ordinary skill in the art at the effective filing date of the application to include the invention of Gladwin, Healey and Storer with dynamically switching between RAID schemes taught by Goel.
The motivation being to ensure a balanced/uniform distribution of parity blocks across all disks even after a disk addition (see Goel P. 0009).
The systems of Gladwin, Healey, Storer and Goel are analogous because they are from the “same field of endeavor” and from the same “problem solving area.” Namely, they are all from the field of memory systems.
Therefore it would have been obvious to combine Gladwin, Healey and Storer with Goel to obtain the invention as recited in claim 13.
Claim 20
The systems of Gladwin, Healey and Storer do not explicitly teach dynamically switching between RAID schemes.
Goel (2014/0237211) teaches:
The method of claim 15 wherein the plurality of storage nodes can dynamically switch between RAID schemes. FIG. 10 and P. 0088 a parity (re)-balancing algorithm has determined that some parity blocks should be moved to the newly added disk; P. 0086 the mapping parameters 800 for each RAID group include a number of disks 815, which is changed when disks are added to the RAID group
It would have been obvious to a person with ordinary skill in the art at the effective filing date of the application to include the invention of Gladwin, Healey and Storer with dynamically switching between RAID schemes taught by Goel.
The motivation being to ensure a balanced/uniform distribution of parity blocks across all disks even after a disk addition (see Goel P. 0009).
The systems of Gladwin, Healey, Storer and Goel are analogous because they are from the “same field of endeavor” and from the same “problem solving area.” Namely, they are all from the field of memory systems.
Therefore it would have been obvious to combine Gladwin, Healey and Storer with Goel to obtain the invention as recited in claim 20.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Gladwin et al. (U.S. PGPub No. 2011/0122523) in view of Healey et al. (U.S. PGPub No. 2013/0132800), in view of Storer et al. (U.S. PGPub No. 2015/0324123), and further in view of Willhalm et al. (U.S. PGPub No. 2017/0123980).
Claim 9
Storer (2015/0324123) teaches:
[…] a zone of the zone storage device differs from another zone of another zone storage device with respect to size of page groups in a zone. P. 0055 data node 310 divides the data object into data chunks (707), the size of the chunk is configurable and can be dynamically determined; P. 0053 different striping rules may be applied to different chunk sizes
The systems of Gladwin, Healey and Storer do not explicitly teach storage class memory.
Willhalm (2017/0123980) teaches:
The method of claim 8, wherein the storage nodes have non-volatile solid state memory that comprises storage class memory […] P. 0027 possible technology choices include storage class memory (SCM); P. 0005 solid state drives (SSDs)
It would have been obvious to a person with ordinary skill in the art at the effective filing date of the application to include the invention of Gladwin, Healey and Storer with the storage class memory taught by Willhalm.
The motivation being that storage class memory is an obvious variant of common memory technology solutions.
The systems of Gladwin, Healey, Storer and Willhalm are analogous because they are from the “same field of endeavor” and from the same “problem solving area.” Namely, they are all from the field of memory systems.
Therefore it would have been obvious to combine Gladwin, Healey and Storer with Willhalm to obtain the invention as recited in claim 9.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
The examiner notes the arguments are directed towards Basham, while the applicable limitations are now taught by Storer.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Feng et al. (U.S. Patent No. 8,862,847) teaches using erasure coding while expanding a distributed storage system by adding a new storage node.
Hayden et al. (U.S. PGPub No. 2011/0238936) teaches transforming a 4+2 erasure-coding-redundancy scheme to an 8+2 erasure-coding-redundancy scheme by recomputing all the checksum bits and redistributing the data.
Barall et al. (U.S. PGPub No. 2006/0112222) teaches reconfiguring data stored on a first arrangement of storage devices using a first redundancy scheme to a second redundancy scheme on a different arrangement of storage devices for accommodating without data loss at least one of expansion of capacity by the addition of another storage device to the set.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHANIE WU whose telephone number is (571)272-0257. The examiner can normally be reached 1pm to 6pm, and 10pm to 1am Eastern time (10am to 3pm, and 7pm to 10pm Pacific time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rocio Del Mar Perez-Velez can be reached on (571) 270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEPHANIE WU/ Primary Examiner, Art Unit 2133