The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Action is in response to communications filed 12/26/2025.
Claims 16-18, 22-24 and 28-30 are amended.
Claims 1-15 are cancelled. Claims 16-33 are pending.
Claims 16-33 are rejected.
Continued Examination under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 26, 2025 has been entered.
Response to Arguments
7. Applicant's arguments filed December 26, 2025 have been fully considered but they are not persuasive with respect to the prior art rejections.
8. As per the 103 rejection of claims 16, 22 and 28, Applicant argued that Nguyen/Yang fails to disclose or suggest the feature of "wherein the hot spare area always exists in the device to provide a cache of the device to temporarily store target data." In response, the Examiner relies on a newly cited reference, Linnell, to disclose this limitation.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
9. Claims 16-33 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen et al. (US PGPUB 2017/0123915, hereinafter "Nguyen"), in view of Yang et al. (US 11,782,624, hereinafter "Yang"), and further in view of Linnell et al. (US 8,694,724, hereinafter "Linnell").
As per independent claim 16, Nguyen discloses a device, comprising: a controller; a hot spare area; and a data area [(Paragraphs 0006, 0028-0030 and 0036; FIGs.1 and 4) where Nguyen teaches a method is provided for a storage system having a plurality of solid-state drives (SSDs). Each of the SSDs may have an advertised space and a device-level OP space. For each of the SSDs, a controller of the storage system may designate a portion of the advertised space as a system-level OP space, thereby forming a collection of system-level OP spaces. In response to the failure of one of the SSDs, the storage system controller may repurpose a portion of the collection of system-level OP spaces into a temporary spare drive, rebuild data of the failed SSD, and store the rebuilt data onto the temporary spare drive. The temporary spare drive may be distributed across the SSDs that have not failed. FIG. 4 depicts system 400 with host device 102 communicatively coupled to storage system 404, in accordance with one embodiment. In storage system 404, a portion of the system-level OP space may be repurposed as one or more temporary hot spare drives. For example, a portion of system-level OP space 320a may be repurposed as temporary spare space (SP) 422a; a portion of system-level OP space 320b may be repurposed as temporary spare space (SP) 422b to correspond to the claimed limitation], wherein a storage function of the data area is provided by a first storage unit set comprised in the device, and the first storage unit set comprises a plurality of first-level storage units [(Paragraphs 0006, 0028-0030 and 0034-0036; FIGs.1 and 4) where Nguyen teaches SSD controller 110a may access any storage space within SSD 108a (i.e., advertised space 316a, system-level OP space 320a and device-level OP space 218a). SSD controller 110b may access any storage space within SSD 108b (i.e., advertised space 316b, system-level OP space 320b and device-level OP 218b). The system-level OP space may be used by storage system controller 106 to perform system-level garbage collection (e.g., garbage collection which involves copying blocks from one storage unit to another storage unit). The system-level OP space may increase the system-level garbage collection efficiency, which reduces the system-level write amplification. If there is a portion of the system-level OP space not being used by the system-level garbage collection, such portion of the system-level OP space can be used by the device-level garbage collection to correspond to the claimed limitation], wherein a storage function of the hot spare area is provided by a plurality of second-level storage units comprised in the device [(Paragraphs 0006, 0028-0030 and 0034-0036; FIGs.1 and 4) where Nguyen teaches FIG. 4 depicts system 400 with host device 102 communicatively coupled to storage system 404, in accordance with one embodiment. In storage system 404, a portion of the system-level OP space may be repurposed as one or more temporary hot spare drives. For example, a portion of system-level OP space 320a may be repurposed as temporary spare space (SP) 422a; a portion of system-level OP space 320b may be repurposed as temporary spare space (SP) 422b; and a portion of system-level OP space 320c may be repurposed as temporary spare space (SP) 422c. Temporary spare space 422a, temporary spare space 422b and temporary spare space 422c may collectively form one or more temporary spare drives which may be used to rebuild the data of one or more failed storage units. 
Upon recovery of the failed storage unit(s), the rebuilt data may be copied from the temporary spare drive(s) onto the recovered storage unit(s), and the temporary spare drive(s) may be converted back into system-level OP space (i.e., storage system 404 reverts to storage system 304) to correspond to the claimed limitation]; wherein the hot spare area provides a cache of the device to temporarily store target data [(Paragraphs 0006, 0028-0030, 0034-0036 and 0062; FIGs.1 and 4) where the re-purposing of a fraction of the system-level OP space acts as a temporary hot spare, it is possible, in some embodiments, to re-purpose a fraction of the system-level OP space for other purposes, such as for logging data, caching data, storing a process core dump and storing a kernel crash dump. More generally, it is possible to re-purpose a fraction of the system-level OP space for any use case, as long as the use is for a short-lived “emergency” task that is higher in priority than garbage collection efficiency to correspond to the claimed limitation]; and wherein the controller is configured to write, under a specific condition, the target data stored in the hot spare area into the data area [(Paragraphs 0006, 0028-0030, 0034-0036, 0042 and 0062; FIGs.1 and 4) where the storage system 304 may enter a failure mode (e.g., one of the storage units may fail). At step 506, storage system controller 106 may repurpose a fraction of the system-level OP space as a temporary hot spare. At step 508, storage system controller 106 may rebuild data of the failed storage unit. At step 510, storage system controller 106 may store the rebuilt data on the temporary hot spare. At step 512, the failed storage unit may be restored, either by being replaced or by being repaired. At step 514, storage system controller 106 may copy the rebuilt data from the temporary hot spare onto the restored storage unit. At step 516, storage system controller 106 may convert the temporary hot spare drive back into system-level OP space. Storage system 304 may then resume a normal mode of operation, in which system-level OP space is used to more efficiently perform system-level garbage collection (step 504) to correspond to the claimed limitation].
Nguyen does not appear to explicitly disclose a storage capacity of each of the first-level storage units that provides the storage function of the data area is greater than a storage capacity of each of the second-level storage units that provides the storage function of the hot spare area.
However, Yang discloses a storage capacity of each of the first-level storage units that provides the storage function of the data area is greater than a storage capacity of each of the second-level storage units that provides the storage function of the hot spare area [(Column 15, lines 1-15 and 45-54; FIGs. 1 and 4-6) wherein Tier 1 may be implemented with storage devices based on persistent memory such as cross-gridded nonvolatile memory with bulk resistance change which may provide very high performance. Tier 1 may operate, for example, as a storage cache for one or more of the other tiers. Tier 2 may be implemented with single-level cell (SLC) NVMe SSDs which may provide high performance but not as high as Tier 1. Tier 3 may be implemented with multi-level cell (MLC) NVMe SSDs which may provide middle-level performance. Tiers 4 and 5 may be implemented with triple-level cell (TLC) and quad-level (QLC) NVMe SSDs, respectively, which may provide successively lower performance but higher capacity. In some embodiments, a partition manager system in accordance with example embodiments of the disclosure may apply similar adjustment methods, for example, between Tier 2 and Tier 3, where Tier 2 may operate as a cache tier for Tier 3, and so on; Tier 1 may be implemented with SLC NVMe SSDs which may provide a very high level of performance and a relatively fine level of granularity. Tier 1 may operate, for example, as a storage cache for one or more of the other tiers. Tier 2 may be implemented with QLC NVMe SSDs which may provide a relatively high level of performance and a relatively coarse level of granularity. Tier 3 may be implemented with HDDs which may provide a relatively lower level of performance and relatively coarse granularity, but at a relatively low cost to correspond to the claimed limitation].
Nguyen and Yang are analogous art because they are from the same field of endeavor of memory management.
Before the effective filing date, it would have been obvious to one of ordinary skill in the art, having the teachings of Nguyen and Yang before him or her, to modify the method of Nguyen to include the storage units of Yang because it would improve reliability, endurance, and/or throughput performance of the data storage device.
The motivation for doing so would be to "provide automated re-partitioning decision making and/or operations based on one or more factors such as runtime workload analysis, performance improvement estimation, quality-of-service (QoS), service level agreements (SLAs), and/or the like" (Yang, Column 6, lines 5-10).
Nguyen/Yang does not appear to explicitly disclose wherein the hot spare area always exists in the device to provide a cache of the device to temporarily store target data.
However, Linnell discloses wherein the hot spare area always exists in the device to provide a cache of the device to temporarily store target data [(Column 11, lines 40-62 and Column 15, lines 29-34) wherein the technique also comprises provisioning 430 at least a portion of the cache as a virtual hot spare device in response to detecting a failure state in connection with one of the data storage devices. In this embodiment, the technique is configured such that a portion at least of the cache may be dynamically provisioned to act as the hot spare in response to detecting a failure state in connection with one of the data storage devices. A problem associated with conventional approaches is that physical hot spares introduce issues of location in the data storage system. The provisioning of the cache as the virtual hot spare device in response to detecting a failure state has the advantage of providing topology-independent hot spare capability. In one embodiment, as will be described in more detail below, the data associated with the failed device may be rebuilt in the virtual hot spare device in the cache. Additionally, the failed device may be repaired, corrected or replaced. The rebuilt data may be copied and returned to the new data storage device in response to repairing, correcting or replacing the failed data storage device with the new data storage device. In such a scenario, the virtual hot spare device may be re-provisioned as cache in response to copying or returning rebuilt data to the new data storage device. The cache and data storage devices are configured so that at least a portion of the cache is provisioned as a virtual hot spare device dv in response to detecting the failure state. The virtual hot spare device dv can form with the surviving four data storage devices (d0, d1, d2, d4) a RAID configuration to correspond to the claimed limitation].
Nguyen/Yang and Linnell are analogous art because they are from the same field of endeavor of memory management.
Before the effective filing date, it would have been obvious to one of ordinary skill in the art, having the teachings of Nguyen/Yang and Linnell before him or her, to modify the method of Nguyen to include the spare devices of Linnell because it would improve reliability, endurance, and/or throughput performance of the data storage device.
The motivation for doing so would be to "provide both improved performance and complete automation of the rebuild/repair cycle" (Linnell, Column 6, lines 56-58).
Therefore, it would have been obvious to combine Nguyen, Yang, and Linnell to obtain the invention as specified in the instant claim.
As per claim 17, Yang discloses wherein each of the first-level storage units is a multi-level cell (MLC), a triple-level cell (TLC), or a quad-level cell (QLC), and each of the second-level storage units is a single level cell (SLC);each of the first-level storage units is a TLC or a QLC, and each of the second-level storage units is an SLC or an MLC; or each of the first-level storage units is a QLC, and each of the second-level storage units is an SLC, an MLC, or a TLC [(Column 15, lines 1-15 and 45-54; FIGs. 1 and 4-6) wherein Tier 1 may be implemented with storage devices based on persistent memory such as cross-gridded nonvolatile memory with bulk resistance change which may provide very high performance. Tier 1 may operate, for example, as a storage cache for one or more of the other tiers. Tier 2 may be implemented with single-level cell (SLC) NVMe SSDs which may provide high performance but not as high as Tier 1. Tier 3 may be implemented with multi-level cell (MLC) NVMe SSDs which may provide middle-level performance. Tiers 4 and 5 may be implemented with triple-level cell (TLC) and quad-level (QLC) NVMe SSDs, respectively, which may provide successively lower performance but higher capacity. In some embodiments, a partition manager system in accordance with example embodiments of the disclosure may apply similar adjustment methods, for example, between Tier 2 and Tier 3, where Tier 2 may operate as a cache tier for Tier 3, and so on; Tier 1 may be implemented with SLC NVMe SSDs which may provide a very high level of performance and a relatively fine level of granularity. Tier 1 may operate, for example, as a storage cache for one or more of the other tiers. Tier 2 may be implemented with QLC NVMe SSDs which may provide a relatively high level of performance and a relatively coarse level of granularity. Tier 3 may be implemented with HDDs which may provide a relatively lower level of performance and relatively coarse granularity, but at a relatively low cost to correspond to the claimed limitation].
As per dependent claim 18, Yang discloses wherein the device further comprises a second storage unit set, the second storage unit set comprises a second plurality of the first-level storage units [(Column 15, lines 1-15 and 45-54; FIGs. 1 and 4-6) wherein Tier 1 may be implemented with storage devices based on persistent memory such as cross-gridded nonvolatile memory with bulk resistance change which may provide very high performance. Tier 1 may operate, for example, as a storage cache for one or more of the other tiers. Tier 2 may be implemented with single-level cell (SLC) NVMe SSDs which may provide high performance but not as high as Tier 1. Tier 3 may be implemented with multi-level cell (MLC) NVMe SSDs which may provide middle-level performance. Tiers 4 and 5 may be implemented with triple-level cell (TLC) and quad-level (QLC) NVMe SSDs, respectively, which may provide successively lower performance but higher capacity. In some embodiments, a partition manager system in accordance with example embodiments of the disclosure may apply similar adjustment methods, for example, between Tier 2 and Tier 3, where Tier 2 may operate as a cache tier for Tier 3, and so on; Tier 1 may be implemented with SLC NVMe SSDs which may provide a very high level of performance and a relatively fine level of granularity. Tier 1 may operate, for example, as a storage cache for one or more of the other tiers. Tier 2 may be implemented with QLC NVMe SSDs which may provide a relatively high level of performance and a relatively coarse level of granularity. Tier 3 may be implemented with HDDs which may provide a relatively lower level of performance and relatively coarse granularity, but at a relatively low cost to correspond to the claimed limitation],
and Linnell discloses a second plurality of first-level storage units providing the storage function of the hot spare area, the second plurality of first-level storage units is separate from the plurality of first-level storage units [(Column 11, lines 40-62 and Column 15, lines 29-34) wherein the technique also comprises provisioning 430 at least a portion of the cache as a virtual hot spare device in response to detecting a failure state in connection with one of the data storage devices. In this embodiment, the technique is configured such that a portion at least of the cache may be dynamically provisioned to act as the hot spare in response to detecting a failure state in connection with one of the data storage devices. A problem associated with conventional approaches is that physical hot spares introduce issues of location in the data storage system. The provisioning of the cache as the virtual hot spare device in response to detecting a failure state has the advantage of providing topology-independent hot spare capability. In one embodiment, as will be described in more detail below, the data associated with the failed device may be rebuilt in the virtual hot spare device in the cache. Additionally, the failed device may be repaired, corrected or replaced. The rebuilt data may be copied and returned to the new data storage device in response to repairing, correcting or replacing the failed data storage device with the new data storage device. In such a scenario, the virtual hot spare device may be re-provisioned as cache in response to copying or returning rebuilt data to the new data storage device. The cache and data storage devices are configured so that at least a portion of the cache is provisioned as a virtual hot spare device dv in response to detecting the failure state. The virtual hot spare device dv can form with the surviving four data storage devices (d0, d1, d2, d4) a RAID configuration to correspond to the claimed limitation].
Nguyen discloses the plurality of second-level storage units is configured to be obtained by converting the second plurality of first-level storage units comprised in the second storage unit set [(Paragraphs 0006, 0028-0030, 0034-0036, 0042 and 0062; FIGs.1 and 4) where FIG. 4 depicts system 400 with host device 102 communicatively coupled to storage system 404, in accordance with one embodiment. In storage system 404, a portion of the system-level OP space may be repurposed as one or more temporary hot spare drives. For example, a portion of system-level OP space 320a may be repurposed as temporary spare space (SP) 422a; a portion of system-level OP space 320b may be repurposed as temporary spare space (SP) 422b; and a portion of system-level OP space 320c may be repurposed as temporary spare space (SP) 422c. Temporary spare space 422a, temporary spare space 422b and temporary spare space 422c may collectively form one or more temporary spare drives which may be used to rebuild the data of one or more failed storage units. Upon recovery of the failed storage unit(s), the rebuilt data may be copied from the temporary spare drive(s) onto the recovered storage unit(s), and the temporary spare drive(s) may be converted back into system-level OP space (i.e., storage system 404 reverts to storage system 304) to correspond to the claimed limitation], such that a storage capacity of each of the second plurality of first-level storage units is greater than the storage capacity of each of the plurality of second-level storage units [(Paragraphs 0006, 0028-0030, 0054 and 0062; FIGs.1 and 4) where FIG. 12 depicts an arrangement of data blocks and error-correction blocks, after blocks of SSD 2 have been rebuilt and saved in the second temporary spare drive, in accordance with one embodiment. More specifically, blocks d.02, d.13, d.24, P.3, Q.4, R.5, d.60, d.80 and d.91 may be stored on spare blocks S.01, S.11, S.21, S.31, S.41, S.51, S.61, S.81 and S.91, respectively. After the contents of SSD 2 have been rebuilt and saved in the second temporary spare drive, the storage system once again recovers a triple-parity level of redundancy (and no longer operates in a degraded mode of operation). However, the amount of system-level OP space is further reduced to correspond to the claimed limitation].
As per dependent claim 19, Nguyen discloses wherein in a running process of the device, the controller is configured to convert the second plurality of first-level storage units comprised in the second storage unit set into the plurality of second-level storage units [(Paragraphs 0005-0006, 0012, 0028-0030 and 0034-0040; FIGs.1, 4 and 5) where Nguyen teaches FIG. 5 depicts a flow diagram of a process for repurposing system-level OP space into a temporary hot spare and using the temporary hot spare to store rebuilt data (i.e., data of a failed drive rebuilt using data and error-correction blocks from non-failed drives). FIG. 5 depicts flow diagram 500 of a process for repurposing system-level OP space as a temporary hot spare and using the temporary hot spare to store rebuilt data (i.e., data of a failed storage unit rebuilt using data and error-correction blocks from non-failed drives), in accordance with one embodiment. In step 502, storage system controller 106 may designate a portion of the advertised space (i.e., advertised by a drive manufacturer) as a system-level OP space. Step 502 may be part of an initialization of storage system 204 to correspond to the claimed limitation].
As per dependent claim 20, Nguyen discloses wherein the controller is configured to when storage space of the hot spare area is insufficient to store the target data, convert the second plurality of first-level storage units comprised in the second storage unit set into the plurality of second-level storage units [(Paragraphs 0005-0006, 0028-0030 and 0034-0037; FIGs.1 and 4) where Nguyen teaches that wherein the amount of system-level OP space that is repurposed may be the number of failed SSDs multiplied by the advertised capacity (e.g., 216a, 216b, 216c) of each of the SSDs (assuming that all the SSDs have the same capacity). In another embodiment, the amount of system-level OP space that is repurposed may be the sum of each of the respective advertised capacities (e.g., 216a, 216b, 216c) of the failed SSDs. In another embodiment, the amount of system-level OP space that is repurposed may be equal to the amount of space needed to store all the rebuilt data. In yet another embodiment, system-level OP space may be re-purposed on the fly (i.e., in an as needed basis). For instance, a portion of the system-level OP space may be re-purposed to store one rebuilt data block, then another portion of the system-level OP space may be re-purposed to store another rebuilt data block, and so on to correspond to the claimed limitation].
As per dependent claim 21, Nguyen discloses wherein when a hard disk in the device is faulty, the controller is configured to convert the plurality of second-level storage units into the plurality of first-level storage units [(Paragraphs 0005-0006, 0028-0030 and 0034-0036; FIGs.1 and 4) where Nguyen teaches wherein during the failure of a storage unit, a portion of the system-level OP space may be repurposed as a temporary hot spare, trading off system-level garbage collection efficiency (and possibly device-level garbage collection efficiency) for a shortened degraded mode of operation (as compared to waiting for the repair and/or replacement of the failed drive). The recovered or rebuilt data may be saved on the temporary hot spare (avoiding the need for a dedicated hot spare). After the failed storage unit has been repaired and/or replaced, the rebuilt data may be copied from the temporary hot spare onto the restored storage unit, and the storage space allocated to the temporary hot spare may be returned to the system-level OP space to correspond to the claimed limitation], wherein the plurality of first-level storage units obtained through conversion is used to restore data stored in the hard disk [(Paragraphs 0006, 0028-0030 and 0034-0036; FIGs.1 and 4) where Nguyen teaches FIG. 4 depicts system 400 with host device 102 communicatively coupled to storage system 404, in accordance with one embodiment. In storage system 404, a portion of the system-level OP space may be repurposed as one or more temporary hot spare drives. For example, a portion of system-level OP space 320a may be repurposed as temporary spare space (SP) 422a; a portion of system-level OP space 320b may be repurposed as temporary spare space (SP) 422b; and a portion of system-level OP space 320c may be repurposed as temporary spare space (SP) 422c. Temporary spare space 422a, temporary spare space 422b and temporary spare space 422c may collectively form one or more temporary spare drives which may be used to rebuild the data of one or more failed storage units. Upon recovery of the failed storage unit(s), the rebuilt data may be copied from the temporary spare drive(s) onto the recovered storage unit(s), and the temporary spare drive(s) may be converted back into system-level OP space (i.e., storage system 404 reverts to storage system 304) to correspond to the claimed limitation].
As for independent claims 22 and 28, the applicant is directed to the rejections to claim 16 set forth above, as they are rejected based on the same rationale.
As for dependent claims 23 and 29, the applicant is directed to the rejections to claim 17 set forth above, as they are rejected based on the same rationale.
As for dependent claims 24 and 30, the applicant is directed to the rejections to claim 18 set forth above, as they are rejected based on the same rationale.
As for dependent claims 25 and 31, the applicant is directed to the rejections to claim 19 set forth above, as they are rejected based on the same rationale.
As for dependent claims 26 and 32, the applicant is directed to the rejections to claim 20 set forth above, as they are rejected based on the same rationale.
As for dependent claims 27 and 33, the applicant is directed to the rejections to claim 21 set forth above, as they are rejected based on the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED GEBRIL, whose telephone number is (571) 270-1857. The examiner can normally be reached on Monday-Friday, 8:00am-5:00pm (ALT Friday).
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jared Rutz, can be reached at 571-272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-270-2857.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMED M GEBRIL/Primary Examiner, Art Unit 2135