Prosecution Insights
Last updated: April 19, 2026
Application No. 19/004,902

STORAGE DEVICE CONTROLLING TARGET OPERATION BASED ON COLLECTED PERFORMANCE INFORMATION AND OPERATING METHOD THEREOF

Non-Final OA (§103, §112)
Filed: Dec 30, 2024
Examiner: MENDEL, JULIAN SCOTT
Art Unit: 2133
Tech Center: 2100 — Computer Architecture & Software
Assignee: SK Hynix Inc.
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 99%

Examiner Intelligence

Grants 79% (above average)
Career Allow Rate: 79% (26 granted / 33 resolved), +23.8% vs TC avg
Interview Lift: +55.6% among resolved cases with an interview (strong)
Avg Prosecution: 2y 1m (fast prosecutor), 23 currently pending
Career History: 56 total applications across all art units

Statute-Specific Performance

§101: 10.1% (-29.9% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 15.2% (-24.8% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)
Note: deltas are relative to the Tech Center average estimate. Based on career data from 33 resolved cases.
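The derived figures above can be cross-checked with a few lines of arithmetic. The sketch below (variable names are ours) recomputes the career allow rate from the raw 26 granted / 33 resolved count and backs out the Tech Center average implied by each statute-specific delta.

```python
# Reproduce the dashboard's derived figures from its raw counts.
granted, resolved = 26, 33
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")  # 78.8%, displayed as 79%

# Each statute panel shows a rate and its delta vs the Tech Center average,
# so the implied TC average is simply rate - delta.
statute_stats = {"§101": (10.1, -29.9), "§103": (52.4, +12.4),
                 "§102": (15.2, -24.8), "§112": (20.8, -19.2)}
for statute, (rate, delta) in statute_stats.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")
```

All four deltas resolve to the same implied Tech Center estimate (40.0%), consistent with the single average line described in the chart note.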

Office Action

§103 §112
DETAILED ACTION

This Action is responsive to the Application filed on 12/30/2024.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-15 are pending and have been examined.

Information Disclosure Statement

The information disclosure statement filed on 12/30/2024 does not fully comply with the requirements of 37 CFR 1.98(b) because no copy or translation of the foreign references is provided. Since the submission appears to be bona fide, applicant is given ONE (1) MONTH from the date of this notice to supply the above-mentioned omissions or corrections in the information disclosure statement. NO EXTENSION OF THIS TIME LIMIT MAY BE GRANTED UNDER EITHER 37 CFR 1.136(a) OR (b). Failure to timely comply with this notice will result in the above-mentioned information disclosure statement being placed in the application file with the noncomplying information not being considered. See 37 CFR 1.97(i).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding Claim 5: Claim 5, lines 2-3, recites “wherein the performance information includes a size of data written (size A), during a reference time period, in a first memory area” (emphasis added), the scope of which cannot be determined due to multiple reasonable interpretations in light of the specification. In particular, examiner cannot determine whether applicant intends to claim an embodiment whereby:

1) The “size A” performance information is stored specifically in “a first memory area” (i.e., further limiting where in “a target memory area” certain performance information is stored; see Claim 1, lines 4-5); OR
2) The “size A” performance information corresponds to a measure of an amount of data which is written into “a first memory area” (i.e., further limiting what kind of data the claimed performance information represents).

Therefore, the scope of Claim 5 is indefinite, and the claim is rejected under 35 U.S.C. 112(b).

Similarly, Claim 5, lines 5-7, recites “and a size of data written (size B), due to a failure to write to the first memory area during the reference time period, in a second memory area” (emphasis added), the scope of which cannot be determined due to multiple reasonable interpretations in light of the specification. In particular, examiner cannot determine whether applicant intends to claim an embodiment whereby:

1) The “size B” performance information is stored specifically in “a second memory area”; OR
2) The “size B” performance information corresponds to a measure of an amount of data which is written into “a second memory area” as a result of a failed write to the first memory area.

Therefore, the scope of Claim 5 is indefinite, and the claim is rejected under 35 U.S.C. 112(b). For purposes of applying prior art, examiner will interpret Claim 5 according to interpretation 2) in each instance above. Claims 6-8 depend on Claim 5 and are similarly rejected under 35 U.S.C. 112(b).
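For drafting purposes, the examiner's adopted interpretation 2) treats sizes A and B as measurements of write traffic rather than as records tied to a storage location. The hypothetical sketch below (names are ours, not from the claims) captures that reading.

```python
from dataclasses import dataclass

@dataclass
class PerformanceInfo:
    """Hypothetical model of Claim 5 under the examiner's adopted
    interpretation 2): sizes A and B *measure* write traffic to the two
    areas; they are not limited to where the log itself is stored."""
    size_a: int  # bytes written to the first memory area during the reference period
    size_b: int  # bytes redirected to the second memory area, during the same
                 # period, after failed writes to the first memory area

# Under interpretation 1), by contrast, "size A" would instead denote a log
# record required to reside in the first memory area itself.
info = PerformanceInfo(size_a=4096, size_b=512)
print(info)  # PerformanceInfo(size_a=4096, size_b=512)
```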
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4, 9, and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hoang et al. (US 20170131948 A1) (cited by applicant in IDS dated 12/30/2024) (hereafter referred to as Hoang) further in view of Park et al. (US 20210326061 A1) (hereafter referred to as Park).

Regarding Claim 1: Hoang discloses the following limitations: A storage device (Solid State Drive 200, Fig. 2A) comprising: a memory (Flash Memory 220, Fig. 2A) including a plurality of memory blocks (¶0057); and a controller (Controller 230, Fig. 2A) configured to: store performance information (“logged data” [0070]) of the storage device in a target memory area including one or more of the plurality of memory blocks (“A logging program can be used to periodically record the monitored attributes … The time logged data 292 can be stored in the solid state drive” [0070] // Fig. 2A // ¶¶0068-71) – As shown in Fig. 2A and taught in ¶0070, logged data including “attributes or indicators of the solid state drive” (see ¶0071; i.e., including “performance information” of SSD 200) is stored in SSD 200 (i.e., in “a target memory area”) -- on determination that a target condition is satisfied (“periodically logged … at a regular number of writes written to the memories of the solid state drive” [0080] // ¶0070) -- As taught in ¶0070 and clarified in ¶0080, logged data is periodically logged when a predetermined number of writes is performed on the SSD --; wherein the performance information includes a size of invalid data stored in the memory (“logged data can include … Retired Block Count” [0070]) – In this context, examiner considers a “Retired Block Count” as reading on the claimed “a size of invalid data stored in the memory” -- at a time when a size of a free space included in the memory is changed (“time logged data” [0071] // ¶¶0035-37; 0048) – As taught in ¶¶0035 and 0071, the aforementioned logged data is “time logged” with a timestamp associating the data with a particular time. As clarified in ¶¶0036-37 and 0048, workloads performed on the SSD cause data to be written into and erased from the SSD. One of ordinary skill in the art would accordingly understand that at any given time, as a result of the workload on the SSD, an amount of space available for storing data in the SSD (i.e., “a size of free space included in the memory”) might change due to writing and erasing data from the SSD.
Examiner accordingly considers any time logged data as occurring “at a time when a size of a free space … is changed” (i.e., data is logged as the amount of available space in the SSD changes over time) --, wherein the controller is configured to: execute a target operation (“In some embodiments, the attributes or indicators of the solid state drive … can be recorded … The recorded data can also be analyzed to identify algorithms, e.g., garbage collection or wear leveling algorithm, that can improve the drive life time or performance” [0065]) – As taught in ¶0065, the time logged data is analyzed so that garbage collection or wear leveling can be performed to improve the SSD lifetime (i.e., at least “a target operation” is executed to improve the SSD lifetime) --

Although Hoang ¶0065 discloses that garbage collection or wear leveling can be performed after analysis of the time logged data, Hoang is silent regarding a free space threshold which causes the target operation to be performed. Specifically, Hoang is silent regarding the following limitations: execute a target operation when the size of the free space included in the memory is less than a threshold free space size.

However, Park discloses that target operations such as garbage collection are performed when a number of free memory blocks in a memory device is below a predetermined threshold. Park discloses the following limitations: execute (Fig. 13, step S250) a target operation (“a garbage collection operation” [0140]) when the size of the free space (“a number NFB of free blocks” [0139]) included in the memory is less than (Fig. 13, step S230 YES) a threshold free space size (“a predetermined second threshold value NGB” [0139] // “When the number NFB of free blocks is less than the predetermined second threshold value NGB (e.g., S230, YES), a garbage collection operation may be performed” [0140]) – As shown in Fig. 13 and taught in ¶¶0139-140, garbage collection is performed when a number of free blocks included in a memory (i.e., “the size of the free space included in the memory”) is less than a predetermined second threshold (i.e., “a threshold free space size”).

Hoang and Park are considered analogous to the claimed invention because both relate to the same field of scheduling maintenance operations on memory devices based on memory device performance. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hoang with the teachings of Park and realize a storage device which performs a target operation when a size of free memory space is less than a threshold size. Performing target operations on a memory device based on a monitored state of the memory device improves performance by enabling efficient use of channels in systems with non-uniform memory device configurations, as disclosed in Park ¶0113: “In accordance with the embodiment, the memory controller 200 may select a memory device on which the write operation is to be performed … based on the memory state. Accordingly, channels can be used as efficient as possible under a structure in which different numbers of ways are coupled with respect to the channels.” [0113]

Regarding Claim 2: The same motivation to combine provided in Claim 1 is equally applicable to Claim 2. The combined teachings of Hoang and Park disclose the following limitations: The storage device according to claim 1, wherein the target condition is satisfied when a cumulative size of write-requested data from outside of the storage device, after a reference time period, is a multiple of a unit size (Hoang, “The monitored attributes can be periodically logged … For example, at a first data logged, the logged attributes can include the accumulated number of writes to the SSD.
When the number of writes increases to a new write number, e.g., having a constant write increment (which can be considered as an index interval), the attributes can be logged again.” [0080] // ¶0060) – As taught in Hoang ¶0080, logged data is periodically logged when an accumulated number of writes (i.e., “a cumulative size of write-requested data”) which are received from a host application (see ¶0060; i.e., “from outside of the storage device”) since a previous logging of data (i.e., “after a reference time period”) reaches a particular “write increment” (i.e., “is a multiple of a unit size”).

Regarding Claim 4: The same motivation to combine provided in Claim 1 is equally applicable to Claim 4. The combined teachings of Hoang and Park disclose the following limitations: The storage device according to claim 1, wherein the target operation is a garbage collection operation (Park, “a garbage collection operation” [0140]).

Regarding Claim 9: Hoang discloses the following limitations: A storage device (Solid State Drive 200, Fig. 2A) comprising: a memory (Flash Memory 220, Fig. 2A) including a plurality of memory blocks (¶0057); and a controller (Controller 230, Fig. 2A) configured to: store performance information (“logged data” [0070]) of the storage device in a target memory area including one or more of the plurality of memory blocks (“A logging program can be used to periodically record the monitored attributes … The time logged data 292 can be stored in the solid state drive” [0070] // Fig. 2A // ¶¶0068-71) – As shown in Fig. 2A and taught in ¶0070, logged data including “attributes or indicators of the solid state drive” (see ¶0071; i.e., including “performance information” of SSD 200) is stored in SSD 200 (i.e., in “a target memory area”) -- on determination that a target condition is satisfied (“periodically logged … at a regular number of writes written to the memories of the solid state drive” [0080] // ¶0070) -- As taught in ¶0070 and clarified in ¶0080, logged data is periodically logged when a predetermined number of writes is performed on the SSD --; and control a target operation based on the stored performance information (“In some embodiments, the attributes or indicators of the solid state drive … can be recorded … The recorded data can also be analyzed to identify algorithms, e.g., garbage collection or wear leveling algorithm, that can improve the drive life time or performance” [0065]) – As taught in ¶0065, the time logged data is analyzed so that garbage collection or wear leveling can be performed to improve the SSD lifetime (i.e., at least “a target operation” is executed based on the time logged data) -- wherein the performance information includes a size of invalidated data (“logged data can include … Retired Block Count” [0070]) – In this context, examiner considers a “Retired Block Count” as reading on the claimed “a size of invalidated data” -- … during a reference time period (“time logged data” [0071] // ¶¶0035-37; 0048) – As taught in ¶¶0035 and 0071, the aforementioned logged data is “time logged” with a timestamp associating the data with a particular time (i.e., is associated with “a reference time period”).

Although Hoang ¶0065 discloses that garbage collection can generally be performed on the SSD, Hoang does not appear to explicitly link the concept of garbage collection to invalidated and migrated data.
Specifically, Hoang does not explicitly disclose the following limitations: invalidated data among data migrated by a garbage collection operation.

However, Park clarifies how garbage collection relates to invalidated and migrated data. Park discloses the following limitations: invalidated data among data migrated by a garbage collection operation (“The garbage collection operation may be performed in a manner that transfers and stores, in a free block, valid data of memory blocks … and then invalidates the valid data stored in the sacrificial memory block. Meanwhile, the garbage collection operation may include an operation of erasing the sacrificial memory block” [0140]) – As taught in Park ¶0140, garbage collection causes data of a “sacrificial memory block” (e.g., analogous to a retired block of Hoang) to be invalidated after valid data is migrated elsewhere. One of ordinary skill in the art would accordingly understand that a number of retired/sacrificial memory blocks would correspond to an amount of data invalidated as a result of garbage collection.

Hoang and Park are considered analogous to the claimed invention because both relate to the same field of scheduling maintenance operations on memory devices based on memory device performance. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hoang with the teachings of Park and realize a storage device which monitors and stores performance data, including information relating to data invalidated by garbage collection, in order to perform target operations on a memory device.
Performing target operations on a memory device based on a monitored state of the memory device improves performance by enabling efficient use of channels in systems with non-uniform memory device configurations, as disclosed in Park ¶0113: “In accordance with the embodiment, the memory controller 200 may select a memory device on which the write operation is to be performed … based on the memory state. Accordingly, channels can be used as efficient as possible under a structure in which different numbers of ways are coupled with respect to the channels.” [0113]

Regarding Claim 12: Hoang discloses the following limitations: A storage device (Solid State Drive 200, Fig. 2A) comprising: a memory (Flash Memory 220, Fig. 2A) including a plurality of memory blocks (¶0057); and a controller (Controller 230, Fig. 2A) configured to: store performance information (“logged data” [0070]) of the storage device in a target memory area including one or more of the plurality of memory blocks (“A logging program can be used to periodically record the monitored attributes … The time logged data 292 can be stored in the solid state drive” [0070] // Fig. 2A // ¶¶0068-71) – As shown in Fig. 2A and taught in ¶0070, logged data including “attributes or indicators of the solid state drive” (see ¶0071; i.e., including “performance information” of SSD 200) is stored in SSD 200 (i.e., in “a target memory area”) -- on determination that a target condition is satisfied (“periodically logged … at a regular number of writes written to the memories of the solid state drive” [0080] // ¶0070) -- As taught in ¶0070 and clarified in ¶0080, logged data is periodically logged when a predetermined number of writes is performed on the SSD --; and control a target operation based on the stored performance information (“In some embodiments, the attributes or indicators of the solid state drive … can be recorded … The recorded data can also be analyzed to identify algorithms, e.g., garbage collection or wear leveling algorithm, that can improve the drive life time or performance” [0065]) – As taught in ¶0065, the time logged data is analyzed so that garbage collection or wear leveling can be performed to improve the SSD lifetime (i.e., at least “a target operation” is executed based on the time logged data) -- wherein the performance information includes a size of data requested to be written from outside of the storage device during a reference time period (“data 310 can be monitored and logged by the solid state drive … The data can include the number of reads from the SSD or writes to the SSD, accumulated over the life of the SSD … the index intervals can be the difference between the accumulated writes at a subsequent logged data and the accumulated writes at a previous logged data.” [0079-81] // ¶0060) – As taught in ¶¶0079-81, time logged data can include an accumulated number of writes (i.e., “a size of data requested to be written”) which are received from a host application (see ¶0060; i.e., “from outside of the storage device”) since a previous logging of data (i.e., “during a reference time period”) -- and a size of a free space (“The data can include … reserve (or spare) block count” [0079]) – In this context, examiner considers “spare block count” as reading on the claimed “a size of free space” in the SSD -- … during the reference time period (“time logged data” [0071] // ¶¶0035-37; 0048) – As taught in ¶¶0035 and 0071, the aforementioned logged data is “time logged” with a timestamp associating the data with a particular time (i.e., is associated with “the reference time period”).

Although Hoang ¶0065 discloses that garbage collection can generally be performed on the SSD, Hoang does not appear to explicitly link the concept of garbage collection to the aforementioned spare block count. Specifically, Hoang does not explicitly disclose the following limitations: a size of a free space increased through a garbage collection operation.

However, Park clarifies how garbage collection relates to an amount of free space in a memory. Park discloses the following limitations: a size of a free space increased through a garbage collection operation (“The garbage collection operation may be performed in a manner that transfers and stores, in a free block, valid data of memory blocks … and then invalidates the valid data stored in the sacrificial memory block. Meanwhile, the garbage collection operation may include an operation of erasing the sacrificial memory block” [0140]) – As taught in Park ¶0140, garbage collection causes data of a “sacrificial memory block” to be invalidated and then subsequently erased, effectively freeing up the erased sacrificial memory block (e.g., analogous to a spare block of Hoang) to store new data. One of ordinary skill in the art would accordingly understand that a number of spare/erased sacrificial memory blocks would correspond to an amount of free space increased as a result of garbage collection.

Hoang and Park are considered analogous to the claimed invention because both relate to the same field of scheduling maintenance operations on memory devices based on memory device performance.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hoang with the teachings of Park and realize a storage device which monitors and stores performance data, including information relating to free space increased via garbage collection, in order to perform target operations on a memory device. Performing target operations on a memory device based on a monitored state of the memory device improves performance by enabling efficient use of channels in systems with non-uniform memory device configurations, as disclosed in Park ¶0113: “In accordance with the embodiment, the memory controller 200 may select a memory device on which the write operation is to be performed … based on the memory state. Accordingly, channels can be used as efficient as possible under a structure in which different numbers of ways are coupled with respect to the channels.” [0113]

Regarding Claim 13: The same motivation to combine provided in Claim 12 is equally applicable to Claim 13. The combined teachings of Hoang and Park disclose the following limitations: The storage device according to claim 12, wherein the controller is further configured to: execute (Park, Fig. 13, step S250) the garbage collection operation (Park, “a garbage collection operation” [0140]) when the size of the free space (“a number NFB of free blocks” [0139]) included in the memory is less than (Fig. 13, step S230 YES) a threshold free space size (“a predetermined second threshold value NGB” [0139] // “When the number NFB of free blocks is less than the predetermined second threshold value NGB (e.g., S230, YES), a garbage collection operation may be performed” [0140]) – As shown in Park Fig. 13 and taught in ¶¶0139-140, garbage collection is performed when a number of free blocks included in a memory (i.e., “the size of the free space included in the memory”) is less than a predetermined second threshold (i.e., “a threshold free space size”).

Regarding Claim 14: The same motivation to combine provided in Claim 12 is equally applicable to Claim 14. The combined teachings of Hoang and Park disclose the following limitations: The storage device according to claim 13, wherein the target operation is the garbage collection operation (Park, “a garbage collection operation” [0140]).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Hoang further in view of Park and Muchherla et al. (US 20200210330 A1) (hereafter referred to as Muchherla).

Regarding Claim 3: The same motivation to combine provided in Claim 1 is equally applicable to Claim 3. The combined teachings of Hoang and Park disclose the following limitations: The storage device according to claim 1 (see Claim 1 limitation mappings above). The combined teachings of Hoang and Park are silent regarding the following limitations: wherein the controller is further configured to: increase the threshold free space size when the size of invalid data is greater than or equal to a first threshold invalid data size, decrease the threshold free space size when the size of invalid data is less than a second threshold invalid data size; and wherein the first threshold invalid data size is greater than the second threshold invalid data size.

However, Muchherla discloses the following within the context of scheduling garbage collection operations: increase the threshold free space size (“the high threshold” [0026]) when the size of invalid data is greater than or equal to a first threshold invalid data size (“If, however, a lesser amount of storage is available on memory component 112A, the garbage collection component 113 can be less selective and increase the high threshold” [0026] // Fig. 2 // ¶¶0021-30) – As taught in Muchherla ¶¶0021-30 and shown in Fig. 2, a “high threshold” is used by a garbage collection component in order to determine when to schedule garbage collection (see Fig. 2, steps 250 + 260/270), similar to how a “predetermined second threshold value” of Park ¶0139 is used in order to determine when to schedule garbage collection for memory devices. Examiner accordingly considers the “high threshold” of Muchherla ¶0026 as analogous to the claimed “threshold free space size”. As taught in Muchherla ¶0026, data stored in a data block is either “valid” or “invalid”, and thus an amount of “available”/valid storage is inversely proportional to an amount of invalid data. As clarified in ¶0026, the high threshold is increased when “a lesser amount of storage is available” (i.e., when a greater amount of data is invalid; i.e., when “the size of invalid data” is at least “a first threshold invalid data size”) -- decrease the threshold free space size when the size of invalid data is less than a second threshold invalid data size (“For example, if a large amount of storage space is available, garbage collection component 113 can afford to be more selective in identifying candidate blocks for garbage collection and thus, can reduce the high threshold” [0026]) – As additionally disclosed in ¶0026, the high threshold is decreased when “a large amount of storage space is available” (i.e., when a smaller amount of data is invalid; i.e., when “the size of invalid data” is less than “a second threshold invalid data size”) -- ; and wherein the first threshold invalid data size is greater than the second threshold invalid data size (¶0026) – One of ordinary skill in the art would understand that the threshold size of invalid data which is present when “a lesser amount of storage space is available” (i.e., an instance where the high threshold is increased; i.e., when the invalid data size exceeds the first threshold) would be greater than the threshold size of invalid data which is present when “a large amount of storage space is available” (i.e., an instance where the high threshold is decreased; i.e., when the invalid data size is below the second threshold).

Hoang, Park, and Muchherla are all considered analogous to the claimed invention because they all relate to the same field of scheduling maintenance operations on memory devices based on memory device performance. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hoang and Park with the teachings of Muchherla and realize a storage device which increases and decreases a threshold trigger for garbage collection based on an amount of invalid data. Adjusting a garbage collection trigger based on an amount of invalid data improves performance by decreasing write amplification, as disclosed in Muchherla ¶¶0011-12: “Conventional garbage collection solutions simply identify and erase the blocks on the memory component that have the least amount of valid data at the time garbage collection is performed … the conventional memory subsystem is likely garbage collecting blocks with higher levels of valid data, resulting in additional write amplification … In one embodiment, the percentage of valid (or invalid) data in each block is logged periodically … Thus, if garbage collection for those blocks that have been recently written is delayed, there is a higher chance that more of that data will become invalidated over time resulting in lesser write amplification.” [0011-12]

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Hoang further in view of Park and Cariello (US 20210048952 A1) (hereafter referred to as Cariello).

Regarding Claim 5: The same motivation to combine provided in Claim 1 is equally applicable to Claim 5.
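Before the Claim 5 mappings, it may help to see the two pieces of threshold logic relied on above in one place: Park's trigger (run garbage collection when free blocks fall below a threshold, Fig. 13) and Muchherla's invalid-data-driven adjustment of that threshold (¶0026). The sketch below is illustrative only; all names and numbers are ours, not taken from either reference.

```python
def adjust_gc_threshold(threshold: int, invalid_blocks: int,
                        first_thresh: int, second_thresh: int) -> int:
    """Muchherla-style adjustment (illustrative): raise the GC trigger when
    invalid data is plentiful, lower it when invalid data is scarce.
    Requires first_thresh > second_thresh, mirroring the claim language."""
    assert first_thresh > second_thresh
    if invalid_blocks >= first_thresh:
        return threshold + 1          # be less selective: trigger GC sooner
    if invalid_blocks < second_thresh:
        return max(1, threshold - 1)  # be more selective: trigger GC later
    return threshold

def should_garbage_collect(free_blocks: int, threshold: int) -> bool:
    """Park-style trigger (Fig. 13, step S230): GC when NFB < NGB."""
    return free_blocks < threshold

# Illustrative walk-through:
ngb = 4  # current threshold free space size (hypothetical)
ngb = adjust_gc_threshold(ngb, invalid_blocks=12, first_thresh=10, second_thresh=3)
print(ngb)                                                   # 5
print(should_garbage_collect(free_blocks=4, threshold=ngb))  # True
```

The combination the rejection proposes is essentially this composition: the logged invalid-data figure tunes the trigger, and the trigger decides when the target operation runs.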
The combined teachings of Hoang and Park disclose the following limitations: The storage device according to claim 1, wherein the performance information includes a size of data written (size A), during a reference time period, in a first memory area including one or more first type memory blocks from among the plurality of memory blocks (Hoang, “The data can include the number of reads from the SSD or writes to the SSD, accumulated over the life of the SSD … the index intervals can be the difference between the accumulated writes at a subsequent logged data and the accumulated writes at a previous logged data.” [0079-81]) – As taught in Hoang ¶¶0079-81, the time logged data can include a number of writes (i.e., “a size of data written”) performed on the SSD (i.e., at least “in a first memory area including one or more first type memory blocks”) since a previous logging of data (i.e., “during a reference time period”) --, and a size of data written (size B), due to a failure to write to the first memory area during the reference time period (Hoang, “The time logged data can include … Program Fail Count” [0070] // ¶0079) – As clarified in ¶0070, the time logged data can additionally include a “Program Fail Count” indicating a number of (i.e., “a size of”) flash program failures (see ¶0079; i.e., “data written … due to a failure to write”).

The combined teachings of Hoang and Park are silent regarding a storage device including a second memory area including second type memory blocks used for storing data which failed to be written to a first memory area including first type memory blocks.
Specifically, the combined teachings of Hoang and Park are silent regarding the following limitations: data written … due to a failure to write to the first memory area … in a second memory area including one or more second type memory blocks from among the plurality of memory blocks, and wherein the first type memory blocks operate at a higher speed than the second type memory blocks.

However, Cariello discloses the following limitations: data (“2 bit-per-cell data” [0052]) written … due to a failure to write to the first memory area (Write Buffer Areas 622 + 624, Fig. 6) … in a second memory area (Write Buffer Area 626, Fig. 6) including one or more second type memory blocks (“TLC data” [0052] // ¶0073) from among the plurality of memory blocks (¶¶0031; 0052; 0073 // Figs. 1 + 6 + 7) – Examiner considers Memory Device 110 of Cariello Fig. 1 as analogous to SSD 200 of Hoang Fig. 2A. As taught in Cariello, a memory system includes a plurality of memory block types (e.g., “TLC” and “2-bit-per-cell”; see ¶0052) which differ according to a number of bits represented by an individual cell (¶0031). Separate memory areas (e.g., Areas 622 + 624 and Area 626 of Fig. 6) include separate types of memory cells (¶0073) and store respective types of host write data (e.g., “TLC data” and “2 bit-per-cell data” [0052]). When a first type memory area (e.g., Areas 622 + 624; see also ¶0073) fills with a first type of data (e.g., “2 bit-per-cell data”; see ¶0052), remaining overflow of the first type of data is instead stored in a second type memory area (e.g., Area 626; see also ¶0073 and Fig. 7) which typically stores a second type of data (e.g., “TLC data”; see the ¶0052 embodiment whereby 2 bit-per-cell data is allocated 2 memory pages and an overflow page shared with TLC data).
In such an embodiment, the overflowed 2 bit-per-cell data which is written into a TLC memory area corresponds to data which is written into a second memory area (e.g., a TLC memory block) due to a failure to write to a first memory area (e.g., due to no remaining space in a 2-bit-per-cell area).— and wherein the first type memory blocks operate at a higher speed than the second type memory blocks (¶¶0031; 0042) – As taught in Cariello, a TLC type of cell stores more bits than a “2-bit-per-cell” (e.g., MLC; see ¶0031) type of cell, and therefore requires more data to program (see ¶0042). One of ordinary skill in the art would accordingly understand that a “2-bit-per-cell” type of memory block would “operate at a higher speed” as compared to a TLC type of cell.

Park, Hoang, and Cariello are all considered analogous to the claimed invention because they all relate to the same field of monitoring and managing free space within a memory device. Therefore, it would have been obvious for someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Park and Hoang with the teachings of Cariello and realize a storage device which writes data to a second memory area when a first memory area runs out of space. Doing so allows for stream data to be overflowed into a separate memory area without first performing a flush, thereby enabling a host to simultaneously stream separate types of data and thereby saving hardware resources, as disclosed in Cariello ¶0043: “An approach to save on hardware resources is to share the write buffer between the SLC cursor and the MLC cursor … the TLC data is stored in SLC data memory space of the write buffer when there is overflow of the TLC data memory space. This allows both an SLC data steam and a TLC data stream (or other MLC data stream) from the host to coexist at the same time.” [0043]

Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Hoang further in view of Park, Cariello, and Sugawara et al. (US 20220374216 A1) (hereafter referred to as Sugawara).

Regarding Claim 6, the same motivation to combine provided in Claim 5 is equally applicable to Claim 6.

The combined teachings of Hoang, Park, and Cariello disclose the following limitations: The storage device according to claim 5 (see Claim 5 limitation mappings above)--

The combined teachings of Hoang, Park, and Cariello are silent regarding the following limitations: wherein the controller is configured to further increase a size of the first memory area by a reference size when a ratio of the size B to the size A is greater than or equal to a threshold ratio.

However, Sugawara discloses the following limitations: wherein the controller is configured to further increase (Fig. 6, step S205) a size of the first memory area (“SLC Area” [0077]) by a reference size when (Fig. 6, step S204 ‘Yes’) a ratio of the size B to the size A is greater than or equal to a threshold ratio (“in step S204, the main control unit determines whether the size of data to be moved is larger than the free space of the SLC area A1 … In step S205, the main control unit 10 changes part of the QLC area QA1 into the SLC area SA1 … When the dynamic QLC area QA12 is changed into the dynamic SLC area SA12, since the storage capacity becomes (1/4), the main control unit is required to change, into the dynamic SLC area SA12, part of the dynamic QLC area QA12 corresponding to four times the insufficient capacity of the SLC area SA1” [0077-78] // Figs. 1 + 6) – Examiner considers SSD 40 shown in Sugawara Fig. 1 as analogous to Memory Device 110 of Cariello Fig. 1. As shown in Sugawara Fig. 6, data is written into memory into either an SLC area (see Fig. 6, step S206; i.e., into a “first memory area”) or into a second QLC area (see Fig. 6, step S202).
When an overflow condition occurs (e.g., when an amount of data to be written into the SLC area exceeds the free space of the SLC area; see Fig. 6, step S204), the size of the SLC area is increased by a capacity equal to four times the insufficient capacity of the SLC area (i.e., the SLC area is increased “by a reference size”). Examiner notes that when an overflow condition occurs, the amount of overflow data (e.g., “the insufficient capacity of the SLC area”; i.e., “the size B”; see also Claim 5 limitation mappings above) is necessarily non-zero; and thus a ratio of the size B to a total amount of data which is written (e.g., “the size of data to be moved”; i.e., “the size A”; i.e., a non-zero number) would be non-zero. Otherwise, when the amount of overflow data is zero, the aforementioned ratio would equal zero. In such an embodiment, a ratio of 0 is the threshold for determining to increase the size of the SLC area (i.e., “a threshold ratio” corresponds to 0).

Hoang, Park, Cariello, and Sugawara are all considered analogous to the claimed invention because they all relate to the same field of monitoring and managing free space in a memory device. Therefore, it would have been obvious for someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hoang, Park, and Cariello with the teachings of Sugawara and realize a storage device which increases the size of a first memory area by a reference amount when an overflow condition occurs.
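The claim-6 condition discussed above (grow the first area when the ratio of size B to size A meets a threshold ratio) can be sketched as follows; the reference size, threshold value, and function name are hypothetical and not taken from Sugawara:

```python
# Hypothetical sketch of the claim-6 condition: increase the first memory
# area by a fixed reference size when the ratio of failed-write bytes
# (size B) to written bytes (size A) reaches a threshold ratio. Under the
# examiner's reading, a threshold of 0 with non-zero overflow triggers growth.
REFERENCE_SIZE = 64      # illustrative growth step (arbitrary units)
THRESHOLD_RATIO = 0.0    # maps to the "ratio of 0" threshold discussed above

def maybe_grow_first_area(first_area_size, size_a, size_b,
                          reference_size=REFERENCE_SIZE,
                          threshold_ratio=THRESHOLD_RATIO):
    """Return the (possibly increased) size of the first memory area."""
    if size_a == 0:
        return first_area_size  # nothing written; nothing to compare
    ratio = size_b / size_a
    # The size_b > 0 guard mirrors the examiner's note that overflow data
    # must be non-zero for the area to grow when the threshold ratio is 0.
    if ratio >= threshold_ratio and size_b > 0:
        return first_area_size + reference_size
    return first_area_size

print(maybe_grow_first_area(256, 100, 10))  # 320
print(maybe_grow_first_area(256, 100, 0))   # 256
```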
Doing so enables a storage device to ensure proper disk space for pre-installing data into an SLC area, which reduces the chance of data corruption and enables a memory device to be operated normally after shipment, as disclosed in Sugawara ¶¶0082 // 0061: “Thus, in the manufacturing method (pre-installation method) of the information processing apparatus 1 according to the present embodiment, even when the SLC area SA1 is out of disk space, the QLC area QA1 can be changed into the SLC area SA1 to move the program (pre-installed program) and data to be preloaded from the QLC area QA1 to the SLC area SA1 properly.” [0082] // “the pre-installation of data in the SLC area SA1 can reduce the chance of corrupted data, and hence the information processing apparatus 1 can be operated normally after shipment.” [0061]

Regarding Claim 7, the same motivation to combine provided in Claim 6 is equally applicable to Claim 7.

The combined teachings of Hoang, Park, Cariello, and Sugawara disclose the following limitations: The storage device according to claim 6, wherein the target operation is an operation of storing write-requested data from outside of the storage device in the memory (Cariello, “transfer data from the write buffer to a memory array of the memory device” [0049] // Figs. 6 + 9) – As shown in Cariello Fig. 9, write data received from a host (step 910) is flushed into memory (step 945) when no space remains in a given memory area (step 935). In this context, flushing (i.e., “storing write-requested data from outside of the storage device”) corresponds to the “target operation” which is performed when an amount of free space falls below a threshold free space size (e.g., 0).

Regarding Claim 8, the same motivation to combine provided in Claim 6 is equally applicable to Claim 8.
The combined teachings of Hoang, Park, Cariello, and Sugawara disclose the following limitations: The storage device according to claim 6, wherein the controller is further configured to decrease the size of the first memory area by the reference size (Sugawara, ¶0057) after all data stored in the first memory area is flushed (Sugawara, Fig. 6, step S202) to the second type memory blocks from among the plurality of memory blocks (Sugawara, “switch some or all dynamic SLC areas SA12 to become dynamic QLC areas instead” [0057] // “The main control unit 10 executes the installer store the pre-installed programs and data … from the SLC area SA1 to the QLC area QA1 in order to secure a temporary file area in the SLC area” [0052] // ¶0049) – As shown in Sugawara Fig. 6, data is moved (i.e., “is flushed”) from the SLC area (“the first memory area”) to the QLC area (“to the second type memory blocks”) (see ¶0052) during a “pre-installation” process (see ¶0049). As taught in ¶0057, all dynamic SLC areas are converted back into QLC areas after the memory device boots up for the first time (i.e., at least “after” the pre-installation process / the method of Fig. 6). In this context, converting all dynamic SLC areas back into QLC areas (e.g., in the opposite manner as shown in Fig. 6, step S205) would correspond to decreasing the amount of SLC space (i.e., “the size of the first memory area”) by at least the amount increased during step S205 (i.e., by at least “the reference size”). See also Claim 6 limitation mappings above.

Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Hoang further in view of Park and Jeddeloh (US 20120144152 A1) (hereafter referred to as Jeddeloh).

Regarding Claim 10, the same motivation to combine provided in Claim 9 is equally applicable to Claim 10.
The combined teachings of Hoang and Park disclose the following limitations: The storage device according to claim 9 (see Claim 9 limitation mappings above)

Although Hoang ¶0065 discloses that the time logged data can be analyzed to determine when to perform wear leveling, the combined teachings of Hoang and Park do not explicitly disclose the following limitations: wherein the controller is configured to further determine whether to execute a wear leveling operation when the size of invalidated data is greater than or equal to a threshold data size.

However, Jeddeloh discloses the following limitations: the controller (Controller 108, Fig. 1 // Memory Management Circuitry 218, Fig. 2) is configured to further determine (¶0034) whether to execute a wear leveling operation when the size of invalidated data is greater than or equal to a threshold data size (“Wear leveling can include dynamic wear leveling to minimize the amount of valid data blocks moved to reclaim a block. Dynamic wear leveling can include a technique called garbage collection in which blocks with more than a threshold amount of invalid pages are reclaimed by erasing the block.” [0025] // ¶0034) – Examiner considers Controller 108 of Jeddeloh Fig. 1 as analogous to Controller 230 of Hoang Fig. 2A. As taught in Jeddeloh ¶0025, wear leveling is performed on memory blocks which are determined to have more than a threshold amount of invalidated pages (i.e., when “the size of invalidated data is greater than … a threshold data size”).

Hoang, Park, and Jeddeloh are considered analogous to the claimed invention because they all relate to the same field of scheduling maintenance operations on memory devices based on memory device performance.
Therefore, it would have been obvious for someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hoang and Park with the teachings of Jeddeloh and realize a storage device which performs wear leveling when an amount of invalidated data exceeds a threshold. Performing wear leveling improves memory device performance by controlling the wear rate on memory devices and subsequently reducing failures experienced by the solid state devices, as disclosed in Jeddeloh ¶0025: “The memory system 104 can implement wear leveling to control the wear rate on the solid state memory devices … A solid state memory device can experience failure after a number of program and/or erase cycles. Wear leveling can reduce the number of program and/or erase cycles performed on a particular group.” [0025]

Regarding Claim 11, the same motivation to combine provided in Claim 10 is equally applicable to Claim 11.

The combined teachings of Hoang, Park, and Jeddeloh disclose the following limitations: The storage device according to claim 10, wherein the target operation is the wear leveling operation (Jeddeloh, “Dynamic wear leveling” [0025])

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Hoang further in view of Park and Watt (US 20210109855 A1) (hereafter referred to as Watt).

Regarding Claim 15, the same motivation to combine provided in Claim 13 is equally applicable to Claim 15.
The combined teachings of Hoang and Park disclose the following limitations: The storage device according to claim 14 (see Claim 14 limitation mappings above)

The combined teachings of Hoang and Park are silent regarding the following limitations: wherein the controller is further configured to: increase the threshold free space size when the size of data is greater than the size of the free space by a threshold difference value or more, and decrease the threshold free space size when the size of the free space is greater than the size of data by the threshold difference value or more.

However, Watt discloses the following within the context of scheduling garbage collection operations: increase the threshold free space size (“increase one or more thresholds” [0045]) when the size of data is greater than the size of the free space by a threshold difference value (“writing large files” [0045]) or more (¶0045), and decrease the threshold free space size (“decrease one or more thresholds” [0046]) when the size of the free space is greater than the size of data by the threshold difference value (“writes of smaller data files” [0046]) or more (¶0046) (“the garbage collection management system 104 can determine one or more modified thresholds … For example, where the application(s) 106 has a workload data that indicates a history of writing large files … the garbage collection management system 104 may increase one or more thresholds to have a lower ratio between allocated space and available space … Alternatively, … where the workload data indicates more frequent writes or writes of smaller data files, the garbage collection management system 104 may decrease one or more thresholds to have a higher ratio between allocated space and available space … Modifying the thresholds to ratios in this way may reduce media wear as a result of performing garbage collection less frequently.” [0045-46] // Fig. 3) – As taught in Watt ¶0046 and shown in Fig. 3, a garbage collection management system adjusts “one or more thresholds” to determine when to perform garbage collection, similar to how a “predetermined second threshold value” of Park ¶0139 is used in order to determine when to schedule garbage collection for memory devices. Examiner accordingly considers the “one or more thresholds” of Watt ¶¶0045-46 as analogous to the claimed “threshold free space size”. As taught in Watt ¶0045, the thresholds for garbage collection are increased when performing writes of large amounts of data (e.g., “large files”), to accommodate workloads which have “a lower ratio between allocated space and available space” (i.e., a lower ratio between “the size of data” and “the size of the free space”). In contrast, as taught in ¶0046, the thresholds for garbage collection are decreased when performing more frequent writes of small amounts of data (e.g., “smaller data files”), in order to accommodate workloads which have “a higher ratio between allocated space and available space” (i.e., a higher ratio between the size of data and the size of free space); per ¶0046, modifying the thresholds to these ratios reduces the frequency of garbage collection and the resulting media wear. In this context, the difference between a “lower” and a “higher” ratio of allocated to available space corresponds to the claimed “threshold difference value”.

Hoang, Park, and Watt are all considered analogous to the claimed invention because they all relate to the same field of scheduling garbage collection in memory devices based on logged memory device performance information. Therefore, it would have been obvious for someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hoang and Park with the teachings of Watt and realize a storage device which increases and decreases a threshold trigger for garbage collection based on a difference between an amount of data written and an amount of free space.
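The adaptive thresholding described for Claim 15 can be sketched as a simple adjustment rule; the function name, step size, and example values below are hypothetical illustrations, not taken from Watt:

```python
# Hypothetical sketch of a workload-adaptive garbage-collection threshold in
# the spirit of the Claim 15 limitations: raise the threshold free space
# size when written data exceeds free space by at least a threshold
# difference, and lower it in the opposite case. Values are illustrative.
def adjust_gc_threshold(threshold, data_size, free_size,
                        threshold_diff, step):
    if data_size - free_size >= threshold_diff:
        return threshold + step          # heavy writes: trigger GC earlier
    if free_size - data_size >= threshold_diff:
        return max(0, threshold - step)  # ample free space: trigger GC later
    return threshold                     # within the dead band: unchanged

print(adjust_gc_threshold(100, data_size=500, free_size=200,
                          threshold_diff=100, step=10))  # 110
print(adjust_gc_threshold(100, data_size=200, free_size=500,
                          threshold_diff=100, step=10))  # 90
print(adjust_gc_threshold(100, data_size=250, free_size=200,
                          threshold_diff=100, step=10))  # 100
```

The symmetric dead band (no change when neither difference reaches the threshold) reflects the "threshold difference value" recited in the claim.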
Doing so improves performance by preventing overly aggressive garbage collection for certain workload types, resulting in reduced wear, as disclosed in Watt ¶¶0003 // 0046: “conventional data management tools often cause a significant amount of media wear as a result of performing overly aggressive writing and re-writing of data on the storage system. In addition, conventional data management tools often sacrifice processing performance of the computing device as a result of garbage collection consuming significant processing resources at times that limit performance of one or more host applications.” [0003] // “Modifying the thresholds to ratios in this way may reduce media wear as a result of performing garbage collection less frequently.” [0046]

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Marcu et al. (US 20140281127 A1) – Discloses a method of adjusting garbage collection trigger thresholds based on previous write activity (see Fig. 3 // ¶0024)

Bennett (US 20180373627 A1) – Discloses a method of adjusting a size of a buffer overflow area (see Fig. 6) based on a system load and other performance metrics (see ¶0094)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JULIAN SCOTT MENDEL whose telephone number is (703)756-1608. The examiner can normally be reached M-F 10am - 4pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rocío del Mar Pérez-Vélez, can be reached at 571-270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.S.M./
Examiner, Art Unit 2133

/ROCIO DEL MAR PEREZ-VELEZ/
Supervisory Patent Examiner, Art Unit 2133

Prosecution Timeline

Dec 30, 2024
Application Filed
Jan 13, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596500
BLOOM FILTER INTEGRATION INTO A CONTROLLER
2y 5m to grant Granted Apr 07, 2026
Patent 12572469
INDEPENDENT FLASH TRANSLATION LAYER TABLES FOR MEMORY
2y 5m to grant Granted Mar 10, 2026
Patent 12572301
PEER-TO-PEER FILE SHARING USING CONSISTENT HASHING FOR DISTRIBUTING DATA AMONG STORAGE NODES
2y 5m to grant Granted Mar 10, 2026
Patent 12561066
DATA STORAGE DURING POWER STATE TRANSITION OF A MEMORY SYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12541451
SOLVING SUBMISSION QUEUE ENTRY OVERFLOW WITH AN ADDITIONAL OUT-OF-ORDER SUBMISSION QUEUE ENTRY
2y 5m to grant Granted Feb 03, 2026


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+55.6%)
2y 1m
Median Time to Grant
Low
PTA Risk
Based on 33 resolved cases by this examiner. Grant probability derived from career allow rate.
