Prosecution Insights
Last updated: April 19, 2026
Application No. 18/829,797

MEMORY SYSTEM, HOST DEVICE AND METHOD FOR CONTROLLING NONVOLATILE MEMORY

Status: Non-Final OA (§103)
Filed: Sep 10, 2024
Examiner: MACKALL, LARRY T
Art Unit: 2139
Tech Center: 2100 — Computer Architecture & Software
Assignee: Kioxia Corporation
OA Round: 1 (Non-Final)
Grant Probability: 85% (Favorable)
Predicted OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 85% — above average (661 granted / 779 resolved; +29.9% vs TC avg)
Interview Lift: +8.1% (moderate), measured across resolved cases with vs. without an interview
Avg Prosecution: 2y 9m (typical timeline)
Currently Pending: 31
Career History: 810 total applications across all art units
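The headline figures above are internally consistent; a quick sketch (figures copied from the panel, including the 93% with-interview rate) reproduces the career allow rate and the interview lift:

```python
granted, resolved = 661, 779
career_allow_rate = granted / resolved                # ≈ 0.849, displayed as 85%
with_interview = 0.93                                 # allow rate for resolved cases with interview
interview_lift = with_interview - career_allow_rate   # ≈ 0.081, the "+8.1%" lift

print(f"career allow rate: {career_allow_rate:.1%}")  # career allow rate: 84.9%
print(f"interview lift:    {interview_lift:+.1%}")    # interview lift:    +8.1%
```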

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 24.8% (-15.2% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 779 resolved cases
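One sanity check worth noting: subtracting each "vs TC avg" delta from the examiner's rate backs out the baseline, and all four statutes imply the same 40.0%, consistent with a single Tech Center average estimate (the "black line" above) rather than per-statute baselines. A short sketch:

```python
# statute -> (examiner rate %, delta vs Tech Center average %), copied from the panel
stats = {
    "101": (7.0, -33.0),
    "103": (50.3, +10.3),
    "102": (24.8, -15.2),
    "112": (7.6, -32.4),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center baseline; 40.0 for every statute
    print(f"§{statute}: examiner {rate:.1f}%, implied TC avg {tc_avg:.1f}%")
```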

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Information Disclosure Statement

The Information Disclosure Statement filed on 10 Sep 2024 has been considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 6, 8, 9, 11, 12, 17, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Donlan et al. (U.S. Patent No. 10,417,190) in view of Kuzmin et al. (Pub. No. US 2014/0215129).

Claim 1: Donlan et al. disclose a memory system connectable to a host device, comprising: a nonvolatile memory including a plurality of storage areas [fig.
1; column 2, lines 33-38; column 4, lines 55-58 – “In some examples, “log-structured file system” may refer to a data storage management system wherein data is written to a sequential (i.e., append-only) buffer, called a “log,” of a persistent storage medium (e.g., a hard disk or flash memory).” … “The hard drive 106 may be any type of computer-readable medium, including a shingled magnetic recording drive, a random access hard drive, flash memory, optical media, and a solid-state device drive.”]; and a controller configured to control access including writing and reading of data to and from the nonvolatile memory, based on a command received from the host device [fig. 2 – Customers interface over a network with storage controlled by control logic], wherein the controller is configured to: manage a plurality of zones using first information indicating (i) a correspondence between the plurality of zones and the plurality of storage areas [fig. 6; column 4, lines 23-32; column 21, lines 2-5; column 27, lines 10-15 – “In some examples, a “zone” may be a series of sectors of a hard drive, such as a shingled magnetic recording disk, that form an append-only section of the disk. Zones may be individually append-only, but may be written to independently and may be resettable through a write-pointer reset. Data may be written to a zone as a series of records (e.g., data objects) with a fixed maximum size (e.g., 1 megabyte); however, in some cases checkpoints and index records may have a different fixed maximum size or may have no fixed maximum size.” … “In some examples, a “data object index” may be data stored as a tree, such as a B+ tree or two-level tree, that may be used to track the physical location of all data objects stored by the file system on disk.” … “The volumelet table 610 may be used to list and describe each of the volumelets. 
For example, the checkpoint 6 shows that the index segments 618 for data objects 0-4096 may be located at address 0x80AFABCD in the metadata zone. Likewise, the index segments 618 for data objects 4096-8192 may be found at address 0x80BA78CE.”] and (ii) a status of each of the plurality of zones, each of the plurality of zones corresponding to a logical address range within a logical address space that is used in an access from the host device to the memory system, the status of one of the plurality of zones including at least a first status and a second status, the first status indicating that data is written over an entire logical address range corresponding to the one of the plurality of zones, the second status indicating that the one of the plurality of zones is reset [column 18, lines 45-48; column 20, lines 21-27; column 21, lines 14-18 – “When an active zone becomes full and another zone is needed for writing, the active zone may be closed and a new zone may be activated for writing. An index may be utilized to find data written to the zones.” … “Each zone may be in one of three states: open, close, or empty. Initially, all zones may be empty, and transition to open when actively accepting writes. When zones are closed, they may take no more writes until zone cleaned. When zone cleaned, the zone may return to an empty state. At any given time, multiple zones may be open and accepting writes.” … “FIG. 4 is an illustration 400 of three possible states for a zone in accordance with an embodiment of the present disclosure. Zones may be in one of three states, open, closed, or free, as respectively represented by closed zone 402, open zone 404, and free zone 406 in FIG. 4.”]; However, Donlan et al. 
do not specifically disclose, the controller is configured to: in response to receiving a first command from the host device, the first command requesting a zone which is to be garbage collected, transmit to the host device a first list including information indicating the zone which is to be garbage collected, the zone which is to be garbage collected being determined based on the first information [column 3, lines 4-24; column 34, lines 28-63 – Donlan et al. disclose selecting candidates for cleaning, but does not disclose, sending the candidates to a host in response to a command from the host. (“To optimize performance and maximize space, zones may be cleaned periodically or on command. Zone cleaning heuristics may attempt to determine which zones are candidates for data relocation, which may be based on which zones have the greatest proportions of deleted space. Relocating live data from a mostly-empty zone to another zone may allow the mostly-empty zone to be flagged as empty and available for writing. Zone cleaning may be efficiently performed with reference to zone footers, which may provide the ability to determine quickly that a zone is closed and a summary of the data stored on the zone. Periodically, or as needed, to clean up and recover any corrupt or dangly data, a fixity check called a scrub may be performed. In some examples, “fixity check” may refer to a process of verifying that data objects are properly indexed and accounted for, and have not been altered or corrupted. The scrub may initially mark all records as not found in the index pages and, reading through the zones, mark records found as they are found and validated. 
At the conclusion of the scrub, any records that were not found or were determined to be corrupt or incomplete may be recovered or may be considered deleted.” … “Zone cleaning may be performed by selecting a zone having the least amount of live data, determining which records in the zone are live, copying all of the live records to the end of a log and updating a respective index, and then marking the zone as empty. The empty zone may be distinguished from an non-empty zone because the index of the empty zone will indicate that no data exists in the empty zone. However, in some cases the state (e.g., empty state) of the zone may be tracked in memory in a space index. Zone cleaning may be triggered in various ways. For example, zone cleaning may be triggered when the number of free zones falls below a predetermined minimum level. As another example, zone cleaning may be triggered when a predetermined amount of free space is available to be reclaimed. As still another example, zone cleaning may be scheduled to run as a background task periodically or during periods of idle disk activity to reclaim available space. However, in some embodiments another drive may be used as a temporary staging area for zone cleaning. In cases where zone cleaning is triggered by a low number of free zones, two available zones may be reserved exclusively for use by the zone cleaner to ensure that the file system has sufficient space to copy records during the zone cleaning. That is, once the multi-active zone file system gets at or below two available free zones, writing of new data objects may be paused until additional zones are freed by zone cleaning. One of the two reserved zones may be used for data relocation and the other reserved zone may be available for any metadata updates—such as updates to index pages and/or checkpoints—that may be needed. 
When zone cleaning is triggered by a large amount of free space available for reclamation, the zone cleaning may provide efficiency because a relatively small amount of active data may need to be copied. Likewise, cleaning zones as a background task may provide efficiency because it may reduce contention for log heads.”].

In the same field of endeavor, Kuzmin et al. disclose, the controller is configured to: in response to receiving a first command from the host device, the first command requesting a zone which is to be garbage collected, transmit to the host device a first list including information indicating the zone which is to be garbage collected, the zone which is to be garbage collected being determined based on the first information [figs. 9, 11; pars. 0106, 0134-0139 – “Where the host immediately needs free space, it can issue a synchronous command to the memory controller, for example, requiring a listing of units where page utilization falls below a specific threshold (e.g., any EU where released page space is greater than a threshold, e.g., 50% of an EU's capacity). Many choices of metric are possible, and in some embodiments, complex conditions can be evaluated (e.g., EUs where more than 50% of pages are released, but where less than 10% of space is currently available). In response to such a command, the memory controller returns a listing of EUs (or logical units), sorted by any desired priority scheme (e.g., by lowest amount of wear).”].

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Donlan et al. to include shared garbage collection, as taught by Kuzmin et al., in order to improve performance by reducing performance penalties and associated overhead.

Claim 6 (as applied to claim 1 above): Kuzmin et al.
disclose, wherein the controller is further configured to write valid data included in the zone which is to be garbage collected whose information is included in the first list, to another zone, based on a second command received from the host device, after transmitting the first list to the host device [pars. 0100-0107; 0134-0139 – Delegated copy may be performed as part of garbage collection. The host instructs the controller to copy the valid data and free the old memory location. (“Note that a delegated copy operation as just described can provide substantial performance benefits, i.e., the memory controller is relieved from the bulk of address translation duties, with the host being primarily responsible for issuing commands that directly specify physical address. Furthermore, the use of the delegate copy operation charges the host with scheduling of copy operations, with the memory controller being responsible for completing a delegated copy operation once issued; since the host is in charge of scheduling such a command, it can once again pipeline command issuance so as to no unduly interfere with read and write operations, and it can hide a delegated copy operation behind operations in other memory (e.g., other planes or SSDs). Delegating the copy operation to the memory controller frees up host-controller interface bandwidth that might otherwise be consumed by the need to send data to be copied from the controller to the host and then back from the host from the controller.”)].

Claim 8 (as applied to claim 1 above): Kuzmin et al. disclose, wherein the nonvolatile memory includes a NAND flash memory [fig. 2; par. 0058 – Donlan et al. disclose a flash memory. Kuzmin et al. disclose a NAND flash memory. “FIG. 2 shows a solid-state drive (SSD) having a memory controller 200 and NAND flash memory comprising one or more NAND flash memory devices 207.”].

Claim 9: Donlan et al.
disclose a host device connectable to a memory system, comprising: an interface circuit configured to be connected to the memory system [fig. 2 – Interface between controlling hardware and storage devices]; and a processor configured to transmit a command to the memory system via the interface circuit, the command requesting access to the memory system, the access including writing of data and reading of data for the memory system [fig. 2; column 45, lines 33-48; column 47, lines 13-24 – “Similarly, any necessary files for performing the functions attributed to the computers, servers or other network devices may be stored locally and/or remotely, as appropriate. Where a system includes computerized devices, each such device can include hardware elements that may be electrically coupled via a bus, the elements including, for example, at least one central processing unit (“CPU” or “processor”), at least one input device (e.g., a mouse, keyboard, controller, touch screen or keypad) and at least one output device (e.g., a display device, printer or speaker). Such a system may also include one or more storage devices, such as disk drives, optical storage devices and solid-state storage devices such as random access memory (“RAM”) or read-only memory (“ROM”), as well as removable media devices, memory cards, flash cards, etc.” … “Processes described (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. The code may be stored on a computer-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. 
The computer-readable storage medium may be non-transitory.”], wherein the processor is configured to: manage a plurality of zones, each of the plurality of zones corresponding to one of a plurality of logical address ranges within a logical address space and one of a plurality of storage areas provided in the memory system, the logical address space being used for the access to the memory system [fig. 6; column 4, lines 23-32; column 21, lines 2-5; column 27, lines 10-15 – “In some examples, a “zone” may be a series of sectors of a hard drive, such as a shingled magnetic recording disk, that form an append-only section of the disk. Zones may be individually append-only, but may be written to independently and may be resettable through a write-pointer reset. Data may be written to a zone as a series of records (e.g., data objects) with a fixed maximum size (e.g., 1 megabyte); however, in some cases checkpoints and index records may have a different fixed maximum size or may have no fixed maximum size.” … “In some examples, a “data object index” may be data stored as a tree, such as a B+ tree or two-level tree, that may be used to track the physical location of all data objects stored by the file system on disk.” … “The volumelet table 610 may be used to list and describe each of the volumelets. For example, the checkpoint 6 shows that the index segments 618 for data objects 0-4096 may be located at address 0x80AFABCD in the metadata zone. Likewise, the index segments 618 for data objects 4096-8192 may be found at address 0x80BA78CE.”]; However, Donlan et al. 
do not specifically disclose, transmit to the memory system a first command for requesting a zone which is to be garbage collected among the plurality of zones; and receive a first list from the memory system as a response for the first command, the first list including information indicating one or more zones which is to be garbage collected, the one or more zones which is to be garbage collected are determined based on (i) a correspondence between the plurality of zones and the plurality of storage areas and (ii) a status of each of the plurality of zones, the status of one of the plurality of zones including at least a first status and a second status, the first status indicating that data is written over an entire logical address range corresponding to the one of the plurality of zones, the second status indicating that the one of the plurality of zones is reset [column 3, lines 4-24; column 34, lines 28-63 – Donlan et al. disclose selecting candidates for cleaning, but does not disclose, sending the candidates to a host in response to a command from the host. (“To optimize performance and maximize space, zones may be cleaned periodically or on command. Zone cleaning heuristics may attempt to determine which zones are candidates for data relocation, which may be based on which zones have the greatest proportions of deleted space. Relocating live data from a mostly-empty zone to another zone may allow the mostly-empty zone to be flagged as empty and available for writing. Zone cleaning may be efficiently performed with reference to zone footers, which may provide the ability to determine quickly that a zone is closed and a summary of the data stored on the zone. Periodically, or as needed, to clean up and recover any corrupt or dangly data, a fixity check called a scrub may be performed. In some examples, “fixity check” may refer to a process of verifying that data objects are properly indexed and accounted for, and have not been altered or corrupted. 
The scrub may initially mark all records as not found in the index pages and, reading through the zones, mark records found as they are found and validated. At the conclusion of the scrub, any records that were not found or were determined to be corrupt or incomplete may be recovered or may be considered deleted.” … “Zone cleaning may be performed by selecting a zone having the least amount of live data, determining which records in the zone are live, copying all of the live records to the end of a log and updating a respective index, and then marking the zone as empty. The empty zone may be distinguished from an non-empty zone because the index of the empty zone will indicate that no data exists in the empty zone. However, in some cases the state (e.g., empty state) of the zone may be tracked in memory in a space index. Zone cleaning may be triggered in various ways. For example, zone cleaning may be triggered when the number of free zones falls below a predetermined minimum level. As another example, zone cleaning may be triggered when a predetermined amount of free space is available to be reclaimed. As still another example, zone cleaning may be scheduled to run as a background task periodically or during periods of idle disk activity to reclaim available space. However, in some embodiments another drive may be used as a temporary staging area for zone cleaning. In cases where zone cleaning is triggered by a low number of free zones, two available zones may be reserved exclusively for use by the zone cleaner to ensure that the file system has sufficient space to copy records during the zone cleaning. That is, once the multi-active zone file system gets at or below two available free zones, writing of new data objects may be paused until additional zones are freed by zone cleaning. 
One of the two reserved zones may be used for data relocation and the other reserved zone may be available for any metadata updates—such as updates to index pages and/or checkpoints—that may be needed. When zone cleaning is triggered by a large amount of free space available for reclamation, the zone cleaning may provide efficiency because a relatively small amount of active data may need to be copied. Likewise, cleaning zones as a background task may provide efficiency because it may reduce contention for log heads.”]. In the same field of endeavor, Kuzmin et al. disclose, transmit to the memory system a first command for requesting a zone which is to be garbage collected among the plurality of zones [figs. 9, 11; pars. 0106, 0134-0139 – “Where the host immediately needs free space, it can issue a synchronous command to the memory controller, for example, requiring a listing of units where page utilization falls below a specific threshold (e.g., any EU where released page space is greater than a threshold, e.g., 50% of an EU's capacity). Many choices of metric are possible, and in some embodiments, complex conditions can be evaluated (e.g., EUs where more than 50% of pages are released, but where less than 10% of space is currently available). 
In response to such a command, the memory controller returns a listing of EUs (or logical units), sorted by any desired priority scheme (e.g., by lowest amount of wear).”]; and receive a first list from the memory system as a response for the first command, the first list including information indicating one or more zones which is to be garbage collected, the one or more zones which is to be garbage collected are determined based on (i) a correspondence between the plurality of zones and the plurality of storage areas and (ii) a status of each of the plurality of zones, the status of one of the plurality of zones including at least a first status and a second status, the first status indicating that data is written over an entire logical address range corresponding to the one of the plurality of zones, the second status indicating that the one of the plurality of zones is reset [figs. 9, 11; pars. 0106, 0134-0139 – “Where the host immediately needs free space, it can issue a synchronous command to the memory controller, for example, requiring a listing of units where page utilization falls below a specific threshold (e.g., any EU where released page space is greater than a threshold, e.g., 50% of an EU's capacity). Many choices of metric are possible, and in some embodiments, complex conditions can be evaluated (e.g., EUs where more than 50% of pages are released, but where less than 10% of space is currently available). In response to such a command, the memory controller returns a listing of EUs (or logical units), sorted by any desired priority scheme (e.g., by lowest amount of wear).”].

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Donlan et al. to include shared garbage collection, as taught by Kuzmin et al., in order to improve performance by reducing performance penalties and associated overhead.

Claim 11 (as applied to claim 9 above): Kuzmin et al.
disclose, wherein the plurality of storage areas is included in a nonvolatile memory of the memory system, the nonvolatile memory includes a NAND flash memory [fig. 2; par. 0058 – Donlan et al. disclose a flash memory. Kuzmin et al. disclose a NAND flash memory. “FIG. 2 shows a solid-state drive (SSD) having a memory controller 200 and NAND flash memory comprising one or more NAND flash memory devices 207.”].

Claim 12: Claim 12, directed to a method, is rejected for the same reasons set forth in the rejection of claim 1 above.

Claim 17 (as applied to claim 12 above): Claim 17, directed to a method, is rejected for the same reasons set forth in the rejection of claim 6 above.

Claim 19 (as applied to claim 12 above): Claim 19, directed to a method, is rejected for the same reasons set forth in the rejection of claim 8 above.

Claim(s) 2, 3, 13, and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Donlan et al. (U.S. Patent No. 10,417,190) in view of Kuzmin et al. (Pub. No. US 2014/0215129) as applied to claims 1 and 12 above, respectively, and further in view of Lee (Pub. No. US 2017/0371572).

Claim 2 (as applied to claim 1 above): Donlan et al. and Kuzmin et al. disclose all the limitations above but do not specifically disclose the memory system, further comprising a volatile memory configured to store the first list, wherein the controller is further configured to: in response to receiving the first command from the host device, read the first list from the volatile memory; and transmit the read first list to the host device [As discussed above, Kuzmin et al. disclose returning the list to the host. However, Kuzmin et al. do not appear to disclose storing the list in a volatile memory prior to transferring the list.].

In the same field of endeavor, Lee discloses, in response to receiving the first command from the host device, read the first list from the volatile memory [fig. 1; pars.
0201 – “In an embodiment, the controller 110 may include components such as, but not limited to, a random access memory (RAM), a read only memory (ROM), a processing unit (e.g., a central processing unit or processor), a host interface, a memory interface, and an error correction unit. For example, the firmware may be stored in the ROM and the processing unit may be configured to execute the firmware.” … “When performing the garbage collection operation by using the mapping table 111 and the snapshot table 112, the controller 110 stores memory blocks to be erased in the victim block list 113. The controller 110 may perform garbage collection based on the victim block list 113. For example, the victim block list 113 may identify blocks among the nonvolatile memory device 120 that are to be scheduled for garbage collection. For example, the controller 110 may add identifiers that identify the memory blocks to be erased into the victim block list 113. In an embodiment, the controller 110 performs the garbage collection. The garbage collection may include copying the valid data of multiple source memory blocks to a clean destination memory block and then erasing the source memory blocks.”]; and transmit the read first list to the host device [The combination of Kuzmin et al. and Lee provides that the list may be stored in controller memory prior to being transmitted to the host.].

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Donlan et al. and Kuzmin et al. to include storing the list in memory, as taught by Lee, in order to improve performance by storing the list in a high speed memory.

Claim 3 (as applied to claim 2 above): Donlan et al.
disclose, wherein: the controller is further configured to: when data is written over the entire logical address range corresponding to a first zone among the plurality of zones, update the first information to indicate that the first zone is the first status [column 18, lines 45-48; column 20, lines 21-27; column 21, lines 14-18; column 22, lines 60-67 – “A closed zone 402 may be a zone that was once active, may or may not contain live (i.e., not obsolete) data, but is not actively accepting further write requests. In some cases, a closed zone may have been fully written, and no more data may be appended until the zone is cleaned and reset. In other cases, a zone may have been closed before it was written because an EARLY_CLOSE_THRESHOLD was exceeded for the zone.”]; when the first zone is reset based on a second command received from the host device, update the first information to indicate that the first zone is the second status [column 18, lines 45-48; column 20, lines 21-27; column 21, lines 14-18; column 22, lines 60-67; column 34, lines 37-63 – “Zone cleaning may be triggered in various ways. For example, zone cleaning may be triggered when the number of free zones falls below a predetermined minimum level. As another example, zone cleaning may be triggered when a predetermined amount of free space is available to be reclaimed. As still another example, zone cleaning may be scheduled to run as a background task periodically or during periods of idle disk activity to reclaim available space. However, in some embodiments another drive may be used as a temporary staging area for zone cleaning. In cases where zone cleaning is triggered by a low number of free zones, two available zones may be reserved exclusively for use by the zone cleaner to ensure that the file system has sufficient space to copy records during the zone cleaning. 
That is, once the multi-active zone file system gets at or below two available free zones, writing of new data objects may be paused until additional zones are freed by zone cleaning. One of the two reserved zones may be used for data relocation and the other reserved zone may be available for any metadata updates—such as updates to index pages and/or checkpoints—that may be needed. When zone cleaning is triggered by a large amount of free space available for reclamation, the zone cleaning may provide efficiency because a relatively small amount of active data may need to be copied. Likewise, cleaning zones as a background task may provide efficiency because it may reduce contention for log heads.”]; and in response to that the first information is updated, update the first list in the volatile memory based on the first information which is updated [column 18, lines 45-48; column 20, lines 21-27; column 21, lines 14-18; column 22, lines 60-67; column 34, lines 37-63; column 26, line 47 – column 27, line 9 – Donlan et al. disclose storing state information for the zones. Lee provides the additional teaching of storing management tables in controller memory. (“The list of open zones 606 comprises a list of the active (i.e., non-empty, non-closed) zones and an offset for each respective zone indicating that all records before the offset are reflected in the checkpoint/commit. In some cases this offset may be the position of the write pointer for the zone as of the time the checkpoint was generated (e.g., the zone write pointer). In the checkpoint 600 illustrated in FIG. 6, when the checkpoint was generated, the write pointer of zone 1 was located at offset 0x032BAFED. The zone summary 608 includes a list of all of the zones and their respective states and a list of volumelets in each zone and how much space in the zone is being taken up by the respective volumelets. 
A zone summary table in a checkpoint may contain, for each log zone, the state of the log zone (e.g., closed, open, or empty), and, if the log zone is not empty, a map between volumelet identifiers and an amount of live data of the volumelet. In some examples, a “live data” may be data that is currently part of the logical state of the file system; i.e., data that has not yet been obsoleted by being copied elsewhere by zone cleaning or by deletion. For the checkpoint 600, zone 1 is open and has 50 megabytes allocated to data for volumelet 1, zone 2 is closed and hosts 250 megabytes of data for volumelet 42 and 300 megabytes of data for volumelet 128, and zone 3 is free and therefore contains no volumelet data. Using this information, a zone cleaning process may quickly identify zones that have deleted space available (e.g., fewest live volumelets) and/or which volumelets have data that may be relocated to free up the most space on zones.”)].

Claim 13 (as applied to claim 12 above): Claim 13, directed to a method, is rejected for the same reasons set forth in the rejection of claim 2 above.

Claim 14 (as applied to claim 13 above): Claim 14, directed to a method, is rejected for the same reasons set forth in the rejection of claim 3 above.

Claim(s) 7 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Donlan et al. (U.S. Patent No. 10,417,190) in view of Kuzmin et al. (Pub. No. US 2014/0215129) as applied to claims 1 and 12 above, respectively, and further in view of Asnaashari et al. (Pub. No. US 2013/0339587).

Claim 7 (as applied to claim 1 above): Donlan et al. and Kuzmin et al.
disclose all the limitations above but do not specifically disclose, wherein the controller is further configured to: when the memory system is started up, read from the nonvolatile memory second information indicating a correspondence between logical addresses in the logical address space and the plurality of storage areas; and construct the first information with reference to the second information.

In the same field of endeavor, Asnaashari et al. disclose, when the memory system is started up, read from the nonvolatile memory second information indicating a correspondence between logical addresses in the logical address space and the plurality of storage areas [par. 0007 – “In yet another prior art technique, the flash block management tables are maintained in a volatile random access memory (RAM), the flash block management tables are periodically and/or based on some events (such as a Sleep Command) saved (copied) back to flash, and to avoid the time consuming reconstruction upon power-up from a power failure additionally a power back-up means provides enough power to save the flash block management tables in the flash in the event of a power failure. Such power back-up may comprise of a battery, a rechargeable battery, or a dynamically charged super capacitor.”]; and construct the first information with reference to the second information [par. 0007 – “In yet another prior art technique, the flash block management tables are maintained in a volatile random access memory (RAM), the flash block management tables are periodically and/or based on some events (such as a Sleep Command) saved (copied) back to flash, and to avoid the time consuming reconstruction upon power-up from a power failure additionally a power back-up means provides enough power to save the flash block management tables in the flash in the event of a power failure. Such power back-up may comprise of a battery, a rechargeable battery, or a dynamically charged super capacitor.”].
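For orientation, the limitation mapped above (claims 7 and 18) amounts to: on startup, read a persisted logical-address-to-zone mapping ("second information") from nonvolatile memory and reconstruct the per-zone state table ("first information") in volatile memory from it. The following is a minimal illustrative sketch only; the names (`build_zone_table`, `persisted_map`) are hypothetical and are not drawn from the application or the cited references.

```python
# Hypothetical sketch: reconstruct volatile per-zone state ("first
# information") from a persisted LBA-to-zone mapping ("second information")
# read out of nonvolatile memory at startup. Zones with no mapped LBAs are
# simply absent, i.e., treated as empty.

def build_zone_table(persisted_map, zone_size):
    """persisted_map: {logical_block_address: zone_index} loaded from NVM."""
    zone_table = {}
    for lba, zone in persisted_map.items():
        # Any zone with at least one mapped LBA starts out "open".
        entry = zone_table.setdefault(zone, {"written": 0, "state": "open"})
        entry["written"] += 1
    for entry in zone_table.values():
        # A zone whose entire logical range is mapped is marked "full".
        if entry["written"] >= zone_size:
            entry["state"] = "full"
    return zone_table

# Example: zone 0 fully written (4 of 4 LBAs), zone 1 half written.
table = build_zone_table({0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1}, zone_size=4)
```

Loading the table this way, rather than rebuilding it by scanning the media, is the performance rationale the examiner attributes to Asnaashari et al. in the obviousness statement that follows.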
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings to include loading tables, as taught by Asnaashari et al., in order to improve performance by eliminating the need to reproduce tables every time the system is restarted.

Claim 18 (as applied to claim 12 above): Claim 18, directed to a method, is rejected for the same reasons set forth in the rejection of claim 7 above.

Allowable Subject Matter
Claims 4-5, 10 and 15-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: The prior art does not disclose the limitations of the listed claims in conjunction with the limitations of the base claim and intervening claims.

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kanno et al. (Pub. No. US 2021/0223994) disclose, “The controller receives a plurality of first write commands from the host. Each of the plurality of first write commands specifies (i) a logical address indicative of both a first zone of the plurality of zones and an offset within the first zone where write data is to be written, (ii) a data size of the write data, and (iii) a location in a write buffer of the host where the write data is stored. Based on the offset and the data size specified by each of the plurality of first write commands, the controller reorders the plurality of first write commands in an order in which writing within the first zone is sequentially executed from a next write location within the first zone, by using a first command buffer corresponding to the first zone.” [par. 0030]

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LARRY T MACKALL whose telephone number is (571)270-1172. The examiner can normally be reached Monday - Friday, 9am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald G Bragdon can be reached at (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

LARRY T. MACKALL
Primary Examiner
Art Unit 2131
3 January 2026
/LARRY T MACKALL/Primary Examiner, Art Unit 2139
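For readers less familiar with zoned storage, the zone lifecycle recited in claims 2 and 12 (a zone reaches a first status once its entire logical address range is written, returns to a second status when the host issues a reset command, and a list kept in volatile memory is refreshed on each state change) can be sketched as below. This is an illustrative sketch only; the class and field names are hypothetical and do not come from the application or the cited references.

```python
# Hypothetical sketch of the claim 2/12 zone-state transitions: full when
# the entire logical range is written, empty after a host reset, and a
# volatile-memory list refreshed whenever the state information changes.

class ZoneTracker:
    def __init__(self, num_zones, zone_size):
        self.zone_size = zone_size
        self.state = ["empty"] * num_zones   # "first information"
        self.written = [0] * num_zones
        self.full_list = []                  # "first list" in volatile memory

    def _refresh_list(self):
        # Rebuild the volatile list from the updated state information.
        self.full_list = [z for z, s in enumerate(self.state) if s == "full"]

    def write(self, zone, nblocks):
        self.written[zone] += nblocks
        # Entire logical range written -> first status ("full").
        self.state[zone] = "full" if self.written[zone] >= self.zone_size else "open"
        self._refresh_list()

    def reset(self, zone):
        # Host reset command -> second status ("empty").
        self.written[zone] = 0
        self.state[zone] = "empty"
        self._refresh_list()

t = ZoneTracker(num_zones=2, zone_size=4)
t.write(0, 4)   # zone 0 fully written -> "full", appears in full_list
t.write(1, 2)   # zone 1 partially written -> "open"
t.reset(0)      # host reset -> zone 0 back to "empty", leaves full_list
```

The examiner's position is that Donlan's closed/open/empty zone states and zone cleaning (reset) map onto these transitions; whether the volatile-list update limitation is fairly taught is a point applicant may probe at interview.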

Prosecution Timeline

Sep 10, 2024
Application Filed
Jan 03, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591389
MEMORY CONTROLLER AND OPERATION METHOD THEREOF FOR PERFORMING AN INTERLEAVING READ OPERATION
2y 5m to grant Granted Mar 31, 2026
Patent 12572308
STORAGE DEVICE SUPPORTING REAL-TIME PROCESSING AND METHOD OF OPERATING THE SAME
2y 5m to grant Granted Mar 10, 2026
Patent 12561065
PROVIDING ENDURANCE TO SOLID STATE DEVICE STORAGE VIA QUERYING AND GARBAGE COLLECTION
2y 5m to grant Granted Feb 24, 2026
Patent 12555170
TRANSFORMER STATE EVALUATION METHOD BASED ON ECHO STATE NETWORK AND DEEP RESIDUAL NEURAL NETWORK
2y 5m to grant Granted Feb 17, 2026
Patent 12554400
METHOD OF OPERATING STORAGE DEVICE USING HOST REQUEST BYPASS AND STORAGE DEVICE PERFORMING THE SAME
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
85%
Grant Probability
93%
With Interview (+8.1%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 779 resolved cases by this examiner. Grant probability derived from career allow rate.
