Prosecution Insights
Last updated: April 19, 2026
Application No. 18/506,874

MEMORY CELL FOLDING OPERATIONS USING HOST SYSTEM MEMORY

Final Rejection: §103, §112

Filed: Nov 10, 2023
Examiner: TALUKDAR, ARVIND
Art Unit: 2132
Tech Center: 2100 — Computer Architecture & Software
Assignee: Micron Technology, Inc.
OA Round: 4 (Final)

Grant Probability: 81% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 9m
Grant Probability with Interview: 84%

Examiner Intelligence

Career Allow Rate: 81% (above average; 449 granted / 557 resolved; +25.6% vs TC avg)
Interview Lift: +3.5% on resolved cases with interview (a minimal lift of roughly +4%)
Typical Timeline: 2y 9m avg prosecution; 36 applications currently pending
Career History: 593 total applications across all art units

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
TC averages are estimates • Based on career data from 557 resolved cases

Office Action

Rejections: §103, §112
DETAILED ACTION

Claim 11 is amended. Claims 6, 9, and 18 are canceled. Claims 1-5, 7-8, 10-17, and 19-20 are pending. Priority: 11/16/2022. Assignee: Micron.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-5, 7-8, 10-17, and 19-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

1. Amended claim 1 is rejected for reciting a limitation that is unclear, vague, and indefinite. Claim 1 recites, "wherein the accumulated data is received from the second volatile memory device based … on a size of the accumulated data satisfying a threshold quantity of data that corresponds to a quantity of data stored by a pageline of the second non-volatile memory device." The specification, Para-0132, recites, "transmitting, to the host system, a request for the host system to allocate a portion of the second volatile memory device to the memory system for storage and receiving, from the host system." But nowhere does the specification recite that the allocated portion/size of the host-resident second volatile memory device is greater than "a threshold quantity of data stored by a pageline of the second NVM device," as recited in claim 1.
In short, verification of the allocated size of the second volatile device, a critical parameter, is undisclosed, rendering claim 1 indefinite. Though the specification recites a controller using a host memory buffer, there is no capacity-based communication between them. There is no disclosure of how the "threshold" is calculated, and it is unclear whether the "threshold" is a number, a percentage, a voltage level, etc. In NVM/NAND, pageline size varies, so it is also unclear how the size of "data stored by a pageline of the second NVM device" is determined, such that one could verify whether the accumulated-data "threshold" corresponds to the calculated size. Because the allocated portion/size of the second memory device is indefinite, it is unclear how the controller tracks, each time, how much data can be transmitted to the host. It is therefore unclear how the "accumulated data" can be reliably determined. Accordingly, it is also unclear how the controller determines whether the received (unreliable) "accumulated data" has reached an (undisclosed) "threshold" corresponding to an (undisclosed) size stored by a (variable) pageline of the second NVM device after final folding.

The lack of written-description support to verify that the allocated size of the second volatile memory is always greater than the "threshold quantity that corresponds to data stored by a pageline of the second NVM device" suggests that the applicant's possession of the claimed subject matter at the time of filing was incomplete. The lack of disclosure leads to uncertainty about the scope of the disclosure. Hence claim 1 is rejected for reciting a limitation that is unclear, vague, and indefinite. Claims 13 and 20 have a similar issue. Dependent claims 2-5, 7-8, 10-19, and 20 are rejected for failing to cure, by dependency, the deficiency of their respective parent claims.

Note: Claim 9, which represents spec Para-0132, has been canceled, but the related issue with claim 1 remains unresolved.
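For context on the disputed limitation, the check that claim 1 describes can be sketched in a few lines. Everything below (the page size, the plane count, and all function names) is an illustrative assumption for this sketch, not something the specification or the record discloses:

```python
# Hypothetical sketch of the claim 1 limitation: the controller requests the
# accumulated data back from the host-resident buffer only once its size
# satisfies a threshold equal to one pageline of the second (QLC) NVM device.
# All names and numbers here are illustrative assumptions.

PAGE_SIZE_BYTES = 16 * 1024   # assumed physical page size
PLANES_PER_DIE = 4            # a "pageline" assumed to span one page per plane

def pageline_threshold(page_size: int = PAGE_SIZE_BYTES,
                       planes: int = PLANES_PER_DIE) -> int:
    """Quantity of data stored by one pageline of the second NVM device."""
    return page_size * planes

def ready_to_receive(accumulated_bytes: int) -> bool:
    """True once the host-side buffer holds at least one pageline of data."""
    return accumulated_bytes >= pageline_threshold()
```

Under these assumptions the "threshold quantity of data" is simply the byte capacity of one pageline; the rejection's point is that the specification never pins down how such a value is derived or verified against the host allocation.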
Based on the amendment and the arguments, the rejection has been clarified and maintained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 10, 12-16, and 20 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Byun (US 2020/0174690) in view of Huang et al. (US 2011/0149650) and Gorobets et al. (US 2005/0144365).

As per Claim 1, Byun discloses an apparatus (Byun, [0030, 0031 - Fig. 1 shows a data processing system 100 including a memory system 110, wherein the data processing system 100 includes host 102 and memory system 110]), comprising: a first non-volatile memory device (Byun, [0040 - In Fig. 1, memory device 150 includes a plurality of memory blocks 152]; [0041 - Memory device 150 includes a plurality of memory dies, and each memory die includes planes. Memory device 150 is a flash memory having a 3D stack structure]; [0126 - In Fig. 10, memory device 6340 includes a plurality of nonvolatile memories/NVMs, thereby implying that the first NVM is the first NVM memory device]); a second non-volatile memory device (Byun, [0126 - In Fig. 10, memory device 6340 includes a plurality of nonvolatile memories/NVMs, thereby implying that the second NVM is the second NVM memory device]); a controller (Byun, [Fig. 1: controller 130]) comprising a first volatile memory device (Byun, [0048 - In Fig. 1, memory 144 is dynamic random access memory/DRAM]), the controller being coupled with the first non-volatile memory device and the first volatile memory device (Byun, [0044 - In Fig. 1, controller 130 includes host interface 132, processor 134, memory interface 142, and memory 144, all coupled to each other via an internal bus]; [0038 - Controller 130 and the memory device 150 may be integrated into a single semiconductor device]), wherein the controller (Byun, [Fig. 1: controller 130]) is configured to cause the apparatus to:

initiate a folding operation (Byun, [0016 - Fig. 6A shows a garbage collection operation, which facilitates the folding operation]) to transfer data from the first non-volatile memory device to the second non-volatile memory device (Byun, [0082 - In Fig. 6A, processor 134 of controller 130 searches a plurality of memory blocks included in memory device 150 to select at least one sacrificial memory block 610/first NVM from the plurality of memory blocks]; [0009 - The sacrificial data is valid data stored in the sacrificial memory block, thereby implying the start of the folding operation; this is similar to Para-0086 of the spec]);

transfer a first portion of the data from the first non-volatile memory device to the first volatile memory device (Byun, [0087 - In Fig. 6B, step S603, processor 134 loads the valid data, i.e., sacrificial data, stored in the sacrificial memory block/first NVM to memory 144/first volatile memory]) as part of the folding operation (Byun, [Figs. 5, 6A-6B, 7A-7C]);

transmit, as part of the folding operation (Byun, [Fig. 6A]), the first portion from the first volatile memory device (Byun, [Fig. 1: memory 144/first volatile memory]) to a second volatile memory device (Byun, [Fig. 6A: Integrated Memory 104/second volatile memory]; [0034 - Integrated memory 104 is a unified memory/UM in host 102 that includes a RAM]) of a host system (Byun, [Fig. 6A: host 102]) based at least in part on a size of the first portion of the data equaling a storage capacity of the first volatile memory device (Byun, [0089 - When the available capacity of the integrated memory 104 is equal to the size of the sacrificial data, 'Yes' in step S605, processor 134 provides the sacrificial data to host 102 under the control of the processor 134 in step S607]);

receive, as part of the folding operation (Byun, [Figs. 7A-7C show the garbage collection operation, which facilitates the folding operation]) and at the first volatile memory device (Byun, [Fig. 7C: step S701]) from the second volatile memory device (Byun, [Fig. 6A: Integrated Memory 104/RAM in host 102]), accumulated data comprising the first portion of the data and a second portion of the data (Byun, [0006 - The available capacity of the first memory in the host can be larger than the size of the current valid data; this implies that the host memory can accumulate sufficient data that comprises the current valid data/first portion and has space for additional data. This further implies that the accumulated data includes the first portion of the data and additional data/second portion]);

write, as part of the folding operation (Byun, [Fig. 7C]), the accumulated data to the second non-volatile memory device (Byun, [Fig. 7A]; [Fig. 10]) that comprises a set of multiple-level memory cells for storing four or more bits of information (Byun, [0108 - In Fig. 7C, step S705, processor 134 stores the sorted target data in a target memory block in memory device 150]; [0061 - Memory device 150 includes a plurality of quadruple-level cell/QLC memory blocks. The QLC memory block includes a plurality of pages including memory cells each capable of storing 4-bit data]).

Huang discloses receiving the accumulated data as part of the folding operation as follows: receive, as part of the folding operation (Huang, [Fig. 11]; [0028 - A folding operation includes reading the portions of the data from multiple locations in the first section into the read/write registers and performing a multi-state programming operation of the portions of the data from the read/write registers into a location in the second section of the non-volatile memory/second NVM]; [Figs. 23-27 give examples of how to combine the data folding operation with writes to the binary portion of the memory]) and at the first volatile memory device from the second volatile memory device (Huang, [0128 - In Fig. 20, data is transferred from host 501 onto memory 503, where it is initially stored on the controller-resident volatile buffer memory RAM 511/smaller size]), accumulated data (Huang, [0020 - The mapping data is buffered in RAM 511; here the buffering implies receiving 'accumulated data' from the host]); write, as part of the folding operation (Huang, [Fig. 11]; [0127 - In on-chip data folding, data written into a binary section of the memory is repackaged and written into a multi-state format]), the accumulated data (Huang, [0019 - Consolidating the valid sectors among the various blocks and rewriting the sectors after rearranging them in logically sequential order]) to the second non-volatile memory device (Huang, [Fig. 20: non-volatile memory 513]) that comprises a set of multiple-level (Huang, [0138 - In Fig. 20, for balanced mode, interspersing writes to D1 memory between the foggy and fine phases of the multi-level programming used in the folding process]) memory cells for storing four or more bits of information (Huang, [0081 - In D3, each cell stores 3 bits, i.e., low, middle, and upper bits, and there are 8 regions. In D4/QLC, there are 4 bits and 16 regions]; [0119 - All Logical Groups in the triplet will be fully consolidated to Virtual Update Blocks in D1 memory 301 before folding to D3 memory 303]; [0128 - In Fig. 20, from RAM 511 the data is then written into NVM 513/second NVM, first into the binary section D1 515 and then on into the MLC section D3 517. In the on-chip D1-to-D3 folding operation, the same read/write registers and other peripheral circuitry are used for both the initial D1 write operation and the folding operation]; [0118 - Update Blocks consist of three D1/binary blocks where a full image of all data to be programmed to the D3 block is created prior to a folding operation of copying data from the D1 blocks to a D3 block using a foggy-fine programming operation; here, foggy-fine programming is a two-step process used in multi-level cell/MLC NAND, particularly QLC/quad-level cell NAND, which stores 4 bits per cell]).

Therefore, it would have been obvious to a person of ordinary skill at the time of filing to incorporate the folding operation of Huang into the host-based processing system of Byun, for the benefit of receiving data from a host, storing the received data in the buffer memory, transferring the data from the buffer memory into read/write registers of the non-volatile memory circuit, performing a binary write operation of the data from the read/write registers to the first section of the non-volatile memory circuit, and then folding portions of the data from the first section of the non-volatile memory to the second section of the non-volatile memory (Huang, 0028).

Gorobets discloses receiving the accumulated data, which comprises the first portion and a second portion, as follows: receive (Gorobets, [0468 - In Fig. 32A, step 1112: receiving host data packaged in logical units]), as part of the folding operation and at the first volatile memory device (Gorobets, [Fig. 1: RAM 130]) from the second volatile memory device (Gorobets, [Fig. 2: host 10, host-side memory manager]), accumulated data (Gorobets, [Fig. 24A: Update Block Sequential with padding/second data]; [0028 - A normal consolidation operation consolidates into a consolidation block the current versions of all logical units of a logical group residing among an original block and an update block]; [Figs. 28-29 show consolidation/accumulated data]) comprising the first portion of the data and a second portion of the data (Gorobets, [0412 - Fig. 24A shows the plane-aligned sequential update with padding/second data; here the valid data is the first portion]; [0421 - Fig. 24C shows the plane-aligned chaotic/non-sequential update with padding/second data; here the valid data is the first portion]), wherein the accumulated data (Gorobets, [0142 - The consolidated update block will be in logically sequential order and can be used to replace the original block. Under some predetermined condition, the consolidation process is preceded by one or more compaction processes]; [0465 - Fig. 32A shows an initial update operation that results in a consolidation operation]) is received from the second volatile memory device (Gorobets, [0137 - In Fig. 6, interface 110 allows the metablock management system to interface with a host]) based at least in part on a size of the accumulated data satisfying a threshold quantity of data (Gorobets, [0386 - When combining multiple planes, a maximum aggregated unit of parallel read or write is a metapage of memory cells, where the metapage is constituted by a page from each of the multiple planes; this is similar to Fig. 1 of the spec]) that corresponds to a quantity of data (Gorobets, [0387 - cyclic filling in the planes]) stored by a pageline of the second non-volatile memory device (Gorobets, [Fig. 2: Flash memory 200/second NVM]; [0386 - Typically a logical unit is a sector of size 512 bytes. A page is a maximum unit of parallel read or write in a plane. A logical page contains one or more logical units]; [0391 - In Fig. 21, a metapage is formed by multiple logical pages, one in each plane]; [0386 - A metapage such as MP0 has four pages, one from each of the planes P0, P1, P2, and P3, storing in parallel logical pages LP0, LP1, LP2, LP3, thereby implying that a pageline is a single row of pages in a virtual block, wherein the virtual block includes virtual/logical pages, as recited in Paras-0032, 0087 of the spec]).

Therefore, it would have been obvious to a person of ordinary skill at the time of filing to incorporate the memory block management of Gorobets into the host-based processing system of Byun and Huang, for the benefit of performing a consolidation operation that consolidates into a consolidation block the current versions of all logical units of a logical group residing among an original block and an update block (Gorobets, 0028).

As per Claim 2, the rejection of claim 1 is incorporated, and Byun discloses wherein the controller (Byun, [Fig. 1: controller 130]) is further configured to cause the apparatus to: scan (Byun, [Fig. 6B, step S601]), prior to transferring the first portion of the data from the first non-volatile memory device to the first volatile memory device, one or more source blocks of the first non-volatile memory device for the data (Byun, [0009 - A memory system configured to select/scan at least one sacrificial memory block from a plurality of memory blocks]; [0086 - In Fig. 6B, step S601, on the basis of the number of valid data stored in each of the plurality of memory blocks in memory device 150, processor 134/controller selects the sacrificial memory block, thereby implying scanning source blocks of the first non-volatile memory device for valid data]) based at least in part on page validity information corresponding to logical-to-physical translations associated with the one or more source blocks (Byun, [0128 - In Fig. 10, buffer memory 6325/first volatile memory temporarily stores metadata of the flash memories/NVMs, for example, map data including a mapping table]), wherein the first portion of the data is stored at the one or more source blocks (Byun, [0009 - The sacrificial data is valid data stored in the sacrificial memory block]).

As per Claim 3, the rejection of claim 1 is incorporated, and Byun, Huang, and Gorobets disclose wherein, to write the accumulated data to the second non-volatile memory device (Huang, [0107 - In Fig. 11, NVM 200/second NVM is partitioned into two portions. The first portion 202 has the memory cells operating as a main memory for user data in either MLC or binary mode. The second portion 204 has the memory cells operating as a cache in a binary mode. Thus, the memory 200 is partitioned into a main memory 202 and a binary cache]), the controller is configured to cause the apparatus to: perform a first stage of a write operation on a subset of the set of multiple-level memory cells (Huang, [0028 - Receiving data from a host and storing the received data in the buffer memory. The data is then transferred from the buffer memory into read/write registers of the non-volatile memory/second NVM, and a binary write operation of the data is then performed from the read/write registers to the first section of the non-volatile memory]); perform a second stage of the write operation on the subset of the set of multiple-level memory cells after the first stage (Huang, [0028 - The method then subsequently folds portions of the data from the first section of the non-volatile memory to the second section of the non-volatile memory]), wherein the accumulated data (Huang, [0023 - The cache buffers the data between a fast host and a slower MLC memory and serves for accumulation to write to a block]) is written to the second non-volatile memory device based at least in part on the second stage (Huang, [0028 - A folding operation includes reading the portions of the data from multiple locations in the first section into the read/write registers and performing a multi-state programming operation of the portions of the data from the read/write registers into a location in the second section of the non-volatile memory/second NVM]).

Therefore, it would have been obvious to a person of ordinary skill at the time of filing to incorporate the folding operation of Huang into the host-based processing system of Byun and Gorobets, for the benefit of using multi-state programming operations that include a first phase and a second phase, where one or more binary write operations are performed between the phases of the multi-state programming operations (Huang, 0028).
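The two-stage write discussed for claim 3 (the foggy-fine programming that Huang's paragraphs 0028 and 0118 describe) can be sketched as a small state machine. The class and method names are hypothetical and only model the ordering of the two passes, not real NAND programming physics:

```python
# Minimal sketch (hypothetical API) of a two-stage foggy-fine QLC write:
# a coarse "foggy" pass followed by a precise "fine" pass on the same set of
# multi-level cells, with the data only considered written after the fine pass.

from typing import List

class QlcWordline:
    def __init__(self) -> None:
        self.stage = "erased"
        self.data: List[int] = []

    def program_foggy(self, data: List[int]) -> None:
        # First stage: roughly place cell voltages; data is not yet readable.
        assert self.stage == "erased"
        self.data = list(data)
        self.stage = "foggy"

    def program_fine(self, data: List[int]) -> None:
        # Second stage: tighten the voltage distributions using the same data,
        # which the controller must supply again (e.g., from a buffer).
        assert self.stage == "foggy" and data == self.data
        self.stage = "fine"  # write complete; cells are now readable
```

This ordering is why, in the claim 4 mapping, the accumulated data is received a first time for the first stage and a second time for the second stage.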
As per Claim 4, the rejection of claim 3 is incorporated, and Byun, Huang, and Gorobets disclose wherein, to receive the accumulated data (Gorobets, [0144 - The update block is allocated when a command is received from the host to write a segment of one or more sectors of the logical group for which an existing metablock has been storing all its sectors intact]; [0028 - A normal consolidation operation consolidates/accumulated data into a consolidation block the current versions of all logical units of a logical group residing among an original block and an update block]), the controller (Gorobets, [Fig. 1: controller 10]) is configured to cause the apparatus to: receive the accumulated data from the second volatile memory device a first time (Gorobets, [0144 - For the first host write operation, a first segment of data is recorded on the update block]), wherein the first stage is performed using the accumulated data received at the first time (Gorobets, [0144 - Since each host write is a segment of one or more sectors with contiguous logical addresses, it follows that the first update is always sequential in nature]); receive the accumulated data from the second volatile memory device a second time (Gorobets, [0144 - In subsequent host writes, update segments within the same logical group are recorded in the update block in the order received from the host. A block continues to be managed as a sequential update block whilst sectors updated by the host within the associated logical group remain logically sequential. All sectors updated in this logical group are written to this sequential update block, until the block is either closed or converted to a chaotic update block]), wherein the second stage is performed using the accumulated data received at the second time (Gorobets, [0120 - In Figs. 1-2, a memory-side memory manager is implemented in controller 100 of memory system 20 to manage the storage of the data of host logical sectors among metablocks of flash memory 200/second NVM]).

Therefore, it would have been obvious to a person of ordinary skill at the time of filing to incorporate the XXX of Gorobets into the host-based processing system of Byun and Huang, for the benefit of utilizing memory devices having a specific time allowance to complete a certain operation. For example, when a host writes to a memory device, it expects the write operation to be completed within a specified time, known as the write latency. While the memory device is busy writing the data from the host, it signals a busy state to the host. If the busy state lasts longer than the write-latency period, the host will time out the write operation and register an error (Gorobets, 0449).

As per Claim 10, the rejection of claim 1 is incorporated, and Byun discloses wherein the size of the first portion equals a size of the storage capacity of the first volatile memory device (Byun, [0082 - In Fig. 6A, processor 134 selects a memory block having the smallest number of valid data among the plurality of memory blocks as the sacrificial memory block 610. Then, processor 134 reads the valid data/sacrificial data and stores it in memory 144/first volatile memory of controller 130]; [0083 - When the available capacity of integrated memory 104 is equal to the size of the sacrificial data, processor 134 provides host 102 with the stored sacrificial data, thereby implying that the size of the first portion at least equals the size of the storage capacity of the first volatile memory device]) allocated for transfer operations (Byun, [Fig. 6A]) associated with writing data to multiple-level memory cells for storing four or more bits of information (Byun, [0060 - In Fig. 2, each of the memory cells in memory blocks BLOCK0 to BLOCKN−1 is a multi-level cell/MLC storing multi-bit data]; [0061 - Memory device 150 includes a plurality of quadruple-level cell/QLC memory blocks. The QLC memory block includes a plurality of pages including memory cells each capable of storing 4-bit data]), and wherein the first portion is transmitted to the host system based at least in part on the first portion equaling the storage capacity of the first volatile memory device (Byun, [0083 - In Fig. 6A, processor 134 compares an available capacity of the integrated memory 104/second volatile memory with a size of the sacrificial data stored in the memory 144/first volatile memory, thereby implying that the first data is transmitted to the host based on equaling the storage capacity of the first volatile memory device]).

As per Claim 12, the rejection of claim 1 is incorporated, and Byun discloses wherein the first non-volatile memory device and the second non-volatile memory device are the same non-volatile memory device (Byun, [0040 - In Fig. 1, memory device 150 includes a plurality of memory blocks 152]; [0041 - Memory device 150 includes a plurality of memory dies, and each memory die includes a plurality of planes. The memory device 150 may be a flash memory having a 3D stack structure]). Huang clarifies wherein the first non-volatile memory device and the second non-volatile memory device are the same non-volatile memory device (Huang, [0023 - A flash memory system operating with a cache and operating in mixed MLC/multi-level cell and SLC/single-level cell modes, with the SLC memory operating as a dedicated cache. The cache mainly buffers the data between a fast host and a slower MLC memory and serves for accumulation to write to a block]).
Therefore, it would have been obvious to a person of ordinary skill at the time of filing to incorporate the mixed MLC/SLC modes of Huang into the host-based processing system of Byun and Gorobets, for the benefit of the SLC memory cache buffering the data between a fast host and a slower MLC memory and serving for accumulation to write to a block (Huang, 0023).

As per Claim 13, it is similar to claim 1, and therefore the same rejections are incorporated. As per Claim 14, it is similar to claim 2, and therefore the same rejections are incorporated. As per Claim 15, it is similar to claim 3, and therefore the same rejections are incorporated. As per Claim 16, it is similar to claim 4, and therefore the same rejections are incorporated. As per Claim 20, it is similar to claims 1 and 13, and therefore the same rejections are incorporated.

Claims 5, 7, 17, and 19 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Byun (US 2020/0174690) in view of Huang et al. (US 2011/0149650), Gorobets et al. (US 2005/0144365), and Szubbocsev (US 2017/0300422).

As per Claim 5, the rejection of claim 1 is incorporated, and Byun, Huang, and Gorobets disclose wherein the controller (Gorobets, [Fig. 1: controller 10]; [0141 - In Fig. 2, update block manager 150 handles the update of logical groups]) is further configured to cause the apparatus to: update, at the first volatile memory device, based at least in part on writing the accumulated data (Gorobets, [Fig. 24A]; [Figs. 28-29 show consolidation/accumulated data]; [0272 - Logical-to-physical address records for recently written sectors are temporarily held in RAM/first volatile memory]; [0372 - Data update management operations are performed in RAM on the ABL, the CBL, and the chaotic sector list]), a first set of logical-to-physical mappings associated with the accumulated data in accordance with the accumulated data being written to the second non-volatile memory device (Gorobets, [0272 - In Fig. 2, the logical-to-physical address translation module 140 is responsible for relating a host's logical address to a corresponding physical address in flash memory. Mappings between logical groups and physical groups/metablocks are stored in a set of tables and lists distributed among the nonvolatile flash memory 200/second NVM and the volatile but more agile RAM 130]). Therefore, it would have been obvious to a person of ordinary skill at the time of filing to incorporate the mapping of Gorobets into the host-based processing system of Byun and Huang, for the benefit of having the hierarchy of address records for logical groups include the open update block list and the closed update block list in RAM, and the group address table/GAT maintained in flash memory (Gorobets, 0272).

Szubbocsev discloses: transmit the first set of logical-to-physical mappings (Szubbocsev, [Fig. 4A: step 414, transfer the selected zone/mapping table to the host]) from the first volatile memory device (Szubbocsev, [Fig. 1: memory 132; embedded memory 132 is DRAM in ASIC controller 106. It is well known that ASIC controllers are assembled with SRAM or DRAM]) to the second volatile memory device (Szubbocsev, [Fig. 1: host memory 105, DRAM]) based at least in part on a size of the first set of logical-to-physical mappings equaling the storage capacity of the first volatile memory device (Szubbocsev, [0024 - In Fig. 1, controller 106 retrieves the first mapping table 134a from the main memory 102 in a sequence of exchanges. During the exchanges, a portion, or zone, of physical-to-logical address mappings is read out into the embedded memory 132/first volatile memory, thereby implying that the size of the first L2P mappings/134a is equal to the capacity of the first volatile memory device 132]).
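The claim 5 limitation mapped to Szubbocsev (flushing a set of L2P mappings to host memory once they fill the controller's volatile memory) can be sketched as follows. The sizes, names, and dict-based "host buffer" are illustrative assumptions, not the reference's implementation:

```python
# Hypothetical sketch: a controller accumulates logical-to-physical (L2P)
# entries in a small internal volatile memory and transmits them to the
# host-resident second volatile memory once they fill its allocated capacity.

CONTROLLER_MEM_BYTES = 4096  # assumed capacity of the first volatile memory
ENTRY_BYTES = 8              # assumed size of one L2P entry

host_buffer = {}             # stands in for the second volatile memory device

def flush_needed(num_entries: int) -> bool:
    """Flush once the L2P set equals the controller memory's capacity."""
    return num_entries * ENTRY_BYTES >= CONTROLLER_MEM_BYTES

def flush_to_host(l2p: dict) -> None:
    """Transmit the mapping set to the host buffer and free controller memory."""
    host_buffer.update(l2p)  # transfer to the second volatile memory
    l2p.clear()              # release the first volatile memory
```

The size-based trigger here mirrors the "equaling the storage capacity of the first volatile memory device" language the rejection maps onto Szubbocsev's zone transfers.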
Therefore, it would have been obvious to a person of ordinary skill at the time of filing to incorporate the host device interface of Szubbocsev into the host-based processing system of Byun, Huang, and Gorobets, for the benefit of enabling a host device to read from the memory of the memory device (Szubbocsev, 0011).

As per Claim 7, the rejection of claim 5 is incorporated, and Byun, Huang, Gorobets, and Szubbocsev disclose wherein the controller (Huang, [Fig. 1: controller 100]) is further configured to cause the apparatus to: determine that the folding operation has been completed (Huang, [0140 - In Fig. 21, the bottom line shows the stages of the D1-to-D3 folding process. Three D1 blocks are available for folding into one D3 block, so that all D1 data pages are available for folding to D3. The fine phase then follows; again the word lines x, y, and z are loaded into the read/write latches and programmed into the D3 word line for the fine write. This completes the first, foggy, and fine stages, and the data can then be read out]; [0142 - In Fig. 21, the transfers at 735 and 737 are pipelined with the fine programming phase, as were the transfers at 731 and 733 hidden behind the initial phases 701-707, which provided the data subsequently transferred out of RAM at 721. This process then continues until the transfer is complete]; [0158 - When the controller's firmware recognizes that it is approaching the end of a write command, it can set a folding control flag which tells the folding task to continue so as to end on a fine programming step/last step]); receive, from the second volatile memory device, the first set of logical-to-physical mappings based at least in part on the folding operation being completed (Huang, [Fig. 21 shows completion of the folding operation]; [0145 - In Fig. 22B, for balanced folding, it is preferable that the amount of folding output be faster than the amount of D1 write input in the second NVM. The reason is to be able to flush out the data in D1 to D3 faster than the system is taking in new host data to D1, in order to better prepare system D1 resources]; [0149 - The system performance is improved by increasing the amount of host-to-RAM transfer; these citations imply that after folding is completed, due to the host flush, the memory system receives the accumulated L2P mapping from the host. This is similar to Para-0104 of the spec]); and update, based at least in part on the first set of logical-to-physical mappings and the folding operation being completed (Huang, [Fig. 21 shows completion of the folding operation]), a set of entries of a logical-to-physical mapping table (Huang, [0020 - Updates are at the logical sector level, and a write pointer points to the corresponding physical sectors in a block to be written. The mapping information is buffered in RAM and eventually stored in a sector allocation table in the main memory]) that map logical addresses of the apparatus to physical addresses (Huang, [0017 - The memory system/controller keeps track of how the logical address space is mapped into the physical memory, but the host is unaware of this]) of the apparatus (Huang, [0099 - Figs. 10A(i)-10A(iii) show the mapping between a logical group and a metablock]; [0101 - Fig. 10B shows the mapping between logical groups and metablocks. Each logical group 380 is mapped to a unique metablock 370, except for a small number of logical groups in which data is currently being updated. After a logical group has been updated, it may be mapped to a different metablock. The mapping information is maintained in a set of logical-to-physical directories]).

Therefore, it would have been obvious to a person of ordinary skill at the time of filing to incorporate the mapping of Huang into the host-based processing system of Byun and Gorobets, for the benefit of mapping between a logical group and a metablock.
The metablock of the physical memory has N physical sectors for storing N logical sectors of data of a logical group. When the logical sectors are in contiguous logical order, the data are stored in the metablock in the same logical order (Huang, 0099). Szubbocsev clarifies, receive, from the second volatile memory device (Szubbocsev, [Fig. 1: Memory 105 in host 108]), the first set of logical-to-physical mappings based at least in part on the folding operation being completed (Szubbocsev, [Fig. 4B: step 421, Receive a write request from host 108]; [0037 – In Fig. 4B, step 423, the routine looks up a physical memory address in the first mapping table 134a using the logical address/L2P mapping contained in the write request sent from host 108. In Fig. 4B, step 424, the data in the write request is written to memory device 102 at the translated physical address; the writing to the NVM device implies that folding is complete]), update, based at least in part on the first set of logical-to-physical mappings and the folding operation being completed (Szubbocsev, [Fig. 4B]), a set of entries of a logical-to-physical mapping table that map logical addresses of the apparatus to physical addresses of the apparatus (Szubbocsev, [0038 – In Fig. 4B, after step 423, at step 425, routine 420 re-maps/updates at least a portion/subset of the first mapping table 134a in response to writing the main memory 102]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the host device interface of Szubbocsev into the host-based processing system of Byun, Huang, Gorobets for the benefit of enabling a host device to directly read from the memory of the memory device (Szubbocsev, 0011). As per Claim 17, it is similar to claim 5 and therefore the same rejections are incorporated. As per Claim 19, it is similar to claim 7 and therefore the same rejections are incorporated. Claim 8 is rejected under AIA 35 U.S.C. 
103 as being unpatentable over Byun (20200174690) in view of Huang et al (20110149650), Gorobets et al (20050144365) and Yano et al (20100037011). As per Claim 8, the rejection of claim 1 is incorporated and Byun, Huang, Gorobets disclose receiving the accumulated data. Yano further discloses, receive the accumulated data at the first volatile memory device (Yano, [0190 – In Fig. 9, the input data from the host is first written in the first memory area 11/first volatile memory and the data is stored in the first memory area 11 for a certain period]) in respective data chunks having sizes corresponding to the storage capacity of the first volatile memory device (Yano, [0198 – In Fig. 9, controller 10 determines the physical address for data writing based on the logical address of the input data and the logical address and the physical address of the selected entry. The controller 10 instructs the volatile semiconductor memory including the first memory area 11 to write the input data in the area designated by the physical address, in step ST8, thereby implying receiving the data in chunk sizes corresponding to the size of the first volatile memory]; [0200 - In the case where the input data from the host is larger than the page size, plural entries in the cache management table may be required. In such a case, the controller 10 updates the plural entries by repeating the process in Fig. 9]), the controller further configured to cause the apparatus to: delete a data chunk from the first volatile memory device (Yano, [Fig. 11: step ST8', Invalidate entry in first memory area 11 corresponding to written data, thereby implying deleting the data chunk in the first volatile memory]) after writing the data chunk to the second non-volatile memory device and before receiving a next data chunk (Yano, [Fig. 11: step ST7', Write data in third memory area/second NVM]). 
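For readers mapping the claim 8 language to the Yano citations, the receive, write, delete cycle described above can be sketched as a simple loop. This is an editorial illustration only, not code from Yano or any cited reference; the buffer capacity, function name, and page list are hypothetical.

```python
# Illustrative sketch of the claim-8 cycle: accumulated host data arrives in
# chunks sized to the small controller buffer (first volatile memory), each
# chunk is written to NAND (second non-volatile memory), and the chunk is
# deleted from the buffer before the next one is accepted. Names hypothetical.

BUFFER_CAPACITY = 16 * 1024  # e.g. a 16 KB controller RAM staging buffer


def drain_accumulated_data(accumulated: bytes, nvm_pages: list) -> int:
    """Move host-accumulated data into NVM one buffer-sized chunk at a time."""
    written = 0
    offset = 0
    while offset < len(accumulated):
        # Receive one chunk whose size corresponds to the buffer's capacity.
        chunk = accumulated[offset:offset + BUFFER_CAPACITY]
        buffer = bytearray(chunk)          # stage in first volatile memory
        nvm_pages.append(bytes(buffer))    # write the chunk to the second NVM
        buffer.clear()                     # delete the chunk before the next receive
        written += len(chunk)
        offset += len(chunk)
    return written
```

The point of the sketch is only the ordering: the staging buffer never holds more than one chunk, and each chunk is invalidated after its NVM write, mirroring steps ST7'/ST8' as the examiner reads them.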
Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the improved write performance of Yano into the host-based processing system of Byun, Huang, Gorobets for the benefit of using a semiconductor storage device comprising nonvolatile semiconductor memory such as a NAND type flash memory with a specified unit of erasing, reading, and writing (Yano, 0101). Claim 11 is rejected under AIA 35 U.S.C. 103 as being unpatentable over Byun (20200174690) in view of Huang et al (20110149650), Gorobets et al (20050144365) and Cui et al (20210181940). As per Claim 11, the rejection of claim 1 is incorporated and Byun, Huang, Gorobets disclose, wherein the pageline of the second non-volatile memory device corresponds to a respective first page of each physical block (Gorobets, [0121 - Figs. 2, 3A(i)-3A(iii) show the mapping between a logical group and a metablock. The metablock has N physical sectors/pages for storing N logical sectors of data of a logical group]; [0387 – In Fig. 21, when the logical pages are filled in logically sequential order, the planes are visited in cyclic order with the first page in the first plane, the second page in the second plane, etc. After the last plane is reached, the filling returns cyclically to start from the first plane again in the next metapage]; [0528 - An index of logical units recorded in a block is stored in nonvolatile memory after every N writes]) of a group of physical blocks (Gorobets, [0018 - Each physical group/metablock is erasable as a unit and can be used to store a logical group of data]) included in a virtual block of the second non-volatile memory device (Gorobets, [0391 – In Fig. 21, a metapage is formed by multiple logical pages]; [0386 – In Fig. 21, when combining multiple planes, a maximum aggregated unit of parallel read or program could be regarded as a metapage of memory cells, where the metapage is constituted by a page from each of the multiple planes. 
A metapage such as MP0 has four pages, one from each of the planes, P0, P1, P2 and P3, storing in parallel logical pages LP0, LP1, LP2, LP3]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the metapage of Gorobets into the host-based processing system of Byun, Huang for the benefit of using a metapage or virtual block formed by multiple logical pages, one in each plane. Each logical page may consist of one or more logical units. As data are being recorded logical unit by logical unit into a block across the planes, each logical unit will fall in one of the four memory planes (Gorobets, 0391). It is well-known that physical blocks are aggregated into a single logical entity to which data is written. Huang clarifies the virtual block based on logical-to-physical mapping as follows, wherein the pageline of the second non-volatile memory device corresponds to a respective first page of each physical block of a group of physical blocks (Huang, [0017 - The memory system maps data between the logical address space and pages of the physical blocks of memory. The memory system keeps track of how the logical address space is mapped into the physical memory]) included in a virtual block (Huang, [0046 - Figs. 15-18 show the use of a virtual update block]; [0124 – In on-chip data folding, all D1 blocks allocated to an update group/UG for a Logical Group are located in the same die/plane. In a multi-die/plane configuration, the block selection algorithm attempts to open virtual update blocks in all dies/planes evenly]) of the second non-volatile memory device (Huang, [0122 – In Fig. 18, the three logical groups, LG X, LG X+1, LG X+2, that will be stored in a common D3 metablock such as 401 are a Logical Group Triplet. Prior to folding, all related UGs for a logical group triplet are consolidated to a single UB each, as shown in Fig. 17, where UB 403 and UB 409 are consolidated for LG X+1. 
The data from the original block 401 for LG X and LG X+2 is then used to be folded into the new block 401]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the virtual update block of Huang into the host-based processing system of Byun, Gorobets for the benefit of having the virtual update block or VUB consist of three full update blocks. Such a VUB will then be the update block for a D3 block (Huang, 0117). Gorobets, Huang disclose that in NAND, the organization of data across multiple planes, each containing sets/groups of blocks, allows the controller to use unique block indexes to perform concurrent operations and locate physical pages. Cui clarifies the use of unique block indexes as follows, wherein the pageline of the second non-volatile memory device (Cui, [0041 - The memory array 120 includes several memory cells arranged in a number of devices, planes, sub-blocks, blocks, or pages]; [0042 - Data is written to NAND memory device 110 in pages/pageline/row of pages]) corresponds to a respective first page of each physical block (Cui, [0041 - 1536 pages per block]) of a group (Cui, [Abstract - Set of blocks]) of physical blocks included in a virtual block (Cui, [0035 - A given superblock/virtual block is composed of blocks in different planes of a die, each block having an index specific to the die, thereby implying determining the first page by its index within the block]) of the second non-volatile memory device (Cui, [0041 - A 48 GB TLC NAND memory device can include 18,592 bytes of data per page or 16,384+2208 bytes, 1536 pages per block, 548 blocks per plane, and 4 or more planes per device]), wherein the virtual block comprises the group of physical blocks (Cui - [0034 - The blocks/physical blocks for a superblock/virtual block]; [0035 – In Fig. 
1, the superblock entry includes a set/group of blocks from the array 120]), and wherein a physical block comprises two or more physical pages (Cui, [0022 - Table 1: Pages per block]). Therefore it would have been obvious to a person of ordinary skill at the time of filing to incorporate the unique block address of Cui into the host-based processing system of Byun, Gorobets, Huang for the benefit of having the controller create a superblock entry in the translation table of the memory device. The superblock or virtual block entry includes a set of blocks from the array. The set of blocks has block indexes that are the same across multiple die of the array. The number of unique block indexes is equal to the number of planes, with the blocks located in different planes. Thus, a superblock is composed of blocks in different planes of a die, each block having an index specific to the die (Cui, 0035).

Response to Arguments

The Applicant's arguments filed on November 26, 2025 have been fully considered, but they are not persuasive. Applicant argues: ‘Further, the Office Action provides no explanation as to how such a single metric of Byun could be construed as two separate metrics, as claimed, nor does the Office Action point to any portions of Byun that….construed as "a storage capacity of the first volatile memory device," and "a threshold quantity of data," as claimed’. (Rem, Pg. 12) Response: This argument is incorrect. Please see the 112(b). As mentioned in the 112(b), the spec does not disclose how ‘threshold’ is determined. The combination of Byun, Huang, and Gorobets discloses ‘a storage capacity of the first volatile memory’ and ‘a threshold quantity of data’. Byun, Para-0049 recites that first memory 144 may be a data memory, a write buffer/cache, a read buffer/cache, a data buffer/cache, etc., in order to store data required for performing data write and read operations between the host 102 and the memory device 150. And Byun, Fig. 
9, Para-0122 recites, ‘When the RAM 6222 is used as the buffer memory, the RAM 6222 may buffer data to be transmitted to the memory device 6230/NVM from the host 6210’. Depending on the size of the data, the data received from the host is buffered in first memory 144/RAM 6222. Huang discloses the same concept in Fig. 20, by referring to ‘volatile RAM 511 is relatively small’, and Huang, Para-0128 recites, ‘Data is transferred from a host 501 onto the memory 503, where it is initially stored on the volatile buffer memory RAM 511’, and then proceeds to recite that the data initially stored/buffered in RAM 511 is written into the second NVM. See at least Huang, Figs. 21-22, Para-0143. The combination of Byun, Huang, and Gorobets discloses buffering at the first memory device, thereby implying receiving ‘accumulated data’ from the host. In other words, data is transmitted from the larger host memory buffer to the smaller controller buffer, RAM 511/memory 144, which acts as a high-speed staging area for data before it is written to the second NVM. Huang, Para-0015 recites, ‘many pages of data are stored/written in one block, and a page may store multiple sectors of data. Further, two or more blocks are often operated together as metablocks, and the pages of such blocks logically linked together as metapages. A page or metapage of data are written and read together for parallelism’. It is well-known in the prior art that data is written to NAND flash in pages (and erased in blocks). In other words, because NAND flash is programmed/written at a page level, the controller buffers incoming/accumulated data in RAM 511/memory 144 until it reaches the size needed to fill NAND physical pages in a block/pageline/row of pages, thereby satisfying the ‘threshold’. Since the spec does not recite how ‘threshold’ is determined, the combination of Byun, Huang, and Gorobets discloses ‘threshold’ and its relationship with the capacity/size of the first volatile memory. 
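The buffering relationship argued above, a small controller RAM accumulating host data until it can fill one full pageline of the second NVM, can be sketched as follows. All sizes and names here are hypothetical illustrations, not drawn from Byun, Huang, or Gorobets.

```python
# Hypothetical sketch: the controller accumulates incoming host writes in a
# small volatile buffer and programs the second NVM only once the accumulated
# size satisfies the threshold, i.e. one full pageline (one page per plane).

PAGE_SIZE = 16 * 1024   # bytes per NAND physical page (illustrative)
NUM_PLANES = 4          # pages programmed in parallel across planes
PAGELINE_THRESHOLD = PAGE_SIZE * NUM_PLANES  # data stored by one pageline


class PagelineBuffer:
    def __init__(self):
        self._staged = bytearray()          # first volatile memory (staging)
        self.programmed_pagelines = []      # second NVM, as programmed pagelines

    def receive(self, data: bytes) -> None:
        """Accumulate host data; flush whenever a full pageline is staged."""
        self._staged.extend(data)
        while len(self._staged) >= PAGELINE_THRESHOLD:
            line = bytes(self._staged[:PAGELINE_THRESHOLD])
            del self._staged[:PAGELINE_THRESHOLD]
            # Split the pageline into one page per plane and "program" it.
            self.programmed_pagelines.append(
                [line[i * PAGE_SIZE:(i + 1) * PAGE_SIZE] for i in range(NUM_PLANES)]
            )
```

Under this model, data smaller than the threshold simply stays staged, which is the relationship between buffer capacity and pageline size that the response attributes to the combination of references.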
Gorobets also discloses the controller volatile memory buffer, Fig. 1, RAM 130, and the consolidation and compaction process in more detail. Applicant further argues, ‘Moreover, even if the size capacity of the integrated memory on the host, as described in Byun…..suggested "a threshold quantity of data,"……there is no indication that the size capacity of the integrated memory on the host "corresponds to a quantity of data stored by a pageline of the second non-volatile memory device," as recited in independent claim 1.’ (Rem, Pg. 12) Response: This argument is incorrect. Please see the 112(b). As mentioned in the 112(b), the steps determining how ‘threshold’ is calculated are undisclosed. So how the ‘accumulated data’ satisfies the ‘threshold’ is indefinite. There is also no disclosure of the steps to verify if ‘threshold’ corresponds to a quantity/size of data stored by a pageline in the second NVM, after the final folding. In essence, the limitation lacks written description support in the spec. Byun, at least Para-0006, recites that the available capacity of the host memory/second volatile device is larger than or equal to a size of the current valid data received from the memory device. This suggests that the host memory accumulates a larger quantity of data, which is then transmitted by buffering at memory 144, as explained above. Byun, Para-0034, recites that the host provides the memory system with information on the available capacity of the integrated memory 104/host buffer. This suggests that the controller knows when ‘threshold’ data corresponding to a pageline is accumulated at the host. It is well known that in the prior art the host uses logical addresses but the controller uses physical addresses. But since the spec does not recite any communication between them, no logical-to-physical address translations associated with the transfer(s), maintained by the controller, are disclosed. 
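As background, the controller-maintained logical-to-physical translation that Huang and Szubbocsev are cited for can be modeled minimally as a table that is re-mapped when a write or folding operation relocates data. This is an illustrative sketch under assumed names, not an implementation from any cited reference.

```python
# Minimal, hypothetical model of a controller-side logical-to-physical (L2P)
# mapping table: logical addresses from the host are translated to physical
# addresses, and entries are updated when folding/writing relocates data.

class L2PTable:
    def __init__(self):
        self._map = {}  # logical address -> physical address

    def translate(self, logical: int) -> int:
        """Look up the physical address for a host logical address."""
        return self._map[logical]

    def update(self, mappings: dict) -> None:
        """Apply a received set of L2P mappings (e.g. after folding completes)."""
        self._map.update(mappings)


table = L2PTable()
table.update({0x10: 0x800})   # initial placement of a logical unit
table.update({0x10: 0xC00})   # folding relocated the data, so the entry is re-mapped
```

The host sees only the logical address 0x10 throughout; only the controller's table entry changes, which is the division of labor both responses rely on.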
However, Huang, at least Para-0014, recites that the controller translates logical addresses received from the host into physical addresses within the memory array, and then keeps track of these address translations, which further supports knowledge of the host-accumulated data capacity by the controller and receiving the ‘threshold’ data by buffering into RAM 511. Huang clarifies the buffering and folding in Fig. 21, Para-0143, as it recites, ‘the RAM size for data transfer is set to 32 KB, so that there is a transfer of 16 KB of D1 data. In theory, the RAM is filled up with 32 KB of host data during the folding process (2×16 KB). Once 16 KB is transferred into the D1 memory (at 721), but not necessarily programmed in yet (at 723), the portion of the RAM that was holding the 16 KB data can be released to take in new data’. Huang, Para-0104 recites, ‘Due to requirement to store sequentially the logical data units in the blocks of the main memory, smaller and chaotic fragments of logical units from a series of host writes can be buffered in the cache portion and later reassembly in sequential order to the blocks in the main memory portion’. This suggests that buffering in RAM 511 facilitates the efficient writing of data to NVRAM pages/pageline by aggregating small writes into larger, sequential physical blocks. As shown in Gorobets, the folding process involves consolidating the accumulated changes from the buffer into organized pages in NVRAM. Hence the combination of Byun, Huang, and Gorobets discloses that in NAND flash memory, a folding operation results in a pageline (physical pages, e.g., 4 KB, 8 KB, 16 KB, or larger) of NVM data at the second NVM that corresponds to the threshold buffered at the RAM (see Huang, at least Figs. 21, 22A, 22B, 23). The spec describes a broad, general invention, but key terms and key steps are undisclosed. 
The spec takes a structural and functional leap when it recites, ‘wherein….a size of the accumulated data satisfying a threshold quantity of data that corresponds to a quantity of data stored by a pageline of the second non-volatile memory device’. The spec does not demonstrate that the applicant was in possession of the claimed invention at the time of filing. Hence claim 1 is indefinite.

Examiner Notes: The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

1. ‘Micron Technical Note: NAND Flash 101’, 2006 - The 2Gb NAND Flash device is organized as 2048 blocks, with 64 pages per block (see Figure 3). Each page is 2112 bytes, consisting of a 2048-byte data area and a 64-byte spare area. The spare area is typically used for ECC, wear-leveling, and other software overhead functions, although it is physically the same as the rest of the page. Host data is connected to the NAND Flash memory via an 8-bit- or 16-bit-wide bidirectional data bus. For 16-bit devices, commands and addresses use the lower 8 bits (7:0). The upper 8 bits of the 16-bit data bus are used only during data-transfer cycles (Pg. 6). https://user.eng.umd.edu/~blj/CS-590.26/micron-tn2919.pdf, Pgs. 1-27

2. ‘High-efficient superblock flash translation layer for NAND flash controller’, 2020 - A superblock is a set of a fixed number of blocks. On the one hand, we organize continuous logical addresses as logical superblocks, and map a logical superblock to a corresponding physical superblock with a mapping manager; on the other hand, a physical superblock can contain different physical blocks according to the physical block list. The difference is, the physical addresses can be discrete, while the logical addresses must be continuous, as shown in Fig. 1 and Table 1. https://ietresearch.onlinelibrary.wiley.com/doi/full/10.1049/el.2019.3526

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARVIND TALUKDAR whose telephone number is (303)297-4475. The examiner can normally be reached M-F, 10 am-6pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hosain Alam can be reached at 571-272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ARVIND TALUKDAR/ Primary Examiner, Art Unit 2132

Prosecution Timeline

Nov 10, 2023
Application Filed
Nov 16, 2024
Non-Final Rejection — §103, §112
Feb 21, 2025
Response Filed
Mar 22, 2025
Final Rejection — §103, §112
May 22, 2025
Response after Non-Final Action
Jun 20, 2025
Request for Continued Examination
Jun 25, 2025
Response after Non-Final Action
Aug 23, 2025
Non-Final Rejection — §103, §112
Nov 20, 2025
Applicant Interview (Telephonic)
Nov 26, 2025
Response Filed
Dec 09, 2025
Examiner Interview Summary
Feb 21, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602317
MEMORY DEVICE HARDWARE HOST READ ACTIONS BASED ON LOOKUP OPERATION RESULTS
2y 5m to grant Granted Apr 14, 2026
Patent 12591520
LINEAR TO PHYSICAL ADDRESS TRANSLATION WITH SUPPORT FOR PAGE ATTRIBUTES
2y 5m to grant Granted Mar 31, 2026
Patent 12591382
STORAGE DEVICE OPERATION ORCHESTRATION
2y 5m to grant Granted Mar 31, 2026
Patent 12579074
HARDWARE PROCESSOR CORE HAVING A MEMORY SLICED BY LINEAR ADDRESS
2y 5m to grant Granted Mar 17, 2026
Patent 12566712
A RING BUFFER WITH MULTIPLE HEAD POINTERS
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
81%
Grant Probability
84%
With Interview (+3.5%)
2y 9m
Median Time to Grant
High
PTA Risk
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
