Prosecution Insights
Last updated: April 19, 2026
Application No. 18/605,982

LATENCY BASED STORAGE DECISIONS IN A MEMORY DEVICE AND OPERATING METHOD THEREOF

Non-Final OA (§103)

Filed: Mar 15, 2024 · Examiner: WARREN, TRACY A · Art Unit: 2137
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd. · OA Round: 3 (Non-Final)

Grant Probability: 82% (Favorable) · Expected OA Rounds: 3-4
Time to Grant: 2y 6m · Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 82% — above average (344 granted / 422 resolved; +26.5% vs TC avg)
Interview Lift: +6.0% — moderate lift, comparing resolved cases with vs. without interview
Typical Timeline: 2y 6m avg prosecution · 22 applications currently pending
Career History: 444 total applications across all art units
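The headline numbers above are simple ratios over the examiner's resolved cases. A minimal sketch of how they could be derived follows; the function names and the lift definition (percentage-point difference) are assumptions for illustration, not the analytics tool's actual method:

```python
# Hypothetical reconstruction of the dashboard ratios shown above.
# The lift definition (percentage-point difference) is an assumption.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point difference between interviewed and non-interviewed outcomes."""
    return rate_with - rate_without

# 344 granted of 422 resolved -> 81.5%, displayed as 82% at integer precision.
print(f"Career allow rate: {allow_rate(344, 422):.1f}%")
# 88% with interview vs. 82% baseline -> the +6.0 shown as "Interview Lift".
print(f"Interview lift: {interview_lift(88.0, 82.0):+.1f} pts")
```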

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 17.6% (-22.4% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 422 resolved cases

Office Action

§103
NON-FINAL REJECTION

DETAILED ACTION

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 29, 2025 has been entered.

Response to Amendment

The Amendment filed December 29, 2025 has been entered. Claims 1-2, 4-8, 10-13, 15-17, and 19-21 remain pending in the application. Claims 3, 14, and 18 have been cancelled. The Examiner notes that claim 9 was canceled in the amendment filed July 21, 2025 but is shown as “currently amended” in the amendments filed December 29, 2025. A claim that was previously canceled may be reinstated only by adding the claim as a “new” claim with a new claim number. Therefore, claim 9 remains canceled. Applicant's amendments to the claims have overcome the 35 U.S.C. 112(a), 35 U.S.C. 112(b), and 35 U.S.C. 103 rejections previously set forth in the Final Office Action mailed October 3, 2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections

Claims 1, 2, 10, and 13 are objected to because of the following informalities: Claim 1 recites “wherein when the register write operation is performed, the register control circuit is configured to map the address to at least one register address among a plurality of register addresses based on a size of the data and a size of each of the plurality of registers and is configured to generate a mapping table including a mapping relationship between the address and the register address.” The phrase “the register address” should recite “the at least one register address.” Claims 10…recite similar limitations and are objected to for the same reason. Claim 2 recites “a register control circuit” but should recite “the register control circuit.” Claims 13…recite similar limitations and are objected to for the same reason. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-5, 7-8, 16-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Choo et al. (US 2019/0259732), Sun et al. (US 2018/0373626), Kotra et al. (US 2023/0205693), and Williams (US 2024/0403190).

Regarding claim 1, Choo et al. disclose: a plurality of memory banks (FIG. 2 banks 121-151); a plurality of registers ([0094] the buffer die may include at least one register to store addresses, which indicate areas of the first and second banks where data is stored. The buffer die may include at least one register to temporarily store data to copy); a data I/O buffer configured to transmit and receive data ([0023] The buffer die 110 may include circuit components configured to buffer signals transmitted between an external device external to the memory device 100 (e.g., devices accessing the memory device 100 such as a host, a processor, a memory controller, etc.) and the first to fourth memory dies 120 to 150. For example, the buffer die 110 may include a buffer circuit (not shown), thereby compensating signal integrity of signals received from the external device and signals received from the first to fourth memory dies 120 to 150. For example, the buffer die 110 may transmit a command, an address, and a write data transmitted from the external device to at least one of the first to fourth memory dies 120 to 150. The buffer die 110 may transmit a read data transmitted from the first to fourth memory dies 120 to 150 to the external device); a plurality of memory dies (FIG. 2 memory dies 120-150), wherein the plurality of memory dies comprises at least one memory bank among the plurality of memory banks (FIG. 2 each die 120-150 comprises banks 121-151); a buffer die (FIG. 2 buffer die 110), and …wherein the buffer die comprises the plurality of registers ([0094] the buffer die may include at least one register) and the control circuit ([0023] buffer die 110 may include circuit components configured to perform logic functions), wherein the plurality of memory dies are separated from (FIG. 2 bumps 126; [0044] Bumps 126 may be disposed between the first memory die 120 and the buffer die 110) and stacked on the buffer die (FIG. 2) and are interconnected to one another by a plurality of through silicon vias ([0044] The first to fourth memory dies 120 to 150 may be stacked on buffer die 110 through the through silicon vias 128), wherein the control circuit is configured to determine a logic level of the low latency bit, and …the plurality of registers on the buffer die ([0094] the buffer die may include at least one register to store addresses, which indicate areas of the first and second banks where data is stored. The buffer die may include at least one register to temporarily store data to copy)… Choo et al.
do not appear to explicitly teach “a control circuit configured to receive a low latency bit and an address that is preset according to a type of the data,…wherein the control circuit is configured to determine a logic level of the low latency bit, and wherein, based on the low latency bit, the memory device is configured to store the data in a memory bank corresponding to the address among the plurality of memory banks or a register corresponding to the address among the plurality of registers…or configured to read the data from the memory bank corresponding to the address or the register corresponding to the address, wherein the memory device further comprises a register control circuit configured to be activated based on the low latency bit, and configured to perform a register write operation to store the data in the plurality of registers and is configured to perform a register read operation to read the data from the plurality of registers, wherein when the register write operation is performed, the register control circuit is configured to map the address to at least one register address among a plurality of register addresses based on a size of the data and a size of each of the plurality of registers and is configured to generate a mapping table including a mapping relationship between the address and the register address.”

However, Sun et al. disclose: a control circuit (FIG. 1 Controller) configured to receive a low latency bit and an address that is preset according to a type of the data (FIG. 2 DSM Hints, which includes Access latency, and Write Cmd Starting LBA/Read Cmd Starting LBA; FIG. 5 Latency Bit High/Low), …wherein the control circuit is configured to determine a logic level of the low latency bit (FIG. 3 step 304 Request (ri), access needs low latency according to cmd hint?; [0040] In FIG. 3, the controller 120 in the device 126 receives a new write command (ci) from the host 114 at 302. The write command may have data (di) associated with the command. In 304, the write command is evaluated to determine if there is a request (ri) for low latency according to a command hint), and wherein, based on the low latency bit (FIG. 3 step 304), the memory device is configured to store the data in a memory bank corresponding to the address among the plurality of memory banks (FIG. 3 Step 306 Store di in NAND flash, corresponding to the memory banks disclosed by Choo et al. supra; [0025] using the flash memory components for higher latency storage; [0031] the slow namespace entities may be associated with the NAND flash 122. This preference to separate the slow namespace entities and fast namespace entities is but one configuration and should be considered non-limiting) or a register corresponding to the address among the plurality of registers (FIG. 3 Step 312 Store di in SCM, corresponding to the registers disclosed by Choo et al. supra; [0025] storage class memory (SCM) may be used for respective low-latency data [0031] the fast namespace entities are associated with the storage class memory 124…This preference to separate the slow namespace entities and fast namespace entities is but one configuration and should be considered non-limiting)…or configured to read the data from the memory bank corresponding to the address (FIG. 4 step 404 Read from memory; [0041] the memory (either NAND flash, or storage class memory) is read) or the register corresponding to the address (FIG. 4 step 404 Read from memory; [0041] the memory (either NAND flash, or storage class memory) is read), wherein the memory device further comprises a register control circuit (FIG. 1 Controller) configured to be activated based on the low latency bit (FIG. 3 step 304 Request (ri), access needs low latency according to cmd hint? Yes; FIG. 5 Latency Bit High/Low), and configured to perform a register write operation to store the data in the plurality of registers (FIG. 3 step 312 Store di in SCM, corresponding to the registers disclosed by Choo et al. in claim 1) and is configured to perform a register read operation to read the data from the plurality of registers (FIG. 4 step 404 Read from memory; [0041] the memory (either NAND flash, or storage class memory) is read)…

Choo et al. and Sun et al. are analogous art because Choo et al. teach a memory device including a plurality of memory dies and a buffer die and Sun et al. teach providing low latency storage mechanisms. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Choo et al. and Sun et al. before him/her, to modify the teachings of Choo et al. with the Sun et al. teachings of storing low-latency data in storage class memory while storing higher latency data in flash memory because doing so allows for a higher quality of service for the user (Sun et al. [0025]).

Choo et al. and Sun et al. do not appear to explicitly teach “wherein when the register write operation is performed, the register control circuit is configured to map the address to at least one register address among a plurality of register addresses based on a size of the data and a size of each of the plurality of registers and is configured to generate a mapping table including a mapping relationship between the address and the register address.” However, Kotra et al.
disclose: wherein when the register write operation is performed, the register control circuit is configured to map the address to at least one register address among a plurality of register addresses based on a size of the data and a size of each of the plurality of registers and is configured to generate a mapping table including a mapping relationship between the address and the register address ([0040] the memory controller 140 includes a PIM register mapping table 142 to facilitate the use of the PIM register file 118 for expediting non-PIM instructions. The PIM register mapping table 142 maps memory locations to PIM registers. For example, to utilize a PIM register as a write buffer for a non-PIM instruction, the memory controller logic 130 remaps the write destination of the write data from the target memory location of the non-PIM write instruction to a PIM register. The memory controller logic 130 writes the write data to the PIM register using a PIM write command and updates the PIM register mapping table 142 to include an association between that PIM register and the target memory location).

Choo et al., Sun et al., and Kotra et al. are analogous art because Choo et al. teach a memory device including a plurality of memory dies and a buffer die; Sun et al. teach providing low latency storage mechanisms; and Kotra et al. teach expediting operations in a stacked memory device. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Choo et al., Sun et al., and Kotra et al. before him/her, to modify the teachings of Choo et al. and Sun et al. with the Kotra et al. teachings of generating a mapping table because doing so would facilitate the use of a register by remapping the write destination address of the write data to the register address.

Choo et al., Sun et al., and Kotra et al. do not appear to explicitly teach “based on a size of the data and a size of each of the plurality of registers.” However, Williams discloses: based on a size of the data and a size of each of the plurality of registers ([0103] a plurality of registers, each of which is a temporary storage structure for storing data of a given size). Choo et al., Sun et al., Kotra et al., and Williams are analogous art because Choo et al. teach a memory device including a plurality of memory dies and a buffer die; Sun et al. teach providing low latency storage mechanisms; Kotra et al. teach expediting operations in a stacked memory device; and Williams teaches register files for temporary data storage. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Choo et al., Sun et al., Kotra et al., and Williams before him/her, to modify the teachings of Choo et al., Sun et al., and Kotra et al. with Williams’ teachings of registers because such a modification would have amounted to little more than combining “familiar elements according to known methods” and would have been obvious because it would have done “no more than yield predictable results.” (MPEP 2143 I.A.) Registers are well-known structures for the temporary storage of data. Using a particular register to store data of a given size would have yielded the predictable result of the temporary storage of the given data in the register.

Regarding claim 2, Sun et al. further disclose: The memory device of claim 1, further comprising: a register control circuit (FIG. 1 Controller) that is configured to be activated when the low latency bit is at a first level (FIG. 3 step 304 Request (ri), access needs low latency according to cmd hint? Yes; FIG. 5 Latency Bit High/Low) and is configured to perform the register write operation (FIG. 3 step 312 Store di in SCM, corresponding to the registers disclosed by Choo et al.
in claim 1) and is configured to perform the register read operation (FIG. 4 step 404 Read from memory; [0041] the memory (either NAND flash, or storage class memory) is read), wherein, when the low latency bit is at a second level that is different from the first level (FIG. 3 step 304 Request (ri), access needs low latency according to cmd hint? No; FIG. 5 Latency Bit High/Low), the control circuit is configured to store the data in the memory bank corresponding to the address (FIG. 3 Step 306 Store di in NAND flash, corresponding to the memory banks disclosed by Choo et al. in claim 1 above) or configured to read the data from the memory bank corresponding to the address (FIG. 4 step 404 Read from memory; [0041] the memory (either NAND flash, or storage class memory) is read).

Regarding claim 4, Kotra et al. further disclose: The memory device of claim 2, wherein when the register write operation is performed,…the register corresponding to the address has a second size greater than the first size, the register control circuit is configured to map the address to one of the plurality of register addresses (FIG. 4; [0040]). The combination of Choo et al., Sun et al., Kotra et al., and Williams does not appear to explicitly teach “and when the data has a first size and the register corresponding to the address has a second size greater than the first size.” However, it would be obvious to one skilled in the art before the effective filing date of the claimed invention that the write operation can be mapped to the register when the size of the data is less than the size of the register. When the size of the data is less than the size of the register, the register is able to store the data.

Regarding claim 5, Kotra et al. further disclose: The memory device of claim 2, wherein…the register corresponding to the address has a second size greater than the first size, the register control circuit is configured to map the address to register addresses among the plurality of register addresses (FIG. 4; [0040]). The combination of Sun et al., Choo et al., Kotra et al., and Williams does not appear to explicitly teach “and when the data has a first size and the register corresponding to the address has a second size greater than the first size.” However, it would be obvious to one skilled in the art before the effective filing date of the claimed invention that the write operation can be mapped to the register when the size of the data is less than the size of the register. When the size of the data is less than the size of the register, the register is able to store the data.

Regarding claim 7, Kotra et al. further disclose: The memory device of claim 2, wherein when the register read operation is performed, the register control circuit is configured to transmit a control signal that indicates to the control circuit to read the data from the memory bank corresponding to the address when the register address corresponding to the address does not exist in the mapping table ([0058] the memory controller 610 takes up the non-PIM read instruction for dispatch to the memory device 612, the memory controller 610 first determines whether the memory location 624 hits on the PIM register mapping table 614; [0039] the memory controller logic 130 uses the PIM register file 118 as a memory side cache or prefetch buffer. For example, the memory controller logic 130 can prepopulate, based on a speculative algorithm, the PIM register file 118 with data loaded using a PIM load command.
If a non-PIM read instruction hits on the memory side cache, the memory controller logic can read the requested data from the PIM register file 118 using a PIM read command, which is faster than reading from the memory array because there is no need to open a memory row; It would be obvious to one skilled in the art before the effective filing date of the claimed invention that the data would be read from the memory bank because the register acts as a cache and the target address is to the memory bank. In caching, when there is a miss in the cache in response to a read request, the data is read from the main memory, which corresponds to the memory bank).

Regarding claim 8, Kotra et al. further disclose: The memory device of claim 2, wherein when the register read operation is performed, the register control circuit is configured to read the data from at least one register corresponding to the at least one register address that is mapped to the address based on the mapping table ([0040] When a non-PIM read instruction hits on the PIM register mapping table 142 (i.e., the target memory location of the non-PIM read instruction matches a memory location in the PIM register mapping table 142), the source of the non-PIM read instruction is remapped from the target memory location of the non-PIM read instruction to the PIM register associated with that memory location, and the data is read from the PIM register using a PIM read command).

Regarding claim 16, Choo et al. disclose: An operating method of a memory device, comprising: …the memory device that comprises a buffer die (FIG. 2 buffer die 110) including a plurality of registers ([0094] the buffer die may include at least one register to store addresses, which indicate areas of the first and second banks where data is stored. The buffer die may include at least one register to temporarily store data to copy) and a control logic ([0023] buffer die 110 may include circuit components configured to perform logic functions), and further comprises at least one memory die (FIG. 2 memory dies 120-150) stacked on and separated from the buffer die (FIG. 2 bumps 126; [0044] Bumps 126 may be disposed between the first memory die 120 and the buffer die 110) and including a plurality of memory banks (FIG. 2 each die 120-150 comprises banks 121-151),…

Choo et al. do not appear to explicitly teach “receiving, by the memory device… data, a low latency bit that is preset according to a type of the data, a command, and an address from a host device; and determining a logic level of the low latency bit, wherein when the command is a write command, the operating method of the memory device comprises: mapping the address to at least one register address among a plurality of register addresses when the logic level of the low latency bit is at a first level, as determined by the control logic; generating a mapping table including a mapping relationship between the address and the at least one register address; and storing the data in a register corresponding to the at least one register address that was mapped, wherein the mapping of the address comprises mapping the address to at least one register address among the plurality of register addresses based on a size of the data and respective sizes of each of the plurality of registers.” However, Sun et al. disclose: receiving by the memory device (FIG. 1 Storage Device 126)…data, a low latency bit that is preset according to a type of the data, a command, and an address from a host device (FIG. 1 Host 114; FIG. 2 DSM Hints, which includes Access latency, and Write Cmd Starting LBA/Read Cmd Starting LBA; FIG. 5 Latency Bit High/Low); and determining a logic level of the low latency bit (FIG. 3 step 304 Request (ri), access needs low latency according to cmd hint?; [0040] In FIG. 3, the controller 120 in the device 126 receives a new write command (ci) from the host 114 at 302. The write command may have data (di) associated with the command. In 304, the write command is evaluated to determine if there is a request (ri) for low latency according to a command hint), wherein when the command is a write command (FIG. 3 Receive a new write cmd (ci), from host, data (di)), the operating method of the memory device comprises: …storing the data in a register corresponding to the at least one register address that was mapped (FIG. 3 Step 312 Store di in SCM, corresponding to the registers disclosed by Choo et al. supra; [0025] storage class memory (SCM) may be used for respective low-latency data [0031] the fast namespace entities are associated with the storage class memory 124…This preference to separate the slow namespace entities and fast namespace entities is but one configuration and should be considered non-limiting). The motivation for combining is based on the same rationale presented for rejection of independent claim 1.

Choo et al. and Sun et al. do not appear to explicitly teach “mapping the address to at least one register address among a plurality of register addresses when the logic level of the low latency bit is at a first level, as determined by the control logic; generating a mapping table including a mapping relationship between the address and the at least one register address…wherein the mapping of the address comprises mapping the address to at least one register address among the plurality of register addresses based on a size of the data and respective sizes of each of the plurality of registers.” However, Kotra et al. disclose: …mapping the address to at least one register address among a plurality of register addresses (FIG. 1 PIM Register Mapping Table 132; FIG. 4 PIM Register Mapping Table 404; [0040] the memory controller 140 includes a PIM register mapping table 142 to facilitate the use of the PIM register file 118 for expediting non-PIM instructions. The PIM register mapping table 142 maps memory locations to PIM registers. For example, to utilize a PIM register as a write buffer for a non-PIM instruction, the memory controller logic 130 remaps the write destination of the write data from the target memory location of the non-PIM write instruction to a PIM register. The memory controller logic 130 writes the write data to the PIM register using a PIM write command and updates the PIM register mapping table 142 to include an association between that PIM register and the target memory location) when the logic level of the low latency bit is at a first level, as determined by the control logic (Sun et al. above (FIG. 3 step 304) disclose that the data is written to the low latency memory corresponding to Choo’s registers when the latency bit is at a first level); generating a mapping table including a mapping relationship between the address and the at least one register address (FIG. 4 PIM Register Mapping Table 404); and …wherein the mapping of the address comprises mapping the address to at least one register address among the plurality of register addresses (FIG. 1 PIM Register Mapping Table 132; FIG. 4 PIM Register Mapping Table 404; [0040] the memory controller 140 includes a PIM register mapping table 142 to facilitate the use of the PIM register file 118 for expediting non-PIM instructions. The PIM register mapping table 142 maps memory locations to PIM registers. For example, to utilize a PIM register as a write buffer for a non-PIM instruction, the memory controller logic 130 remaps the write destination of the write data from the target memory location of the non-PIM write instruction to a PIM register.
The memory controller logic 130 writes the write data to the PIM register using a PIM write command and updates the PIM register mapping table 142 to include an association between that PIM register and the target memory location) based on a size of the data and respective sizes of each of the plurality of registers. The motivation for combining is based on the same rationale presented for rejection of independent claim 1.

Choo et al., Sun et al., and Kotra et al. do not appear to explicitly teach “based on a size of the data and respective sizes of each of the plurality of registers.” However, Williams discloses: based on a size of the data and respective sizes of each of the plurality of registers ([0103] a plurality of registers, each of which is a temporary storage structure for storing data of a given size). The motivation for combining is based on the same rationale presented for rejection of independent claim 1.

Regarding claim 17, the combination of Choo et al., Sun et al., Kotra et al., and Williams further discloses: The operating method of claim 16, wherein when the command is the write command and the logic level of the low latency bit is at a second level that is different from the first level (FIG. 3 step 304 Request (ri), access needs low latency according to cmd hint? No; FIG. 5 Latency Bit High/Low), the operating method of the memory device further comprises storing the data in a memory bank corresponding to the address among the plurality of memory banks (FIG. 3 Step 306 Store di in NAND flash, corresponding to the memory banks disclosed by Choo et al.).
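Taken together, the mechanism this rejection attributes to the combined references has a write path (route on the low latency bit; map the host address to one or more register addresses by comparing the data size to the register size; record the mapping in a table) and a read path (a mapping-table hit reads the registers, a miss falls back to the memory bank). A minimal sketch of that behavior follows; all names, the register size, and the dictionary-based model are hypothetical illustration, not any reference's actual implementation:

```python
# Illustrative sketch of the latency-based storage decision described in the
# rejection. REG_SIZE and every structure here are assumptions for the sketch.

REG_SIZE = 64  # assumed size of each register, in bytes


class LatencyRouter:
    def __init__(self, num_regs: int):
        self.regs = {}                   # register address -> data chunk
        self.bank = {}                   # memory-bank address -> data
        self.map_table = {}              # host address -> list of register addresses
        self.free = list(range(num_regs))

    def write(self, low_latency: bool, addr: int, data: bytes) -> None:
        if not low_latency:
            self.bank[addr] = data       # normal path: store in the memory bank
            return
        # Register write: map the address to one or more register addresses
        # based on the data size and the size of each register.
        n = -(-len(data) // REG_SIZE)    # ceil(len(data) / REG_SIZE)
        reg_addrs = [self.free.pop() for _ in range(n)]
        for i, r in enumerate(reg_addrs):
            self.regs[r] = data[i * REG_SIZE:(i + 1) * REG_SIZE]
        self.map_table[addr] = reg_addrs  # mapping-table entry

    def read(self, low_latency: bool, addr: int) -> bytes:
        # Register read: on a mapping-table hit, read from the mapped registers;
        # on a miss, fall back to the memory bank (the cache-miss analogy above).
        if low_latency and addr in self.map_table:
            return b"".join(self.regs[r] for r in self.map_table[addr])
        return self.bank.get(addr, b"")
```

For instance, writing 100 bytes with the bit set maps the address to two 64-byte registers; a later low-latency read reassembles the data from the mapping table, while an unmapped address falls through to the bank.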
Regarding claim 19, the combination of Choo et al., Sun et al., Kotra et al., and Williams further disclose: The operating method of claim 16, wherein when the command is a read command, the operating method of the memory device further comprises: determining whether a register address mapped to the address exists within the mapping table when the logic level of the low latency bit is at the first level (according to the low latency bit as taught by Sun et al. in claim 16; Kotra et al. further disclose [0058] the memory controller 610 takes up the non-PIM read instruction for dispatch to the memory device 612, the memory controller 610 first determines whether the memory location 624 hits on the PIM register mapping table 614); and reading the data from a register corresponding to the register address when the at least one register address that was mapped exists ([0040] When a non-PIM read instruction hits on the PIM register mapping table 142 (i.e., the target memory location of the non-PIM read instruction matches a memory location in the PIM register mapping table 142), the source of the non-PIM read instruction is remapped from the target memory location of the non-PIM read instruction to the PIM register associated with that memory location, and the data is read from the PIM register using a PIM read command). Regarding claim 20, Kotra et al. 
further disclose: The operating method of claim 19, wherein when the at least one register address that was mapped does not exist, the operating method of the memory device further comprises: reading the data from a memory bank corresponding to the address among the plurality of memory banks ([0058] the memory controller 610 takes up the non-PIM read instruction for dispatch to the memory device 612, the memory controller 610 first determines whether the memory location 624 hits on the PIM register mapping table 614; [0039] the memory controller logic 130 uses the PIM register file 118 as a memory side cache or prefetch buffer. For example, the memory controller logic 130 can prepopulate, based on a speculative algorithm, the PIM register file 118 with data loaded using a PIM load command. If a non-PIM read instruction hits on the memory side cache, the memory controller logic can read the requested data from the PIM register file 118 using a PIM read command, which is faster than reading from the memory array because there is no need to open a memory row; It would be obvious to one skilled in the art before the effective filing date of the claimed invention that the data would be read from the memory bank because the register acts as a cache and the target address is to the memory bank. In caching, when there is a miss in the cache in response to a read request, the data is read from the main memory, which corresponds to the memory bank). Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Sun et al., Choo et al., Kotra et al., and Williams as applied to claim 2 above, and further in view of Ayyapureddi (US 2024/0071549) and Witham (US 2023/0342297). Regarding claim 6, Sun et al., Choo et al., Kotra et al., and Williams do not appear to explicitly teach while Ayyapureddi discloses: The memory device of claim 2, further comprising: a deserializing circuit (FIG. 
2 Deserializer 264)…and is configured to generate parallel data by parallelizing input data ([0047] the deserializer circuit 264 converts the codewords to a parallel format), wherein the data I/O buffer is configured to transfer external data to the deserializing circuit based on an input/output control signal (FIG. 2 I/O Buffer 262; [0047] The codewords are provided to the I/O circuit 260 of the ECC circuit 250, and stored in the input buffer 262 while the deserializer circuit 264 converts the codewords to a parallel format). Choo et al., Sun et al., Kotra et al., Williams, and Ayyapureddi are analogous art because Choo et al. teach a memory device including a plurality of memory dies and a buffer die; Sun et al. teach providing low latency storage mechanisms; Kotra et al. teach expediting operations in a stacked memory device; Williams teaches register files for temporary data storage; and Ayyapureddi teaches semiconductor memory devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Choo et al., Sun et al., Kotra et al., Williams, and Ayyapureddi before him/her, to modify the teachings of Choo et al., Sun et al., Kotra et al., and Williams with Ayyapureddi’s teachings of a deserializer because including a deserializer would enable the system to convert the codewords to a parallel format. Such a modification would have amounted to little more than combining "familiar elements according to known methods" and would have been obvious because it would have done "no more than yield predictable results." (MPEP 2143 I.A.) Deserializers are well-known elements used to transform serial data into parallel data. Implementing a deserializer would have yielded the predictable result of producing parallel data to send to multiple memory devices. 
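As context for the deserializer teachings cited in this rejection, the serial-to-parallel conversion that the cited references describe can be sketched minimally. This is an illustrative sketch only; the function name and bit-list representation are hypothetical and are not drawn from the application or any cited reference.

```python
def deserialize(bits, width):
    """Group a serial bit stream into parallel words of `width` bits.

    Illustrative sketch of the serial-to-parallel conversion a
    deserializing circuit performs; names are hypothetical.
    """
    if len(bits) % width != 0:
        raise ValueError("bit stream length must be a multiple of the word width")
    # Slice the serial stream into fixed-width parallel words.
    return [bits[i:i + width] for i in range(0, len(bits), width)]

# Eight serial bits become two 4-bit parallel words.
words = deserialize([1, 0, 1, 1, 0, 0, 1, 0], width=4)
# words == [[1, 0, 1, 1], [0, 0, 1, 0]]
```

In hardware the same grouping is done by a shift register clocked at the serial rate and latched at the parallel word rate; the list slicing above only models the data movement, not the timing.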
Sun et al., Choo et al., Kotra et al., Williams, and Ayyapureddi do not appear to explicitly teach “a deserializing circuit that is electrically or logically connected to the plurality of registers…and wherein the parallel data is stored in the plurality of registers.” However, Witham discloses: a deserializing circuit that is electrically or logically connected to the plurality of registers…and wherein the parallel data is stored in the plurality of registers ([0142] SERDES 712 represents a serializer/deserializer to convert received serial data signals into parallel data to write to memory (corresponding to registers taught by Choo et al. in claim 1)). Sun et al., Choo et al., Kotra et al., Williams, Ayyapureddi, and Witham are analogous art because Choo et al. teach a memory device including a plurality of memory dies and a buffer die; Sun et al. teach providing low latency storage mechanisms; Kotra et al. teach expediting operations in a stacked memory device; Williams teaches register files for temporary data storage; Ayyapureddi teaches semiconductor memory devices; and Witham teaches memory architecture. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Sun et al., Choo et al., Kotra et al., Williams, Ayyapureddi, and Witham before him/her, to modify the combined teachings of Sun et al., Choo et al., Kotra et al., Williams, and Ayyapureddi with Witham’s teachings of a deserializing circuit connected to the plurality of registers because such a modification would have amounted to little more than combining “familiar elements according to known methods” and would have been obvious because it would have done “no more than yield predictable results.” (MPEP 2143 I.A.) Deserializers are well-known elements used to transform serial data into parallel data. Implementing a deserializer would have yielded the predictable result of producing parallel data to send to multiple memory devices. Claim 21 is rejected under 35 U.S.C. 
103 as being unpatentable over Sun et al., Choo et al., Kotra et al., and Williams as applied to claim 16 above, and further in view of Choi (US 2020/0027521). Regarding claim 21, Choo et al., Sun et al., Kotra et al., and Williams do not appear to explicitly teach while Choi discloses: The memory device of claim 1, wherein the plurality of through silicon vias extend between and into the buffer die and the plurality of memory dies (FIG. 5; Abstract: A stacked memory device includes a buffer die, a plurality of memory dies stacked on the buffer die and a plurality of through silicon vias (TSVs). The buffer die communicates with an external device. The TSVs extend through the plurality of memory dies to connect to the buffer die). Choo et al., Sun et al., Kotra et al., Williams, and Choi are analogous art because Choo et al. teach a memory device including a plurality of memory dies and a buffer die; Sun et al. teach providing low latency storage mechanisms; Kotra et al. teach expediting operations in a stacked memory device; Williams teaches register files for temporary data storage; and Choi teaches stacked memory devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Choo et al., Sun et al., Kotra et al., Williams, and Choi before him/her, to modify the teachings of Choo et al., Sun et al., Kotra et al., and Williams with Choi’s teachings of through silicon vias because such a modification would have amounted to little more than combining "familiar elements according to known methods" and would have been obvious because it would have done "no more than yield predictable results." (MPEP 2143 I.A.) 
Through silicon vias are well-known elements that may be disposed to pass through memory dies and a buffer die used for communication between layers and may independently deliver only the data of any one memory die, or any channel, as an independent channel for that one memory die or channel (Choi [0065]). Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Choo et al., Sun et al., Kotra et al., Williams, and Cresci et al. (US 2022/0350533). Regarding claim 10, Choo et al. disclose: a buffer die (FIG. 2 buffer die 110) configured to transmit and receive data to and from a host device ([0023] The buffer die 110 may include circuit components configured to buffer signals transmitted between an external device external to the memory device 100 (e.g., devices accessing the memory device 100 such as a host, a processor, a memory controller, etc.) and the first to fourth memory dies 120 to 150. For example, the buffer die 110 may include a buffer circuit (not shown), thereby compensating signal integrity of signals received from the external device and signals received from the first to fourth memory dies 120 to 150. For example, the buffer die 110 may transmit a command, an address, and a write data transmitted from the external device to at least one of the first to fourth memory dies 120 to 150. The buffer die 110 may transmit a read data transmitted from the first to fourth memory dies 120 to 150 to the external device), wherein the buffer die includes a plurality of registers ([0094] the buffer die may include at least one register to store addresses, which indicate areas of the first and second banks where data is stored. The buffer die may include at least one register to temporarily store data to copy); a plurality of memory dies (FIG. 2 memory dies 120-150) that are stacked on and separated from the buffer die (FIG. 
2 bumps 126; [0044] Bumps 126 may be disposed between the first memory die 120 and the buffer die 110), wherein the plurality of memory dies include a plurality of memory banks (FIG. 2 each die 120-150 comprises banks 121-151); and a plurality of through silicon vias that electrically connect the buffer die to respective ones of the plurality of memory dies ([0044] The first to fourth memory dies 120 to 150 may be stacked on buffer die 110 through the through silicon vias 128), wherein the buffer die further comprises a control circuit ([0023] buffer die 110 may include circuit components configured to perform logic functions)… Choo et al. do not appear to explicitly teach “configured to receive a low latency bit and an address that is preset according to a type of the data, and a register control circuit that is configured to be activated based on the low latency bit and is configured to perform a register write operation to store the data in the plurality of registers and is configured to perform a register read operation to read the data from the plurality of registers, wherein the control circuit is configured to determine a logic level of the low latency bit the control circuit comprising at least hardware, wherein, based on the low latency bit, the memory device is configured to store the data in a memory bank corresponding to the address among the plurality of memory banks or store the data in a register corresponding to the address among the plurality of registers, or read the data from the memory bank corresponding to the address or the register corresponding to the address, and wherein when the register write operation is performed, the register control circuit is configured to map the address to at least one register address among a plurality of register addresses based on a size of the data and a size of each of the plurality of registers, and is configured to generate a mapping table including a mapping relationship between the address and the register 
address.” However, Sun et al. disclose: …a control circuit (FIG. 1 Controller) configured to receive a low latency bit and an address that is preset according to a type of the data (FIG. 2 DSM Hints, which includes Access latency, and Write Cmd Starting LBA/Read Cmd Starting LBA; FIG. 5 Latency Bit High/Low), and a register control circuit that is configured to be activated based on the low latency bit (FIG. 3 step 304 Request (ri), access needs low latency according to cmd hint? Yes; FIG. 5 Latency Bit High/Low) and is configured to perform a register write operation to store the data in the plurality of registers (FIG. 3 step 312 Store di in SCM, corresponding to the registers disclosed by Choo et al. supra) and is configured to perform a register read operation to read the data from the plurality of registers (FIG. 4 step 404 Read from memory; [0041] the memory (either NAND flash, or storage class memory) is read), wherein the control circuit is configured to determine a logic level of the low latency bit (FIG. 3 step 304 Request (ri), access needs low latency according to cmd hint?; [0040] In FIG. 3, the controller 120 in the device 126 receives a new write command (ci) from the host 114 at 302. The write command may have data (di) associated with the command. In 304, the write command is evaluated to determine if there is a request (ri) for low latency according to a command hint)… wherein, based on the low latency bit (FIG. 3 step 304), the memory device is configured to store the data in a memory bank corresponding to the address among the plurality of memory banks (FIG. 3 Step 306 Store di in NAND flash, corresponding to the memory banks disclosed by Choo et al. supra; [0025] using the flash memory components for higher latency storage; [0031] the slow namespace entities may be associated with the NAND flash 122. 
This preference to separate the slow namespace entities and fast namespace entities is but one configuration and should be considered non-limiting) or store the data in a register corresponding to the address among the plurality of registers (FIG. 3 Step 312 Store di in SCM, corresponding to the registers disclosed by Choo et al. supra; [0025] storage class memory (SCM) may be used for respective low-latency data; [0031] the fast namespace entities are associated with the storage class memory 124…This preference to separate the slow namespace entities and fast namespace entities is but one configuration and should be considered non-limiting), or read the data from the memory bank corresponding to the address or the register corresponding to the address (FIG. 4 step 404 Read from memory; [0041] the memory (either NAND flash, or storage class memory) is read)… The motivation for combining is based on the same rationale presented for the rejection of independent claim 1. Choo et al. and Sun et al. do not appear to explicitly teach “the control circuit comprising at least hardware…wherein when the register write operation is performed, the register control circuit is configured to map the address to at least one register address among a plurality of register addresses based on a size of the data and a size of each of the plurality of registers, and is configured to generate a mapping table including a mapping relationship between the address and the register address.” However, Kotra et al. 
disclose: wherein when the register write operation is performed, the register control circuit is configured to map the address to at least one register address among a plurality of register addresses…and is configured to generate a mapping table including a mapping relationship between the address and the register address ([0040] the memory controller 140 includes a PIM register mapping table 142 to facilitate the use of the PIM register file 118 for expediting non-PIM instructions. The PIM register mapping table 142 maps memory locations to PIM registers. For example, to utilize a PIM register as a write buffer for a non-PIM instruction, the memory controller logic 130 remaps the write destination of the write data from the target memory location of the non-PIM write instruction to a PIM register. The memory controller logic 130 writes the write data to the PIM register using a PIM write command and updates the PIM register mapping table 142 to include an association between that PIM register and the target memory location). The motivation for combining is based on the same rationale presented for the rejection of independent claim 1. Choo et al., Sun et al., and Kotra et al. do not appear to explicitly teach “based on a size of the data and a size of each of the plurality of registers.” However, Williams discloses: based on a size of the data and a size of each of the plurality of registers ([0103] a plurality of registers, each of which is a temporary storage structure for storing data of a given size). The motivation for combining is based on the same rationale presented for the rejection of independent claim 1. Choo et al., Sun et al., Kotra et al., and Williams do not appear to explicitly teach “the control circuit comprising at least hardware.” However, Cresci et al. 
disclose: the control circuit comprising at least hardware ([0024] The memory system controller 115 may include hardware such as one or more integrated circuits). Sun et al., Choo et al., Kotra et al., Williams, and Cresci et al. are analogous art because Choo et al. teach a memory device including a plurality of memory dies and a buffer die; Sun et al. teach providing low latency storage mechanisms; Kotra et al. teach expediting operations in a stacked memory device; Williams teaches register files for temporary data storage; and Cresci et al. teach low latency storage. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Choo et al., Sun et al., Kotra et al., Williams, and Cresci et al. before him/her, to modify the combined teachings of Choo et al., Sun et al., Kotra et al., and Williams with the Cresci et al. teachings of a control circuit comprising hardware because such a modification would have amounted to little more than combining “familiar elements according to known methods” and would have been obvious because it would have done “no more than yield predictable results.” (MPEP 2143 I.A.) Hardware control circuits are well-known. Using a hardware control circuit would have yielded the predictable result of controlling the memory device. Regarding claim 12, Choo et al. further disclose: The memory device of claim 10, wherein each of the plurality of memory dies includes at least one of a dynamic random access memory (DRAM) ([0025] The first to fourth memory dies 120 to 150 may be manufactured to have the same structure as each other. The fourth memory dies 150 may include banks 151. A bank may be referred to as a memory cell array including memory cells disposed at intersections of word lines (not shown) and bit lines (not shown). 
For example, the memory cells may include a dynamic random access memory (DRAM) cell), a thyristor random access memory (TRAM) ([0025] a thyristor random access memory (TRAM) cell), a static random access memory (SRAM) ([0025] a static random access memory (SRAM) cell), or a double data rate synchronous dynamic random access memory (DDR SDRAM). Regarding claim 13, Choo et al. further disclose: the buffer die further includes a register control logic ([0023] buffer die 110 may include circuit components configured to perform logic functions). Choo et al. do not appear to explicitly teach “a register control circuit that is configured to be activated when the low latency bit is at a first level and is configured to perform the register write operation and is configured to perform the register read operation, and wherein, when the low latency bit is at a second level that is different from the first level, the control circuit is configured to store the data in the memory bank corresponding to the address or configured to read the data from the memory bank corresponding to the address.” However, Sun et al. further disclose: …a register control circuit (FIG. 1 Controller) that is configured to be activated when the low latency bit is at a first level (FIG. 3 step 304 Request (ri), access needs low latency according to cmd hint? Yes; FIG. 5 Latency Bit High/Low) and is configured to perform the register write operation (FIG. 3 step 312 Store di in SCM, corresponding to the registers disclosed by Choo et al. supra) and is configured to perform the register read operation (FIG. 4 step 404 Read from memory; [0041] the memory (either NAND flash, or storage class memory) is read), and wherein, when the low latency bit is at a second level that is different from the first level (FIG. 3 step 304 Request (ri), access needs low latency according to cmd hint? No; FIG. 
5 Latency Bit High/Low), the control circuit is configured to store the data in the memory bank corresponding to the address (FIG. 3 Step 306 Store di in NAND flash, corresponding to the memory banks disclosed by Choo et al. supra) or configured to read the data from the memory bank corresponding to the address (FIG. 4 step 404 Read from memory; [0041] the memory (either NAND flash, or storage class memory) is read). Regarding claim 15, Kotra et al. further disclose: The memory device of claim 13, wherein when the register read operation is performed, the register control circuit is configured to read the data from at least one register corresponding to the at least one register address mapped to the address based on the mapping table ([0040] When a non-PIM read instruction hits on the PIM register mapping table 142 (i.e., the target memory location of the non-PIM read instruction matches a memory location in the PIM register mapping table 142), the source of the non-PIM read instruction is remapped from the target memory location of the non-PIM read instruction to the PIM register associated with that memory location, and the data is read from the PIM register using a PIM read command). Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Choo et al., Sun et al., Kotra et al., Williams, and Cresci et al. as applied to claim 10 above, and further in view of Malladi et al. (US 2020/0349093). Regarding claim 11, Choo et al. disclose that a buffer die includes a plurality of registers to store addresses. The combination of Choo et al., Sun et al., Kotra et al., Williams, and Cresci et al. does not appear to explicitly teach “wherein the buffer die further comprises a plurality of static random access memories (SRAMs) configured to store the data.” However, Malladi et al. 
disclose: The memory device of claim 10, wherein the buffer die further comprises…static random access memories (SRAMs) configured to store the data ([0034] buffer die area and may include logic and SRAM). Choo et al., Sun et al., Kotra et al., Williams, Cresci et al., and Malladi et al. are analogous art because Choo et al. teach a memory device including a plurality of memory dies and a buffer die; Sun et al. teach providing low latency storage mechanisms; Kotra et al. teach expediting operations in a stacked memory device; Williams teaches register files for temporary data storage; Cresci et al. teach low latency storage; and Malladi et al. teach stacked memory devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Choo et al., Sun et al., Kotra et al., Williams, Cresci et al., and Malladi et al. before him/her, to modify the teachings of Choo et al., Sun et al., Kotra et al., Williams, and Cresci et al. with the Malladi et al. teachings of SRAMs because placing an SRAM on the buffer die close to the host would decrease the latency of memory accesses. The combination of Choo et al., Sun et al., Kotra et al., Williams, Cresci et al., and Malladi et al. does not appear to explicitly teach a “plurality of” SRAMs. However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a plurality of SRAMs in the buffer die in order to implement the plurality of registers taught by Choo et al. Response to Arguments Applicant’s arguments, filed December 29, 2025, with respect to the rejection(s) of claim(s) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Choo et al., Sun et al., Kotra et al., and Williams. 
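For orientation, the storage-decision mechanism at issue throughout the rejections above can be sketched in outline: a low latency bit routes a write either to a memory bank or to buffer-die registers, the address is split across registers based on the data size and register size, a mapping table records the address-to-register association, and reads check that table first and fall back to the bank on a miss (the cache analogy the Examiner draws from Sun and Kotra). The sketch below is illustrative only; every class, method, and field name is hypothetical and nothing here is taken from the application or the cited references.

```python
class BufferDie:
    """Illustrative model of a low-latency-bit storage decision with a
    register mapping table. All names are hypothetical."""

    def __init__(self, register_size=64):
        self.register_size = register_size   # bytes held by each register
        self.registers = {}                  # register address -> data chunk
        self.banks = {}                      # memory address -> data
        self.mapping_table = {}              # memory address -> register addresses
        self._next_reg = 0

    def write(self, address, data, low_latency_bit):
        if not low_latency_bit:
            # Second logic level: ordinary write to the memory bank.
            self.banks[address] = data
            return
        # First logic level: register write. Map the address to as many
        # register addresses as the data size requires, given the size of
        # each register, and record the mapping in the mapping table.
        n_regs = -(-len(data) // self.register_size)  # ceiling division
        reg_addrs = []
        for i in range(n_regs):
            reg = self._next_reg
            self._next_reg += 1
            self.registers[reg] = data[i * self.register_size:(i + 1) * self.register_size]
            reg_addrs.append(reg)
        self.mapping_table[address] = reg_addrs

    def read(self, address):
        # Check the mapping table first (register hit); on a miss, read
        # from the memory bank, as in the cited cache analogy.
        if address in self.mapping_table:
            return b"".join(self.registers[r] for r in self.mapping_table[address])
        return self.banks[address]

die = BufferDie(register_size=4)
die.write(0x10, b"lowlatency", low_latency_bit=1)  # routed to registers
die.write(0x20, b"bulkdata", low_latency_bit=0)    # routed to a memory bank
```

Under this sketch, `die.read(0x10)` reassembles the data from three mapped registers (10 bytes across 4-byte registers), while `die.read(0x20)` misses the mapping table and reads from the bank.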
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRACY A WARREN whose telephone number is (571)270-7288. The examiner can normally be reached M-Th 7:30am-5pm, Alternate F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arpan P. Savla can be reached at 571-272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /TRACY A WARREN/Primary Examiner, Art Unit 2137

Prosecution Timeline

Mar 15, 2024
Application Filed
Apr 18, 2025
Non-Final Rejection — §103
Apr 30, 2025
Interview Requested
May 09, 2025
Applicant Interview (Telephonic)
May 09, 2025
Examiner Interview Summary
Jul 21, 2025
Response Filed
Oct 01, 2025
Final Rejection — §103
Oct 13, 2025
Interview Requested
Nov 14, 2025
Response after Non-Final Action
Dec 29, 2025
Request for Continued Examination
Jan 18, 2026
Response after Non-Final Action
Feb 18, 2026
Non-Final Rejection — §103
Feb 24, 2026
Interview Requested
Mar 03, 2026
Examiner Interview Summary
Mar 03, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602174
SEMICONDUCTOR DEVICE, COMPUTING SYSTEM, AND DATA COMPUTING METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12578855
REMOTE POOLED MEMORY DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12578887
BOOT PROCESS TO IMPROVE DATA RETENTION IN MEMORY DEVICES
2y 5m to grant Granted Mar 17, 2026
Patent 12572312
MEMORY DEVICE OPERATION BASED ON DEVICE CHARACTERISTICS
2y 5m to grant Granted Mar 10, 2026
Patent 12572306
VERIFYING CHUNKS OF DATA BASED ON READ-VERIFY COMMANDS
2y 5m to grant Granted Mar 10, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
88%
With Interview (+6.0%)
2y 6m
Median Time to Grant
High
PTA Risk
Based on 422 resolved cases by this examiner. Grant probability derived from career allow rate.
