Prosecution Insights
Last updated: April 19, 2026
Application No. 19/020,443

MEMORY-ALIGNED ACCESS OPERATIONS
Non-Final OA under §103

Filed: Jan 14, 2025
Examiner: RIGOL, YAIMA
Art Unit: 2135
Tech Center: 2100 — Computer Architecture & Software
Assignee: Micron Technology, Inc.
OA Round: 1 (Non-Final)

Predictions: 75% grant probability (favorable); 1-2 OA rounds expected; estimated 3y 2m to grant; 92% grant probability with an examiner interview.

Examiner Intelligence

Career allowance rate: 75% (464 granted / 619 resolved), +20.0% vs. Tech Center average — grants above average.
Interview lift: strong, +17.5% among resolved cases with an interview.
Typical timeline: 3y 2m average prosecution; 18 applications currently pending.
Career history: 637 total applications across all art units.

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 9.2% (-30.8% vs TC avg)
§112: 17.5% (-22.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 619 resolved cases.
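The headline figures above are simple arithmetic on the raw career counts. A minimal sketch, assuming the dashboard derives the 92% with-interview figure by adding the +17.5% interview lift to the career allowance rate (an assumption — the tool's actual model is not shown):

```python
# Reproduce the dashboard's headline figures from the raw career counts.
# Assumption: the "with interview" prediction is the career allowance rate
# plus the additive +17.5-point interview lift, rounded to a whole percent.

granted, resolved = 464, 619
career_rate = 100 * granted / resolved           # career allowance rate, in %
interview_lift = 17.5                            # percentage points
with_interview = career_rate + interview_lift    # estimated rate with interview

print(round(career_rate))     # 75
print(round(with_interview))  # 92
```

Under that additive assumption the two numbers reported by the tool fall out directly: 464/619 ≈ 75.0%, and 75.0 + 17.5 ≈ 92%.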

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The instant application having Application No. 19/020,443, the preliminary amendment filed on 4/7/2025 is herein acknowledged. Claim 1 has been canceled and claims 2-21 have been added. Claims 2-21 are pending.

The specification has not been checked to the extent necessary to determine the presence of all possible minor errors. The specification should be amended to reflect the status of all related applications, whether patented or abandoned. Therefore, applications noted by their serial number and/or attorney docket number should be updated with the correct serial number and patent number if patented.

In the response to this Office action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line numbers in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.

The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

INFORMATION CONCERNING DRAWINGS

The applicant’s drawings submitted are acceptable for examination purposes.

STATUS OF CLAIM FOR PRIORITY IN THE APPLICATION

The instant Application No. 19/020,443, filed 01/14/2025, is a Divisional of Application No. 18/080,568, filed 12/13/2022, now U.S. Patent No. 12,223,204. Application No. 18/080,568 claims priority from Provisional Application No. 63/291,769, filed 12/20/2021.

ACKNOWLEDGEMENT OF REFERENCES CITED BY APPLICANT

As required by M.P.E.P. 609(C), the applicant’s submission of the Information Disclosure Statement(s) dated 4/7/2025 is/are acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. As required by M.P.E.P. 609(C)(2), a copy (copies) of the PTOL-1449(s) initialed and dated by the examiner is/are attached to the instant office action.

REJECTIONS BASED ON PRIOR ART

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 5-7, 11, 14, 16-18 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Subbarao et al. (US 2020/0356307) in view of Oh et al. (US 2022/0050626).

2.
(New) An apparatus, comprising: a controller that is configured to cause the apparatus to: [Subbarao teaches “controller 116” (fig. 1 and related text; pars. 0018, 0020, 0021)] receive, from a memory system, an indication of a data size corresponding to a quantity of physical pages that is addressable by a first-level page of a first-level page table for mapping logical addresses to respective physical pages of memory cells at the memory system, [Subbarao teaches “The memory sub-system can determine, based on the state of the media layout of the preferred input/output size and communicate the size to the host system… The input/output size provided in the response can be used to configure the next write command” (par. 0013) “[0055] In FIG. 4, a logical to physical block map 303 is configured to facilitate the translation of LBA addresses (e.g., 331) into physical addresses in the media (e.g., 203).” “[0056] The logical to physical block map 303 can have multiple entries. An LBA address (e.g., 331) can be used as, or converted into, an index for an entry in the logical to physical block map 303. The index can be used to look up an entry for the LBA address (e.g., 331). Each entry in the logical to physical block map 303 identifies, for an LBA address (e.g., 331), the physical address of a block of memory in the media (e.g., 203). For example, the physical address of the block of memory in the media (e.g., 203) can include a die identifier 333, a block identifier 335, a page map entry identifier 337, etc.” “[0065] The page map entry identifier 337 identifies an entry in the page map 305, which identifies a page (e.g., 241 or 241) that can be used to store the subsequent data of the zone (e.g., 211).” “the preferred size of input/output is the size of data that can be stored into the entire set of atomically programmable cells in the page” (par. 0074), where the logical to physical block map corresponds to the claimed first-level page table mapping logical addresses to physical pages. Note that the term “first-level page table” as used herein has been used as a label to identify the claimed table, where, other than mapping logical addresses to physical addresses, there are no attributes or limitations claimed differentiating this table from any mapping table or requiring more than a single level.]

wherein the data size is associated with a target packet size for an application; [Subbarao teaches “[0032] The computing system 100 includes an input/output size manager 113 in the memory sub-system 110 that determines the preferred input/output size for atomically store/program/commit/write data into the media of the memory sub-system 110… In other embodiments, the input/output size manager 113 is part of an operating system of the host system 120, a device driver, or an application.”; thus, the preferred I/O size may be for an application. Additionally, note that host data being written is at least associated with a host application running on the host, as the host performs read/write operations to memory sub-system 110; see pars. 0021 and 0032.]

configure, based at least in part on the indication, a buffer to have a size that is greater than or equal to the target packet size; store, in the buffer, data for the application based at least in part on configuring the buffer; and [Subbarao teaches “the memory sub-system 110 can include a cache or buffer” (par. 0030) “When a write command has an input/output size that is smaller than the preferred size, the storage capacity of the entire set of atomically programmable memory cells in the page (e.g., 241) is not fully utilized for the write operating. When a write command has an input/output size that is larger than the preferred size, the data of the write command is to be programmed via multiple atomic write operations. Thus, some of the data of the write command may have to be buffered for a longer period of time in order to wait for the next atomic write operation.” (par. 0074)]; thus, teaching buffering data to be written to the storage device. But Subbarao does not expressly disclose a buffer to have a size that is greater than or equal to the target packet size… send, to the memory system based at least in part on a size of the data stored in the buffer reaching a threshold, a set of data stored in the buffer and a command to write the set of data, wherein a size of the set of data is the target packet size; however, regarding these limitations, Oh teaches [buffer memory device 220 (fig. 1 and related text) “[0047] The buffer memory device 220 may temporarily store write data corresponding to the write request provided from the host 400…” “[0055] The operation controller 210 may control the buffer memory device 220 and the memory device 100 so that write data corresponding to a write request provided from the host 400 is stored in the memory device 100. The operation controller 210 may provide the memory device 100 with the write data in a size of a program unit of the memory device 100. The program unit size may be a size of write data that can be stored in the memory device 100 by performing one program operation.”; thus, the buffer having a size at least equal to a program unit size. “[0056] When a size of write data stored in the buffer memory device 220 is less than the program unit size, the operation controller 210 may not provide the write data to the memory device 100. When the size of write data stored in the buffer memory device 220 reaches the program unit size, the operation controller 210 may provide the write data to the memory device 100.” “When a size of write data stored in the buffer memory device 220 reaches the program unit size, the operation controller 210 may provide the write data to the memory device 100.” (par. 0098), where upon the data in the buffer reaching a threshold or unit size, it is written to the memory device.]

Subbarao and Oh are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Subbarao to include a buffer to have a size that is greater than or equal to the target packet size… send, to the memory system based at least in part on a size of the data stored in the buffer reaching a threshold, a set of data stored in the buffer and a command to write the set of data, wherein a size of the set of data is the target packet size, as taught by Oh, since doing so would provide the benefits of facilitating program-unit-sized writes to the storage device. Therefore, it would have been obvious to combine Subbarao and Oh for the benefit of creating a storage system/method to obtain the invention as specified in claim 2.

5. (New) The apparatus of claim 2, wherein the command comprises a logical address that corresponds to an entry of the first-level page [Subbarao teaches “[0061] In FIG. 4, the block set table 307 stores data controlling aspects of the dynamic media layout for a zone (e.g., 211). [0062] The block set table 307 can have multiple entries. Each entry in the block set table 307 identifies a number/count 371 of integrated circuit dies (e.g., 205 and 207) in which data of the zone (e.g., 211) is stored. For each of the integrated circuit dies (e.g., 205 and 207) used for the zone (e.g., 211), the entry of the block set table 307 has a die identifier 373, a block identifier 375, a page map entry identifier 377, etc… [0065] The page map entry identifier 337 identifies an entry in the page map 305, which identifies a page (e.g., 241 or 241) that can be used to store the subsequent data of the zone (e.g., 211).” “Thus, the preferred size of input/output is the size of data that can be stored into the entire set of atomically programmable cells in the page” (par. 0074), where the logical to physical block map corresponds to the claimed first-level page table mapping logical addresses to physical pages; note that the term “first-level page table” as used herein has been used as a label to identify the claimed table, where, other than mapping logical addresses to physical addresses, there are no attributes or limitations claimed differentiating this table from any mapping table or requiring more than a single level.] Oh teaches [“[0041] In an embodiment, the memory controller 200 may receive write data and a logical block address (LBA) from the host 400, and may translate the logical block address (LBA) into a physical block address (PBA) indicating an address of memory cells included in the memory device 100, the write data being to be stored in the memory cells.”, where “[0111] The mapping information storage 223 may store a sequential mapping table 223a, a random mapping table 223b, and combined data information 223c.”]

6. (New) The apparatus of claim 2, wherein the application is configured to reach a utilization threshold of the buffer within a duration [“[0056] When a size of write data stored in the buffer memory device 220 is less than the program unit size, the operation controller 210 may not provide the write data to the memory device 100. When the size of write data stored in the buffer memory device 220 reaches the program unit size, the operation controller 210 may provide the write data to the memory device 100.” “When a size of write data stored in the buffer memory device 220 reaches the program unit size, the operation controller 210 may provide the write data to the memory device 100.” (par. 0098), where upon the data in the buffer reaching a threshold or unit size, it is written to the memory device; note this would occur during “a duration”.]

7. (New) The apparatus of claim 2, wherein the command comprises an indication of the target packet size [Subbarao teaches “[0035] In FIG. 2, the host system 120 sends commands 121, 123, . . . , to store data into the media 203 of the memory sub-system 110. The commands (e.g., 121 or 123) includes the sizes (e.g., 141 or 143) of the data to be written into the media 203 and the logical addresses (e.g., 142 or 144) for storing the data in the media 203.”, where the preferred size corresponds to the size of the command (par. 0033).]

11. (New) The apparatus of claim 2, wherein the controller is further configured to cause the apparatus to: transmit, to the memory system, an indication that the application is activated at the apparatus, wherein the indication of the data size is based at least in part on the indication that the application is activated [Subbarao teaches “The memory sub-system can determine, based on the state of the media layout the preferred input/output size and communicate the size to the host system (e.g., via a status field in a response to a current command). The input/output size provided in the response can be used to configure the next write command.” (par. 0013), where receiving a current host application command by the data storage device corresponds to an indication the host application is active, and in response, an indication of the data size is sent to the host.]

14. (New) A non-transitory, computer-readable medium storing code comprising instructions which, when executed by a processor of an electronic device, cause the electronic device to: receive, from a memory system, an indication of a data size corresponding to a quantity of physical pages that is addressable by a first-level page of a first-level page table for mapping logical addresses to respective physical pages of memory cells at the memory system, wherein the data size is associated with a target packet size for an application; configure, based at least in part on the indication of the data size, a buffer to have a size that is greater than or equal to the target packet size; store, in the buffer, data for the application based at least in part on configuring the buffer; and send, to the memory system based at least in part on a size of the data stored in the buffer reaching a threshold, a set of data stored in the buffer and a command to write the set of data, wherein a size of the set of data is the target packet size [The rationale in the rejection of claim 2 is herein incorporated].

16. (New) The non-transitory, computer-readable medium of claim 14, wherein the command comprises a logical address that corresponds to an entry of the first-level page [The rationale in the rejection of claim 5 is herein incorporated].

17. (New) The non-transitory, computer-readable medium of claim 14, wherein the application is configured to reach a utilization threshold of the buffer within a duration [The rationale in the rejection of claim 6 is herein incorporated].

18. (New) The non-transitory, computer-readable medium of claim 14, wherein the command comprises an indication of the target packet size [The rationale in the rejection of claim 7 is herein incorporated].

21.
(New) A method, comprising: receiving, from a memory system, an indication of a data size corresponding to a quantity of physical pages that is addressable by a first-level page of a first-level page table for mapping logical addresses to respective physical pages of memory cells at the memory system, wherein the data size is associated with a target packet size for an application; configuring, based at least in part on the indication of the data size, a buffer to have a size that is greater than or equal to the target packet size; storing, in the buffer, data for the application based at least in part on configuring the buffer; and sending, to the memory system based at least in part on a size of the data stored in the buffer reaching a threshold, a set of data stored in the buffer and a command to write the set of data, wherein a size of the set of data is the target packet size [The rationale in the rejection of claim 2 is herein incorporated].

Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Subbarao et al. (US 2020/0356307) in view of Oh et al. (US 2022/0050626) as applied in the rejection of claim 2 above, and further in view of Byun et al. (US 2020/0150898).

3. (New) The apparatus of claim 2, wherein the command comprises a logical address for the set of data that is aligned to an integer multiple of the target packet size [Subbarao teaches “[0035] In FIG. 2, the host system 120 sends commands 121, 123, . . . , to store data into the media 203 of the memory sub-system 110. The commands (e.g., 121 or 123) includes the sizes (e.g., 141 or 143) of the data to be written into the media 203 and the logical addresses (e.g., 142 or 144) for storing the data in the media 203.”, where the size of the input/output commands corresponds to the preferred size (see par. 0033); thus, logical addresses of commands correspond to the target or preferred size. Oh teaches storing data having logical addresses input from the host (Abstract)]; but the combination of Subbarao and Oh does not expressly refer to the logical addresses aligned to an integer multiple of the target packet size; however, regarding these limitations, Byun teaches [“[0072] At step S130, the alignment processor 110 may determine whether the start logical address where the write operation is to be performed in the target nonvolatile memory device satisfies the alignment condition. When it is determined that the start logical address does not satisfy the alignment condition (S130, N), the procedure may proceed to step S140. When it is determined that the start logical address satisfies the alignment condition (S130, Y), the procedure may proceed to step S150… [0074] At step S150, the alignment processor 110 may select some write data of the entire write data as the target data. The some write data may have a size corresponding to the alignment data size. For example, the alignment processor 110 may select write data as the target data. The write data may be corresponding to consecutive logical addresses and having the alignment data size.” (see fig. 4 and related text)].

Subbarao, Oh and Byun are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Subbarao and Oh to have the logical addresses aligned to an integer multiple of the target packet size as taught by Byun, since doing so would provide the benefits of [“a memory system capable of performing sequential read operations with improved performance through an alignment operation, and an operating method thereof.” (par. 0004)]. Therefore, it would have been obvious to combine Subbarao and Oh with Byun for the benefit of creating a storage system/method to obtain the invention as specified in claim 3.

15.
(New) The non-transitory, computer-readable medium of claim 14, wherein the command comprises a logical address for the set of data that is aligned to an integer multiple of the target packet size [The rationale in the rejection of claim 3 is herein incorporated].

Claims 4, 8 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Subbarao et al. (US 2020/0356307) in view of Oh et al. (US 2022/0050626) as applied in the rejection of claim 2 above, and further in view of Ng et al. (US 2013/0073784).

4. (New) The combination of Subbarao and Oh teaches the apparatus of claim 2, but does not expressly disclose wherein the controller is further configured to cause the apparatus to: store, in a second buffer, the data for the application based at least in part on sending the set of data to the memory system; however, regarding these limitations, Ng teaches [“The controller RAM 114 may include two data cache areas for data queues for use in optimizing write performance. As explained in greater detail below, an aligned data queue 118 in the controller RAM 114 may be configured to cache portions of data from host data writes that are complete pages aligned with bank page boundaries in the flash memory. An unaligned data queue 120 in the controller RAM 114 may be configured to cache portions of data from host data writes that contain data that do not make up the size of a complete bank page and thus not aligned with bank page boundaries in the flash memory 108.” (par. 0019) “[0025] The aligned portions are cached in a first queue 118 in the controller random access memory (RAM) 114 and the unaligned portions are cached in a second queue area 120 of the controller RAM 114 (at 604, 606). The steps of identifying and caching the aligned portions and identifying and caching the unaligned portions may be accomplished concurrently, where the incoming data is simply identified and routed to the appropriate queue as it is received. In instances where a host write command is received for data that is unaligned and less than a physical page, that data is all placed in the second queue 120.”; thus teaching a second buffer/queue for host application writes].

Subbarao, Oh and Ng are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Subbarao and Oh to store, in a second buffer, the data for the application based at least in part on sending the set of data to the memory system, as taught by Ng, where host data is stored in the unaligned queue or second buffer, since doing so would provide the benefits of [“The controller RAM 114 may include two data cache areas for data queues for use in optimizing write performance.” (par. 0019)]. Therefore, it would have been obvious to combine Subbarao and Oh with Ng for the benefit of creating a storage system/method to obtain the invention as specified in claim 4.

8. (New) The apparatus of claim 2, wherein the controller is further configured to cause the apparatus to: configure, based at least in part on the indication, a second buffer to have a second size that is equal to the size of the buffer [“The controller RAM 114 may include two data cache areas for data queues for use in optimizing write performance. As explained in greater detail below, an aligned data queue 118 in the controller RAM 114 may be configured to cache portions of data from host data writes that are complete pages aligned with bank page boundaries in the flash memory. An unaligned data queue 120 in the controller RAM 114 may be configured to cache portions of data from host data writes that contain data that do not make up the size of a complete bank page and thus not aligned with bank page boundaries in the flash memory 108.” (par. 0019) “In the example of FIGS. 7-9, the parallel write increments are bank pages and there are four banks, so the minimum queue capacity for each queue is preferably four bank pages. An advantage of having the queues each sized to no less than this minimum size is that there is always a possibility of having data in the queues that can be written to each of the banks in parallel to maximize efficiency.” (par. 0032)].

Subbarao, Oh and Ng are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Subbarao and Oh to configure, based at least in part on the indication, a second buffer to have a second size that is equal to the size of the buffer, as taught by Ng, where host data is stored in an unaligned queue or second buffer sized the same as the aligned queue, since doing so would provide the benefits of [“The controller RAM 114 may include two data cache areas for data queues for use in optimizing write performance.” (par. 0019) and “An advantage of having the queues each sized to no less than this minimum size is that there is always a possibility of having data in the queues that can be written to each of the banks in parallel to maximize efficiency.” (par. 0032)]. Therefore, it would have been obvious to combine Subbarao and Oh with Ng for the benefit of creating a storage system/method to obtain the invention as specified in claim 8.

19. (New) The non-transitory, computer-readable medium of claim 14, wherein the instructions are further executable by the processor to: configure, based at least in part on the indication of the data size, a second buffer to have a second size that is equal to the size of the buffer [The rationale in the rejection of claim 8 is herein incorporated].

20.
(New) The non-transitory, computer-readable medium of claim 19, wherein the instructions are further executable by the processor to: store, in the second buffer, the data for the application based at least in part on sending the set of data to the memory system [The rationale in the rejection of claim 4 is herein incorporated].

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Subbarao et al. (US 2020/0356307) in view of Oh et al. (US 2022/0050626) as applied in the rejection of claim 2 above, and further in view of Agesen (US 2009/0182976).

9. (New) The combination of Subbarao and Oh teaches the apparatus of claim 2, but does not expressly disclose wherein the target packet size corresponds to a set of physical addresses beginning at a first physical address corresponding to a lowest logical address mapped to by a lowest level page of the first-level page table and ending at a second physical address corresponding to a highest logical address mapped to by the lowest level page of the first-level page table. [Agesen teaches “[0031] The PPN 322 indicates the next page in the page table hierarchy. If a particular PTE is at the lowest level of the page table hierarchy, then the PPN 322 points to a data page. If a particular PTE is not at the lowest level of the page table hierarchy, then the PPN 322 points to a lower-level page table 142.” “[0043]… For each of the selected level two PTEs, the operating system 172 allocates a 2 MB large page in the following manner. First, the operating system 172 copies the data from the collection of small pages accessed through the selected level two PTE to the newly allocated 2 MB large page. If any of the small pages had been swapped out or not allocated until now, the operating system 172 may swap-in or pre-zero missing pieces. The operating system 172 then sets the stop bit 324 in the selected level two PTE to one, thereby indicating that the selected level two PTE is now the lowest level in the page table hierarchy when mapping this 2 MB range of virtual addresses (thus having a lowest and highest logical or virtual addresses). Finally, the operating system 172 sets the PPN 322 in the selected level two PTE to point to the newly allocated 2 MB large page. A wide variety of other techniques may be used to define and execute a policy for large page table mapping.”, where the PPN corresponds to the physical addresses of the 2 MB range to which the page is mapped (see fig. 4 and related text); note Agesen teaches “At step 516, the page walker 134 uses a portion of the virtual address to index into the data page that is identified by the current physical page number, and accesses the data at the physical address corresponding to the virtual address.” (par. 0041; fig. 5 and related text). The mapped data page in the lowest level page table corresponds to the lowest level page as claimed.]

Subbarao, Oh and Agesen are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Subbarao and Oh to have the target packet size correspond to a set of physical addresses beginning at a first physical address corresponding to a lowest logical address mapped to by a lowest level page of the first-level page table and ending at a second physical address corresponding to a highest logical address mapped to by the lowest level page of the first-level page table, as taught by Agesen, since doing so would provide the benefits of [“improving virtual memory system performance using large pages. In one embodiment, virtual memory system performance is improved using large pages in a normal (non-virtualized) computer system. In another embodiment, virtual memory system performance is improved using large pages in a virtualized computer system that employs nested page tables.” (par. 0004), as well as facilitating mapping of pages of different sizes (par. 0021)]. Therefore, it would have been obvious to combine Subbarao and Oh with Agesen for the benefit of creating a storage system/method to obtain the invention as specified in claim 9.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Subbarao et al. (US 2020/0356307) in view of Oh et al. (US 2022/0050626) as applied in the rejection of claim 2 above, and further in view of Choi (US 2019/0212943).

10. (New) The apparatus of claim 2, wherein the buffer reaching the threshold is based at least in part on the buffer being full [Oh teaches “[0056] When a size of write data stored in the buffer memory device 220 is less than the program unit size, the operation controller 210 may not provide the write data to the memory device 100. When the size of write data stored in the buffer memory device 220 reaches the program unit size, the operation controller 210 may provide the write data to the memory device 100.” “When a size of write data stored in the buffer memory device 220 reaches the program unit size, the operation controller 210 may provide the write data to the memory device 100.” (par. 0098)]; thus, writing data to memory when reaching the program unit size or threshold, but the combination of Subbarao and Oh does not expressly refer to the threshold as the buffer being full; however, regarding these limitations, Choi teaches [“[0047] The memory system 100 may perform a write operation for the write data DT, in response to the write request RQ write of the host system 400.
In general, in a write operation, the write data DT may be buffered in the buffer memory 221 and may then be stored in the memory cells of the nonvolatile memory device 300 when a predetermined condition is satisfied (for example, in the case where the buffer memory 221 is full).”].

Subbarao, Oh and Choi are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Subbarao and Oh such that the threshold is the buffer being full, as taught by Choi, since doing so would provide the benefits of [“a data processing system where data write performance may possibly be improved since data may be stored selectively in a host memory or a nonvolatile memory device based on an attribute of the data.” (par. 0014)]. Therefore, it would have been obvious to combine Subbarao and Oh with Choi for the benefit of creating a storage system/method to obtain the invention as specified in claim 10.

Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Subbarao et al. (US 2020/0356307) in view of Oh et al. (US 2022/0050626) as applied in the rejection of claim 2 above, and further in view of Jin et al. (US 2021/0089447).

12.
(New) The apparatus of claim 2. The combination of Subbarao and Oh teaches the apparatus of claim 2, but does not expressly disclose wherein the controller is further configured to cause the apparatus to: transmit, to the memory system, an indication that a second application is activated at the apparatus; and receive, from the memory system, an indication of a second data size based at least in part on the second application, wherein the second data size is associated with a second target packet size for the second application, and wherein the target packet size is different than the second target packet size; however, regarding these limitations, Jin teaches [“[0073] As described with reference to FIG. 2, according to an embodiment, the plurality of buffer areas may be allocated. The sizes of the data temporarily stored in the allocated buffer areas may be different from each other. That is, a buffer area full of data may exist, and a buffer area including an area in which data is empty may exist together. This is because the size of data input for each application from the host is different, and the data is temporarily stored in distinct respective buffer areas for the plurality of applications.” “[0091] Referring to FIG. 5D, the buffer allocation request may include the start LBA and the data size information. Similarly, the type of the application may be distinguished according to the start LBA. In addition, when the size information of data is known, the last LBA may be calculated from the start LBA.”].

Subbarao, Oh and Jin are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Subbarao and Oh to include a second application having a second data size associated with a second target packet size, as taught by Jin, since doing so would provide the benefits of [“a memory controller providing improved reliability and a method of operating the same are provided.” (par. 0010)]. Therefore, it would have been obvious to combine Subbarao and Oh with Jin for the benefit of creating a storage system/method to obtain the invention as specified in claim 12.

13. (New) The apparatus of claim 12, wherein the controller is further configured to cause the apparatus to: configure, based at least in part on the indication of the second data size, one or more second buffers to have a second size that is greater than or equal to the second target packet size [Jin teaches “[0058] In another embodiment, respective sizes of the opened buffer areas may be different from each other. The respective sizes of the opened buffer areas may indicate a maximum amount of data that may be respectively stored in each of the buffer areas. The size of the opened buffer areas may determine whether to start a flush operation. For example, among the plurality of buffer areas included in the buffer memory 220, a maximum size of a first buffer area may be 4 Kbytes, and a maximum size of a second buffer area may be 8 Kbytes. A maximum size of a third buffer area may be one of 4 Kbytes and 8 Kbytes, or another size that does not correspond to any of 4 Kbytes and 8 Kbytes. When data is temporarily stored in the first buffer area and a total size of the stored data in that buffer area reaches 4 Kbytes, a flush operation in which the data in the first buffer area is stored into the first memory area may be performed.
When the data is temporarily stored in the second buffer area and the total size of the stored data in that buffer area reaches 8 Kbytes, the flush operation may be performed to store the data in the second buffer area into the second memory area. The flush operation may be performed on other buffer areas in the same manner. Since the flush operation may be separately performed for each buffer area, the flush operation of all buffer areas does not have to be performed simultaneously. [0059] The write request or the buffer open request provided by the host may further include information indicating a size of the opened buffer area. Referring to FIG. 2, the host may request to write the 102nd logical address LA102 and the 103rd logical address LA103. The write request may also include information on the size of the buffer area to allocate. For example, if the write request received from the host includes buffer area size information indicating a size of 4 Kbytes, the size of the second buffer area in which the data having the 102nd logical address LA102 and the 103rd logical address LA103 is to be temporarily stored may be determined as 4 Kbytes as indicated in the write request received. That is, the size of the opened buffer area may be determined according to the write request provided by the host.” Where each buffer area is configured for the different applications (pars. 0064-0066)].

CLOSING COMMENTS

a. STATUS OF CLAIMS IN THE APPLICATION

a(1) CLAIMS REJECTED IN THE APPLICATION
Per the instant Office action, claims 2-21 have received a first action on the merits and are the subject of a first-action non-final rejection.

a(2) CLAIMS NO LONGER UNDER CONSIDERATION
Claim 1 has been canceled.

b. DIRECTION OF FUTURE CORRESPONDENCE
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YAIMA RIGOL, whose telephone number is (571) 272-1232. The examiner can normally be reached Monday-Friday, 9:00 AM-5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared I. Rutz, can be reached at (571) 272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

February 12, 2026
/YAIMA RIGOL/
Primary Examiner, Art Unit 2135
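The stop-bit mechanism cited from Agesen in the claim 9 rejection above can be sketched in code: a level-two PTE whose stop bit is set is the lowest level of the hierarchy, so its PPN names a whole 2 MB large page, and the physical addresses it maps run contiguously from the address of the lowest mapped logical address to that of the highest. This is a hypothetical illustration only; all identifiers (`translate`, `l2_table`, `mapped_physical_range`, etc.) are invented for this sketch and do not appear in any cited reference.

```python
PAGE_SHIFT = 12    # 4 KB base pages
LARGE_SHIFT = 21   # 2 MB large pages; each level-two entry covers 2 MB

def translate(va, l2_table, l1_tables):
    """Walk a two-level page table. A level-two entry is (ppn, stop_bit):
    if the stop bit is set, the entry itself is the lowest level and its
    PPN maps an entire 2 MB large page."""
    l2_index = va >> LARGE_SHIFT
    ppn, stop = l2_table[l2_index]
    if stop:
        # Lowest level reached at level two: the low 21 bits of the
        # virtual address index directly into the 2 MB large page.
        return (ppn << LARGE_SHIFT) | (va & ((1 << LARGE_SHIFT) - 1))
    # Otherwise descend to a level-one table of 4 KB pages.
    l1_index = (va >> PAGE_SHIFT) & ((1 << (LARGE_SHIFT - PAGE_SHIFT)) - 1)
    l1_ppn = l1_tables[ppn][l1_index]
    return (l1_ppn << PAGE_SHIFT) | (va & ((1 << PAGE_SHIFT) - 1))

def mapped_physical_range(ppn):
    """Physical addresses of a stop-bit (large-page) mapping: they begin at
    the address of the lowest logical address mapped by the page and end at
    the address of the highest."""
    start = ppn << LARGE_SHIFT
    return start, start + (1 << LARGE_SHIFT) - 1
```

The contiguous `(start, end)` pair returned by `mapped_physical_range` corresponds to the claimed "set of physical addresses beginning at a first physical address ... and ending at a second physical address" for the lowest-level page.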
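Similarly, the buffering behavior cited from Oh, Choi and Jin in the claim 10 and 12-13 rejections can be sketched as follows: write data accumulates in a per-application buffer area, a flush to the memory device fires only when the area reaches its own threshold (its program unit size, i.e., the buffer being full), and each area may have a different size with flushes performed independently. All identifiers here (`BufferArea`, `areas`, etc.) are hypothetical and are not taken from any cited reference.

```python
class BufferArea:
    """One buffer area with its own flush threshold, per the per-application
    buffer areas described in Jin (e.g., 4 KB for one area, 8 KB for another)."""

    def __init__(self, threshold_bytes):
        self.threshold = threshold_bytes
        self.buffered = 0    # bytes currently held in this area
        self.flushes = 0     # completed flushes to the memory device

    def write(self, nbytes):
        """Buffer write data; flush once the area reaches its threshold.
        Below the threshold, nothing is provided to the memory device."""
        self.buffered += nbytes
        while self.buffered >= self.threshold:
            self.buffered -= self.threshold
            self.flushes += 1    # stand-in for programming the device

# One buffer area per application, each sized to that application's
# target packet size, as in the claim 12-13 mapping above.
areas = {"app_a": BufferArea(4096), "app_b": BufferArea(8192)}
```

Writing 4096 bytes through `areas["app_a"]` triggers a flush immediately, while the same write to `areas["app_b"]` (threshold 8192) stays buffered, illustrating that flushes are decided per area rather than for all buffers at once.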

Prosecution Timeline

Jan 14, 2025
Application Filed
Feb 12, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591522
COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN MEMORY ACCESS CONTROL PROGRAM, MEMORY ACCESS CONTROL METHOD, AND INFORMATION PROCESSING APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12585581
MEMORY MODULE HAVING VOLATILE AND NON-VOLATILE MEMORY SUBSYSTEMS AND METHOD OF OPERATION
2y 5m to grant Granted Mar 24, 2026
Patent 12579073
APPARATUS AND METHOD FOR INTELLIGENT MEMORY PAGE MANAGEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12578899
MEMORY DEVICE, MEMORY SYSTEM, MEMORY CONTROLLER, AND OPERATION METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12566716
SYSTEMS AND METHODS FOR TIMESTEP SHARED MEMORY MULTIPROCESSING BASED ON TRACKING TABLE MECHANISMS
2y 5m to grant Granted Mar 03, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
92%
With Interview (+17.5%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 619 resolved cases by this examiner. Grant probability derived from career allow rate.
