Prosecution Insights
Last updated: April 19, 2026
Application No. 18/331,842

NAMESPACES ALLOCATION IN NON-VOLATILE MEMORY DEVICES

Non-Final OA §103
Filed: Jun 08, 2023
Examiner: HO, AARON D
Art Unit: 2139
Tech Center: 2100 — Computer Architecture & Software
Assignee: Micron Technology, Inc.
OA Round: 7 (Non-Final)

Grant Probability: 74% (Favorable)
OA Rounds: 7-8
To Grant: 2y 5m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 74% — above average (187 granted / 251 resolved; +19.5% vs TC avg)
Interview Lift: +15.1% on resolved cases with interview
Avg Prosecution: 2y 5m (typical timeline; 11 currently pending)
Total Applications: 262 across all art units

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 23.0% (-17.0% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 251 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 8, 2025 has been entered.

Response to Amendment

The amendments filed November 7, 2025 and December 8, 2025 have been entered. Claims 1-20 remain pending in this application.

Terminal Disclaimer

The terminal disclaimers filed on November 7, 2025 and December 8, 2025, disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of US 10,437,476, US 10,969,963, US 11,520,484, and US 11,714,553, have been reviewed and are accepted. The terminal disclaimers have been recorded.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Oshima (US 2016/0342463) in view of Sundararaman et al. (US 2016/0070652).

Regarding claim 1, Oshima teaches a device (Fig. 1, storage device 104), comprising: a memory (Fig. 1, non-volatile semiconductor memory 112); a host interface (Fig. 1, host interface 106) configured to receive, over a computer bus from a host (Fig. 1 depicts a bus communication between host system 102 and host interface 106), data access commands identifying first logical addresses specified in first storage spaces implemented using portions of the memory ("Front end 108 communicates with host system 102 to receive, organize, and forward commands and data from host system 102," [0019], teaching that the front end 108 receives commands and data from host system 102, where Fig. 1 shows the front end communicating with the host interface; see also "To write data to NAND 112, host system 102 provides a write command, write data, and a logical address to front end 108 via host interface 106," [0035], and "To read data from NAND 112, host system 102 provides a read command and a logical address to front end 108 via host interface 106," [0036], providing example write and read commands with a logical address attached to the command to identify physical locations of the memory to write to/read from); and a circuit (Fig. 1, back end 110, where [0033] provides hardware implementations for back end 110, teaching a circuit) configured to translate first logical addresses specified in first storage spaces implemented using portions of the memory into second logical addresses specified in a second storage space corresponding to a capacity of the memory ("Back end 110 includes multiple functional units, including … a logical-to-physical address translation unit 132," [0025]; "For efficiency, it is advantageous for logical-to-physical translation unit 132 to use a logical address as an index to a single lookup table that encompasses all namespaces managed by SSD controller 105. However, the namespace-based addresses illustrated in FIG. 2 are not amenable for use in such a table," [0040]; "Thus, instead of using namespace-based addresses as indexes to logical-to-physical lookup tables, logical-to-physical translation unit 132 instead first converts the namespace-based address to a linear, internal address that is not based on namespaces (also referred to herein simply as an 'internal address,' or a 'linear address') and uses that linear, internal address as an index to a logical-to-physical lookup table. Within the linear address space that is associated with the linear, internal address, the namespaces are arrayed in a back-to-back manner, so that the linear addresses corresponding to one namespace are adjacent to the linear addresses corresponding to the subsequent namespace. This effectively converts the namespace-based address space into an address space that includes a single set of numbers that begin at 0 and increase to a maximum number," [0041]; see also Figs. 2 and 3, with Fig. 3 showing that the output of the conversion to the internal address is specifically a logical address, and the citations above showing that these are logical addresses in the namespace; regarding the second storage space corresponding to a capacity of the memory, see the internal address space in [0043] and Figs. 4 and 5 showing the entire logical address space of the NAND storage); wherein the circuit is configured to divide the first logical addresses in the first storage spaces into first blocks ("The use of namespaces means that logical addresses provided by host system 102 to SSD controller 105 include a namespace identifier, which identifies a namespace (and can be, for example, a short sequence of bits), in addition to a logical block address, which identifies a logical block within that namespace," [0038], teaching that the namespaces can be identified by block addresses, reading on the division of the first spaces and addresses into blocks; see also "The flow in the diagram begins with the host system 102 and ends with the back end 110 of the SSD controller 105. To access (read or write) particular data within storage device 104, host system 102 provides an LBA-based address 602 that includes a namespace identifier ('NSID') and that specifies a logical block address ('LBA') within the namespace associated with the NSID. The LBA specifies a particular block within the associated namespace. Note that because namespaces are independent logical subdivisions of storage for storage device 104, an LBA in one namespace points to a different location in NAND 112 than the same LBA in a different namespace," [0051]).

Oshima fails to teach where the translation occurs based on a plurality of block sizes. Sundararaman's disclosure relates to a storage device and interface and as such comprises analogous art in the same field of endeavor of storage management. As part of this disclosure, Sundararaman provides for a translation between IO namespaces and storage resources, where "The virtual blocks may be adapted to provide a desired storage granularity (e.g., block size)," [0070], with the ability to select different block sizes based on different factors, see [0099], and the example of [0153] showing different namespaces and storage resources with different block sizes; see also the Fig. 2B embodiment. An obvious modification can be identified: providing flexibility to select a desired block size for different virtual blocks and different underlying blocks, reading upon where a translation between virtual blocks and underlying blocks is based on a plurality of block sizes. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Sundararaman's variable block size into Oshima's system, as a flexible block size allows for the ability to account for different performance factors such as "memory overhead (e.g., larger virtual blocks 145 may result in a fewer number of entries in the forward map 125), garbage collection complexity, sequential storage performance of the storage resource(s) 190, I/O properties of the clients 106 (e.g., preferred client block size), and/or the like," [0099].
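The back-to-back namespace layout quoted from Oshima [0041] reduces to simple offset arithmetic, which a short sketch makes concrete (illustrative Python only; the function names, namespace names, and sizes are hypothetical and appear in neither reference):

```python
# Sketch of converting a namespace-based address (NSID + LBA) into a single
# linear internal address by laying namespaces out back to back, per the
# Oshima [0041] passage quoted above. All names/sizes are illustrative.

def build_offsets(namespace_sizes):
    """Assign each namespace a starting linear address, back to back."""
    offsets, cursor = {}, 0
    for nsid, size in namespace_sizes.items():
        offsets[nsid] = cursor
        cursor += size
    return offsets

def to_linear(nsid, lba, offsets, namespace_sizes):
    """Translate (NSID, LBA) into the linear space that begins at 0."""
    if lba >= namespace_sizes[nsid]:
        raise ValueError("LBA outside namespace")
    return offsets[nsid] + lba

sizes = {"NS1": 100, "NS2": 50}          # blocks per namespace (illustrative)
offsets = build_offsets(sizes)
assert to_linear("NS1", 0, offsets, sizes) == 0     # linear space begins at 0
assert to_linear("NS2", 0, offsets, sizes) == 100   # NS2 starts right after NS1
assert to_linear("NS2", 49, offsets, sizes) == 149  # maximum linear address
```

The resulting linear address would then index a single logical-to-physical lookup table covering all namespaces, which is the efficiency the [0040]-[0041] passages describe.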
Regarding claim 2, the combination of Oshima and Sundararaman teaches the device of claim 1, and Oshima further teaches wherein the first storage spaces include a namespace allocated from a storage capacity of the memory (see Fig. 2 showing namespaces allocated, where Figs. 3, 4, and 5 show how the namespace correlates to an internal linear addressing, where the linear internal addressing refers to the SSD's internal address space, see [0043]).

Regarding claim 3, the combination of Oshima and Sundararaman teaches the device of claim 2, and Oshima further teaches wherein the second storage space includes the storage capacity (see the SSD internal address space in [0043]; see also Figs. 4 and 5 showing the entire logical address space of the NAND storage).

Regarding claim 4, the combination of Oshima and Sundararaman teaches the device of claim 1, and the combination further teaches wherein the circuit is configured to divide, according to the plurality of block sizes, the first logical addresses in the first storage spaces into first blocks ("The use of namespaces means that logical addresses provided by host system 102 to SSD controller 105 include a namespace identifier, which identifies a namespace (and can be, for example, a short sequence of bits), in addition to a logical block address, which identifies a logical block within that namespace," [0038], teaching that the namespaces can be identified by block addresses, reading on the division of the first spaces and addresses into blocks; see also "The flow in the diagram begins with the host system 102 and ends with the back end 110 of the SSD controller 105. To access (read or write) particular data within storage device 104, host system 102 provides an LBA-based address 602 that includes a namespace identifier ('NSID') and that specifies a logical block address ('LBA') within the namespace associated with the NSID. The LBA specifies a particular block within the associated namespace. Note that because namespaces are independent logical subdivisions of storage for storage device 104, an LBA in one namespace points to a different location in NAND 112 than the same LBA in a different namespace," [0051]; regarding "according to the plurality of block sizes," as discussed in the claim 1 rationale, Sundararaman's disclosure provides for translation between IO namespaces and storage resources, with the ability to translate based on a desired storage granularity/block size and the ability to select different block sizes, reading on this limitation) and map the first blocks into second blocks in the second storage space ("Logical-to-physical translation unit 132 translates logical addresses, e.g., logical block addresses (LBAs), to physical addresses, e.g., physical block addresses, of non-volatile semiconductor memory 112 during reading or writing data," [0028]; "logical-to-physical translation unit 132 instead first converts the namespace-based address to a linear, internal address that is not based on namespaces (also referred to herein simply as an 'internal address,' or a 'linear address') and uses that linear, internal address as an index to a logical-to-physical lookup table," [0041], teaching that the internal linear address space addresses are also provided as blocks).

Regarding claim 5, the combination of Oshima and Sundararaman teaches the device of claim 4, and Oshima further teaches wherein the first blocks include third logical addresses that are specified continuously in one of the first storage spaces (as seen in Fig. 2, Oshima's namespaces are ordered in a sequential manner, with each address being provided continuously within the respective namespace to provide for increasing offsets; this is also seen in Figs. 4 and 5). Oshima fails to teach where fourth logical addresses, mapped from the third logical addresses, are not continuous in the second storage space. As seen in Oshima Figs. 4 and 5, Oshima's internal linear address space provides for a sequential, continuous mapping from the namespace addresses to the linear address space. As further provided in Sundararaman's disclosure, in the Fig. 2B embodiment, the general address space 122 shows how discontinuous parts of the address space may be mapped to virtual blocks assigned to a given resource/namespace (see virtual blocks 145A being mapped arbitrarily within the total address space, with one arrow from the left side of the address space 122 and another arrow being mapped from close to the right side of the address space 122). A further modification can be identified: incorporating Sundararaman's disclosure of discontinuous mapping between an overall address space and the individual namespaces. Such a modification reads upon the limitation of the claim. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Sundararaman's disclosure of discontinuous mapping between address spaces into Oshima's disclosure, as this allows for flexibility to reuse parts of the logical address space as they become available instead of potentially having to shift data around in order to adjust boundaries of namespaces.

Regarding claim 6, the combination of Oshima and Sundararaman teaches the device of claim 5, and the combination further teaches wherein the plurality of block sizes include a common block size shared across the first storage spaces in division of the first logical addresses in the first storage spaces into the first blocks (as part of the incorporation of Sundararaman in the claim 4 rationale, Sundararaman [0153] provides for different block sizes while still providing for 2 kb virtual blocks for the different LIDs, providing for a 2 kb common size).
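The discontinuous mapping attributed to Sundararaman's Fig. 2B above can be sketched as a per-namespace lookup table whose entries point at scattered extents of the overall address space (illustrative Python; the table contents and names are hypothetical, not from either reference):

```python
# Sketch of the claim 5 rationale: blocks of one namespace map through a
# lookup table to non-adjacent extents of the overall address space, rather
# than to one contiguous run. Addresses below are made up for illustration.

BLOCK_SIZE = 2048  # bytes per block (illustrative)

namespace_map = {
    # namespace block index -> start address in the overall address space
    0: 4096,    # first block lands near the start of the space
    1: 65536,   # next block reuses a free extent much further away
    2: 8192,
}

def resolve(block_index, offset_in_block):
    """Resolve a namespace-relative offset to an overall-space address."""
    assert offset_in_block < BLOCK_SIZE
    return namespace_map[block_index] + offset_in_block

assert resolve(0, 10) == 4106
assert resolve(1, 0) == 65536
# Consecutive namespace blocks need not be adjacent in the overall space:
assert namespace_map[1] - namespace_map[0] != BLOCK_SIZE
```

This indirection is the flexibility the rejection credits to the combination: a freed extent anywhere in the overall space can back a namespace block, so namespace boundaries can change without shifting data.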
Regarding claim 7, the combination of Oshima and Sundararaman teaches the device of claim 6, and the combination further teaches wherein each of the first storage spaces is divided by no more than two block sizes (as shown in the claims 4 and 6 rationale, Sundararaman provides for different block size resources, where Fig. 2B shows that each individual namespace 145A and 190A through 145N and 190N maps different block sizes in 190A-190N onto the single common block size in 145A-145N, teaching that each of the namespaces utilizes both its own individual block size and the common size, with the following citations coming from Sundararaman); wherein each of the first storage spaces is divided by at least one block size that is no larger than the common block size ("In the FIG. 2B embodiment, the storage resource 190A may be configured with a block size of 1 kb, the storage resource 190B may have a block size of 2 kb, and the storage resource 190N may have a block size of 512 bytes. The translation module 124 may configure the LIDs to reference 2 kb blocks," [0153], so each of the namespaces utilizes a block size no larger than the common block size); and wherein each of the first storage spaces is divided to include no more than one block of logical addresses having a block size that is different from the common block size ("In the FIG. 2B embodiment, the storage resource 190A may be configured with a block size of 1 kb, the storage resource 190B may have a block size of 2 kb, and the storage resource 190N may have a block size of 512 bytes. The translation module 124 may configure the LIDs to reference 2 kb blocks," [0153], each namespace utilizes one block size, i.e., no more than one block size different from the common block size).
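The block-size relationships quoted from Sundararaman [0153] reduce to simple arithmetic: each common-size (2 kb) virtual block is backed by a whole number of resource blocks. A minimal sketch (illustrative Python; the helper name is ours, not Sundararaman's):

```python
# Per Sundararaman [0153]: LIDs reference a common 2 kb block, while each
# storage resource has its own block size (1 kb, 2 kb, or 512 bytes), so a
# whole number of resource blocks are grouped under each common block.

COMMON_BLOCK = 2048  # common virtual-block size, in bytes

def blocks_per_common(resource_block_size):
    """How many resource blocks back one common-size virtual block."""
    assert COMMON_BLOCK % resource_block_size == 0, "sizes must nest evenly"
    return COMMON_BLOCK // resource_block_size

assert blocks_per_common(1024) == 2  # resource 190A: two 1 kb blocks
assert blocks_per_common(2048) == 1  # resource 190B: one 2 kb block
assert blocks_per_common(512) == 4   # resource 190N: four 512-byte blocks
```

This grouping of smaller resource blocks under a common-size block is what the claim 9 rationale later reads as "consolidating" first blocks into a third block of the common size.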
Regarding claim 8, the combination of Oshima and Sundararaman teaches the device of claim 7, and Oshima further teaches wherein the circuit is configured at least in part via firmware ("In various embodiments, the functional blocks included in front end 108 and back end 110 represent hardware or combined software and hardware elements for performing associated functionality. Thus, any or all of the functional blocks may be embodied as firmware executing in a processing unit," [0033]).

Regarding claim 9, the combination of Oshima and Sundararaman teaches the device of claim 7, and the combination further teaches wherein the circuit is configured to consolidate at least two of the first blocks into being mapped within a third block having the common block size in the second storage space (as discussed in the claim 7 rationale, incorporating the claims 4 and 6 rationale, Sundararaman's disclosure provides for different block sizes for the different namespaces while providing for a larger common virtual block for the overall logical address space; in order to make this work, "The translation module 124 may configure the LIDs to reference 2 kb blocks, which may comprise mapping two virtual addresses 195A to each virtual block 145A, mapping one virtual address 195B to each virtual block 145B, mapping four virtual addresses 195N to each virtual block 145N, and so on," [0153], teaching that smaller blocks are consolidated into blocks of the common size).

Regarding claim 10, Oshima teaches a method, comprising: allocating a plurality of first storage spaces respectively from a plurality of portions of a memory of a device (Figs. 2, 4, 5, allocating namespaces from an internal logical address space of a storage device; see also Fig. 1 for the storage device); receiving, in a host interface of the device over a computer bus from a host, data access commands identifying first logical addresses specified in the first storage spaces ("To write data to NAND 112, host system 102 provides a write command, write data, and a logical address to front end 108 via host interface 106," [0035], and "To read data from NAND 112, host system 102 provides a read command and a logical address to front end 108 via host interface 106," [0036], providing example write and read commands with a logical address attached to the command to identify physical locations of the memory to write to/read from); translating, by the device, first logical addresses specified in the first storage spaces into second logical addresses specified in a second storage space defined in the memory ("Back end 110 includes multiple functional units, including … a logical-to-physical address translation unit 132," [0025]; "For efficiency, it is advantageous for logical-to-physical translation unit 132 to use a logical address as an index to a single lookup table that encompasses all namespaces managed by SSD controller 105. However, the namespace-based addresses illustrated in FIG. 2 are not amenable for use in such a table," [0040]; "Thus, instead of using namespace-based addresses as indexes to logical-to-physical lookup tables, logical-to-physical translation unit 132 instead first converts the namespace-based address to a linear, internal address that is not based on namespaces (also referred to herein simply as an 'internal address,' or a 'linear address') and uses that linear, internal address as an index to a logical-to-physical lookup table. Within the linear address space that is associated with the linear, internal address, the namespaces are arrayed in a back-to-back manner, so that the linear addresses corresponding to one namespace are adjacent to the linear addresses corresponding to the subsequent namespace. This effectively converts the namespace-based address space into an address space that includes a single set of numbers that begin at 0 and increase to a maximum number," [0041]; see also Figs. 2 and 3, with Fig. 3 showing that the output of the conversion to the internal address is specifically a logical address, and the citations above showing that these are logical addresses in the namespace); and dividing the first logical addresses in the first storage spaces into first blocks ("The use of namespaces means that logical addresses provided by host system 102 to SSD controller 105 include a namespace identifier, which identifies a namespace (and can be, for example, a short sequence of bits), in addition to a logical block address, which identifies a logical block within that namespace," [0038], teaching that the namespaces can be identified by block addresses, reading on the division of the first spaces and addresses into blocks; see also "The flow in the diagram begins with the host system 102 and ends with the back end 110 of the SSD controller 105. To access (read or write) particular data within storage device 104, host system 102 provides an LBA-based address 602 that includes a namespace identifier ('NSID') and that specifies a logical block address ('LBA') within the namespace associated with the NSID. The LBA specifies a particular block within the associated namespace. Note that because namespaces are independent logical subdivisions of storage for storage device 104, an LBA in one namespace points to a different location in NAND 112 than the same LBA in a different namespace," [0051]).

Oshima fails to teach where the translation occurs based on a plurality of block sizes. Sundararaman's disclosure relates to a storage device and interface and as such comprises analogous art in the same field of endeavor of storage management. As part of this disclosure, Sundararaman provides for a translation between IO namespaces and storage resources, where "The virtual blocks may be adapted to provide a desired storage granularity (e.g., block size)," [0070], with the ability to select different block sizes based on different factors, see [0099], and the example of [0153] showing different namespaces and storage resources with different block sizes; see also the Fig. 2B embodiment. An obvious modification can be identified: providing flexibility to select a desired block size for different virtual blocks and different underlying blocks, reading upon where a translation between virtual blocks and underlying blocks is based on a plurality of block sizes. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to incorporate Sundararaman's variable block size into Oshima's system, as a flexible block size allows for the ability to account for different performance factors such as "memory overhead (e.g., larger virtual blocks 145 may result in a fewer number of entries in the forward map 125), garbage collection complexity, sequential storage performance of the storage resource(s) 190, I/O properties of the clients 106 (e.g., preferred client block size), and/or the like," [0099].

Claims 11, 12, 13, 14, 15, and 16 are rejected according to the rationale of claims 4, 5, 6, 7, 3 (incorporating the rejection of claim 2), and 9, respectively.

Regarding claim 17, Oshima teaches a non-transitory computer storage medium storing instructions which, when executed by a device having a memory, cause the device to perform a method ("One embodiment disclosed herein provides a non-transitory computer-readable medium. The non-transitory computer-readable medium stores instructions that, when executed by a processor, cause the processor to perform a method," [0007]) identical to the method of claim 10 (with the exception that the last limitation of claim 10 is no longer recited in claim 17), and rejected according to the same rationale.

Claims 18, 19, and 20 are rejected according to the rationale of claims 11, 13 (incorporating the rejection of claim 12), and 9 (incorporating the rejection of claim 7), respectively.

Response to Arguments

Applicant's arguments filed December 8, 2025 have been fully considered but they are not persuasive. The bulk of the argument focuses on Sundararaman's Fig. 1A and 1B implementation, arguing that the intermediate translation layer is configured in a data services module as a host of the storage resource, so that incorporating this into Oshima's disclosure would modify Oshima's host system, not the storage device. However, this is unpersuasive: Oshima is silent as to any translation functionality of the host; instead, Oshima's host is more similar to Sundararaman's clients that provide IO requests, see for example [0094]. Oshima's back end is cited above as providing translation functionality similar to Sundararaman's translation layer, and as such there is more rationale for modifying Oshima's back end with the variable granularity and block size functionality of Sundararaman than there is for modifying the host.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON D HO whose telephone number is (469)295-9093. The examiner can normally be reached Mon-Fri 8:00-4:00 CT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Reginald Bragdon, can be reached at (571)272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.D.H./Examiner, Art Unit 2139
/REGINALD G BRAGDON/Supervisory Patent Examiner, Art Unit 2139

Prosecution Timeline

Jun 08, 2023 — Application Filed
Jan 01, 2024 — Non-Final Rejection (§103)
Apr 05, 2024 — Response Filed
Apr 21, 2024 — Final Rejection (§103)
Jun 25, 2024 — Response after Non-Final Action
Jul 25, 2024 — Request for Continued Examination
Jul 29, 2024 — Response after Non-Final Action
Aug 01, 2024 — Response after Non-Final Action
Nov 07, 2024 — Non-Final Rejection (§103)
Feb 13, 2025 — Response Filed
Feb 24, 2025 — Final Rejection (§103)
Apr 24, 2025 — Response after Non-Final Action
May 16, 2025 — Non-Final Rejection (§103)
Aug 25, 2025 — Response Filed
Sep 04, 2025 — Final Rejection (§103)
Nov 07, 2025 — Response after Non-Final Action
Dec 08, 2025 — Request for Continued Examination
Dec 19, 2025 — Response after Non-Final Action
Dec 29, 2025 — Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578886 — METHOD AND APPARATUS FOR MEMORY MANAGEMENT IN MEMORY DISAGGREGATION ENVIRONMENT — 2y 5m to grant; granted Mar 17, 2026
Patent 12572356 — MEMORY DEVICE FOR PERFORMING IN-MEMORY PROCESSING — 2y 5m to grant; granted Mar 10, 2026
Patent 12561252 — DYNAMIC CACHE LOADING AND VERIFICATION — 2y 5m to grant; granted Feb 24, 2026
Patent 12554418 — MEMORY CHANNEL CONTROLLER OPERATION BASED ON DATA TYPES — 2y 5m to grant; granted Feb 17, 2026
Patent 12524340 — ARRAY ACCESS WITH RECEIVER MASKING — 2y 5m to grant; granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 74%
With Interview: 90% (+15.1%)
Median Time to Grant: 2y 5m
PTA Risk: High
Based on 251 resolved cases by this examiner. Grant probability derived from career allow rate.
