Prosecution Insights
Last updated: April 19, 2026
Application No. 19/040,600

LOGICAL ADDRESS GRANULARITY CONFIGURATIONS FOR LOGICAL ADDRESS SPACE PARTITIONS

Non-Final OA — §102, §103, §DP
Filed
Jan 29, 2025
Examiner
PAPERNO, NICHOLAS A
Art Unit
2132
Tech Center
2100 — Computer Architecture & Software
Assignee
Micron Technology, Inc.
OA Round
1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 66%

Examiner Intelligence

Career Allow Rate: 70% (193 granted / 275 resolved; +15.2% vs TC avg) — above average
Interview Lift: -3.8% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 5m (typical timeline); 21 applications currently pending
Career History: 296 total applications across all art units

Statute-Specific Performance

§101: 3.0% (-37.0% vs TC avg)
§103: 60.4% (+20.4% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 275 resolved cases
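Subtracting each reported delta from the examiner's rate recovers the Tech Center baseline the chart compared against; notably, every statute backs out to the same estimate. A quick arithmetic check (the exact metric behind these percentages is an assumption):

```python
# Examiner's statute-specific rates (%) and reported deltas vs. the
# Tech Center average, copied from the table above.
examiner = {"101": 3.0, "103": 60.4, "102": 10.1, "112": 14.2}
delta = {"101": -37.0, "103": +20.4, "102": -29.9, "112": -25.8}

# TC average = examiner rate - delta (presumably how "vs TC avg" is computed).
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # every statute backs out to the same 40.0% baseline estimate
```

The identical 40.0% across all four statutes suggests the chart's baseline was a single Tech Center estimate rather than per-statute averages.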

Office Action

§102 §103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-9 of U.S. Patent No. 12,242,374. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application deal with the creation of the partitions mentioned in the claims of the patent and also cover the creation and host communication that is covered in the independent claims of the patent.

Claim chart: Instant Application vs. U.S. Patent No. 12,242,374

Instant claim 1. A system comprising: a memory device having an associated logical address space; and a processing device, operatively coupled with the memory device, to perform operations comprising: configuring a first partition of the logical address space for a first logical address mapping granularity and a second partition of the logical address space for a second logical address mapping granularity; and processing a plurality of memory access operations directed to the first partition and the second partition of the logical address space according to the first and second logical address mapping granularities.

Instant claim 2. The system of claim 1, wherein the first logical address mapping granularity defines a first number of logical block addresses that are associated with one physical address of the memory device, and wherein the second logical address mapping granularity defines a second number of logical block addresses that are associated with one physical address of the memory device.

Instant claim 3. The system of claim 1, further comprising: a volatile memory device storing a logical-to-physical (L2P) mapping data structure for the logical address space, the L2P mapping data structure comprising a number of entries.

Instant claim 4. The system of claim 3, wherein the number of entries in the L2P mapping data structure is based on logical address mapping granularities of partitions of the logical address space.

Patent claim 1.
A system comprising: a memory device associated with a logical address space; and a processing device, operatively coupled with the memory device, to perform operations comprising: providing, to a host system, device descriptor data for the memory device, the device descriptor data comprising: a first field indicating that a memory sub-system supports at least one logical address granularity configuration for the logical address space; and a second field specifying at least one available supported granularity of the at least one logical address granularity configuration for the logical address space; obtaining, from the host system, a logical address granularity configuration for a partition of a set of partitions of the logical address space, wherein the logical address granularity configuration for the partition is located within a data structure that maintains the logical address granularity configuration for the partition, and a size of the partition defined by an address length for the partition that is represented by a number of addresses beginning from a starting address of the partition; providing, to the host system, an acknowledgement of receipt of the logical address granularity configuration for the partition, wherein the logical address granularity configuration for the partition is specified by partition descriptor data comprising a third field; receiving a command to perform a memory access operation corresponding to the partition; in response to receiving the command, determining head data alignment and tail data alignment based on the logical address granularity configuration for the partition and the size of the partition; and causing the memory access operation to be performed based on the head data alignment and the tail data alignment.

Instant claim 5. The system of claim 1, wherein the first partition and the second partition comprise at least one of a boot partition, a system partition, a cache partition, an application partition, an application data partition, or a media partition.

Patent claim 2. The system of claim 1, wherein the partition comprises at least one of: a boot partition, a system partition, a cache partition, an application partition, an application data partition, or a media partition.

Instant claim 6. The system of claim 1, wherein the processing device is to perform operations further comprising: obtaining, from a host system, a logical address granularity configuration for at least one of the first partition or the second partition of the logical address space, the logical address granularity configuration defining at least one of the first logical address mapping granularity or the second logical address mapping granularity.

Instant claim 7. The system of claim 6, wherein the logical address granularity configuration is based on a write workload size of the host system.

Patent claim 3. The system of claim 1, wherein the operations further comprise: finalizing a set of logical address granularity configurations comprising the logical address granularity configuration; and causing the memory access operation to be performed based on the set of logical address granularity configurations.

Instant claim 8. A method comprising: configuring a first partition of a logical address space of a memory device for a first logical address mapping granularity and a second partition of the logical address space for a second logical address mapping granularity; and processing a plurality of memory access operations directed to the first partition and the second partition of the logical address space according to the first and second logical address mapping granularities.

Instant claim 9.
The method of claim 8, wherein the first logical address mapping granularity defines a first number of logical block addresses that are associated with one physical address of the memory device, and wherein the second logical address mapping granularity defines a second number of logical block addresses that are associated with one physical address of the memory device.

Instant claim 10. The method of claim 8, further comprising: maintaining a logical-to-physical (L2P) mapping data structure for the logical address space, the L2P mapping data structure comprising a number of entries.

Instant claim 11. The method of claim 10, wherein the number of entries in the L2P mapping data structure is based on logical address mapping granularities of partitions of the logical address space.

Instant claim 13. The method of claim 8, further comprising: obtaining, from a host system, a logical address granularity configuration for at least one of the first partition or the second partition of the logical address space, the logical address granularity configuration defining at least one of the first logical address mapping granularity or the second logical address mapping granularity.

Instant claim 14. The method of claim 13, wherein the logical address granularity configuration is based on a write workload size of the host system.

Patent claim 4. A method comprising: providing, by a processing device to a host system, device descriptor data for a memory device associated with a logical address space, the device descriptor data comprising: a first field indicating that a memory sub-system supports at least one logical address granularity configuration for the logical address space; and a second field specifying at least one available supported granularity of the at least one logical address granularity configuration for the logical address space; obtaining, by the processing device from the host system, a logical address granularity configuration for a partition of a set of partitions of the logical address space, wherein the logical address granularity configuration for the partition is located within a data structure that maintains the logical address granularity configuration for the partition, and a size of the partition defined by an address length for the partition that is represented by a number of addresses beginning from a starting address of the partition; providing, by the processing device to the host system, an acknowledgement of receipt of the logical address granularity configuration for the partition, wherein the logical address granularity configuration for the partition is specified by partition descriptor data comprising a third field; receiving, by the processing device, a command to perform a memory access operation corresponding to the partition; in response to receiving the command, determining, by the processing device, head data alignment and tail data alignment based on the logical address granularity configuration for the partition and the size of the partition; and causing, by the processing device, the memory access operation to be performed based on the head data alignment and the tail data alignment.

Instant claim 12.
The method of claim 8, wherein the first partition and the second partition comprise at least one of a boot partition, a system partition, a cache partition, an application partition, an application data partition, or a media partition.

Patent claim 5. The method of claim 4, wherein the partition comprises at least one of: a boot partition, a system partition, a cache partition, an application partition, an application data partition, or a media partition.

Instant claim 15. A system comprising: a memory device having an associated logical address space arranged in a plurality of partitions; and a processing device, operatively coupled with the memory device, to perform operations comprising: configuring a first subset of the plurality of partitions for a first logical address mapping granularity and a second subset of the plurality of partitions for a second logical address mapping granularity; and processing a plurality of memory access operations directed to the plurality of partitions of the logical address space according to the first and second logical address mapping granularities.

Instant claim 16. The system of claim 15, wherein the first logical address mapping granularity defines a first number of logical block addresses that are associated with one physical address of the memory device, and wherein the second logical address mapping granularity defines a second number of logical block addresses that are associated with one physical address of the memory device.

Instant claim 17. The system of claim 15, further comprising: a volatile memory device storing a logical-to-physical (L2P) mapping data structure for the logical address space, the L2P mapping data structure comprising a number of entries.

Instant claim 18. The system of claim 17, wherein the number of entries in the L2P mapping data structure is based on logical address mapping granularities of partitions of the logical address space.

Instant claim 19. The system of claim 15, wherein the processing device is to perform operations further comprising: obtaining, from a host system, a logical address granularity configuration for at least one of the first subset or the second subset of the plurality of partitions, the logical address granularity configuration defining at least one of the first logical address mapping granularity or the second logical address mapping granularity.

Instant claim 20. The system of claim 19, wherein the logical address granularity configuration is based on a write workload size of the host system.

Patent claim 7. A system comprising: a memory device associated with a logical address space; and a processing device, operatively coupled with the memory device, to perform operations comprising: receiving, from a host system, a request for usable capacity information for the logical address space; providing, to the host system, the usable capacity information; receiving, from the host system, a request for supported logical address granularity information; providing, to the host system, the supported logical address granularity information, wherein the usable capacity information and supported logical address granularity information comprise device descriptor data for the memory device, the device descriptor data comprising: a first field indicating that a memory sub-system supports at least one logical address granularity configuration for the logical address space; and a second field specifying at least one available supported granularity of the at least one logical address granularity configuration for the logical address space; obtaining, from the host system, a logical address granularity configuration for a partition of a set of partitions of the logical address space, wherein the logical address granularity configuration for the partition is located within a data structure that maintains the logical address granularity configuration for the partition, and a size of the partition defined by an address length for
the partition that is represented by a number of addresses beginning from a starting address of the partition; and providing, to the host system, an acknowledgement of receipt of the logical address granularity configuration for the partition, wherein the logical address granularity configuration for the partition is specified by partition descriptor data comprising a third field; receiving a command to perform a memory access operation corresponding to the partition; in response to receiving the command, determining head data alignment and tail data alignment based on the logical address granularity configuration for the partition and the size of the partition; and causing the memory access operation to be performed based on the head data alignment and the tail data alignment.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 5, 6, 8, 9, 12, 13, 15, 16, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chou (US PGPub 2018/0189174).

Regarding claim 1, Chou teaches a system comprising: a memory device having an associated logical address space, and a processing device, operatively coupled with the memory device (Fig. 1 and Paragraphs [0016]-[0017] show the device that has a memory device and a controller connected to the memory device; Paragraph [0020] shows there is a logical address space associated with the memory), to perform operations comprising: configuring a first partition of the logical address space for a first logical address mapping granularity and a second partition of the logical address space for a second logical address mapping granularity (Paragraph [0020] states the host can send a create request back with a particular configuration of namespaces, which can include formatted logical block addresses (LBAs), which can have an LBA size (granularity); Paragraph [0021] states that a global mapping table which defines the partitions can be created, thereby configuring the partitions), and processing a plurality of memory access operations directed to the first partition and the second partition of the logical address space according to the first and second logical address mapping granularities (Paragraph [0024] states that the data and access to the data are then managed based on the created table and configured partitions; per Paragraphs [0059] and [0064], since the cache is being partitioned into different sections that have different page granularities, the configurations would be finalized as they are put into use, and access operations to those partitions would be done using the configured page size, as that is how the partitions have been set up to be used. It should be noted that this is true for any system that utilizes partitions of any kind and different granularities).

Regarding claim 2, Chou teaches all the limitations to claim 1.
Chou further teaches wherein the first logical address mapping granularity defines a first number of logical block addresses that are associated with one physical address of the memory device, and wherein the second logical address mapping granularity defines a second number of logical block addresses that are associated with one physical address of the memory device (Paragraph [0028] shows that the LBAs are associated with a block and page ID; since blocks can contain multiple pages, there can be multiple LBAs associated with a particular block).

Regarding claim 5, Chou teaches all the limitations to claim 1. Chou further teaches wherein the first partition and the second partition comprise at least one of a boot partition, a system partition, a cache partition, an application partition, an application data partition, or a media partition (Paragraphs [0059] and [0064]: the cache is partitioned for use by applications, making the partitions application partitions).

Regarding claim 6, Chou teaches all the limitations to claim 1. Chou further teaches wherein the processing device is to perform operations further comprising: obtaining, from a host system, a logical address granularity configuration for at least one of the first partition or the second partition of the logical address space, the logical address granularity configuration defining at least one of the first logical address mapping granularity or the second logical address mapping granularity (Paragraph [0020] states the host can send a create request back with a particular configuration of namespaces, which can include formatted logical block addresses (LBAs), which can have an LBA size. Since the size of the namespace can be expressed in terms of a number of LBAs, the host and controller would have to know the size of the LBA to know how much space is being requested and eventually allocated so as to create a proper mapping).
Regarding claims 8, 9, 12, and 13: claims 8, 9, 12, and 13 are the method claims associated with claims 1, 2, 5, and 6. Since Chou teaches all the limitations to claims 1, 2, 5, and 6, it also teaches all the limitations to claims 8, 9, 12, and 13; therefore the rejections to claims 1, 2, 5, and 6 also apply to claims 8, 9, 12, and 13.

Regarding claims 15, 16, and 19: claims 15, 16, and 19 are the system claims associated with claims 1, 2, and 6. Since Chou teaches all the limitations to claims 1, 2, and 6 and further teaches a memory device having an associated logical address space arranged in a plurality of partitions; and a processing device, operatively coupled with the memory device (Fig. 1 and Paragraphs [0016]-[0017] and [0020]; as stated in the rejection to claim 1, the memory that is coupled to the controller (processing device) is shown to be divided into several namespaces), it also teaches all the limitations to claims 15, 16, and 19; therefore the rejections to claims 1, 2, and 6 also apply to claims 15, 16, and 19.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3, 4, 10, 11, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chou.

Regarding claim 3, Chou teaches all the limitations to claim 1. Chou further teaches a volatile memory device storing data (Fig. 1 and Paragraph [0017] show the volatile storage medium for storing data). Chou also teaches storing a logical-to-physical (L2P) mapping data structure for the logical address space, the L2P mapping data structure comprising a number of entries (Fig. 4 and Paragraph [0022] show the global H2F mapping table which maps the logical addresses to physical memory locations; however, it is not stated where specifically in the system the mapping table is stored). Since Chou teaches a volatile memory for storing data and storing a mapping table, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the storage location of the mapping table with the volatile memory to obtain the predictable result of a volatile memory device storing a logical-to-physical (L2P) mapping data structure for the logical address space (as all this does is specify where the mapping table is stored).

Regarding claim 4, Chou teaches all the limitations of claim 3. Chou further teaches wherein the number of entries in the L2P mapping data structure is based on logical address mapping granularities of partitions of the logical address space (Fig. 4 and Paragraph [0022]: as the mapping table is based on the configuration command which specifies the number of LBAs for a partition and the size of the LBAs for a partition, the entries in the table will be based on this information). The combination and reason for combining are the same as those given in claim 3.

Regarding claims 10 and 11: claims 10 and 11 are the method claims associated with claims 3 and 4. Since Chou teaches all the limitations to claims 3 and 4, it also teaches all the limitations to claims 10 and 11; therefore the rejections to claims 3 and 4 also apply to claims 10 and 11.

Regarding claims 17 and 18: claims 17 and 18 are the system claims associated with claims 3 and 4.
Since Chou teaches all the limitations to claims 3 and 4 and further teaches a memory device having an associated logical address space arranged in a plurality of partitions; and a processing device, operatively coupled with the memory device (Fig. 1 and Paragraphs [0016]-[0017] and [0020]; as stated in the rejection to claim 1, the memory that is coupled to the controller (processing device) is shown to be divided into several namespaces), it also teaches all the limitations to claims 17 and 18; therefore the rejections to claims 3 and 4 also apply to claims 17 and 18.

Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chou in view of Yang et al. (US PGPub 2022/0107743, hereafter referred to as Yang).

Regarding claim 7, Chou teaches all the limitations of claim 6. Chou does not teach wherein the logical address granularity configuration is based on a write workload size of the host system. Yang teaches wherein the logical address granularity configuration is based on a write workload size of the host system (Paragraph [0040] describes the use of workload zones that can have different granularities (particularly virtual zones, meaning they are using a logical address space) and be based on things like a workload set size or ratio of reads (and writes)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Chou to use the partitions of Yang so as to improve or optimize the overall performance of a storage system and/or ensure fairness to storage clients (Yang, Paragraph [0036]).

Regarding claim 14, claim 14 is the method claim associated with claim 7. Since Chou and Yang teach all the limitations to claim 7, they also teach all the limitations to claim 14; therefore the rejection to claim 7 also applies to claim 14.

Regarding claim 20, claim 20 is the system claim associated with claim 7. Since Chou and Yang teach all the limitations to claim 7 and Chou further teaches a memory device having an associated logical address space arranged in a plurality of partitions; and a processing device, operatively coupled with the memory device (Fig. 1 and Paragraphs [0016]-[0017] and [0020]; as stated in the rejection to claim 1, the memory that is coupled to the controller (processing device) is shown to be divided into several namespaces), they also teach all the limitations to claim 20; therefore the rejection to claim 7 also applies to claim 20.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS A PAPERNO, whose telephone number is (571) 272-8337. The examiner can normally be reached Mon-Fri 9:30-5 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hosain Alam, can be reached at 571-272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NICHOLAS A. PAPERNO/
Examiner, Art Unit 2132
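Stripped of claim language, the mechanism at the center of the rejections — per-partition mapping granularity, an L2P table whose entry count follows that granularity (claims 3-4, 10-11, 17-18), and head/tail data alignment for an access — reduces to a little arithmetic. A minimal sketch; all names, sizes, and the alignment interpretation below are hypothetical illustrations, not drawn from the application or from the Chou reference:

```python
from dataclasses import dataclass

@dataclass
class Partition:
    start_lba: int    # starting logical block address of the partition
    length_lbas: int  # partition size as a number of LBAs from the start
    granularity: int  # LBAs mapped to one physical address (the L2P unit)

    def l2p_entries(self) -> int:
        # Entry count follows the partition's granularity: a coarser
        # granularity means fewer L2P entries for the same span.
        return -(-self.length_lbas // self.granularity)  # ceiling division

def head_tail_alignment(part: Partition, req_start: int, req_len: int):
    # One plausible reading of the head/tail-alignment limitation: how many
    # LBAs the request's start and end stray from granularity boundaries.
    unit = part.granularity
    offset = req_start - part.start_lba
    head = offset % unit                 # misaligned LBAs before the request
    tail = (unit - (offset + req_len) % unit) % unit  # misaligned LBAs after
    return head, tail

# Two partitions configured with different granularities (the core of claim 1).
boot = Partition(start_lba=0, length_lbas=2048, granularity=1)
media = Partition(start_lba=2048, length_lbas=65536, granularity=16)

print(boot.l2p_entries(), media.l2p_entries())  # 2048 4096
print(head_tail_alignment(media, 2053, 30))     # (5, 13)
```

Here the media partition maps 16 LBAs per physical address, so its 65,536 LBAs need only 4,096 L2P entries, while a 30-LBA request starting 5 LBAs into a mapping unit reports a 5-LBA head and a 13-LBA tail misalignment.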

Prosecution Timeline

Jan 29, 2025
Application Filed
Feb 20, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602314
MEMORY EXPANSION METHOD AND RELATED DEVICE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12585580
TECHNIQUES FOR A FRAGMENT CURSOR
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585406
WRITING AND READING DATA SETS TO AND FROM CLOUD STORAGE FOR LEGACY MAINFRAME APPLICATIONS
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12578884
DYNAMIC ONLINE CODE-RATE ALLOCATION ACCORDING TO WORDLINE NOISE FOR ADAPATIVE ECC IN SSD/UFS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12578904
METHOD FOR HANDLING ACCESS COMMANDS WITH MATCHING AND UNMATCHING ADDRESSES AND SOLID-STATE STORAGE DEVICE OPERATING THE SAME
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview (-3.8%): 66%
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 275 resolved cases by this examiner. Grant probability derived from career allow rate.
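The headline figures are simple arithmetic on the examiner's career record; a sketch, assuming the interview lift is applied additively in percentage points (how the dashboard actually combines the numbers is not stated):

```python
granted, resolved = 193, 275           # examiner's career record, from above
allow_rate = 100 * granted / resolved  # about 70.2%, displayed as 70%
interview_lift = -3.8                  # percentage-point lift from interviews

with_interview = allow_rate + interview_lift  # about 66.4%, displayed as 66%
print(round(allow_rate), round(with_interview))  # 70 66
```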
