Prosecution Insights
Last updated: April 19, 2026
Application No. 18/089,717

METHOD AND APPARATUS TO DYNAMICALLY SHARE NON-VOLATILE CACHE IN TIERED STORAGE

Non-Final Office Action (§102, §103)

Filed: Dec 28, 2022
Examiner: VERDERAMO III, RALPH A
Art Unit: 2139
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 1 (Non-Final)

Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 10m
Grant Probability With Interview: 89%

Examiner Intelligence

Career Allow Rate: 79%, above average (336 granted / 425 resolved; +24.1% vs TC avg)
Interview Lift: +10.1%, a moderate lift, measured over resolved cases with an interview
Typical Timeline: 2y 10m average prosecution; 10 applications currently pending
Career History: 435 total applications across all art units

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§103: 46.7% (+6.7% vs TC avg)
§102: 20.6% (-19.4% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)
TC averages are Tech Center estimates • Based on career data from 425 resolved cases
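The headline figures above are simple ratios over the examiner's resolved cases. As a quick sanity check, a short script can reproduce them from the raw counts; the counts come from this page, while the helper function itself is ours and purely illustrative:

```python
# Sanity check of the dashboard's headline examiner statistics.
# Counts are taken from the page above; the helper is illustrative only.

def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage, rounded to one decimal place."""
    return round(100.0 * granted / resolved, 1)

rate = allowance_rate(336, 425)  # 336 granted out of 425 resolved cases
print(rate)                      # 79.1, displayed as "79%" in the dashboard

# The "+24.1% vs TC avg" delta implies a Tech Center 2100 average near 55%.
implied_tc_avg = round(rate - 24.1, 1)
print(implied_tc_avg)            # 55.0
```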

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 7, and 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yang et al., US Patent Application Publication No. 2022/0107743 (hereinafter referred to as Yang).

Regarding claim 1, Yang describes an apparatus comprising: an orchestrator (A storage system having one or more tiers of storage resources may partition one or more of the tiers into individual partitions, each of which may be accessed by one of multiple storage clients. In some embodiments a partition manager system in accordance with example embodiments of the disclosure may periodically and/or dynamically adjust the partition size for one or more of the tiers for one or more of the storage clients (page 3, paragraph [0036]).
Some embodiments of partition manager systems and methods in accordance with example embodiments of the disclosure may periodically capture and/or predict I/O changes to adaptively reallocate storage resources, including top-tier resources, as well as using different partitioning methodologies for workloads having different burst levels (page 3, paragraph [0038])), the orchestrator to identify a workload type for a workload and to dynamically assign a portion of a non-volatile cache in a tiered storage for use by the workload based on the workload type (Some embodiments of partition manager systems and methods in accordance with example embodiments of the disclosure may use preknowledge (e.g. prior-knowledge) of clients, for example, from a client pattern library… to perform an initial partition of top-tier cache storage to clients… different clients may be placed into different workload zones based on preknowledge of one or more factors such as workload read ratio, workload working set size, and/or the like… (page 3, paragraph [0040]). 
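To make the claim-1 mapping concrete, the following is a minimal, editor-added sketch of an orchestrator that identifies a workload type and assigns a share of a non-volatile cache tier accordingly. It is not code from the application or from Yang; the workload types, thresholds, and share fractions are invented for illustration.

```python
# Hypothetical orchestrator sketch (editor-added illustration, not from the
# application or from Yang). Workload types, thresholds, and cache-share
# fractions below are invented for illustration only.

CACHE_SHARE_BY_TYPE = {
    "random": 0.40,      # random I/O benefits most from a large cache share
    "sequential": 0.10,  # sequential streams gain little from caching
    "local": 0.30,       # small, hot working sets
}

class Orchestrator:
    """Assigns each workload a portion of a shared non-volatile cache."""

    def __init__(self, nv_cache_bytes: int) -> None:
        self.nv_cache_bytes = nv_cache_bytes
        self.assignments: dict[str, int] = {}

    def identify_workload_type(self, read_ratio: float, working_set_bytes: int) -> str:
        # Crude stand-in for the "preknowledge"-based workload zoning that
        # Yang describes (workload read ratio, working set size, and the like).
        if working_set_bytes < 1 << 20:          # under 1 MiB: treat as local
            return "local"
        return "sequential" if read_ratio >= 0.9 else "random"

    def assign(self, name: str, read_ratio: float, working_set_bytes: int) -> int:
        wtype = self.identify_workload_type(read_ratio, working_set_bytes)
        share = int(self.nv_cache_bytes * CACHE_SHARE_BY_TYPE[wtype])
        self.assignments[name] = share
        return share
```

Under these made-up fractions, a 1 GiB cache and a mixed read/write workload with a 1 GiB working set would be zoned "random" and assigned 40% of the cache.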
Some embodiments may provide a partition optimization framework that may take into consideration various factors for some or all storage clients such as changes in workload during a recent workload monitoring window, the weight of one or more clients (e.g., base on QoS, SLAs and/or the like), the estimated hit ratio that may be expected if a partition size is increased or decreased, and/or the like (page 4, paragraph [0042])), the tiered storage including the non-volatile cache and a storage device (Tier 1 may be implemented with storage devices based on persistent memory such as cross-gridded nonvolatile memory with bulk resistance change which may provide very high performance… Tier 2 may be implemented with single-level cell (SLC) NVMe SSDs which may provide high performance but not as high as Tier 1 (page 8, paragraph [0088])), the non-volatile cache to cache data for the workload to be written to the storage device (Tier 1 may operate, for example, as a storage cache for one or more of the other tiers… (page 8, paragraph [0088])).

Regarding claim 7, Yang describes one or more non-transitory machine-readable storage media comprising a plurality of instructions stored thereon that (The architecture illustrated in Fig. 1 may represent hardware, software, workflow, and/or any combination thereof (page 4, paragraph [0050]).
The logic 140 and 142, as well as any of the methods, techniques, processes, and/or the like described herein, may be implemented with hardware, software, or any combination thereof… complex instruction set computer (CISC) processors and/or reduced instruction set computer (RISC) processors, and/or the like executing instructions stored in volatile memories… nonvolatile memory… as well as graphics processing units (GPUs), neural processing units (NPUs), and/or the like (page 4, paragraph [0055])), when executed by a compute device cause the compute device to: cache data for a workload to be written to a non-volatile cache in a tiered storage (Tier 1 may operate, for example, as a storage cache for one or more of the other tiers… (page 8, paragraph [0088])), the tiered storage including the non-volatile cache and a storage device (Tier 1 may be implemented with storage devices based on persistent memory such as cross-gridded nonvolatile memory with bulk resistance change which may provide very high performance… Tier 2 may be implemented with single-level cell (SLC) NVMe SSDs which may provide high performance but not as high as Tier 1 (page 8, paragraph [0088])); identify a workload type for the workload (Some embodiments of partition manager systems and methods in accordance with example embodiments of the disclosure may use preknowledge (e.g. prior-knowledge) of clients, for example, from a client pattern library… to perform an initial partition of top-tier cache storage to clients… different clients may be placed into different workload zones based on preknowledge of one or more factors such as workload read ratio, workload working set size, and/or the like… (page 3, paragraph [0040]). 
Some embodiments may provide a partition optimization framework that may take into consideration various factors for some or all storage clients such as changes in workload during a recent workload monitoring window, the weight of one or more clients (e.g., base on QoS, SLAs and/or the like), the estimated hit ratio that may be expected if a partition size is increased or decreased, and/or the like (page 4, paragraph [0042])); and dynamically assign a portion of the non-volatile cache for use by the workload based on the workload type (Some embodiments of partition manager systems and methods in accordance with example embodiments of the disclosure may use preknowledge (e.g. prior-knowledge) of clients, for example, from a client pattern library… to perform an initial partition of top-tier cache storage to clients… different clients may be placed into different workload zones based on preknowledge of one or more factors such as workload read ratio, workload working set size, and/or the like… (page 3, paragraph [0040]). Some embodiments may provide a partition optimization framework that may take into consideration various factors for some or all storage clients such as changes in workload during a recent workload monitoring window, the weight of one or more clients (e.g., base on QoS, SLAs and/or the like), the estimated hit ratio that may be expected if a partition size is increased or decreased, and/or the like (page 4, paragraph [0042])). 
Regarding claim 13, Yang describes a system comprising: a compute node, the compute node comprising a processor (The logic 140 and 142, as well as any of the methods, techniques, processes, and/or the like described herein, may be implemented with hardware, software, or any combination thereof… complex instruction set computer (CISC) processors and/or reduced instruction set computer (RISC) processors, and/or the like executing instructions stored in volatile memories… nonvolatile memory… as well as graphics processing units (GPUs), neural processing units (NPUs), and/or the like (page 4, paragraph [0055])); and an orchestrator (A storage system having one or more tiers of storage resources may partition one or more of the tiers into individual partitions, each of which may be accessed by one of multiple storage clients. In some embodiments a partition manager system in accordance with example embodiments of the disclosure may periodically and/or dynamically adjust the partition size for one or more of the tiers for one or more of the storage clients (page 3, paragraph [0036]). Some embodiments of partition manager systems and methods in accordance with example embodiments of the disclosure may periodically capture and/or predict I/O changes to adaptively reallocate storage resources, including top-tier resources, as well as using different partitioning methodologies for workloads having different burst levels (page 3, paragraph [0038])), the orchestrator to identify a workload type for a workload and to dynamically assign a portion of a non-volatile cache in a tiered storage for use by the workload in the compute node based on the workload type (Some embodiments of partition manager systems and methods in accordance with example embodiments of the disclosure may use preknowledge (e.g.
prior-knowledge) of clients, for example, from a client pattern library… to perform an initial partition of top-tier cache storage to clients… different clients may be placed into different workload zones based on preknowledge of one or more factors such as workload read ratio, workload working set size, and/or the like… (page 3, paragraph [0040]). Some embodiments may provide a partition optimization framework that may take into consideration various factors for some or all storage clients such as changes in workload during a recent workload monitoring window, the weight of one or more clients (e.g., base on QoS, SLAs and/or the like), the estimated hit ratio that may be expected if a partition size is increased or decreased, and/or the like (page 4, paragraph [0042])), the tiered storage including the non-volatile cache and a storage device (Tier 1 may be implemented with storage devices based on persistent memory such as cross-gridded nonvolatile memory with bulk resistance change which may provide very high performance… Tier 2 may be implemented with single-level cell (SLC) NVMe SSDs which may provide high performance but not as high as Tier 1 (page 8, paragraph [0088])), the non-volatile cache to cache data for the workload to be written to the storage device (Tier 1 may operate, for example, as a storage cache for one or more of the other tiers… (page 8, paragraph [0088])).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2, 8 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Hahn et al., US Patent Application Publication No. 2016/0026406 (hereinafter referred to as Hahn).

Regarding claim 2, Yang describes the apparatus of claim 1 (see above). Yang discloses that re-partitioning may improve or optimize the overall performance and that in some embodiments, a partition manager system may provide automated re-partitioning decision making and/or operations based on one or more factors such as runtime workload analysis (page 3, paragraph [0036]). However, Yang does not specifically describe wherein the workload type is sequential, the orchestrator to request a reduction in the portion of the non-volatile cache assigned for the workload. Hahn describes a system for allocating an amount of host memory as a host memory buffer accessible by a solid state drive (SSD) as a cache for the SSD.
Specifically, it is desirable to manage the size of host cache 112, also referred to herein as the host memory buffer, in a manner that balances the needs of storage device 104 and host 102 and that does not adversely affect the overall performance of host 102. To manage the size of host cache 112, a workload analyzer 140 located on host 102 analyzes the current workload on host cache 112. If workload analyzer 140 determines that the current workload on host cache 112 is not random I/O intensive, i.e., the I/O is primarily sequential, workload analyzer 140 may then determine whether the workload is CPU intensive. An example of a CPU intensive workload may be a read to cache 112 followed by a number of processing cycles that do not involve reads to host cache 112. If the CPU is reading data from host cache 112 and not frequently accessing host cache 112, then it may be desirable to decrease the size of host cache 112 to allow RAM 110 to be used by other applications executing on CPU 108 that do not involve storage device 104. Thus, workload analyzer 140 analyzes the current workload on CPU 108 and/or host cache 112. HMB manager 142 increases and decreases the size of host cache 112 based on input from workload analyzer 140 (page 3, paragraph [0027]). Returning to step 410, if the current workload is not random I/O intensive, i.e., the current workload is primarily sequential accesses to host cache 112, control proceeds to step 416 where it is determined whether the current workload is CPU intensive. If the current workload is determined to be CPU intensive, this means that the CPU is accessing host cache or memory buffer 112 during one or more cycles and then spending subsequent cycles processing data read from host cache or memory buffer 112. If this is true, host cache or memory buffer 112 may be under-utilized. Accordingly, in step 418, workload analyzer 140 instructs HMB manager 142 to reduce the size host cache or memory buffer 112. 
Control then returns to step 408 where the current workload on host cache 112 is re-analyzed. Thus, using the steps illustrated in FIG. 4, the size of host cache or memory buffer 112 may be dynamically and continually updated based on CPU utilization and access to host cache or memory buffer 112 (page 5, paragraph [0058]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Hahn teachings in the Yang system. A skilled artisan would have been motivated to incorporate the method of reducing cache size in response to determining a sequential workload as taught by Hahn in the Yang system for effectively reducing the size of an under-utilized memory. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as cache management via workload analysis. This close relation between both of the references highly suggests an expectation of success.

Regarding claim 8, Yang describes the one or more non-transitory machine-readable storage media of claim 7 (see above). Yang discloses that re-partitioning may improve or optimize the overall performance and that in some embodiments, a partition manager system may provide automated re-partitioning decision making and/or operations based on one or more factors such as runtime workload analysis (page 3, paragraph [0036]). However, Yang does not specifically describe wherein the workload type is sequential, the compute device to request a reduction in the portion of the non-volatile cache assigned for the workload. Hahn describes a system for allocating an amount of host memory as a host memory buffer accessible by a solid state drive (SSD) as a cache for the SSD.
Specifically, it is desirable to manage the size of host cache 112, also referred to herein as the host memory buffer, in a manner that balances the needs of storage device 104 and host 102 and that does not adversely affect the overall performance of host 102. To manage the size of host cache 112, a workload analyzer 140 located on host 102 analyzes the current workload on host cache 112. If workload analyzer 140 determines that the current workload on host cache 112 is not random I/O intensive, i.e., the I/O is primarily sequential, workload analyzer 140 may then determine whether the workload is CPU intensive. An example of a CPU intensive workload may be a read to cache 112 followed by a number of processing cycles that do not involve reads to host cache 112. If the CPU is reading data from host cache 112 and not frequently accessing host cache 112, then it may be desirable to decrease the size of host cache 112 to allow RAM 110 to be used by other applications executing on CPU 108 that do not involve storage device 104. Thus, workload analyzer 140 analyzes the current workload on CPU 108 and/or host cache 112. HMB manager 142 increases and decreases the size of host cache 112 based on input from workload analyzer 140 (page 3, paragraph [0027]). Returning to step 410, if the current workload is not random I/O intensive, i.e., the current workload is primarily sequential accesses to host cache 112, control proceeds to step 416 where it is determined whether the current workload is CPU intensive. If the current workload is determined to be CPU intensive, this means that the CPU is accessing host cache or memory buffer 112 during one or more cycles and then spending subsequent cycles processing data read from host cache or memory buffer 112. If this is true, host cache or memory buffer 112 may be under-utilized. Accordingly, in step 418, workload analyzer 140 instructs HMB manager 142 to reduce the size host cache or memory buffer 112. 
Control then returns to step 408 where the current workload on host cache 112 is re-analyzed. Thus, using the steps illustrated in FIG. 4, the size of host cache or memory buffer 112 may be dynamically and continually updated based on CPU utilization and access to host cache or memory buffer 112 (page 5, paragraph [0058]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Hahn teachings in the Yang system. A skilled artisan would have been motivated to incorporate the method of reducing cache size in response to determining a sequential workload as taught by Hahn in the Yang system for effectively reducing the size of an under-utilized memory. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as cache management via workload analysis. This close relation between both of the references highly suggests an expectation of success.

Regarding claim 14, Yang describes the system of claim 13 (see above). Yang discloses that re-partitioning may improve or optimize the overall performance and that in some embodiments, a partition manager system may provide automated re-partitioning decision making and/or operations based on one or more factors such as runtime workload analysis (page 3, paragraph [0036]). However, Yang does not specifically describe wherein the workload type is sequential, the orchestrator to request a reduction in the portion of the non-volatile cache assigned for the workload. Hahn describes a system for allocating an amount of host memory as a host memory buffer accessible by a solid state drive (SSD) as a cache for the SSD.
Specifically, it is desirable to manage the size of host cache 112, also referred to herein as the host memory buffer, in a manner that balances the needs of storage device 104 and host 102 and that does not adversely affect the overall performance of host 102. To manage the size of host cache 112, a workload analyzer 140 located on host 102 analyzes the current workload on host cache 112. If workload analyzer 140 determines that the current workload on host cache 112 is not random I/O intensive, i.e., the I/O is primarily sequential, workload analyzer 140 may then determine whether the workload is CPU intensive. An example of a CPU intensive workload may be a read to cache 112 followed by a number of processing cycles that do not involve reads to host cache 112. If the CPU is reading data from host cache 112 and not frequently accessing host cache 112, then it may be desirable to decrease the size of host cache 112 to allow RAM 110 to be used by other applications executing on CPU 108 that do not involve storage device 104. Thus, workload analyzer 140 analyzes the current workload on CPU 108 and/or host cache 112. HMB manager 142 increases and decreases the size of host cache 112 based on input from workload analyzer 140 (page 3, paragraph [0027]). Returning to step 410, if the current workload is not random I/O intensive, i.e., the current workload is primarily sequential accesses to host cache 112, control proceeds to step 416 where it is determined whether the current workload is CPU intensive. If the current workload is determined to be CPU intensive, this means that the CPU is accessing host cache or memory buffer 112 during one or more cycles and then spending subsequent cycles processing data read from host cache or memory buffer 112. If this is true, host cache or memory buffer 112 may be under-utilized. Accordingly, in step 418, workload analyzer 140 instructs HMB manager 142 to reduce the size host cache or memory buffer 112. 
Control then returns to step 408 where the current workload on host cache 112 is re-analyzed. Thus, using the steps illustrated in FIG. 4, the size of host cache or memory buffer 112 may be dynamically and continually updated based on CPU utilization and access to host cache or memory buffer 112 (page 5, paragraph [0058]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Hahn teachings in the Yang system. A skilled artisan would have been motivated to incorporate the method of reducing cache size in response to determining a sequential workload as taught by Hahn in the Yang system for effectively reducing the size of an under-utilized memory. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as cache management via workload analysis. This close relation between both of the references highly suggests an expectation of success.

Claims 3, 9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Barbalho et al., US Patent Application Publication No. 2021/0157725 (hereinafter referred to as Barbalho).

Regarding claim 3, Yang describes the apparatus of claim 1 (see above). Yang discloses that re-partitioning may improve or optimize the overall performance and that in some embodiments, a partition manager system may provide automated re-partitioning decision making and/or operations based on one or more factors such as runtime workload analysis (page 3, paragraph [0036]). However, Yang does not specifically describe wherein the workload type is random, the orchestrator to request a reduction in the portion of the non-volatile cache assigned for the workload. Barbalho describes a method for dynamically adapting cache size.
Specifically, it is disclosed that for example, if the IOs 132 on the cache partition 130 are primarily random, and the IOs 136 on the cache partition 134 are primarily sequential, increasing the size of partition 134 may greatly increase the hit rate on partition 134 while not significantly reducing the hit rate on cache partition 130. Doing so requires the cache management system 128 to estimate any performance loss associated with reducing the size of the first cache partition 130 and estimate any performance increase associated with increasing the size of the second cache partition 134 (page 3, paragraph [0032]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Barbalho teachings in the Yang system. A skilled artisan would have been motivated to incorporate the method of reducing cache size of a partition in response to determining a random workload in said partition as taught by Barbalho in the Yang system for effectively freeing up space that can then be used to increase the size of another partition that would benefit from said increase. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as cache management via workload analysis. This close relation between both of the references highly suggests an expectation of success.

Regarding claim 9, Yang describes the one or more non-transitory machine-readable storage media of claim 7 (see above). Yang discloses that re-partitioning may improve or optimize the overall performance and that in some embodiments, a partition manager system may provide automated re-partitioning decision making and/or operations based on one or more factors such as runtime workload analysis (page 3, paragraph [0036]).
However, Yang does not specifically describe wherein the workload type is random, the compute device to request a reduction in the portion of the non-volatile cache assigned for the workload. Barbalho describes a method for dynamically adapting cache size. Specifically, it is disclosed that for example, if the IOs 132 on the cache partition 130 are primarily random, and the IOs 136 on the cache partition 134 are primarily sequential, increasing the size of partition 134 may greatly increase the hit rate on partition 134 while not significantly reducing the hit rate on cache partition 130. Doing so requires the cache management system 128 to estimate any performance loss associated with reducing the size of the first cache partition 130 and estimate any performance increase associated with increasing the size of the second cache partition 134 (page 3, paragraph [0032]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Barbalho teachings in the Yang system. A skilled artisan would have been motivated to incorporate the method of reducing cache size of a partition in response to determining a random workload in said partition as taught by Barbalho in the Yang system for effectively freeing up space that can then be used to increase the size of another partition that would benefit from said increase. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as cache management via workload analysis. This close relation between both of the references highly suggests an expectation of success.

Regarding claim 15, Yang describes the system of claim 13 (see above).
Yang discloses that re-partitioning may improve or optimize the overall performance and that in some embodiments, a partition manager system may provide automated re-partitioning decision making and/or operations based on one or more factors such as runtime workload analysis (page 3, paragraph [0036]). However, Yang does not specifically describe wherein the workload type is random, the orchestrator to request a reduction in the portion of the non-volatile cache assigned for the workload. Barbalho describes a method for dynamically adapting cache size. Specifically, it is disclosed that for example, if the IOs 132 on the cache partition 130 are primarily random, and the IOs 136 on the cache partition 134 are primarily sequential, increasing the size of partition 134 may greatly increase the hit rate on partition 134 while not significantly reducing the hit rate on cache partition 130. Doing so requires the cache management system 128 to estimate any performance loss associated with reducing the size of the first cache partition 130 and estimate any performance increase associated with increasing the size of the second cache partition 134 (page 3, paragraph [0032]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Barbalho teachings in the Yang system. A skilled artisan would have been motivated to incorporate the method of reducing cache size of a partition in response to determining a random workload in said partition as taught by Barbalho in the Yang system for effectively freeing up space that can then be used to increase the size of another partition that would benefit from said increase. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as cache management via workload analysis.
This close relation between both of the references highly suggests an expectation of success.

Claims 4, 10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Hodes et al., US Patent Application Publication No. 2019/0377681 (hereinafter referred to as Hodes).

Regarding claim 4, Yang describes the apparatus of claim 1 (see above). Yang discloses that re-partitioning may improve or optimize the overall performance and that in some embodiments, a partition manager system may provide automated re-partitioning decision making and/or operations based on one or more factors such as runtime workload analysis (page 3, paragraph [0036]). However, Yang does not specifically describe wherein the workload type is local, the orchestrator to request an increase of the portion of the non-volatile cache assigned for the workload. Hodes describes a method for workload based dynamic cache control in an SSD. Specifically, the NVM may be partitioned into a cache partition and a storage partition, and the respective sizes of the partitions may be dynamically changed based on a locality of data (LOD) of the access pattern of the NVM (page 2, paragraph [0018]). The LOD determination circuitry 132 may be configured to determine a LOD of an access pattern of the NVM 114. For example, the LOD may indicate a logical block address (LBA) range with a predetermined hit and miss (H/M) rate by the host within a certain time period. An exemplary method for determining the LOD is described in relation to FIG. 7 below. In some embodiments, other methods may be used to determine the LOD. The NVM partition circuitry 133 may be configured to dynamically configure data storage cells of the NVM 114 into a cache partition and a storage partition based on the LOD. In some embodiments, the data storage cells of the storage partition are configured to store a greater number of bits per cell than the data storage cells of the cache partition (page 3, paragraph [0027]).
A larger LOD indicates that the access pattern is spread widely across the NVM. To the contrary, a smaller LOD indicates that the access pattern is restricted to a smaller LBA range (page 3, paragraph [0031]). Based on the LOD of the NVM 400, the controller may dynamically resize these partitions. In one example, when the LOD increases, the controller may increase the cache partition 402 to a larger cache partition 406 and decrease the storage partition 404 to a smaller storage partition 408 (page 4, paragraph [0039]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Hodes teachings in the Yang system. A skilled artisan would have been motivated to incorporate the method of increasing a cache partition in response to determining an increase in the locality of data, as taught by Hodes, in the Yang system for effectively enhancing the caching ability when the locality of data is high. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as cache management via workload analysis. This close relation between both of the references highly suggests an expectation of success.

Regarding claim 10, Yang describes the one or more non-transitory machine-readable storage media of claim 7 (see above). Yang discloses that re-partitioning may improve or optimize the overall performance and that in some embodiments, a partition manager system may provide automated re-partitioning decision making and/or operations based on one or more factors such as runtime workload analysis (page 3, paragraph [0036]).

However, Yang does not specifically describe wherein the workload type is local, the compute device to request an increase of the portion of the non-volatile cache assigned for the workload.

Hodes describes a method for workload based dynamic cache control in an SSD.
Specifically, the NVM may be partitioned into a cache partition and a storage partition, and the respective sizes of the partitions may be dynamically changed based on a locality of data (LOD) of the access pattern of the NVM (page 2, paragraph [0018]). The LOD determination circuitry 132 may be configured to determine a LOD of an access pattern of the NVM 114. For example, the LOD may indicate a logical block address (LBA) range with a predetermined hit and miss (H/M) rate by the host within a certain time period. An exemplary method for determining the LOD is described in relation to FIG. 7 below. In some embodiments, other methods may be used to determine the LOD. The NVM partition circuitry 133 may be configured to dynamically configure data storage cells of the NVM 114 into a cache partition and a storage partition based on the LOD. In some embodiments, the data storage cells of the storage partition are configured to store a greater number of bits per cell than the data storage cells of the cache partition (page 3, paragraph [0027]). A larger LOD indicates that the access pattern is spread widely across the NVM. To the contrary, a smaller LOD indicates that the access pattern is restricted to a smaller LBA range (page 3, paragraph [0031]). Based on the LOD of the NVM 400, the controller may dynamically resize these partitions. In one example, when the LOD increases, the controller may increase the cache partition 402 to a larger cache partition 406 and decrease the storage partition 404 to a smaller storage partition 408 (page 4, paragraph [0039]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Hodes teachings in the Yang system.
A skilled artisan would have been motivated to incorporate the method of increasing a cache partition in response to determining an increase in the locality of data, as taught by Hodes, in the Yang system for effectively enhancing the caching ability when the locality of data is high. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as cache management via workload analysis. This close relation between both of the references highly suggests an expectation of success.

Regarding claim 16, Yang describes the system of claim 13 (see above). Yang discloses that re-partitioning may improve or optimize the overall performance and that in some embodiments, a partition manager system may provide automated re-partitioning decision making and/or operations based on one or more factors such as runtime workload analysis (page 3, paragraph [0036]).

However, Yang does not specifically describe wherein the workload type is local, the orchestrator to request an increase of the portion of the non-volatile cache assigned for the workload.

Hodes describes a method for workload based dynamic cache control in an SSD. Specifically, the NVM may be partitioned into a cache partition and a storage partition, and the respective sizes of the partitions may be dynamically changed based on a locality of data (LOD) of the access pattern of the NVM (page 2, paragraph [0018]). The LOD determination circuitry 132 may be configured to determine a LOD of an access pattern of the NVM 114. For example, the LOD may indicate a logical block address (LBA) range with a predetermined hit and miss (H/M) rate by the host within a certain time period. An exemplary method for determining the LOD is described in relation to FIG. 7 below. In some embodiments, other methods may be used to determine the LOD.
The NVM partition circuitry 133 may be configured to dynamically configure data storage cells of the NVM 114 into a cache partition and a storage partition based on the LOD. In some embodiments, the data storage cells of the storage partition are configured to store a greater number of bits per cell than the data storage cells of the cache partition (page 3, paragraph [0027]). A larger LOD indicates that the access pattern is spread widely across the NVM. To the contrary, a smaller LOD indicates that the access pattern is restricted to a smaller LBA range (page 3, paragraph [0031]). Based on the LOD of the NVM 400, the controller may dynamically resize these partitions. In one example, when the LOD increases, the controller may increase the cache partition 402 to a larger cache partition 406 and decrease the storage partition 404 to a smaller storage partition 408 (page 4, paragraph [0039]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Hodes teachings in the Yang system. A skilled artisan would have been motivated to incorporate the method of increasing a cache partition in response to determining an increase in the locality of data, as taught by Hodes, in the Yang system for effectively enhancing the caching ability when the locality of data is high. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as cache management via workload analysis. This close relation between both of the references highly suggests an expectation of success.

Claims 5-6, 11-12, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Wysocki et al., US Patent Application Publication No. 2019/0042413 (hereinafter referred to as Wysocki).

Regarding claim 5, Yang describes the apparatus of claim 1 (see above).
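The LOD-driven resizing quoted from Hodes above (paragraphs [0031] and [0039]) amounts to a simple control rule: when the locality-of-data metric grows, enlarge the cache partition at the expense of the storage partition, and vice versa. A minimal sketch, with hypothetical names and an arbitrary fixed step size:

```python
# Illustrative only: not code from Hodes. 'lod' stands in for whatever
# locality-of-data metric the controller computes (e.g., width of the
# LBA range with a given hit/miss rate over a time window).
def resize_for_lod(cache_mib, storage_mib, lod, prev_lod, step=128):
    if lod > prev_lod:          # access pattern spreading wider:
        cache_mib += step       # grow the cache partition...
        storage_mib -= step     # ...and shrink the storage partition
    elif lod < prev_lod:        # pattern narrowing: reclaim cache space
        cache_mib -= step
        storage_mib += step
    return cache_mib, storage_mib
```

A real controller would bound the sizes and pick the step from measured workload behavior rather than a constant; the sketch only shows the direction of the adjustment.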
Yang discloses that tier 1 may be implemented with storage devices based on persistent memory such as cross-gridded nonvolatile memory and that Tier 2 may be implemented with single-level cell (SLC) NVMe SSDs (page 8, paragraph [0088]).

However, Yang does not specifically describe wherein the non-volatile cache is a byte-addressable, write-in-place non-volatile memory and the storage device is a solid state drive comprising a block addressable memory device.

Wysocki describes a storage system. Specifically, the storage region write-back cache 132 is a portion of external memory 126 which may be byte addressable write-in-place non-volatile memory (page 3, paragraph [0028]). Furthermore, a non-volatile memory (NVM) device is a memory whose state is determinate even if power is interrupted to the device. In one embodiment, the NVM device can comprise a block addressable mode memory device, such as NAND or NOR technologies, or more specifically, multi-threshold level NAND flash memory (for example, Single-Level Cell (“SLC”), Multi-Level Cell (“MLC”), Quad-Level Cell (“QLC”), Tri-Level Cell (“TLC”), or some other NAND) (page 2, paragraph [0024]). The storage region write-back cache 132 stores data to be written to non-volatile memory 122 in the SSD 118 (page 2, paragraph [0027]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Wysocki teachings in the Yang system. A skilled artisan would have been motivated to incorporate the method of using a byte-addressable non-volatile cache memory with a block addressable non-volatile memory SSD, as taught by Wysocki, in the Yang system since Wysocki clearly shows that non-volatile memory may be byte-addressable and/or block-addressable. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as caching for an SSD.
This close relation between both of the references highly suggests an expectation of success.

Regarding claim 6, Yang describes the apparatus of claim 1 (see above). Yang discloses that Tier 2 may be implemented with single-level cell (SLC) NVMe SSDs and Tier 3 may be implemented with multi-level cell (MLC) NVMe SSDs. Tier 2 may operate as a cache for Tier 3 (page 8, paragraph [0088]).

However, Yang does not specifically describe wherein the non-volatile cache is a solid state drive with byte-addressable, write-in-place non-volatile memory and the storage device is a second solid state drive comprising a block addressable memory device.

Wysocki describes a storage system. Specifically, the storage region write-back cache 132 may be an SSD that includes byte addressable write-in-place non-volatile memory and an NVMe over PCIe interface (page 3, paragraph [0028]). Furthermore, a non-volatile memory (NVM) device is a memory whose state is determinate even if power is interrupted to the device. In one embodiment, the NVM device can comprise a block addressable mode memory device, such as NAND or NOR technologies, or more specifically, multi-threshold level NAND flash memory (for example, Single-Level Cell (“SLC”), Multi-Level Cell (“MLC”), Quad-Level Cell (“QLC”), Tri-Level Cell (“TLC”), or some other NAND) (page 2, paragraph [0024]). The storage region write-back cache 132 stores data to be written to non-volatile memory 122 in the SSD 118 (page 2, paragraph [0027]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Wysocki teachings in the Yang system.
A skilled artisan would have been motivated to incorporate the method of using a byte-addressable non-volatile memory SSD as a cache with a block addressable non-volatile memory SSD, as taught by Wysocki, in the Yang system since Wysocki clearly shows that non-volatile memory may be byte-addressable and/or block-addressable. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as caching for an SSD. This close relation between both of the references highly suggests an expectation of success.

Regarding claim 11, Yang describes the one or more non-transitory machine-readable storage media of claim 7 (see above). Yang discloses that tier 1 may be implemented with storage devices based on persistent memory such as cross-gridded nonvolatile memory and that Tier 2 may be implemented with single-level cell (SLC) NVMe SSDs (page 8, paragraph [0088]).

However, Yang does not specifically describe wherein the non-volatile cache is a byte-addressable, write-in-place non-volatile memory and the storage device is a solid state drive comprising a block addressable memory device.

Wysocki describes a storage system. Specifically, the storage region write-back cache 132 is a portion of external memory 126 which may be byte addressable write-in-place non-volatile memory (page 3, paragraph [0028]). Furthermore, a non-volatile memory (NVM) device is a memory whose state is determinate even if power is interrupted to the device. In one embodiment, the NVM device can comprise a block addressable mode memory device, such as NAND or NOR technologies, or more specifically, multi-threshold level NAND flash memory (for example, Single-Level Cell (“SLC”), Multi-Level Cell (“MLC”), Quad-Level Cell (“QLC”), Tri-Level Cell (“TLC”), or some other NAND) (page 2, paragraph [0024]). The storage region write-back cache 132 stores data to be written to non-volatile memory 122 in the SSD 118 (page 2, paragraph [0027]).
Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Wysocki teachings in the Yang system. A skilled artisan would have been motivated to incorporate the method of using a byte-addressable non-volatile cache memory with a block addressable non-volatile memory SSD, as taught by Wysocki, in the Yang system since Wysocki clearly shows that non-volatile memory may be byte-addressable and/or block-addressable. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as caching for an SSD. This close relation between both of the references highly suggests an expectation of success.

Regarding claim 12, Yang describes the one or more non-transitory machine-readable storage media of claim 7 (see above). Yang discloses that Tier 2 may be implemented with single-level cell (SLC) NVMe SSDs and Tier 3 may be implemented with multi-level cell (MLC) NVMe SSDs. Tier 2 may operate as a cache for Tier 3 (page 8, paragraph [0088]).

However, Yang does not specifically describe wherein the non-volatile cache is a solid state drive with byte-addressable, write-in-place non-volatile memory and the storage device is a second solid state drive comprising a block addressable memory device.

Wysocki describes a storage system. Specifically, the storage region write-back cache 132 may be an SSD that includes byte addressable write-in-place non-volatile memory and an NVMe over PCIe interface (page 3, paragraph [0028]). Furthermore, a non-volatile memory (NVM) device is a memory whose state is determinate even if power is interrupted to the device.
In one embodiment, the NVM device can comprise a block addressable mode memory device, such as NAND or NOR technologies, or more specifically, multi-threshold level NAND flash memory (for example, Single-Level Cell (“SLC”), Multi-Level Cell (“MLC”), Quad-Level Cell (“QLC”), Tri-Level Cell (“TLC”), or some other NAND) (page 2, paragraph [0024]). The storage region write-back cache 132 stores data to be written to non-volatile memory 122 in the SSD 118 (page 2, paragraph [0027]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Wysocki teachings in the Yang system. A skilled artisan would have been motivated to incorporate the method of using a byte-addressable non-volatile memory SSD as a cache with a block addressable non-volatile memory SSD, as taught by Wysocki, in the Yang system since Wysocki clearly shows that non-volatile memory may be byte-addressable and/or block-addressable. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as caching for an SSD. This close relation between both of the references highly suggests an expectation of success.

Regarding claim 17, Yang describes the system of claim 13 (see above). Yang discloses that tier 1 may be implemented with storage devices based on persistent memory such as cross-gridded nonvolatile memory and that Tier 2 may be implemented with single-level cell (SLC) NVMe SSDs (page 8, paragraph [0088]).

However, Yang does not specifically describe wherein the non-volatile cache is a byte-addressable, write-in-place non-volatile memory and the storage device is a solid state drive comprising a block addressable memory device.

Wysocki describes a storage system.
Specifically, the storage region write-back cache 132 is a portion of external memory 126 which may be byte addressable write-in-place non-volatile memory (page 3, paragraph [0028]). Furthermore, a non-volatile memory (NVM) device is a memory whose state is determinate even if power is interrupted to the device. In one embodiment, the NVM device can comprise a block addressable mode memory device, such as NAND or NOR technologies, or more specifically, multi-threshold level NAND flash memory (for example, Single-Level Cell (“SLC”), Multi-Level Cell (“MLC”), Quad-Level Cell (“QLC”), Tri-Level Cell (“TLC”), or some other NAND) (page 2, paragraph [0024]). The storage region write-back cache 132 stores data to be written to non-volatile memory 122 in the SSD 118 (page 2, paragraph [0027]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Wysocki teachings in the Yang system. A skilled artisan would have been motivated to incorporate the method of using a byte-addressable non-volatile cache memory with a block addressable non-volatile memory SSD, as taught by Wysocki, in the Yang system since Wysocki clearly shows that non-volatile memory may be byte-addressable and/or block-addressable. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as caching for an SSD. This close relation between both of the references highly suggests an expectation of success.

Regarding claim 18, Yang describes the system of claim 13 (see above). Yang discloses that Tier 2 may be implemented with single-level cell (SLC) NVMe SSDs and Tier 3 may be implemented with multi-level cell (MLC) NVMe SSDs. Tier 2 may operate as a cache for Tier 3 (page 8, paragraph [0088]).
However, Yang does not specifically describe wherein the non-volatile cache is a solid state drive with byte-addressable, write-in-place non-volatile memory and the storage device is a second solid state drive comprising a block addressable memory device.

Wysocki describes a storage system. Specifically, the storage region write-back cache 132 may be an SSD that includes byte addressable write-in-place non-volatile memory and an NVMe over PCIe interface (page 3, paragraph [0028]). Furthermore, a non-volatile memory (NVM) device is a memory whose state is determinate even if power is interrupted to the device. In one embodiment, the NVM device can comprise a block addressable mode memory device, such as NAND or NOR technologies, or more specifically, multi-threshold level NAND flash memory (for example, Single-Level Cell (“SLC”), Multi-Level Cell (“MLC”), Quad-Level Cell (“QLC”), Tri-Level Cell (“TLC”), or some other NAND) (page 2, paragraph [0024]). The storage region write-back cache 132 stores data to be written to non-volatile memory 122 in the SSD 118 (page 2, paragraph [0027]).

Therefore, it would have been obvious to a person of ordinary skill in the computer art before the effective filing date of the claimed invention to incorporate the Wysocki teachings in the Yang system. A skilled artisan would have been motivated to incorporate the method of using a byte-addressable non-volatile memory SSD as a cache with a block addressable non-volatile memory SSD, as taught by Wysocki, in the Yang system since Wysocki clearly shows that non-volatile memory may be byte-addressable and/or block-addressable. In addition, both of the references teach features that are directed to analogous art and they are directed to the same field of endeavor, such as caching for an SSD. This close relation between both of the references highly suggests an expectation of success.
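The tiered arrangement the Wysocki rejections turn on, a byte-addressable, write-in-place non-volatile cache absorbing small writes in front of a block-addressable SSD that accepts only whole blocks, can be illustrated with a short sketch. All class and method names here are invented for illustration; this is not code from any cited reference.

```python
# Hypothetical write-back tier: byte-granular updates land in the NV
# cache in place; a flush assembles dirty bytes into whole blocks
# before writing them to the block-addressable SSD.
BLOCK = 4096  # bytes; a typical SSD block granularity (assumption)

class WriteBackCache:
    def __init__(self, ssd_blocks):
        self.cache = {}        # byte offset -> byte value (write-in-place)
        self.ssd = ssd_blocks  # block number -> bytes (block-addressable)

    def write(self, offset, data: bytes):
        # A byte-granular update goes to the NV cache, not the SSD.
        for i, b in enumerate(data):
            self.cache[offset + i] = b

    def flush(self):
        # Group dirty bytes into whole blocks, then write each block.
        for blk in {off // BLOCK for off in self.cache}:
            buf = bytearray(self.ssd.get(blk, bytes(BLOCK)))
            for off, b in list(self.cache.items()):
                if off // BLOCK == blk:
                    buf[off % BLOCK] = b
                    del self.cache[off]
            self.ssd[blk] = bytes(buf)
```

The point of the sketch is the asymmetry the claims recite: the cache tier can be updated at byte granularity in place, while the backing SSD is only ever written one full block at a time.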
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Gu et al., US Patent Application Publication No. 2021/0326255, describes dynamic cache size management of a multi-tenant caching system including a pattern classifier that receives sample data from a profiler which is categorized into one of several distributions used to predict cache performance.

Roberts et al., US Patent Application Publication No. 2021/0365376, describes adjusting cache-line parameters responsive to identified workload.

Shatsky et al., US Patent Application Publication No. 2022/0129380, describes optimizing volume tiers for designated types of workloads.

Ainscow et al., US Patent Application Publication No. 2019/0339903, describes a system in which IO workload type is considered while performing tiering decisions.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RALPH A VERDERAMO III whose telephone number is (571) 270-1174. The examiner can normally be reached Monday through Friday, 8:30 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon, can be reached at (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RALPH A VERDERAMO III/
Examiner, Art Unit 2139

/REGINALD G BRAGDON/
Supervisory Patent Examiner, Art Unit 2139

rv
February 21, 2026

Prosecution Timeline

Dec 28, 2022
Application Filed
Feb 16, 2023
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602182
USER CONFIGURABLE SLC MEMORY SIZE
2y 5m to grant Granted Apr 14, 2026
Patent 12585779
SECURE PROGRAMMING OF ONE-TIME-PROGRAMMABLE (OTP) MEMORY
2y 5m to grant Granted Mar 24, 2026
Patent 12578881
SYSTEMS AND METHODS FOR USING DISTRIBUTED MEMORY CONFIGURATION BITS IN ARTIFICIAL NEURAL NETWORKS
2y 5m to grant Granted Mar 17, 2026
Patent 12578877
STORING SENSITIVE DATA SECURELY IN A MULTI-CLOUD ENVIRONMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12554653
STORAGE DEVICE CACHE SYSTEM WITH MACHINE LEARNING
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
89%
With Interview (+10.1%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 425 resolved cases by this examiner. Grant probability derived from career allow rate.
