Prosecution Insights
Last updated: April 19, 2026
Application No. 18/057,628

Adaptive Cache Partitioning
Status: Final Rejection (§103)

Filed: Nov 21, 2022
Examiner: LI, SIDNEY
Art Unit: 2137
Tech Center: 2100 — Computer Architecture & Software
Assignee: Micron Technology, Inc.
OA Round: 4 (Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 8m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 80% (304 granted / 380 resolved); +25.0% vs TC avg (above average)
Interview Lift: +5.9% (moderate), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 8m avg prosecution; 14 applications currently pending
Career History: 394 total applications across all art units
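
For readers who want to sanity-check these figures, the headline numbers follow from simple arithmetic on the reported counts. A minimal sketch in Python, assuming (as the 86% figure suggests) that the interview lift is simply added to the base rate:

```python
# Reproduce the arithmetic behind the examiner statistics above.
granted, resolved = 304, 380

career_allow_rate = granted / resolved               # 0.80 -> 80%
interview_lift = 0.059                               # reported +5.9% lift
with_interview = career_allow_rate + interview_lift  # 0.859, rounds to 86%

print(f"Career allow rate: {career_allow_rate:.1%}")              # 80.0%
print(f"Grant probability with interview: {with_interview:.1%}")  # 85.9%
```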

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      8.4%     -31.6%
§103      48.0%    +8.0%
§102      19.7%    -20.3%
§112      18.5%    -21.5%

Tech Center averages are estimates. Based on career data from 380 resolved cases.
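
A quick consistency check: subtracting each reported delta from its rate recovers the same baseline for all four statutes, which suggests a single Tech Center average estimate of 40.0% underlies the comparison. A sketch:

```python
# Back the Tech Center average out of each reported (rate, delta) pair:
# baseline = rate - delta. All four rows recover the same 40.0% estimate.
rows = {"§101": (8.4, -31.6), "§103": (48.0, +8.0),
        "§102": (19.7, -20.3), "§112": (18.5, -21.5)}

for statute, (rate, delta) in rows.items():
    print(f"{statute}: TC avg ≈ {rate - delta:.1f}%")
```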

Office Action

Final Rejection — §103 (mailed Sep 16, 2025)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 21-40 are pending. Claims 21, 22, 25, 31, 39, and 40 have been amended as per Applicants' request.

Papers Submitted

It is hereby acknowledged that the following papers have been received and placed of record in the file: Amended Claims as filed on July 14, 2025.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Claim 40 recites the limitations “means for implementing …”, “means for servicing”, “means for maintaining …”, “means for loading …”, “means for modifying …”, “means for dividing …”, “means for moving …”, “means for reassigning …”, “means for removing”, “means for adding”, “means for storing …”, “means for servicing …”, means for accessing …”, and “means for updating …”. The corresponding structure of cache logic which performs these steps is defined as “The cache logic 210, prefetch logic 230, and/or components and functionality thereof may include, but are not limited to: circuitry, logic circuitry, control circuitry, interface circuitry, input/output (I/O) circuitry, fuse logic, analog circuitry, digital circuitry, logic gates, registers, switches, multiplexers, arithmetic logic units (ALU), state machines, microprocessors, processor-in-memory (PIM) circuitry, and/or the like. The cache logic 210 may be configured as a controller of the cache 110 (or cache controller). The prefetch logic 230 may be configured as a prefetcher (or cache prefetcher) of the cache 110” in paragraph [0068]. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim 21-29, 31-36, 38, and 40 is/are rejected under 35 U.S.C. 103 as being unpatentable over TAN et al. (US 2022/0179785) (hereinafter Tan) (continuation of PCT filed August 03, 2020, priority to CN 201910790109.2 filed August 26, 2019) in view of Palacharla et al. (US 2016/0019157) (hereinafter Pala) (published January 21, 2016) and Roberts et al. (US 2011/0093654) (hereinafter Roberts) (published April 21, 2011). Regarding Claims 21, 31, and 40, taking claim 31 as exemplary, Tan discloses an apparatus comprising: a memory array configured as a cache memory, “A part of a cache space may be reserved in the server 110, each storage node 120, or a management node in the plurality of storage nodes 120 as a read cache and a metadata cache. The read cache is configured to cache data stored in the storage system 100. 
The metadata cache is configured to cache metadata corresponding to the data stored in the storage system 100” (Tan [0051]) logic coupled the memory array, the logic configured to: implement a dynamic partitioning scheme to partition a cache memory into a first portion and a second portion, “A part of a cache space may be reserved in the server 110, each storage node 120, or a management node in the plurality of storage nodes 120 as a read cache and a metadata cache. The read cache is configured to cache data stored in the storage system 100. The metadata cache is configured to cache metadata corresponding to the data stored in the storage system 100” (Tan [0051] the metadata cache is the first portion and the read cache is the second portion) “In an embodiment, the size of the read cache and the size of the metadata cache are dynamically adjusted by using the hit rate of the read cache as a decision factor” (Tan [0112]) service multiple requests relating to an address space, including: “After receiving a read request sent by a client, the server 110 first determines whether the read cache includes to-be-read data corresponding to the read request, or determines whether the metadata cache includes metadata of to-be-read data corresponding to the read request. If the read cache has cached the to-be-read data, the to-be-read data is obtained from the read cache and fed back to the client. If the metadata cache includes the metadata corresponding to the to-be-read data, a storage address corresponding to the to-be-read data is determined based on the metadata corresponding to the to-be-read data to obtain the to-be-read data from the corresponding storage address” (Tan [0052]) maintaining metadata pertaining to the address space within the first portion of the cache memory, and “A part of a cache space may be reserved in the server 110, each storage node 120, or a management node in the plurality of storage nodes 120 as a read cache and a metadata cache. The read cache is configured to cache data stored in the storage system 100. The metadata cache is configured to cache metadata corresponding to the data stored in the storage system 100. For ease of description, a cache space reserved in the server end 110 is used as the read cache and the metadata cache below. That the read cache is configured to cache the data stored in the storage system 100 means that the read cache is configured to cache the data stored in the storage system 100 and related to a read request” (Tan [0051]) loading data associated with addresses of the address space into the second portion of the cache memory; “A part of a cache space may be reserved in the server 110, each storage node 120, or a management node in the plurality of storage nodes 120 as a read cache and a metadata cache. The read cache is configured to cache data stored in the storage system 100. The metadata cache is configured to cache metadata corresponding to the data stored in the storage system 100. For ease of description, a cache space reserved in the server end 110 is used as the read cache and the metadata cache below. 
That the read cache is configured to cache the data stored in the storage system 100 means that the read cache is configured to cache the data stored in the storage system 100 and related to a read request” (Tan [0051]) But does not explicitly state the cache memory comprising multiple ways, the multiple ways comprising ways of a set of the cache memory, the ways of the set comprising a first way and a second way; and modify the dynamic partitioning scheme to adapt a size of the first portion by changing a quantity of the multiple ways that is allocated for the metadata based, at least in part, on a metric quantifying performance of the loading, including: dividing the ways of the set of the cache memory into a first group allocated for the metadata pertaining to the address space and a second group allocated for the data associated with addresses of the address space, moving data from the first way assigned to the second group to the second way assigned to the second group to produce moved data, reassigning the first way of the set from the second group allocated for the data to the first group allocated for the metadata, including: removing the first way of the set from an address mapping scheme, and adding the first way of the set to a metadata mapping scheme, and storing, after the reassigning, metadata in the first way of the set to produce stored metadata; and service, after the modification of the dynamic partitioning scheme and using the cache memory, multiple additional requests relating to the address space, including: accessing the moved data, and updating the stored metadata. Pala and Tan discloses the cache memory comprising multiple ways, the multiple ways comprising ways of a set of the cache memory, the ways of the set comprising a first way and a second way; and “A component cache configuration table may store parameters of each component cache defining the features of the component caches, including parameters for the total number or size of the sets, the location of the set that the component cache occupies, the ways that the component cache occupies, whether the component cache is accessed using a custom index (e.g., derived from a virtual address) or set index derived from a physical address (index mode), and a replacement policy for the component cache” (Pala [0044]) modify the dynamic partitioning scheme to adapt a size of the first portion by changing a quantity of the multiple ways that is allocated for the metadata based, at least in part, on a metric quantifying performance of the loading, including: “A component cache configuration table may store parameters of each component cache defining the features of the component caches, including parameters for the total number or size of the sets, the location of the set that the component cache occupies, the ways that the component cache occupies, whether the component cache is accessed using a custom index (e.g., derived from a virtual address) or set index derived from a physical address (index mode), and a replacement policy for the component cache. 
The component cache configuration table may populate dynamically as clients request cache access, or the component cache configuration table may populate at boot time, defining a number of component caches, and remain static at runtime” (Pala [0044]) “The size of a cache line in a component cache may be dependent on the number of cache sets and/or ways making up the component cache” (Pala [0045]) “The server adjusts a size of the read cache and a size of the metadata cache based on the hit rate of the read cache” (Tan [0089] the cache size is adjusted based on the hit rate, which is a metric regarding the performance of loads to the cache) dividing the ways of the set of the cache memory into a first group allocated for the metadata pertaining to the address space and a second group allocated for the data associated with addresses of the address space, “The size of a cache line in a component cache may be dependent on the number of cache sets and/or ways making up the component cache” (Pala [0045]) “This application provides a cache space management method and apparatus, to properly set a size of a cache space occupied by a read cache and a size of a cache space occupied by a metadata cache, so as to improve data read performance of a storage system” (Tan [0006] the cache is composed of multiple ways as disclosed by Pala and these ways are divided into two groups, one group for the read cache and one group for the metadata cache) reassigning the first way of the set from the second group allocated for the data to the first group allocated for the metadata, including: removing the first way of the set from an address mapping scheme, and adding the first way of the set to a metadata mapping scheme, and “The size of a cache line in a component cache may be dependent on the number of cache sets and/or ways making up the component cache” (Pala [0045]) “when the prefetch hit rate is greater than or equal to a first threshold, decreasing the read cache and increasing the metadata cache” (Tan [0092] ways in the read cache are required to be part of an address mapping scheme and ways in the metadata cache are required to be part of a metadata mapping scheme to operate and the reassigning of ways from the read cache to the metadata cache is to achieve the increasing/decreasing of the respective cache and the addition/removal from the respective mapping scheme) storing, after the reassigning, metadata in the first way of the set to produce stored metadata; and “when the prefetch hit rate is greater than or equal to a first threshold, decreasing the read cache and increasing the metadata cache” (Tan [0092] the reassigned first way is now part of the metadata cache) “The metadata cache is configured to cache metadata corresponding to the data stored in the storage system 100” (Tan [0051] after the reassigning the first way is to perform the normal functions of storing metadata as part of the metadata cache)

It would have been obvious before the effective filing date of the invention to one of ordinary skill in the art to combine the dynamic configuration of the cache to define the size of the cache as a number of sets and ways with the cache of Tan. The motivation for doing so would be for increased adaptability to workload changes by modifying the size of the cache through changing the number of sets and ways to improve performance of the cache.
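To make concrete the mechanism the rejection maps onto the claims: the ways of a set are split between a metadata group (first portion) and a data group (second portion), and a way is reassigned from one group's mapping scheme to the other when a hit-rate metric crosses a threshold. The sketch below is illustrative only; the class, names, and threshold are hypothetical and not taken from Tan or Palacharla:

```python
# Hypothetical sketch of way-granular cache repartitioning as described above.
class PartitionedSet:
    def __init__(self, num_ways: int, metadata_ways: int):
        # First portion (metadata) and second portion (data) of one cache set.
        self.metadata_group = set(range(metadata_ways))
        self.data_group = set(range(metadata_ways, num_ways))

    def adapt(self, prefetch_hit_rate: float, threshold: float = 0.8) -> int | None:
        """Reassign one way from the data group to the metadata group when
        the loading metric crosses the threshold (cf. Tan [0092])."""
        if prefetch_hit_rate >= threshold and self.data_group:
            way = min(self.data_group)      # the "first way" of the set
            self.data_group.remove(way)     # drop it from the address mapping
            self.metadata_group.add(way)    # add it to the metadata mapping
            return way
        return None
```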
Roberts and Tan discloses moving data from the first way assigned to the second group to the second way assigned to the second group to produce moved data, “This mechanism comprises an eviction selection mechanism for selecting evictable cached data for eviction from the cache memory to the main memory, and a cache compacting mechanism configured to perform cache compaction by evicting the evictable cached data from the cache memory and storing non-evictable cached data in fewer cache segments than were used to store the cached data prior to eviction of the evictable cached data. When cached data is evicted from the cache, the cache compacting mechanism packs the remaining data into fewer, more densely packed, cache segments so that at least one cache segment is no longer required to store cached data” (Roberts [0018] non-evictable data in the data cache would be compacted/moved to from one way/segment to another way/segment such that at least one way/segment is freed up) service, after the modification of the dynamic partitioning scheme and using the cache memory, multiple additional requests relating to the address space, including: accessing the moved data, and updating the stored metadata. “After receiving a read request sent by a client, the server 110 first determines whether the read cache includes to-be-read data corresponding to the read request, or determines whether the metadata cache includes metadata of to-be-read data corresponding to the read request. If the read cache has cached the to-be-read data, the to-be-read data is obtained from the read cache and fed back to the client. If the metadata cache includes the metadata corresponding to the to-be-read data, a storage address corresponding to the to-be-read data is determined based on the metadata corresponding to the to-be-read data to obtain the to-be-read data from the corresponding storage address” (Tan [0052] in response to the read request it is determined that the moved data is included in the read cache and the metadata in the metadata cache is updated) It would have been obvious before the effective filing date of the invention to one of ordinary skill in the art to combine the compaction of data in Roberts with the memory system in the combination of Tan and Pala. The motivation for doing so would be to more efficiently use the storage space and be able to turn off or repurpose the segments that are no longer required. Furthermore coherency would also be improved as only cache ways/segments that have no valid data would be reassigned. Regarding Claim 22, the combination of Tan, Pala, and Roberts disclosed the method of claim 21, but does not explicitly state further comprising: utilizing the metadata mapping scheme to access cache units allocated to the first portion of the cache memory; and utilizing the address mapping scheme to map addresses of the address space to cache units allocated to the second portion of the cache memory. Tan discloses determination of storage address in the read cache, and properly sizing and adjusting the sizes of the read cache and the metadata cache. 
“If the server determines that storage addresses of data stored in the read cache are the LUN 0 and the LUN 1, which are different from the storage address of the to-be-read data, the server determines that the read cache does not include the first to-be-read data” (Tan [0065]) “The server may periodically perform operation S211 and operation S212, so that proper sizes of cache spaces can be configured for the read cache and the metadata, thereby ensuring the data read performance of the storage system” (Tan [0110]) “Adjusting the size of the read cache and the size of the metadata cache based on the hit rate of the read cache means that the size of the read cache and the size of the metadata cache are allocated based on the hit rate of the read cache when a sum of the size of the read cache and the size of the metadata cache is determined” (Tan [0111])

Paragraph [0068] of the specification states “Partitioning the cache memory 120 into a plurality of portions (e.g., a first portion 124 and second portion 126) may include configuring mapping logic and/or mapping schemes of the portions to allocate, include, and/or incorporate designated cache memory resources of the cache memory 120. As used herein, “allocating,” “partitioning,” or “assigning” a portion of the cache memory 120 (or “allocating,” “partitioning,” or “assigning” cache memory resources to a portion or partition of the cache memory 120) may include configuring mapping logic and/or a mapping scheme of the portion (or partition) to “include” or “reference” the cache memory resources …” When viewing the applicant’s specification, the mapping schemes are used to define the first and second portions for metadata and data respectively.

To one of ordinary skill in the art when viewing Tan, the adjusting of the sizes of the cache, such as increasing and decreasing the metadata cache and the read cache, would include the deallocation of segments from one cache and the allocation and association of those segments to the other cache. The metadata mapping scheme would just be the association of the segments to the metadata cache and the address mapping scheme would just be the association of the segments to the read cache. One of ordinary skill in the art would also understand that the determination of storage addresses of the data in the read cache disclosed by Tan would implicitly teach the mapping of the storage addresses of the data in storage to the location of the read cache where it is cached and the utilization of the mapping to access the cached data, and likewise the implicit teaching of metadata mapping to link the metadata to the data and the utilization of the mapping to access the metadata.

Therefore it would have been obvious before the effective filing date of the invention to one of ordinary skill in the art to combine this understanding (that adjusting the sizes of the caches includes deallocating, allocating, and associating segments using schemes that map addresses and/or metadata to the portions of cache memory) with the memory system in the combination of Tan, Pala, and Roberts. The motivation for doing so would be to effectively move segments of the caches around according to need and to be able to identify to which cache the segments belong.
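Claim 22's two mapping schemes can be pictured as two lookup structures consulted by portion: addresses of the address space resolve through one, metadata keys through the other. A minimal, hypothetical illustration; the dict-based schemes and names below are assumptions for exposition, not from any cited reference:

```python
# Hypothetical illustration of utilizing the two mapping schemes of claim 22.
address_scheme: dict[int, int] = {0x1000: 2, 0x2000: 3}  # address -> data way
metadata_scheme: dict[str, int] = {"index_table": 0}     # metadata key -> way

def lookup_data(address: int) -> int | None:
    # Second portion: map an address of the address space to a cache way.
    return address_scheme.get(address)

def lookup_metadata(key: str) -> int | None:
    # First portion: map a metadata key to the way holding that metadata.
    return metadata_scheme.get(key)
```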
Regarding Claim 23, Tan further discloses further comprising: loading data into the second portion of the cache memory in response to cache misses, including cache misses resulting from requests pertaining to addresses that are not available within the cache memory. “The server determines that the read cache does not include the first to-be-read data, and determines that the metadata cache does not include metadata corresponding to the first to-be-read data” (Tan [0064] when the cache does not include the data the addresses pertaining to that data is not available) “The server sends the first to-be-read data to the client, and stores the first to-be-read data in the read cache” (Tan [0068]) Regarding Claim 24, Tan further discloses further comprising: prefetching data into the second portion of the cache memory based, at least in part, on the metric quantifying performance of the loading. “A specific implementation may include data stored in the storage system 100 and accessed by the read request, and may further include data related to data that is stored in the storage system 100, that is prefetched based on a prefetch algorithm, and that is accessed by the read request” (Tan [0051]) “It should be noted that operation S207 is an optional operation, that is, operation S207 is not mandatory to be performed. For example, the server may determine, based on a usage requirement, whether operation S207 needs to be performed” (Tan [0071] the second data is prefetched since it is supplied before the request) “In an embodiment of this application, the hit rate of the read cache may be represented by using a prefetch hit rate or a repeated hit rate. To clarify a difference between the prefetch hit rate and the repeated hit rate, the following describes data stored in the read cache” (Tan [0080] the prefetching of data is based on the usage/prefetch hit rate) Regarding Claim 25, Tan further discloses further comprising: utilizing the metadata maintained within the first portion of the cache memory to predict addresses of upcoming requests; and “If the metadata cache includes the metadata corresponding to the to-be-read data, a storage address corresponding to the to-be-read data is determined based on the metadata corresponding to the to-be-read data to obtain the to-be-read data from the corresponding storage address” (Tan [0052]) Prefetching, into the second portion of the cache memory, data corresponding to the predicted addresses before requests pertaining to the predicted addresses are received at the cache memory. “It should be noted that operation S207 is an optional operation, that is, operation S207 is not mandatory to be performed. For example, the server may determine, based on a usage requirement, whether operation S207 needs to be performed” (Tan [0071] in S207 the second data is prefetched since it is supplied before the request in S208) “If the server performs operation S207 and determines that the storage address of the first to-be-read data is the LUN 4, the server stores data after the LUN 4, that is, data in an LUN 5 and an LUN 6, in the read cache. 
So far, the read cache has stored data in the LUN 0, the LUN 1, and the LUN 4 to the LUN 6” (Tan [0072]) Regarding Claim 26, Tan and Pala further discloses further comprising: modifying the dynamic partitioning scheme to adapt a size of the second portion by changing a quantity of at least one of multiple sets or the multiple ways that are allocated to the data based, at least in part, on the metric quantifying performance of the loading. “In a first manner, the server uses a prefetch hit rate to represent the hit rate of the read cache, and the adjusting a size of the read cache and a size of the metadata cache based on the hit rate of the read cache includes:” (Tan [0091] the cache size is dynamically adjusted based on the hit rate, which is a metric regarding the performance of loads to the cache) “The size of a cache line in a component cache may be dependent on the number of cache sets and/or ways making up the component cache” (Pala [0045] the adjusting of the size of the cache in Tan would change the quantity of the number of sets or ways) Regarding Claim 27, Tan further discloses further comprising: modifying the dynamic partitioning scheme to adapt the size of the first portion and the size of the second portion based, at least in part, on a metric quantifying prefetch performance of the loading. “In a first manner, the server uses a prefetch hit rate to represent the hit rate of the read cache, and the adjusting a size of the read cache and a size of the metadata cache based on the hit rate of the read cache includes:” (Tan [0091]) Regarding Claim 28, Tan further discloses further comprising: monitoring the metric quantifying prefetch performance that pertains to data prefetched into the second portion of the cache memory; and “In a first manner, the server uses a prefetch hit rate to represent the hit rate of the read cache, and the adjusting a size of the read cache and a size of the metadata cache based on the hit rate of the read cache includes:” (Tan [0091]) comparing the metric quantifying prefetch performance to one or more thresholds. “when the prefetch hit rate is greater than or equal to a first threshold, decreasing the read cache and increasing the metadata cache” (Tan [0092]) Regarding Claim 29, Tan further discloses further comprising: increasing the size of the first portion of the cache memory allocated for the metadata and decreasing a size of the second portion of the cache memory allocated for the data responsive to the metric quantifying performance exceeding at least one threshold. “when the prefetch hit rate is greater than or equal to a first threshold, decreasing the read cache and increasing the metadata cache” (Tan [0092]) Regarding Claim 30, Roberts and Tan further discloses further comprising: increasing the size of the first portion and decreasing a size of the second portion, includes compacting data stored within a second portion of the cache memory; and “This mechanism comprises an eviction selection mechanism for selecting evictable cached data for eviction from the cache memory to the main memory, and a cache compacting mechanism configured to perform cache compaction by evicting the evictable cached data from the cache memory and storing non-evictable cached data in fewer cache segments than were used to store the cached data prior to eviction of the evictable cached data. 
When cached data is evicted from the cache, the cache compacting mechanism packs the remaining data into fewer, more densely packed, cache segments so that at least one cache segment is no longer required to store cached data” (Roberts [0018] non-evictable data in the data cache would be compacted/moved to from one way/segment to another way/segment such that at least one way/segment is freed up) “when the prefetch hit rate is greater than or equal to a first threshold, decreasing the read cache and increasing the metadata cache” (Tan [0092]) “It should be noted that if a total size of cache spaces used for and the read cache and the metadata cache is specified in the server, when one cache (the read cache or the metadata cache) is decreased, the other cache is necessarily increased” (Tan [0095] after the segments are freed up from compaction and are not required for use in the respective cache it would be reallocated for use in the other cache) decreasing the size of the first portion and increasing the size of the second portion, including compacting metadata stored within the first portion of the cache memory. “This mechanism comprises an eviction selection mechanism for selecting evictable cached data for eviction from the cache memory to the main memory, and a cache compacting mechanism configured to perform cache compaction by evicting the evictable cached data from the cache memory and storing non-evictable cached data in fewer cache segments than were used to store the cached data prior to eviction of the evictable cached data. When cached data is evicted from the cache, the cache compacting mechanism packs the remaining data into fewer, more densely packed, cache segments so that at least one cache segment is no longer required to store cached data” (Roberts [0018] non-evictable data in the data cache would be compacted/moved to from one way/segment to another way/segment such that at least one way/segment is freed up) “the server determines to increase the read cache and decrease the metadata cache. A manner in which the server increases the read cache and decreases the metadata cache is similar to the corresponding content in the first manner, and details are not described herein again” (Tan [0099]) “It should be noted that if a total size of cache spaces used for and the read cache and the metadata cache is specified in the server, when one cache (the read cache or the metadata cache) is decreased, the other cache is necessarily increased” (Tan [0095] after the segments are freed up from compaction and are not required for use in the respective cache it would be reallocated for use in the other cache) Regarding Claim 32, Tan further discloses wherein the logic is further configured to: maintain within the first portion of the cache memory one or more of an address sequence, address history, index table, delta sequence, stride pattern, correlation pattern, feature vector, machine-learned (ML) feature, ML feature vector, ML model, or ML modeling data. “Because the read cache caches data and the metadata cache caches metadata (that is, an index of the data)” (Tan [0053]) Regarding Claim 33, Tan further discloses wherein: the metric quantifying performance of the loading comprises a prefetch metric; “In an embodiment of this application, the hit rate of the read cache may be represented by using a prefetch hit rate or a repeated hit rate. 
To clarify a difference between the prefetch hit rate and the repeated hit rate, the following describes data stored in the read cache” (Tan [0080]) and the logic is further configured to monitor one or more of a prefetch hit rate, quantity of useful prefetches, quantity of bad prefetches, or ratio of useful prefetches to bad prefetches. “The prefetch hit rate is a ratio of a data volume of to-be-read data obtained by a prefetch operation within preset duration to a total data volume of data prefetched by the prefetch operation within the preset duration. The prefetch hit rate is a ratio of a data volume of to-be-read data obtained by a prefetch operation within preset duration to a total data volume of data prefetched by the prefetch operation within the preset duration. For example, within the preset duration, the client has sent ten data read requests, and a total data volume of to-be-read data corresponding to the ten data read requests is 200 M. Then, the server obtains recorded hit statuses corresponding to the ten data read requests and determines that there are nine data read requests with a hit status of a prefetch hit in the ten data read requests, and a total data volume of to-be-read data corresponding to the nine data read requests is 160 M, thereby obtaining that the prefetch hit rate is 160/200=80% within the preset duration” (Tan [0086]) Regarding Claim 34, Tan further discloses wherein the logic is further configured to: increase the size of the first portion of the cache memory for the metadata pertaining to the address space in response to the metric exceeding at least one threshold; and decrease a size of the second portion of the cache memory for the data associated with addresses of the address space in response to the metric exceeding the at least one threshold. “when the prefetch hit rate is greater than or equal to a first threshold, decreasing the read cache and increasing the metadata cache” (Tan [0092]) Regarding Claim 35, Tan further discloses wherein the logic is further configured to: decrease the size of the first portion of the cache memory for the metadata pertaining to the address space in response to the metric being below the at least one threshold; and increase the size of the second portion of the cache memory for the data associated with addresses of the address space in response to the metric being below the at least one threshold. “The repeated hit rate is a ratio of a data volume of to-be-read data obtained from cached data within the preset duration to a total data volume of the cached data” (Tan [0087]) “when the repeated hit rate is greater than or equal to a second threshold, increasing the read cache and decreasing the metadata cache” (Tan [0097] repeating hit rate greater than or equal to can be rewritten as “a ratio of a total data volume of the cached data to a data volume of to-be-read data obtained from cached data within the preset duration being less than or equal to”) Regarding Claim 36, Tan and Pala further discloses wherein the logic is further configured to: increase the size of the first portion by the reassigning of the first way of the set from the second group allocated for data to the first group allocated for the metadata; and decrease a size of the second portion by the reassigning of the first way of the set from the second group allocated for data to the first group allocated for metadata. 
“A part of a cache space may be reserved in the server 110, each storage node 120, or a management node in the plurality of storage nodes 120 as a read cache and a metadata cache. The read cache is configured to cache data stored in the storage system 100. The metadata cache is configured to cache metadata corresponding to the data stored in the storage system 100” (Tan [0051] the metadata cache is the first portion and the read cache is the second portion) “when the prefetch hit rate is greater than or equal to a first threshold, decreasing the read cache and increasing the metadata cache” (Tan [0092] ways from the metadata cache is reallocated/reassigned to the read cache to achieve the increasing and decreasing of the read cache and metadata cache) Regarding Claim 38, Tan and Pala further discloses wherein the logic is further configured to: allocate a quantity of sets of multiple sets of the cache memory for the metadata pertaining to the address space; and “A server configures initial sizes for the read cache and the metadata cache” (Tan [0056]) “The size of a cache line in a component cache may be dependent on the number of cache sets and/or ways making up the component cache” (Pala [0045]) modify the quantity of sets of the multiple sets of the cache memory allocated for the metadata pertaining to the address space based, at least in part, on the metric quantifying performance of the loading. “when the prefetch hit rate is greater than or equal to a first threshold, decreasing the read cache and increasing the metadata cache” (Tan [0092] when modifying increasing the size of the metadata cache more ways are allocated to it) “The size of a cache line in a component cache may be dependent on the number of cache sets and/or ways making up the component cache” (Pala [0045] the increasing of the size of the cache in Tan would be achieved by modifying the quantity of the number of sets or ways) Claim(s) 37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tan (continuation of PCT filed August 03, 2020, priority to CN 201910790109.2 filed August 26, 2019), Pala (published January 21, 2016), and Roberts (published April 21, 2011) as applied to claim 31 above, and further in view of Talagala et al. (US 2013/0185475) (hereinafter Talagala) (published July 18, 2013). Regarding Claim 37, the combination of Tan, Pala, and Roberts disclosed the apparatus of claim 31, and Tan further discloses wherein the logic is further configured to: read the moved data from the second way of the set of the cache memory in response to at least one additional request of the multiple additional requests relating to the address space; and “After receiving a read request sent by a client, the server 110 first determines whether the read cache includes to-be-read data corresponding to the read request, or determines whether the metadata cache includes metadata of to-be-read data corresponding to the read request” (Tan [0052] after compaction the data would be stored in the second way and the requested would be directed where the data is currently) But does not explicitly state update the stored metadata maintained within the first way of the set of the cache memory in response to one or more additional requests of the multiple additional requests relating to the address space. Talagala discloses update the stored metadata maintained within the first way of the set of the cache memory in response to one or more additional requests of the multiple additional requests relating to the address space. 
“The cache module 440 may be configured to update the access metadata 442 in response to data accesses within the logical address space 432” (Talagala [0197])

It would have been obvious before the effective filing date of the invention to one of ordinary skill in the art to combine the updating of metadata in Talagala with the memory system in the combination of Tan, Pala, and Roberts. The motivation for doing so would be to increase coherency of the storage space by keeping the metadata up to date with respect to data accesses.

Claim 39 is rejected under 35 U.S.C. 103 as being unpatentable over Tan (continuation of PCT filed August 03, 2020, priority to CN 201910790109.2 filed August 26, 2019), Pala (published January 21, 2016), and Roberts (published April 21, 2011) as applied to claim 31 above, and further in view of Brown et al. (US 2015/0113223) (hereinafter Brown) (published April 23, 2015).

Regarding Claim 39, the combination of Tan, Pala, and Roberts discloses the apparatus of claim 31, and Tan, Pala, and Roberts further disclose wherein the cache memory comprises multiple cache units including a group of cache units; and “The size of a cache line in a component cache may be dependent on the number of cache sets and/or ways making up the component cache” (Pala [0045] cache units would be the cache lines used to implement the cache and organized into sets and ways) the logic is further configured, responsive to a determination to reallocate the group of cache units from the second portion to the first portion, “when the prefetch hit rate is greater than or equal to a first threshold, decreasing the read cache and increasing the metadata cache” (Tan [0092]) evict data from the selected cache unit, the selected cache unit to remain allocated to the second portion, move data to the selected cache unit from a cache unit of the group of cache units being reallocated from the second portion to the first portion, “The cache compaction mechanism 40 acts to evict evictable data from the cache 5 and to rearrange the cached data within the cache 5 so that the non-evictable data is packed into fewer segments 12 than had been used prior to eviction. Following cache compaction by the cache compaction mechanism 40, one or more segments 12 of the cache 5 will no longer be required for storing cache data, and so these segments may be placed in the power saving state by the power supply unit 15 under control of the power controller 22” (Roberts [0091] the selected cache unit is a cache unit having data evicted from it and further storing data from the selected unit that is being compacted) “It should be noted that if a total size of cache spaces used for and the read cache and the metadata cache is specified in the server, when one cache (the read cache or the metadata cache) is decreased, the other cache is necessarily increased” (Tan [0095] after the cache unit is freed up via compaction and is not required for use in the respective cache it would be reallocated for use in the other cache) evict data from the group of cache units, and “The cache compaction mechanism 40 acts to evict evictable data from the cache 5 and to rearrange the cached data within the cache 5 so that the non-evictable data is packed into fewer segments 12 than had been used prior to eviction.
Following cache compaction by the cache compaction mechanism 40, one or more segments 12 of the cache 5 will no longer be required for storing cache data, and so these segments may be placed in the power saving state by the power supply unit 15 under control of the power controller 22” (Roberts [0091] the cache unit would be evicted of data and have the rest of the data compacted and stored in the selected cache unit)

But does not explicitly state disable cache tags associated with the group of cache units. Brown discloses disable cache tags associated with the group of cache units, “In response to a reduction in the size of the available storage capacity, the allocation module may be configured to a) evict data from the cache, and/or b) deallocate one or more cache tags corresponding to the evicted data” (Brown [0008])

It would have been obvious before the effective filing date of the invention to one of ordinary skill in the art to combine the deallocating of the cache tags in Brown with the memory system in the combination of Tan, Pala, and Roberts. The motivation for doing so would be to increase coherency of the storage space by keeping the cache tags up to date with respect to data eviction.

Response to Arguments

Reply to Claim Objection(s)

Applicant’s arguments, see page 15 of remarks, filed July 14, 2025, with respect to claim 39 have been fully considered and are persuasive. The claim objection of claim 39 has been withdrawn.

Reply to § 103 Claim Rejections

Applicant's arguments filed July 14, 2025 have been fully considered but they are not persuasive.

Applicant Argues: a) Thus, claim 21 now reads: “reassigning the first way of the set from the second group allocated for the data to the first group allocated for the metadata, including: removing the first way of the set from an address mapping scheme, and adding the first way of the set to a metadata mapping scheme.” It is respectfully submitted that the applied documents of Tan, Palacharla, and Roberts do not describe, suggest, or otherwise render unpatentable, either alone or in any combination, the recitations of at least amended claim 21. First, it is respectfully noted that the Office has not addressed these new recitations of claim 21.

With respect to (a), it is implicit that the data portion of a cache would have an address mapping scheme and the metadata portion of the cache would have a metadata mapping scheme to operate. Without any mapping schemes, data in the cache memory would just be a bunch of random information with no meaning. An address mapping scheme is associated with the data portion of the cache to identify the addresses of lines of the cache to store cached data and a metadata mapping is associated with the metadata portion to link the stored metadata with the data.

b) Second, claim 21 reads "removing the first way of the set from an address mapping scheme." Claim 21 also reads "adding the first way of the set to a metadata mapping scheme." Neither Tan nor Palacharla describes a "mapping scheme." Consequently, neither Tan nor Palacharla can describe "an address mapping scheme" or "a metadata mapping scheme," much less both such mapping schemes. It is therefore respectfully submitted that Tan and Palacharla, both alone and in any combination with each other or other document(s) of record, cannot describe or teach utilizing "an address mapping scheme" and "a metadata mapping scheme" in the manner recited in claim 21.
With respect to (b), an address mapping scheme and metadata mapping scheme are implicit to the working of a cache including data and metadata. And when reassigning ways/lines of the cache to/from the data portion to/from the metadata portions, the address mapping scheme and metadata mapping scheme would be updated to indicate how that way/line is reassigned.

c) Third, it is respectfully submitted that combining Palacharla with Tan under 35 U.S.C. § 103, as proposed in the Current Office Action, would render one or both unfit for their intended purpose due to their fundamentally different approaches to cache space management. Tan is directed to a system in which “the size of the read cache and the size of the metadata cache are dynamically adjusted.” (Tan; the Abstract.) Tan focuses on “a cache space management method and apparatus, [which is designed] to properly set a size of a cache space occupied by a read cache and a size of a cache space occupied by a metadata cache, so as to improve data read performance of a storage system.” (Tan; Paragraph [0006].) …

With respect to (c), applicant contends that Tan and Palacharla are “fundamentally different” and that their combination would render one or both references unfit for their intended purpose. This is not persuasive because the references address complementary, not conflicting, aspects of cache management. Tan teaches dynamically adjusting between read and metadata caches within a cache allocation, while Palacharla teaches partitioning a system cache and deactivating partitions to save power or make space available for other uses. A person of ordinary skill would recognize that such released space could readily be reassigned within Tan’s framework to the other cache portion (read or metadata), thereby improving cache efficiency. Thus, Palacharla’s deactivation mechanism naturally complements Tan’s balancing logic. The legal standard under 35 U.S.C. § 103 does not require each reference to remain unchanged in isolation; the inquiry is only whether their combination would have been obvious to one of ordinary skill. Here, combining Palacharla’s partitioning and deactivation with Tan’s intra-partition balancing yields predictable benefits in both efficiency and power management.

No further arguments have been provided for the dependent claims beyond a reference to the above and to their dependency from the independent claims.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY LI whose telephone number is (571)270-5967. The examiner can normally be reached Monday to Friday 10:00 AM to 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
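
The hit-rate arithmetic that drives the resizing decisions quoted throughout the action is straightforward; Tan's own example ([0086]) works out as below. The threshold value is an illustrative assumption, not a figure from Tan:

```python
# Tan's worked example ([0086]): 160 M of the 200 M requested within the
# window was served by prefetched data, giving a prefetch hit rate of 80%.
def prefetch_hit_rate(hit_volume_mb: float, total_volume_mb: float) -> float:
    return hit_volume_mb / total_volume_mb

rate = prefetch_hit_rate(160, 200)   # 0.8 -> 80%

FIRST_THRESHOLD = 0.8                # illustrative assumption
if rate >= FIRST_THRESHOLD:
    # Per Tan [0092]: decrease the read cache, increase the metadata cache.
    decision = "shrink read cache, grow metadata cache"
else:
    decision = "leave sizes unchanged"
print(f"prefetch hit rate = {rate:.0%}; decision: {decision}")
```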

Prosecution Timeline

Nov 21, 2022: Application Filed
Feb 24, 2023: Response after Non-Final Action
May 30, 2024: Non-Final Rejection — §103
Aug 19, 2024: Interview Requested
Aug 29, 2024: Examiner Interview Summary
Aug 29, 2024: Applicant Interview (Telephonic)
Sep 09, 2024: Response Filed
Dec 02, 2024: Final Rejection — §103
Mar 05, 2025: Applicant Interview (Telephonic)
Mar 05, 2025: Examiner Interview Summary
Mar 10, 2025: Request for Continued Examination
Mar 17, 2025: Response after Non-Final Action
May 01, 2025: Non-Final Rejection — §103
Jul 01, 2025: Examiner Interview Summary
Jul 01, 2025: Applicant Interview (Telephonic)
Jul 14, 2025: Response Filed
Sep 16, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596501: SYSTEMS AND METHODS FOR ALLOCATING READ BUFFERS BASED ON READ DATA SIZES IN NON-VOLATILE STORAGE DEVICES
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12572300: MEMORY EXPANDER, HETEROGENEOUS COMPUTING DEVICE USING MEMORY EXPANDER, AND OPERATION METHOD OF HETEROGENOUS COMPUTING
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12572281: ADAPTIVE DIE SELECTION FOR BLOCK FAMILY SCAN
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12566715: TRANSLATION TABLE ADDRESS STORAGE CIRCUITRY
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12554656: SYSTEMS, METHODS, AND DEVICES FOR NEAR DATA PROCESSING
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 80%
With Interview: 86% (+5.9%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 380 resolved cases by this examiner. Grant probability derived from career allow rate.
