Prosecution Insights
Last updated: April 19, 2026
Application No. 18/090,249

UNIFIED FLEXIBLE CACHE

Status: Non-Final OA (§103)
Filed: Dec 28, 2022
Examiner: MACKALL, LARRY T
Art Unit: 2139
Tech Center: 2100 — Computer Architecture & Software
Assignee: Advanced Micro Devices, Inc.
OA Round: 7 (Non-Final)

Grant Probability: 85% (Favorable); 93% with interview
OA Rounds: 7-8
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 85% — above average (661 granted / 779 resolved; +29.9% vs TC avg)
Interview Lift: +8.1% (moderate) for resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 31 applications currently pending
Career History: 810 total applications across all art units

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 24.8% (-15.2% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)

Tech Center averages are estimates • Based on career data from 779 resolved cases

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 4 February 2026 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-3, 5-7, and 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Loh et al. (Pub. No. US 2014/0173211) in view of Joshi (Pub. No. US 2012/0005432), and Agarwal et al. (Pub. No. US 2019/0294546).

Claim 1: Loh et al. disclose a device comprising: a cache structure [fig. 2 – cache]; and a cache controller [fig. 2 – cache controller] configured to: partition the cache structure into a plurality of cache partitions that are sized based on different cache types [fig. 4; pars. 0094-0095 – Partitioning is dynamically performed. (“The process shown in FIG. 5 starts when partitioning mechanism 216 re-allocates two or more sub-partitions for a partition in the cache based on at least one of a memory access pattern of at least one of one or more sub-entities and a property of at least one of the one or more sub-entities (step 500). Generally, this operation is an update of the previously established allocation (see step 400 in FIG. 4) that is made after the earlier allocation as a dynamic update of the sub-partitioning of cache 200.”)]; configure a first partition of the plurality of cache partitions into a client-side cache or a memory-side cache [fig. 2; par. 0046, 0058-0068 – Cache is partitioned between CPU and GPU cores. The cache is used by the CPU and GPU cores, and as such, corresponds to the claimed client-side cache (e.g. caches used by processors). Examiner notes that “client-side cache” and “memory-side cache” are claimed in alternate form, and as such, only one is required. In other words, the claim requires a first cache type, which may be a client-side cache or a memory-side cache, and a second type which corresponds to a probe filter. (“Partitioning mechanism 216 is a functional block that performs operations for partitioning the memory in the cache for use by one or more entities and/or sub-entities.” … “In the first part of the partitioning process, the cache is partitioned into partitions by allocating ways to entities such as CPUs or GPUs, and in the second part, a partition (as established in the first part) is partitioned into sub-partitions by allocating the ways in the partition to sub-entities of the corresponding entity such as functional blocks in or software threads executing on the CPUs or GPUs.”)]; receive memory requests corresponding to the different cache types [pars. 0036, 0040-0041 – Memory requests are received. (“Within computing device 100, memory requests are preferentially handled in the level of the memory hierarchy that results in the fastest and/or most efficient operation of computing device 100.” … “Cache controller 204 is a functional block that performs various functions for controlling operations in cache 200. For example, cache controller 204 can manage storing cache blocks to, invalidating cache blocks in, and evicting cache blocks from cache 200; can perform lookups for cache blocks in cache 200; can handle coherency operations for cache 200; and/or can respond to requests for cache blocks from cache 200.”)]; and perform, using the target cache partition, the memory request [pars. 0040-0055 – The request is performed. (“In these embodiments, upon receiving a memory request to be resolved in cache 200, cache controller 204 determines, from information in the request, an entity or sub-entity that made the memory request. Cache controller 204 then determines a number of ways allocated to the entity or sub-entity from partition record 220 and processes the memory request accordingly. For example, when the memory request is a request to write data to cache 200, cache controller 204 can use the number of ways allocated to the entity or sub-entity and other entities or sub-entities to determine a way to which the data is permitted to be written.”)].

However, Loh et al. do not specifically disclose, configure a second partition of the plurality of cache partitions into a probe filter for the first partition. In the same field of endeavor, Joshi discloses, configure a second partition of the plurality of cache partitions into a probe filter for the first partition [par. 0026 – “Embodiments of the present invention are directed to filtering broadcast probes used to maintain cache coherency on multiprocessor/multi-node systems. According to an embodiment of the present invention, probe-filter (PF) logic uses a portion of a level-three (L3) cache to store a directory of entries that track cache lines. Each node maintains a separate directory and tracks lines cached anywhere in the multiprocessor/multi-node system for which it is the home node. Based on whether a cache line is present in the directory, the PF logic can either generate a directed probe or handle a data-access request without generating any probes.”]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Loh et al. and to include probe filters, as taught by Joshi, in order to improve performance by reducing bandwidth overhead and memory latency by preventing unnecessary snooping (checking) of caches for data that is not needed.

Loh et al. and Joshi disclose all the limitations above but do not specifically disclose, the controller configured to: forward a memory request of the received memory requests to a target cache partition, wherein a target cache type of the memory request indicates the target partition [pars. 0040-0055 – Loh et al. disclose performing operations according to the entity or sub-entity that made the request, but do not clearly disclose forwarding the requests. In other words, the request indicates the target partition by indicating the entity or sub-entity. (“In these embodiments, upon receiving a memory request to be resolved in cache 200, cache controller 204 determines, from information in the request, an entity or sub-entity that made the memory request. Cache controller 204 then determines a number of ways allocated to the entity or sub-entity from partition record 220 and processes the memory request accordingly. For example, when the memory request is a request to write data to cache 200, cache controller 204 can use the number of ways allocated to the entity or sub-entity and other entities or sub-entities to determine a way to which the data is permitted to be written.”)]. In the same field of endeavor, Agarwal et al. disclose, the controller configured to: forward a memory request of the received memory requests to a target cache partition, wherein a target cache type of the memory request indicates the target partition [figs. 2-3; pars. 0017-0022 – The cache request is forwarded to the appropriate cache partition (slice). (“If the requested data line does not reside in the level-two cache (i.e., misses the level-two cache) (402), then the level-two cache controller issues a speculative DRAM read request to main memory 114 (408) in parallel with forwarding the memory request to the last-level cache (e.g., a corresponding slice of the level-three cache).”)]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Loh et al. and Joshi to include forwarding requests in parallel with a speculative DRAM read request, as taught by Agarwal et al., in order to improve performance by reducing miss latency of the cache.

Claim 2 (as applied to claim 1 above): Loh et al. disclose, wherein forwarding the memory request is based on an addressing scheme incorporating cache types [pars. 0040-0055 – The request is forwarded to one or more appropriate cache ways according to the requesting entity or sub-entity. (“In these embodiments, upon receiving a memory request to be resolved in cache 200, cache controller 204 determines, from information in the request, an entity or sub-entity that made the memory request. Cache controller 204 then determines a number of ways allocated to the entity or sub-entity from partition record 220 and processes the memory request accordingly. For example, when the memory request is a request to write data to cache 200, cache controller 204 can use the number of ways allocated to the entity or sub-entity and other entities or sub-entities to determine a way to which the data is permitted to be written.”)].

Claim 3 (as applied to claim 2 above): Loh et al. disclose, wherein the addressing scheme includes one or more bits for identifying the target cache partition [pars. 0040-0055 – The appropriate cache ways are selected according to the requesting entity or sub-entity. Examiner suggests clarifying that the “one or more bits” are part of the address. (“In these embodiments, upon receiving a memory request to be resolved in cache 200, cache controller 204 determines, from information in the request, an entity or sub-entity that made the memory request. Cache controller 204 then determines a number of ways allocated to the entity or sub-entity from partition record 220 and processes the memory request accordingly. For example, when the memory request is a request to write data to cache 200, cache controller 204 can use the number of ways allocated to the entity or sub-entity and other entities or sub-entities to determine a way to which the data is permitted to be written.”)].

Claim 5 (as applied to claim 1 above): Loh et al. disclose, wherein partitioning the cache structure includes partitioning the cache structure based on physical delineations of the cache structure [fig. 2; pars. 0058-0068 – The cache is partitioned according to ways. (“In the first part of the partitioning process, the cache is partitioned into partitions by allocating ways to entities such as CPUs or GPUs, and in the second part, a partition (as established in the first part) is partitioned into sub-partitions by allocating the ways in the partition to sub-entities of the corresponding entity such as functional blocks in or software threads executing on the CPUs or GPUs.”)].

Claim 6 (as applied to claim 5 above): Loh et al. disclose, wherein the physical delineations correspond to at least one of a bank, a way, an index, or a macro [fig. 2; pars. 0058-0068 – The cache is partitioned according to ways. (“In the first part of the partitioning process, the cache is partitioned into partitions by allocating ways to entities such as CPUs or GPUs, and in the second part, a partition (as established in the first part) is partitioned into sub-partitions by allocating the ways in the partition to sub-entities of the corresponding entity such as functional blocks in or software threads executing on the CPUs or GPUs.”)].

Claim 7 (as applied to claim 1 above): Loh et al. disclose, wherein the plurality of cache partitions further includes a cache type corresponding to at least one of an accelerator cache, or multiple levels of a cache hierarchy [fig. 2; pars. 0058-0068 – GPU cache corresponds to the claimed accelerator cache. (“In the first part of the partitioning process, the cache is partitioned into partitions by allocating ways to entities such as CPUs or GPUs, and in the second part, a partition (as established in the first part) is partitioned into sub-partitions by allocating the ways in the partition to sub-entities of the corresponding entity such as functional blocks in or software threads executing on the CPUs or GPUs.”)].

Claim 9 (as applied to claim 1 above): Loh et al. disclose, wherein partitioning the cache structure further comprises dynamically partitioning the cache structure based on a workload of the device [fig. 4; pars. 0094-0095 – Partitioning is dynamically performed. (“The process shown in FIG. 5 starts when partitioning mechanism 216 re-allocates two or more sub-partitions for a partition in the cache based on at least one of a memory access pattern of at least one of one or more sub-entities and a property of at least one of the one or more sub-entities (step 500). Generally, this operation is an update of the previously established allocation (see step 400 in FIG. 4) that is made after the earlier allocation as a dynamic update of the sub-partitioning of cache 200.”)].

Claim(s) 4 is/are rejected under 35 U.S.C.
103 as being unpatentable over Loh et al. (Pub. No. US 2014/0173211) in view of Joshi (Pub. No. US 2012/0005432), and Agarwal et al. (Pub. No. US 2019/0294546) as applied to claim 3 above, and further in view of Luick (U.S. Patent No. 6,230,260).

Claim 4 (as applied to claim 3 above): Loh et al., Joshi, and Agarwal et al. disclose all the limitations above but do not specifically disclose, wherein the one or more bits correspond to a port coupled to the target cache partition. In the same field of endeavor, Luick discloses, wherein the one or more bits correspond to a port coupled to the target cache partition [column 14, lines 58-67 – “Each memory chip is logically partitioned into an L2 cache partition and an instruction history cache partition, e.g., partitions 178, 180 for chip 176. Separate access ports are provided for each of the partitions, with an access port for the L2 cache coupled to address lines 182 from processor 152, which provide a real address for accessing the L2 cache. Lines 182 are also provided to an L2 directory chip 184 that returns a directory entry providing an L2 hit signal to indicate whether or not the access request from the processor hit in the L2 cache.”]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Loh et al., Joshi, and Agarwal et al. to include a multi-port cache, as taught by Luick, in order to allow multiple operations to be performed in the same cycle.

Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Loh et al. (Pub. No. US 2014/0173211) in view of Joshi (Pub. No. US 2012/0005432), and Agarwal et al. (Pub. No. US 2019/0294546) as applied to claim 1 above, and further in view of Hauck et al. (Pub. No. US 2003/0158999).

Claim 8 (as applied to claim 1 above): Loh et al., Joshi, and Agarwal et al. disclose all the limitations above but do not specifically disclose, wherein partitioning the cache structure further comprises partitioning the cache structure at a boot time of the device. In the same field of endeavor, Hauck et al. disclose, wherein partitioning the cache structure further comprises partitioning the cache structure at a boot time of the device [par. 0013 - More particularly, at boot time the controllers partition the cache into two segments, a Read/Write segment and a Copy segment.]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Loh et al., Joshi, and Agarwal et al. to include partitioning the cache at boot time, as taught by Hauck et al., in order to provide a default operating state for the entities that use the cache. The boot time partitioning may be used in conjunction with dynamic partitioning.

Claim(s) 10-13 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Loh et al. (Pub. No. US 2014/0173211) in view of Luick (U.S. Patent No. 6,230,260), Blagodurov et al. (Pub. No. US 2016/0179382), and Agarwal et al. (Pub. No. US 2019/0294546).

Claim 10: Loh et al. disclose a system comprising: at least one physical processor [fig. 1 - Processor]; a physical memory [fig. 1 - Memory]; a cache structure [fig. 1 - Cache]; and a cache controller [fig. 2 – cache controller] configured to: partition the cache structure into a plurality of cache partitions that are sized based on different cache types [fig. 4; pars. 0094-0095 – Partitioning is dynamically performed. (“The process shown in FIG. 5 starts when partitioning mechanism 216 re-allocates two or more sub-partitions for a partition in the cache based on at least one of a memory access pattern of at least one of one or more sub-entities and a property of at least one of the one or more sub-entities (step 500). Generally, this operation is an update of the previously established allocation (see step 400 in FIG. 4) that is made after the earlier allocation as a dynamic update of the sub-partitioning of cache 200.”)]; configure a first partition of the plurality of cache partitions into a client-side cache [fig. 2; par. 0046, 0058-0068 – Cache is partitioned between CPU and GPU cores. The cache is used by the CPU and GPU cores, and as such, corresponds to the claimed client-side cache (e.g. caches used by processors). Examiner notes that “client-side cache” and “memory-side cache” are claimed in alternate form, and as such, only one is required. In other words, the claim requires a first cache type, which may be a client-side cache or a memory-side cache, and a second type which corresponds to a probe filter. (“Partitioning mechanism 216 is a functional block that performs operations for partitioning the memory in the cache for use by one or more entities and/or sub-entities.” … “In the first part of the partitioning process, the cache is partitioned into partitions by allocating ways to entities such as CPUs or GPUs, and in the second part, a partition (as established in the first part) is partitioned into sub-partitions by allocating the ways in the partition to sub-entities of the corresponding entity such as functional blocks in or software threads executing on the CPUs or GPUs.”)]; receive memory requests corresponding to the different cache types [pars. 0036, 0040-0041 – Memory requests are received. (“Within computing device 100, memory requests are preferentially handled in the level of the memory hierarchy that results in the fastest and/or most efficient operation of computing device 100.” … “Cache controller 204 is a functional block that performs various functions for controlling operations in cache 200. For example, cache controller 204 can manage storing cache blocks to, invalidating cache blocks in, and evicting cache blocks from cache 200; can perform lookups for cache blocks in cache 200; can handle coherency operations for cache 200; and/or can respond to requests for cache blocks from cache 200.”)]; and perform, using the target cache partition, the memory request [pars. 0040-0055 – The request is performed. (“In these embodiments, upon receiving a memory request to be resolved in cache 200, cache controller 204 determines, from information in the request, an entity or sub-entity that made the memory request. Cache controller 204 then determines a number of ways allocated to the entity or sub-entity from partition record 220 and processes the memory request accordingly. For example, when the memory request is a request to write data to cache 200, cache controller 204 can use the number of ways allocated to the entity or sub-entity and other entities or sub-entities to determine a way to which the data is permitted to be written.”)].

However, Loh et al. do not specifically disclose, the cache structure including a plurality of ports, each cache partition of the plurality of cache partitions is coupled to at least one of the plurality of ports. In the same field of endeavor, Luick discloses, the cache structure including a plurality of ports, each cache partition of the plurality of cache partitions is coupled to at least one of the plurality of ports [column 14, lines 58-67 – “Each memory chip is logically partitioned into an L2 cache partition and an instruction history cache partition, e.g., partitions 178, 180 for chip 176. Separate access ports are provided for each of the partitions, with an access port for the L2 cache coupled to address lines 182 from processor 152, which provide a real address for accessing the L2 cache. Lines 182 are also provided to an L2 directory chip 184 that returns a directory entry providing an L2 hit signal to indicate whether or not the access request from the processor hit in the L2 cache.”]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Loh et al. to include a multi-port cache, as taught by Luick, in order to allow multiple operations to be performed in the same cycle.

Loh et al. and Luick disclose all the limitations above, but do not specifically disclose, configure a second partition of the plurality of cache partitions into a memory-side cache. In the same field of endeavor, Blagodurov et al. disclose, configure a second partition of the plurality of cache partitions into a memory-side cache [figs. 1-3; pars. 0004, 0014-0016, 0022 – The DRAM may be configured to operate in a combination of hardware cache, page cache, and extended memory modes. Hardware cache corresponds to the claimed processor-side cache and page-cache corresponds to the claimed memory-side cache. (“Processing systems have traditionally employed a memory hierarchy that uses high-speed memory as a hardware cache to store data likely to be retrieved by a processor in the near future, and that uses lower-speed memory as "system memory" to store pages of data loaded by an executing application.” … “An example of a multilevel memory hierarchy is a memory hierarchy that includes stacked DRAM configured as a dedicated hardware cache, DRAM that can be configured to operate as a hardware cache, a page cache, or as extended memory,” … “Examples of memory management modes include hardware cache mode, page cache mode, and extended memory mode.”)]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Loh et al. and Luick to include a memory that can operate as a cache as well as system memory, as taught by Blagodurov et al., in order to improve processing efficiency and flexibility.

Loh et al., Luick, and Blagodurov et al. disclose all the limitations above but do not specifically disclose, the cache controller configured to: forward a memory request of the received memory requests along one of the plurality of ports to a target cache partition based on a target cache type of the memory request indicating the target cache partition [pars. 0040-0055 – Loh et al. disclose performing operations according to the entity or sub-entity that made the request, but do not clearly disclose forwarding the requests. In other words, the request indicates the target partition by indicating the entity or sub-entity. (“In these embodiments, upon receiving a memory request to be resolved in cache 200, cache controller 204 determines, from information in the request, an entity or sub-entity that made the memory request. Cache controller 204 then determines a number of ways allocated to the entity or sub-entity from partition record 220 and processes the memory request accordingly. For example, when the memory request is a request to write data to cache 200, cache controller 204 can use the number of ways allocated to the entity or sub-entity and other entities or sub-entities to determine a way to which the data is permitted to be written.”)]. In the same field of endeavor, Agarwal et al. disclose, the cache controller configured to: forward a memory request of the received memory requests along one of the plurality of ports to a target cache partition based on a target cache type of the memory request indicating the target cache partition [figs. 2-3; pars. 0017-0022 – The cache request is forwarded to the appropriate cache partition (slice). (“If the requested data line does not reside in the level-two cache (i.e., misses the level-two cache) (402), then the level-two cache controller issues a speculative DRAM read request to main memory 114 (408) in parallel with forwarding the memory request to the last-level cache (e.g., a corresponding slice of the level-three cache).”)]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Loh et al., Luick, and Blagodurov et al. to include forwarding requests in parallel with a speculative DRAM read request, as taught by Agarwal et al., in order to improve performance by reducing miss latency of the cache.

Claim 11 (as applied to claim 10 above): Loh et al. disclose, wherein forwarding the memory request is based on an addressing scheme including one or more bits for identifying a port coupled to the target cache partition [pars. 0040-0055 – The appropriate cache ways are selected according to the requesting entity or sub-entity. Examiner suggests clarifying that the “one or more bits” are part of the address. (“In these embodiments, upon receiving a memory request to be resolved in cache 200, cache controller 204 determines, from information in the request, an entity or sub-entity that made the memory request. Cache controller 204 then determines a number of ways allocated to the entity or sub-entity from partition record 220 and processes the memory request accordingly. For example, when the memory request is a request to write data to cache 200, cache controller 204 can use the number of ways allocated to the entity or sub-entity and other entities or sub-entities to determine a way to which the data is permitted to be written.”)].

Claim 12 (as applied to claim 10 above): Loh et al. disclose, wherein partitioning the cache structure includes partitioning the cache structure based on at least one of a bank, a way, an index, or a macro [fig. 2; pars. 0058-0068 – The cache is partitioned according to ways. (“In the first part of the partitioning process, the cache is partitioned into partitions by allocating ways to entities such as CPUs or GPUs, and in the second part, a partition (as established in the first part) is partitioned into sub-partitions by allocating the ways in the partition to sub-entities of the corresponding entity such as functional blocks in or software threads executing on the CPUs or GPUs.”)].

Claim 13 (as applied to claim 10 above): Loh et al. disclose, wherein the plurality of cache partitions further includes a cache type corresponding to at least one of an accelerator cache, multiple levels of a cache hierarchy, or a probe filter [fig. 2; pars. 0058-0068 – GPU cache corresponds to the claimed accelerator cache. (“In the first part of the partitioning process, the cache is partitioned into partitions by allocating ways to entities such as CPUs or GPUs, and in the second part, a partition (as established in the first part) is partitioned into sub-partitions by allocating the ways in the partition to sub-entities of the corresponding entity such as functional blocks in or software threads executing on the CPUs or GPUs.”)].

Claim 15 (as applied to claim 10 above): Loh et al. disclose, wherein partitioning the cache structure further comprises dynamically partitioning the cache structure based on a workload of the system [fig. 4; pars. 0094-0095 – Partitioning is dynamically performed. (“The process shown in FIG. 5 starts when partitioning mechanism 216 re-allocates two or more sub-partitions for a partition in the cache based on at least one of a memory access pattern of at least one of one or more sub-entities and a property of at least one of the one or more sub-entities (step 500). Generally, this operation is an update of the previously established allocation (see step 400 in FIG. 4) that is made after the earlier allocation as a dynamic update of the sub-partitioning of cache 200.”)].
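The mechanism at the center of these rejections (way-based partitioning, plus one or more bits in the request that identify the target cache partition) can be illustrated with a short sketch. This is editorial illustration only, not part of the Office action: the partition sizes, bit positions, and all names below are invented for the example.

```python
# Hypothetical sketch of a way-partitioned cache with bit-indexed forwarding.
# A 16-way cache is split into unequally sized partitions per cache type
# ("sized based on different cache types"), and a 2-bit type field in the
# upper address bits selects the target partition for each memory request.

class PartitionedCache:
    def __init__(self):
        # First-part partitioning: allocate way ranges to cache types.
        self.partitions = {
            "client_side": range(0, 10),    # e.g. CPU/GPU-facing cache
            "probe_filter": range(10, 12),  # directory tracking cached lines
            "memory_side": range(12, 16),
        }
        # Encoding of each cache type in the address's type field.
        self.type_bits = {"client_side": 0, "probe_filter": 1, "memory_side": 2}

    def target_partition(self, address):
        # "One or more bits for identifying the target cache partition":
        # here, bits 47:46 of a 48-bit address carry the cache type.
        type_field = (address >> 46) & 0b11
        for name, bits in self.type_bits.items():
            if bits == type_field:
                return name, self.partitions[name]
        raise ValueError(f"no partition for type field {type_field}")

    def forward(self, address):
        # Forward the request to the partition its type bits indicate;
        # only the ways allocated to that partition may serve it.
        name, ways = self.target_partition(address)
        return name, list(ways)

cache = PartitionedCache()
name, ways = cache.forward((1 << 46) | 0x1000)  # type field 1
```

In this sketch the type field plays the role of the claimed "one or more bits"; in Loh the analogous routing is keyed to the requesting entity rather than to address bits, which is the gap the examiner cites Agarwal to fill.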
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Loh et al. (Pub. No. US 2014/0173211) in view of Luick (U.S. Patent No. 6,230,260), Blagodurov et al. (Pub. No. US 2016/0179382), and Agarwal et al. (Pub. No. US 2019/0294546) as applied to claim 10 above, and further in view of Hauck et al. (Pub. No. US 2003/0158999). Claim 14 (as applied to claim 10 above): Loh et al., Luick, Blagodurov et al., and Agarwal et al. disclose all the limitations above but do not specifically disclose, wherein partitioning the cache structure further comprises partitioning the cache structure at a boot time of the system. In the same of endeavor, Hauck et al. disclose, wherein partitioning the cache structure further comprises partitioning the cache structure at a boot time of the system [par. 0013 - More particularly, at boot time the controllers partition the cache into two segments, a Read/Write segment and a Copy segment.]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Loh et al., Luick, Blagodurov et al., and Agarwal et al. to include partitioning the cache at boot time, as taught by Hauck et al., in order to provide a default operating state for the entities that use the cache. The boot time partitioning may be used in conjunction with dynamic partitioning. Claim(s) 16-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Loh et al. (Pub. No. US 2014/0173211) in view of Hauck et al. (Pub. No. US 2003/0158999), Blagodurov et al. (Pub. No. US 2016/0179382), Joshi (Pub. No. US 2012/0005432), and Agarwal et al. (Pub. No. US 2019/0294546). Claim 16: Loh et al. disclose a method comprising: partitioning a cache structure into a plurality of cache partitions that are sized based on different cache types [fig. 4; pars. 0094-0095 – Partitioning is dynamically performed. (“The process shown in FIG. 
5 starts when partitioning mechanism 216 re-allocates two or more sub-partitions for a partition in the cache based on at least one of a memory access pattern of at least one of one or more sub-entities and a property of at least one of the one or more sub-entities (step 500). Generally, this operation is an update of the previously established allocation (see step 400 in FIG. 4) that is made after the earlier allocation as a dynamic update of the sub partitioning of cache 200.”)]; configuring a first partition of the plurality of cache partitions into a client-side cache [fig. 2; par. 0046, 0058-0068 – Cache is partitioned between CPU and GPU cores. The cache is used by the CPU and GPU cores, and as such, corresponds to the claimed client-side cache (e.g. caches used by processors). Examiner notes that “client-side cache” and “memory-side cache” are claimed in alternate form, and as such, only one is required. In other words, the claim requires a first cache type, which may be a client-side cache or a memory-side cache and a second type which corresponds to probe filter. (“Partitioning mechanism 216 is a functional block that performs operations for partitioning the memory in the cache for use by one or more entities and/or sub-entities.” … “In the first part of the partitioning process, the cache is partitioned into partitions by allocating ways to entities such as CPUs or GPUs, and in the second part, a partition (as established in the first part) is partitioned into sub-partitions by allocating the ways in the partition to sub-entities of the corresponding entity such as functional blocks in or software threads executing on the CPUs or GPUs.”)]; receiving memory requests corresponding to the different cache types [pars. 0036, 0040-0041 – Memory requests are received. 
(“Within computing device 100, memory requests are preferentially handled in the level of the memory hierarchy that results in the fastest and/or most efficient operation of computing device 100.” … “Cache controller 204 is a functional block that performs various functions for controlling operations in cache 200. For example, cache controller 204 can manage storing cache blocks to, invalidating cache blocks in, and evicting cache blocks from cache 200; can perform lookups for cache blocks in cache 200; can handle coherency operations for cache 200; and/or can respond to requests for cache blocks from cache 200.”)]; and performing, using the target cache partition, the memory request [pars. 0040-0055 – The request is performed. (“In these embodiments, upon receiving a memory request to be resolved in cache 200, cache controller 204 determines, from information in the request, an entity or sub-entity that made the memory request. Cache controller 204 then determines a number of ways allocated to the entity or sub-entity from partition record 220 and processes the memory request accordingly. For example, when the memory request is a request to write data to cache 200, cache controller 204 can use the number of ways allocated to the entity or sub-entity and other entities or sub-entities to determine a way to which the data is permitted to be written.”)]. However, Loh et al. do not specifically disclose, partitioning, during a boot time of a system. In the same field of endeavor, Hauck et al. disclose, partitioning, during a boot time of a system [par. 0013 - More particularly, at boot time the controllers partition the cache into two segments, a Read/Write segment and a Copy segment.]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Loh et al. 
to include partitioning the cache at boot time, as taught by Hauck et al., in order to provide a default operating state for the entities that use the cache. The boot time partitioning may be used in conjunction with dynamic partitioning. Loh et al. and Hauck et al. disclose all the limitations above, but do not specifically disclose, configuring a second partition of the plurality of cache partitions into a memory-side cache. In the same field of endeavor, Blagodurov et al. disclose, configuring a second partition of the plurality of cache partitions into a memory-side cache [figs. 1-3; pars. 0004, 0014-0016, 0022 – The DRAM may be configured to operate in a combination of hardware cache, page cache, and extended memory modes. Hardware cache corresponds to the claimed processor-side cache and page-cache corresponds to the claimed memory-side cache. (“Processing systems have traditionally employed a memory hierarchy that uses high-speed memory as a hardware cache to store data likely to be retrieved by a processor in the near future, and that uses lower-speed memory as "system memory" to store pages of data loaded by an executing application.” … “An example of a multilevel memory hierarchy is a memory hierarchy that includes stacked DRAM configured as a dedicated hardware cache, DRAM that can be configured to operate as a hardware cache, a page cache, or as extended memory,” … “Examples of memory management modes include hardware cache mode, page cache mode, and extended memory mode.”)]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Loh et al. and Hauck et al. to include a memory that can operate as a cache as well as system memory, as taught by Blagodurov et al., in order to improve processing efficiency and flexibility. Loh et al., Hauck et al., and Blagodurov et al. 
disclose all the limitations above but do not specifically disclose, configuring a third partition of the plurality of cache partitions into a probe filter. In the same field of endeavor, Joshi discloses, configuring a third partition of the plurality of cache partitions into a probe filter [par. 0026 – “Embodiments of the present invention are directed to filtering broadcast probes used to maintain cache coherency on multiprocessor/multi-node systems. According to an embodiment of the present invention, probe-filter (PF) logic uses a portion of a level-three (L3) cache to store a directory of entries that track cache lines. Each node maintains a separate directory and tracks lines cached anywhere in the multiprocessor/multi-node system for which it is the home node. Based on whether a cache line is present in the directory, the PF logic can either generate a directed probe or handle a data-access request without generating any probes.”]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Loh et al., Hauck et al., and Blagodurov et al. to include probe filters, as taught by Joshi, in order to improve performance by reducing bandwidth overhead and memory latency by preventing unnecessary snooping (checking) of caches for data that is not needed. Loh et al., Hauck et al., Blagodurov et al., and Joshi disclose all the limitations above but do not specifically disclose, forwarding a memory request of the received memory requests from a first cache partition to a target cache partition based on a target cache type of the memory request indicating the target cache partition [pars. 0040-0055 – Loh et al. disclose performing operations according to the entity or sub-entity that made the request, but do not clearly disclose forwarding the requests. In other words, the request indicates the target partition by indicating the entity or sub-entity. 
Examiner suggests clarifying that the first cache partition and the target cache partitions are in the group of the plurality of cache partitions. (“In these embodiments, upon receiving a memory request to be resolved in cache 200, cache controller 204 determines, from information in the request, an entity or sub-entity that made the memory request. Cache controller 204 then determines a number of ways allocated to the entity or sub-entity from partition record 220 and processes the memory request accordingly. For example, when the memory request is a request to write data to cache 200, cache controller 204 can use the number of ways allocated to the entity or sub-entity and other entities or sub-entities to determine a way to which the data is permitted to be written.”)]. In the same field of endeavor, Agarwal et al. disclose, forwarding a memory request of the received memory requests from a first cache partition to a target cache partition based on a target cache type of the memory request indicating the target cache partition [figs. 2-3; pars. 0017-0022 – The cache request is forwarded to the appropriate cache partition (slice). (“If the requested data line does not reside in the level-two cache (i.e., misses the level-two cache) (402), then the level-two cache controller issues a speculative DRAM read request to main memory 114 (408) in parallel with forwarding the memory request to the last-level cache (e.g., a corresponding slice of the level-three cache).”)]. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Loh et al., Hauck et al., Blagodurov et al., and Joshi to include forwarding requests in parallel with a speculative DRAM read request, as taught by Agarwal et al., in order to improve performance by reducing miss latency of the cache. Claim 17 (as applied to claim 16 above): Loh et al. 
disclose, wherein forwarding the memory request is based on an addressing scheme that includes one or more bits for identifying a target cache type [pars. 0040-0055 – The appropriate cache ways are selected according to the requesting entity or sub-entity. Examiner suggests clarifying that the “one or more bits” are part of the address. (“In these embodiments, upon receiving a memory request to be resolved in cache 200, cache controller 204 determines, from information in the request, an entity or sub-entity that made the memory request. Cache controller 204 then determines a number of ways allocated to the entity or sub-entity from partition record 220 and processes the memory request accordingly. For example, when the memory request is a request to write data to cache 200, cache controller 204 can use the number of ways allocated to the entity or sub-entity and other entities or sub-entities to determine a way to which the data is permitted to be written.”)]. Claim 18 (as applied to claim 16 above): Loh et al. disclose, wherein partitioning the cache structure includes partitioning the cache structure based on at least one of a bank, a way, an index, or a macro [fig. 2; pars. 0058-0068 – The cache is partitioned according to ways. (“In the first part of the partitioning process, the cache is partitioned into partitions by allocating ways to entities such as CPUs or GPUs, and in the second part, a partition (as established in the first part) is partitioned into sub-partitions by allocating the ways in the partition to sub-entities of the corresponding entity such as functional blocks in or software threads executing on the CPUs or GPUs.”)]. Claim 19 (as applied to claim 16 above): Loh et al. disclose, wherein the plurality of cache types further includes a cache type corresponding to at least one of a processor cache, an accelerator cache, or multiple levels of a cache hierarchy [fig. 2; pars. 0058-0068 – GPU cache corresponds to the claimed accelerator cache. 
(“In the first part of the partitioning process, the cache is partitioned into partitions by allocating ways to entities such as CPUs or GPUs, and in the second part, a partition (as established in the first part) is partitioned into sub-partitions by allocating the ways in the partition to sub-entities of the corresponding entity such as functional blocks in or software threads executing on the CPUs or GPUs.”)]. Claim 20 (as applied to claim 16 above): Loh et al. disclose, wherein partitioning the cache structure further comprises dynamically partitioning the cache structure based on a workload of the system [fig. 4; pars. 0094-0095 – Partitioning is dynamically performed. (“The process shown in FIG. 5 starts when partitioning mechanism 216 re-allocates two or more sub-partitions for a partition in the cache based on at least one of a memory access pattern of at least one of one or more sub-entities and a property of at least one of the one or more sub-entities (step 500). Generally, this operation is an update of the previously established allocation (see step 400 in FIG. 4) that is made after the earlier allocation as a dynamic update of the sub partitioning of cache 200.”)]. Response to Arguments Applicant's arguments filed February 4, 2026 have been fully considered but they are not persuasive. Applicant’s arguments with respect to the amended subject matter have been addressed in the revised rejections presented above. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to LARRY T MACKALL whose telephone number is (571)270-1172. The examiner can normally be reached Monday - Friday, 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald G Bragdon, can be reached at (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

LARRY T. MACKALL
Primary Examiner
Art Unit 2139
21 February 2026

/LARRY T MACKALL/
Primary Examiner, Art Unit 2139
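The addressing scheme at issue in claims 16 and 17, where one or more bits of the request identify a target cache type and the request is forwarded to the matching partition, can be sketched roughly as follows. The bit layout, type encodings, and function names are invented for illustration only and do not come from the application or any cited reference:

```python
# Hedged sketch of type-bit routing: a hypothetical two-bit field in the
# request address selects the target cache partition, and a request received
# by one partition is forwarded when the decoded target differs. The field
# position and encodings are assumptions made for this example.

CACHE_TYPE_BITS = 0b11  # hypothetical mask for the cache-type field

CACHE_TYPES = {
    0b00: "client-side cache",
    0b01: "memory-side cache",
    0b10: "probe filter",
}

def target_partition(address):
    """Decode the hypothetical cache-type field and name the target partition."""
    return CACHE_TYPES[address & CACHE_TYPE_BITS]

def forward(address, current_partition):
    # Forward the request only when the decoded target differs from the
    # partition that first received it; otherwise service it locally.
    target = target_partition(address)
    return target if target != current_partition else current_partition
```

Under this sketch, a request whose type field decodes to "probe filter" but that arrives at the client-side partition would be forwarded onward, which is the behavior the examiner maps to Agarwal's slice forwarding.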

Prosecution Timeline

Dec 28, 2022
Application Filed
Mar 23, 2024
Non-Final Rejection — §103
Jun 20, 2024
Response Filed
Jul 26, 2024
Final Rejection — §103
Nov 01, 2024
Request for Continued Examination
Nov 08, 2024
Response after Non-Final Action
Nov 09, 2024
Non-Final Rejection — §103
Feb 07, 2025
Response Filed
Mar 22, 2025
Final Rejection — §103
Jun 17, 2025
Request for Continued Examination
Jun 17, 2025
Applicant Interview (Telephonic)
Jun 20, 2025
Response after Non-Final Action
Jun 28, 2025
Examiner Interview Summary
Jul 12, 2025
Non-Final Rejection — §103
Oct 06, 2025
Response Filed
Nov 01, 2025
Final Rejection — §103
Feb 04, 2026
Request for Continued Examination
Feb 14, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591389
MEMORY CONTROLLER AND OPERATION METHOD THEREOF FOR PERFORMING AN INTERLEAVING READ OPERATION
2y 5m to grant · Granted Mar 31, 2026
Patent 12572308
STORAGE DEVICE SUPPORTING REAL-TIME PROCESSING AND METHOD OF OPERATING THE SAME
2y 5m to grant · Granted Mar 10, 2026
Patent 12561065
PROVIDING ENDURANCE TO SOLID STATE DEVICE STORAGE VIA QUERYING AND GARBAGE COLLECTION
2y 5m to grant · Granted Feb 24, 2026
Patent 12555170
TRANSFORMER STATE EVALUATION METHOD BASED ON ECHO STATE NETWORK AND DEEP RESIDUAL NEURAL NETWORK
2y 5m to grant · Granted Feb 17, 2026
Patent 12554400
METHOD OF OPERATING STORAGE DEVICE USING HOST REQUEST BYPASS AND STORAGE DEVICE PERFORMING THE SAME
2y 5m to grant · Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
85%
Grant Probability
93%
With Interview (+8.1%)
2y 9m
Median Time to Grant
High
PTA Risk
Based on 779 resolved cases by this examiner. Grant probability derived from career allow rate.
