Prosecution Insights
Last updated: April 19, 2026
Application No. 18/073,341

STARVATION MITIGATION FOR ASSOCIATIVE CACHE DESIGNS

Non-Final OA (§103, §112)
Filed
Dec 01, 2022
Examiner
YOON, ALEXANDER J
Art Unit
2135
Tech Center
2100 — Computer Architecture & Software
Assignee
Intel Corporation
OA Round
1 (Non-Final)
57%
Grant Probability
Moderate
1-2
OA Rounds
3y 3m
To Grant
74%
With Interview

Examiner Intelligence

Grants 57% of resolved cases
57%
Career Allow Rate
125 granted / 220 resolved
+1.8% vs TC avg
Strong +17% interview lift
+17.2%
Interview Lift
resolved cases with interview
Typical timeline
3y 3m
Avg Prosecution
24 currently pending
Career history
244
Total Applications
across all art units

Statute-Specific Performance

§101
3.3%
-36.7% vs TC avg
§103
62.3%
+22.3% vs TC avg
§102
7.1%
-32.9% vs TC avg
§112
24.0%
-16.0% vs TC avg
Tech Center average estimate shown for comparison • Based on career data from 220 resolved cases

Office Action

§103 §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Action is in response to communications filed 12/01/2022. Claims 1-20 are pending. Claims 1-20 are rejected.

Drawings

The applicant's drawings submitted on 12/01/2022 are acceptable for examination purposes.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3-8, 13-16, and 19-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Claim 3 recites "buffer a memory access command for which a slot is not allocated," whereas claim 2 recites "tracking occurrences of when a cache slot is not allocated for a memory access command." It appears, and is currently interpreted as such, that the recitation of the memory access command is not intended to rely upon the recitation in claim 2; it should therefore be clearly established in claim 3 that the memory access command is a separate and distinct instance, and it is suggested that it be referred to as "a first memory access command for which a slot is not allocated; and subsequently replay the first memory access command." Claims 4-8 depend on claim 3 and do not resolve the issue.
Additionally, all subsequent recitations of "a memory access command" in claims 4-8 should be similarly amended to properly establish antecedent basis to claims 2 and 3 where appropriate. If this determination is made in error, the Examiner requests clarification regarding the limitations and corresponding scope to be established.

Claim 7 recites "while the back pressure mechanism is active," but claim 1, from which claim 7 depends, does not recite any instance of a "back pressure mechanism"; the term therefore lacks proper antecedent basis. The Examiner notes the first recitation of "a back pressure mechanism" appears in claim 5. Additionally, the mechanism in claim 5 is recited as "a backpressure mechanism," and consistency of language should be maintained where appropriate for clarity. Claim 8 depends on claim 7 and does not resolve the above-identified issue. Furthermore, claim 8 also refers to the "back pressure mechanism."

Claim 13 presents the same issue as claim 3 with respect to claim 12 regarding "a memory access command." Claims 14-16 do not resolve the issue. Claim 16 presents the same issue as claim 7 with respect to claim 14 regarding "a back pressure mechanism." Claim 19 presents the same issue as claim 3 with respect to claim 18 regarding "a memory access command." Claim 20 does not resolve the issue.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 9-10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Potter et al. (US 2022/0206946) in view of Ganguli et al. (US 2016/0179560).

Regarding claim 1, Potter discloses, in the italicized portions, an apparatus comprising: a memory controller having one or more channels for accessing memory, an associative cache, (Figure 1, memory controller 132 and set-associative caches 126 and 128 with memory channels interconnecting the components) and logic to, receive memory access commands for accessing the memory; cache translated memory addresses allocated for memory access commands in the associative cache ([0046] IOMMU 232 address translation responsive to memory accesses); detect a level of cache contention for the associative cache crosses a first threshold; and, in response thereto, limit access to the associative cache. Herein Potter discloses the apparatus structure regarding the memory controller interfacing with caches via respective memory channels and address translation circuitry. Potter does not explicitly disclose the limitations regarding detecting the level of cache contention and limiting access to the cache.
Regarding these aspects, Ganguli discloses in Paragraphs [0034] and [0059]:

"[0034] Contention score determination module 206 calculates a contention score as a function of the performance data collected by the data collection module 204. The contention score may include both a contention metric and a contention score level. The contention metric may include aggregated data describing cache misses for all processors 120 of the compute node 102… The contention score may be embodied as a tuple including the cache misses per some reference number of instructions (e.g., per thousand instructions), as well as a contention score level (e.g., high, medium, or low contention).

[0059] According to one embodiment, node agent 207 implements resource controls to limit, account, and isolate resource usage (e.g., CPU, memory, disk I/O, etc.) to manage a CPU Controller and CPUSET Controller subsystem in order to meet the application SLAs by reducing resource contention and increasing predictability in performance…"

Herein Ganguli discloses monitoring and determining a contention score for a cache based on heuristics including the number of cache misses. Using this performance metric, the system includes logic for limiting resource usage in order to reduce resource contention, as additionally noted in Paragraph [0037]: "In a further embodiment, compute service module 222 detects SLA violations based on the contention score while monitoring application performance." In this manner, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the cache contention limiting steps as performed in Ganguli into the memory system structure as recited by Potter in order to reduce resource contention and improve performance predictability (Ganguli [0059]). Potter and Ganguli are analogous art because they are from the same field of endeavor of managing cache memory accesses.
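As an illustrative aside (not part of the Office Action), the contention-score idea Ganguli describes — a miss-rate metric paired with a discrete high/medium/low level that gates cache access — can be sketched in Python. All names and threshold values here are hypothetical, chosen only to make the mechanism concrete:

```python
class ContentionMonitor:
    """Sketch of a Ganguli-style contention score: cache misses per
    thousand instructions (MPKI) mapped to a contention level that
    decides whether access to the cache should be limited."""

    def __init__(self, high=50.0, medium=20.0):
        self.high = high        # MPKI at or above this => "high" contention
        self.medium = medium    # MPKI at or above this => "medium"
        self.misses = 0
        self.instructions = 0

    def record(self, instructions, misses):
        """Accumulate performance counters from a sampling window."""
        self.instructions += instructions
        self.misses += misses

    def score(self):
        """Return a (metric, level) tuple, mirroring the tuple Ganguli
        describes in [0034]."""
        if self.instructions == 0:
            return (0.0, "low")
        mpki = self.misses * 1000 / self.instructions
        if mpki >= self.high:
            return (mpki, "high")
        if mpki >= self.medium:
            return (mpki, "medium")
        return (mpki, "low")

    def limit_access(self):
        """Limit cache access only once contention crosses the first
        (high) threshold, per the claim language."""
        return self.score()[1] == "high"
```

The tuple-of-(metric, level) shape follows the quoted paragraph; the decision to limit access only at the "high" level is one plausible mapping to the claimed "first threshold."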
Regarding claim 9, Potter further discloses the apparatus of claim 1, wherein the apparatus comprises a processor having a System on a Chip (SoC) architecture including the memory controller (Figure 1, System 100 SoC including processors and memory controller). Herein Potter discloses the SoC structure including processing and controller elements.

Regarding claim 10, Potter discloses, in the italicized portions, a method implemented by a memory controller having an associative cache in which translated memory addresses are cached, (Figure 1, memory controller 132 and set-associative caches 126 and 128 with memory channels interconnecting the components) comprising: receiving memory access commands for accessing the memory; caching translated memory addresses allocated for memory access commands in the associative cache ([0046] IOMMU 232 address translation responsive to memory accesses); detecting a level of cache contention crosses a first threshold; and, in response thereto, limiting access to the associative cache. Herein Potter discloses the apparatus structure regarding the memory controller interfacing with caches via respective memory channels and address translation circuitry. Potter does not explicitly disclose the limitations regarding detecting the level of cache contention and limiting access to the cache.

Regarding these aspects, Ganguli discloses in Paragraphs [0034] and [0059] monitoring and determining a contention score for a cache based on heuristics including the number of cache misses. Using this performance metric, the system includes logic for limiting resource usage in order to reduce resource contention, as additionally noted in Paragraph [0037]. Claim 10 is rejected on a similar basis as claim 1.
Regarding claim 17, Potter discloses, in the italicized portions, a system, comprising: one or more memory devices; and a System on a Chip (SoC) including, a plurality of cores; and a memory controller, operatively coupled to the plurality of cores and having an interface coupled to the one or more memory devices, an associative cache, (Figure 1, System 100 SoC including CPUs 122 and 124 and memory controller 132 and set-associative caches 126 and 128 with memory channels interconnecting the components) and logic to, receive memory access commands from software executing on one or more of the plurality of cores; cache translated memory addresses allocated for memory access commands in the associative cache ([0046] IOMMU 232 address translation responsive to memory accesses); detect a level of cache contention for the associative cache crosses a first threshold; and, in response thereto, limit access to the associative cache. Herein Potter discloses the apparatus structure regarding the memory controller interfacing with caches via respective memory channels and address translation circuitry. Potter does not explicitly disclose the limitations regarding detecting the level of cache contention and limiting access to the cache.

Regarding these aspects, Ganguli discloses in Paragraphs [0034] and [0059] monitoring and determining a contention score for a cache based on heuristics including the number of cache misses. Using this performance metric, the system includes logic for limiting resource usage in order to reduce resource contention, as additionally noted in Paragraph [0037]. Claim 17 is rejected on a similar basis as claim 1.

Claims 2, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Potter in view of Ganguli and further in view of Wood et al. (US 2018/0260506).
Regarding claim 2, Potter and Ganguli further disclose the aspects of the limitation, as identified in italics, the apparatus of claim 1, wherein the associative cache comprises a set associative cache (Potter Figure 1 caches) including a plurality of sets having a plurality of slots, and the level of cache contention is detected by tracking occurrences of when a cache slot is not allocated for a memory access command (Ganguli [0034]). Herein Ganguli identifies determining a contention score as an evaluation of the number of misses. While Potter discloses a set-associative cache, Potter and Ganguli do not explicitly address the structure of the set associative cache as including a plurality of sets having a plurality of slots.

Regarding this aspect of the limitation, Wood discloses in Paragraphs [0028-29]:

"[0028] A cache 102 is comprised of a plurality of slots (or entries) wherein each slot is configured to store a cache line (which may be referred to as a cache block or simply a data block) and information or a tag that identifies the memory address(es) associated with the cache line. The cache 102 typically determines if there is a cache hit by comparing the information or tag to all or part of the requested memory address.

[0029] There are generally three different cache structures: fully-associative, direct mapped, and n-way set associative."

Herein Wood discloses the structure of a cache as comprising a plurality of slots. Additionally, n-way set associative caches are also identified. In this manner, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the set-associative cache of Potter to comprise a plurality of sets having a plurality of slots as disclosed by Wood, as a known configuration utilized in the field of technology. Potter, Ganguli, and Wood are analogous art because they are from the same field of endeavor of managing cache memory accesses.
Regarding claim 11, Potter and Ganguli further disclose the aspects of the limitation, as identified in italics, the method of claim 10, wherein the associative cache comprises a set associative cache (Potter Figure 1 caches) including a plurality of sets having a plurality of slots, and the level of cache contention is detected by tracking occurrences of when a cache slot is not allocated for a memory access command (Ganguli [0034]). Herein Ganguli identifies determining a contention score as an evaluation of the number of misses. While Potter discloses a set-associative cache, Potter and Ganguli do not explicitly address the structure of the set associative cache as including a plurality of sets having a plurality of slots. Regarding this aspect of the limitation, Wood discloses in Paragraphs [0028-29] the structure of a cache as comprising a plurality of slots. Additionally, n-way set associative caches are also identified. Claim 11 is rejected on a similar basis as claim 2.

Regarding claim 18, Potter and Ganguli further disclose the aspects of the limitation, as identified in italics, the system of claim 17, wherein the associative cache comprises a set associative cache (Potter Figure 1 caches) including a plurality of sets having a plurality of slots, and the level of cache contention is detected by tracking occurrences of when a cache slot is not allocated for a memory access command (Ganguli [0034]). Herein Ganguli identifies determining a contention score as an evaluation of the number of misses. While Potter discloses a set-associative cache, Potter and Ganguli do not explicitly address the structure of the set associative cache as including a plurality of sets having a plurality of slots. Regarding this aspect of the limitation, Wood discloses in Paragraphs [0028-29] the structure of a cache as comprising a plurality of slots. Additionally, n-way set associative caches are also identified. Claim 18 is rejected on a similar basis as claim 2.
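As an illustrative aside (not part of the Office Action), the cache structure Wood describes — a plurality of sets, each holding a plurality of slots, each slot storing a tag compared against the requested address — can be sketched as follows. The class and method names are hypothetical, and the index/tag split is a simplified modulo scheme for clarity:

```python
class SetAssociativeCache:
    """Sketch of an n-way set-associative cache per Wood [0028-29]:
    sets of slots, where a slot holds a tag identifying the address."""

    def __init__(self, num_sets=4, ways=2):
        self.num_sets = num_sets
        self.ways = ways
        # Each set holds up to `ways` slots; a slot is modeled as a tag.
        self.sets = [[] for _ in range(num_sets)]

    def _index_and_tag(self, addr):
        # Simplified mapping: low bits select the set, high bits are the tag.
        return addr % self.num_sets, addr // self.num_sets

    def lookup(self, addr):
        """Hit when the tag is present in the addressed set."""
        index, tag = self._index_and_tag(addr)
        return tag in self.sets[index]

    def allocate(self, addr):
        """Try to allocate a slot for addr; return False when every slot
        in the set is occupied — the 'slot not allocated' occurrence the
        claims track as a contention signal."""
        index, tag = self._index_and_tag(addr)
        slots = self.sets[index]
        if tag in slots:
            return True
        if len(slots) < self.ways:
            slots.append(tag)
            return True
        return False
```

The `allocate` return value is where this structure connects to claim 2: each `False` is an occurrence of a slot not being allocated for a memory access command.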
Claims 3, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Potter in view of Ganguli and further in view of Wood and still further in view of Irish et al. (US 2007/0260754).

Regarding claim 3, Potter, Ganguli, and Wood do not explicitly disclose the apparatus of claim 2, further comprising logic to: buffer a memory access command for which a slot is not allocated; and subsequently replay the memory access command. Regarding this limitation, Irish discloses in Paragraphs [0025] and [0033]:

"[0025] The cache miss may be detected by the translation processing logic 116 after the I/O virtual memory address of an I/O command is presented to the I/O address translation cache 112. If the I/O virtual memory address of the I/O command is not in the I/O address translation cache 112, then a cache miss will occur. After the cache miss has occurred at step 205 the translation processing logic 116 may place the command into a buffer as seen at step 210. This buffer may consist of several exception command queues 118, which may organize commands according to the I/O device which sent the command.

[0033] Writing to the virtual channel clear register 130 may also indicate to the command re-issue logic 120 that the command waiting in the exception command queue 118 may be ready for I/O address translation. Therefore, at step 372, the command re-issue logic 120 may notify the translation processing logic 116, which in turn reads the command, the command corresponding to the virtual channel written to in step 371, from the exception command queue 118."

Herein Irish discloses the command buffering and reissuing process in response to a cache miss due to a translation not being present in the cache. This process enables the system to address the cache miss by handling the issue within the system and without requiring an additional prompt for the I/O request to be re-issued by the requestor.
In this manner, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide handling functionality as disclosed by Irish in order to improve cache utilization (Irish [0036]). Potter, Ganguli, Wood, and Irish are analogous art because they are from the same field of endeavor of managing cache memory accesses.

Regarding claim 12, Potter, Ganguli, and Wood do not explicitly disclose the method of claim 11, further comprising: buffering a memory access command for which a slot is not allocated; and subsequently replaying the memory access command. Regarding this limitation, Irish discloses in Paragraphs [0025] and [0033] the command buffering and reissuing process in response to a cache miss due to a translation not being present in the cache. This process enables the system to address the cache miss by handling the issue within the system and without requiring an additional prompt for the I/O request to be re-issued by the requestor. Claim 12 is rejected on a similar basis as claim 3.

Regarding claim 19, Potter, Ganguli, and Wood do not explicitly disclose the system of claim 18, wherein the memory controller further comprises logic to: buffer a memory access command for which a slot is not allocated; and subsequently replay the memory access command. Regarding this limitation, Irish discloses in Paragraphs [0025] and [0033] the command buffering and reissuing process in response to a cache miss due to a translation not being present in the cache. This process enables the system to address the cache miss by handling the issue within the system and without requiring an additional prompt for the I/O request to be re-issued by the requestor. Claim 19 is rejected on a similar basis as claim 3.

Claims 4-5, 13-14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Potter in view of Ganguli and further in view of Wood and still further in view of Irish and Sukonik et al. (US 2011/0179240).
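As an illustrative aside (not part of the Office Action), the Irish-style handling cited for claims 3, 12, and 19 — buffer a command whose slot was not allocated, then replay it later without asking the requester to resend — can be sketched as a simple FIFO replay buffer. All names are hypothetical, and `try_allocate` stands in for whatever allocation logic the controller provides:

```python
from collections import deque

class ReplayBuffer:
    """Sketch of buffer-and-replay per Irish [0025]/[0033]: commands that
    miss allocation wait in an exception queue and are later re-issued."""

    def __init__(self):
        self.pending = deque()   # FIFO of commands awaiting replay

    def on_miss(self, command):
        """Buffer a command for which a slot was not allocated."""
        self.pending.append(command)

    def replay(self, try_allocate):
        """Re-issue buffered commands in order; commands that still fail
        allocation go back on the queue for a later attempt."""
        replayed = []
        for _ in range(len(self.pending)):
            command = self.pending.popleft()
            if try_allocate(command):
                replayed.append(command)
            else:
                self.pending.append(command)   # still no slot; retry later
        return replayed
```

The key property, matching the examiner's reading of Irish, is that the requester never re-issues anything: the retry loop lives entirely inside the controller-side buffer.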
Regarding claim 4, Potter, Ganguli, Wood, and Irish do not explicitly disclose the apparatus of claim 3, further comprising logic to: track memory access commands requesting access to the associative cache; increment a count when a cache slot is unavailable to be allocated for a memory access command; and decrement the count when a replayed memory access command is allocated a cache slot. As previously indicated in the rejection of claim 3, Irish discloses handling cache misses by placing commands in an exception command queue which are later reissued.

Regarding the claim limitations, Sukonik discloses in Paragraph [0174]:

"Further, embodiments of the access buffer may be configured to have one or more FIFO queues having a backpressure threshold. If the fill level of a FIFO queue exceeds its backpressure threshold, the access buffer is configured to communicate this backpressure to the processor. Thereby the processor is configured to stop further access requests of the same type, e.g. read or write, or read or write with priority, to the access buffer until the access buffer has communicated to the processor that fill level of the FIFO queue has returned to a level below the threshold."

Herein Sukonik discloses implementing a fill level, otherwise analogous to a counter, on a per-queue basis wherein the number of commands in any one queue can be tracked. In view of Irish, wherein a reissue queue is maintained, as well as Sukonik Paragraph [0169], wherein a FIFO queue is maintained for misses, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to maintain fill levels or counters to track the number of requests currently pending due to misses. Potter, Ganguli, Wood, Irish, and Sukonik are analogous art because they are from the same field of endeavor of managing cache memory accesses.
Regarding claim 5, Sukonik further discloses the apparatus of claim 4, wherein the count is maintained by a counter, further comprising logic to: receive host commands to access memory from a host coupled to the apparatus or integrated in the apparatus; detect when the count crosses the first threshold; and in response thereto, implement a backpressure mechanism that temporarily blocks new host commands from accessing the set associative cache ([0174]). Herein Sukonik discloses determining the fill level, determined to be analogous to a counter, of a FIFO queue and when to implement backpressure when the fill level exceeds a threshold. Upon communicating the backpressure, no additional requests are issued to access memory.

Regarding claim 13, Potter, Ganguli, Wood, and Irish do not explicitly disclose the method of claim 12, further comprising: tracking memory access commands requesting access to the associative cache; incrementing a count when a cache slot is unavailable to be allocated for a memory access command; and decrementing the count when a replayed memory access command is allocated a cache slot. Regarding the claim limitations, Sukonik discloses in Paragraph [0174] implementing a fill level, otherwise analogous to a counter, on a per-queue basis wherein the number of commands in any one queue can be tracked. Claim 13 is rejected on a similar basis as claim 4.

Regarding claim 14, Sukonik further discloses the method of claim 13, wherein the count is maintained by a counter, further comprising: receiving host commands to access memory from a host; detecting when the count crosses the first threshold; and in response thereto, implementing a backpressure mechanism that temporarily blocks new host commands from accessing the set associative cache ([0174]). Claim 14 is rejected on a similar basis as claim 5.
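As an illustrative aside (not part of the Office Action), the counter-and-backpressure behavior of claims 4-5 as the examiner maps it onto Sukonik's fill level can be sketched as below: increment when no slot is available, decrement when a replayed command is finally allocated, and block new host commands while the count exceeds a threshold. The class name and threshold value are hypothetical:

```python
class ContentionCounter:
    """Sketch of the claim 4/5 counter: a fill-level-style count of
    commands pending due to misses, gating host access via backpressure."""

    def __init__(self, threshold=4):
        self.count = 0
        self.threshold = threshold
        self.backpressure = False

    def slot_unavailable(self):
        """A cache slot could not be allocated: increment the count and
        raise backpressure once it crosses the first threshold."""
        self.count += 1
        if self.count > self.threshold:
            self.backpressure = True

    def replayed_command_allocated(self):
        """A replayed command got a slot: decrement, and release
        backpressure once the count is back at or below the threshold
        (no hysteresis in this simple variant)."""
        self.count = max(0, self.count - 1)
        if self.count <= self.threshold:
            self.backpressure = False

    def accepts_host_commands(self):
        return not self.backpressure
```

This single-threshold variant matches claim 5; claim 6's separate high/low thresholds add hysteresis on top of it.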
Regarding claim 20, Potter, Ganguli, Wood, and Irish do not explicitly disclose the system of claim 19, wherein the memory controller further comprises logic to: track memory access commands requesting access to the associative cache; increment a count when a cache slot is unavailable to be allocated for a memory access command; and decrement the count when a replayed memory access command is allocated a cache slot. Regarding the claim limitations, Sukonik discloses in Paragraph [0174] implementing a fill level, otherwise analogous to a counter, on a per-queue basis wherein the number of commands in any one queue can be tracked. Claim 20 is rejected on a similar basis as claim 4.

Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Potter in view of Ganguli and further in view of Wood and still further in view of Irish and Sukonik and Mital et al. (US 2013/0125127).

Regarding claim 6, Potter, Ganguli, Wood, Irish, and Sukonik do not explicitly disclose the apparatus of claim 5, wherein the first threshold is a high threshold, further comprising logic to: following the count crossing the high threshold, detect the count has crossed a low threshold; and in response thereto, disable the backpressure mechanism. Regarding this limitation, Mital discloses in Paragraph [0047]: "When the task queue depth becomes greater than a threshold, a backpressure 'on' message might be generated on backpressure ring 540. When the cache depth goes below the threshold, a backpressure 'off' message might be generated on backpressure ring 540.
Hysteresis might be built into backpressure message generation and release such that a backpressure on message is not generated until the queue depth is more than the threshold by some amount and a backpressure off message is not generated until the queue depth falls below a certain amount below the threshold." Herein Mital discloses use of a backpressure threshold to determine when to initiate backpressure for command processing, wherein this threshold is determined to be analogous to the first threshold and a high threshold. Subsequent to the threshold being exceeded, once it is determined that the count of commands in the queue falls below the threshold by a certain amount, determined to be analogous to a low threshold, the backpressure is then turned off. In this manner, the number of commands in the queue can be managed to control the number of pending commands. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement thresholds to control a backpressure mechanism to avoid dropping commands. Potter, Ganguli, Wood, Irish, Sukonik, and Mital are analogous art because they are from the same field of endeavor of managing cache memory accesses.

Regarding claim 15, Potter, Ganguli, Wood, Irish, and Sukonik do not explicitly disclose the method of claim 14, wherein the first threshold is a high threshold, further comprising: following the count crossing the high threshold, detecting the count has crossed a low threshold; and in response thereto, disabling the backpressure mechanism. Regarding this limitation, Mital discloses in Paragraph [0047] use of a backpressure threshold to determine when to initiate backpressure for command processing, wherein this threshold is determined to be analogous to the first threshold and a high threshold.
Subsequent to the threshold being exceeded, once it is determined that the count of commands in the queue falls below the threshold by a certain amount, determined to be analogous to a low threshold, the backpressure is then turned off. Claim 15 is rejected on a similar basis as claim 6.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Potter in view of Ganguli and further in view of Karaje et al. (US 2016/0092108).

Regarding claim 7, Potter and Ganguli do not explicitly disclose the apparatus of claim 1, further comprising a command pipeline and command arbitration logic to, while the back pressure mechanism is active, block host commands from advancing to the command pipeline while enabling media management commands to advance to the command pipeline. Regarding this limitation, Karaje discloses in Paragraph [0065]:

"In other tests, it was observed that foreground workloads may affect background tasks, where the foreground workloads overloaded the system and starve the background tasks from executing, which in the long term resulted in the performance degradation of the array. In one specific example, a foreground workload overloads the system and causes resource starvation for GC. Due to the CPU starvation, GC cannot compact segments in timely fashion resulting in a low amount of free segments in the system. As the system gets low on free segments, a backpressure mechanism is triggered to slow down the incoming IOs so as to give breathing space to GC."

Herein Karaje discloses that a backpressure mechanism can be triggered in order to enable media management operations, including garbage collection (GC), to be performed while the backpressure is enabled to disable host IO. In this manner, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide this functionality in order to allow background tasks to be performed as discussed by Karaje.
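As an illustrative aside (not part of the Office Action), the hysteresis Mital describes for claims 6 and 15 — backpressure asserted above a high threshold and released only below a separate low threshold — can be sketched in a few lines. Names and values are hypothetical:

```python
class BackpressureHysteresis:
    """Sketch of Mital [0047]-style hysteresis: turn backpressure on
    above the high threshold, off only below the low threshold, so the
    mechanism does not toggle rapidly near a single threshold."""

    def __init__(self, high=8, low=3):
        assert low < high
        self.high = high
        self.low = low
        self.active = False

    def update(self, depth):
        """Re-evaluate backpressure for the current queue depth."""
        if not self.active and depth > self.high:
            self.active = True    # count crossed the high threshold
        elif self.active and depth < self.low:
            self.active = False   # count fell below the low threshold
        return self.active
```

The band between `low` and `high` is the hysteresis region: whichever state was last entered persists there, which is exactly the "by a certain amount" behavior quoted from Mital.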
Potter, Ganguli, and Karaje are analogous art because they are from the same field of endeavor of managing cache memory accesses.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Potter in view of Ganguli and further in view of Karaje and still further in view of Simionescu et al. (US 2023/0176731).

Regarding claim 8, Potter, Ganguli, and Karaje do not explicitly disclose the apparatus of claim 7, wherein the command arbitration logic further is configured to allow replayed commands to reenter the command pipeline while the back pressure mechanism is active. Regarding this limitation, Simionescu discloses in Paragraphs [0039] and [0056-57]:

"[0039] The commands can be stored by type of command in respective buffers for each type of command. The management component 112 can, in response to a buffer being full, backpressure a channel of the memory system 104 via which the bursts of commands are received. As used herein, 'backpressuring' a channel refers to preventing receipt of commands and/or execution of commands from the channel.

[0056] A read queue of a stripe queue can include one or more sub-queues: a sub-queue for compute express link (CXL) commands and/or a sub-queue for retry commands. A sub-queue for retry commands can have higher priority to be dequeued than a sub-queue for CXL commands.

[0057] Each channel of a slice, including a parity channel, can have a respective command queue scheduler. The command queue schedulers enable execution of commands in command queues according to one or more scheduling policies. The scheduling policies can be based on memory access latencies, rules, and/or timings, for example. Non-limiting examples of scheduling policies follow. Retry queues can have a highest priority."

Herein Simionescu discloses that backpressure can be applied to individual queues and that particular queues may be processed in a priority ordering.
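As an illustrative aside (not part of the Office Action), the arbitration behavior the rejection assembles from Karaje (host I/O held back so background media management can run) and Simionescu (retry queues given highest priority) can be sketched as a small priority arbiter. All names and the three-queue layout are hypothetical:

```python
from collections import deque

class CommandArbiter:
    """Sketch of claim 7/8-style arbitration: under backpressure, host
    commands are blocked from the pipeline while media-management (e.g.
    GC) and replayed commands still advance, replays first."""

    def __init__(self):
        self.host = deque()      # host commands: blocked under backpressure
        self.media = deque()     # media-management commands (e.g. GC)
        self.replay = deque()    # replayed commands: highest priority
        self.backpressure = False

    def submit(self, queue_name, command):
        getattr(self, queue_name).append(command)

    def next_command(self):
        """Pick the next command allowed to enter the command pipeline,
        or None when only blocked host commands remain."""
        if self.replay:                          # retry queue first
            return self.replay.popleft()
        if self.media:                           # background work proceeds
            return self.media.popleft()
        if self.host and not self.backpressure:  # host only when no backpressure
            return self.host.popleft()
        return None
```

The ordering encodes the combination the examiner proposes: Simionescu's retry-first policy layered onto Karaje's host-blocking backpressure.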
Specifically, it is indicated that retry queues have the highest priority for execution; therefore, while backpressure may be applied to a particular queue, thereby preventing commands from being received or executed, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that the commands in the retry queue are processed for execution, in similar fashion to the background tasks as performed in Karaje, while backpressure is being applied, in order to balance operation execution between operation types. Potter, Ganguli, Karaje, and Simionescu are analogous art because they are from the same field of endeavor of managing cache memory accesses.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Potter in view of Ganguli and further in view of Wood and still further in view of Irish and Sukonik and Karaje.

Regarding claim 16, Potter, Ganguli, Wood, Irish, and Sukonik do not explicitly disclose the method of claim 14, further comprising: while the back pressure mechanism is active, blocking host commands from advancing to a command pipeline while enabling media management commands to advance to the command pipeline. Regarding this limitation, Karaje discloses in Paragraph [0065] that a backpressure mechanism can be triggered in order to enable media management operations, including garbage collection (GC), to be performed while the backpressure is enabled to disable host IO. Claim 16 is rejected on a similar basis as claim 7 in view of managing background operations to be performed while a backpressure mechanism is engaged.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Aue et al. (US 2014/0173200) – Paragraph [0005], wherein cache slot structure is discussed.
Creed (US 2021/0365379) – Paragraph [0059], wherein maintaining counters for writes pending is disclosed.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER J YOON, whose telephone number is (408) 918-7629. The examiner can normally be reached Monday-Friday, 8am-3pm ET. The examiner’s email is alexander.yoon2@uspto.gov.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared Rutz, can be reached at 571-272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALEXANDER YOON/
Examiner, Art Unit 2135

/JARED I RUTZ/
Supervisory Patent Examiner, Art Unit 2135

Prosecution Timeline

Dec 01, 2022
Application Filed
Jan 12, 2023
Response after Non-Final Action
Jan 20, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602164
Data Storage Device and Method for Thermal Management Through Command Selection
2y 5m to grant Granted Apr 14, 2026
Patent 12596641
Hardware And Software Hybrid Configuration Of DRAM Channel Interleaving Management
2y 5m to grant Granted Apr 07, 2026
Patent 12591371
MEMORY SUB-SYSTEM FOR MEMORY CELL IN-FIELD TOUCH-UP
2y 5m to grant Granted Mar 31, 2026
Patent 12578866
Data processing method for improving continuity of data corresponding to continuous logical addresses as well as avoiding excessively consuming service life of memory blocks and the associated data storage device
2y 5m to grant Granted Mar 17, 2026
Patent 12572426
DATA BACKUP METHOD, DATA BACKUP DEVICE, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 10, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
57%
Grant Probability
74%
With Interview (+17.2%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 220 resolved cases by this examiner. Grant probability derived from career allow rate.
