Prosecution Insights
Last updated: April 19, 2026
Application No. 19/011,310

DRAM CACHE CLEANING
Non-Final OA: §103, §112

Filed: Jan 06, 2025
Examiner: BLUST, JASON W
Art Unit: 2132
Tech Center: 2100 — Computer Architecture & Software
Assignee: Rambus Inc.
OA Round: 1 (Non-Final)

Grant Probability: 79% (Favorable)
OA Rounds: 1-2
To Grant: 2y 3m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 79% (above average): 220 granted / 277 resolved, +24.4% vs TC avg
Interview Lift: +16.2% (strong lift among resolved cases with interview)
Avg Prosecution: 2y 3m (typical timeline; 24 currently pending)
Total Applications: 301 (career history, across all art units)

Statute-Specific Performance

§101: 6.6% (-33.4% vs TC avg)
§102: 23.8% (-16.2% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 277 resolved cases

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential structural cooperative relationships of elements, such omission amounting to a gap between the necessary structural connections. See MPEP § 2172.01. The omitted structural cooperative relationships are: the claim recites "a command/address interface to receive a first access command," but it does not state what the interface receives this command from. The claim also recites "a cache result interface to transmit… a first status indicator and a second status indicator," but does not state what the interface transmits this result to.

Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential structural cooperative relationships of elements, such omission amounting to a gap between the necessary structural connections. See MPEP § 2172.01.
The omitted structural cooperative relationships are: the claim recites "the cache result interface is to… transmit the second tag value," but does not state what the interface transmits the second tag value to.

Claims 4-5 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential structural cooperative relationships of elements, such omission amounting to a gap between the necessary structural connections. See MPEP § 2172.01. The omitted structural cooperative relationships are: the claim recites "a data interface is to… transmit the cache line," but does not state what the interface transmits the cache line to. It is also unclear whether the recited "a data interface" is part of the claimed memory component or external to it.

Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential structural cooperative relationships of elements, such omission amounting to a gap between the necessary structural connections. See MPEP § 2172.01. The omitted structural cooperative relationships are: the claim recites "a cache result interface to… transmit a first status indicator," but does not state what the interface transmits this status indicator to.

Claim 15 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential structural cooperative relationships of elements, such omission amounting to a gap between the necessary structural connections. See MPEP § 2172.01. The omitted structural cooperative relationships are: the claim recites "a command/address interface to receive…," but does not state what the interface receives from.
The claim also recites "a cache result interface to transmit… a first status indicator and a second status indicator," but does not state what the interface transmits this result to. The claim further recites "a data interface to… transmit," but does not state what the interface transmits to.

Claims 18-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential structural cooperative relationships of elements, such omission amounting to a gap between the necessary structural connections. See MPEP § 2172.01. The omitted structural cooperative relationships are: the claim recites "the cache result interface to transmit…," but does not state what the interface transmits to.

Claims 2, 6-7, 9-14, 16-17, and 20 are rejected for the same reasons as above, as they fail to cure the deficiencies of the claims upon which they depend.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Miller (US 2020/0004686, as listed on the IDS dated 1/6/2025).

In regards to claim 1, Miller teaches:

A memory component, comprising: (fig. 1, memory 120)

a first dynamic random access memory (DRAM) array to store a plurality of cache line information entries, each cache line information entry comprising tag information and cache line status information; (fig. 2, ¶23: cache memory 224 may be implemented on DRAM devices. ¶27 teaches tag data (tag information) can be stored on a dedicated storage device (i.e., a first DRAM array). ¶42 teaches that a dirty indicator can be included (cache line status information).)

a second DRAM array to store a plurality of cache lines; (fig. 2, ¶23: cache memory 224 may be implemented on DRAM devices which store sets of cache lines (i.e., a second DRAM array).)

a command/address interface to receive a first access command (fig. 2: memory controller 122 receives read request 202 over interconnect 117 (i.e., via an interface)) in association with a first tag query value and a first address, indicating a first access to the first DRAM array, the first access to the first DRAM array to access a first cache line information entry associated with the first address and including a first tag value and first cache line status information and to access a second cache line information entry associated with a second address and including a second tag value and second cache line status information; (fig. 2, ¶24: the access command specifies an address and includes tag data (i.e., first tag query value). Fig. 3, ¶27: tag data 310 can include tags for multiple cache lines (i.e., first/second cache line information entries with tag/status information associated with a first/second address, respectively). Fig. 5, ¶33 teaches the received tag (first tag query value) can be compared with each of the tags stored in tag data 310 (i.e., first/second cache line information is being accessed).)

Miller may not explicitly teach a cache result interface to transmit, based on the first access command, a first status indicator and a second status indicator, the first status indicator including a first hit/miss indication indicating whether the first tag query value matches the first tag value, the second status indicator including a first modification indicator indicating whether a cache line associated with the second cache line information entry is in a modified state.

However, Miller does disclose in ¶32-36 and fig. 5 that the tag logic 130 (cache result interface) can transmit a cache miss indication to the controller in case of a miss. In case of a hit on one of the ways (i.e., cache lines) in a set of tags, the tag logic sends the tag data 310, which contains the status information for all the cache lines (i.e., a second status indicator for indicating a modified state of a cache line), to the controller. In addition, the tag logic 130 is aware of which cache line (i.e., way) of the set has been hit.

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Miller such that the tag logic 130 sends hit/miss and modified indications for each cache line entry in the accessed tag data 310 to the cache controller 112 in response to the first access command, so that the controller 112 does not have to redo these comparisons in order to send an appropriate follow-up read/write request to the memory controller 122 to access the desired cache line. One of ordinary skill in the art would have been able to make this modification and achieve predictable results. The motivation for making this modification is that it improves operation of the cache system.

Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Miller (US 2020/0004686, as listed on the IDS dated 1/6/2025) in view of Kaburaki (US 2017/0235681).

In regards to claims 6-7, Miller further teaches wherein the first access to the first DRAM array is to access a third cache line information entry associated with a third address and including a third tag value and third cache line status information. (fig. 2, ¶24: the access command specifies an address and includes tag data. Fig. 3, ¶27: tag data 310 can include tags for multiple cache lines (i.e., cache line information entries with tag/status information associated with their respective addresses). Fig. 5, ¶33 teaches the received tag can be compared with each of the tags stored in tag data 310 (i.e., the cache line information is being accessed).)

Miller does not expressly teach the first cache line information entry including first recency of access information, the second cache line information entry including second recency of access information, the third cache line information entry including third recency of access information, and the second cache line information entry being selected as a basis for the first modification indicator based on the second recency of access information and the third recency of access information; nor wherein the second cache line information entry is selected as the basis for the first modification indicator based on the second recency of access information and the third recency of access information indicating that a cache line associated with the third cache line information entry has been accessed more recently than the cache line associated with the second cache line information entry.

Kaburaki teaches in ¶297 that tag entries can contain LRU timestamps (i.e., recency of access information) and that the controller can read and use these LRU timestamp entries to preferentially select which cache lines are replaced.
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have incorporated the teachings of Kaburaki such that LRU timestamp information can be associated with each of the cache lines in their associated tag entries. Furthermore, as this information can be used to preferentially select tag entries for replacement, one of ordinary skill in the art could selectively transmit only certain information on the basis of the status information contained in the associated cache tag entries. One of ordinary skill in the art would have had the required skill and knowledge to make this modification and achieve predictable results. This would allow the tag logic to reduce the amount of data sent to the cache controller 112, reducing the bandwidth requirements of the bus 117 (see fig. 5), and as such improving the cache system.

Claims 2-5, 8-12, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Miller (US 2020/0004686, as listed on the IDS dated 1/6/2025) in view of Kang (US 2024/0256452).

In regards to claims 2-5, Miller may not explicitly teach:

wherein the second cache line information entry is selected from the plurality of cache line information entries as a basis for the first modification indicator based on the second cache line status information indicating that the cache line associated with the second cache line information entry is in the modified state;

or, based on the second cache line status information indicating that the cache line associated with the second cache line information entry is in the modified state, transmit the second tag value;

or, based on the second cache line status information indicating that the cache line associated with the second cache line information entry is in the modified state, transmit the cache line associated with the second cache line information entry;

or, based on the first tag query value not matching the first tag value and the second cache line status information indicating that the cache line associated with the second cache line information entry is in the modified state, transmit the cache line associated with the second cache line information entry.

Miller does teach that all of this information is available to the tag logic 130: ¶32-36 teaches the hit/miss for the tag entries and sending the tag data to the cache controller 112 (see fig. 5), and marking or indicating the dirty flag for a cache line is described in ¶40-43. Fig. 2, step 218 also teaches that cache line data (i.e., a cache line associated with a cache line information entry) can be transferred to the cache controller 112.

Kang teaches in fig. 10 and ¶87-94 that a cache can write back dirty data during idle times in response to a refresh command: it will search for dirty lines in the cache, read the cache line data, write it back to memory, and then mark the cached line as clean.

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date to further modify the system of Miller such that the tag logic 130 selectively transmits only certain information on the basis of whether the associated cache line status entries matched the required tag value, along with the status of the associated cache line indicating whether it was already marked as being in a dirty state, as Kang explicitly searches for dirty (modified) cache lines in order to write them back. One of ordinary skill in the art would have had the required skill and knowledge to make this modification and achieve predictable results. This would allow the tag logic to reduce the amount of data sent to the cache controller 112, reducing the bandwidth requirements of the bus 117 (see fig. 5), and as such improving the cache system.
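The selective-transmission rationale above turns on tag logic that reports, for every way in the accessed set, both a hit/miss indication and a modified (dirty) indicator, so the cache controller need not redo the comparisons. A minimal sketch of that behavior follows; the names (TagEntry, probe_set) and data layout are hypothetical illustrations, not drawn from Miller or Kang.

```python
from dataclasses import dataclass

@dataclass
class TagEntry:
    tag: int      # tag bits stored for this way
    valid: bool   # entry currently holds a cached line
    dirty: bool   # line has been modified since it was filled

def probe_set(ways, query_tag):
    """Compare a query tag against every way in a set and return,
    per way, a (hit, dirty) pair: the status indicators the tag
    logic would send back so the controller need not re-compare."""
    return [(w.valid and w.tag == query_tag, w.valid and w.dirty)
            for w in ways]

ways = [TagEntry(0x2A, True, False), TagEntry(0x3B, True, True)]
print(probe_set(ways, 0x2A))  # [(True, False), (False, True)]
```

A hit on way 0 and a dirty indication on way 1 would tell the controller both where the requested line is and which line may need cleaning, in a single response.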
In regards to claim 8, Miller teaches:

a first dynamic random access memory (DRAM) array to store a plurality of cache line information entries, each cache line information entry comprising tag information and cache line status information; (fig. 2, ¶23: cache memory 224 may be implemented on DRAM devices. ¶27 teaches tag data (tag information) can be stored on a dedicated storage device (i.e., a first DRAM array). ¶42 teaches that a dirty indicator can be included (cache line status information).)

a second DRAM array to store a plurality of cache lines; (fig. 2, ¶23: cache memory 224 may be implemented on DRAM devices which store sets of cache lines (i.e., a second DRAM array).)

a command/address interface to receive, from a controller, a first access command (fig. 2: memory controller 122 receives read request 202 from cache controller 112 over interconnect 117 (i.e., via an interface)) in association with a first address, indicating a first access to the first DRAM array, the first access to the first DRAM array to access a first cache line information entry associated with the first address and including first cache line status information; (fig. 2, ¶24: the access command specifies an address and includes tag data. Fig. 3, ¶27: tag data 310 can include tags for multiple cache lines. Fig. 5, ¶33 teaches the received tag can be compared with each of the tags stored in tag data 310 (i.e., the cache line information is being accessed).)

a data interface to communicate cache lines with the controller; (fig. 2: memory controller 122 (i.e., over a "data interface") can return cache line data 218 to the cache controller 112.)

a cache result interface to, based on the first access command, transmit a first status indicator; (¶32-36 and fig. 5: the tag logic 130 (cache result interface) can transmit a cache miss indication (first status indicator) to the controller in case of a miss. In case of a hit on one of the ways (i.e., cache lines) in a set of tags, the tag logic sends the tag data 310, which contains the status information for all the cache lines (first status indicator).)

Miller may not explicitly teach the data interface to, based on the first access command and the first cache line information entry indicating that a first cache line stored by the second DRAM array that is associated with the first address is in a modified state, transmit, to the controller, the first cache line.

Kang, in fig. 10 and ¶87-94, teaches that a cache can write back dirty data during idle times in response to a refresh command: it will search for dirty lines in the cache, read the cache line data, write it back to memory, and then mark the cached line as clean.

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to take this teaching from Kang and modify Miller such that, when a refresh command is received, only modified lines are returned to the controller so that the data can be written back. One of ordinary skill in the art would have had the required skill and knowledge to make this modification and achieve predictable results. The motivation for such a modification is that it is unnecessary to send the data to the controller unless it is modified when the request is being sent in order to write back dirty data, therefore saving bandwidth and power.
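Kang's refresh-triggered cleaning flow, as characterized above (search for dirty lines, write each one back, mark it clean, and surface only the modified lines), can be sketched as follows. This is an illustrative model under assumed data structures; clean_on_refresh and the dict layout are hypothetical, not taken from the reference.

```python
def clean_on_refresh(cache_lines, backing_store):
    """Kang-style cleaning sketch: on a refresh command, scan for
    dirty lines, write each one back, and mark it clean. Only the
    modified lines generate traffic back toward memory.
    `cache_lines` maps address -> {"data": ..., "dirty": bool}."""
    written_back = []
    for addr, line in cache_lines.items():
        if line["dirty"]:
            backing_store[addr] = line["data"]  # write-back
            line["dirty"] = False               # mark clean
            written_back.append(addr)
    return written_back

cache = {0x100: {"data": b"A", "dirty": True},
         0x140: {"data": b"B", "dirty": False}}
mem = {}
print(clean_on_refresh(cache, mem))  # [256] -- only the dirty line moves
```

Because clean lines are skipped entirely, a second pass immediately afterward transfers nothing, which matches the bandwidth/power motivation stated in the rejection.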
In regards to claim 9, Kang further teaches: based on the first access command and the first cache line information entry indicating that the first cache line stored by the second DRAM array that is associated with the first address is in the modified state, setting the first cache line information entry to indicate that the first cache line is not in the modified state. (fig. 10 and ¶87-94 teach that once dirty data is written back, it can then be marked as clean.)

In regards to claim 10, Kang further teaches wherein setting the first cache line information entry to indicate that the first cache line is not in the modified state is further based on the memory component being in a first mode. (fig. 10 and ¶87-94 teach that this can occur when the system is in a power saving mode (first mode).)

In regards to claim 11, Miller further teaches wherein the first status indicator indicates that the first cache line stored by the second DRAM array that is associated with the first address is in the modified state. (¶40-43 teaches that the cache line information contains a dirty flag indicating the data has been modified for the associated cache line.)

In regards to claim 12, Miller further teaches wherein the first access to the first DRAM array is to further access a second cache line information entry associated with the first address and including second cache line status information, and wherein the memory component selects the first cache line to be transmitted to the controller based on the second cache line information entry indicating that a second cache line that is stored by the second DRAM array, that is associated with the first address, and that is associated with the second cache line information entry, is not in the modified state. (fig. 2, ¶24: the access command specifies an address and includes tag data. Fig. 3, ¶27: tag data 310 can include tags for multiple cache lines. Fig. 5, ¶33-35 teaches the received tag can be compared with each of the tags stored in tag data 310 (i.e., the cache line information is being accessed) and a matching cache line can be returned.)

In regards to claim 15, Miller teaches:

a first dynamic random access memory (DRAM) array to store a plurality of cache line information entries, each cache line information entry comprising tag information and cache line status information; (fig. 2, ¶23: cache memory 224 may be implemented on DRAM devices. ¶27 teaches tag data (tag information) can be stored on a dedicated storage device (i.e., a first DRAM array). ¶42 teaches that a dirty indicator can be included (cache line status information).)

a second DRAM array to store a plurality of cache lines; (fig. 2, ¶23: cache memory 224 may be implemented on DRAM devices which store sets of cache lines (i.e., a second DRAM array).)

a command/address interface to receive a first access command (fig. 2: memory controller 122 receives read request 202 from cache controller 112 over interconnect 117 (i.e., via an interface)) in association with a first tag query value and a first address, indicating a first access to the first DRAM array, the first access to the first DRAM array to access a first cache line information entry associated with the first address and including a first tag value and first cache line status information and to access a second cache line information entry associated with a second address and including a second tag value and second cache line status information; (fig. 2, ¶24: the access command specifies an address and includes tag data (i.e., first tag query value). Fig. 3, ¶27: tag data 310 can include tags for multiple cache lines (i.e., first/second cache line information entries with tag/status information associated with a first/second address, respectively). Fig. 5, ¶33 teaches the received tag (first tag query value) can be compared with each of the tags stored in tag data 310 (i.e., first/second cache line information is being accessed).)

the command/address interface to receive a second access command, in association with the first address, indicating a second access to the first DRAM array, the second access to the first DRAM array to access the second cache line information entry that indicates the cache line associated with the second cache line information entry is in the modified state; (fig. 2, ¶24: the access command specifies an address and includes tag data. Fig. 3, ¶27: tag data 310 can include tags for multiple cache lines. Fig. 5, ¶33 teaches the received tag can be compared with each of the tags stored in tag data 310. ¶42 teaches that a dirty indicator can be included (cache line status information).)

a data interface to… transmit the cache line associated with the second cache line information entry. (fig. 2: memory controller 122 (i.e., over a "data interface") can return cache line data 218 to the cache controller 112.)

Miller may not explicitly teach a cache result interface to transmit, based on the first access command, a first status indicator and a second status indicator, the first status indicator including a first hit/miss indication indicating whether the first tag query value matches the first tag value, the second status indicator including a first modification indicator indicating whether a cache line associated with the second cache line information entry, and stored by the second DRAM array, is in a modified state; and a data interface to, based on the second access command and the second cache line information entry indicating that the cache line associated with the second cache line information entry is in the modified state, transmit the cache line associated with the second cache line information entry.

However, Miller does disclose in ¶32-36 and fig. 5 that the tag logic 130 (cache result interface) can transmit a cache miss indication to the controller in case of a miss. In case of a hit on one of the ways (i.e., cache lines) in a set of tags, the tag logic sends the tag data 310, which contains the status information for all the cache lines (i.e., a second status indicator for indicating a modified state of a cache line), to the controller. In addition, the tag logic 130 is aware of which cache line (i.e., way) of the set has been hit.

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the system of Miller such that the tag logic 130 sends hit/miss and modified indications for each cache line entry in the accessed tag data 310 to the cache controller 112 in response to the first access command, so that the controller 112 does not have to redo these comparisons in order to send an appropriate follow-up read/write request to the memory controller 122 to access the desired cache line.
One of ordinary skill in the art would have been able to make this modification and achieve predictable results. The motivation for making this modification is that it improves operation of the cache system.

Even with this modification, Miller still may not render obvious the claim limitation: based on the second access command and the second cache line information entry indicating that the cache line associated with the second cache line information entry is in the modified state, transmit the cache line associated with the second cache line information entry.

Kang, in fig. 10 and ¶87-94, teaches that a cache can write back dirty data during idle times in response to a refresh command: it will search for dirty lines in the cache, read the cache line data, write it back to memory, and then mark the cached line as clean.

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to take this teaching from Kang and modify Miller such that, when a refresh command is received, only modified lines are returned to the controller so that the data can be written back. One of ordinary skill in the art would have had the required skill and knowledge to make this modification and achieve predictable results. The motivation for such a modification is that it is unnecessary to send the data to the controller unless it is modified when the request is being sent in order to write back dirty data, therefore saving bandwidth and power.

In regards to claim 16, Miller further teaches wherein the second access command is further in association with a second tag query value and the second tag value matches the second tag query value. (fig. 2, ¶24: the access command specifies an address and includes tag data. Fig. 3, ¶27: tag data 310 can include tags for multiple cache lines. Fig. 5, ¶33 teaches the received tag can be compared with each of the tags stored in tag data 310.)

In regards to claim 17, Kang further teaches wherein, based on the second access command, the second cache line information entry is to be set to indicate the cache line associated with the second cache line information entry is not in the modified state. (fig. 10 and ¶87-94 teach that once dirty data is written back, it can then be marked as clean.)

In regards to claim 18, Miller further teaches: based on the second access command, the cache result interface is to transmit the second tag value. (fig. 2: the tag logic 130 (result interface) can return the tag data 210 (second tag value) based on a command 202.)

In regards to claim 19, Miller and Kang make obvious: based on the second access command and the second cache line information entry indicating the cache line associated with the second cache line information entry is in the modified state, the cache result interface is to transmit the second tag value. (Miller, fig. 2: the tag logic 130 (result interface) can return the tag data 210 (second tag value) based on a command 202. Kang, ¶91 teaches that the controller can check whether dirty lines exist.)

In regards to claim 20, Kang further teaches wherein the first access command instructs the memory component to perform a refresh operation. (fig. 10 (see S510/S520) and ¶87-94 teach that the received command can be a refresh command.)

Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Miller (US 2020/0004686, as listed on the IDS dated 1/6/2025) in view of Kang (US 2024/0256452) and Kaburaki (US 2017/0235681).
In regards to claims 13-14, Miller further teaches wherein the first access to the first DRAM array is to further access a second cache line information entry associated with the first address and including second cache line status information. (fig. 2, ¶24: the access command specifies an address and includes tag data. Fig. 3, ¶27: tag data 310 can include tags for multiple cache lines. Fig. 5, ¶33 teaches the received tag can be compared with each of the tags stored in tag data 310 (i.e., the cache line information is being accessed).)

Miller and Kang may not explicitly teach the first cache line information entry including first recency of access information and the second cache line information entry including second recency of access information, wherein the first cache line is selected for transmission via the data interface to the controller based on the first recency of access information and the second recency of access information; nor wherein the first cache line is selected for transmission via the data interface to the controller based on the first recency of access information and the second recency of access information indicating that a second cache line that is stored by the second DRAM array, that is associated with the first address, and that is associated with the second cache line information entry, has been accessed more recently than the first cache line.

Kaburaki teaches in ¶297 that tag entries can contain LRU timestamps (i.e., recency of access information) and that the controller can read and use these LRU timestamp entries to preferentially select which cache lines are replaced.
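The LRU-timestamp selection attributed to Kaburaki can be sketched in a few lines: the entry whose timestamp is oldest is preferentially chosen. select_victim and the (way, timestamp) representation are assumptions for illustration only, not taken from the reference.

```python
def select_victim(entries):
    """Kaburaki-style sketch: each tag entry carries an LRU
    timestamp (recency of access information); the entry with the
    oldest timestamp is preferentially selected for replacement or
    write-back. `entries` is a list of (way_index, lru_timestamp)."""
    return min(entries, key=lambda e: e[1])[0]

entries = [(0, 120), (1, 45), (2, 310)]  # way 1 is least recently used
print(select_victim(entries))  # 1
```

Transmitting only the entry chosen this way, rather than the whole set, mirrors the bandwidth-reduction rationale given for the combination.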
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have incorporated the teachings of Kaburaki such that LRU timestamp information can be associated with each of the cache lines in their associated tag entries. Furthermore, as this information can be used to preferentially select tag entries for replacement, one of ordinary skill in the art could selectively transmit only certain information on the basis of the status of information contained in the associated cache tag entries. One of ordinary skill in the art would have had the required skill and knowledge to make this modification and achieve predictable results. This would allow the tag logic to reduce the amount of data sent to the cache controller 112, reducing the bandwidth requirements of the bus 117 (see Fig. 5), and as such improving the cache system.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Bayer (US 2012/0210069) teaches querying cache tags and determining matches.
Shao (US 2025/0139005) teaches cache tag querying and cache design methods.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON W BLUST, whose telephone number is (571) 272-6302. The examiner can normally be reached 12-8:30 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hosain Alam, can be reached at (571) 272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JASON W BLUST/
Primary Examiner, Art Unit 2132

Prosecution Timeline

Jan 06, 2025
Application Filed
Mar 06, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596485
HOST DEVICE GENERATING BLOCK MAP INFORMATION, METHOD OF OPERATING THE SAME, AND METHOD OF OPERATING ELECTRONIC DEVICE INCLUDING THE SAME
2y 5m to grant Granted Apr 07, 2026
Patent 12554417
DISTRIBUTED DATA STORAGE CONTROL METHOD, READABLE MEDIUM, AND ELECTRONIC DEVICE
2y 5m to grant Granted Feb 17, 2026
Patent 12535954
STORAGE DEVICE AND OPERATING METHOD THEREOF
2y 5m to grant Granted Jan 27, 2026
Patent 12530120
Maximizing Data Migration Bandwidth
2y 5m to grant Granted Jan 20, 2026
Patent 12530118
DATA PROCESSING METHOD AND RELATED DEVICE
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
96%
With Interview (+16.2%)
2y 3m
Median Time to Grant
Low
PTA Risk
Based on 277 resolved cases by this examiner. Grant probability derived from career allow rate.
