Prosecution Insights
Last updated: April 19, 2026
Application No. 18/612,349

OPTIMIZED SELECTIVE SCANNING OF OVERLAP-TABLE IN STORAGE MEMORIES FOR SEQUENTIAL DATA

Non-Final OA §103
Filed: Mar 21, 2024
Examiner: KORTMAN, CURTIS JAMES
Art Unit: 2139
Tech Center: 2100 — Computer Architecture & Software
Assignee: Sandisk Technologies Inc.
OA Round: 3 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% — above average (170 granted / 216 resolved; +23.7% vs TC avg)
Interview Lift: +23.6% among resolved cases with interview
Typical timeline: 2y 4m average prosecution; 18 currently pending
Career history: 234 total applications across all art units

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 43.7% (+3.7% vs TC avg)
§102: 6.9% (-33.1% vs TC avg)
§112: 30.8% (-9.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 216 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submissions filed on 03 February 2026 and 14 January 2026 have been entered.

CLAIM INTERPRETATION

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Accordingly, “means to store data” as claimed in claim 18 is interpreted to cover any of the volatile/non-volatile memories discussed in the application, such as a host memory buffer, a DRAM, a RAID array, flash, NAND, PCM, MRAM, etc., as disclosed in [0031-0033] [0037-0038].

Claim Objections

Claims 1-6 and 8-17 are objected to because of the following informalities:

Claim 1 recites “a single cache occupation bitmap” and then names “a sequential read workload based cache occupation bitmap” and “a random read workload based cache occupation bitmap”. However, the claim later refers to “the cache occupation bitmap”, which could logically refer to any of the aforementioned bitmaps. Consulting the specification, it is evident to the Examiner that “the cache occupation bitmap” was most likely meant to refer back to “a single cache occupation bitmap”. The Examiner therefore requests that references to “the cache occupation bitmap” be amended to recite “the single cache occupation bitmap” for clarity and consistency in the claims.

Claim 2 recites, “wherein the controller is configured to maintain a first cache occupation bitmap for sequential read commands and a second cache occupation bitmap for random read commands”, which is grammatically confusing as to whether it conflicts with claim 1, which requires “a single” cache occupation bitmap, as claim 2 could logically require up to two more. Consulting the specification, it is evident to the Examiner that claim 2 is meant to indicate that the other of the sequential or random read workload based bitmaps is also maintained.
The Examiner therefore suggests amending claim 2 to clearly reflect this fact by reciting something to the effect of “wherein the controller is configured to maintain an additional cache occupation bitmap that is the other of the sequential read workload based cache occupation bitmap or the random read workload based cache occupation bitmap relative to the single cache occupation bitmap”.

Claim 11 is objected to for reasons analogous to claim 1 for similarly reciting “a single cache occupation bitmap,” “a sequential read workload based cache occupation bitmap,” “a random read workload based cache occupation bitmap,” and “the cache occupation bitmap” and should accordingly be amended analogously.

Claim 17 is objected to for reasons analogous to claim 2 and should accordingly be amended to recite, “wherein the controller is configured to maintain an additional cache occupation bitmap” to reflect that “multiple cache occupation bitmaps” in claim 17 is not in conflict with “single cache occupation bitmap” in claim 11.

Claims 2-6, 9-10 and 11-17 are objected to for failing to correct the deficiencies of a base claim from which they depend. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-5 and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. US 2022/0350744 A1 (Urrinkala) in view of US Patent Application Publication No. US 2018/0232310 A1 (Chang).

Regarding claim 1: Urrinkala teaches a data storage device, comprising: a memory device; and a controller coupled to the memory device, wherein the controller is configured to: receive a read command; detect whether the read command is a sequential read command or a random read command; and respond to the received read command by delivering data associated with the read command from the memory device (by disclosing that a memory system (210) includes a storage controller (230) coupled to a memory device (240) and receives a read command (415) from the host system (205), as seen in [Fig. 3], and determines whether the command corresponds to a sequential access pattern (yes -> sequential / no -> random) [Fig. 4] [0077-0083] [0084 – the method of Fig. 4 is performed by the controller]. This is in order to prefetch sequential data into a cache (275) if sequential reads are detected (425) [Fig. 4]).
Urrinkala does not explicitly disclose, but Chang teaches: maintain a single cache occupation bitmap that is either a sequential read workload based cache occupation bitmap or a random read workload based cache occupation bitmap; search the cache occupation bitmap; determine whether a full cache scan is necessary, wherein determining whether a full cache scan is necessary comprises determining whether one or more bits of the cache occupation bitmap has a value of 0 or 1, the one or more bits corresponding to the read command; either perform the full cache scan when it is determined to be necessary, or omit the full cache scan when it is determined to be unnecessary, wherein the full cache scan is omitted when the one or more bits of the cache occupation bitmap has a value of 0; respond to the received read command by delivering data associated with the read command from the memory device to a host device after performing the full cache scan or omitting the full cache scan (by teaching that cache occupancy can be tested with a bloom filter (505) (maintain a single cache occupation bitmap (in combination with the teachings of Urrinkala, where the read access is either a sequential or random workload type, such that when the access is sequential the bloom filter may be considered a “sequential read workload based cache occupation bitmap” and when the access is random the bloom filter may be considered a “random read workload based cache occupation bitmap” – it may be considered as such because there are no teachings in Chang that restrict the bloom filter’s ability to function under either detected workload type of Urrinkala, and Applicant’s specification does not disclose a structural difference between a random read workload or sequential read workload based cache occupation bitmap; the labels instead are understood to be the intended use of the bitmap, which the bitmaps taught by Chang are suitable for)) [see Fig. 4].
The bloom filter can return a negative/miss result when the hash functions map to bits where one or more of the bits are set to ‘0’ [Fig. 4]. The negative/miss result indicates that there is a DRAM cache miss (509), a search of the metadata of the DRAM cache is omitted (i.e., steps 507, 508, 510, and 511 are all skipped), and flash is accessed directly (512) (deliver data… after omitting the full cache scan) [Fig. 5] [0057]. The bloom filter can also return a positive/hit result, which is a probabilistic result that indicates the cache line is likely present in the cache (because false positives are possible) [Fig. 5] [0058]. The positive/hit result occurs when the hash functions of the bloom filter map to bits that are all set to ‘1’ [Fig. 4]. If the probabilistic result is that the cache line is likely present, then the metadata of the DRAM cache is accessed (507) and searched for a DRAM cache tag match (i.e., deliver the data… after performing the full scan) (508). The bits that are checked in the bloom filter correspond to the indexes determined by hashing the address tag of the read request, which then determine the entries to read in the bloom filter (the one or more bits corresponding to the read command) [0039] [0055] [Fig. 4] [Fig. 5]).
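The bloom-filter membership test described above (all mapped bits ‘1’ means a probable hit requiring the full cache scan; any bit ‘0’ means a definite miss and the scan can be omitted) can be sketched as below. This is an illustrative sketch only: the class name, bit-array size, and SHA-256-based hash construction are assumptions for readability, not taken from Chang or the application.

```python
import hashlib

class CacheOccupancyBloomFilter:
    """Illustrative bloom filter for a cache-occupancy test (hypothetical
    names/parameters; a storage controller would use cheap hardware hashes)."""

    def __init__(self, num_bits: int = 1024, num_hashes: int = 3):
        self.bits = [0] * num_bits
        self.num_hashes = num_hashes

    def _bit_indexes(self, tag: int):
        # Derive k bit positions from the address tag.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{tag}:{i}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % len(self.bits)

    def insert(self, tag: int) -> None:
        # Record a cached line: set every mapped bit to '1'.
        for idx in self._bit_indexes(tag):
            self.bits[idx] = 1

    def probably_cached(self, tag: int) -> bool:
        # All mapped bits '1' -> probable hit (full cache scan needed);
        # any mapped bit '0' -> definite miss (full cache scan omitted).
        return all(self.bits[idx] for idx in self._bit_indexes(tag))
```

A false positive (all bits ‘1’ by coincidence) only costs an unnecessary scan; a ‘0’ bit is a guaranteed miss, which is what makes skipping the scan safe.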
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the use of the cache as taught by Urrinkala to include maintaining a bloom filter and searching the bloom filter on cache accesses, such as from a read request, to determine whether the cache contains the requested data or not according to the bloom filter (i.e., whether entries corresponding to the read request address as determined by a hash are ‘1’ or ‘0’, where all entries ‘0’ indicates a cache miss), and then to skip accessing cache metadata to determine whether there is a cache hit when the entries are ‘0’ and directly access the persistent storage for the requested read data, as taught by Chang. One of ordinary skill in the art would have been motivated to make this modification because access to the cache to determine whether or not an entry is cached can be skipped and the latency of the data access to the flash memory is improved with the use of the bloom filter, as taught by Chang in [0057].

Regarding claim 3: The data storage device of claim 1 is made obvious by Urrinkala in view of Chang (Urrinkala-Chang). Urrinkala does not explicitly disclose, but Chang teaches wherein the cache occupation bitmap comprises a plurality of bitmap bits, wherein each bit of the bitmap bits corresponds to a predetermined amount of a storage address range (see [Fig. 4]. Furthermore, as each bit corresponds to a hash function that hashed the DRAM tag of the address of the cache line to a specific bit of the bloom filter, each bit corresponds to a cache line. Furthermore, each cache line corresponds to 2K 1-byte pieces of data in the cache line, and therefore has 2048 offset numbers that correspond to data in the cache line, and accordingly corresponds to 2 KB of data in the storage address range [0043] [0055-0059]).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the use of the cache as taught by Urrinkala to include maintaining a bloom filter and searching the bloom filter on cache accesses, such as from a read request, to determine whether the cache contains the requested cache line or not according to the bloom filter (i.e., whether entries corresponding to the read request address as determined by a hash are ‘1’ or ‘0’, where all entries ‘0’ indicates a cache miss), and then to skip accessing cache metadata to determine whether there is a cache hit when the entries are ‘0’ and directly access the persistent storage for the requested read data, as taught by Chang. One of ordinary skill in the art would have been motivated to make this modification because access to the cache to determine whether or not an entry is cached can be skipped and the latency of the data access to the flash memory is improved, as taught by Chang in [0057].

Regarding claim 4: The data storage device of claim 3 is made obvious by Urrinkala-Chang. Urrinkala-Chang further make obvious wherein the predetermined amount is equal for each bitmap bit (through the analysis performed for claim 3, as each bitmap bit represents a 2 KB cache line).

Regarding claim 5: The data storage device of claim 3 is made obvious by Urrinkala-Chang. Urrinkala-Chang further make obvious wherein the predetermined amount comprises a plurality of address ranges that are logically sequential (through the analysis performed for claim 3, as each bitmap bit represents the plurality of address ranges that represent 2 KB of the physical address space and are logically sequential, but multiple different cache lines may map to the same bits and accordingly each bit may represent multiple cache lines, depending on which cache line DRAM tags hash to the same location in the bloom filter [Chang, 0050-0059]).
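The per-bit address-range mapping discussed for claims 3-5 (each bitmap bit covering one fixed-size, logically sequential 2 KB range) reduces to integer division, roughly as follows. The constant and function names are illustrative assumptions, not from the claims or the cited references.

```python
BYTES_PER_BIT = 2048  # assumed: each bitmap bit covers one 2 KB cache-line-sized range

def bitmap_bit_for_address(byte_address: int) -> int:
    """Return the index of the cache-occupation-bitmap bit covering the
    2 KB range that contains byte_address (illustrative sketch only)."""
    return byte_address // BYTES_PER_BIT
```

Under this mapping, addresses 0-2047 share bit 0 and addresses 2048-4095 share bit 1, so every bit covers an equal, logically sequential slice of the storage address range.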
Regarding claim 8: Claim 7 is cancelled, but the data storage device of claim 1 is made obvious by Urrinkala-Chang. Urrinkala-Chang further make obvious wherein upon determining that one or more bits of the cache occupation bitmap has a value of 1, a full cache scan is performed to search for overlaps (through the analysis performed for claim 1, as Chang teaches that the positive/hit result occurs when the hash functions of the bloom filter map to bits that are all set to ‘1’ [Fig. 4]. If the probabilistic result is that the cache line is likely present, then the metadata of the DRAM cache is accessed (507) and searched for a DRAM cache tag match (i.e., full scan) (508)).

Regarding claim 9: The data storage device of claim 8 is made obvious by Urrinkala-Chang. Urrinkala does not explicitly disclose, but Chang teaches wherein data for the read command is read from the memory device after performing the full cache scan (by teaching that the positive/hit result occurs when the hash functions of the bloom filter map to bits that are all set to ‘1’ [Fig. 4]. If the probabilistic result is that the cache line is likely present, then the metadata of the DRAM cache is accessed (507) and searched for a DRAM cache tag match (i.e., full scan) (508). The search in the cache metadata can result in a cache miss, as it is possible the bloom filter causes false positives (508 -> No) (509 -> 512), at which point the read data is accessed from flash memory (512) [Fig. 5]).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the use of the cache and access to the memory device as taught by Urrinkala to include checking a bloom filter on a read access, on a positive determination, checking whether the metadata of the cache includes the requested address, and upon determining that the cache does not include the data associated with the requested address, accessing the backing memory device for the requested read data, as taught by Chang. One of ordinary skill in the art would have been motivated to make this modification because access to the cache to determine whether or not an entry is cached can sometimes be skipped and the latency of the data access to the flash memory is improved with the use of the bloom filter, as taught by Chang in [0057].

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Urrinkala-Chang in view of US Patent Application Publication No. US 2022/0129385 A1 (Karve).

Regarding claim 2: The data storage device of claim 1 is made obvious by Urrinkala-Chang. Urrinkala teaches that there are sequential and random read commands received and performed by the storage device (by disclosing that a memory system (210) includes a storage controller (230) coupled to a memory device (240) and receives a read command (415) from the host system (205), as seen in [Fig. 3], and determines whether the command corresponds to a sequential access pattern (yes -> sequential / no -> random) [Fig. 4] [0077-0083] [0084 – the method of Fig. 4 is performed by the controller]. This is in order to prefetch sequential data into a cache (275) if sequential reads are detected (425) [Fig. 4]).
Urrinkala does not explicitly disclose, but Karve teaches wherein the controller is configured to maintain a first cache occupation bitmap for sequential read commands and a second cache occupation bitmap for random read commands (by teaching that there may be multiple bloom filters and each may be checked for each cache access [0028]. Using multiple bloom filters may reduce the chances of false negatives [0035]. Furthermore, the two filters may be flushed at the same interval, but offset from each other, to reduce false positives and reduce the effects of saturation [0024] [0054]. Moreover, there is nothing about the structure of the bloom filters that would prevent them from being used for both the sequential and random read operations taught by Urrinkala and, accordingly, they are interpreted as “for sequential read commands” and “for random read commands”).

It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the cache with the bloom filter of Urrinkala in view of Chang to include multiple bloom filters that are flushed on offset intervals, with both bloom filters checked for a cache access, as taught by Karve. One of ordinary skill in the art would have been motivated to make this modification because the use of the multiple bloom filters reduces the chances of false positives and flushing them on offset intervals prevents saturation, as taught by Karve in [0024] [0054].

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Urrinkala-Chang in further view of US Patent No. US 9,798,672 B1 (Svendsen).
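The offset-flush scheme the claim 2 rejection attributes to Karve (two bloom filters, both set on insert and both consulted per access, cleared on staggered intervals so one always retains recent history) might be sketched roughly as below. The structure, sizes, and flush policy here are assumptions for illustration, not Karve's actual design.

```python
class DualBloomFilters:
    """Hypothetical sketch of two bloom filters flushed on offset intervals
    (names and parameters illustrative only)."""

    def __init__(self, num_bits: int = 256, flush_interval: int = 1000):
        self.filters = [[0] * num_bits, [0] * num_bits]
        self.flush_interval = flush_interval
        self.ops = 0

    def _index(self, tag: int, salt: int) -> int:
        return hash((tag, salt)) % len(self.filters[0])

    def insert(self, tag: int) -> None:
        # Set the mapped bits in BOTH filters.
        self._maybe_flush()
        for f in self.filters:
            for salt in range(2):
                f[self._index(tag, salt)] = 1

    def probably_cached(self, tag: int) -> bool:
        # A hit in either filter counts, limiting false negatives
        # immediately after one filter has been flushed.
        self._maybe_flush()
        return any(all(f[self._index(tag, s)] for s in range(2))
                   for f in self.filters)

    def _maybe_flush(self) -> None:
        # Clear the two filters half an interval apart to avoid saturation.
        self.ops += 1
        if self.ops % self.flush_interval == 0:
            self.filters[0] = [0] * len(self.filters[0])
        if self.ops % self.flush_interval == self.flush_interval // 2:
            self.filters[1] = [0] * len(self.filters[1])
```

The staggered clearing is the design point: a single filter only accumulates ‘1’ bits until it saturates and everything looks like a probable hit, whereas alternating flushes keep at least one filter usefully sparse.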
Regarding claim 6: Urrinkala-Chang does not explicitly disclose, but Svendsen teaches wherein the plurality of logically sequential address ranges are randomly distributed within the storage address range (by teaching that the hash functions used to hash the cache line address to determine whether there is a possible hit or miss in the cache provide a uniform random distribution of the hashed addresses into the bloom filter array [Col 7: line 38 – Col 8: line 3]). It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the hash functions used to hash the DRAM tags to determine positions in the bloom filter as taught by Chang to be uniform random hash functions as taught by Svendsen, because it would have only required the simple substitution of one known element for another and the results would have been predictable. For example, Chang teaches using a hash function to index into a bloom filter array, but does not teach that the hash function yields a uniform random distribution. However, Svendsen teaches a hash function used for the same purpose that yields a uniform random distribution. Accordingly, one of ordinary skill in the art could have substituted the hash function taught by Svendsen for the one taught by Chang, and the results would have been predictable.

Allowable Subject Matter

The subject matter of claims 10-18 and 20 was searched for in the prior art, but not found. Accordingly, claims 10-18 and 20 are not rejected with prior art. However, claims 10-17 are objected to for reciting minor informalities. Accordingly, claims 18 and 20 are allowed, while claims 10-17 are objected to.
Regarding claim 10: Claim 10 recites, and the prior art does not teach: “wherein the controller is configured to switch between maintaining the sequential read workload based cache occupation bitmap and the random read workload based cache occupation bitmap based on detecting either a random read workload or a sequential read workload, wherein the controller is configured to quiesce and flush the single cache occupation bitmap when switching between cache occupation bitmaps”. Furthermore, the prior art does not teach a reason to modify the prior art to arrive at the claimed invention, because the prior art does not teach switching between and flushing bitmaps after quiescing them to switch between sequential and random bitmaps. Accordingly, the claim is not rejected with prior art.

Regarding claim 11: Claim 11 recites, “A data storage device, comprising: a memory device; and a controller coupled to the memory device, wherein the controller is configured to: determine that a full cache scan is necessary for a first read command; determine that a full cache scan is not necessary for a second read command; determine that a full cache scan is necessary for a third read command, wherein the first read command, the second read command, and the third read command are received in order; and perform a full cache scan for the first read command and the third read command, wherein the full cache scan for the first read command occurs after determining that the full cache scan is not necessary for the second read command.” In this case the Examiner is interpreting “in order” to require that the first read command is received first, the second read command is received after the first read command, and the third read command is received after the second read command. Durbhakula makes obvious determining to switch between performing a full and partial cache scan (based on switching between fully associative and set associative caching) for different commands received in order [see Fig. 8 (800) – where the capacity level is switched after an interval of commands]. However, Durbhakula does not teach performing the full cache scan for the first command after determining that the full cache scan is not necessary for the second read command, because the determination to make a cache set associative, and therefore to not perform a full cache scan, is not performed until after a full cache scan (on a fully associative cache) is performed for the first command. Moreover, neither Chang nor Karve teaches performing a full cache scan for the first and third read commands received in order after determining that the full cache scan is not necessary for the second read command as claimed. Furthermore, the prior art does not teach a reason why the prior art should be modified to arrive at the claimed invention. Accordingly, the claim is not rejected with prior art.

Regarding claims 12-17: Claims 12-17 are not rejected with prior art at least by virtue of their dependence from claim 11.

Regarding claim 18: Claim 18 recites, and the prior art does not teach, “switch between maintaining the first cache occupation bitmap and the second cache occupation bitmap based on detecting either a random read workload or a sequential read workload”, because the prior art does not teach switching between sequential and random read workload based cache occupation bitmaps based on detecting either a random or sequential workload. Accordingly, the claim is not rejected with prior art.

Regarding claim 20: Claim 20 is not rejected with prior art at least by virtue of its dependence from claim 18.

Response to Arguments/Amendments

In response to the amendments, a new objection has been made to claims 1-6 and 8-17. In response to the amendments, the previous 35 USC § 112(d) rejection has been withdrawn. In response to the amendments, the previous 35 USC § 112(b) rejections have been withdrawn. In response to the amendments, the 35 USC § 101 rejection has been withdrawn.
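The allowable switching limitation recited above for claims 10 and 18 (maintain one workload-specific bitmap at a time and, on detecting a workload change, quiesce and flush it before switching) can be illustrated roughly as follows. Every name and structural choice here is hypothetical; the application's specification, not this sketch, defines the claimed design.

```python
class WorkloadAwareBitmapController:
    """Hypothetical sketch of the claimed quiesce-flush-switch behavior.
    Not the applicant's implementation; illustrative only."""

    def __init__(self, num_bits: int = 1024):
        self.num_bits = num_bits
        self.workload = "sequential"            # currently detected workload
        self.bitmap = [0] * num_bits            # the single cache occupation bitmap

    def on_detected_workload(self, workload: str) -> None:
        # Switch bitmaps only when the detected workload type changes.
        if workload != self.workload:
            self._quiesce()                     # stop in-flight bitmap updates
            self.bitmap = [0] * self.num_bits   # flush before switching over
            self.workload = workload

    def _quiesce(self) -> None:
        # Placeholder: a real controller would drain pending updates here.
        pass
```

The flush matters because occupancy bits recorded under one workload's access pattern would otherwise produce spurious "probable hit" results (and needless full scans) under the other.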
Applicant’s arguments with respect to the 35 USC § 103 rejection of claims 1-6 and 8-9 have been considered but are not persuasive. Applicant argues against Chang, individually, that it does not disclose a sequential read workload based cache occupation bitmap or a random read workload based cache occupation bitmap. However, the Examiner finds that the combination of Urrinkala teaching detecting a random read workload or a sequential read workload and then using the cache occupation bitmap taught by Chang results in the use of a cache occupation bitmap for the detected random read workload or sequential read workload, such that the combination may be thought of as a “sequential read workload based cache occupation bitmap” or a “random read workload based cache occupation bitmap”, because the cache occupation bitmap taught by Chang is usable with either the detected sequential or random read workload as detected by Urrinkala. Accordingly, Applicant’s argument against the references individually is not persuasive, and the rejection is updated to reflect the claim amendments for claims 1-6 and 8-9.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Patent No. US 11,675,709 B2 (D’Eliseo) teaches a bitmap that is used to cache address translations that do not require a read to memory (cache occupancy bitmap). However, in every case that the bitmap is read, the cache is checked first [Col 3: lines 57-67]. Accordingly, although a bitmap may be read to determine an address without an access to non-volatile memory, it does not replace a cache access and instead occurs after a cache access has already occurred and no cached entry was found, and therefore does not teach “determine that a full cache scan is necessary… [by] finding a value of 1 for a bit of the bitmap”, because the full cache scan has already occurred when the bitmap is checked.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CURTIS JAMES KORTMAN, whose telephone number is (303) 297-4404. The examiner can normally be reached Monday through Friday, 7:30 AM through 4:00 PM MT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon, can be reached at (571) 272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CURTIS JAMES KORTMAN/
Primary Examiner, Art Unit 2139

Prosecution Timeline

Mar 21, 2024
Application Filed
Aug 18, 2025
Non-Final Rejection — §103
Oct 20, 2025
Interview Requested
Oct 28, 2025
Applicant Interview (Telephonic)
Oct 28, 2025
Examiner Interview Summary
Nov 19, 2025
Response Filed
Dec 03, 2025
Final Rejection — §103
Dec 29, 2025
Interview Requested
Jan 02, 2026
Interview Requested
Jan 14, 2026
Response after Non-Final Action
Feb 03, 2026
Request for Continued Examination
Feb 05, 2026
Response after Non-Final Action
Feb 10, 2026
Non-Final Rejection — §103
Apr 01, 2026
Interview Requested
Apr 09, 2026
Examiner Interview Summary
Apr 09, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12504911
METHOD AND SYSTEM OF STANDARDS-BASED AUDIO FUNCTION PROCESSING WITH REDUCED MEMORY USAGE
2y 5m to grant • Granted Dec 23, 2025
Patent 12504906
Sustainable Storage System
2y 5m to grant • Granted Dec 23, 2025
Patent 12487751
Data Storage Device and Method for Handling Lifetime Read Disturb
2y 5m to grant • Granted Dec 02, 2025
Patent 12449985
DYNAMIC FLASH INTERFACE MODULE (FIM) OPTIMIZATION
2y 5m to grant • Granted Oct 21, 2025
Patent 12450166
CACHING HOST MEMORY ADDRESS TRANSLATION DATA IN A MEMORY SUB-SYSTEM
2y 5m to grant • Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview (+23.6%): 99%
Median Time to Grant: 2y 4m
PTA Risk: High
Based on 216 resolved cases by this examiner. Grant probability derived from career allow rate.
