Prosecution Insights
Last updated: April 19, 2026
Application No. 18/829,569

MEMORY SYSTEM

Non-Final OA §103
Filed
Sep 10, 2024
Examiner
OTTO, ALAN
Art Unit
2132
Tech Center
2100 — Computer Architecture & Software
Assignee
Kioxia Corporation
OA Round
1 (Non-Final)
66%
Grant Probability
Favorable
1-2
OA Rounds
3y 7m
To Grant
85%
With Interview

Examiner Intelligence

Grants 66% — above average
66%
Career Allow Rate
244 granted / 368 resolved
+11.3% vs TC avg
Strong +19% interview lift
+18.7%
Interview Lift
resolved cases with interview
Typical timeline
3y 7m
Avg Prosecution
21 currently pending
Career history
389
Total Applications
across all art units
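The headline figures above follow from simple arithmetic on the examiner's career data. A minimal sketch in Python; the additive interview-lift model is an assumption for illustration, not the product's documented formula:

```python
# Examiner career data from the dashboard above.
granted = 244
resolved = 368

# Career allow rate: granted / resolved.
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 66.3%, displayed as 66%

# Interview lift is reported as +18.7 percentage points.
# Assumption: the "with interview" figure is a simple additive
# adjustment to the base rate, capped at 100%.
interview_lift = 0.187
with_interview = min(allow_rate + interview_lift, 1.0)
print(f"{with_interview:.0%}")  # 85%
```

This reproduces the 66% career allow rate and the 85% with-interview projection shown in the summary cards.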

Statute-Specific Performance

§101
6.7%
-33.3% vs TC avg
§103
52.0%
+12.0% vs TC avg
§102
23.2%
-16.8% vs TC avg
§112
13.0%
-27.0% vs TC avg
Black line = Tech Center average estimate • Based on career data from 368 resolved cases
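The "vs TC avg" deltas imply the Tech Center baseline directly: baseline = examiner rate minus delta. A quick check (Python, illustrative only) shows every statute's black-line estimate works out to the same 40% baseline:

```python
# Per-statute examiner rates and their deltas versus the
# Tech Center average, as shown above (in percent).
stats = {
    "101": (6.7, -33.3),
    "103": (52.0, +12.0),
    "102": (23.2, -16.8),
    "112": (13.0, -27.0),
}

# The TC-average baseline implied by each row: rate - delta.
for statute, (rate, delta) in stats.items():
    baseline = rate - delta
    print(f"\u00a7{statute}: TC avg \u2248 {baseline:.1f}%")  # 40.0% for every row
```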

Office Action

§103
Detailed Action

The instant application having Application No. 18/829,569 has a total of 20 claims pending in the application; there is 1 independent claim and 19 dependent claims, all of which are ready for examination by the examiner. This Office action is in response to the claims filed 9/10/24. Claims 1-20 are pending.

NOTICE OF PRE-AIA OR AIA STATUS

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

INFORMATION CONCERNING DRAWINGS

Drawings

The applicant's drawings submitted 9/10/24 are acceptable for examination purposes.

REJECTIONS BASED ON PRIOR ART

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-6, 8-16 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Khalili et al. (U.S. Patent Application Publication No. 2020/0034061), herein referred to as Khalili et al., in view of Roberts (U.S. Patent Application Publication No.
2025/0003659), herein referred to as Roberts.

Referring to claim 1, Khalili et al. disclose, as claimed, a memory system comprising: a nonvolatile memory (see fig. 1, showing a memory device 120, and see para. 24, where it may be a nonvolatile memory); and a memory controller that includes: a first cache including a first memory unit and a first control unit controlling the first memory unit (see fig. 1, showing circuit 140 with a prefetch cache 128. See para. 25, where different media controllers may exist that separately control access to each storage media), and storing prefetch data and read data from the nonvolatile memory (see para. 44, where in mode 2, data is first accessed and placed in a prefetch cache), the first cache being connectable to a host (see fig. 1, showing the prefetch cache being connectable to host 110); a second cache including a second memory unit and a second control unit controlling the second memory unit (see fig. 1, showing an I/O buffer), and storing the read data and write data from the host (see para. 43-44 and 75, where the I/O buffer is for sending data to the host, and the queue may buffer both read and write commands), the second cache being connected to the first cache (see fig. 1, where the I/O buffer and prefetch cache are connected together); and a first controller being configured to control the nonvolatile memory (see fig. 1, showing circuit 140 with media controller 130. See para. 25, where different media controllers may exist that separately control access to each storage media), in a case where the first control unit receives a first prefetch request for first data of a first logical address, the first control unit is configured to store the prefetched first data in the cache line of a first entry included in the first memory unit (see para.
44, where data is accessed and placed in the prefetch buffer in response to a request from the host), and the first control unit is configured to maintain the first entry until receiving a read request or a write request for the first logical address from the host (see para. 90, where a write to a corresponding address invalidates the data in the prefetch buffer, and it is evicted).

Khalili et al. disclose the claimed invention except for: the first memory unit includes a plurality of entries each having tag information of the index and having a cache tag including a first field and a cache line; the first memory unit having an SRAM as a memory element and the second memory unit having a DRAM as a memory element; wherein each of a plurality of logical addresses designated by the host is mapped to the first memory unit by an index; and store a first value indicating that the first data is the prefetch data in the first field of the first entry.

However, Roberts discloses the first memory unit includes a plurality of entries each having tag information of the index and having a cache tag including a first field and a cache line (see para. 65, where cache lines may include prefetch bits along with pointers to blocks of metadata); the first memory unit having an SRAM as a memory element and the second memory unit having a DRAM as a memory element (see para. 16 and 23, where a memory/cache may be SRAM or DRAM); wherein each of a plurality of logical addresses designated by the host is mapped to the first memory unit by an index (see para. 31-32, where a logical to physical mapping table is used to map logical addresses to data); and store a first value indicating that the first data is the prefetch data in the first field of the first entry (see para. 65, where cache lines may include a prefetched bit to indicate whether data of the cache line was prefetched to the cache). Khalili et al.
and Roberts are analogous art because they are from the same field of endeavor of memory systems (see Khalili et al., abstract and Roberts, abstract, regarding memory systems). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Khalili et al. to comprise the first memory unit includes a plurality of entries each having tag information of the index and having a cache tag including a first field and a cache line; the first memory unit having an SRAM as a memory element and the second memory unit having a DRAM as a memory element; wherein each of a plurality of logical addresses designated by the host is mapped to the first memory unit by an index; and store a first value indicating that the first data is the prefetch data in the first field of the first entry, as taught by Roberts, in order to determine whether prefetched data is used to determine future likelihoods of prefetched data in similar situations being used. This would result in a higher cache hit rate and increased performance.

As to claim 2, Khalili et al. and Roberts also disclose the memory system according to claim 1, wherein when the first control unit receives the first prefetch request, in a case where data is not stored in the cache line of the first entry, the first control unit is configured to store the first data read from the second cache or the nonvolatile memory in the cache line of the first entry, and store the first value in the first field of the first entry (see Roberts, para. 65, where cache lines may include a prefetched bit to indicate whether data of the cache line was prefetched to the cache).

As to claim 3, Khalili et al.
and Roberts also disclose the memory system according to claim 1, wherein when the first control unit receives the first prefetch request, in a case where the first data is stored in the cache line of the first entry and the first value is not stored in the first field of the first entry, the first control unit is configured to update the first field of the first entry to the first value (see Roberts, para. 65, where cache lines may include a prefetched bit to indicate whether data of the cache line was prefetched to the cache. Therefore, if the data was prefetched, then the prefetched bit would be updated).

As to claim 5, Khalili et al. and Roberts also disclose the memory system according to claim 1, wherein when the first control unit receives the first prefetch request, in a case where second data of a second logical address different from the first logical address is stored in the cache line of the first entry, the first control unit is configured to evict the second data from the first entry (see Roberts, para. 63, where data may be evicted from the cache using a cache replacement policy, such as an LRU cache replacement policy. In which case, if the second data was the least recently used, it would be evicted), store the first data read from the second cache or the nonvolatile memory in the cache line of the first entry (see Roberts, para. 49, where the controller may receive access requests and determine if a cache includes the data. If it doesn't, the data may be transferred both to the cache and to the processor core 205 from the memory device), and store the first value in the first field of the first entry (see Roberts, para. 65, where cache lines may include a prefetched bit to indicate whether data of the cache line was prefetched to the cache).

As to claim 6, Khalili et al.
and Roberts also disclose the memory system according to claim 1, wherein when the first control unit receives a read request for the first data of the first logical address from the host, in a case where data is not stored in the cache line of the first entry, the first control unit is configured to transmit the first data read from the second cache or the nonvolatile memory to the host (see Roberts, para. 49, where the controller may receive access requests and determine if a cache includes the data. If it doesn't, the data may be transferred both to the cache and to the processor core 205 from the memory device), store the first data in the cache line of the first entry, and store a second value indicating that the first data is not the prefetch data in the first field of the first entry (see Roberts, para. 65, where cache lines may include a prefetched bit to indicate whether data of the cache line was prefetched to the cache, and therefore would be a second value to indicate that the data in the cache line was not prefetched).

As to claim 8, Khalili et al. and Roberts also disclose the memory system according to claim 1, wherein when the first control unit receives a read request for the first data of the first logical address from the host, in a case where the first data is stored in the cache line of the first entry and the first value is stored in the first field of the first entry, the first control unit is configured to transmit the first data of the first entry to the host and clear the cache line and the first field of the first entry (see Khalili et al., para. 44 and 56, where if there is a cache hit in the prefetch buffer, the data may be moved to the I/O buffer and other data may remain in the cache until requested. As the first field of the first entry signifies a prefetched bit, that would also be cleared as there is no longer prefetched data in that entry).

As to claim 9, Khalili et al.
and Roberts also disclose the memory system according to claim 1, wherein when the first control unit receives a read request for the first data of the first logical address from the host, in a case where second data of a second logical address different from the first logical address is stored in the cache line of the first entry and the first value is not stored in the first field of the first entry, the first control unit is configured to evict the second data from the first entry (see Roberts, para. 63, where any cache replacement/eviction policy may be used. See Khalili et al., para. 90, where if the prefetch buffer is full, then prefetched data will be evicted according to a cache replacement policy. Therefore, the second data from the first entry would be evicted if it is next on the list per the replacement policy), transmit the first data read from the second cache or the nonvolatile memory to the host, store the first data in the cache line of the first entry (see Khalili et al., para. 44, where if there is a cache miss, then the requested data is sent to the host while the other data remains cached. Also see Roberts, para. 23, where data stored in memory may be read from or written or updated. Therefore, data that is not being directly overwritten would stay in the cache unless it was being evicted), and store a second value indicating that the first data is not the prefetch data in the first field of the first entry (see Roberts, para. 65, where cache lines may include a prefetched bit to indicate whether data of the cache line was prefetched to the cache. Therefore, if the data was prefetched, then the prefetched bit would be updated).

As to claim 10, Khalili et al.
and Roberts also disclose the memory system according to claim 1, wherein when the first control unit receives a read request for the first data of the first logical address from the host, in a case where second data of a second logical address different from the first logical address is stored in the cache line of the first entry and the first value is stored in the first field of the first entry, the first control unit is configured to transmit the first data read from the second cache or the nonvolatile memory to the host and maintain the first entry (see Khalili et al., para. 90, where data is marked invalid only when a write occurs to a corresponding address. Therefore, if the write is not directed at the data in the first entry, then the first entry would be maintained. Also see Roberts, para. 23, where data stored in memory may be read from or written or updated. Therefore, data that is not being directly overwritten would stay in the cache unless it was being evicted. See Khalili et al., para. 44, where if there is a cache miss, then the requested data is sent to the host while the other data remains cached).

As to claim 11, Khalili et al. and Roberts also disclose the memory system according to claim 1, wherein when the first control unit receives a write request for the first logical address from the host, in a case where data is not stored in the cache line of the first entry, the first control unit is configured to maintain the first entry (see Khalili et al., para. 90, where data is marked invalid only when a write occurs to a corresponding address. Therefore, if the write is not directed at the data in the first entry, then the first entry would be maintained. Also see Roberts, para. 23, where data stored in memory may be read from or written or updated. Therefore, data that is not being directly overwritten would stay in the cache unless it was being evicted).

As to claim 12, Khalili et al.
and Roberts also disclose the memory system according to claim 1, wherein when the first control unit receives a write request for the first logical address from the host, in a case where the first data is stored in the cache line of the first entry, the first control unit is configured to clear the cache line and the first field of the first entry (see Khalili et al., para. 90, where when a write to corresponding prefetch buffer data in the cache is received, that data is marked as invalid. See Roberts, para. 33, where invalid data is garbage collected and erased).

As to claim 13, Khalili et al. and Roberts also disclose the memory system according to claim 1, wherein when the first control unit receives a write request for the first logical address from the host, in a case where second data of a second logical address different from the first logical address is stored in the cache line of the first entry, the first control unit is configured to maintain the first entry (see Khalili et al., para. 90, where data is marked invalid only when a write occurs to a corresponding address. Therefore, if the write is not directed at the data in the first entry, then the first entry would be maintained. Also see Roberts, para. 23, where data stored in memory may be read from or written or updated. Therefore, data that is not being directly overwritten would stay in the cache unless it was being evicted).

As to claim 14, Khalili et al. and Roberts also disclose the memory system according to claim 1, wherein the memory controller further includes a second controller being configured to control a prefetch process, and the first prefetch request is transmitted to the first control unit by the second controller based on designation of the prefetch process by a user (see Roberts, fig. 1, showing a general memory system controller 140 that would transmit requests to controllers 145 and 150. See para.
14-15, where the memory system controller 140 receives read and write requests directed to memory device 155 or memory device 170, and would need to transmit those requests to the separate memory controllers).

As to claim 15, Khalili et al. and Roberts also disclose the memory system according to claim 1, wherein the memory controller further includes a second controller being configured to control a prefetch process, and the first prefetch request is transmitted to the first control unit by the second controller (see Roberts, fig. 1, showing a general memory system controller 140 that would transmit requests to controllers 145 and 150. See para. 14-15, where the memory system controller 140 receives read and write requests directed to memory device 155 or memory device 170, and would need to transmit those requests to the separate memory controllers).

As to claim 16, Khalili et al. and Roberts also disclose the memory system according to claim 1, wherein the first prefetch request is transmitted to the first control unit by the host (see Khalili et al., para. 44, where prefetch requests are received from the host).

As to claim 19, Khalili et al. and Roberts also disclose the memory system according to claim 1, wherein the second memory unit does not have a copy of data stored in the first memory unit (see Khalili et al., para. 44, where the controller first looks in the prefetch cache for the requested data and, if there is a cache hit, it is moved to the I/O buffer. So initially, the I/O buffer did not have a copy of the data).

Claims 4, 7, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Khalili et al. in view of Roberts and in view of Lai et al. (U.S. Patent No. 10,664,403), herein referred to as Lai et al.

As to claim 4, Khalili et al.
and Roberts disclose the claimed invention except for the memory system according to claim 1, wherein when the first control unit receives the first prefetch request, in a case where the first data is stored in the cache line of the first entry and the first value is stored in the first field of the first entry, the first control unit is configured to maintain the first entry. However, Lai et al. disclose wherein when the first control unit receives the first prefetch request, in a case where the first data is stored in the cache line of the first entry and the first value is stored in the first field of the first entry, the first control unit is configured to maintain the first entry (see col. 7, lines 16-35, where prefetched data is evicted or invalidated from a cache based on a duration of a timer. See col. 6, lines 8-39, where if there is a prefetch request for blocks that the cache already stores, then the request is ignored and the entry is therefore maintained). Khalili et al. and Lai et al. are analogous art because they are from the same field of endeavor of memory systems (see Khalili et al., abstract and Lai et al., abstract, regarding memory systems). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Khalili et al. to comprise wherein when the first control unit receives the first prefetch request, in a case where the first data is stored in the cache line of the first entry and the first value is stored in the first field of the first entry, the first control unit is configured to maintain the first entry, as taught by Lai et al., in order to avoid unnecessary traffic and increase efficiency (see Lai et al., col. 6, lines 25-30).

As to claim 7, Khalili et al.
and Roberts disclose the claimed invention except for the memory system according to claim 1, wherein when the first control unit receives a read request for the first data of the first logical address from the host, in a case where the first data is stored in the cache line of the first entry and the first value is not stored in the first field of the first entry, the first control unit is configured to transmit the first data of the first entry to the host (see Roberts, para. 49, where the controller may receive access requests and determine if a cache includes the data. If it doesn't, the data may be transferred both to the cache and to the processor core 205). Khalili et al. and Roberts disclose the claimed invention except for maintaining the first entry. However, Lai et al. disclose maintaining the first entry (see col. 7, lines 16-35, where prefetched data is evicted or invalidated from a cache based on a duration of a timer. See col. 6, lines 8-39, where if there is a prefetch request for blocks that the cache already stores, then the request is ignored and the entry is therefore maintained). Khalili et al. and Lai et al. are analogous art because they are from the same field of endeavor of memory systems (see Khalili et al., abstract and Lai et al., abstract, regarding memory systems). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Khalili et al. to comprise maintaining the first entry, as taught by Lai et al., in order to avoid unnecessary traffic and increase efficiency (see Lai et al., col. 6, lines 25-30).

As to claim 17, Khalili et al. and Roberts also disclose the memory system according to claim 1, wherein the first cache further includes a measurement unit being configured to measure an elapsed time from a time at which the first value is stored in the first field of the first entry to a current time (see Roberts, para.
140-141, where a time of eviction of data from the cache is used to calculate a duration of time that a value was stored and a counter is updated). Khalili et al. and Roberts disclose the claimed invention except for, in a case where the result measured by the measurement unit exceeds a constant time, the first control unit is configured to update the first field of the first entry to a second value indicating that the first data is not the prefetch data. However, Lai et al. disclose where the result measured by the measurement unit exceeds a constant time, the first control unit is configured to update the first field of the first entry to a second value indicating that the first data is not the prefetch data (see col. 7, lines 15-35, where a timer indicates the amount of time that the data may stay resident in the cache that it was prefetched to, and would be evicted after. Therefore, the entry would indicate the data is not prefetched data when it is evicted as taught by Roberts. Also see col. 6, lines 40-64, where prefetch tracking data is used to mark whether data has been prefetched and in the cache, and therefore would not be marked if the entry has been evicted). Khalili et al. and Lai et al. are analogous art because they are from the same field of endeavor of memory systems (see Khalili et al., abstract and Lai et al., abstract, regarding memory systems). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Khalili et al. to comprise wherein, in a case where the result measured by the measurement unit exceeds a constant time, the first control unit is configured to update the first field of the first entry to a second value indicating that the first data is not the prefetch data, as taught by Lai et al., in order to avoid unnecessary traffic and increase efficiency (see Lai et al., col. 6, lines 25-30).

As to claim 20, Khalili et al.
and Roberts disclose the claimed invention except for the memory system according to claim 1, wherein the second memory unit has a copy of data stored in the first memory unit. However, Lai et al. disclose wherein the second memory unit has a copy of data stored in the first memory unit (see col. 5, lines 38-42, where the backing memory may be part of the cache hierarchy, and see col. 6, lines 30-40, where the backing memory reads blocks into the cache and therefore both the backing memory and the cache would have those blocks. Also see Roberts, para. 50, where when prefetching data, the other memory retains a copy of the data. Retaining copies of data in different parts of a cache hierarchy is well known in the art). Khalili et al. and Lai et al. are analogous art because they are from the same field of endeavor of memory systems (see Khalili et al., abstract and Lai et al., abstract, regarding memory systems). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Khalili et al. to comprise wherein the second memory unit has a copy of data stored in the first memory unit, as taught by Lai et al., in order to improve performance and efficiency. Inclusive caching and its advantages are well known in the art and would be obvious to implement.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Khalili et al. in view of Roberts and in view of Kaburaki et al. (U.S. Patent Application Publication No. 2017/0235681), herein referred to as Kaburaki et al.

As to claim 18, Khalili et al. and Roberts disclose the claimed invention except for the memory system according to claim 1, wherein the memory controller further includes a table for translating the logical address to a physical address of the nonvolatile memory, and a line size of the cache line of the first memory unit is a same as a management size of the table. However, Kaburaki et al.
disclose wherein the memory controller further includes a table for translating the logical address to a physical address of the nonvolatile memory (see fig. 1 and para. 52, showing a memory controller containing RAM, storing an L2P table cache), and a line size of the cache line of the first memory unit is a same as a management size of the table (see para. 43, where a management size of the table may be a page size of 4096 bytes. See para. 89, where a cache line size may also be 4096 bytes). Khalili et al. and Kaburaki et al. are analogous art because they are from the same field of endeavor of memory systems (see Khalili et al., abstract and Kaburaki et al., abstract, regarding memory systems). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Khalili et al. to comprise wherein the memory controller further includes a table for translating the logical address to a physical address of the nonvolatile memory, and a line size of the cache line of the first memory unit is a same as a management size of the table, as taught by Kaburaki et al., in order to allow for faster translation and processing speed. It is well known in the art to store a translation table in a faster and closer memory like SRAM, as Kaburaki teaches, to allow for faster translation.

CLOSING COMMENTS

Conclusion

a. STATUS OF CLAIMS IN THE APPLICATION

The following is a summary of the treatment and status of all claims in the application as recommended by M.P.E.P. 707.07(i):

a(1) CLAIMS REJECTED IN THE APPLICATION

Per the instant office action, claims 1-20 have received a first action on the merits and are the subject of a first action non-final.

b. DIRECTION OF FUTURE CORRESPONDENCES

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN OTTO whose telephone number is (571) 270-1626. The examiner can normally be reached on M-F 8:30AM-5:00PM MST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hosain Alam, can be reached at 571-272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.O/
Examiner, Art Unit 2132

/HOSAIN T ALAM/
Supervisory Patent Examiner, Art Unit 2132
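The mechanism at the heart of the claim 1 rejection (a prefetch cache whose entries carry a "prefetched" flag, with entries cleared or invalidated on host reads and writes) can be sketched in a few lines. This is an illustrative model of the behavior as the Office Action characterizes it, not code from any cited reference; the class and method names are hypothetical:

```python
# Hypothetical model of the prefetch-cache behavior described in the
# §103 rejection: each entry holds data plus a "prefetched" flag
# (Roberts' prefetched bit), and entries are kept or cleared based on
# host reads and writes (Khalili's invalidate-on-write behavior).
class PrefetchCache:
    def __init__(self, num_entries=8):
        self.num_entries = num_entries
        # Each entry: None, or (logical_address, data, prefetched_flag).
        self.entries = [None] * num_entries

    def _index(self, lba):
        # Logical addresses map to entries by an index (direct-mapped
        # here for simplicity).
        return lba % self.num_entries

    def prefetch(self, lba, data):
        # Store prefetched data and set the prefetched flag
        # (claims 1 and 2).
        self.entries[self._index(lba)] = (lba, data, True)

    def read(self, lba):
        i = self._index(lba)
        entry = self.entries[i]
        if entry and entry[0] == lba and entry[2]:
            # Hit on prefetched data: return it and clear the entry,
            # as in the claim 8 mapping.
            self.entries[i] = None
            return entry[1]
        return None  # miss: would fall through to the DRAM cache / NAND

    def write(self, lba):
        i = self._index(lba)
        entry = self.entries[i]
        if entry and entry[0] == lba:
            # A write to the same logical address invalidates the
            # prefetched entry (claim 12 / Khalili para. 90).
            self.entries[i] = None

cache = PrefetchCache()
cache.prefetch(5, b"page5")
print(cache.read(5))   # b'page5' (entry is cleared after the hit)
print(cache.read(5))   # None
```

Writes to an unrelated logical address leave the entry intact, mirroring the maintain-the-entry mappings for claims 11 and 13.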

Prosecution Timeline

Sep 10, 2024
Application Filed
Mar 17, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602324
STORAGE CONTROLLER, MEMORY MANAGEMENT METHOD AND STORAGE DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12591367
TECHNIQUES FOR LOG ORDERING TO OPTIMIZE WRITE LATENCY IN SYSTEMS ASSIGNING LOGICAL ADDRESS OWNERSHIP
2y 5m to grant Granted Mar 31, 2026
Patent 12585385
INTELLIGENT UPGRADE PROCESS IN A DISTRIBUTED SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12561080
RESEQUENCING DATA PROGRAMMED TO MULTIPLE LEVEL MEMORY CELLS AT A MEMORY SUB-SYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12423017
System and Method for Performing and Verifying Data Erasure
2y 5m to grant Granted Sep 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
66%
Grant Probability
85%
With Interview (+18.7%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 368 resolved cases by this examiner. Grant probability derived from career allow rate.
