Prosecution Insights
Last updated: April 19, 2026
Application No. 18/632,892

SYSTEM AND METHOD FOR UTILIZING PREFETCH CACHE FOR MEMORY DEVICE

Status: Non-Final OA (§103)
Filed: Apr 11, 2024
Examiner: THAMMAVONG, PRASITH
Art Unit: 2137
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 87% (above average; 464 granted / 534 resolved; +31.9% vs TC avg)
Interview Lift: +8.3% (moderate; based on resolved cases with interview)
Avg Prosecution: 2y 11m (36 applications currently pending)
Total Applications: 570 across all art units

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§102: 22.7% (-17.3% vs TC avg)
§103: 52.1% (+12.1% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 534 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/20/26 has been entered.

1. REJECTIONS BASED ON PRIOR ART

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-10 and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Guim Bernat (US 20170353576) in view of Narsale (US 20220019536) and Mashimo (US 20210406183).
With respect to claim 1, the Guim Bernat reference teaches a memory system, comprising: a memory device (see fig. 1, e.g. memory device 108f or 108g; paragraph 78, where if the requested data is not located in the prefetch cache, the data may be requested over the network fabric 104 from the remote node [and its corresponding "memory device"] where the data is stored at 614); a prefetch cache (e.g. fig. 5, far-prefetch cache 504); a device cache (e.g. fig. 5, main cache 502), wherein replacement policies of the prefetch cache and the device cache are independent of each other (paragraph 74, where the main cache 502 of the example fabric controller 122 may be used to store data that has been requested via a normal read request (which could be obtained by reading the data from a remote node or from reading the data from the far-prefetch cache 504 if the data was previously prefetched) or data that has been requested by a remote node (and that will be provided by the fabric controller 122), while the far-prefetch cache stores only data prefetched from a remote memory device 108); and a processor (e.g. fig. 3, processor 106a) configured to: receive a data request including an address of a data page stored in the memory device, perform a first lookup operation using the device cache, to determine whether the data page is stored in the device cache, and perform a second lookup operation using the prefetch cache, to determine whether the data page is stored in the prefetch cache (paragraph 75, where a read request may be received at the fabric controller 122; a memory address 510 of the read request may be (e.g., simultaneously) provided to the main cache 502 and the far-prefetch cache 504; and where each cache outputs a hit/miss value (514 and 520) indicating whether the cache includes data associated with the address (e.g., a copy of the data stored at a location of a memory device 108 that is identified by the address, or a corresponding physical address if the provided address is a virtual address)).

However, the Guim Bernat reference does not explicitly teach performing a third lookup operation using a data structure storing prefetch criteria, to determine whether to perform a prefetch data request for a next data page from the memory device based on the data request, wherein the prefetch criteria includes a weight factor of the next page and a threshold to which the weight factor is compared.

The Narsale reference teaches it is conventional to perform a third lookup operation using a data structure storing prefetch criteria, to determine whether to perform a prefetch data request for a next data page from the memory device based on the data request (paragraph 19, where based on a set of criteria (e.g., defined by a prefetch policy), the interface bridge can cause data (e.g., one or more sectors or pages of data) to be prefetched into the prefetch buffer of the interface bridge, from the memory device, prior to any portion of the data being requested for reading by the processing device of the memory sub-system controller coupled to the interface bridge). It would have been obvious to a person of ordinary skill in the art before the claimed invention was effectively filed to modify the Guim Bernat reference to perform a third lookup operation using a data structure storing prefetch criteria, to determine whether to perform a prefetch data request for a next data page from the memory device based on the data request, as taught by the Narsale reference. The suggestion/motivation for doing so would have been to have faster read access (which can also achieve more read bandwidth) than providing the requested data from the memory device, especially in situations where the host system is accessing data sequentially (e.g., requesting data reads from sequential memory addresses). (Narsale, paragraph 19)

However, the combination of the Guim Bernat and Narsale references does not explicitly teach wherein the prefetch criteria includes a weight factor of the next page and a threshold to which the weight factor is compared. The Mashimo reference teaches it is conventional to have the prefetch criteria include a weight factor of the next page and a threshold to which the weight factor is compared (paragraph 83, where if the highest accumulated weight value does exceed the minimum accumulated weight threshold, then at block 765 the prefetch logic 301 calculates an address for prefetching by adding the most likely future access delta 701 (i.e., the index of the highest valued register in the SRV 306) to the most recently accessed memory address 402, and a prefetch request is issued for the calculated prefetch address; and paragraph 77, where the future delta candidate 701 having the highest accumulated weight (indicating the strongest correlation with the most recently recorded delta values) is used to calculate a next memory address for prefetching). It would have been obvious to a person of ordinary skill in the art before the claimed invention was effectively filed to modify the combination of the Guim Bernat and Narsale references to have the prefetch criteria include a weight factor of the next page and a threshold to which the weight factor is compared, as taught by the Mashimo reference. The suggestion/motivation for doing so would have been to predict future access patterns by, for each current delta value, incrementing weight values corresponding to each of multiple preceding delta values observed and their distances from the current delta value in the delta sequence. (Mashimo, paragraph 22) Therefore it would have been obvious to combine the Guim Bernat, Narsale, and Mashimo references for the benefits shown above to obtain the invention as specified in the claim.

With respect to claim 2, the combination of the Guim Bernat, Narsale, and Mashimo references teaches the memory system of claim 1, wherein the processor is further configured to perform the second lookup operation based on determining a cache miss for the data page in the device cache in the first lookup operation. (Guim Bernat, paragraph 76, where if the data is found to be located in the far-prefetch cache 504 (as indicated by the hit/miss signal 520), then the data (i.e., payload 518) is provided through multiplexing logic 506 to output 516, and the data is also copied into the main cache 502 and removed from the far-prefetch cache 504 (so that the next time the data is read there will be no hit in the far-prefetch cache 504 and the memory space previously occupied by the data can be opened up for additional prefetch data))

With respect to claim 3, the combination of the Guim Bernat, Narsale, and Mashimo references teaches the memory system of claim 1, wherein the processor is further configured to send the data page from the prefetch cache to the device cache, based on the data page being included in the prefetch cache. (Guim Bernat, paragraph 76, where if the data is found to be located in the far-prefetch cache 504 (as indicated by the hit/miss signal 520), then the data (i.e., payload 518) is provided through multiplexing logic 506 to output 516, and the data is also copied into the main cache 502 and removed from the far-prefetch cache 504)

With respect to claim 4, the combination of the Guim Bernat, Narsale, and Mashimo references teaches the memory system of claim 3, wherein the processor is further configured to evict the data page from the prefetch cache, in response to sending the data page to the device cache. (Guim Bernat, paragraph 76, where the data is copied into the main cache 502 and removed from the far-prefetch cache 504, so that the next time the data is read there will be no hit in the far-prefetch cache 504 and the memory space previously occupied by the data can be opened up for additional prefetch data)

With respect to claim 5, the combination of the Guim Bernat, Narsale, and Mashimo references teaches the memory system of claim 1, wherein the processor is further configured to send the data page from the device cache in response to the data request, based on the data page being included in the device cache. (Guim Bernat, paragraph 76, where if the data is found to be located in the main cache 502, then the data is provided to the output 516 via the multiplexing logic 506; an indication of whether the data was found in either cache (i.e., whether the payload is a valid output) may also be provided by ORing the hit/miss signals 514 and 520 from both caches to produce output signal 522)

With respect to claim 6, the combination of the Guim Bernat, Narsale, and Mashimo references teaches the memory system of claim 1, wherein the processor is further configured to send the data request to the memory device, based on determining a cache miss for the data page in the prefetch cache in the second lookup operation. (Guim Bernat, paragraph 78, where if the requested data is located in the prefetch cache, the data may be copied to the main cache at 610 and then removed from the prefetch cache at 612; if the requested data is not located in the prefetch cache, the data may be requested over the network fabric 104 from the remote node where the data is stored at 614; when the data is received, it is stored in the main cache at 616 and provided to the requesting core at 606)

With respect to claim 7, the combination of the Guim Bernat, Narsale, and Mashimo references teaches the memory system of claim 1, wherein the processor is further configured to: receive data from the memory device, and identify a type of request that the data has been sent in reply to. (Guim Bernat, paragraph 78, as cited for claim 6 above)

With respect to claim 8, the combination of the Guim Bernat, Narsale, and Mashimo references teaches the memory system of claim 7, wherein the processor is further configured to store the received data in the device cache, in response to identifying that the received data is in response to a request for data for which a cache miss was determined in either of the first lookup operation or the second lookup operation. (Guim Bernat, paragraph 78, as cited for claim 6 above)

With respect to claim 9, the combination of the Guim Bernat, Narsale, and Mashimo references teaches the memory system of claim 7, wherein the processor is further configured to store the received data in the prefetch cache, in response to identifying that the received data is in response to the prefetch data request. (Guim Bernat, paragraph 50, where a processor 106A may generate a prefetch request; the prefetch request identifies a group of data to be prefetched from a remote node; as an example, the prefetch request may specify one or more memory addresses (e.g., virtual memory addresses or physical memory addresses) associated with the data; for example, the prefetch request may specify a beginning address and/or an end address of the data to be prefetched)

With respect to claim 10, the combination of the Guim Bernat, Narsale, and Mashimo references teaches the memory system of claim 1, further comprising a request queue, wherein a size of the prefetch cache is based on a number of data requests that can be stored in the request queue. (Guim Bernat, paragraph 56, where in response to the request by the software application, the processor 106A may generate a prefetch request; in one embodiment, the prefetch request is placed in a request queue 112 [which would have a 'size' and a 'number of data requests'] and then passed to a caching agent 114)

Claims 12-20 are the method implementation of claims 1-10 and are rejected under a similar rationale as shown in the rejections above.

Claim Rejections - 35 U.S.C. § 103

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Guim Bernat (US 20170353576) in view of Narsale (US 20220019536) and Mashimo (US 20210406183) as shown in the rejections above, and further in view of Babaian (US 5889985).

With respect to claim 11, the combination of the Guim Bernat, Narsale, and Mashimo references does not explicitly teach the memory system of claim 1, wherein the prefetch cache is smaller than the device cache.
The Babaian reference teaches it is conventional to have the prefetch cache smaller than the device cache (column 1, lines 18-36, where the data prefetch cache is a fully associative cache which is much smaller than the first-level cache; the size of the data prefetch cache is determined by the total number of load operations that can be active at one time). It would have been obvious to a person of ordinary skill in the art before the claimed invention was effectively filed to modify the combination of the Guim Bernat, Narsale, and Mashimo references to have the prefetch cache smaller than the device cache, as taught by the Babaian reference. The suggestion/motivation for doing so would have been to use the data prefetch cache to avoid thrashing, since array elements are prefetched to a data prefetch cache and then loaded from this cache so that the first-level cache is not corrupted by little-used data. (Babaian, column 1, lines 18-36) Therefore it would have been obvious to combine the Guim Bernat, Narsale, Mashimo, and Babaian references for the benefits shown above to obtain the invention as specified in the claim.

2. ARGUMENTS CONCERNING PRIOR ART REJECTIONS

Rejections under 35 U.S.C. 102/103: Applicant's arguments (see pages 9-11 of the remarks) and amendment with respect to claims 1-20 have been considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of the Mashimo reference, as shown in the rejections above, to teach the newly amended claim language.

3. RELEVANT ART CITED BY THE EXAMINER

The following prior art, made of record and not relied upon, is cited to establish the level of skill in the applicant's art and those arts considered reasonably pertinent to applicant's disclosure. See MPEP 707.05(c). These references include: Saxena (US 20020103778), which teaches that a cache server (18) may prefetch one or more web pages (30) from an origin server (16) prior to those web pages being requested by a user (13). The cache server determines which web pages to prefetch based on a graph (42) associated with a prefetch module (40) associated with the cache server. The graph represents all or a portion of the web pages at the origin server using one or more nodes (130) and one or more links (100) connecting the nodes. Each link has an associated transaction weight (102) and user weight (104). The transaction weight represents the importance of the link and associated web page to the origin server and may be used to control the prefetching of web pages by the cache server. The user weight may be used to change a priority (46) associated with a request (22) for a web page. The user weight and transaction weight may change based on criteria (50) associated with the origin server.

4. CLOSING COMMENTS

Conclusion: Any inquiry concerning this communication or earlier communications from the examiner should be directed to PRASITH THAMMAVONG, whose telephone number is (571) 270-1040. The examiner can normally be reached Monday - Friday, 12-8 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Arpan Savla, can be reached at (571) 272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PRASITH THAMMAVONG/
Primary Examiner, Art Unit 2137
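The two-cache read path the rejection maps onto Guim Bernat (a device/main cache checked alongside a far-prefetch cache, with a prefetch-cache hit promoted into the device cache and then evicted from the prefetch cache, per claims 1-4 and 6) can be sketched roughly as below. This is an illustrative model only, not code from any cited reference; the class name `DualCacheController` and its methods are hypothetical.

```python
# Illustrative sketch of the claimed dual-cache lookup flow:
# first lookup in the device (main) cache, second lookup in the
# prefetch cache on a miss, promotion + eviction on a prefetch hit,
# and a fetch from the memory device when both miss.
# All names are hypothetical, not from the cited references.

class DualCacheController:
    def __init__(self, backing_memory):
        self.device_cache = {}    # main cache: address -> data page
        self.prefetch_cache = {}  # prefetch cache: address -> data page
        self.memory = backing_memory

    def read(self, addr):
        # First lookup operation: device cache.
        if addr in self.device_cache:
            return self.device_cache[addr]
        # Second lookup operation: prefetch cache (device-cache miss).
        if addr in self.prefetch_cache:
            data = self.prefetch_cache.pop(addr)  # evict from prefetch cache
            self.device_cache[addr] = data        # promote to device cache
            return data
        # Miss in both caches: request the page from the memory device
        # and store it in the device cache.
        data = self.memory[addr]
        self.device_cache[addr] = data
        return data
```

Note that the two caches hold disjoint working sets under this policy: a page leaves the prefetch cache as soon as it is read, which is why their replacement policies can remain independent.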
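The weight-and-threshold prefetch criterion attributed to Mashimo (the address delta with the highest accumulated weight is compared against a minimum threshold, and only if it exceeds the threshold is a prefetch address computed from the most recently accessed address) might be modeled as follows. The function name, the dict-based weight table, and the strict-inequality comparison are assumptions for illustration, not Mashimo's actual logic.

```python
# Hedged sketch of a Mashimo-style prefetch criterion: each candidate
# address delta carries an accumulated weight, a prefetch is issued only
# when the best weight exceeds a minimum threshold, and the prefetch
# address is last_addr + best_delta. Names and layout are illustrative.

def decide_prefetch(delta_weights, last_addr, min_weight_threshold):
    """Return the address to prefetch, or None if no weight clears the bar.

    delta_weights: dict mapping candidate delta -> accumulated weight.
    last_addr: most recently accessed address.
    """
    if not delta_weights:
        return None
    # Pick the delta with the highest accumulated weight.
    best_delta = max(delta_weights, key=delta_weights.get)
    # Criterion: the weight factor must exceed the threshold.
    if delta_weights[best_delta] <= min_weight_threshold:
        return None  # criterion not met, skip the prefetch
    return last_addr + best_delta
```

For example, with weights `{1: 5, 4: 12}`, a last address of 100, and a threshold of 8, the delta 4 wins and the sketch would prefetch address 104; raising the threshold above 12 suppresses the prefetch entirely.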

Prosecution Timeline

Apr 11, 2024: Application Filed
May 03, 2025: Non-Final Rejection — §103
Aug 06, 2025: Applicant Interview (Telephonic)
Aug 06, 2025: Examiner Interview Summary
Aug 08, 2025: Response Filed
Oct 16, 2025: Final Rejection — §103
Dec 19, 2025: Response after Non-Final Action
Jan 20, 2026: Request for Continued Examination
Jan 27, 2026: Response after Non-Final Action
Feb 07, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591511: ACCESS-AWARE FLASH TRANSLATION LAYER (FTL) CAPABILITY ON FLASH MEMORY INTEGRATED CIRCUIT
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12585395: DATA STORAGE
Granted Mar 24, 2026 • 2y 5m to grant

Patent 12572284: MEMORY SYSTEM
Granted Mar 10, 2026 • 2y 5m to grant

Patent 12572307: READ-AHEAD BASED ON READ SIZE AND QUEUE IDENTIFIER
Granted Mar 10, 2026 • 2y 5m to grant

Patent 12517816: MODEL BASED ERROR AVOIDANCE
Granted Jan 06, 2026 • 2y 5m to grant
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 95% (+8.3%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 534 resolved cases by this examiner. Grant probability derived from career allow rate.
