Prosecution Insights
Last updated: April 19, 2026
Application No. 18/749,525

CACHE MANAGEMENT METHOD, APPARATUS, AND SYSTEM, AND STORAGE MEDIUM

Non-Final OA §103
Filed: Jun 20, 2024
Examiner: KHAN, MASUD K
Art Unit: 2132
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Technologies Co., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 87% (above average; 373 granted / 428 resolved; +32.1% vs TC avg)
Interview Lift: +6.3% (moderate lift; based on resolved cases with interview)
Typical Timeline: 2y 6m average prosecution; 34 currently pending
Career History: 462 total applications across all art units

Statute-Specific Performance

§101: 2.0% (-38.0% vs TC avg)
§103: 63.3% (+23.3% vs TC avg)
§102: 16.8% (-23.2% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 428 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/06/2026 has been entered.

Response to Amendment

This Office action responds to the amendments filed on 12/12/2025. Claims 1 and 12 have been amended.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 8-9, 12-13, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over WU [US 2020/0117620 A1] in view of Bhoria et al. [US 2020/0371960 A1].

Claim 1 is rejected over WU and Bhoria. WU teaches “A system comprising a processor and a memory controller, wherein;” as “A system includes a multi-core shared memory controller (MSMC) that includes an interconnect and a plurality of devices connected to the interconnect. … The system further includes a first processor package connected to the first interface and a second processor package connected to the second interface.” [¶0008]

WU teaches “the processor and the memory controller are connected through a first channel and a second channel, and the processor is configured to perform a memory read/write operation through the first channel;” as “Referring to FIG. 9, a diagram 900 illustrating read-modify-write (RMW) queues that may be included in the MSMC 200 is shown. The diagram 900 illustrates that the MSMC 200 may include a RMW queue 902 for each of the RAM banks 218. Each RMW queue 902 is configured to receive read and write requests from the data path 262 for memory addresses associated with the corresponding RAM bank 218. Memory addresses associated with a RAM bank include addressable memory addresses within the RAM bank as well as memory addresses of an external memory device that are allocated to ways of the RAM bank.
For example, a first RMW queue 902A may receive read/write request for addressable memory within the first RAM bank 218A or a read/write request.” [¶0128]

WU teaches “the processor is configured to send event information of a first event to the memory controller through the second channel, and the first event is an event that is executed by the processor and in which a memory storage is to be accessed through the memory read/write operation performed by the processor; and” as “PSI-L messages may be directed from one component of the processing system to another, for example from an entity, such as an application, peripheral, processor, etc., to the DRU.” [¶0069] and “a first RMW queue 902A may receive read/write request for addressable memory within the first RAM bank 218A or a read/write request.” [¶0128]

WU does not explicitly teach that the memory controller is configured to manage a cache storage that is outside of the processor based on the event information, and that the cache storage is configured to cache a part of data that is in the memory storage. However, Bhoria teaches “the memory controller is configured to manage a cache storage that is outside of the processor based on the event information, and the cache storage is configured to cache a part of data that is in the memory storage.” as “in the event a write instruction is received from the cache controller 220, the arbitration manager 1114 is configured to transmit a read instruction of the corresponding currently stored word to the victim storage 218.” [¶0608 and Fig. 1] (The cited portion teaches that the cache operation happens based on some event; Fig. 1 shows the cache storage outside of the processing unit.)

WU and Bhoria are analogous arts because they both teach storage systems and cache memory management.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of WU and Bhoria before him/her, to modify the teachings of WU to include the teachings of Bhoria, with the motivation that the multi-bank structure of the victim storage 218 can effectuate support for two or more accesses (e.g., CPU accesses) per clock cycle. [Bhoria, ¶0345]

Claim 2 is rejected over WU and Bhoria. WU teaches “wherein the first event comprises one or more of the following: a thread switching event, a page table walk event, or a cache line eviction event, wherein the thread switching event is used to switch a first thread from a running state to a preparation state, the first thread is a thread that is run by the processor, the page table walk event is used to query a physical address of a data block that is in the memory storage and that is to be accessed by the processor, and the cache line eviction event is used to evict a data block from a cache of the processor.” as “As prefetching may introduce coherency issues where a prefetched memory block may be in use by another process, the prefetch controller 416 may detect how the requested memory addresses are being accessed, for example, whether the requested memory addresses are shared or owned and adjust how prefetching is performed accordingly.” [¶0090]

Claim 8 is rejected over WU and Bhoria. WU teaches “wherein the first event comprises the page table walk event, the event information comprises an address of a first memory page, and the first memory page is a memory page in which the to-be-accessed data block is located.” as “In cases where the memory operation crosses memory pages, the application may have to make separate translation requests for each memory page.” [¶0065]

Claim 9 is rejected over WU and Bhoria.
WU teaches “wherein the memory controller is configured to: read, from the memory storage, data comprised in the first memory page, and store, in the cache storage, the data comprised in the first memory page.” as “A first row 702 of the table 700 illustrates that, in response to receiving a read request for a memory address corresponding to a tag that is cached in the RAM banks 218” [¶0105]

Claims 12, 13, 18, and 19 are rejected over WU and Bhoria under the same rationales as claims 1, 2, 8, and 9, respectively.

Claims 3-5, 7, and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over WU [US 2020/0117620 A1] in view of Bhoria et al. [US 2020/0371960 A1], and further in view of BEARD et al. [US 2022/0327009 A1].

Claim 3 is rejected over WU, Bhoria, and BEARD. The combination of WU and Bhoria does not explicitly teach wherein the first event comprises the thread switching event, and the event information comprises identification information of the first thread. However, BEARD teaches “wherein the first event comprises the thread switching event, and the event information comprises identification information of the first thread.” as “At the point where an event of one of a number of defined types occurs then the core is interrupted and may wake up and/or switch back to the thread which requested the WFE operation.” [¶0093]

WU, Bhoria, and BEARD are analogous arts because they teach storage systems and cache memory management.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of WU, Bhoria, and BEARD before him/her, to modify the combined teachings of WU and Bhoria to include the teachings of BEARD, with the motivation that the channel subscription request could also be implemented using such an atomic write request, giving similar advantages to those explained for the producer request. [BEARD, ¶0052]

Claim 4 is rejected over WU, Bhoria, and BEARD. WU teaches “the cache storage comprises a first cache storage and a second cache storage, and the first cache storage is configured to cache a part of data that is in the second cache storage; and” as “L1 or L2 cache, as compared to main memory or another cache that may be organizationally separated from the processor cores.” [¶0068] WU teaches “the memory controller is configured to release first storage space based on the identification information, the first cache storage comprises the first storage space, and the first storage space is used to store a data block of the first thread.” as “to determine if the memory blocks to be prefetched are otherwise in use or overlap with addresses used by other processes. In cases where a prefetched memory block is accessed by another process, for example if there are overlapping snoop requests or a snoop request for an address that is being prefetched, then the prefetch controller 416 may not issue the prefetching commands or invalidate prefetched memory blocks.” [¶0090]

Claim 5 is rejected over WU, Bhoria, and BEARD. WU teaches “wherein the memory controller comprises the first cache storage, and the memory controller is connected to the second cache storage through a bus.” as “the MSMC 200 includes eight cache tag banks 216. The cache tag banks 216 are connected to the arbitration and data path manager 204.” [¶0061]

Claim 7 is rejected over WU, Bhoria, and BEARD.
WU teaches “the event information further comprises a physical address of a second data block that is in the memory storage, the cache of the processor comprises the second data block, the second data block is modified, and the second data block is a data block of the first thread; and the memory controller is further configured to allocate second storage space in the second cache storage based on the identification information and the physical address of the second data block, and the second storage space is used to store the second data block.” as “the coherency controller 224 issues a snoop request to the 011 master peripheral to cause the 011 master peripheral to writeback (to the MSMC 200) and invalidate the 011 master peripheral's cached value for 0x23AEF5939DEA.” [¶0114]

Claims 14, 15, 16, and 17 are rejected over WU, Bhoria, and BEARD under the same rationales as claims 3, 4, 5, and 7, respectively.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over WU [US 2020/0117620 A1] in view of Bhoria et al. [US 2020/0371960 A1], further in view of BEARD et al. [US 2022/0327009 A1], and further in view of SONG et al. [US 2018/0121126 A1].
The combination of WU, Bhoria, and BEARD does not explicitly teach wherein the memory controller is configured to: obtain, based on the identification information, a first data block stored in the first cache storage, and the first data block is a data block of the first thread; in response to determining that the first data block is modified, store the first data block in the second cache storage or the memory storage; and mark a state of the first storage space in which the first data block is located as an idle state. However, SONG teaches this limitation as “With reference to the third or the fourth possible implementation of the second aspect, in a fifth possible implementation, when the tag of the data block includes a thread identifier corresponding to the data block and identification information of the data block, the controller compares, according to the set identifier of the target storage set of the read data block, a thread identifier of the read data block with thread information of the stored data block in the record item corresponding to the target storage set of the read data block; and if no match is found, returns information indicating that the read data block is not hit; or if a match is found, matches identification information of the read data block with a data block identifier of a matched and stored data block in the record item corresponding to the target storage set, and if no match is found, returns information indicating that the read data block is not hit.” [¶0027]

WU, Bhoria, BEARD, and SONG are analogous arts because they teach storage systems and cache memory management. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of WU, Bhoria, BEARD, and SONG before him/her, to modify the combined teachings of WU, Bhoria, and BEARD to include the teachings of SONG, with the motivation to provide a memory access system and method, to effectively reduce redundant access, and improve access performance of a memory. [SONG, ¶0005]

Claims 10-11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over WU [US 2020/0117620 A1] in view of Bhoria et al. [US 2020/0371960 A1], further in view of BEARD et al. [US 2022/0327009 A1], and further in view of HIJAZ et al. [US 2018/0336136 A1].

Claim 10 is rejected over WU, Bhoria, BEARD, and HIJAZ. The combination of WU, Bhoria, and BEARD does not explicitly teach wherein the first event comprises the cache line eviction event, the event information comprises attribute information of the data block to be evicted by the processor, and the attribute information indicates an accessing status of the to-be-evicted data block. However, HIJAZ teaches this limitation as “The look-ahead device 508 may retrieve 614 the evicted data from the look-ahead buffer 510, and send 640 the evicted data to the shared memory 504 for storage. Sending 640 the evicted data to the shared memory 504 may include sending a write command to the shared memory 504 for the evicted data.” [¶0089]

WU, Bhoria, BEARD, and HIJAZ are analogous arts because they teach storage systems and cache memory management.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of WU, Bhoria, BEARD, and HIJAZ before him/her, to modify the combined teachings of WU, Bhoria, and BEARD to include the teachings of HIJAZ, with the motivation of a computing device including a look-ahead device having a look-ahead buffer, a first input/output (I/O) device having a cache, and a second I/O device, to perform operations. [HIJAZ, ¶0013]

Claim 11 is rejected over WU, Bhoria, BEARD, and HIJAZ. The combination of WU, Bhoria, and BEARD does not explicitly teach wherein the memory controller is further configured to: receive a write request sent by the processor, wherein the write request comprises the to-be-evicted data block; and store, in the cache storage, a correspondence between the to-be-evicted data block and the attribute information. However, HIJAZ teaches this limitation as “The processing device may not have direct access to the shared memory and may send a write request to a shared memory manager to write the look-ahead request data to the shared memory. In various aspects, reading out the evicted data in block 914 and writing the evicted data in block 916” [¶0119]

Claim 20 is rejected over WU, Bhoria, BEARD, and HIJAZ under the same rationale as claim 10.

Response to Arguments

Applicant's arguments with respect to claims 1 and 12 have been considered but are moot because new grounds of rejection are used to map the amended portions of the limitations.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MASUD K KHAN, whose telephone number is (571) 270-0606. The examiner can normally be reached Monday-Friday, 8am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hosain Alam, can be reached at (571) 272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MASUD K KHAN/
Primary Examiner, Art Unit 2132

Prosecution Timeline

Jun 20, 2024
Application Filed
Aug 19, 2024
Response after Non-Final Action
Jun 04, 2025
Non-Final Rejection — §103
Aug 22, 2025
Response Filed
Oct 02, 2025
Final Rejection — §103
Dec 12, 2025
Response after Non-Final Action
Jan 06, 2026
Request for Continued Examination
Jan 23, 2026
Response after Non-Final Action
Feb 27, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602328
MEMORY DEVICE, SYSTEM INCLUDING THE SAME, AND OPERATING METHOD OF MEMORY DEVICE
2y 5m to grant • Granted Apr 14, 2026
Patent 12591778
SYSTEM AND METHOD FOR TORQUE-BASED STRUCTURED PRUNING FOR DEEP NEURAL NETWORKS
2y 5m to grant • Granted Mar 31, 2026
Patent 12585592
TAG SIZE REDUCTION USING MULTIPLE HASH FUNCTIONS
2y 5m to grant • Granted Mar 24, 2026
Patent 12579473
REQUIREMENTS DRIVEN MACHINE LEARNING MODELS FOR TECHNICAL CONFIGURATION
2y 5m to grant • Granted Mar 17, 2026
Patent 12572463
STORAGE DEVICE AND OPERATING METHOD OF THE STORAGE DEVICE
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview (+6.3%): 93%
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 428 resolved cases by this examiner. Grant probability derived from career allow rate.
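As a sanity check, the headline figures are consistent with simple arithmetic on the examiner's career data. The sketch below uses assumed formulas (a raw grant share for the base probability, and an additive percentage-point interview lift); the product's actual model is not disclosed:

```python
# Reproduce the dashboard's derived figures from the raw career data.
# The formulas here are illustrative assumptions, not the vendor's
# actual methodology.

def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allowance rate: share of resolved cases that granted."""
    return 100.0 * granted / resolved

def with_interview_pct(base_pct: float, lift_pct: float) -> float:
    """Grant probability assuming the interview lift is additive
    in percentage points, capped at 100%."""
    return min(base_pct + lift_pct, 100.0)

base = allow_rate_pct(373, 428)              # 373 granted of 428 resolved
print(round(base))                           # -> 87 (headline grant probability)
print(round(with_interview_pct(base, 6.3)))  # -> 93 (with-interview figure)
```

Under these assumptions, 373/428 gives 87.1%, and adding the 6.3-point interview lift gives 93.4%, matching the rounded 87% and 93% shown above.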
