Prosecution Insights
Last updated: April 19, 2026
Application No. 19/039,860

METHOD AND SYSTEM FOR STORING METADATA

Non-Final OA (§103)
Filed: Jan 29, 2025
Examiner: DOAN, KHOA D
Art Unit: 2133
Tech Center: 2100 — Computer Architecture & Software
Assignee: Montage Electronics (Shanghai) Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 0m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 89% (above average; 312 granted / 349 resolved; +34.4% vs TC avg)
Interview Lift: +8.3% (moderate lift; resolved cases with vs. without interview)
Avg Prosecution: 2y 0m (fast prosecutor; 13 currently pending)
Total Applications: 362 (career history, across all art units)

Statute-Specific Performance

§101: 7.7% (-32.3% vs TC avg)
§103: 52.3% (+12.3% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 21.2% (-18.8% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 349 resolved cases.
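The four deltas above are mutually consistent: subtracting each "vs TC avg" figure from the examiner's rate yields the same 40% baseline for every statute. A quick check (values are from the table; reading each delta as examiner rate minus Tech Center average is an assumption about how the chart was built):

```python
# Per-statute figures from the table above (percentages).
examiner_rate = {"101": 7.7, "103": 52.3, "102": 10.4, "112": 21.2}
delta_vs_tc = {"101": -32.3, "103": 12.3, "102": -29.6, "112": -18.8}

# Implied Tech Center average per statute: rate minus delta.
tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
```

Every statute implies the same ~40% Tech Center baseline, which matches the single "black line" estimate the chart describes.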

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 1/29/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The disclosure is objected to because of the following informality: “cacheline” should read “cache line”. Claims 1-14 are objected to because of the same informality: “cacheline”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3 and 9-11 are rejected under 35 U.S.C. §103 as being unpatentable over Chaffin et al (U.S. 2022/0043748), in view of Kim et al (U.S. 2013/0185268), and further in view of Kachare et al (U.S. 2024/0311049), hereinafter Chaffin.

Regarding claim 1: A system for storing metadata, comprising: a main memory comprising a data storage region and a metadata storage region, wherein the data storage region is configured to store related data [and its corresponding first error correction code], and the metadata storage region is configured to store metadata and its corresponding second error correction code;

Chaffin teaches a system and method for load/store data operations in cache memory (abstract). Chaffin teaches in Fig. 1 a cache memory 120 comprising a plurality of cache lines, each cache line including a data portion 124a and a metadata portion 124b to store data 126a and metadata 126b, respectively (¶0019). In Figs. 3-4, a main memory 440 includes a plurality of DRAM data words, similar to DRAM word 310 in Fig. 3. The DRAM data word 444 includes a data portion 444a and a metadata portion 444b, which includes both ECC and tag information (¶0038, ¶0042).

and a cache memory comprising a cache control state machine and at least one storage unit, wherein the storage unit is configured to cache metadata and its corresponding second error correction codes from the main memory,

Fig. 4, ¶0042: the data portion 444a and the metadata portion 444b (including the ECC and memory tag) are stored in a line of cache memory 412, such as cache line 414.

However, Chaffin does not teach the idea of storing data and its corresponding first error correction code. In an analogous art of storage management, Kim teaches a memory device 122 (Fig. 1; memory 122 may include volatile memory or non-volatile memory, ¶0073-0074). The memory device 122 may be divided into a data area 73 and unique information of the memory 71. The data area is further divided into a user data storage area, which includes data and ECC, and a metadata storage area (¶0123). Kim teaches using an ECC unit to perform error detection and correction on data during read/write operations (¶0121).
Chaffin also discloses using ECC to check for errors (¶0023). Thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to incorporate the teaching of Kim into the teaching of Chaffin to have a main memory comprising a data storage region and a metadata storage region, wherein the data storage region is configured to store related data and its corresponding first error correction code, and the metadata storage region is configured to store metadata and its corresponding second error correction code. The motivation for doing so is to apply a known technique, disclosed by Kim, into the system/process of Chaffin ready for improvement to yield a predictable result, which prevents data loss.

Chaffin teaches in Fig. 1 a load data path and a write data path to access cache data (Fig. 1, ¶0019-0025). A load operation may load data associated with a load address into a register in a processor in the data processing system 100, for example. The data may be loaded from the cache memory 120 if a valid copy of the data is stored in one of a plurality of cache lines 124(A)-124(Z) in the cache memory 120. Otherwise, the data may be retrieved from a higher level cache or a memory (e.g., external memory), ¶0018. Each cache line 124 of the plurality of cache lines 124(A)-124(Z) includes a data portion 124a and a metadata portion 124b to store data 126a and metadata 126b, respectively. The metadata portion 124b may contain tag information 128, which may also be referred to herein as a memory tag 128, associated with the data 126a in the data portion 124a, ¶0019. An address calculation circuit 112 is configured to calculate a load address 102 that points to the target memory address of a load operation.
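As a reading aid, the claim-1 main-memory layout as the rejection maps it (Chaffin's tagged metadata portion with its ECC, plus Kim's data area carrying its own ECC) can be sketched as follows. Every class and field name below is invented for illustration; nothing is taken from any cited reference's actual implementation:

```python
from dataclasses import dataclass

# Toy model of the claim-1 layout: a data storage region holding related
# data with a first ECC, and a metadata storage region holding metadata
# with a second ECC. Names here are hypothetical.

@dataclass
class DataEntry:
    data: bytes
    ecc1: int              # "first error correction code" (per Kim's user data area)

@dataclass
class MetadataEntry:
    metadata: bytes        # e.g. a memory tag, per Chaffin's metadata portion
    ecc2: int              # "second error correction code"

@dataclass
class MainMemory:
    data_region: list      # stores related data and its first ECC
    metadata_region: list  # stores metadata and its second ECC

main_memory = MainMemory(
    data_region=[DataEntry(b"related data", ecc1=0b101)],
    metadata_region=[MetadataEntry(b"tag", ecc2=0b011)],
)
```

The point of the sketch is only that the two ECCs live in separate regions, which is the limitation Kim is cited to supply.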
A tag checking circuit 116 performs the memory tag checking, which includes confirming that the load operation tag 104 associated with the load operation matches the memory tag 128 (e.g., a load address tag 128) in the metadata portion 124b in one of the plurality of cache lines 124A-124Z associated with the load address, ¶0023.

However, Chaffin does not teach the cache control state machine being configured to receive an access request corresponding to the metadata and perform a corresponding operation on target metadata cached in the storage unit according to the access request. In an analogous art of cache management, Kachare, Fig. 3, discloses a memory 120 that uses a cache-coherent interconnect protocol, such as the Compute Express Link (CXL) protocols. Memory 120 includes cache controller 315 and cache memory 230 (¶0054). FIG. 5 shows details of metadata managed by cache controller 315 of FIG. 3, according to embodiments of the disclosure. In FIG. 5, cache controller 315 of FIG. 3 may manage metadata 505. Metadata 505 may be stored in any desired location: for example, in memory 320 of FIG. 3. Metadata 505 may include address 510 of the data, size 515 of the data, clean/dirty status 520 of the data (that is, whether or not the data stored in memory 320 of FIG. 3 is unchanged since it was copied from storage device 325 of FIG. 3 into memory 320 of FIG. 3), temperature 525 of the data, last access time 530 for the data, access count 535 for the data, and/or access frequency 540 for the data (¶0077). Kachare discloses the cache control state machine being configured to receive an access request corresponding to the metadata: in Fig. 7, cache controller 315 determines if the data being accessed (either read or written) is currently in memory 320; and to perform a corresponding operation on target metadata cached in the storage unit according to the access request: if so, cache controller 315 accesses the data and updates metadata 505 accordingly (¶0070).
Chaffin and Kachare are analogous because they both teach accessing a cache. Thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to incorporate the teaching of Kachare into the teaching of Chaffin to obtain the claimed limitations above. The motivation for doing so is to apply a known technique, such as using a cache controller or cache control state machine, into a system/process of Chaffin ready for improvement to yield predictable results.

Claim 9 recites the method corresponding to the operation of the system of claim 1, and is rejected under the same rationale cited for claim 1.

Regarding claim 2: The system according to claim 1, wherein the cache control state machine performing the corresponding operation on the target metadata cached in the storage unit according to the access request comprises: determining whether the target metadata corresponding to the access request is cached in the storage unit, and when it is determined that the target metadata is cached in the storage unit, performing the corresponding operation on the target metadata;

Kachare discloses that cache controller 315 of FIG. 3 may determine if the data being accessed (either read or written) is currently in memory 320 of FIG. 3. If so, then at block 710, cache controller 315 may access the data and may update metadata 505 of FIG. 5 (¶0090). Chaffin also discloses comparing a tag from an operation with the tag in the cache line (metadata cached in the storage unit), ¶0019, ¶0023. Comparing the target tag with the tag from the operation is interpreted as determining whether the target metadata corresponding to the access is cached.

when it is determined that the target metadata is not cached in the storage unit, retrieving a target cacheline where the target metadata is located from the main memory and caching the target cacheline into the storage unit.
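The limitation just quoted describes a conventional cache lookup. As a reading aid, here is a toy direct-mapped sketch of that flow: split the address into index and tag, treat a tag match as "target metadata is cached," and otherwise retrieve the target cacheline from main memory and overwrite a victim line. The geometry, names, and address split are all invented for illustration, not taken from any cited reference:

```python
NUM_SETS = 4       # hypothetical cache geometry
LINE_WORDS = 8     # addresses per cacheline

def split(addr):
    """Split an address into (index, tag), as in a conventional cache."""
    line_no = addr // LINE_WORDS
    return line_no % NUM_SETS, line_no // NUM_SETS

class MetadataCache:
    def __init__(self, main_memory):
        self.main_memory = main_memory       # line_no -> cacheline contents
        self.lines = [None] * NUM_SETS       # each entry: (tag, cacheline) or None
        self.hits = self.misses = 0

    def access(self, addr):
        index, tag = split(addr)
        entry = self.lines[index]
        if entry is not None and entry[0] == tag:
            self.hits += 1                   # target metadata is cached: operate on it
            return entry[1]
        self.misses += 1                     # not cached: retrieve the target cacheline
        line = self.main_memory[addr // LINE_WORDS]
        self.lines[index] = (tag, line)      # overwrite the victim line with it
        return line

main_memory = {n: f"line-{n}" for n in range(8)}
cache = MetadataCache(main_memory)
```

In a direct-mapped sketch the victim selection of claim 3 is trivial (the indexed line is the victim); a set-associative cache would add a replacement policy at that step.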
Kachare: if the data is not currently in memory 320, then cache controller 315 may check whether there is space in memory 320 to store the data. If not, then at block 725, cache controller 315 may evict some data from memory 320, after which there is space in memory 320 for the data. Then, at block 730, cache controller 315 may copy the data from storage device 325 of FIG. 3 into memory 320 (¶0091).

Claim 10 recites the method corresponding to the operation of the system of claim 2, and is rejected under the same rationale cited for claim 2.

Regarding claim 3: The system according to claim 2, wherein the cache control state machine caching the target cacheline into the storage unit comprises: selecting a cacheline from a plurality of cachelines in the storage unit as a victim cacheline, and overwriting the victim cacheline with the obtained target cacheline.

Kachare: if the data is not currently in memory 320, then cache controller 315 may check whether there is space in memory 320 to store the data. If not, then at block 725, cache controller 315 may evict some data from memory 320, after which there is space in memory 320 for the data. Then, at block 730, cache controller 315 may copy the data from storage device 325 of FIG. 3 into memory 320 (¶0091). It is noted that memory 320 is a volatile memory (DRAM); thus, by the nature of DRAM, the data can be overwritten. There are a finite number of ways to write data in a DRAM, which include overwriting data. Thus, it would have been obvious to one of ordinary skill in the art to overwrite data as one of a finite number of choices.

Claim 11 recites the method corresponding to the operation of the system of claim 3, and is rejected under the same rationale cited for claim 3.

Claims 4, 6, 12, and 14 are rejected under 35 U.S.C. §103 as being unpatentable over Chaffin et al (U.S. 2022/0043748), in view of Kim et al (U.S. 2013/0185268), further in view of Kachare et al (U.S. 2024/0311049), and further in view of Jim Handy, “Cache Book” (1998).

Regarding claim 4: The combination of Chaffin does not expressly teach the system according to claim 3, wherein before overwriting the victim cacheline with the obtained target cacheline, the cache control state machine is further configured to: determine whether the victim cacheline has been modified based on a state field of the victim cacheline, and when it is determined that the victim cacheline has not been modified, overwrite the victim cacheline with the obtained target cacheline; and when it is determined that the victim cacheline has been modified, store the victim cacheline into the main memory and overwrite the victim cacheline in the storage unit with the obtained target cacheline.

Handy discloses the MESI protocol: when there is a read miss, a determination is made as to whether or not the victim cache line has been modified; the cache line is updated/overwritten when it has not been modified (Shared or Exclusive state), or the modified victim cache line is evicted to main memory and then updated/overwritten with new data from main memory. See Section 4.3.1 and Table 4.1, CPU Bus, Read miss, Exclusive and Modified sections, respectively. Handy discloses a well-known MESI technique as shown in chapter 4. Thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to incorporate the teaching of Handy to apply a known technique into the teaching of Chaffin to obtain the claimed limitation above. The motivation is to apply a well-known technique into a system/process of Chaffin ready for improvement, to yield predictable results.

Claim 12 recites the method corresponding to the operation of the system of claim 4, and is rejected under the same rationale cited for claim 4.

Regarding claim 6: Kachare discloses that cache controller 315 of FIG. 3 may determine if the data being accessed (either read or written) is currently in memory 320 of FIG. 3. If so, then at block 710, cache controller 315 may access the data and may update metadata 505 of FIG. 5 (¶0090). Chaffin also discloses comparing a tag from an operation with the tag in the cache line (metadata cached in the storage unit), ¶0019, ¶0023. Comparing the target tag with the tag from the operation is interpreted as determining whether the target metadata corresponding to the access is cached.

However, the combination of Chaffin does not teach that after the cache control state machine performs the corresponding operation on the target metadata cached in the storage unit according to the access request, the cache control state machine is further configured to mark a state field of the target cacheline where the target metadata is located as Modified to indicate that the target cacheline has been modified.

Handy discloses the MESI protocol: when there is a write hit, the cache line state is updated to Modified. See Section 4.3.1 and Table 4.1, CPU Bus, Write hit, Exclusive section. Handy discloses a well-known MESI technique as shown in chapter 4. Thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to incorporate the teaching of Handy to apply a known technique into the teaching of Chaffin to obtain the claimed limitation above. The motivation is to apply a well-known technique into a system/process of Chaffin ready for improvement, to yield predictable results.

Claim 14 recites the method corresponding to the operation of the system of claim 6, and is rejected under the same rationale cited for claim 6.

Claims 5 and 13 are rejected under 35 U.S.C. §103 as being unpatentable over Chaffin et al (U.S. 2022/0043748), in view of Kim et al (U.S. 2013/0185268), and further in view of Moyer et al (U.S. 2023/0136114).
Regarding claim 5: Kachare: if the data is not currently in memory 320, then cache controller 315 may check whether there is space in memory 320 to store the data. If not, then at block 725, cache controller 315 may evict some data from memory 320, after which there is space in memory 320 for the data. Then, at block 730, cache controller 315 may copy the data from storage device 325 of FIG. 3 into memory 320 (¶0091).

However, Chaffin and Kachare do not teach the system according to claim 2, wherein when the access request is a read request, after the cache control state machine caches the target cacheline into the storage unit, the cache control state machine is further configured to mark a state field of the target cacheline as Exclusive to indicate that the target cacheline has not been modified. In an analogous art of cache memory (abstract), Moyer suggests that a cache miss occurs in the event that a memory instruction (such as a load or a store, or any instruction that reads or writes memory) or any hardware prefetching mechanism attempts to access a cache. To service this cache miss, the cache controller obtains the cache line from a memory and places that cache line into the cache. The cache controller also sets the coherency state for this cache line to one of the possible states, such as exclusive or shared (¶0028). Moyer and Chaffin are analogous because they both concern cache access. Thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to incorporate the teaching of Moyer into the teaching of Chaffin to obtain the claimed limitations above. The motivation for doing so is to apply a known technique into a system/process of Chaffin ready for improvement, to yield predictable results.

Claim 13 recites the method corresponding to the operation of the system of claim 5, and is rejected under the same rationale cited for claim 5.

Claims 7-8 are rejected under 35 U.S.C. §103 as being unpatentable over Chaffin et al (U.S. 2022/0043748), in view of Kim et al (U.S. 2013/0185268), further in view of Kachare et al (U.S. 2024/0311049), further in view of Jim Handy, “Cache Book” (1998), and further in view of Terpstra et al (U.S. 2024/0184697).

Regarding claim 7: Kachare teaches in Fig. 5 that the cache controller manages metadata 505, which is stored in the cache 320 or within the cache controller 315. The metadata includes the address of the data, state of the data, access count, last access time, and access frequency for the data currently cached (¶0076, ¶0077).

However, the combination of Chaffin and Kachare does not teach wherein the cache memory further comprises a tag array memory configured to store a tag field, the state field, and a count field corresponding to each cacheline in the storage unit; wherein the tag field is configured to indicate a storage address of the cacheline in the main memory, the count field is configured to indicate the historical access times of the cacheline, and the state field is configured to indicate the state of the cacheline.

In an analogous art of memory caching (abstract), Terpstra teaches in Fig. 1 a cache that includes a cache memory/databank 140 with multiple entries configured to store respective cache lines, and a tag array 130 that includes multiple tags, each tag including a pointer that points to an entry in the databank 140; the databank 140 is one of multiple databanks, and a cache tag stored in the array 130 includes a bank identifier and an index for an entry in the databank corresponding to the bank identifier (¶0027). Terpstra, Kachare, and Chaffin are directed to cache memory.
Thus, one of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to incorporate the teaching of Terpstra into the teaching of Chaffin to include a cache tag comprising a tag field, state field, and count field to indicate the storage address of the respective cache line in main memory, the historical access times of the respective cache line, and the state of the respective cache line. The motivation for doing so is to apply a known technique into a system/process of Chaffin ready for improvement, to yield predictable results.

Regarding claim 8: The system according to claim 7, wherein determining whether the target metadata corresponding to the access request is cached in the storage unit comprises: comparing, by the cache control state machine, the tag field at a target address corresponding to the access request with the tag field of the cacheline in the at least one storage unit pointed to by an index field in the target address corresponding to the access request;

Kachare discloses that cache controller 315 of FIG. 3 may determine if the data being accessed (either read or written) is currently in cache 320. If so, then at block 710, cache controller 315 may access the data and may update metadata 505 of FIG. 5 (¶0090). Chaffin also discloses that the tag check circuit 136 compares the store address tag 128 associated with the store address 108 (e.g., from the metadata 126b) to the store operation tag 122 associated with the store operation (e.g., provided by the address calculation circuit 132, which may be via the cache lookup circuit 134). Based on the comparison, the tag check circuit 136 generates an indication 142 indicating that the store address tag 128 matches the store operation tag 122 or that it does not, ¶0019, ¶0023, ¶0028. Terpstra teaches in Fig. 1 a cache that includes a cache memory/databank 140 with multiple entries configured to store respective cache lines, and a tag array 130 that includes multiple tags, each tag including a pointer that points to an entry in the databank 140; the databank 140 is one of multiple databanks, and a cache tag stored in the array 130 includes a bank identifier and an index for an entry in the databank corresponding to the bank identifier (¶0027). In Fig. 4, a technique to access cached data includes matching a request address to a tag stored in an array of cache tags, wherein the cache tag includes a data pointer that points to an entry in a databank; and, responsive to the request, accessing (430), using the data pointer, a cache line of data stored in an entry of the databank (¶0025).

when the tag field in the target address matches the tag field of any cacheline in the at least one storage unit, determining that the target metadata corresponding to the access request is cached in the storage unit; and when the tag field in the target address does not match the tag fields of all cachelines in the at least one storage unit, determining that the target metadata corresponding to the access request is not cached in the storage unit.

Chaffin: based on the comparison, the tag check circuit 136 generates an indication 142 indicating that the store address tag 128 matches the store operation tag 122 or that it does not. The indication 142 is used by the data return circuit 118 to determine whether to complete the store operation when the store operation commits. In a second case, the cache lookup circuit 134 determines that the plurality of cache lines 124(A)-124(Z) in the cache memory 120 do not contain the data 126a and metadata 126b associated with the store address 108 for the store operation (¶0028-¶0029).
It is implied that when the operation address matches the stored address in the cache, the target metadata is also cached, and that the target metadata is not cached when the comparison result is negative, since the combination of Chaffin teaches a cache line that includes data and corresponding metadata, as in the previous claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Thottethodi et al (U.S. 2014/0297961) discloses that addresses (e.g., virtual addresses or corresponding physical addresses) for respective cache lines are divided into multiple portions, including an index and a tag. Cache lines (which may also be referred to as blocks) are installed in the cache data array at locations indexed by the index portions of the corresponding addresses, and tags are stored in the cache tag array at locations indexed by the index portions of the corresponding addresses. (A cache line may correspond to a plurality of addresses that share common index and tag portions.) The cache data array and cache tag array are thus indexed by the index portions of the addresses.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHOA D DOAN, whose telephone number is (571) 272-5950. The examiner can normally be reached Mon-Fri 1000-1700. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ROCIO DEL MAR PEREZ-VELEZ, can be reached at 571-270-5935. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KHOA D DOAN/Primary Examiner, Art Unit 2133
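Several of the rejections above lean on standard MESI behavior from Handy: a write hit marks the line Modified (claim 6), a Modified victim is written back to main memory before being overwritten (claim 4), and a line filled on a read miss can start as Exclusive (claim 5, via Moyer). A minimal sketch of those three behaviors follows; the state names are standard MESI, but the class and function names are invented for illustration and do not come from any cited reference:

```python
# Toy MESI-flavored cache line: enough to show the three claimed behaviors.
MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

class CacheLine:
    def __init__(self, addr, data, state):
        self.addr, self.data, self.state = addr, data, state

def write_hit(line, new_data):
    line.data = new_data
    line.state = MODIFIED               # claim 6: state field marked Modified

def fill_after_miss(victim, addr, main_memory):
    if victim.state == MODIFIED:        # claim 4: dirty victim stored back first
        main_memory[victim.addr] = victim.data
    victim.addr = addr                  # then the victim line is overwritten
    victim.data = main_memory[addr]
    victim.state = EXCLUSIVE            # claim 5: freshly filled line marked
                                        # Exclusive (not yet modified)

main_memory = {0: "old-data", 1: "incoming"}
line = CacheLine(0, "old-data", EXCLUSIVE)
```

A clean (Exclusive or Shared) victim skips the writeback branch entirely, which is the "has not been modified" path of claim 4.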

Prosecution Timeline

Jan 29, 2025: Application Filed
Feb 21, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596491
DYNAMICALLY ADJUSTING MEMORY ALLOCATIONS BASED ON QUALITY-OF-SERVICE (QoS) POOL THRESHOLD VALUES IN PROCESSOR-BASED DEVICES
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586633
FAST PROGRAMMING SCHEME FOR POWER LOSS PROTECTION - A MACHINE LEARNING BASED ALGORITHM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585396
MEMORY SYSTEM AND INFORMATION PROCESSING SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12578882
SYSTEM, METHOD, AND PROGRAM FOR DATA TRANSFER PROCESS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12578885
SELECTIVE DATA MAP UNIT ACCESS
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview (+8.3%): 98%
Median Time to Grant: 2y 0m
PTA Risk: Low
Based on 349 resolved cases by this examiner. Grant probability derived from career allow rate.
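The projected figures follow from the raw counts shown on this page (312 granted of 349 resolved, +8.3% interview lift). Below is a plausible reconstruction of the arithmetic; that the tool computes it exactly this way is an assumption:

```python
# Counts taken from the "Examiner Intelligence" panel above.
granted, resolved = 312, 349

grant_probability = granted / resolved       # career allow rate, ~0.894 -> shown as 89%
with_interview = grant_probability + 0.083   # +8.3 point interview lift, ~0.977 -> 98%
```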