Prosecution Insights
Last updated: April 19, 2026
Application No. 19/029,901

DATA REDUCTION METHOD, APPARATUS, AND SYSTEM

Non-Final OA §102
Filed: Jan 17, 2025
Examiner: KWONG, EDMUND H
Art Unit: 2137
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 86% (280 granted / 324 resolved) — above average, +31.4% vs TC avg
Interview Lift: +7.3% among resolved cases with interview (moderate lift)
Avg Prosecution: 2y 6m (typical timeline)
Career History: 341 total applications across all art units, 17 currently pending
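As a quick check on the figures above, the sketch below recomputes them from the raw counts. It assumes (the dashboard does not say) that the with-interview probability is simply the career allow rate plus the interview lift, rounded; all names are illustrative.

```python
# Recomputing the examiner stats from the raw counts shown above.
# Assumption (not stated by the dashboard): the "with interview"
# figure is the career allow rate plus the interview lift, rounded.

granted, resolved, total_apps = 280, 324, 341
interview_lift = 7.3                       # percentage points

allow_rate = granted / resolved * 100      # 86.4... -> shown as 86%
with_interview = round(allow_rate + interview_lift)  # 93.7 -> 94
pending = total_apps - resolved            # 17 currently pending

print(round(allow_rate), with_interview, pending)   # 86 94 17
```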

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 6.4% (-33.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 324 resolved cases
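One consistency check on the statute table: assuming each "vs TC avg" delta is the statute-specific rate minus the Tech Center baseline (an assumption, since the dashboard does not define the delta), every row implies the same 40% baseline, which matches the single average-estimate line described in the chart caption. A short sketch of that check, with illustrative names:

```python
# Each statute's rate minus its "vs TC avg" delta should recover the
# Tech Center baseline estimate; all four rows imply the same 40.0%.

rates = {
    "101": (8.2, -31.8),
    "103": (52.6, +12.6),
    "102": (21.5, -18.5),
    "112": (6.4, -33.6),
}

tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
assert set(tc_avg.values()) == {40.0}   # one common baseline for all statutes
print(tc_avg)
```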

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Application/Preliminary Amendment

This action is in response to Applicant's filing on 17 January 2025. Claims 1-20 were originally pending. The preliminary amendment filed 5 February 2025, amending claims 4, 8, 9, 10, 14, 18, 19, and 20, has been entered. No claims have been added or cancelled. Accordingly, claims 1-20 remain pending and are under consideration.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 27 February 2025 and 31 October 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: Data Reduction Using a Hot Data Table and a Cache Table.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Agarwala et al. (US 9582421 B1, hereinafter Agarwala).

Regarding claims 1 and 11, taking claim 11 as exemplary, Agarwala discloses an apparatus for data reducing, wherein the apparatus comprises at least one processor and a memory, the memory stores a computer program that, when executed by the at least one processor, causes the apparatus to perform operations (see Agarwala, Figs. 1 and 9, Col. 3, line 59 – Col. 4, line 20, and Col. 13, lines 39-45) comprising: obtaining a first address (see Agarwala, Fig. 7, 702, "Object read (data or metadata) Input: logical address"); and updating, based on popularity of the first address, an entry corresponding to the first address in a hot data table (see Agarwala, Fig. 3A, disclosing L0 cache 302; Col. 6, lines 45-48, disclosing that the L0 cache 302 is a logical index cache that maintains a mapping of logical metadata to a fingerprint of objects stored in L1 cache 304; Fig. 6, disclosing the L0 cache including LRU lists 602 and 604; Col. 11, lines 41-52, disclosing that the DOCL L0 cache is updated to reflect the current logical metadata to fingerprint mapping; and Fig. 7, 704, 706, 708, and Col. 12, lines 1-11, disclosing use of a logical address to search the L0 cache, where a cache hit moves the corresponding entry to the head of the L0 list, in other words, updated based on its popularity), wherein popularity of a storage address in the hot data table is greater than popularity of a storage address in a first cache table (see Agarwala, Fig. 3A, disclosing L0, L1, and L2 caches; Col. 6, lines 45-48; and Col. 10, line 44 – Col. 11, line 2, disclosing that an L1 cache miss results in an L2 search and, if the desired object is found in L2, it is promoted to the MFU list in the L1 cache, in other words, hotter and more recently accessed addresses are found in the higher-level caches while colder, less recently accessed addresses are found in the lower-level caches), and the first cache table is used for data reduction (see Agarwala, Col. 10, lines 5-25, disclosing that the L2 cache includes an L2 hash table 506 and that the L2 cache is a deduplicated cache, in other words, a reduced amount of data is stored via deduplication; additionally, the L2 cache includes an LRU list used for eviction, which is another form of "data reduction").

Regarding claims 2 and 12, taking claim 12 as exemplary, Agarwala discloses the apparatus according to claim 11 as above. Agarwala further discloses wherein the first address is a storage address of first data (see Agarwala, Fig. 7, 702, "Object read (data or metadata) Input: logical address"); and the updating, based on popularity of the first address, an entry corresponding to the first address in a hot data table comprises: when the first address is a hot address, updating, to a fingerprint of the first data, a fingerprint in the entry corresponding to the first address (see Agarwala, Col. 6, lines 45-48, disclosing that the L0 cache is a mutable logical index cache that maintains a mapping of logical metadata to a fingerprint of the objects stored in the L1 cache, and Col. 12, lines 1-11, disclosing that a cache hit in the L0 cache moves the logical address entry of the object to the head of the L0 list, so that the object neither grows stale as quickly nor is evicted as quickly).

Regarding claims 3 and 13, taking claim 13 as exemplary, Agarwala discloses the apparatus according to claim 12 as above. Agarwala further discloses wherein the operations comprise: determining that the first address is the hot address when the first address exists in the hot data table (see Agarwala, Col. 12, lines 1-11, disclosing that a cache hit in the L0 cache moves the logical address entry of the object to the head of the L0 list, in other words, the address is hot).

Regarding claims 4 and 14, taking claim 14 as exemplary, Agarwala discloses the apparatus according to claim 11 as above. Agarwala further discloses wherein the updating, based on popularity of the first address, an entry corresponding to the first address in a hot data table comprises: when the first address is a hot address, updating an eviction parameter in the entry corresponding to the first address (see Agarwala, Col. 12, lines 1-11, disclosing that a cache hit in the L0 cache moves the logical address entry of the object to the head of the L0 list, so that the object neither grows stale as quickly nor is evicted as quickly).

Regarding claims 5 and 15, taking claim 15 as exemplary, Agarwala discloses the apparatus according to claim 14 as above. Agarwala further discloses wherein the eviction parameter comprises at least one of the following: a quantity of times that data is written into the first address or a timestamp at which data is written into the first address for the last time (see Agarwala, Col. 10, lines 5-24, disclosing that the L2 cache includes an LRU list tracking the hotness of segments, and that a segment that has not been recently added or accessed can be a candidate for eviction from the L2 cache).

Regarding claims 6 and 16, taking claim 16 as exemplary, Agarwala discloses the apparatus according to claim 11 as above. Agarwala further discloses wherein the updating, based on popularity of the first address, an entry corresponding to the first address in a hot data table comprises: when the first address changes to a cold address, reading, into the first cache table, the entry corresponding to the first address (see Agarwala, Col. 6, lines 42-48, disclosing that the L0 cache is a mutable logical index cache that maintains a mapping of metadata to fingerprints of objects in the L1 cache, in combination with Col. 8, lines 29-45, disclosing the L1 cache including an MRU/MFU list and stale objects evicted from the L1 cache to the L2 cache, and Col. 8, lines 54-56, disclosing that older versions of an object will eventually become old and be evicted from the cache, in other words, a cold address in the L1 layer results in eviction to L2, Applicant's first cache table).

Regarding claims 7 and 17, taking claim 17 as exemplary, Agarwala discloses the apparatus according to claim 16 as above. Agarwala further discloses wherein the operations further comprise: when an amount of data of the hot data table reaches a preset amount of data, determining that the first address changes to the cold address (see Agarwala, Col. 12, lines 1-10, disclosing the L0 cache having an L0 list of logical addresses; when addresses are old/stale, in other words cold, a full L0 list of addresses from "hot" to "cold" results in a cold address being evicted).

Regarding claims 8 and 18, taking claim 18 as exemplary, Agarwala discloses the apparatus according to claim 16 as above. Agarwala further discloses wherein the reading, into the first cache table, the entry corresponding to the first address comprises: reading, into the first cache table based on an eviction parameter in the entry corresponding to the first address, the entry corresponding to the first address; or reading, into a second cache table, the entry corresponding to the first address, wherein popularity of a storage address in the second cache table is less than the popularity of the storage address in the hot data table and is greater than the popularity of the storage address in the first cache table, and, when the second cache table meets an eviction condition, reading, into the first cache table, the entry corresponding to the first address (see Agarwala, Col. 8, lines 29-45, disclosing the L1 cache including an MRU/MFU list and stale objects evicted from the L1 cache to the L2 cache, and Col. 8, lines 54-56, disclosing that older versions of an object will eventually become old and be evicted from the cache, in other words, a cold address in the L1 layer, Applicant's second cache table, results in eviction to L2, Applicant's first cache table).

Regarding claims 9 and 19, taking claim 19 as exemplary, Agarwala discloses the apparatus according to claim 11 as described above. Agarwala further discloses wherein the operations comprise: when the first address is a cold address, recording, in the hot data table, the entry corresponding to the first address (see Agarwala, Col. 12, lines 12-15, disclosing no hit in the L0 cache, in other words, the address is a cold address; Col. 12, lines 25-30, disclosing an L2 cache hit, which results in the object being moved to the head of the MFU list in the L1 cache; and Col. 6, lines 45-48, disclosing that the L0 cache 302 is a logical index cache that maintains a mapping of logical metadata to a fingerprint of objects stored in L1 cache 304, which therefore results in the entry being in the L0 and L1 caches); and recording, in the first cache table, the entry corresponding to the first address (see Agarwala, Col. 12, lines 28-30, disclosing moving the object segment to the head of the L2 segment list).

Regarding claims 10 and 20, taking claim 20 as exemplary, Agarwala discloses the apparatus according to claim 11 as described above. Agarwala further discloses wherein the operations comprise: when a condition for enabling a hot eviction function is met, enabling the hot eviction function to determine whether the first address is a hot address, wherein the condition for enabling the hot eviction function comprises at least one of the following: a proportion of a hot address in a storage address of a persistent storage medium is greater than a preset proportion, or an amount of data of the first cache table is greater than a target amount of data (see Agarwala, Fig. 7, step 726, disclosing promotion of a read object to the head of the MRU list and eviction of content in the MRU list if necessary to create space for the read/requested object, in other words, the hot eviction condition is met when the MRU list is full, and Col. 12, lines 38-42, disclosing the same).

EXAMINER'S NOTE

The Examiner has cited particular columns and line numbers in the references applied to the claims above for the convenience of the Applicants. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well.
It is respectfully requested that the Applicants, in preparing responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or cited by the Examiner.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure.

Goswami (US 2022/0107925 A1) discloses using a deduplication fingerprint index in a hash data structure comprising a plurality of blocks, wherein a block of the plurality of blocks comprises fingerprints computed based on content of respective data values. The system merges, in a merge operation, updates for the deduplication fingerprint index into the hash data structure stored in a persistent storage. As part of the merge operation, the system mirrors the updates to a cached copy of the hash data structure in a cache memory, and updates, in an indirect block, information regarding locations of blocks in the cached copy of the hash data structure.

Seo et al. (US 2012/0239862 A1) discloses a memory controller including a buffer unit configured to store an input address table and a first hot address table, and a processing unit configured to judge whether an address from the host coincides with one of the addresses stored in the input address table and to store the address from the host in the first hot address table according to the judgment.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDMUND H KWONG, whose telephone number is (571) 272-8691. The examiner can normally be reached Monday-Friday, 10-6 PT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Arpan P. Savla, can be reached at 571-272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/E.H.K/
Examiner, Art Unit 2137

/Arpan P. Savla/
Supervisory Patent Examiner, Art Unit 2137
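The §102 mapping above treats Agarwala's LRU-ordered L0 cache (logical address to fingerprint, move-to-front on a hit, demotion of stale entries to a lower tier) as the claimed hot data table. A minimal sketch of that behavior, with all names and the two-tier structure invented here for illustration rather than taken from the application or from Agarwala:

```python
from collections import OrderedDict

class HotDataTable:
    """Illustrative sketch of the L0-cache behavior cited in the rejection:
    a move-to-front mapping of logical address -> data fingerprint. A hit
    promotes the entry to the head (it stays hot); once the table is full,
    the coldest tail entry is demoted into a lower-tier cache table."""

    def __init__(self, capacity, cold_tier):
        self.capacity = capacity
        self.entries = OrderedDict()   # head = hottest, tail = coldest
        self.cold_tier = cold_tier     # stands in for the "first cache table"

    def access(self, address, fingerprint):
        hit = address in self.entries
        self.entries[address] = fingerprint            # record/update fingerprint
        self.entries.move_to_end(address, last=False)  # promote to the head
        if not hit and len(self.entries) > self.capacity:
            # Table full: demote the coldest entry, akin to eviction to L2.
            cold_addr, cold_fp = self.entries.popitem(last=True)
            self.cold_tier[cold_addr] = cold_fp

cold_table = {}
hot = HotDataTable(capacity=2, cold_tier=cold_table)
hot.access(0x10, "fp-a")
hot.access(0x20, "fp-b")
hot.access(0x10, "fp-a2")   # hit: entry updated and moved to the head
hot.access(0x30, "fp-c")    # full: coldest address 0x20 demoted
print(list(hot.entries), cold_table)   # [48, 16] {32: 'fp-b'}
```

Agarwala's actual scheme has three cache tiers and deduplicates in the lowest one; this sketch collapses that to a single demotion step just to show the move-to-front "popularity" update the rejection relies on.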

Prosecution Timeline

Jan 17, 2025
Application Filed
Mar 20, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12585383
Method and System for Hardware Accelerated Online Capacity Expansion
2y 5m to grant Granted Mar 24, 2026
Patent 12561250
STORAGE DEVICE FOR MANAGING MAP DATA IN A HOST AND OPERATION METHOD THEREOF
2y 5m to grant Granted Feb 24, 2026
Patent 12554591
DYNAMIC ADAPTATION OF BACKUP POLICY SCHEMES BASED ON THREAT CONFIDENCE
2y 5m to grant Granted Feb 17, 2026
Patent 12541314
INFORMATION PROCESSING APPARATUS, AND CONTROL METHOD FOR MANAGING LOG INFORMATION THAT PROVIDES A STORAGE FUNCTION CONNECTED TO A NETWORK
2y 5m to grant Granted Feb 03, 2026
Patent 12536097
PSEUDO MAIN MEMORY SYSTEM
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 94% (+7.3%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 324 resolved cases by this examiner. Grant probability derived from career allow rate.
