Prosecution Insights
Last updated: April 19, 2026
Application No. 18/949,822

CACHING USING MACHINE LEARNED PREDICTIONS

Status: Non-Final OA (§103)
Filed: Nov 15, 2024
Examiner: LOONAN, ERIC T
Art Unit: 2137
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 1 (Non-Final)

Grant Probability: 64% (Moderate)
OA Rounds: 1-2
To Grant: 4y 0m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 271 granted / 423 resolved; +9.1% vs TC avg)
Interview Lift: +27.0% (strong), comparing resolved cases with vs. without interview
Avg Prosecution (typical timeline): 4y 0m; 29 currently pending
Total Applications (career history): 452, across all art units

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 20.1% (-19.9% vs TC avg)
§112: 19.7% (-20.3% vs TC avg)

Tech Center averages are estimates • Based on career data from 423 resolved cases
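One pattern worth noting in the statute figures: every "vs TC avg" delta implies the same Tech Center baseline. The sketch below (editorial, using only numbers reported above; the baseline itself is derived, not reported) makes the arithmetic explicit and also checks the 271/423 career allow rate.

```python
# Sanity-check the dashboard arithmetic. Per-statute rates and "vs TC avg"
# deltas are taken from the report; the TC baseline is derived from them.
examiner_rate = {"101": 8.1, "103": 45.7, "102": 20.1, "112": 19.7}   # percent
vs_tc_delta   = {"101": -31.9, "103": 5.7, "102": -19.9, "112": -20.3}

# examiner_rate = tc_avg + delta, so tc_avg = examiner_rate - delta
tc_avg = {s: round(examiner_rate[s] - vs_tc_delta[s], 1) for s in examiner_rate}
print(tc_avg)                   # every statute implies the same 40.0% baseline
print(round(271 / 423 * 100))   # 64, matching the reported career allow rate
```

That every statute implies a flat 40.0% suggests the tool benchmarks all four statutes against a single Tech Center estimate rather than per-statute averages.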

Office Action (§103)
DETAILED ACTION

This Office Action is responsive to the initial filing of application 18/949,822 on 15 November 2024. Claims 1-18, as originally filed, are currently pending and have been fully considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 16 January 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

The following claims are objected to due to informalities:

Claims 1, 7, and 13: lack of antecedent basis for “the future” and “the other data sets currently stored in the cache”.

Claims 3, 9, and 15: “predicting a memory address for a date set stored in the cache” is believed to be a typo and should be “predicting a memory address for a data set stored in the cache”.

Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over TIRUNAGARI (US PGPub 2012/0041914) in view of FLYNN et al. (US PGPub 2011/0066808).

With respect to Claim 1, TIRUNAGARI discloses a system comprising a data processing apparatus (Fig. 7, Processor 770) and one or more storage devices on which are stored instructions that are operable (Fig. 7, System Memory 710 comprises Application Code 715), when executed by the data processing apparatus, to cause the data processing apparatus to perform operations comprising:

determining that particular data is not stored in a cache (¶ [0017] – “cache miss”) that is full (¶ [0018] – “when a cache is full, a caching algorithm may be applied to determine which data to remove to make room for new data to be cached”);

in response to determining that the particular data is not stored in the cache that is full, determining, by a caching process, whether to use a machine learning system or a non-machine learned caching process to evict data stored in the cache (Abstract – a neural network may select and apply a caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached; ¶ [0024] – a selected caching algorithm may or may not change as a result of evaluating performance related parameters; ¶ [0020] – the neural cache {‘machine learning system’} may apply an optimal caching algorithm; ¶ [0018] – the cache replacement algorithm applied may comprise LRU {‘non-machine learned caching process’}), wherein:

the machine learning system generates, as a prediction and by using a machine learned process, … a data set to evict from the cache (Abstract – a neural network may select and apply a caching algorithm; ¶ [0028] – a caching algorithm may be selected based on various reasons including historical cache hit rates for a previous execution of a given or similar application; ¶ [0019] – the system may implement a “neural cache” {analogous to a machine learning system} for cache efficiency; ¶ [0020] – the neural cache may apply an optimal caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a caching algorithm by the neural network analogous to ‘the machine learning system generates, as a prediction and by using a machine learned process, … a data set to evict from the cache’}); and

the non-machine learned caching process is separate from the machine learning system (¶ [0067] – “Functionality may be separated or combined in blocks differently in various realizations of the systems and methods described herein or described with different terminology”) and generates … a data set in the cache to evict from the cache (Abstract – a neural network may select and apply a caching algorithm; ¶ [0018] – the cache replacement algorithm applied may comprise LRU {‘non-machine learned caching process’}; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a LRU caching algorithm analogous to ‘the non-machine learned caching process … generates … a data set to evict from the cache’});

the determining of whether to use the machine learning system or the non-machine learned caching process comprising:

determining a predicted eviction accuracy that measures an accuracy of the prediction of the machine learning system in predicting that the data set to evict from the cache … will be used further in the future than the other data sets currently stored in the cache (¶ [0028] – the caching algorithm may be selected based on various reasons including historical cache hit rates for a previous execution of a given or similar application; ¶ [0019] – the system may implement a “neural cache” {analogous to a machine learning system} for cache efficiency; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the application of a caching algorithm by the neural network analogous to ‘the machine learning system … predict(s) the data set to evict from the cache’}; ¶ [0024] – a selected caching algorithm may or may not change as a result of evaluating performance related parameters {thus, further application of the selected caching algorithm may be used to determine next data to evict});

determining whether the predicted eviction accuracy of the machine learning system satisfies a threshold eviction accuracy (¶ [0024] – a value for one performance related parameter {e.g. cache hit rate} may be evaluated to see whether it meets or exceeds a threshold value);

in response to determining that the predicted eviction accuracy of the machine learning system satisfies the threshold eviction accuracy (¶ [0024] – a selected caching algorithm may or may not change as a result of evaluating performance related parameters):

selecting, by the caching process, the machine learning system to predict … a data set stored in the cache to evict from the cache (¶ [0028] – a caching algorithm may be selected based on various reasons including historical cache hit rates for a previous execution of a given or similar application; ¶ [0019] – the system may implement a “neural cache” {analogous to a machine learning system} for cache efficiency; ¶ [0020] – the neural cache may apply an optimal caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a caching algorithm by the neural network analogous to ‘the machine learning system predicts … a data set stored in the cache to evict from the cache’});

predicting, by the machine learned process of the machine learning system, … the data set stored in the cache to evict from the cache (¶ [0019] – the system may implement a “neural cache” {analogous to a machine learning system} for cache efficiency; ¶ [0020] – the neural cache may apply an optimal caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a caching algorithm by the neural network analogous to ‘the machine learning system predicts … a data set stored in the cache to evict from the cache’});

evicting, from the cache, the data set … (¶ [0020] – the neural cache may apply an optimal caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached); and

storing the particular data in the cache … (¶ [0018] – when a cache is full, a caching algorithm may be applied to determine which data to remove to make room for new data to be cached); and

in response to determining that the predicted eviction accuracy of the machine learning system does not satisfy the threshold eviction accuracy (¶ [0024] – a selected caching algorithm may or may not change as a result of evaluating performance related parameters):

determining, using a non-machine learning system, … a data set stored in the cache to evict from the cache (¶ [0018] – the cache replacement algorithm applied may comprise LRU {‘non-machine learned caching process’}; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a LRU caching algorithm analogous to ‘determining, using a non-machine learned caching process … a data set stored in the cache to evict from the cache’});

evicting, from the cache, the data set … (¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached); and

storing the particular data in the cache … (¶ [0018] – when a cache is full, a caching algorithm may be applied to determine which data to remove to make room for new data to be cached).

TIRUNAGARI may not explicitly disclose (1) the machine learning system generates, as a prediction and by using a machine learned process, a memory address predicted by the machine learned process, the memory address being a memory location in the cache of a data set to evict from the cache; (2) the non-machine learned caching process … generates a memory address as output, the memory address being a memory location of a data set in the cache to evict from the cache; (3) wherein the data set to evict from the cache is addressed by the memory address.

However, FLYNN discloses (1) the machine learning system generates, as a prediction and by using a machine learned process, a memory address predicted by the machine learned process, the memory address being a memory location in the cache of a data set to evict from the cache; (2) the non-machine learned caching process … generates a memory address as output, the memory address being a memory location of a data set in the cache to evict from the cache; (3) wherein the data set to evict from the cache is addressed by the memory address (¶ [0080] – a cache controller may coordinate the exchange of data between clients and the backing store and may be responsible for maintaining an eviction policy that specifies how and when data is evicted; the eviction policy may be based upon cache eviction metadata; ¶ [0008] – the metadata may comprise cache entries … a cache entry may associate a logical address with one or more locations identifying where the data is stored; ¶ [0009] – the cache entries may be indexed by logical address).

Recited another way, TIRUNAGARI discloses that a machine learning system may select and apply a caching algorithm to identify data to evict from cache. But TIRUNAGARI may not explicitly disclose identifying the data by memory address. However, FLYNN discloses that cached data may be indexed (and thus discarded) by logical address.

TIRUNAGARI and FLYNN are analogous art because they are from the same field of endeavor of cache systems. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of TIRUNAGARI and FLYNN before him or her, to modify the resulting output from implementation of a caching algorithm of TIRUNAGARI to include a memory address as taught by FLYNN.
A motivation for doing so would have been to provide indicators to identify data for the purpose of cache management (Section [0007]) as one needs to first identify the location of data in cache in order to evict the data from the cache. Therefore, it would have been obvious to combine TIRUNAGARI and FLYNN to obtain the invention as specified in the instant claims. With respect to Claim 7, TIRUNAGARI discloses a computer-implemented method, comprising: determining that particular data is not stored in a cache (¶ [0017] – “cache miss”) that is full (¶ [0018] – “when a cache is full, a caching algorithm may be applied to determine which data to remove to make room for new data to be cached); in response to determining that the particular data is not stored in the cache that is full, determining, by a caching process, whether to use a machine learning system or a non-machine learned caching process to evict data stored in the cache (Abstract – a neural network may select and apply a caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached; ¶ [0024] – a selected caching algorithm may or may not change as a result of evaluating performance related parameters; ¶ [0020] – the neural cache {‘machine learning system’} may apply an optimal caching algorithm; ¶ [0018] – the cache replacement algorithm applied may comprise LRU {‘non-machine learned caching process’}), wherein: the machine learning system generates, as a prediction and by using a machine learned process, … a data set to evict from the cache (Abstract – a neural network may select and apply a caching algorithm; ¶ [0028] – a caching algorithm may be selected based on various reasons including historical cache hit rates for a previous execution of a given or similar application; ¶ [0019] – the system may implement a “neural cache” {analogous to a machine learning system} for cache efficiency; ¶ [0020] – the neural cache may apply an 
optimal caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a caching algorithm by the neural network analogous to ‘the machine learning system generates, as a prediction and by using a machine learned process, … a data set to evict from the cache’}); and the non-machine learned caching process is separate from the machine learning system (¶[0067] – “Functionality may be separated or combined in blocks differently in various realizations of the systems and methods described herein or described with different terminology”) and generates … a data set in the cache to evict from the cache (Abstract – a neural network may select and apply a caching algorithm; ¶ [0018] – the cache replacement algorithm applied may comprise LRU {‘non-machine learned caching process’}; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a LRU caching algorithm analogous to ‘the non-machine learned caching process … generates … a data set to evict from the cache’}); the determining of whether to use the machine learning system or the non-machine learned caching process comprising: determining a predicted eviction accuracy that measures an accuracy of the prediction of the machine learning system in predicting that the data set to evict from the cache … will be used further in the future than the other data sets currently stored in the cache (¶ [0028] – the caching algorithm may be selected based on various reasons including historical cache hit rates for a previous execution of a given or similar application; ¶ [0019] – the system may implement a “neural cache” {analogous to a machine learning system} for cache efficiency; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the application of a caching algorithm by the 
neural network analogous to ‘the machine learning system … predict(s) the data set to evict from the cache’}; ¶ [0024] – a selected caching algorithm may or may not change as a result of evaluating performance related parameters {thus, further application of the selected caching algorithm may be used to determine next data to evict}); determining whether the predicted eviction accuracy of the machine learning system satisfies a threshold eviction accuracy (¶ [0024] – a value for one performance related parameter {e.g. cache hit rate} may be evaluated to see whether it meets or exceeds a threshold value); in response to determining that the predicted eviction accuracy of the machine learning system satisfies the threshold eviction accuracy (¶ [0024] – a selected caching algorithm may or may not change as a result of evaluating performance related parameters): selecting, by the caching process, the machine learning system to predict … a data set stored in the cache to evict from the cache (¶ [0028] – a caching algorithm may be selected based on various reasons including historical cache hit rates for a previous execution of a given or similar application; ¶ [0019] – the system may implement a “neural cache” {analogous to a machine learning system} for cache efficiency; ¶ [0020] – the neural cache may apply an optimal caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a caching algorithm by the neural network analogous to ‘the machine learning system predicts … a data set stored in the cache to evict from the cache’}); predicting, by the machine learned process of the machine learning system, … the data set stored in the cache to evict from the cache (¶ [0019] – the system may implement a “neural cache” {analogous to a machine learning system} for cache efficiency; ¶ [0020] – the neural cache may apply an optimal caching algorithm; ¶ [0002] – caching 
algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a caching algorithm by the neural network analogous to ‘the machine learning system predicts … a data set stored in the cache to evict from the cache’}); evicting, from the cache, the data set … (¶ [0020] – the neural cache may apply an optimal caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached); and storing the particular data in the cache … (¶ [0018] – when a cache is full, a caching algorithm may be applied to determine which data to remove to make room for new data to be cached); and in response to determining that the predicted eviction accuracy of the machine learning system does not satisfy the threshold eviction accuracy (¶ [0024] – a selected caching algorithm may or may not change as a result of evaluating performance related parameters): determining, using a non-machine learning system, … a data set stored in the cache to evict from the cache (¶ [0018] – the cache replacement algorithm applied may comprise LRU {‘non-machine learned caching process’}; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a LRU caching algorithm analogous to ‘determining, using a non-machine learned caching process … a data set stored in the cache to evict from the cache’}); evicting, from the cache, the data set … (¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached); and storing the particular data in the cache … (¶ [0018] – when a cache is full, a caching algorithm may be applied to determine which data to remove to make room for new data to be cached). 
TIRUNAGARI may not explicitly disclose (1) the machine learning system generates, as a prediction and by using a machine learned process, a memory address predicted by the machine learned process, the memory address being a memory location in the cache of a data set to evict from the cache; (2) the non-machine learned caching process … generates a memory address as output, the memory address being a memory location of a data set in the cache to evict from the cache; (3) wherein the data set to evict from the cache is addressed by the memory address. However, FLYNN discloses (1) the machine learning system generates, as a prediction and by using a machine learned process, a memory address predicted by the machine learned process, the memory address being a memory location in the cache of a data set to evict from the cache; (2) the non-machine learned caching process … generates a memory address as output, the memory address being a memory location of a data set in the cache to evict from the cache; (3) wherein the data set to evict from the cache is addressed by the memory address (¶ [0080] a cache controller may coordinate the exchange of data between clients and the backing store and may be responsible for maintaining an eviction policy that specifies how and when data is evicted; the eviction policy may be based upon cache eviction metadata; ¶ [0008] – the metadata may comprise cache entries … a cache entry may associate a logical address with one or more locations on identifying where the data is stored; ¶ [0009] – the cache entries may be indexed by logical address). Recited another way, TIRUNAGARI discloses that a machine learning system may select and apply a caching algorithm to identify data to evict from cache. But, TIRUNAGARI may not explicitly disclose identifying the data by memory address. However, FLYNN discloses that cached data may be indexed (and thus discarded) by logical address. 
TIRUNAGARI and FLYNN are analogous art because they are from the same field of endeavor of cache systems. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of TIRUNAGARI and FLYNN before him or her, to modify the resulting output from implementation of a caching algorithm of TIRUNAGARI to include a memory address as taught by FLYNN. A motivation for doing so would have been to provide indicators to identify data for the purpose of cache management (Section [0007]) as one needs to first identify the location of data in cache in order to evict the data from the cache. Therefore, it would have been obvious to combine TIRUNAGARI and FLYNN to obtain the invention as specified in the instant claims. With respect to Claim 13, TIRUNAGARI discloses a non-transitory computer storage medium encoded with instructions that, when executed by a data processing apparatus, cause the data processing apparatus to perform operations comprising: determining that particular data is not stored in a cache (¶ [0017] – “cache miss”) that is full (¶ [0018] – “when a cache is full, a caching algorithm may be applied to determine which data to remove to make room for new data to be cached); in response to determining that the particular data is not stored in the cache that is full, determining, by a caching process, whether to use a machine learning system or a non-machine learned caching process to evict data stored in the cache (Abstract – a neural network may select and apply a caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached; ¶ [0024] – a selected caching algorithm may or may not change as a result of evaluating performance related parameters; ¶ [0020] – the neural cache {‘machine learning system’} may apply an optimal caching algorithm; ¶ [0018] – the cache replacement algorithm applied may comprise LRU 
{‘non-machine learned caching process’}), wherein: the machine learning system generates, as a prediction and by using a machine learned process, … a data set to evict from the cache (Abstract – a neural network may select and apply a caching algorithm; ¶ [0028] – a caching algorithm may be selected based on various reasons including historical cache hit rates for a previous execution of a given or similar application; ¶ [0019] – the system may implement a “neural cache” {analogous to a machine learning system} for cache efficiency; ¶ [0020] – the neural cache may apply an optimal caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a caching algorithm by the neural network analogous to ‘the machine learning system generates, as a prediction and by using a machine learned process, … a data set to evict from the cache’}); and the non-machine learned caching process is separate from the machine learning system (¶[0067] – “Functionality may be separated or combined in blocks differently in various realizations of the systems and methods described herein or described with different terminology”) and generates … a data set in the cache to evict from the cache (Abstract – a neural network may select and apply a caching algorithm; ¶ [0018] – the cache replacement algorithm applied may comprise LRU {‘non-machine learned caching process’}; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a LRU caching algorithm analogous to ‘the non-machine learned caching process … generates … a data set to evict from the cache’}); the determining of whether to use the machine learning system or the non-machine learned caching process comprising: determining a predicted eviction accuracy that measures an accuracy of the prediction of the machine learning system in predicting 
that the data set to evict from the cache … will be used further in the future than the other data sets currently stored in the cache (¶ [0028] – the caching algorithm may be selected based on various reasons including historical cache hit rates for a previous execution of a given or similar application; ¶ [0019] – the system may implement a “neural cache” {analogous to a machine learning system} for cache efficiency; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the application of a caching algorithm by the neural network analogous to ‘the machine learning system … predict(s) the data set to evict from the cache’}; ¶ [0024] – a selected caching algorithm may or may not change as a result of evaluating performance related parameters {thus, further application of the selected caching algorithm may be used to determine next data to evict}); determining whether the predicted eviction accuracy of the machine learning system satisfies a threshold eviction accuracy (¶ [0024] – a value for one performance related parameter {e.g. 
cache hit rate} may be evaluated to see whether it meets or exceeds a threshold value); in response to determining that the predicted eviction accuracy of the machine learning system satisfies the threshold eviction accuracy (¶ [0024] – a selected caching algorithm may or may not change as a result of evaluating performance related parameters): selecting, by the caching process, the machine learning system to predict … a data set stored in the cache to evict from the cache (¶ [0028] – a caching algorithm may be selected based on various reasons including historical cache hit rates for a previous execution of a given or similar application; ¶ [0019] – the system may implement a “neural cache” {analogous to a machine learning system} for cache efficiency; ¶ [0020] – the neural cache may apply an optimal caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a caching algorithm by the neural network analogous to ‘the machine learning system predicts … a data set stored in the cache to evict from the cache’}); predicting, by the machine learned process of the machine learning system, … the data set stored in the cache to evict from the cache (¶ [0019] – the system may implement a “neural cache” {analogous to a machine learning system} for cache efficiency; ¶ [0020] – the neural cache may apply an optimal caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a caching algorithm by the neural network analogous to ‘the machine learning system predicts … a data set stored in the cache to evict from the cache’}); evicting, from the cache, the data set … (¶ [0020] – the neural cache may apply an optimal caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached); and storing the 
particular data in the cache … (¶ [0018] – when a cache is full, a caching algorithm may be applied to determine which data to remove to make room for new data to be cached); and in response to determining that the predicted eviction accuracy of the machine learning system does not satisfy the threshold eviction accuracy (¶ [0024] – a selected caching algorithm may or may not change as a result of evaluating performance related parameters): determining, using a non-machine learning system, … a data set stored in the cache to evict from the cache (¶ [0018] – the cache replacement algorithm applied may comprise LRU {‘non-machine learned caching process’}; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a LRU caching algorithm analogous to ‘determining, using a non-machine learned caching process … a data set stored in the cache to evict from the cache’}); evicting, from the cache, the data set … (¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached); and storing the particular data in the cache … (¶ [0018] – when a cache is full, a caching algorithm may be applied to determine which data to remove to make room for new data to be cached). TIRUNAGARI may not explicitly disclose (1) the machine learning system generates, as a prediction and by using a machine learned process, a memory address predicted by the machine learned process, the memory address being a memory location in the cache of a data set to evict from the cache; (2) the non-machine learned caching process … generates a memory address as output, the memory address being a memory location of a data set in the cache to evict from the cache; (3) wherein the data set to evict from the cache is addressed by the memory address. 
However, FLYNN discloses (1) the machine learning system generates, as a prediction and by using a machine learned process, a memory address predicted by the machine learned process, the memory address being a memory location in the cache of a data set to evict from the cache; (2) the non-machine learned caching process … generates a memory address as output, the memory address being a memory location of a data set in the cache to evict from the cache; (3) wherein the data set to evict from the cache is addressed by the memory address (¶ [0080] – a cache controller may coordinate the exchange of data between clients and the backing store and may be responsible for maintaining an eviction policy that specifies how and when data is evicted; the eviction policy may be based upon cache eviction metadata; ¶ [0008] – the metadata may comprise cache entries … a cache entry may associate a logical address with one or more locations identifying where the data is stored; ¶ [0009] – the cache entries may be indexed by logical address).

Stated another way, TIRUNAGARI discloses that a machine learning system may select and apply a caching algorithm to identify data to evict from cache, but may not explicitly disclose identifying that data by memory address. FLYNN, however, discloses that cached data may be indexed (and thus discarded) by logical address.

TIRUNAGARI and FLYNN are analogous art because they are from the same field of endeavor of cache systems. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of TIRUNAGARI and FLYNN before him or her, to modify the output resulting from application of a caching algorithm in TIRUNAGARI to include a memory address, as taught by FLYNN.
A motivation for doing so would have been to provide indicators that identify data for the purpose of cache management (¶ [0007]), as one must first identify the location of data in the cache in order to evict it. Therefore, it would have been obvious to combine TIRUNAGARI and FLYNN to obtain the invention as specified in the instant claims.

With respect to Claims 2, 8, and 14, the combination of TIRUNAGARI and FLYNN discloses the system/method/medium of each respective parent claim. TIRUNAGARI further discloses wherein determining a predicted eviction accuracy that measures an accuracy of the prediction of the machine learning system in predicting that the data set to evict from the cache… will be used further in the future than the other data sets currently stored in the cache comprises determining a predicted eviction accuracy based on a data set chain that includes data sets the machine learning system previously identified for eviction from the cache (¶ [0024] – performance related parameters may be evaluated to see whether they meet or exceed a threshold value, and a selected caching algorithm may or may not change as a result of evaluating those parameters; ¶ [0022] – performance related parameters may include cache hit rates for a given resource or group of resources {a data set chain}). FLYNN further discloses wherein the data set to evict from the cache is addressed by the memory address (¶ [0080] – a cache controller may coordinate the exchange of data between clients and the backing store and may be responsible for maintaining an eviction policy that specifies how and when data is evicted; the eviction policy may be based upon cache eviction metadata; ¶ [0008] – the metadata may comprise cache entries … a cache entry may associate a logical address with one or more locations identifying where the data is stored; ¶ [0009] – the cache entries may be indexed by logical address).
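The independent-claim flow mapped above — a threshold-gated choice between the machine learned eviction predictor and a non-machine learned process such as LRU, with the victim identified by memory address — can be pictured with a short sketch. All names here are illustrative assumptions, not language from either reference; the ML predictor is passed in as a callable:

```python
from collections import OrderedDict

def evict_and_store(cache, key, data, ml_predict_addr, predicted_accuracy,
                    threshold_accuracy=0.8, capacity=4):
    """Store `data` in a full cache, choosing the eviction victim the way the
    independent claims describe: an ML-predicted memory address when the
    predicted eviction accuracy satisfies the threshold, otherwise a
    non-machine learned process (LRU here)."""
    if len(cache) >= capacity and key not in cache:
        if predicted_accuracy >= threshold_accuracy:
            victim = ml_predict_addr(cache)   # ML system outputs an address
        else:
            victim = next(iter(cache))        # LRU: least recently used entry
        del cache[victim]                     # evict the addressed data set
    cache[key] = data                         # store the particular data
    cache.move_to_end(key)                    # mark as most recently used
    return cache
```

The sketch keys the cache by memory address, which is exactly the gap the rejection fills with FLYNN: TIRUNAGARI supplies the threshold-gated algorithm selection, FLYNN the address-indexed cache entries.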
With respect to Claims 3, 9, and 15, the combination of TIRUNAGARI and FLYNN discloses the system/method/medium of each respective parent claim. TIRUNAGARI further discloses wherein predicting, by the machine learned process of the machine learning system, … the data set stored in the cache to evict from the cache comprises predicting … a data set (Abstract – a neural network may select and apply a caching algorithm; ¶ [0028] – a caching algorithm may be selected based on various reasons including historical cache hit rates for a previous execution of a given or similar application; ¶ [0019] – the system may implement a “neural cache” {analogous to a machine learning system} for cache efficiency; ¶ [0020] – the neural cache may apply an optimal caching algorithm; ¶ [0002] – caching algorithms may be used to determine which data to remove to make room for new data to be cached {the selection and application of a caching algorithm by the neural network analogous to ‘the machine learning system generates, as a prediction and by using a machine learned process, … a data set to evict from the cache’}). FLYNN further discloses wherein the data set stored in the cache has not been accessed within a particular time period (¶ [0054] – cache eviction metadata may comprise an indication {e.g. ‘cold’} of data that has not been accessed within a particular time threshold), and wherein the data set to evict from the cache is addressed by the memory address (¶ [0080] – a cache controller may coordinate the exchange of data between clients and the backing store and may be responsible for maintaining an eviction policy that specifies how and when data is evicted; the eviction policy may be based upon cache eviction metadata; ¶ [0008] – the metadata may comprise cache entries … a cache entry may associate a logical address with one or more locations identifying where the data is stored; ¶ [0009] – the cache entries may be indexed by logical address).
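FLYNN's “cold” indication (¶ [0054]) amounts to flagging, by logical address, entries whose last access falls outside a time threshold. A minimal sketch under that reading, with all names hypothetical:

```python
def cold_addresses(last_access, time_threshold, now):
    """Return the logical addresses of 'cold' entries: data sets not
    accessed within `time_threshold` of `now` (FLYNN ¶ [0054]-style
    eviction metadata; these become eviction candidates)."""
    return sorted(addr for addr, t in last_access.items()
                  if now - t > time_threshold)
```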
With respect to Claims 4, 10, and 16, the combination of TIRUNAGARI and FLYNN discloses the system/method/medium of each respective parent claim. TIRUNAGARI further discloses determining whether the particular data was previously stored in the cache during a second time period preceding and adjacent to the particular time period without any intervening time periods; and in response to determining that the particular data was not previously stored in the cache during the second time period, creating a new data set chain that identifies the data stored in the cache for which the memory address was predicted by the machine learning system (¶ [0024-0025] – the neural network may perform periodic sampling of performance related parameters and, as a result of the analysis of those parameters, may change the caching algorithm in response; changing the caching algorithm may result in using a different basis for determining which cached items should be replaced to make room for storing new items in the cache).

With respect to Claims 5, 11, and 17, the combination of TIRUNAGARI and FLYNN discloses the system/method/medium of each respective parent claim. TIRUNAGARI further discloses in response to determining that the particular data was previously stored in the cache during the second time period: determining a data set chain that identifies the particular data; and updating the data set chain to identify the data stored in the cache for which the memory address was predicted by the machine learning system; and wherein determining the predicted eviction accuracy comprises determining the predicted eviction accuracy using a quantity of data sets identified by the data set chain (¶ [0022] – performance related parameters may include cache hit rates for a given resource or group of resources {quantity of data sets}; ¶ [0024] – performance related parameters may be evaluated to see whether they meet or exceed a threshold value).
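The data set chain bookkeeping recited in Claims 4/10/16 and 5/11/17 can be sketched as follows. Everything here is an illustrative assumption — the function names, the chain representation, and especially the accuracy formula (the claims only say the accuracy uses a quantity of data sets identified by the chain):

```python
def update_chains(chains, particular_data, predicted_data, prior_period_contents):
    """Claims 4/10/16: if `particular_data` was absent from the cache in the
    immediately preceding period, start a new chain identifying the newly
    predicted data set. Claims 5/11/17: otherwise extend the chain that
    identifies `particular_data` with the newly predicted data set."""
    if particular_data not in prior_period_contents:
        chains.append([predicted_data])
    else:
        for chain in chains:
            if particular_data in chain:
                chain.append(predicted_data)
                break
    return chains

def predicted_eviction_accuracy(chain, horizon):
    """Hypothetical accuracy derived from the quantity of data sets the
    chain identifies, normalized by an assumed horizon."""
    return min(len(chain) / horizon, 1.0)
```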
With respect to Claims 6, 12, and 18, the combination of TIRUNAGARI and FLYNN discloses the system/method/medium of each respective parent claim. TIRUNAGARI further discloses wherein the machine learned process is one of a neural network analysis system, a recurrent neural network analysis system, or a long short-term memory neural network system that is trained to predict the memory address that is the memory location in the cache of the data set to evict from the cache (Abstract – a neural network may select and apply a caching algorithm).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC T LOONAN, whose telephone number is (571) 272-6994. The examiner can normally be reached M-F 8am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arpan Savla, can be reached at 571-272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ERIC T LOONAN/Examiner, Art Unit 2137

Prosecution Timeline

Nov 15, 2024
Application Filed
Dec 13, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591369
NODE CACHE MIGRATION
2y 5m to grant Granted Mar 31, 2026
Patent 12578874
CONFIGURING A QUORUM COMPONENT ON NETWORK STORAGE
2y 5m to grant Granted Mar 17, 2026
Patent 12547334
REFRESH OF STALE REFERENCE TO PHYSICAL FILE LOCATIONS
2y 5m to grant Granted Feb 10, 2026
Patent 12530144
SYSTEM AND METHOD FOR ESTIMATION OF ERROR BOUNDS FOR FILE SIZE CALCULATIONS USING MINHASH IN DEDUPLICATION SYSTEMS
2y 5m to grant Granted Jan 20, 2026
Patent 12524336
MANAGEMENT OF ERASABLE UNITS OF MEMORY BLOCKS IN SOLID STATE DRIVES
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
64%
Grant Probability
91%
With Interview (+27.0%)
4y 0m
Median Time to Grant
Low
PTA Risk
Based on 423 resolved cases by this examiner. Grant probability derived from career allow rate.
