Prosecution Insights
Last updated: April 19, 2026
Application No. 18/349,939

REMOTE PREFETCH

Final Rejection §103
Filed: Jul 10, 2023
Examiner: HICKS, SHIRLEY D.
Art Unit: 2168
Tech Center: 2100 — Computer Architecture & Software
Assignee: VAST DATA LTD.
OA Round: 4 (Final)
Grant Probability: 64% (Moderate)
OA Rounds: 5-6
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 64%, i.e. grants 64% of resolved cases (69 granted / 107 resolved; +9.5% vs TC avg)
Interview Lift: +56.3% (strong), comparing resolved cases with an interview to cases without
Avg Prosecution (typical timeline): 3y 2m; 38 applications currently pending
Total Applications (career history): 145, across all art units
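To see how these headline figures relate to one another, the snippet below is a minimal sketch. It assumes the career allow rate is simply granted over resolved cases and that the interview lift is applied multiplicatively to the base rate and capped at 99%; the dashboard does not publish its actual methodology, so treat the formula as an illustration, not the product's calculation.

```python
# Illustrative only: the dashboard does not publish its formula. We assume the
# interview lift is applied multiplicatively to the base rate and capped at 99%.

granted, resolved = 69, 107
career_allow_rate = granted / resolved          # ~0.645, shown as 64%

interview_lift = 0.563                          # +56.3% relative lift (assumption)
with_interview = min(career_allow_rate * (1 + interview_lift), 0.99)

print(f"career allow rate: {career_allow_rate:.0%}")   # 64%
print(f"with interview:    {with_interview:.0%}")      # 99% (capped)
```

Run as-is, this reproduces the 64% and 99% figures shown on the page under the stated assumptions.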

Statute-Specific Performance

§101: 10.7% (-29.3% vs TC avg)
§103: 51.1% (+11.1% vs TC avg)
§102: 24.2% (-15.8% vs TC avg)
§112: 12.3% (-27.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 107 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

The action is responsive to the Applicant's Amendment filed on 11/25/2025. Claims 1-17 are pending in the application.

Response to Arguments

Applicant's arguments with respect to the rejections previously made and the amended claims filed on 11/25/2025 have been fully considered but they are not persuasive. In view of the claim amendments, the rejections are being updated accordingly.

In regards to independent claim 1, Applicant argued that cited reference "Smith does not teach a file system entity (FSE) having a local part of the FSE that is stored at the LSS and a remote part that is stored at a remote storage system (RSS). Therefore, Smith cannot and does not teach two specific latencies". However, this limitation is not in claim 1. Claim 1 recites, "that is connected to the LSS, wherein the read pattern is estimated to comprise future read requests that are aimed to a remote part of a file system entity (FSE) that is stored at a remote storage system (RSS)". Claim 1 does not recite the limitation as the Applicant is arguing. The "wherein" clause is nonfunctional descriptive material describing the read pattern and is not functionally involved in the step recited, and intended use indicating the intended outcome. Thus, this descriptive material will not distinguish the claimed invention from the prior art in terms of patentability. The functional steps of claim 1 are detecting, performing, prefetching, and prefetching.

Also, the Applicant argues that cited reference "Smith does not teach the difference between the latency of path A and path B, as being measured between the remote latency (measured for responses to read requests associated with the requestor and aim to the remote part of the FSE) and the LSS latency". However, claim 1 does not include a step that computes the difference between the latency of path A and path B, as being measured between the remote latency. Again, the functional steps of claim 1 are detecting, performing, prefetching, and prefetching. The "wherein" clauses of "wherein a LSS latency is an average latency… wherein a remote latency is an average latency…", "wherein there is a latency difference that is measured between the remote latency and the LSS latency" are all nonfunctional descriptive material. None of the claimed steps are functionally computing a latency difference that is measured between the remote latency and the LSS latency as the Applicant is arguing.

However, Smith teaches the latencies as a "chosen metric" ([0705]: For example, chosen metrics may include, but are not limited to, one or more of the following… average latency, maximum latency, minimum latency, standard deviation of latency, other statistical measures of latency, combinations of these and/or other measures, metrics and the like etc.). Smith also teaches in [1204]: In this case, the differential delay of paths A and B may be the difference in delay between path A and the delay of path B (e.g. the delay of path A minus the delay of path B, etc.), which corresponds to the difference between the latency of path A and path B, which is simply a metric.
In addition, Applicant argues that "Smith does not disclose a prefetch process that include these two steps, distinguished by the number of remote sub-parts are being prefetch and by the memory being used for the caching/storing the first and second number of remote sub-parts". This limitation is not recited in claim 1 as the Applicant is arguing. Again, the limitations of "the first number is selected as a number of remote sub-parts that is sufficient to prevent a latency" and "wherein the second number of remote sub-parts are expected to be read by the requestor" and "wherein the second number is selected based on…" are nonfunctional descriptive material.

The instant specification states in [0030]: Upon detecting a read pattern, where future accesses are expected to be directed towards a remote content, a dual-tier prefetch is performed, blocks that are expected to be read next are prefetched from the remote storage system into the cache of the local storage system, and blocks, expected to be read following the cached blocks, are read into the storage devices of the local storage node. Likewise, Smith teaches in [0835]-[0840]: For example, in one embodiment, a prefetch unit (prefetcher, prefetch block, prefetch circuit, predictor, etc.) may predict, and/or otherwise calculate etc. future memory access… For example, in one embodiment, the prefetcher may predict that access (e.g. in a future window of time of predetermined length, etc.) may be made to regions A, B, C. This information may be used, for example, by a refresh engine and/or any other refresh control circuits to schedule, plan, control, order, queue, etc. refresh operations to memory region D. Of course any number of memory regions, groups of memory regions, arrangements of memory regions, sets of memory addresses, ranges of memory addresses, collections of memory regions, echelons, banks, sections, combinations and/or arrangements of these and/or any other part, portions, of memory etc. may be tracked, used for prediction. Smith provides more details in the paragraphs that follow, stating, "Of course, any level of granularity for any number, type, form, etc. of functions, etc. may be used." This would include a prefetch process that includes two steps, a first number, second number, etc. Therefore, Smith teaches the limitations as recited in claim 1, and thus, for at least the reasons as set forth above, it is submitted that the limitations recited in claim 1 are properly addressed.

In regards to independent claims 9 and 17, the emphasized limitations that the Applicant argues in claims 9 and 17 are similar to the emphasized limitations of claim 1, which have been addressed above. See the response to claim 1 above for explanation. Furthermore, it is also submitted that all limitations in pending claims, including those not specifically argued, are properly addressed. The reason is set forth in the rejections. See claim analysis below for detail.

Claim Rejections - 35 USC § 103

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 7. Claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Taylor et al. (US Patent No. 8788628 B1) in view of Smith (US 20190205244 A1) and Michaud et al. (US Patent No. 10289555 B1). 8. Regarding Claim 1, Taylor discloses a method for performing a remote prefetch ([Col. 1, lines 40-55]: The disclosed embodiments provide a system that facilitates pre-fetching data for a distributed filesystem… and proceeds to pre-fetch this additional cloud file from the cloud storage system), the method comprising: detecting, by a controller of a local storage system (LSS) (Fig. 3, cloud controller 300), a read pattern of read requests received from a requestor that is connected to the LSS (Fig. 3, [Col. 6, lines 47-55]: A request server 304 in cloud controller 300 may receive file requests from either local processes or via a network from a client 306; [Col. 2, lines 17-21]: the cloud controller receives user feedback that indicates expected file characteristics and access patterns), wherein the read pattern is estimated to comprise future read requests that are aimed to a remote part of a file system entity (FSE) that is stored at a remote storage system (RSS) (Fig. 3; [Abstract]: the cloud controller additionally determines that an additional cloud file in the cloud storage system includes data that is likely to be accessed in conjunction with the data block, and proceeds to pre-fetch this additional cloud file from the cloud storage system; [Col. 
12, lines 43-67]: In some embodiments, a cloud controller attempts to optimize the placement of data into cloud files to reduce future access overhead… users may be provided with a way to configure a policy that reflects anticipated file access patterns); However, Taylor does not explicitly teach “wherein a LSS latency is an average latency measured by the LSS for responses to read requests associated with the requestor and aim to a local part of the FSE that is stored at the LSS; wherein a remote latency is an average latency measured for responses to read requests associated with the requestor and aim to the remote part of the FSE: wherein there is a latency difference that is measured between the remote latency and the LSS latency; wherein the controller comprises an integrated circuit; and performing a prefetch process of remote sub-parts of the remote part of the FSE, wherein the remote sub-parts are expected to be read by the requestor, based on the read pattern; wherein the performing of the prefetch process comprises: prefetching a first number of remote sub-parts to a cache memory of a processing node layer of the LSS, the first number is selected as a number of remote sub-parts that is sufficient to prevent a latency, associated with reading the remote sub-parts, from exceeding a threshold above a desired latency that is based on the LSS latency, and prefetching a second number of remote sub-parts to a storage layer of the LSS, wherein the second number of remote sub-parts are expected to be read by the requestor, according to the read pattern, following an expected request to read the first number of remote sub-parts, and wherein the second number is selected based on at least one out of (a) the latency difference, or (b) a read request rate of the requestor.” On the other hand, in the same field of endeavor, Smith teaches wherein a LSS latency is an average latency measured by the LSS for responses to read requests associated with the requestor and aim to a local part of the FSE that is stored at the LSS; wherein a remote latency is an average latency measured for responses to read requests associated with the requestor and aim to the remote part of the FSE ([0705]: For example, chosen metrics may include… average latency, maximum latency, minimum latency, standard deviation of latency, other statistical measures of latency, combinations of these and/or other measures, metrics and the like etc.): wherein there is a latency difference that is measured between the remote latency and the LSS latency ([0705]: For example, chosen metrics may include, but are not limited to, one or more of the following… average latency, maximum latency, minimum latency, standard deviation of latency, other statistical measures of latency, combinations of these and/or other measures, metrics and the like etc.; [1187]: Thus for example, a first bus may have a longer propagation delay (e.g. latency, etc.)…. than a second bus. For example, buses may be constructed (e.g. wired, laid out, shaped, etc.) so as to reduce (e.g. alter, ameliorate, dampen, etc.) the difference in electrical properties… [1204]: In this case, the differential delay of paths A and B may be the difference in delay between path A and the delay of path B (e.g. the delay of path A minus the delay of path B, etc.)); wherein the controller comprises an integrated circuit (Fig. 
18-100; [0749]: In one embodiment, the apparatus 18 - 100 may include a three-dimensional integrated circuit); and performing a prefetch process of remote sub-parts of the remote part of the FSE, wherein the remote sub-parts are expected to be read by the requestor, based on the read pattern (Fig. 5; [Col. 2, lines 11-16]: The cloud controller can pre-fetch additional cloud files containing such temporally proximate data to reduce the download latency associated with subsequently downloading such cloud files from the cloud storage system on an on-demand basis); wherein the performing of the prefetch process comprises: prefetching a first number of remote sub-parts to a cache memory of a processing node layer of the LSS ([0102]: The row buffer may serve, function, etc. as a cache, store, etc. to reduce the latency of subsequent access to that row; [0835]-[0840]: For example, in one embodiment, a prefetch unit (prefetcher, prefetch block, prefetch circuit, predictor, etc.) may predict, and/or otherwise calculate etc. future memory access (e.g. based on history analysis, by analyzing strides and other patterns of memory access), the first number is selected as a number of remote sub-parts that is sufficient to prevent a latency, associated with reading the remote sub-parts, from exceeding a threshold above a desired latency that is based on the LSS latency ([0107]: One or more additional policies may be used including those, for example, that may select precharge operations first, row operations first, column operations first, etc. A column-first scheduling policy may, for example, reduce the access latency to active row; [0705]: For example, chosen metrics may include, but are not limited to, one or more of the following… average latency, maximum latency, minimum latency, standard deviation of latency, other statistical measures of latency, combinations of these and/or other measures, metrics and the like etc… may be such to optimize power (e.g. minimize power, operate such that power does not exceed a threshold, etc.)), and prefetching a second number of remote sub-parts to a storage layer of the LSS, wherein the second number of remote sub-parts are expected to be read by the requestor, according to the read pattern, following an expected request to read the first number of remote sub-parts, ([0102]: While a row is active in the row buffer, any number of reads or writes (column accesses) may be performed. After completion of the column access, the cached row may be written back to the memory array; [0165]: In FIG. 2, in one embodiment, a request and/or response may be asynchronous (e.g. split, separated, variable latency, etc.)). Additionally, Michaud teaches wherein the second number is selected based on at least one out of (a) the latency difference, or (b) a read request rate of the requestor ([Abstract]: a memory read-ahead process is performed which includes identifying a learned memory access pattern associated with the requestor; Figs. 6A-6B; [Col. 4, lines 5-10]: The learned memory access patterns 126 are utilized to perform intelligent read-ahead operations for pre-fetching and storing data in the cache memory 130, thereby decreasing access latencies associated with memory access requests for non-resident data (e.g., cache miss, page fault, etc.)). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Taylor to incorporate the teachings of Smith and Michaud to prefetch a first number of remote sub-parts to a cache memory of the local storage system that is less than the threshold, and then prefetch a second number of remote sub-parts based on the latency difference or a read request rate of the requestor. The motivation for doing so would be to optimize channels for latency, as recognized by Smith ([0579] of Smith: In one embodiment, one or more VCs and/or other equivalent channels, paths, circuits, etc. (e.g. channels etc.) may be optimized… For example, one or more channels etc. may be optimized for latency, power, bandwidth and/or one or more other parameters, metrics, aspects, features, combinations of these and the like etc.), and to decrease access latencies associated with memory access requests for remote data, as recognized by Michaud ([Col. 4, lines 5-11] of Michaud: The learned memory access patterns 126 are utilized to perform intelligent read-ahead operations for pre-fetching and storing data in the cache memory 130, thereby decreasing access latencies associated with memory access requests for non-resident data (e.g., cache miss, page fault, etc.) which require time consuming operations to resolve non-resident data). Regarding Claim 2, the combined teachings of Taylor, Smith, and Michaud disclose the method according to claim 1. Taylor further teaches wherein the remote latency exceeds the LSS latency by a factor of at least ten ([Col. 8, line 4- 20]: Consider an example of a cloud controller receiving a request from a client to store a 10 GB file, in an environment where the network link between the cloud controller and a cloud storage system supports a transfer speed of 1 GB/minute and the cloud controller is configured to send a metadata snapshot every minute… The cloud controller then uploads the file data to the cloud storage system over a time interval (e.g., roughly ten minutes)). Regarding Claim 3, the combined teachings of Taylor, Smith, and Michaud disclose the method according to claim 1. Michaud further teaches comprising maintaining, as long as further remote sub-parts are expected to be read, at least the first number of remote sub-parts in the cache memory ([Col. 9, lines 62-67]: While the example embodiment of FIG. 2 illustrates the databases 230 and 232 as stand-alone entities for ease of illustration, it is to be understood that the databases 230 and 232 would be maintained in regions of the volatile memory 212 and/or or non-volatile memory 214 of the system memory 210 for low-latency access). Regarding Claim 4, the combined teachings of Taylor, Smith, and Michaud disclose the method according to claim 2. Michaud further teaches comprising maintaining, as long as the further remote-sub-parts are expected to be read, at least the second number of remote sub- parts in the storage layer ([Col. 9, lines 62-67]: While the example embodiment of FIG. 2 illustrates the databases 230 and 232 as stand-alone entities for ease of illustration, it is to be understood that the databases 230 and 232 would be maintained in regions of the volatile memory 212 and/or or non-volatile memory 214 of the system memory 210 for low-latency access). Regarding Claim 5, the combined teachings of Taylor, Smith, and Michaud disclose the method according to claim 1. 
Michaud further teaches wherein the desired latency does not exceed the LSS latency (Fig. 5; [Col. 17, line 63- Col. 18, line 14]: compare the time difference to a predetermined threshold value. If the determined time difference does not exceed the predetermined threshold, then the row entries for Process ID 49 will be deemed “recent” and the Counter value in the given row for Process ID 49 can be incremented). Regarding Claim 6, the combined teachings of Taylor, Smith, and Michaud disclose the method according to claim 1. Taylor further teaches wherein the read pattern is estimated to comprise local read requests aimed to a local part of the FSE that is stored in the LSS (Fig. 3; [Col. 6, lines 49-54]: A request server 304 in cloud controller 300 may receive file requests from either local processes or via a network from a client 306. These requests are presented to a storage management system that includes a transactional filesystem 308 that manages a set of filesystem metadata 310 and a local storage system 312… A set of block records 314 in metadata 310 include pointer fields that indicate the location of the file data in a disk block 316 in local storage 312); wherein the performing of the prefetch process comprises pre-fetching local sub-parts of the local part of the FSE in order to support the read pattern while maintaining the desired latency ([Col. 17, lines 27-31]: FIG. 6A illustrates a computing device 600 that receives and forwards requests for filesystem operations. Computing device 600 executes a request server 608 that receives requests for file operations from clients (610-612) in its computing environment 614). Regarding Claim 7, the combined teachings of Taylor, Smith, and Michaud disclose the method according to claim 1. Taylor further teaches, comprising: detecting, by the controller, a further read pattern that is associated with the requestor and is estimated to comprise further future read requests that are aimed to a further remote part of the FSE that is stored at a further remote storage system (FSS) ([Col. 12, lines 57-63]: In some embodiments, a cloud controller attempts to optimize the placement of data into cloud files to reduce future access overhead. For instance, the cloud controller may strive to, when possible, store all blocks for a file in the same cloud file… users may be provided with a way to configure a policy that reflects anticipated file access patterns); wherein there is a further latency difference between a further latency associated with the further remote part of the FSE and the LSS latency ([Col. 1, lines 27-30]: For instance, storing data remotely ("in the cloud") often increases access latency); and performing a further prefetch process of further remote sub-parts of the further remote part of the FSE in order to support the further read pattern while maintaining the desired latency ([Col. 2, lines 11-16]: The cloud controller can pre-fetch additional cloud files containing such temporally proximate data to reduce the download latency associated with subsequently downloading such cloud files from the cloud storage system on an on-demand basis.). Regarding Claim 8, the combined teachings of Taylor, Smith, and Michaud disclose the method according to claim 7. Taylor further teaches wherein the performing of the prefetch process comprises prefetching the first number of further remote sub-parts to the cache memory of the processing node layer of the LSS ([Col. 
15, lines 29-41]: For instance, upon receiving a request to access a given data block for a file, a cloud controller may analyze the metadata for the file and then predictively pre-fetch other cloud files that contain other nearby data blocks), and prefetching a third number of further remote sub-parts to the storage layer of the LSS, the third number is selected based on at least one out of (a) the further latency difference, or (b) a read request rate of the requestor ([Col. 15, lines 29-41]: Alternatively (and/or additionally), the cloud controller may also pre-fetch data for other associated files that are likely to be accessed in conjunction with the original file. In both situations, the cloud controller can traverse its stored set of metadata to look up the physical locations (e.g., the CVAs and offsets) for cloud files that should be pre-fetched from the cloud storage system). Regarding Claim 9, Taylor discloses a non-transitory computer readable medium for responding to access requests, the non-transitory computer readable medium stores instructions for: detecting, by a controller of a local storage system (LSS) (Fig. 3, cloud controller 300), a read pattern of read requests received from a requestor that is connected to the LSS (Fig. 3, [Col. 6, lines 47-55]: A request server 304 in cloud controller 300 may receive file requests from either local processes or via a network from a client 306; [Col. 2, lines 17-21]: the cloud controller receives user feedback that indicates expected file characteristics and access patterns), wherein the read pattern is estimated to comprise future read requests that are aimed to a remote part of a file system entity (FSE) that is stored at a remote storage system (RSS) (Fig. 3; [Abstract]: the cloud controller additionally determines that an additional cloud file in the cloud storage system includes data that is likely to be accessed in conjunction with the data block, and proceeds to pre-fetch this additional cloud file from the cloud storage system; [Col. 
12, lines 43-67]: In some embodiments, a cloud controller attempts to optimize the placement of data into cloud files to reduce future access overhead… users may be provided with a way to configure a policy that reflects anticipated file access patterns); However, Taylor does not explicitly teach “wherein a LSS latency is an average latency measured by the LSS for responses to read requests associated with the requestor and aim to a local part of the FSE that is stored at the LSS; wherein a remote latency is an average latency measured for responses to read requests associated with the requestor and aim to the remote part of the FSE: wherein there is a latency difference that is measured between the remote latency and the LSS latency; wherein the controller comprises an integrated circuit; and performing a prefetch process of remote sub-parts of the remote part of the FSE, wherein the remote sub-parts are expected to be read by the requestor, based on the read pattern; wherein the performing of the prefetch process comprises: prefetching a first number of remote sub-parts to a cache memory of a processing node layer of the LSS, the first number is selected as a number of remote sub-parts that is sufficient to prevent a latency, associated with reading the remote sub-parts, from exceeding a threshold above a desired latency that is based on the LSS latency, and prefetching a second number of remote sub-parts to a storage layer of the LSS, wherein the second number of remote sub-parts are expected to be read by the requestor, according to the read pattern, following an expected request to read the first number of remote sub-parts, and wherein the second number is selected based on at least one out of (a) the latency difference, or (b) a read request rate of the requestor.” On the other hand, in the same field of endeavor, Smith teaches wherein a LSS latency is an average latency measured by the LSS for responses to read requests associated with the requestor and aim to a local part of the FSE that is stored at the LSS; wherein a remote latency is an average latency measured for responses to read requests associated with the requestor and aim to the remote part of the FSE ([0705]: For example, chosen metrics may include… average latency, maximum latency, minimum latency, standard deviation of latency, other statistical measures of latency, combinations of these and/or other measures, metrics and the like etc.): wherein there is a latency difference that is measured between the remote latency and the LSS latency ([0705]: For example, chosen metrics may include, but are not limited to, one or more of the following… average latency, maximum latency, minimum latency, standard deviation of latency, other statistical measures of latency, combinations of these and/or other measures, metrics and the like etc.; [1187]: Thus for example, a first bus may have a longer propagation delay (e.g. latency, etc.)…. than a second bus. For example, buses may be constructed (e.g. wired, laid out, shaped, etc.) so as to reduce (e.g. alter, ameliorate, dampen, etc.) the difference in electrical properties… [1204]: In this case, the differential delay of paths A and B may be the difference in delay between path A and the delay of path B (e.g. the delay of path A minus the delay of path B, etc.)); wherein the controller comprises an integrated circuit (Fig. 
18-100; [0749]: In one embodiment, the apparatus 18 - 100 may include a three-dimensional integrated circuit); and performing a prefetch process of remote sub-parts of the remote part of the FSE, wherein the remote sub-parts are expected to be read by the requestor, based on the read pattern (Fig. 5; [Col. 2, lines 11-16]: The cloud controller can pre-fetch additional cloud files containing such temporally proximate data to reduce the download latency associated with subsequently downloading such cloud files from the cloud storage system on an on-demand basis); wherein the performing of the prefetch process comprises: prefetching a first number of remote sub-parts to a cache memory of a processing node layer of the LSS ([0102]: The row buffer may serve, function, etc. as a cache, store, etc. to reduce the latency of subsequent access to that row; [0835]-[0840]: For example, in one embodiment, a prefetch unit (prefetcher, prefetch block, prefetch circuit, predictor, etc.) may predict, and/or otherwise calculate etc. future memory access (e.g. based on history analysis, by analyzing strides and other patterns of memory access), the first number is selected as a number of remote sub-parts that is sufficient to prevent a latency, associated with reading the remote sub-parts, from exceeding a threshold above a desired latency that is based on the LSS latency ([0107]: One or more additional policies may be used including those, for example, that may select precharge operations first, row operations first, column operations first, etc. A column-first scheduling policy may, for example, reduce the access latency to active row; [0705]: For example, chosen metrics may include, but are not limited to, one or more of the following… average latency, maximum latency, minimum latency, standard deviation of latency, other statistical measures of latency, combinations of these and/or other measures, metrics and the like etc… may be such to optimize power (e.g. minimize power, operate such that power does not exceed a threshold, etc.)), and prefetching a second number of remote sub-parts to a storage layer of the LSS, wherein the second number of remote sub-parts are expected to be read by the requestor, according to the read pattern, following an expected request to read the first number of remote sub-parts, ([0102]: While a row is active in the row buffer, any number of reads or writes (column accesses) may be performed. After completion of the column access, the cached row may be written back to the memory array; [0165]: In FIG. 2, in one embodiment, a request and/or response may be asynchronous (e.g. split, separated, variable latency, etc.)). Additionally, Michaud teaches wherein the second number is selected based on at least one out of (a) the latency difference, or (b) a read request rate of the requestor ([Abstract]: a memory read-ahead process is performed which includes identifying a learned memory access pattern associated with the requestor; Figs. 6A-6B; [Col. 4, lines 5-10]: The learned memory access patterns 126 are utilized to perform intelligent read-ahead operations for pre-fetching and storing data in the cache memory 130, thereby decreasing access latencies associated with memory access requests for non-resident data (e.g., cache miss, page fault, etc.)). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Taylor to incorporate the teachings of Smith and Michaud to prefetch a first number of remote sub-parts to a cache memory of the local storage system that is less than the threshold, and then prefetch a second number of remote sub-parts based on the latency difference or a read request rate of the requestor. The motivation for doing so would be to optimize channels for latency, as recognized by Smith ([0579] of Smith: In one embodiment, one or more VCs and/or other equivalent channels, paths, circuits, etc. (e.g. channels etc.) may be optimized… For example, one or more channels etc. may be optimized for latency, power, bandwidth and/or one or more other parameters, metrics, aspects, features, combinations of these and the like etc.), and to decrease access latencies associated with memory access requests for remote data, as recognized by Michaud ([Col. 4, lines 5-11] of Michaud: The learned memory access patterns 126 are utilized to perform intelligent read-ahead operations for pre-fetching and storing data in the cache memory 130, thereby decreasing access latencies associated with memory access requests for non-resident data (e.g., cache miss, page fault, etc.) which require time consuming operations to resolve non-resident data). Regarding Claim 10, the combined teachings of Taylor, Smith, and Michaud disclose the non-transitory computer readable medium according to claim 9. Michaud further teaches wherein the remote latency exceeds the LSS latency by a factor of at least ten ([Col. 17, line 63- Col. 18, line 14]: in FIG. 5, the Timestamp value of 100 ms in the row that corresponds to Process ID 49 can indicate a time difference… If the determined time difference does not exceed the predetermined threshold, then the row entries for Process ID 49 will be deemed “recent”… In the example embodiment, since the Counter value in the row for Process ID 49 in the metadata structure 512 shown in FIG. 5 is already set to 10, the Counter value will not be incremented). Regarding Claim 11, the combined teachings of Taylor, Smith, and Michaud disclose the non-transitory computer readable medium according to claim 9. Michaud further teaches, that stores instructions for maintaining, as long as further remote-sub-parts are expected to be read, at least the first number of remote sub-parts in the cache memory ([Col. 9, lines 62-67]: While the example embodiment of FIG. 2 illustrates the databases 230 and 232 as stand-alone entities for ease of illustration, it is to be understood that the databases 230 and 232 would be maintained in regions of the volatile memory 212 and/or or non-volatile memory 214 of the system memory 210 for low-latency access). Regarding Claim 12, the combined teachings of Taylor, Smith, and Michaud disclose the non-transitory computer readable medium according to claim 11. Michaud further teaches, that stores instructions for maintaining, as long as the further remote-sub-parts are expected to be read, at least the second number of remote sub-parts in the storage layer ([Col. 9, lines 62-67]: While the example embodiment of FIG. 2 illustrates the databases 230 and 232 as stand-alone entities for ease of illustration, it is to be understood that the databases 230 and 232 would be maintained in regions of the volatile memory 212 and/or or non-volatile memory 214 of the system memory 210 for low-latency access). 
Regarding Claim 13, the combined teachings of Taylor, Smith, and Michaud disclose the non-transitory computer readable medium according to claim 9. Michaud further teaches wherein the desired latency does not exceed the LSS latency (Fig. 5; [Col. 17, line 63- Col. 18, line 14]: compare the time difference to a predetermined threshold value. If the determined time difference does not exceed the predetermined threshold, then the row entries for Process ID 49 will be deemed “recent” and the Counter value in the given row for Process ID 49 can be incremented). Regarding Claim 14, the combined teachings of Taylor, Smith, and Michaud disclose the non-transitory computer readable medium according to claim 9. Taylor further teaches wherein the read pattern is estimated to comprise local read requests aimed to a local part of the FSE that is stored in the LSS (Fig. 3; [Col. 6, lines 49-54]: A request server 304 in cloud controller 300 may receive file requests from either local processes or via a network from a client 306. These requests are presented to a storage management system that includes a transactional filesystem 308 that manages a set of filesystem metadata 310 and a local storage system 312… A set of block records 314 in metadata 310 include pointer fields that indicate the location of the file data in a disk block 316 in local storage 312); wherein the performing of the prefetch process comprises pre-fetching local sub-parts of the local part of the FSE in order to support the read pattern while maintaining the desired latency ([Col. 17, lines 27-31]: FIG. 6A illustrates a computing device 600 that receives and forwards requests for filesystem operations. Computing device 600 executes a request server 608 that receives requests for file operations from clients (610-612) in its computing environment 614). Regarding Claim 15, the combined teachings of Taylor, Smith, and Michaud disclose the non-transitory computer readable medium according to claim 9. Taylor further teaches, that stores instructions for: detecting, by the controller, a further read pattern that is associated with the requestor and is estimated to comprise further future read requests that are aimed to a further remote part of the FSE that is stored at a further remote storage system (FSS) ([Col. 12, lines 57-63]: In some embodiments, a cloud controller attempts to optimize the placement of data into cloud files to reduce future access overhead. For instance, the cloud controller may strive to, when possible, store all blocks for a file in the same cloud file… users may be provided with a way to configure a policy that reflects anticipated file access patterns); wherein there is a further latency difference between a further latency associated with the further remote part of the FSE and the LSS latency ([Col. 1, lines 27-30]: For instance, storing data remotely ("in the cloud") often increases access latency); and performing a further prefetch process of further remote sub-parts of the further remote part of the FSE in order to support the further read pattern while maintaining the desired latency ([Col. 2, lines 11-16]: The cloud controller can pre-fetch additional cloud files containing such temporally proximate data to reduce the download latency associated with subsequently downloading such cloud files from the cloud storage system on an on-demand basis). Regarding Claim 16, the combined teachings of Taylor, Smith, and Michaud disclose the non-transitory computer readable medium according to claim 15. 
Taylor further teaches wherein the performing of the prefetch process comprises prefetching the first number of further remote sub-parts to the cache memory of the processing node layer of the LSS ([Col. 15, lines 29-41]: For instance, upon receiving a request to access a given data block for a file, a cloud controller may analyze the metadata for the file and then predictively pre-fetch other cloud files that contain other nearby data blocks), and prefetching a third number of further remote sub-parts to the storage layer of the LSS , the third number is selected based on at least one out of (a) the further latency difference, or (b) a read request rate of the requestor ([Col. 15, lines 29-41]: Alternatively (and/or additionally), the cloud controller may also pre-fetch data for other associated files that are likely to be accessed in conjunction with the original file. In both situations, the cloud controller can traverse its stored set of metadata to look up the physical locations (e.g., the CVAs and offsets) for cloud files that should be pre-fetched from the cloud storage system). Regarding Claim 17, Taylor discloses a local storage system (LSS), comprising: a processing node layer of the LSS, the processing node comprising a cache memory; a storage layer; and a controller that comprises an integrated circuit ([Col. 6, lines 47-49]: FIG. 3 illustrates an exemplary system in which a cloud controller 300 (e.g., a caching storage device) manages and accesses data stored in a cloud storage system 302; [Col. 3, line 65 - Col. 7, line 10]: For example, the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips) and is configured to: detect, a read pattern of read requests received from a requestor that is connected to the LSS (Fig. 3; [Col. 2, lines 17-19]: In some embodiments, the cloud controller receives user feedback that indicates expected file characteristics and access patterns; [Col. 6, lines 49-52]: A request server 304 in cloud controller 300 may receive file requests from either local processes or via a network from a client 306; [Col. 12, lines 58-60]: For example, users may be provided with a way to configure a policy that reflects anticipated file access patterns), wherein the read pattern is estimated to comprise future read requests that are aimed to a remote part of a file system entity (FSE) that is stored at a remote storage system (RSS) (Fig. 3; [Abstract]: the cloud controller additionally determines that an additional cloud file in the cloud storage system includes data that is likely to be accessed in conjunction with the data block, and proceeds to pre-fetch this additional cloud file from the cloud storage system; [Col. 12, lines 43-67]: In some embodiments, a cloud controller attempts to optimize the placement of data into cloud files to reduce future access overhead). 
However, Taylor does not explicitly teach “wherein a LSS latency is an average latency measured by the LSS for responses to read requests associated with the requestor and aim to a local part of the FSE that is stored at the LSS; wherein a remote latency is an average latency measured for responses to read requests associated with the requestor and aim to the remote part of the FSE: wherein there is a latency difference that is measured between the remote and the LSS latency; and perform a prefetch process of remote sub-parts of the remote part of the FSE, wherein the remote sub-parts are expected to be read by the requestor, based on the read pattern; wherein the performing of the prefetch process comprises: prefetching a first number of remote sub-parts to the cache memory, the first number is selected as a number of remote sub-parts that is sufficient to prevent a latency, associated with reading the remote sub-parts, from exceeding a threshold above a desired latency that is based on the LSS latency, and prefetching a second number of remote sub-parts to the storage layer, wherein the second number of remote sub-parts are expected to be read by the requestor, according to the read pattern, following an expected request to read the first number of remote sub-parts, and wherein the second number is selected based on at least one out of (a) the latency difference, or (b) a read request rate of the requestor”. On the other hand, in the same field of endeavor, Smith teaches wherein a LSS latency is an average latency measured by the LSS for responses to read requests associated with the requestor and aim to a local part of the FSE that is stored at the LSS; wherein a remote latency is an average latency measured for responses to read requests associated with the requestor and aim to the remote part of the FSE ([0705]: For example, chosen metrics may include… average latency, maximum latency, minimum latency, standard deviation of latency, other statistical measures of latency, combinations of these and/or other measures, metrics and the like etc.): wherein there is a latency difference that is measured between the remote and the LSS latency ([0705]: For example, chosen metrics may include, but are not limited to, one or more of the following… average latency, maximum latency, minimum latency, standard deviation of latency, other statistical measures of latency, combinations of these and/or other measures, metrics and the like etc.; [1187]: Thus for example, a first bus may have a longer propagation delay (e.g. latency, etc.)…. than a second bus. For example, buses may be constructed (e.g. wired, laid out, shaped, etc.) so as to reduce (e.g. alter, ameliorate, dampen, etc.) the difference in electrical properties… [1204]: In this case, the differential delay of paths A and B may be the difference in delay between path A and the delay of path B (e.g. the delay of path A minus the delay of path B, etc.)); and perform a prefetch process of remote sub-parts of the remote part of the FSE, wherein the remote sub-parts are expected to be read by the requestor, based on the read pattern (Fig. 5; [Col. 
2, lines 11-16]: The cloud controller can pre-fetch additional cloud files containing such temporally proximate data to reduce the download latency associated with subsequently downloading such cloud files from the cloud storage system on an on-demand basis); wherein the performing of the prefetch process comprises: prefetching a first number of remote sub-parts to a cache memory of a processing node layer of the LSS ([0102]: The row buffer may serve, function, etc. as a cache, store, etc. to reduce the latency of subsequent access to that row; [0835]-[0840]: For example, in one embodiment, a prefetch unit (prefetcher, prefetch block, prefetch circuit, predictor, etc.) may predict, and/or otherwise calculate etc. future memory access (e.g. based on history analysis, by analyzing strides and other patterns of memory access), the first number is selected as a number of remote sub-parts that is sufficient to prevent a latency, associated with reading the remote sub-parts, from exceeding a threshold above a desired latency that is based on the LSS latency ([0107]: One or more additional policies may be used including those, for example, that may select precharge operations first, row operations first, column operations first, etc. A column-first scheduling policy may, for example, reduce the access latency to active row; [0705]: For example, chosen metrics may include, but are not limited to, one or more of the following… average latency, maximum latency, minimum latency, standard deviation of latency, other statistical measures of latency, combinations of these and/or other measures, metrics and the like etc… may be such to optimize power (e.g. minimize power, operate such that power does not exceed a threshold, etc.)), and prefetching a second number of remote sub-parts to a storage layer of the LSS, wherein the second number of remote sub-parts are expected to be read by the requestor, according to the read pattern, following an expected request to read the first number of remote sub-parts, ([0102]: While a row is active in the row buffer, any number of reads or writes (column accesses) may be performed. After completion of the column access, the cached row may be written back to the memory array; [0165]: In FIG. 2, in one embodiment, a request and/or response may be asynchronous (e.g. split, separated, variable latency, etc.)). Additionally, Michaud teaches wherein the second number is selected based on at least one out of (a) the latency difference, or (b) a read request rate of the requestor ([Abstract]: a memory read-ahead process is performed which includes identifying a learned memory access pattern associated with the requestor; Figs. 6A-6B; [Col. 4, lines 5-10]: The learned memory access patterns 126 are utilized to perform intelligent read-ahead operations for pre-fetching and storing data in the cache memory 130, thereby decreasing access latencies associated with memory access requests for non-resident data (e.g., cache miss, page fault, etc.)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Taylor to incorporate the teachings of Smith and Michaud to prefetch a first number of remote sub-parts to a cache memory of the local storage system that is less than the threshold, and then prefetch a second number of remote sub-parts based on the latency difference or a read request rate of the requestor. 
The motivation for doing so would be to optimize channels for latency, as recognized by Smith ([0579] of Smith: In one embodiment, one or more VCs and/or other equivalent channels, paths, circuits, etc. (e.g. channels etc.) may be optimized… For example, one or more channels etc. may be optimized for latency, power, bandwidth and/or one or more other parameters, metrics, aspects, features, combinations of these and the like etc.), and to decrease access latencies associated with memory access requests for remote data, as recognized by Michaud ([Col. 4, lines 5-11] of Michaud: The learned memory access patterns 126 are utilized to perform intelligent read-ahead operations for pre-fetching and storing data in the cache memory 130, thereby decreasing access latencies associated with memory access requests for non-resident data (e.g., cache miss, page fault, etc.) which require time consuming operations to resolve non-resident data). Conclusion 25. THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIRLEY D. HICKS whose telephone number is (571)272-3304. The examiner can normally be reached Mon - Fri 7:30 - 4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones can be reached on (571) 272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /S D H/Examiner, Art Unit 2168 /CHARLES RONES/Supervisory Patent Examiner, Art Unit 2168
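For context on the technology being argued over: claim 1 recites a two-tier prefetch in which a first number of remote sub-parts is prefetched into the cache memory of the processing node layer (sized so that latency stays within a threshold above a desired latency based on the LSS latency) and a second number is prefetched into the storage layer (sized from the latency difference and/or the requestor's read request rate). The sketch below is only an illustration of that claim language; the names, the latency model, and the sizing heuristics are assumptions made for readability, not anything disclosed by the application or the cited references.

```python
# Illustrative sketch only. The names, the latency model, and the sizing heuristics
# below are assumptions made for readability; neither the application nor the cited
# references disclose this implementation.

import math
from dataclasses import dataclass


@dataclass
class LatencyStats:
    lss_latency_ms: float     # average latency of reads aimed at the local part of the FSE
    remote_latency_ms: float  # average latency of reads aimed at the remote part of the FSE

    @property
    def latency_difference_ms(self) -> float:
        # the "latency difference ... measured between the remote latency and the LSS latency"
        return self.remote_latency_ms - self.lss_latency_ms


def plan_dual_tier_prefetch(stats: LatencyStats,
                            read_rate_per_s: float,
                            threshold_ms: float = 1.0) -> tuple[int, int]:
    """Return (first_number, second_number) of remote sub-parts to prefetch.

    first_number  -> into the cache memory of the processing node layer
    second_number -> into the storage layer
    """
    desired_latency_ms = stats.lss_latency_ms          # desired latency based on the LSS latency
    budget_ms = desired_latency_ms + threshold_ms

    # If remote reads already land within the threshold above the desired latency,
    # no prefetch is needed under this (assumed) model.
    if stats.remote_latency_ms <= budget_ms:
        return 0, 0

    # First tier (assumed heuristic): cache enough sub-parts to absorb the reads the
    # requestor issues while one remote fetch is in flight, so observed latency stays
    # near the LSS latency instead of the remote latency.
    first_number = max(1, math.ceil(stats.remote_latency_ms / 1000.0 * read_rate_per_s))

    # Second tier (assumed heuristic): stage in the storage layer the sub-parts expected
    # to be read during the extra delay of going remote, i.e. latency difference x rate.
    second_number = max(1, math.ceil(stats.latency_difference_ms / 1000.0 * read_rate_per_s))

    return first_number, second_number


if __name__ == "__main__":
    stats = LatencyStats(lss_latency_ms=0.5, remote_latency_ms=20.0)
    print(plan_dual_tier_prefetch(stats, read_rate_per_s=2000))   # e.g. (40, 39)
```

The two sizing rules above are stand-ins chosen only to make the claim's structure concrete; the dispute in this rejection is over whether such limitations are functional at all, not over any particular formula.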

Prosecution Timeline

Jul 10, 2023
Application Filed
Aug 23, 2024
Non-Final Rejection — §103
Dec 02, 2024
Response Filed
Apr 02, 2025
Final Rejection — §103
Aug 05, 2025
Request for Continued Examination
Aug 08, 2025
Response after Non-Final Action
Aug 21, 2025
Non-Final Rejection — §103
Nov 25, 2025
Response Filed
Mar 23, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596682
SYSTEM AND METHOD FOR OBJECT STORE FEDERATION
2y 5m to grant • Granted Apr 07, 2026
Patent 12499102
HIERARCHICAL DELIMITER IDENTIFICATION FOR PARSING OF RAW DATA
2y 5m to grant • Granted Dec 16, 2025
Patent 12499146
MACHINE LEARNING AND NATURAL LANGUAGE PROCESSING (NLP)-BASED SYSTEM FOR SYSTEM-ON-CHIP (SoC) TROUBLESHOOTING
2y 5m to grant • Granted Dec 16, 2025
Patent 12405818
BATCHING WAVEFORM DATA
2y 5m to grant • Granted Sep 02, 2025
Patent 12380126
DISCOVERY OF SOURCE RANGE PARTITIONING INFORMATION IN DATA EXTRACT JOB
2y 5m to grant • Granted Aug 05, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 64%
With Interview: 99% (+56.3%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 107 resolved cases by this examiner. Grant probability derived from career allow rate.
