Prosecution Insights
Last updated: April 18, 2026
Application No. 17/867,347

QUEUE OPTIMIZATION VIA PREDICTIVE CACHING IN CLOUD COMPUTING

Status: Non-Final OA (§103)
Filed: Jul 18, 2022
Examiner: TSAI, SHENG JEN
Art Unit: 2139
Tech Center: 2100 — Computer Architecture & Software
Assignee: Relativity Oda LLC
OA Round: 7 (Non-Final)

Grant Probability: 70% (Favorable)
OA Rounds: 7-8
To Grant: 3y 6m
With Interview: 83%

Examiner Intelligence

Career Allowance Rate: 70% (above average; 556 granted / 790 resolved; +15.4% vs TC avg)
Interview Lift: +12.9% allowance for resolved cases with interview (moderate lift)
Typical Timeline: 3y 6m average prosecution; 25 applications currently pending
Career History: 815 total applications across all art units

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 24.7% (-15.3% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)

Based on career data from 790 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

1. This Office Action is taken in response to Applicants’ Amendments and Remarks filed on 3/9/2026 regarding application 17/867,347 filed on 7/18/2022. Claims 1, 3-9, 11-17, and 19-22 are pending for consideration.

2. Response to Amendments and Remarks

Applicants’ amendments and remarks have been fully and carefully considered, with the Examiner’s response set forth below.

(1) In response to the amendments and remarks, an updated claim analysis has been made with newly identified reference(s). Refer to the corresponding sections of the following Office Action for details.

3. Examiner’s Note

(1) In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution. MPEP 714.02 recites: “Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” Amendments not pointing to specific support in the disclosure may be deemed as not complying with the provisions of 37 C.F.R. 1.121(b), (c), (d), and (h) and therefore held not fully responsive. Generic statements such as “Applicants believe no new matter has been introduced” may be deemed insufficient.

(2) Examiner has cited particular columns/paragraphs and line numbers in the references applied to the claims for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well.
It is respectfully requested that the Applicant, in preparing responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the Examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1, 6-9, 14-17, and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Anderson et al. (US Patent 6,047,356, hereinafter Anderson) in view of Bi et al. (US Patent Application Publication 2014/0129814, hereinafter Bi) and Ono et al. (US Patent Application Publication 2005/0257228, hereinafter Ono).

As to claim 1, Anderson teaches A computer-implemented method for object-based data storage [the corresponding “object” may be “data” and/or “file” -- data storage system as shown in figure 1; A distributed file system with dedicated nodes capable of being connected to workstations at their bus. The system uses a complementary client-side and server-side file caching method that increases parallelism by issuing multiple server requests to keep the hardware devices busy simultaneously. 
Most of the node memory is used for file caching and input/output (I/O) device buffering using dynamic memory organization, reservation and allocation methods for competing memory-intensive activities (abstract); Bi also teaches this limitation – Described are an operating system startup acceleration method and device, a terminal and a computer readable medium. The method comprises: acquiring prefetch information corresponding to at least one process to be accelerated in a procedure of operating system startup, wherein the prefetch information comprises a file path, a shift value and a length value of a data block required by the process to be accelerated; and reading a corresponding data block into a system cache according to the acquired prefetch information, and completing a startup procedure of the process to be accelerated using the data block in the system cache (abstract)], comprising: maintaining, with one or more processors, a queue of operations relating to a plurality of documents maintained at an object-based data storage [the corresponding “documents” may be “files;” – … The present invention uses read-ahead and write-behind caching techniques for sequential rather than repetitive file access, which attempt to separate the disk or network access from the read or write steps of application programs in order to lessen the need for waiting by the application program. In read ahead, future file read access by an application is predicted and the data is read into the cache before being requested by the application. In write behind, data to be written is placed into a cache and, after the application program resumes execution, written to the disk (c1 L35-45); Write requests for the server are placed in the request queue for the appropriate device (box 216) and then an event is issued to the server task at the head of the server task queue (box 218). 
Read requests for the server are placed in the request queue for the appropriate device (box 220) but an event is issued (box 224) only if there is no read pending on the device (box 222) as explained below (c9 L35-42)]; identifying, with the one or more processors, a read operation in the queue to read a document from the object-based data storage [read operations as shown in figures 8, 9, and 10A; A method for file read caching of the present invention on the client or server side includes verifying that the cache blocks are a range of blocks contiguous in the file and beginning with the required cache block, and, if the full range of blocks are not in the cache, reading the missing blocks into the cache. The read request may be served before or after any missing blocks are read into the cache depending on whether the requested data is initially available in cache (c2 L58-65); A method of the present invention for caching file reads by a client from a network file server includes providing caches on both the server and the client, the server cache reading the data in mass storage device allocation units, remainders of files, or whole caches, whichever is less, and the client cache storing the data in multiples of cache blocks. Sufficient cache blocks are read ahead into the client cache to keep the server cache one mass storage device access ahead of the data currently read by the client application (c3 L18-26); A method for file read caching of the present invention on the client or server side includes verifying that the cache blocks are a range of blocks contiguous in the file and beginning with the required cache block, and, if the full range of blocks are not in the cache, reading the missing blocks into the cache. 
The read request may be served before or after any missing blocks are read into the cache depending on whether the requested data is initially available in cache (c2 L58-65); … Read requests for the server are placed in the request queue for the appropriate device (box 220) but an event is issued (box 224) only if there is no read pending on the device (box 222) as explained below (c9 L35-42); Bi also teaches read operation -- Described are an operating system startup acceleration method and device, a terminal and a computer readable medium. The method comprises: acquiring prefetch information corresponding to at least one process to be accelerated in a procedure of operating system startup, wherein the prefetch information comprises a file path, a shift value and a length value of a data block required by the process to be accelerated; and reading a corresponding data block into a system cache according to the acquired prefetch information, and completing a startup procedure of the process to be accelerated using the data block in the system cache (abstract)]; analyzing, with the one or more processors, the identified read operation to identify an additional document to read from the object-based data storage by identifying that the document and the additional document are included in a same file path [read operations as shown in figures 8, 9, and 10A; A method for file read caching of the present invention on the client or server side includes verifying that the cache blocks are a range of blocks contiguous in the file and beginning with the required cache block, and, if the full range of blocks are not in the cache, reading the missing blocks into the cache. 
The read request may be served before or after any missing blocks are read into the cache depending on whether the requested data is initially available in cache (c2 L58-65); … The present invention uses read-ahead and write-behind caching techniques for sequential rather than repetitive file access, which attempt to separate the disk or network access from the read or write steps of application programs in order to lessen the need for waiting by the application program. In read ahead, future file read access by an application is predicted and the data is read into the cache before being requested by the application. In write behind, data to be written is placed into a cache and, after the application program resumes execution, written to the disk (c1 L35-45); When the last operation was a read and a sequential read pattern has been detected, the client cache is in the read-ahead state. The cache contains a contiguous range of file data. Asynchronous "read-ahead" requests are issued for all blocks in the client cache not yet retrieved from the server to maximize performance by increasing parallelism … (c7 L67 to c8 L17); Thus, the “future file” that is predicted to be needed by the application is the corresponding “additional file” that is not associated with the read operation currently in the queue, because the “future file” is not requested by the application, and not in the queue of read requests. 
Therefore, Anderson clearly teaches the limitation of “additional document;” Bi more expressively teaches this limitation – a prefetch information acquisition module configured to acquire prefetch information corresponding to at least one process to be accelerated in the procedure of operating system startup, wherein the prefetch information comprises a file path, a shift value and a length value of a data block required by the process to be accelerated … (¶ 0013-0015); Generally, a data block or a piece of data content on a low speed storage device such as a hard disk is described by the format of < file path, shift value, length value>. Taking a hard disk for example, the file path represents a file where the data block locates in the hard disk, the shift value represents the byte offset in the file where the data block locates in the hard disk and the length value represents the byte size of the data block (¶ 0041); In Table 1, since both the first data block and the second data block belong to the B.DLL file, they have the same file path. At the same time, the last byte (i.e. shift value+length value) of the first data block is followed by the first byte (i.e. shift value) of the second data block, thus the first data block and the second data block are contiguous data blocks. Length values and shift values of contiguous data blocks may be combined to obtain a new data block (¶ 0046)], wherein the additional document is not associated with a read operation in the queue of operations [read operations as shown in figures 8, 9, and 10A; A method for file read caching of the present invention on the client or server side includes verifying that the cache blocks are a range of blocks contiguous in the file and beginning with the required cache block, and, if the full range of blocks are not in the cache, reading the missing blocks into the cache. 
The read request may be served before or after any missing blocks are read into the cache depending on whether the requested data is initially available in cache (c2 L58-65); … The present invention uses read-ahead and write-behind caching techniques for sequential rather than repetitive file access, which attempt to separate the disk or network access from the read or write steps of application programs in order to lessen the need for waiting by the application program. In read ahead, future file read access by an application is predicted and the data is read into the cache before being requested by the application. In write behind, data to be written is placed into a cache and, after the application program resumes execution, written to the disk (c1 L35-45); Bi also teaches this limitation by way of “prefetch” – Described are an operating system startup acceleration method and device, a terminal and a computer readable medium. The method comprises: acquiring prefetch information corresponding to at least one process to be accelerated in a procedure of operating system startup, wherein the prefetch information comprises a file path, a shift value and a length value of a data block required by the process to be accelerated; and reading a corresponding data block into a system cache according to the acquired prefetch information, and completing a startup procedure of the process to be accelerated using the data block in the system cache (abstract)], and wherein the file path comprises a folder path [Bi teaches that the files are DLL files, and it is a fact that DLL files are organized in a folder – Taking a Windows operating system for example, when system services and third party applications are started, each of the system services and third party applications generally correspond to a process. Basically, each process needs to load an executable file (executable file) and a Dynamic Link Library (DLL) file. 
When a process loads an executable file and a DLL file, it uses a memory mapping file to access the executable file and the DLL file (¶ 0032); To facilitate explanation, shift values and length values of three data blocks in a B.DLL file required by a process A are illustrated in Table 1: … In Table 1, since both the first data block and the second data block belong to the B.DLL file, they have the same file path. At the same time, the last byte (i.e. shift value+length value) of the first data block is followed by the first byte (i.e. shift value) of the second data block, thus the first data block and the second data block are contiguous data blocks. Length values and shift values of contiguous data blocks may be combined to obtain a new data block (¶ 0043-0046); Ono explicitly teaches that DLL files are organized in a folder -- A folder list box 213 is displayed when one of the change buttons 196 (discussed above with reference to FIG. 7) is clicked on. The folder list box 213 shows a list of folders such as dll files constituting effect plug-ins in the corresponding text box that indicates a list of available effect plug-ins. The user may click on an add button 214 to add a new folder to the folder list box 213; the user may also click on a delete button 215 to delete a desired folder from the folder list box 213. 
Any of the folders such as dll files constituting the effect plug-ins in the folder list box 213, when set as described, is not copied illustratively to a program execution area in the RAM 36 but merely assigned a file path (¶ 0097)]; in response to the identification of the additional document to read from the object-based data storage by identifying that the document and the additional document are included in a same file path [read operations as shown in figures 8, 9, and 10A; A method for file read caching of the present invention on the client or server side includes verifying that the cache blocks are a range of blocks contiguous in the file and beginning with the required cache block, and, if the full range of blocks are not in the cache, reading the missing blocks into the cache. The read request may be served before or after any missing blocks are read into the cache depending on whether the requested data is initially available in cache (c2 L58-65); … The present invention uses read-ahead and write-behind caching techniques for sequential rather than repetitive file access, which attempt to separate the disk or network access from the read or write steps of application programs in order to lessen the need for waiting by the application program. In read ahead, future file read access by an application is predicted and the data is read into the cache before being requested by the application. In write behind, data to be written is placed into a cache and, after the application program resumes execution, written to the disk (c1 L35-45); When the last operation was a read and a sequential read pattern has been detected, the client cache is in the read-ahead state. The cache contains a contiguous range of file data. 
Asynchronous "read-ahead" requests are issued for all blocks in the client cache not yet retrieved from the server to maximize performance by increasing parallelism … (c7 L67 to c8 L17); Thus, the “future file” that is predicted to be needed by the application is the corresponding “additional file” that is not associated with the read operation currently in the queue, because the “future file” is not requested by the application, and not in the queue of read requests. Therefore, Anderson clearly teaches the cited limitation; Bi more expressively teaches this limitation – a prefetch information acquisition module configured to acquire prefetch information corresponding to at least one process to be accelerated in the procedure of operating system startup, wherein the prefetch information comprises a file path, a shift value and a length value of a data block required by the process to be accelerated … (¶ 0013-0015); Generally, a data block or a piece of data content on a low speed storage device such as a hard disk is described by the format of < file path, shift value, length value>. Taking a hard disk for example, the file path represents a file where the data block locates in the hard disk, the shift value represents the byte offset in the file where the data block locates in the hard disk and the length value represents the byte size of the data block (¶ 0041); In Table 1, since both the first data block and the second data block belong to the B.DLL file, they have the same file path. At the same time, the last byte (i.e. shift value+length value) of the first data block is followed by the first byte (i.e. shift value) of the second data block, thus the first data block and the second data block are contiguous data blocks. 
Length values and shift values of contiguous data blocks may be combined to obtain a new data block (¶ 0046)]; creating, with the one or more processors, a read operation for the additional document [as shown in figure 8, steps 126, 128, 132, 136, 142, 130, 144, 146, and 152; When the last operation was a read and a sequential read pattern has been detected, the client cache is in the read-ahead state. The cache contains a contiguous range of file data. Asynchronous "read-ahead" requests are issued for all blocks in the client cache not yet retrieved from the server to maximize performance by increasing parallelism … (c7 L67 to c8 L17); The host read request is handled as shown in the flow chart of FIG. 8 using the network protocol diagram of FIG. 10A … If a sequential read is detected, the cache is set to sequential read status (box 128). The first block (if not already in the cache) and the appropriate number (as discussed above) of subsequent blocks are requested (box 132) … Otherwise, the block is deleted and the next block not yet in the cache is requested (box 144) … (c10 L23-47); … determining whether said first read request is part of a sequential pattern of read requests; if said first read request is part of a sequential pattern of read requests and said first block of file data is not in said first cache area of said general purpose cache memory, reading a range of blocks beginning with said first block of file data into said first cache area from said local mass storage device … (claim 5); Bi also teaches read operation -- Described are an operating system startup acceleration method and device, a terminal and a computer readable medium. 
The method comprises: acquiring prefetch information corresponding to at least one process to be accelerated in a procedure of operating system startup, wherein the prefetch information comprises a file path, a shift value and a length value of a data block required by the process to be accelerated; and reading a corresponding data block into a system cache according to the acquired prefetch information, and completing a startup procedure of the process to be accelerated using the data block in the system cache (abstract)]; retrieving, with the one or more processors, both the document and the additional document from the object-based data storage [read operations as shown in figures 8, 9, and 10A; A method for file read caching of the present invention on the client or server side includes verifying that the cache blocks are a range of blocks contiguous in the file and beginning with the required cache block, and, if the full range of blocks are not in the cache, reading the missing blocks into the cache. The read request may be served before or after any missing blocks are read into the cache depending on whether the requested data is initially available in cache (c2 L58-65); … The present invention uses read-ahead and write-behind caching techniques for sequential rather than repetitive file access, which attempt to separate the disk or network access from the read or write steps of application programs in order to lessen the need for waiting by the application program. In read ahead, future file read access by an application is predicted and the data is read into the cache before being requested by the application. 
In write behind, data to be written is placed into a cache and, after the application program resumes execution, written to the disk (c1 L35-45); Bi also teaches this limitation – a prefetch information acquisition module configured to acquire prefetch information corresponding to at least one process to be accelerated in the procedure of operating system startup, wherein the prefetch information comprises a file path, a shift value and a length value of a data block required by the process to be accelerated … (¶ 0013-0015); Generally, a data block or a piece of data content on a low speed storage device such as a hard disk is described by the format of < file path, shift value, length value>. Taking a hard disk for example, the file path represents a file where the data block locates in the hard disk, the shift value represents the byte offset in the file where the data block locates in the hard disk and the length value represents the byte size of the data block (¶ 0041); In Table 1, since both the first data block and the second data block belong to the B.DLL file, they have the same file path. At the same time, the last byte (i.e. shift value+length value) of the first data block is followed by the first byte (i.e. shift value) of the second data block, thus the first data block and the second data block are contiguous data blocks. Length values and shift values of contiguous data blocks may be combined to obtain a new data block (¶ 0046)]; and storing, with the one or more processors, the document and the additional document in a cache memory [read operations as shown in figures 8, 9, and 10A; A method for file read caching of the present invention on the client or server side includes verifying that the cache blocks are a range of blocks contiguous in the file and beginning with the required cache block, and, if the full range of blocks are not in the cache, reading the missing blocks into the cache. 
The read request may be served before or after any missing blocks are read into the cache depending on whether the requested data is initially available in cache (c2 L58-65); … The present invention uses read-ahead and write-behind caching techniques for sequential rather than repetitive file access, which attempt to separate the disk or network access from the read or write steps of application programs in order to lessen the need for waiting by the application program. In read ahead, future file read access by an application is predicted and the data is read into the cache before being requested by the application. In write behind, data to be written is placed into a cache and, after the application program resumes execution, written to the disk (c1 L35-45); Bi also teaches read operation -- Described are an operating system startup acceleration method and device, a terminal and a computer readable medium. The method comprises: acquiring prefetch information corresponding to at least one process to be accelerated in a procedure of operating system startup, wherein the prefetch information comprises a file path, a shift value and a length value of a data block required by the process to be accelerated; and reading a corresponding data block into a system cache according to the acquired prefetch information, and completing a startup procedure of the process to be accelerated using the data block in the system cache (abstract)]. Regarding claim 1, Anderson does not expressly teach that the document and the additional document are included in a same file path. However, files that are related and associated with each other are typically stored in the same file path under a folder/directory to facilitate efficient retrieval and accessing. 
For example, Bi specifically teaches DLL files to be prefetched are organized in the same file path [Generally, a data block or a piece of data content on a low speed storage device such as a hard disk is described by the format of < file path, shift value, length value>. Taking a hard disk for example, the file path represents a file where the data block locates in the hard disk, the shift value represents the byte offset in the file where the data block locates in the hard disk and the length value represents the byte size of the data block (¶ 0041); In Table 1, since both the first data block and the second data block belong to the B.DLL file, they have the same file path. At the same time, the last byte (i.e. shift value+length value) of the first data block is followed by the first byte (i.e. shift value) of the second data block, thus the first data block and the second data block are contiguous data blocks. Length values and shift values of contiguous data blocks may be combined to obtain a new data block (¶ 0046)]. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to store files that are related and associated with each other in the same file path under a folder/directory, as demonstrated by Bi, and to incorporate it into the existing scheme disclosed by Anderson, in order to facilitate efficient retrieval and accessing. It is also noted that, although Bi does not explicitly mention that the DLL files are organized in a folder/directory structure, it is a fact that DLL files are organized in a folder/directory structure, as demonstrated by the Ono reference. 
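[Editor's note] The contiguous-block combination that the rejection draws from Bi's Table 1 can be sketched as follows. The function name and the byte values are illustrative only, not from the record; Bi describes blocks simply as <file path, shift value, length value> triples that are merged when one block's shift plus length equals the next block's shift:

```python
# Merge contiguous <file path, shift, length> prefetch records, as in Bi's
# Table 1 example: blocks in the same file are contiguous when the byte
# after one block (shift + length) is the first byte (shift) of the next.

def merge_contiguous(blocks):
    """blocks: list of (file_path, shift, length) tuples."""
    merged = []
    for path, shift, length in sorted(blocks):
        if merged:
            prev_path, prev_shift, prev_len = merged[-1]
            # Same file and byte-adjacent: combine into one larger block.
            if path == prev_path and prev_shift + prev_len == shift:
                merged[-1] = (prev_path, prev_shift, prev_len + length)
                continue
        merged.append((path, shift, length))
    return merged

# Two adjacent blocks in B.DLL collapse into one; the third stands alone.
records = [("B.DLL", 0, 4096), ("B.DLL", 4096, 8192), ("B.DLL", 20480, 4096)]
print(merge_contiguous(records))  # [('B.DLL', 0, 12288), ('B.DLL', 20480, 4096)]
```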
As to claim 6, Anderson in view of Bi teaches The computer-implemented method of claim 1, wherein: the read operation is comprised in a plurality of read operations in the queue; and the identifying the additional document to read from the object-based data storage to the cache memory comprises analyzing the plurality of read operations to identify the additional document [Anderson -- read operations as shown in figures 8, 9, and 10A; A method for file read caching of the present invention on the client or server side includes verifying that the cache blocks are a range of blocks contiguous in the file and beginning with the required cache block, and, if the full range of blocks are not in the cache, reading the missing blocks into the cache. The read request may be served before or after any missing blocks are read into the cache depending on whether the requested data is initially available in cache (c2 L58-65); … The present invention uses read-ahead and write-behind caching techniques for sequential rather than repetitive file access, which attempt to separate the disk or network access from the read or write steps of application programs in order to lessen the need for waiting by the application program. In read ahead, future file read access by an application is predicted and the data is read into the cache before being requested by the application. In write behind, data to be written is placed into a cache and, after the application program resumes execution, written to the disk (c1 L35-45)]. 
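[Editor's note] The queue analysis recited in claims 1 and 6 can be sketched as below. All names and paths are hypothetical; neither Anderson nor Bi discloses this code. The idea is: for each queued read, nominate sibling documents under the same folder path that have no read operation of their own in the queue:

```python
import os
from collections import deque

# Sketch: scan a queue of operations and propose prefetch candidates, i.e.
# documents in the same folder path as a queued read that are not themselves
# the subject of any queued read operation.

def prefetch_candidates(queue, folder_index):
    """queue: deque of (op_type, document_path) pairs.
    folder_index: maps folder path -> list of documents stored under it."""
    queued_reads = {doc for op, doc in queue if op == "read"}
    candidates = []
    for op, doc in queue:
        if op != "read":
            continue
        folder = os.path.dirname(doc)
        for sibling in folder_index.get(folder, []):
            # Only documents not already associated with a queued read.
            if sibling != doc and sibling not in queued_reads:
                candidates.append(sibling)
    return candidates

queue = deque([("read", "case42/exhibits/doc-001.pdf"),
               ("write", "case42/logs/audit.txt")])
index = {"case42/exhibits": ["case42/exhibits/doc-001.pdf",
                             "case42/exhibits/doc-002.pdf"]}
print(prefetch_candidates(queue, index))  # ['case42/exhibits/doc-002.pdf']
```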
As to claim 7, Anderson in view of Bi teaches The computer-implemented method of claim 1, further comprising: subsequent to the storing of the additional document in the cache memory, detecting, with the one or more processors, a read operation in the queue of operations associated with the additional document; and providing, with the one or more processors, the additional document from the cache memory and without sending an additional retrieve transaction to the object- based data storage [Anderson -- read operations as shown in figures 8, 9, and 10A; A method for file read caching of the present invention on the client or server side includes verifying that the cache blocks are a range of blocks contiguous in the file and beginning with the required cache block, and, if the full range of blocks are not in the cache, reading the missing blocks into the cache. The read request may be served before or after any missing blocks are read into the cache depending on whether the requested data is initially available in cache (c2 L58-65); … The present invention uses read-ahead and write-behind caching techniques for sequential rather than repetitive file access, which attempt to separate the disk or network access from the read or write steps of application programs in order to lessen the need for waiting by the application program. In read ahead, future file read access by an application is predicted and the data is read into the cache before being requested by the application. In write behind, data to be written is placed into a cache and, after the application program resumes execution, written to the disk (c1 L35-45)]. 
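[Editor's note] The cache-first service step of claim 7 amounts to the following sketch. The store, cache, and transaction-log structures are illustrative assumptions: once a document has been prefetched into cache memory, a later queued read for it is answered from the cache without sending another retrieve transaction to object storage:

```python
# Sketch: serve a queued read from cache when possible; only a cache miss
# issues a retrieve transaction against the object store.

def serve_read(doc, cache, object_store, transactions):
    if doc in cache:
        return cache[doc]                # cache hit: no retrieve transaction
    transactions.append(("retrieve", doc))
    data = object_store[doc]
    cache[doc] = data                    # store for subsequent reads
    return data

store = {"a.pdf": b"A", "b.pdf": b"B"}
cache, tx = {"b.pdf": b"B"}, []          # b.pdf was prefetched earlier
serve_read("b.pdf", cache, store, tx)    # hit: no transaction recorded
serve_read("a.pdf", cache, store, tx)    # miss: one retrieve transaction
print(tx)  # [('retrieve', 'a.pdf')]
```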
As to claim 8, Anderson in view of Bi teaches The computer-implemented method of claim 1, further comprising slicing the documents maintained at the object-based storage into individual slices, and wherein each individual slice of the individual slices has a memory size of less than a predetermined individual slice memory size [Anderson -- the corresponding “slices” may be the “blocks” of the file -- read operations as shown in figures 8, 9, and 10A; A method for file read caching of the present invention on the client or server side includes verifying that the cache blocks are a range of blocks contiguous in the file and beginning with the required cache block, and, if the full range of blocks are not in the cache, reading the missing blocks into the cache. The read request may be served before or after any missing blocks are read into the cache depending on whether the requested data is initially available in cache (c2 L58-65); Client cache sizes are determined as follows. One soft reservation is made, equal to the sum of the ideal sizes of all client caches. When the soft reservation is fully granted, the actual size of each client cache is its ideal size. If, however, the soft reservation request is not fully granted, the memory is divided among client caches in proportion to the predicted data rate of client access to each open file. This rate may be calculated periodically by the custodian task running on the node and described in greater detail below. In the preferred embodiment, this rate is computed as an exponentially weighted average of the number of bytes transferred in fixed periods of time. This average is calculated by adding one-half the previous average and one-half the number of bytes transferred during the latest time period. Other prediction techniques are possible without departing from the scope of the present invention … (c8 L24-57)]. 
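[Editor's note] The slicing limitation of claim 8 reduces to splitting a document into slices no larger than a predetermined maximum, for which Anderson's fixed-size cache blocks are the asserted analogue. A minimal sketch (illustrative, not from either reference):

```python
# Sketch: slice a document into pieces, each at most max_slice_size bytes;
# only the final slice may be smaller.

def slice_document(data, max_slice_size):
    return [data[i:i + max_slice_size]
            for i in range(0, len(data), max_slice_size)]

slices = slice_document(b"x" * 10_000, 4096)
print([len(s) for s in slices])  # [4096, 4096, 1808]
```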
As to claim 9, it recites substantially the same limitations as in claim 1, and is rejected for the same reasons set forth in the analysis of claim 1. Refer to “As to claim 1” presented earlier in this Office Action for details.

As to claim 14, it recites substantially the same limitations as in claim 6, and is rejected for the same reasons set forth in the analysis of claim 6. Refer to “As to claim 6” presented earlier in this Office Action for details.

As to claim 15, it recites substantially the same limitations as in claim 7, and is rejected for the same reasons set forth in the analysis of claim 7. Refer to “As to claim 7” presented earlier in this Office Action for details.

As to claim 16, it recites substantially the same limitations as in claim 8, and is rejected for the same reasons set forth in the analysis of claim 8. Refer to “As to claim 8” presented earlier in this Office Action for details.

As to claim 17, it recites substantially the same limitations as in claim 1, and is rejected for the same reasons set forth in the analysis of claim 1. Refer to “As to claim 1” presented earlier in this Office Action for details.

As to claim 19, it recites substantially the same limitations as in claim 6, and is rejected for the same reasons set forth in the analysis of claim 6. Refer to “As to claim 6” presented earlier in this Office Action for details.
As to claim 20, Anderson in view of Bi teaches The one or more non-transitory computer readable media of claim 19, wherein: the retrieval of the document from the object-based storage comprises retrieving an initial document fragment for the document; and the storing of the document in the cache memory comprises storing the initial document fragment for the document in the cache memory without storing a subsequent document fragment in the cache memory [Anderson -- A method for file read caching of the present invention on the client or server side includes verifying that the cache blocks are a range of blocks contiguous in the file and beginning with the required cache block, and, if the full range of blocks are not in the cache, reading the missing blocks into the cache. The read request may be served before or after any missing blocks are read into the cache depending on whether the requested data is initially available in cache (c2 L58-65); In the empty state, a client cache contains no data. Its ideal, maximum and actual sizes are zero. A client cache is in this state initially, and whenever memory allocation has reduced its allocated size to zero (c7 L54-57)].

As to claim 21, Anderson in view of Bi teaches The one or more non-transitory computer readable media of claim 17, wherein: the read operation is comprised in a plurality of read operations in the queue [Anderson -- A distributed file system with dedicated nodes capable of being connected to workstations at their bus. The system uses a complementary client-side and server-side file caching method that increases parallelism by issuing multiple server requests to keep the hardware devices busy simultaneously.
Most of the node memory is used for file caching and input/output (I/O) device buffering using dynamic memory organization, reservation and allocation methods for competing memory-intensive activities (abstract); Distributed file systems include network nodes, which are computer systems attached directly to a network. Each network node has a processor, random-access memory (RAM), and an interface to a communication network. Nodes that are able to act as "servers" are interfaced to mass storage devices such as disk drives. The mass storage devices are usually partitioned in allocation units and data is read from or written to the device in multiples of sectors up to one allocation unit. In an access to a file on a given disk, the network node where the disk is located is called the "server" and the node from which the request was issued is called the "client." In a read access, data flows from the server to the client; in a write access, data flows from the client to the server … (c1 L10-25); … Read requests for the server are placed in the request queue for the appropriate device (box 220) but an event is issued (box 224) only if there is no read pending on the device (box 222) as explained below (c9 L35-42)]; and the instructions, when executed by one or more processors, further cause the one or more processors to limit a quantity of additional documents that may be loaded into the cache memory to a predetermined number across the plurality of read operations [Anderson -- A memory reservation method of the present invention includes specifying a minimum and a maximum amount of memory to be reserved for an activity. If enough memory is available, an amount of memory between the minimum and the maximum is reserved for the activity. 
For each activity for which memory has been reserved, the amount of memory reserved is dynamically adjusted between the minimum and the maximum such that the sum of all reservations is less than or equal to the memory available (c2 L48-57); Node 34 receives a sequence of client file access requests from host 24. The requests could also originate from the node itself without departing from the scope of the present invention. The types of requests include: 1) open a particular local or remote file; 2) read a range of bytes from an open local or remote file into a memory cache in the node … (c4 L44-55); The DMS 54 has separate notions of "reservation" and "allocation." An activity can reserve some number of buffers; this does not allocate specific buffers, but ensures that a subsequent allocation request will succeed. In FIG. 2, the DMS 54 arbitrates conflicting memory reservations by activities 50, 52 and 53. The DMS 54 provides two types of memory reservation. An activity makes a "hard" reservation for its minimal memory requirement. A hard reservation request specifies a number of buffers, and either succeeds or fails … In addition, an activity can make a "soft" reservation request, in which it specifies the maximum number of buffers it can use … (c5 L35-67 and c6 L1-44); When the last operation was a read and a sequential read pattern has been detected, the client cache is in the read-ahead state … For a given open file, the optimal number N of parallel requests depends on the client cache block size X, the disk allocation unit size Y, the average network latency Z, and the network bandwidth B … Thus the ideal size of a read-ahead cache is N as defined above. The maximum and actual sizes depend on memory allocation (c7 L66 to c8 L17)].

5. Claims 3-5, and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Anderson in view of Bi, and further in view of Ivonkovic et al. (US Patent Application Publication 2021/0132915, hereinafter Ivonkovic).
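The "hard" and "soft" reservation scheme quoted from Anderson above can likewise be sketched in miniature: a hard reservation for an activity's minimal requirement is all-or-nothing, while a soft reservation for its maximum is granted on a best-effort basis. The class name and buffer counts below are hypothetical illustrations, not code from the reference.

```python
class MemoryManager:
    """Sketch of a hard/soft buffer-reservation arbiter."""

    def __init__(self, total_buffers):
        self.free = total_buffers

    def hard_reserve(self, n):
        # Hard reservation: minimal requirement, all-or-nothing.
        if n > self.free:
            return 0
        self.free -= n
        return n

    def soft_reserve(self, n):
        # Soft reservation: grant up to the requested maximum,
        # possibly less when memory is short.
        granted = min(n, self.free)
        self.free -= granted
        return granted
```

With 10 free buffers, a hard reservation of 4 succeeds in full, a subsequent soft request for 8 is granted only the remaining 6, and a further hard request of 1 fails outright.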
As to claim 3, Anderson in view of Bi teaches The computer-implemented method of claim 1, wherein the analyzing the read operation identified in the queue comprises analyzing the read operation relating to the document with a machine learning algorithm trained by a plurality of documents maintained at the object-based data storage [Anderson -- Client cache sizes are determined as follows. One soft reservation is made, equal to the sum of the ideal sizes of all client caches. When the soft reservation is fully granted, the actual size of each client cache is its ideal size. If, however, the soft reservation request is not fully granted, the memory is divided among client caches in proportion to the predicted data rate of client access to each open file. This rate may be calculated periodically by the custodian task running on the node and described in greater detail below. In the preferred embodiment, this rate is computed as an exponentially weighted average of the number of bytes transferred in fixed periods of time. This average is calculated by adding one-half the previous average and one-half the number of bytes transferred during the latest time period. Other prediction techniques are possible without departing from the scope of the present invention … (c8 L24-57)].

Regarding claim 3, Anderson in view of Bi does not expressly mention the term “machine learning algorithms.” However, Anderson does teach a computer model updating the weighted time-average to predict data rate of client access to each open file [… the memory is divided among client caches in proportion to the predicted data rate of client access to each open file. This rate may be calculated periodically by the custodian task running on the node and described in greater detail below. In the preferred embodiment, this rate is computed as an exponentially weighted average of the number of bytes transferred in fixed periods of time.
This average is calculated by adding one-half the previous average and one-half the number of bytes transferred during the latest time period. Other prediction techniques are possible without departing from the scope of the present invention … (c8 L24-57)]. This represents one type of machine learning algorithm because the parameters are updated by the computer over time. Further, machine learning algorithms are well known and commonly used in the art. For example, Ivonkovic specifically teaches using machine learning algorithms to select a target in a vector space [The target source code may include a pair of target source code snippets from a target codebase and generating the code insight for the target source code using the machine learning model may include, for each target source code snippet in the pair of target source code snippets, generating a vector representation for the corresponding target source code snippet using the machine learning model configured to receive a set of target features extracted from the corresponding target source code snippet as feature inputs, determining a vector-space distance between the pair of target source code snippets based on the vector representation, and determining the pair of target source code snippets are duplicates of one another when the vector-space distance satisfies a distance threshold. The predicted label for the training source code may include at least one of a predicted level of complexity of the target source code, a predicted quality of the target source code, a predicted testing requirement for the target source code, or a predicted difficulty rating of the target source code.
The predicted code transformation for the target source code may include at least one of updated target source code fixing a build error in the target source code, executable code for the target source code, a revision to the target source code, or suggested replacement source code for replacing the target source code (¶ 0008)]. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to use a machine learning algorithm to select a desired target, as demonstrated by Ivonkovic, and to incorporate it into the existing scheme disclosed by Anderson in view of Bi, because Anderson’s weighted time-average updating and prediction model by itself is already a preliminary form of machine learning algorithm, which may be further extended and enhanced for better performance.

As to claim 4, Anderson in view of Bi & Ivonkovic teaches The computer-implemented method of claim 3, wherein the additional document is identified based upon a distance to the document in a vector space defined by the machine learning algorithm [Ivonkovic -- The target source code may include a pair of target source code snippets from a target codebase and generating the code insight for the target source code using the machine learning model may include, for each target source code snippet in the pair of target source code snippets, generating a vector representation for the corresponding target source code snippet using the machine learning model configured to receive a set of target features extracted from the corresponding target source code snippet as feature inputs, determining a vector-space distance between the pair of target source code snippets based on the vector representation, and determining the pair of target source code snippets are duplicates of one another when the vector-space distance satisfies a distance threshold.
The predicted label for the training source code may include at least one of a predicted level of complexity of the target source code, a predicted quality of the target source code, a predicted testing requirement for the target source code, or a predicted difficulty rating of the target source code. The predicted code transformation for the target source code may include at least one of updated target source code fixing a build error in the target source code, executable code for the target source code, a revision to the target source code, or suggested replacement source code for replacing the target source code (¶ 0008)].

As to claim 5, Anderson in view of Bi & Ivonkovic teaches The computer-implemented method of claim 1, wherein the document is associated with a label and the additional document is associated with the same label [Anderson -- as shown in figure 10A; A method for file read caching of the present invention on the client or server side includes verifying that the cache blocks are a range of blocks contiguous in the file and beginning with the required cache block, and, if the full range of blocks are not in the cache, reading the missing blocks into the cache. The read request may be served before or after any missing blocks are read into the cache depending on whether the requested data is initially available in cache (c2 L58-65); The client handler task maintains a "client cache" in RAM 40 for each open file. Each client cache stores a contiguous range of data from that file. Each cache is divided into non-overlapping "client cache" blocks. These blocks are typically of a constant size, but need not be. Each client cache is in one of the following four states: empty, read, read-ahead, and write.
Each client cache has an "ideal size" (depending only on its state), a "maximum size" (depending on the memory management decisions) and an "actual size" (the number of cache blocks in memory) (c7 L26-35); In a distributed file system including high speed random access general purpose memory within a network node coupled to a host computer and a plurality of mass storage devices interconnected via a network for storing data files in disparate locations … allocating, by means of a processor within said network node, a portion of said general purpose memory to said at least one cache area in an amount proportional to said associated file data flow rate (claim 1)].

As to claim 11, it recites substantially the same limitations as in claim 3, and is rejected for the same reasons set forth in the analysis of claim 3. Refer to “As to claim 3” presented earlier in this Office Action for details.

As to claim 12, it recites substantially the same limitations as in claim 4, and is rejected for the same reasons set forth in the analysis of claim 4. Refer to “As to claim 4” presented earlier in this Office Action for details.

As to claim 13, it recites substantially the same limitations as in claim 5, and is rejected for the same reasons set forth in the analysis of claim 5. Refer to “As to claim 5” presented earlier in this Office Action for details.

6. Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Anderson in view of Bi, and further in view of Esser et al. (US Patent Application Publication 2018/0060238, hereinafter Esser).

Regarding claim 22, Anderson in view of Bi does not teach a threshold controls a number of additional documents identified by the one or more processors, and wherein the threshold varies based on an age of a document previously predictively loaded into the cache memory. However, Esser specifically teaches the cited limitation [Referring to FIG.
1, dynamic cache sizing operation 100 in accordance with one embodiment may obtain or determine one or more cache operating parameters (block 105). Illustrative cache operating parameters include, but are not limited by, cache line eviction rate (i.e., the rate at which cache lines are being removed from the cache in accordance with a specified cache line replacement algorithm) and cache line age (i.e., how long a cache line has been in the cache) … One illustrative resizing test could be whether a first threshold number of cache lines are older than a first specified age (suggesting the cache may be larger than it needs to be). Another illustrative resizing test could be whether a second threshold number of cache lines are younger than a second specified age (suggesting the cache may be smaller than it needs to be). Yet another illustrative resizing test could be whether a third threshold percent of cache lines are older than a third specified age. Still another illustrative resizing test could be whether a fourth threshold percent of cache lines are younger than a fourth specified age … (¶ 0014); First, as Applicant admits, Esser specifically teaches “dynamically resizing cache memory,” and it is a matter of fact that the more additional documents to be stored in the cache memory, the more space would be needed for the cache memory. 
Thus, the number of additional documents that can be stored in the cache memory is directly proportional to the size of the cache memory, which is the essence of Esser’s “dynamically resizing cache memory.” Second, claim 22 recites “the threshold varies based on an age of a document previously predictively loaded into the cache memory,” and Applicant readily admits that “Esser is directed to dynamically resizing cache memory based on cache line age.” Thus Esser indeed teaches the aspect “the threshold varies based on an age of a document previously predictively loaded into the cache memory,” because the cache lines to be evicted from the cache memory depend on the age limit of the cache lines, which in turn determines how much space may be reclaimed for reuse by evicting those cache lines older than a certain age limit. Therefore, Esser clearly teaches the limitations recited in claim 22].

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have a threshold that controls a number of additional documents identified by the one or more processors, and wherein the threshold varies based on an age of a document previously predictively loaded into the cache memory, as specifically demonstrated by Esser, and to incorporate it into the existing scheme disclosed by Anderson in view of Bi, in order to properly adjust the size of the cache memory.

Conclusion

7. Claims 1, 3-9, 11-17, 19-22 are rejected as explained above.

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHENG JEN TSAI whose telephone number is 571-272-4244. The examiner can normally be reached on Monday-Friday, 9-6. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon, can be reached on 571-272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/SHENG JEN TSAI/
Primary Examiner, Art Unit 2136

Prosecution Timeline

Jul 18, 2022
Application Filed
Dec 03, 2023
Non-Final Rejection — §103
Mar 08, 2024
Response Filed
Apr 10, 2024
Final Rejection — §103
Sep 03, 2024
Request for Continued Examination
Sep 05, 2024
Response after Non-Final Action
Sep 22, 2024
Non-Final Rejection — §103
Jan 13, 2025
Response Filed
Jan 17, 2025
Final Rejection — §103
May 27, 2025
Request for Continued Examination
Jun 02, 2025
Response after Non-Final Action
Aug 24, 2025
Non-Final Rejection — §103
Nov 25, 2025
Response Filed
Dec 08, 2025
Final Rejection — §103
Mar 02, 2026
Interview Requested
Mar 04, 2026
Examiner Interview Summary
Mar 04, 2026
Applicant Interview (Telephonic)
Mar 09, 2026
Request for Continued Examination
Mar 14, 2026
Response after Non-Final Action
Apr 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596490
MEMORY MANAGEMENT USING A REGISTER
2y 5m to grant Granted Apr 07, 2026
Patent 12585387
Clock Domain Phase Adjustment for Memory Operations
2y 5m to grant Granted Mar 24, 2026
Patent 12579075
USING RETIRED PAGES HISTORY FOR INSTRUCTION TRANSLATION LOOKASIDE BUFFER (TLB) PREFETCHING IN PROCESSOR-BASED DEVICES
2y 5m to grant Granted Mar 17, 2026
Patent 12572474
SPARSITY COMPRESSION FOR INCREASED CACHE CAPACITY
2y 5m to grant Granted Mar 10, 2026
Patent 12561070
AUTONOMOUS BATTERY RECHARGE CONTROLLER
2y 5m to grant Granted Feb 24, 2026
Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
70%
Grant Probability
83%
With Interview (+12.9%)
3y 6m
Median Time to Grant
High
PTA Risk
Based on 790 resolved cases by this examiner. Grant probability derived from career allow rate.
