Prosecution Insights
Last updated: April 19, 2026
Application No. 18/473,152

DYNAMIC PREFETCHING FOR DATABASE QUERIES

Non-Final OA §103

Filed: Sep 22, 2023
Examiner: VUONG, CAO DANG
Art Unit: 2153
Tech Center: 2100 — Computer Architecture & Software
Assignee: Amazon Technologies, Inc.
OA Round: 4 (Non-Final)

Grant Probability: 68% (Favorable)
OA Rounds: 4-5
To Grant: 3y 4m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 68% — above average (74 granted / 109 resolved; +12.9% vs TC avg)
Interview Lift: +26.2% (strong; resolved cases with interview)
Avg Prosecution: 3y 4m (21 currently pending)
Total Applications: 130 (across all art units)

Statute-Specific Performance

§101: 13.9% (-26.1% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 12.6% (-27.4% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 109 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Non-Final Office Action is in response to the submission in application 18/473,152 filed on 01/23/2026.

Status of Claims: Claims 21, 28, and 35 are amended in this Office Action. Claims 21-40 are pending.

Continued Examination Under 37 CFR 1.114: A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/23/2026 has been entered.

Response to Arguments

Claim Rejections under 35 U.S.C. § 101: Applicant's arguments filed on 12/23/2025 (pages 12-14) regarding the claim rejections under 35 U.S.C. § 101, together with the amendments submitted, have been fully considered. The rejections under 35 U.S.C. § 101 in the previous Office action are withdrawn in view of Applicant's remarks and amendments.

Rejections of Claims under 35 U.S.C. §§ 102/103: After reviewing Applicant's arguments in the remarks filed 12/23/2025 (pp. 8-12) regarding claims 21, 28, and 35, the Examiner respectfully submits that the arguments are not fully persuasive.

Regarding claims 21, 28, and 35: Applicant argued that Ku does not teach "configuring a prefetcher according to a first prefetch policy to use a first amount of computational resources and reconfiguring the prefetcher according to a second prefetch policy to use a second amount of computational resources different from the first amount of resources". The Examiner respectfully disagrees and submits that Ku discloses "Fig.
3 & Col 5 line 3-16: Although a method 20 illustrated by the flow chart in FIG. 2 has been described above, in one embodiment a state machine 30 (FIG. 3) is implemented by each query plan 11P. Specifically, when performing prefetch, each query plan 11P transitions between the four states 31-34 as follows. Query plan 11P initially starts in state 31 and if prefetch size pfsz is greater than the number of prefetched rowids in the internal buffer (which may be computed as described below in reference to FIG. 4 in one embodiment), query plan 11P makes a transition to state 32 (labeled as "load" in FIG. 3). In state 32, query plan 11P repeatedly fetches the rowids into a rowid buffer, until the rowid buffer becomes full. When the rowid buffer does become full, query plan 11P makes a transition to state 33 (labeled as "prefetch" in FIG. 3)…Col 5 line 31-45: Therefore, when prefetch size pfsz is smaller than the number of prefetched rowids in the internal buffer, then query plan 11P makes a transition 35 back to state 32. In state 32, additional rowids are fetched from the index into the rowid buffer, and as soon as the rowid buffer becomes full again, query plan 11P returns to state 33. In this manner, the two states 32 and 33 are repeatedly visited, until in state 32 use of the index indicates that no more rowids are available (e.g. sends an end of file).". The system of Ku is directed to prefetching blocks of data into a buffer cache, wherein the system initially prefetches data blocks according to a number of available rowids in the buffer cache. Rowids are fetched into the buffer cache for the initial prefetching operation, and the amount of rowids added to the buffer cache can be equivalent to a first amount of computational resources. The system of Ku is capable of fetching additional rowids from the index to the buffer cache when a comparison between a prefetch size and the number of available rowids in the buffer cache triggers the operation.
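For illustration, the load/prefetch cycling quoted above from Ku (Fig. 3, Col. 5) can be sketched as follows. The state names and the pfsz comparison follow the quoted passage, and the 40-rowid capacity comes from Ku's own example; the function name, driver logic, and sample data are illustrative assumptions, not Ku's implementation.

```python
from collections import deque

# Minimal sketch of the four-state prefetch cycle quoted from Ku (Fig. 3).
# Only the load/prefetch transitions mirror the cited passage; everything
# else is an illustrative assumption.
ROWID_BUFFER_CAPACITY = 40  # Ku's example: a buffer large enough for 40 rowids

def run_query_plan(index_rowids, pfsz):
    """Cycle between 'load' (state 32) and 'prefetch' (state 33) until the
    index is exhausted, then drain remaining blocks (state 34)."""
    index = deque(index_rowids)
    rowid_buffer = []
    prefetched = []
    state = "start"  # state 31
    while True:
        if state in ("start", "load"):
            # State 32: fetch rowids from the index until the buffer is full
            while index and len(rowid_buffer) < ROWID_BUFFER_CAPACITY:
                rowid_buffer.append(index.popleft())
            if not rowid_buffer:
                break  # no more rowids: nothing left to prefetch
            state = "prefetch"
        elif state == "prefetch":
            # State 33: prefetch up to pfsz blocks identified by buffered rowids
            batch = rowid_buffer[:pfsz]
            del rowid_buffer[:pfsz]
            prefetched.extend(batch)
            if index and pfsz > len(rowid_buffer):
                state = "load"  # transition 35 back to state 32
            elif not index and not rowid_buffer:
                state = "drain"
        else:
            # State 34: process whatever remains in the buffer cache
            break
    return prefetched
```

The sketch shows why states 32 and 33 are "repeatedly visited": each pass through "prefetch" shrinks the rowid buffer until the pfsz comparison forces a reload.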
The system can determine to add additional rowids into the buffer cache during a prefetching operation, and the additional rowids added into the buffer cache can be equivalent to a second amount of computational resources different from the first amount of resources. Once additional rowids are added to the buffer cache, the system can return to the prefetching operation; thus any data prefetched during this stage can be equivalent to a second portion of the database, different from the first portion of the database. Ku also discloses "Col 4 line 13-19: In act 23 (FIG. 2), each query plan 11P uses an index to determine a number of rowids that are inserted into an internal buffer. The size of the internal buffer may be selected in any manner well known in the art, although in one specific example a buffer sufficiently large to hold 40 rowids is used. The rowids identify data blocks that are to be fetched from disk". The rowids in Ku are equivalent to a resource: they are inserted into a buffer that is used to determine the amount of data to be prefetched into the buffer and to identify data blocks that are fetched from disk. They can therefore be associated with the computational resources of a system, and the amount of rowids can be varied based on the prefetching queries. Applicant further argues that "Rowids in Ku are identifiers of data, or elements, of the database being prefetched and cannot map to computational resources as recited in the claim". The Examiner respectfully disagrees and submits that the specification contains no definition of, or limitation on, what constitutes a computational resource from which one of ordinary skill in the art could conclude that Applicant's statement is true.
Under BRI, a computational resource can be understood as a hardware or software component that one of ordinary skill in the art can use to perform operations within a system, and an amount of rowids available for a prefetching operation can be equivalent to an amount of computational resources. Also, Ku discloses "Col 4 line 13-19: The rowids identify data blocks that are to be fetched from disk…Col 5 line 13-16: In state 32, query plan 11P repeatedly fetches the rowids into a rowid buffer, until the rowid buffer becomes full. When the rowid buffer does become full, query plan 11P makes a transition to state 33 (labeled as "prefetch" in FIG. 3)". Rowids that are added to the buffer are not merely identifiers of data, or elements, of the database: they can, for example, change a computational resource of the buffer such as its capacity, and because the rowids are used to identify data blocks that are to be fetched from disk, the amount of rowids in the buffer can also correlate with the system's processing capacity. Applicant argues that "Ku does not disclose prefetching according to multiple different policies in the performance of a single query". First, the Examiner respectfully submits that no limitation found in the claim is associated with, or similar to, "prefetching according to multiple different policies in the performance of a single query". Even assuming arguendo that the claim recites "prefetching according to multiple different policies in the performance of a single query", Ku teaches "Col 5 line 3-16 & line 31-45: In one embodiment a state machine 30 (FIG. 3) is implemented by each query plan 11P. Specifically, when performing prefetch, each query plan 11P transitions between the four states 31-34 as follows... In state 32, query plan 11P repeatedly fetches the rowids into a rowid buffer, until the rowid buffer becomes full.
When the rowid buffer does become full, query plan 11P makes a transition to state 33 (labeled as "prefetch" in FIG. 3)…When prefetch size pfsz is smaller than the number of prefetched rowids in the internal buffer, then query plan 11P makes a transition 35 back to state 32. In state 32, additional rowids are fetched from the index into the rowid buffer, and as soon as the rowid buffer becomes full again, query plan 11P returns to state 33. In this manner, the two states 32 and 33 are repeatedly visited, until in state 32 use of the index indicates that no more rowids are available. When the end has been reached, query plan 11P makes a transition to state 34 to complete the processing of previously prefetched data blocks that remain in the buffer cache. Once all blocks have been processed, query plan 11P leaves state 34, having concluded all the necessary data processing that required access to disk". The system of Ku adjusts computational resources, such as the number of rowids fetched into a rowid buffer, within a single query execution (query plan 11P); a query plan of Ku can therefore correspond to a single query that prefetches data according to multiple different policies. Applicant's remaining amendments have been fully considered; however, after further examination, new grounds of rejection are presented, necessitated by Applicant's amendments.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 21-23, 28-30, 32, 35, and 37-38 are rejected under 35 U.S.C. 103 as being unpatentable over Ku (US Patent 7359890) "Ku" in view of Zohar et al. (US PGPUB 20060112232) "Zohar". Regarding claim 21, Ku teaches a method, comprising: configuring a prefetcher according to a first prefetch policy to use a first amount of computational resources (Col 5 line 13-16: In state 32, query plan 11P repeatedly fetches the rowids into a rowid buffer, until the rowid buffer becomes full. When the rowid buffer does become full, query plan 11P makes a transition to state 33 (labeled as "prefetch" in FIG. 3)… Examiner's note: The load state is where the system determines a first amount of computational resources, such as the amount of rowids to fetch to a buffer for a subsequent prefetch of data); prefetching, by the prefetcher configured according to the first prefetch policy from storage into a memory buffer, a first portion of a database using a first amount of computational resources (Fig. 3 & Col 5 line 7-23: In state 32, query plan 11P repeatedly fetches the rowids into a rowid buffer, until the rowid buffer becomes full. When the rowid buffer does become full, query plan 11P makes a transition to state 33 (labeled as "prefetch" in FIG. 3). In state 33, query plan 11P initially prefetches as many blocks (a first portion) as possible into the buffer cache.
Depending on the embodiment, the act of prefetching may be done synchronously or asynchronously with the next act, of processing the data blocks… Examiner's note: Thus, fetching the rowids into a rowid buffer before prefetching the data blocks can be equivalent to using a first amount of computational resources. The system loads a certain amount of rowids into the buffer, for example until the rowid buffer becomes full, and uses that amount of computational resources to prefetch data to the buffer cache); reconfiguring the prefetcher according to a second prefetch policy to use a second amount of computational resources different from the first amount of resources (Col 5 line 31-36: When prefetch size pfsz is smaller than the number of prefetched rowids in the internal buffer, then query plan 11P makes a transition 35 back to state 32. In state 32, additional rowids are fetched from the index into the rowid buffer, and as soon as the rowid buffer becomes full again, query plan 11P returns to state 33… Examiner's note: The addition of rowids into the rowid buffer before continuing to prefetch the data blocks can be equivalent to modifying an amount of computing resources associated with prefetching, and the system further prefetches data blocks based on the second amount of computational resources for the query plan); prefetching, by the prefetcher configured according to the second prefetch policy from the storage into the memory buffer, a second portion of the database, different from the first portion of the database, using a second amount of computational resources different from the first amount of resources (Fig. 3 & Col 5 line 31-45: When prefetch size pfsz is smaller than the number of prefetched rowids in the internal buffer, then query plan 11P makes a transition 35 back to state 32.
In state 32, additional rowids are fetched from the index into the rowid buffer, and as soon as the rowid buffer becomes full again, query plan 11P returns to state 33… Examiner's note: The system determines when to add more rowids into the buffer during a query plan execution and continues to prefetch the data blocks into the buffer. This addition of rowids is equivalent to a second amount of computational resources different from the first amount of resources. The additional rowids are used to prefetch data blocks to the buffer, so the data blocks are equivalent to a second portion of the database). Ku does not explicitly teach accessing, from the memory buffer responsive to a query of the database, the prefetched first portion of the database and the prefetched second portion of the database. Zohar teaches accessing, from the memory buffer responsive to a query of the database, the prefetched first portion of the database and the prefetched second portion of the database (Fig. 2 & [0052]: As part of some embodiments of the present invention, initially a request may be received at the cache 100, for example, from a host, to read one or a string of data blocks (two or more successive blocks) (block 210).
In accordance with some embodiments of the present invention, upon receiving the request, the cache management module 120 may determine whether the requested data is already stored in the cache 100 or not (block 220)… [0053]: In case it is determined that the requested data blocks are already in the cache 100, the cache management module 120 may retrieve the data blocks from the data space address 130 and may service the request (block 230), for example, by transmitting the requested data to the host which requested the data through the communication module 110… Examiner’s note: The cache can correspond to the memory buffer where it can store prefetched data and can be used to process queries that want to access the cache for particular data). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Zohar teachings in the Ku system. Skilled artisan would have been motivated to incorporate accessing prefetched data in a cache taught by Zohar in the Ku system to reduce latency in data retrieval process, improve user experience, and improve efficiency in handling datasets. This close relation between both of the references highly suggests an expectation of success. Regarding claim 22, Ku in view of Zohar teaches all of the limitations of claim 21. Ku further teaches configuring prefetching to use the first amount of computational resources prior to prefetching the first portion of the database (Fig. 3 & col 5 line 13-16: Query plan 11P makes a transition to state 32 (labeled as "load" in FIG. 3). In state 32, query plan 11P repeatedly fetches the rowids into a rowid buffer, until the rowid buffer becomes full. When the rowid buffer does become full, query plan 11P makes a transition to state 33 (labeled as "prefetch" in FIG. 
3)…Thus, the load state is where the system determines a first amount of computational resources, such as the amount of rowids to fetch to a buffer for a subsequent prefetch of data. Therefore, the first amount of computational resources is configured before the actual prefetching of data is executed); and modifying prefetching to use the second amount of computational resources prior to prefetching the second portion of the database (Col 5 line 31-36: When prefetch size pfsz is smaller than the number of prefetched rowids in the internal buffer, then query plan 11P makes a transition 35 back to state 32. In state 32, additional rowids are fetched from the index into the rowid buffer, and as soon as the rowid buffer becomes full again, query plan 11P returns to state 33… Thus, the addition of rowids into the rowid buffer before prefetching the data blocks can be equivalent to modifying an amount of computing resources associated with prefetching, and the system further prefetches data blocks based on the second amount of computational resources). Regarding claim 23, Ku in view of Zohar teaches all of the limitations of claim 22. Ku further teaches wherein the second amount of computational resources comprises no computational resources, and wherein modifying prefetching to use the second amount of computational resources comprises disabling prefetching for the second portion of the database (Col 3 line 8-17 & 53-58: The prefetch size may be set by a Database Administrator (DBA) through a script or even manually through a user interface, the change is based on usage of the buffer cache, e.g. if previously prefetched blocks are being aged out without being used then the prefetch size may be reduced, and depending on severity of usage of the buffer cache, in extreme cases prefetching may even be turned off… In one specific example, if the following condition is satisfied in act 22 then prefetching is turned on and otherwise turned off: min(no. of rowids fetched from index, pfsz)*CM*CF ≤ 2… Thus, prefetching can be disabled based on a determination, and disabling prefetching can be mapped to the second amount of computational resources comprising no computational resources, since no resource is needed when prefetching is disabled). Regarding claim 28, note the rejections of claim 21. The instant claims recite substantially the same limitations as the above-rejected claims and are therefore rejected under the same prior-art teachings. Regarding claim 29, note the rejections of claim 22. The instant claims recite substantially the same limitations as the above-rejected claims and are therefore rejected under the same prior-art teachings. Regarding claim 30, Ku in view of Zohar teaches all of the limitations of claim 29. Ku further teaches wherein modifying prefetching to use the second amount of computational resources comprises modifying a maximum number of prefetch requests to perform over a period of time (Col 4 line 20-24: The query plan performs a parallel fetch of the data blocks (such as blocks 12A-12N in FIG. 1) into the buffer cache, and the number of data blocks being fetched is limited to be no more than the prefetch size… Examiner's note: Thus, the parallel fetch of data blocks can be equivalent to a number of prefetch requests to perform over a period of time, and the number of fetched data blocks can be a maximum number such as the determined prefetch size). Regarding claim 32, Ku in view of Zohar teaches all of the limitations of claim 28. Ku further teaches wherein the first amount of computational resources comprises no computational resources, and wherein modifying prefetching to use the second amount of computational resources comprises enabling prefetching for the second portion of the database (Col 3 line 35-55: Each query plan determines if performance of the buffer cache allows prefetch to be done.
In this act, any method well known in the art may be used to evaluate cache performance, to ensure that a certain threshold has been reached in terms of a cost-benefit tradeoff. For example, a 50% reduction in I/O latency may be set as a minimum benefit, and this may require an effective prefetch size to be greater than or equal to 2… In one specific example, if the following condition is satisfied in act 22 then prefetching is turned on and otherwise turned off… Examiner's note: Thus, a determination can be made to decide whether to turn on or off the prefetching process. Turning on a prefetching process based on a determination can be equivalent to modifying prefetching to use the second amount of computational resources by enabling prefetching for the second portion of the database, starting from no computational resources). Regarding claim 35, note the rejections of claim 21. The instant claims recite substantially the same limitations as the above-rejected claims and are therefore rejected under the same prior-art teachings. Regarding claim 37, note the rejections of claim 22. The instant claims recite substantially the same limitations as the above-rejected claims and are therefore rejected under the same prior-art teachings. Regarding claim 38, note the rejections of claim 23. The instant claims recite substantially the same limitations as the above-rejected claims and are therefore rejected under the same prior-art teachings. Claims 24-27, 31, 33, 36, and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Ku (US Patent 7359890) "Ku" in view of Zohar et al. (US PGPUB 20060112232) "Zohar", and Weissman et al. (US PGPUB 20110258179) "Weissman". Regarding claim 24, Ku in view of Zohar teaches all of the limitations of claim 21.
Ku does not explicitly teach wherein the query is associated with a first table and a second table of the database, wherein the first portion of the database comprises elements of the first table, and wherein the second portion of the database comprises elements of the second table. Weissman teaches wherein the query is associated with a first table and a second table of the database, wherein the first portion of the database comprises elements of the first table, and wherein the second portion of the database comprises elements of the second table ([0027] The host system retrieves, based on the request received, one or more locations of the data to be retrieved. A customer schema describes the one or more locations of data to be retrieved, in which the customer schema specifies each of the plurality of data elements of the data to be retrieved as residing within either the non-relational data store or residing within the relational data store, or as being available from both the non-relational data store and the relational data store… [0052]: Optimizing the original database query includes a) identifying a first sub-query within the original database query directed to a table within relational data store in which the first sub-query corresponds to a first portion of data to be retrieved based on an incoming request (first portion of the database comprises elements of the first table); b) identifying a second sub-query within the original database query directed to a table in the non-relational data store in which the second sub-query corresponds to a second portion of the data to be retrieved based on the request (the second portion of the database comprises elements of the second table)…Thus, a query for data can have associations to different tables for different portions desired by the query). 
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Weissman teachings in the Ku and Zohar system. Skilled artisan would have been motivated to incorporate associating a query to different tables taught by Weissman in the Ku and Zohar system to improve the data retrieval process of the query and increase the efficiency in retrieving data from different locations. This close relation between both of the references highly suggests an expectation of success. Regarding claim 25, Ku in view of Zohar teaches all of the limitations of claim 21. Ku further teaches wherein prefetching the second portion of the database comprises prefetching pages of the second index of the database (Col 4 line 13-19: In act 23 (FIG. 2), each query plan 11P uses an index to determine a number of rowids that are inserted into an internal buffer. The size of the internal buffer may be selected in any manner well known in the art, although in one specific example a buffer sufficiently large to hold 40 rowids is used. The rowids identify data blocks that are to be fetched from disk…Col 4 line 49-54: Next, query plan 11P goes to act 26 (FIG. 2) to check if use of the index has indicated that an end of the rowids has been reached…Col 5 line 31-39: Additional rowids (prefetching pages of the second index of the database) are fetched from the index into the rowid buffer, and as soon as the rowid buffer becomes full again, query plan 11P returns to state 33. In this manner, the two states 32 and 33 are repeatedly visited, until in state 32 use of the index indicates that no more rowids are available (e.g. sends an end of file)… Examiner's note: Thus, the system uses an index to create rowids stored in the buffer, wherein the rowids are used to identify data blocks that are to be fetched from disk.).
Ku in view of Zohar does not explicitly teach wherein the query is associated with a first index and a second index of the database. Weissman teaches wherein the query is associated with a first index and a second index of the database ([0056] In one embodiment, optimizing the database query 217 includes… b) injecting a new join operation to a foreign key index into the leading sub-query to the parent table in the relational data store, wherein the join operation joins a custom index on a foreign key for the non-relational data store; and c) leading the optimized database query 350 with the sub-query to the parent table having the join operation injected therein… Examiner's note: The queries for accessing data are optimized by processing indexing properties and incorporating them into the queries; thus the queries can be associated with indexes). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Weissman teachings in the Ku and Zohar system. Skilled artisan would have been motivated to incorporate associating indexes with queries taught by Weissman in the Ku and Zohar system to improve the processing speed of the queries and improve the data retrieval process of the queries. This close relation between both of the references highly suggests an expectation of success. Regarding claim 26, Ku in view of Zohar teaches all of the limitations of claim 21. Ku does not explicitly teach wherein the query comprises a join of a first table and a second table of the database, wherein the second portion of the database comprises the elements of the second table identified using elements retrieved from the first table. Weissman teaches the query comprises a join of a first table and a second table of the database, wherein the second portion of the database comprises the elements of the second table identified using elements retrieved from the first table (Fig.
2 & [0027]: A schema of the system describes the one or more locations of data to be retrieved, in which the customer schema specifies each of the plurality of data elements of the data to be retrieved as residing within either the non-relational data store or residing within the relational data store, or as being available from both the non-relational data store and the relational data store… [0030]: FIG. 2 within the expanded view of database query 217 are several sub-query strings such as "retrieve data element `a` from the non-relational data store" (e.g., 150) and "retrieve data element `b` from the relational data store" (e.g., 155) and another sub-query string which states "select `x` from `y` where `z` reflective of a generic Structured Query Language (SQL) type query… [0036]: The system executes the optimized database query against the multi-tenant database system includes referencing data elements stored in both the relational data store and the non-relational data store so as to retrieve the requisite data. In Fig. 2, data is retrieved from both datastores and eventually combined into a plurality of data and put in the query layer for subsequent retrieval. Also, the limitation wherein the second portion of the database comprises the elements of the second table identified using elements retrieved from the first table can be equivalent to a feature of a relational database). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Weissman teachings in the Ku and Zohar system. Skilled artisan would have been motivated to incorporate collecting data from different locations to satisfy a query taught by Weissman in the Ku and Zohar system to increase the amount of data the system is able to access and to improve optimization and the process of content retrieval. This close relation between both of the references highly suggests an expectation of success.
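The claim-26 mapping discussed above concerns a join in which elements retrieved from a first table identify the elements of the second table that make up the second prefetched portion. As a neutral illustration of that relationship (table contents and key names are hypothetical and are not drawn from Ku, Zohar, or Weissman):

```python
# Hypothetical sketch of the claim-26 scenario: elements retrieved from a
# first table supply the keys that identify the second portion to prefetch.
orders = {1: {"customer_id": 10}, 2: {"customer_id": 20}}            # first table
customers = {10: {"name": "A"}, 20: {"name": "B"}, 30: {"name": "C"}}  # second table

# First portion: elements of the first table.
first_portion = [orders[oid] for oid in (1, 2)]

# Second portion: elements of the second table, identified via foreign keys
# retrieved from the first portion (the join predicate).
second_portion = [customers[row["customer_id"]] for row in first_portion]

# The join combines matching rows from both portions.
joined = [
    {**order, **customer}
    for order, customer in zip(first_portion, second_portion)
]
```

The point of the sketch is only that the second portion cannot be identified until the first portion has been retrieved, which is what distinguishes a join-driven prefetch from two independent prefetches.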
Regarding claim 27, Ku in view of Zohar teaches all of the limitations of claim 21. Ku does not explicitly teach wherein performing the query comprises retrieving elements of a non-covering index. Weissman teaches performing the query comprises retrieving elements of a non-covering index ([0017]: The system is implemented to retrieve data from a multi-tenant database system having a relational data store and a non-relational data store. A host system for the multi-tenant database system receives a request specifying data to be retrieved from the multi-tenant database system, retrieving, based on the request via the host system, one or more locations of the data to be retrieved, generating, at the host system, a database query based on the request, in which the database query specifies a plurality of data elements to be retrieved, the plurality of data elements including one or more data elements residing within the non-relational data store and one or more other data elements residing within the relational data store. Thus, the query is required to retrieve data from multiple data stores, which is equivalent to elements of a non-covering index). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Weissman teachings in the Ku and Zohar system. Skilled artisan would have been motivated to incorporate elements of a non-covering index taught by Weissman in the Ku and Zohar system so the query can search in a particular table for the remainder of the elements needed for the query. This does not require the query to look up the whole database as that would increase the cost of processing. This close relation between both of the references highly suggests an expectation of success. Regarding claim 31, note the rejections of claim 24.
The instant claims recite substantially the same limitations as the above-rejected claims and are therefore rejected under the same prior-art teachings. Regarding claim 33, note the rejections of claim 25. The instant claims recite substantially the same limitations as the above-rejected claims and are therefore rejected under the same prior-art teachings.

Regarding claim 36, Ku in view of Zohar teaches all of the limitations of claim 35. Ku in view of Zohar does not explicitly teach wherein the first amount of computational resources comprises a first number of threads usable to perform prefetching, and wherein the second amount of computational resources comprises a second number of threads usable to perform prefetching different from the first number of threads. Weissman teaches wherein the first amount of computational resources comprises a first number of threads usable to perform prefetching, and wherein the second amount of computational resources comprises a second number of threads usable to perform prefetching different from the first number of threads ([0076]: The query layer agent of the system executes the plurality of optimized sub-queries making up an optimized database query by designating or allocating each of the plurality of optimized sub-queries to one distinct work thread processor within a pool of work thread processors, in which each work thread processor in the pool executes zero, one, or a plurality of the sub-queries constituting the optimized database query. Thus, the number of threads operating on the sub-queries can be equivalent to computational resources comprising a first/second number of threads, and the number can be varied for each sub-query). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Weissman teachings into the Ku and Zohar system.
A skilled artisan would have been motivated to incorporate assigning work threads to queries, as taught by Weissman, into the Ku and Zohar system so that a query may be parallelized, resulting in a more time-efficient execution, as recognized by Weissman ([0076]). The close relation between the references suggests a reasonable expectation of success. Regarding claim 39, note the rejections of claim 24. The instant claims recite substantially the same limitations as the above-rejected claims and are therefore rejected under the same prior-art teachings.

Claims 34 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Ku (US Patent 7359890) "Ku" in view of Zohar et al. (US PGPUB 20060112232) "Zohar", Mogul et al. (US PGPUB 20030126232) "Mogul", and Hill et al. (US PGPUB 20030188107) "Hill".

Regarding claim 34, Ku in view of Zohar teaches all of the limitations of claim 28. Ku further teaches wherein prefetching the second portion of the database comprises prefetching elements of the database indicated in a prefetch request (Fig. 3 & Col. 5, lines 31-45: When prefetch size pfsz is smaller than the number of prefetched rowids in the internal buffer, then query plan 11P makes a transition 35 back to state 32. In state 32, additional rowids are fetched from the index into the rowid buffer, and as soon as the rowid buffer becomes full again, query plan 11P returns to state 33 (prefetching the second portion)…Col. 5, lines 42-45: Once all blocks have been processed, query plan 11P leaves state 34, having concluded all the necessary data processing that required access to disk…Thus, the system accesses a database, such as a disk, to prefetch portions from it into a buffer cache).
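The Ku passage cited above describes a buffered prefetch loop: rowids are fetched from an index into a rowid buffer (state 32), blocks for the buffered rowids are prefetched and processed (states 33-34), and the plan transitions back to refill the buffer. The following is a toy model of that loop, assuming a simplified batch interpretation of the prefetch size pfsz; it is an editorial sketch, not code from the Ku patent.

```python
def run_query_plan(index_rowids, buffer_capacity, pfsz):
    """Toy model of the buffered-prefetch loop described for Ku Fig. 3:
    rowids are fetched from an index into a rowid buffer (state 32);
    when the buffer fills, blocks for up to pfsz buffered rowids are
    prefetched (state 33) and processed (state 34), then the plan
    transitions back to state 32 to refill the buffer."""
    it = iter(index_rowids)
    buffer, prefetched = [], []
    while True:
        # State 32: refill the rowid buffer from the index.
        while len(buffer) < buffer_capacity:
            try:
                buffer.append(next(it))
            except StopIteration:
                break
        if not buffer:
            break  # all blocks processed; the plan leaves state 34
        # State 33: prefetch blocks for up to pfsz buffered rowids.
        batch, buffer = buffer[:pfsz], buffer[pfsz:]
        prefetched.extend(batch)  # stands in for issuing disk prefetches
    return prefetched

# Every rowid is eventually prefetched, pfsz at a time.
assert run_query_plan(range(10), buffer_capacity=4, pfsz=3) == list(range(10))
```

The point the examiner draws from this mechanism is simply that portions of the database are prefetched from disk into a buffer cache in response to explicit prefetch requests.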
Ku in view of Zohar does not explicitly teach wherein the prefetch request is assigned a priority relative to other prefetch requests and wherein one or more other prefetch requests associated with the query are discarded based at least in part on an older age of the one or more prefetch requests relative to the prefetch request. Mogul teaches wherein the prefetch request is assigned a priority relative to other prefetch requests ([0061]: A scheduler 132 (FIG. 1) reads the entries in the prefetch queue 130 and demand fetch queue 128 and schedules the downloading of the files listed in those entries (step 214). Typically, entries in the demand fetch queue 128 are given higher priority by the scheduler than entries in the prefetch queue 130. The files scheduled for downloading are fetched (step 216) in accordance with their scheduling for use by the client computer and/or for storage in the cache 134 for possible later use by the client computer…Thus, prefetches are assigned a priority and executed accordingly for storage in the cache).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Mogul teachings into the Ku and Zohar system. A skilled artisan would have been motivated to incorporate assigning priorities to prefetches, as taught by Mogul, into the Ku and Zohar system in order to prefetch the most important data first, improving user satisfaction with the data prefetched into memory. The close relation between the references suggests a reasonable expectation of success.

Ku in view of Zohar and Mogul does not explicitly teach wherein one or more other prefetch requests associated with the query are discarded based at least in part on an older age of the one or more prefetch requests relative to the prefetch request.
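The Mogul scheduling scheme cited above, in which demand-fetch entries outrank prefetch entries, can be sketched with a standard priority queue. This is an illustrative model only; the class and queue names are hypothetical and do not come from Mogul's disclosure.

```python
import heapq

DEMAND, PREFETCH = 0, 1  # lower number = higher scheduling priority

class FetchScheduler:
    """Toy scheduler in the style of the Mogul passage: entries from the
    demand-fetch queue are given higher priority than prefetch entries."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # FIFO tie-break within a priority class

    def submit(self, priority, url):
        heapq.heappush(self._heap, (priority, self._seq, url))
        self._seq += 1

    def next_download(self):
        """Return the highest-priority pending fetch."""
        return heapq.heappop(self._heap)[2]

sched = FetchScheduler()
sched.submit(PREFETCH, "/speculative/a")
sched.submit(DEMAND, "/requested/b")
sched.submit(PREFETCH, "/speculative/c")
# The demand fetch is scheduled first despite arriving second.
assert sched.next_download() == "/requested/b"
assert sched.next_download() == "/speculative/a"
```

The heap ordering (priority first, arrival order second) is what makes every demand fetch jump ahead of every pending prefetch.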
Hill teaches one or more other prefetch requests associated with the query are discarded based at least in part on an older age of the one or more prefetch requests relative to the prefetch request ([0050]: When the external transaction queue 114 is full, the internal transaction queue 112 cannot pass any more requests into the external transaction queue. The external transaction queue 114 may include a kill mechanism to remove speculative requests (e.g., prefetches) from its queue registers to free up space for other requests from the internal transaction queue 112…Thus, prefetch requests in the external transaction queue can be equivalent to prefetch requests with an older age, and those prefetch requests can be removed from the external transaction queue to free up space for other requests from the internal transaction queue).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the Hill teachings into the Ku, Zohar, and Mogul system. A skilled artisan would have been motivated to incorporate removing older prefetch requests, as taught by Hill, into the Ku, Zohar, and Mogul system in order to free up space for other requests, preventing data congestion and improving the performance of the system. The close relation between the references suggests a reasonable expectation of success.

Regarding claim 40, note the rejections of claim 34. The instant claims recite substantially the same limitations as the above-rejected claims and are therefore rejected under the same prior-art teachings.

Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form PTO-892. Sela et al. (US Patent 10430328) is directed to controlling a prefetch operation of the NV cache on the memory system. The host system determines whether to prefetch data. If so, the host system sends a command to the memory system to prefetch the data stored in main memory.
After sending the prefetch command, the host system sends a command to read the prefetched data. FIG. 10B illustrates an example of the memory system managing a prefetch operation for the NV cache. The memory system receives a command to prefetch data stored in the main memory. The memory system copies the data from main memory into the NV cache. The memory system determines whether a command to access the prefetched data has been received. If so, the memory system accesses the prefetched data from the NV cache.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAO DANG VUONG, whose telephone number is (571) 272-1812. The examiner can normally be reached M-F 7:30-5 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kavita Stanley, can be reached at (571) 272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/C.D.V./
Examiner, Art Unit 2153
02/18/2026

/KAVITA STANLEY/
Supervisory Patent Examiner, Art Unit 2153

Prosecution Timeline

Sep 22, 2023
Application Filed
Aug 21, 2024
Non-Final Rejection — §103
Nov 27, 2024
Response Filed
May 08, 2025
Non-Final Rejection — §103
Aug 12, 2025
Response Filed
Oct 17, 2025
Final Rejection — §103
Dec 23, 2025
Response after Non-Final Action
Jan 23, 2026
Request for Continued Examination
Feb 01, 2026
Response after Non-Final Action
Feb 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596699
POPULATING MULTI-LAYER TECHNOLOGY PRODUCT CATALOGS
2y 5m to grant Granted Apr 07, 2026
Patent 12561356
NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING APPARATUS
2y 5m to grant Granted Feb 24, 2026
Patent 12536162
SYSTEM AND METHOD FOR ANALYSIS OF GRAPH DATABASES USING INTELLIGENT REASONING SYSTEMS
2y 5m to grant Granted Jan 27, 2026
Patent 12524438
CENTRALIZED DATABASE MANAGEMENT SYSTEM FOR DATABASE SYNCHRONIZATION USING SAME-SIZE INVERTIBLE BLOOM FILTERS
2y 5m to grant Granted Jan 13, 2026
Patent 12517926
System, Method, and Computer Program Product for Analyzing a Relational Database Using Embedding Learning
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
68%
Grant Probability
94%
With Interview (+26.2%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 109 resolved cases by this examiner. Grant probability derived from career allow rate.
