Prosecution Insights
Last updated: April 19, 2026
Application No. 19/245,897

PREDICTION OF CACHEABLE QUERIES
Non-Final Office Action (§102, §103, §DP)

Filed: Jun 23, 2025
Examiner: BIBBEE, JARED M
Art Unit: 2161
Tech Center: 2100 — Computer Architecture & Software
Assignee: ADP, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 80% (above average; 529 granted / 660 resolved; +25.2% vs TC avg)
Interview Lift: +13.7% among resolved cases with interview
Avg Prosecution: 3y 0m (typical timeline)
Total Applications: 672 across all art units (12 currently pending)

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§103: 51.1% (+11.1% vs TC avg)
§112: 5.2% (-34.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 660 resolved cases.

Office Action (§102, §103, §DP)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,353,410. Although the claims at issue are not identical, they are not patentably distinct from each other because they are obvious variants.

Claim 1 (Instant Application): A system comprising: one or more processors, coupled with memory, to: generate, using a machine learning model, a plurality of predicted requests based on one or more requests received from a client device; compute, for each predicted request of the plurality of predicted requests, a metric associated with retrieving data for a respective predicted request from a database; select a subset of predicted requests from the plurality of predicted requests based on the metric corresponding to each predicted request and a threshold metric; generate, using the machine learning model, labels classifying the subset of predicted requests; store, using the labels as cache keys, data from the database for the subset of predicted requests in a cache store; and transmit, responsive to receiving a client request matching a predicted request of the subset of predicted requests, corresponding data from the cache store.

Claim 1 (U.S. Patent No. 12,353,410): A system, comprising: one or more processors, coupled with a cache store, the one or more processors configured to: predict, using a machine learning model, a plurality of requests indicative of one or more subsequent requests; identify, using the machine learning model and based on a comparison with a threshold metric, a subset of predicted requests from the plurality of requests; construct, using the machine learning model, a set of labels comprising classifications for the subset of predicted requests; retrieve, using the set of labels and from a database, data for the subset of predicted requests; and transmit, responsive to receipt of a subsequent request that matches one of the subset of predicted requests, a cache value from the cache store that corresponds to the subsequent request.
Claims 2 and 3 (Instant Application): The system of claim 1, wherein the one or more processors further: generate a plurality of predicted request identifiers associated with the subset of predicted requests; and construct, using the machine learning model and based on the plurality of predicted request identifiers, the labels comprising classifications for the subset of predicted requests. The system of claim 1, wherein the one or more processors: receive one or more identifiers and key-value pairs associated with the one or more requests; and generate, using the machine learning model, the plurality of predicted requests based on the one or more identifiers and key-value pairs.

Claim 1 (U.S. Patent No. 12,353,410): the subset of predicted requests comprising predicted request identifiers and corresponding predicted key-value pairs; construct, using the machine learning model and based on the predicted request identifiers and the corresponding predicted key-value pairs, a set of labels comprising classifications for the subset of predicted requests.

Claim 7 (Instant Application): The system of claim 1, wherein the one or more processors further: configure, in the cache store, the labels as the cache keys and the data from the database as cache values for the subset of predicted requests.

Claim 1 (U.S. Patent No. 12,353,410): configure, in the cache store, the set of labels as cache keys and the data retrieved for the subset of predicted requests as cache values.

With regard to claims 2-20 of the instant application, these claims contain limitations similar to those found in claims 2-20 of U.S. Patent No. 12,353,410 and thus are analyzed similarly, as shown above.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 4, 7, 11-12, 14, 17, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Shekhar (US 20240143593 A1).

As to claim 1, Shekhar teaches a system, comprising: one or more processors (see [0103]-[0104]), coupled with memory, to:

generate, using a machine learning model, a plurality of predicted requests based on one or more requests received from a client device (Shekhar discloses using a heuristic, such as an “origin ID,” that the machine learning (ML) model has identified as likely to trigger a certain query or query pattern to be issued to the system. Thus, when a trigger heuristic is observed or identified in real time, the cache manager can decide that the predicted query pattern associated with the trigger heuristic is expected to be issued. See [0018]);

compute, for each predicted request of the plurality of predicted requests, a metric associated with retrieving data for a respective predicted request from a database (Shekhar discloses that the ML model has identified heuristics likely to trigger a certain query or query pattern to be issued to the system in [0018].
Shekhar discloses that, based on a cache hit ratio, the cache manager can determine the effectiveness and success of the current caching mechanism (i.e., metric). If the hit ratio falls below a threshold (i.e., metric), the cache manager's ML model may re-learn (e.g., as is used in reinforcement learning strategies) the traffic pattern and then adjust the caching mechanism dynamically by changing the cache population strategy and/or the cache eviction strategy. See [0015] and [0096]-[0099]);

select a subset of predicted requests from the plurality of predicted requests based on the metric corresponding to each predicted request and a threshold metric (Shekhar discloses that the ML model has identified heuristics likely to trigger a certain query or query pattern in [0018], and that, based on a cache hit ratio, the cache manager can determine the effectiveness of the current caching mechanism. If the hit ratio falls below a threshold (i.e., metric), the ML model may re-learn the traffic pattern and adjust the caching mechanism dynamically by changing the cache population strategy and/or the cache eviction strategy (i.e., select a subset of predicted requests). See [0015] and [0096]-[0099]);

generate, using the machine learning model, labels classifying the subset of predicted requests (Shekhar discloses a cache key (label) which is used to identify the corresponding resource in [0031]-[0032]);

store, using the labels as cache keys, data from the database for the subset of predicted requests in a cache store (Shekhar discloses that the cache manager pre-fetches query results associated with the predicted upcoming set of queries and pre-populates a cache with the query results from the predicted queries in [0013]. Shekhar discloses a cache key (label) which is used to identify the corresponding resource in [0031]-[0032]);

and transmit, responsive to receiving a client request matching a predicted request of the subset of predicted requests, corresponding data from the cache store (Shekhar discloses that if and when the predicted set of queries are actually received by the system, the query results are already in the cache (cache hit). Accordingly, the system may then return the query results to the query requestor in a much faster response time as compared to retrieving the query results from a non-cache storage location when the query results are not in the cache (cache miss). See [0013]).

Claims 11 and 20 are method and medium claims, respectively, and contain limitations similar to those found in claim 1 above. Thus, they are rejected similarly to claim 1, as shown above.

As to claims 2 and 12, Shekhar teaches: generate a plurality of predicted request identifiers associated with the subset of predicted requests (Shekhar discloses real-time transactions and queries being received in [0013]. Shekhar also discloses that an origin ID, such as a particular username, a certain type of user (e.g., role of a user), an application that submits query requests such as a dashboard application, a function within an application such as a particular report, etc., is used as a trigger in [0019]); and construct, using the machine learning model and based on the plurality of predicted request identifiers, the labels comprising classifications for the subset of predicted requests (Shekhar discloses a cache key (label) which is used to identify the corresponding resource in [0031]-[0032]).
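Taken together, the claim 1 limitations mapped above describe a predict-score-select-cache loop: predict requests with a model, score each prediction, keep the subset above a threshold, pre-populate a cache keyed by model-generated labels, and serve later matching requests from that cache. The sketch below is purely illustrative of that flow; it is not code from the application or from Shekhar, and every name in it (PredictiveCache, warm, serve, and the toy lambdas) is hypothetical.

```python
from typing import Callable


class PredictiveCache:
    """Illustrative sketch of the claimed flow: predict requests, score them,
    keep those whose metric clears a threshold, and pre-populate a cache
    whose keys are model-generated labels."""

    def __init__(self, predict: Callable, score: Callable, label: Callable,
                 fetch: Callable, threshold: float):
        self.predict = predict      # model: past requests -> predicted requests
        self.score = score          # metric for retrieving data for a prediction
        self.label = label          # model: classify a request into a cache key
        self.fetch = fetch          # database lookup
        self.threshold = threshold  # threshold metric for selecting the subset
        self.store = {}             # cache store: label -> pre-fetched data

    def warm(self, past_requests):
        """Predict, select the above-threshold subset, and pre-fetch it."""
        predicted = self.predict(past_requests)
        subset = [p for p in predicted if self.score(p) >= self.threshold]
        for p in subset:
            self.store[self.label(p)] = self.fetch(p)  # labels as cache keys

    def serve(self, request):
        """Return pre-fetched data on a cache hit, None on a miss."""
        return self.store.get(self.label(request))


# Toy usage: a stand-in "model" that predicts the next page of each query.
cache = PredictiveCache(
    predict=lambda hist: [h + "?page=2" for h in hist],
    score=lambda q: 1.0,            # pretend every prediction scores 1.0
    label=lambda q: q.lower(),      # trivial stand-in "classification" label
    fetch=lambda q: f"rows for {q}",
    threshold=0.5,
)
cache.warm(["/reports"])
hit = cache.serve("/reports?page=2")  # served from the pre-populated cache
```

The sketch keeps each claimed step as a separate injectable function so the mapping back to the claim language stays visible; a real system would of course replace the lambdas with a trained model and a database client.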
As to claims 4 and 14, Shekhar teaches: predict, responsive to receiving a second request, using the machine learning model, a second plurality of predicted requests based on the first request and the second request; identify, using the machine learning model and based on a comparison with the threshold metric, a second subset of predicted requests from the second plurality of predicted requests indicative of one or more subsequent requests; and transmit, responsive to receipt of a subsequent request that matches one of a group of predicted requests comprising the second subset of predicted requests and one or more of the subset of predicted requests that are below a cache store threshold, a cache value from the cache store that corresponds to the subsequent request. (Please see the claim 1 analysis above. Claims 4 and 14 recite the same steps but for subsequent (second) requests; the process disclosed in Shekhar would be applicable not only to the initial request but to any subsequent requests.)

As to claims 7 and 17, Shekhar teaches: configure, in the cache store, the labels as the cache keys and the data from the database as cache values for the subset of predicted requests (Shekhar discloses that the cache manager pre-fetches query results associated with the predicted upcoming set of queries and pre-populates a cache with the query results from the predicted queries in [0013]. Shekhar discloses a cache key (label) which is used to identify the corresponding resource in [0031]-[0032]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Shekhar (US 20240143593 A1) in view of Kanefsky (US 20120023120 A1).

As to claims 3 and 13, Shekhar teaches: receive one or more identifiers (Shekhar discloses real-time transactions and queries being received in [0013]. Shekhar also discloses that an origin ID, such as a particular username, a certain type of user (e.g., role of a user), an application that submits query requests such as a dashboard application, a function within an application such as a particular report, etc., is used as a trigger in [0019]); and generate, using the machine learning model, the plurality of predicted requests (Shekhar discloses using a heuristic, such as an “origin ID,” that the ML model has identified as likely to trigger a certain query or query pattern to be issued to the system. Thus, when a trigger heuristic is observed or identified in real time, the cache manager can decide that the predicted query pattern associated with the trigger heuristic is expected to be issued. See [0018]).

Shekhar fails to teach a set of key-value pairs associated with query requests. However, Kanefsky teaches providing the initial query suggestions and additional query suggestions in the form of key-value pairs, the key-value pairs being pairs of query input sequences and query suggestions having initial query characters that match the query input sequences (see [0038] and claim 8). Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the teachings of Shekhar to incorporate the PREDICTIVE QUERY SUGGESTION CACHING as taught by Kanefsky for the purpose of speeding up the process by which the browser accesses and displays suggestions based on the user's current partial query.

Claims 5, 8, 9, 15, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Shekhar (US 20240143593 A1) in view of Langseth et al. (US 20180173705 A1).

As to claims 5 and 15, Shekhar fails to teach that the metric associated with retrieving data for the respective predicted request comprises at least one of a latency of the plurality of predicted requests and a frequency of the plurality of predicted requests. However, Langseth teaches the metric associated with retrieving data for the respective predicted request comprises at least one of a latency of the plurality of predicted requests and a frequency of the plurality of predicted requests (see [0018]).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the teachings of Shekhar to incorporate the SYSTEM AND METHOD FOR FACILITATING QUERIES VIA REQUEST-PREDICTION-BASED TEMPORARY STORAGE OF QUERY RESULTS as taught by Langseth for the purpose of significantly decreasing latency or other delays in responding to requests and improving the efficiency of temporary data storage or other computer resource usage.

As to claims 8 and 18, Langseth teaches: generate a ranking of the plurality of predicted requests based on the metric corresponding to each predicted request; and select the subset of predicted requests based on the ranking (Langseth discloses that obtainment and/or temporary storage of the subset of results (or the lack thereof with respect to the other subsets of results) may be based on cost information, frequency information, preference information, or other information. The selectiveness of the obtainment and/or temporary storage of results (prior to particular requests occurring) may significantly decrease latency or other delays for sufficiently responding to requests and improve efficiency of temporary data storage or other computer resource usage. See [0018]. It would have been obvious to use the cost information as a ranking metric to aid in the selectiveness of the obtainment and/or temporary storage of results.).

As to claims 9 and 19, Langseth teaches: determine the metric for each of the plurality of predicted requests by calculating a product of a latency associated with each of the plurality of predicted requests and a frequency of receiving each of the plurality of predicted requests (Langseth discloses that obtainment and/or temporary storage of the subset of results may be based on cost information, frequency information, preference information, or other information, and that this selectiveness (prior to particular requests occurring) may significantly decrease latency or other delays and improve efficiency of temporary data storage or other computer resource usage. See [0018]. It would have been obvious to use the cost information as a metric to aid in the selectiveness of the obtainment and/or temporary storage of results.).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Shekhar (US 20240143593 A1) in view of Heimendinger (US 20110055202 A1).

As to claim 10, Shekhar fails to teach: train the machine learning model based on historical data associated with a plurality of profiles received from a plurality of entities; receive an entity identifier and a profile associated with each of the one or more requests; and use the entity identifier and the profile associated with each of the one or more requests to generate the plurality of predicted requests. However, Heimendinger teaches these limitations (Heimendinger discloses that the predictive model may be formed/adjusted based on user or organization profiles, usage history, and similar factors in [0005] and [0021]). Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the teachings of Shekhar to incorporate the PREDICTIVE DATA CACHING as taught by Heimendinger for the purpose of significantly decreasing latency or other delays in responding to requests and improving the efficiency of temporary data storage or other computer resource usage.
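Claims 9 and 19 recite computing the metric as the product of a latency and a frequency, and claims 8 and 18 recite ranking predicted requests by their metric and selecting the subset from that ranking. A minimal sketch of that scoring-and-ranking step follows; the queries, numbers, and field names are made up for illustration, assuming the product form recited in claims 9/19.

```python
def cache_priority(predictions):
    """Score each predicted request by latency x frequency (claims 9/19),
    then rank predictions by that metric, highest first (claims 8/18).
    A higher product means caching the result saves more total time."""
    scored = [(p["query"], p["latency_ms"] * p["freq_per_hour"])
              for p in predictions]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


# Hypothetical predicted requests with per-query latency and frequency.
predictions = [
    {"query": "daily_totals", "latency_ms": 400, "freq_per_hour": 30},   # 12000
    {"query": "rare_audit",   "latency_ms": 900, "freq_per_hour": 1},    # 900
    {"query": "login_lookup", "latency_ms": 5,   "freq_per_hour": 600},  # 3000
]
ranked = cache_priority(predictions)
# daily_totals ranks first: a slow query that is hit often benefits most
# from pre-caching, while a very slow but rare query ranks last.
```

The product form captures the intuition behind the claimed metric: neither latency nor frequency alone identifies the best caching candidates, but their product approximates the total time the cache would save.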
Allowable Subject Matter

Claims 6 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The prior art fails to teach, suggest, or make obvious “generate the plurality of predicted requests, the one or more processors further: generate a hash from data associated with each of the one or more requests based on a hash function; concatenate each hash with a value derived from the data associated with each of the one or more requests to generate input feature encoding; and generate the plurality of predicted requests by identifying requests associated with the input feature encoding”, as recited in claims 6 and 16.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Li et al. (US 20220156340 A1) - Techniques of this disclosure enable a computing device to process voice queries and provide query answers even when the computing device and vehicle do not have internet connectivity. According to the disclosed techniques, a computing device may detect a query via input devices of the computing device and output a query answer determined based on the detected query. Rather than directly querying a remote computing system, various aspects of the techniques of this disclosure may enable the computing device to use a query answer cache to generate the query answer. The query answer cache may include predicted queries and query answers retrieved from a query answer cache of a remote computing system, thereby enabling the computing device to respond to detected queries while experiencing an unreliable internet connection.
Bouvrie et al. (US 20240232729 A1) - Systems, methods, and computer-program products for online booking of lodging location reservations include: receiving desired reservation information from a user; updating information stored in cache using a price and availability predictive machine learning model using the received desired reservation information; performing a search in the cache for solutions to the received desired reservation information; constructing all possible solutions available in the cache that satisfy the received desired reservation information, including splits in stays between more than one lodging location; determining a score for each solution using a scoring machine learning model that considers the user's preferences; identifying a subset of solutions based on the score of each solution; performing live pricing and availability verification for the subset of solutions by querying a provider corresponding to each of the subset of solutions; and presenting the subset of solutions to the user with the verified pricing and availability information.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JARED M BIBBEE, whose telephone number is (571) 270-1054. The examiner can normally be reached Monday-Thursday, 8 AM-6 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, APU MOFIZ, can be reached at (571) 272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JARED M BIBBEE/
Primary Examiner, Art Unit 2161
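The allowable limitation of claims 6 and 16 in the Office Action above (generate a hash of each request's data, concatenate the hash with a value derived from that same data, and use the result as an input feature encoding for the prediction model) can be illustrated roughly as below. The hash function (SHA-256) and the derived value (request length) are arbitrary stand-ins chosen for the sketch, not the applicant's actual choices, and encode_request is a hypothetical name.

```python
import hashlib


def encode_request(request: str) -> str:
    """Rough illustration of the claims 6/16 limitation: hash the request
    data, then concatenate the hash with a value derived from the same data
    to form an input feature encoding for the prediction model."""
    # hash of the request data (truncated hex digest, an arbitrary choice)
    digest = hashlib.sha256(request.encode("utf-8")).hexdigest()[:16]
    # value derived from the same data (length is an arbitrary example)
    derived = str(len(request))
    # concatenation of hash and derived value = input feature encoding
    return digest + ":" + derived


enc = encode_request("SELECT * FROM payroll")
```

The encoding is deterministic, so identical requests always map to the same feature, while the hash keeps distinct requests well separated; that is the property that makes such an encoding usable as a model input.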

Prosecution Timeline

Jun 23, 2025
Application Filed
Mar 27, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596742: METHOD AND SYSTEM FOR CURATING MEDIA CONTENT
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12596747: NATURAL LANGUAGE SEARCH OVER SECURITY VIDEOS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12572427: PARALLELIZATION OF INCREMENTAL BACKUPS
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12572578: CONTENT COLLABORATION PLATFORM WITH DYNAMICALLY-POPULATED TABLES
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12566747: RECURSIVE ENDORSEMENTS FOR DATABASE ENTRIES
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80% (94% with interview, +13.7%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 660 resolved cases by this examiner. Grant probability derived from career allow rate.
