Prosecution Insights
Last updated: April 19, 2026
Application No. 19/092,710

AUTOMATIC ENTERPRISE DATABASE AND QUERY OPTIMIZATION

Status: Non-Final OA, §101
Filed: Mar 27, 2025
Examiner: BIBBEE, JARED M
Art Unit: 2161
Tech Center: 2100 — Computer Architecture & Software
Assignee: The PNC Financial Services Group, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 80%, above average (529 granted / 660 resolved; +25.2% vs TC avg)
Interview Lift: +13.7% (moderate), among resolved cases with interview
Typical Timeline: 3y 0m average prosecution; 12 applications currently pending
Career History: 672 total applications across all art units

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§103: 51.1% (+11.1% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 5.2% (-34.8% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 660 resolved cases
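The headline figures above can be cross-checked from the raw counts. A minimal sketch, assuming the "vs TC avg" deltas are additive percentage-point differences (an assumption; the page does not define them):

```python
# Career allow rate from the raw counts shown above.
granted, resolved = 529, 660
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 80.2%, displayed as 80%

# Implied Tech Center average, treating the +25.2% delta as
# percentage points above the TC average (assumed, not stated).
tc_avg = allow_rate - 25.2
print(f"Implied TC average allow rate: {tc_avg:.1f}%")
```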

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

As to independent claims 1 and 15:

At Step 1: The claims are directed to a "method" and "system" and thus directed to a statutory category.

At Step 2A, Prong One: The claims recite the following limitations directed to an abstract idea:

"assigning, by the server system, a score to each of the data queries, wherein the score for a data query is indicative of a quality of the data query" as drafted recites a mental process and/or mathematical concept. One can mentally evaluate/judge or mathematically calculate the quality of a query and then assign a classification using pen and paper.

"generating based on the scores for the data queries, a general score for at least one user group, wherein the at least one user group comprises one or more users that are associated with the enterprise" as drafted recites a mathematical concept. One can mathematically calculate a score for a user group based on the quality of queries submitted by the group.

At Step 2A, Prong Two: The claims recite the following additional elements:

That the method and system are performed by "a computer", "a server system", and "a distributed database management platform of an enterprise", which is a high-level recitation of generic computer components and represents mere instructions to apply on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application.
"receiving data queries from users associated with the enterprise for data stored in the distributed database management platform" is insignificant extra-solution activity. This limitation is recited as receiving data (i.e., mere data gathering). This does not provide integration into a practical application.

"automatically undertaking a computing query optimization action for the at least one user group based on the general score for the at least one user group" is insignificant extra-solution activity. This limitation is recited as mere outputting of data or providing/presenting data. Specifically, Applicant's specification [0027], [0029], and [0030] states that the computing query optimization action comprises presenting a notification, such as an alert sent to, and displayed on, a user interface of the user device. This does not provide integration into a practical application.

Viewing the additional limitations together and the claim as a whole, nothing provides integration into a practical application.

At Step 2B: The conclusions for the mere implementation using a computer are carried over and do not provide significantly more. With respect to the "receiving" and "automatically undertaking" limitations identified as extra-solution activity in Step 2A, Prong Two, when re-evaluated at Step 2B these limitations are well-understood, routine, and conventional and remain insignificant extra-solution activity. See MPEP 2106.05(d)(II): "i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)." To the extent this is a request for "input" and "output" on records, that is well-understood, routine, and conventional. See MPEP 2106.05(d)(II): "iii. Electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining 'shadow accounts'); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log)."

Looking at the claims as a whole does not change this conclusion and the claims are ineligible.

As to dependent claims 2-14 and 16-20:

At Step 1: The claims are directed to a "method" and "system" and thus directed to a statutory category.

At Step 2A, Prong One: The claims recite the following limitations directed to an abstract idea:

"the computing query optimization action is undertaken based on the general score being below a predetermined threshold score" as drafted recites a mental process. One can mentally evaluate/judge whether a score is above or below a threshold.

"assigning a category classification for each of the data queries, wherein the category classification for each data query is indicative of a quality of the data query" as drafted recites a mental process and/or mathematical concept. One can mentally evaluate/judge the quality of a query and then assign a classification using pen and paper.

"the score for each data query is based on a presence of at least one query parameter in the data query" as drafted recites a mental process. One can mentally evaluate/judge the query and then determine the presence of a significant parameter.
"the assigning of the score comprises: comparing at least one query parameter of the data query to one or more parameter criteria to identify one or more query patterns, wherein each of the one or more query patterns is associated with a specific quality category classification; and generating the score to assign to the data query based on a presence of the one or more query patterns in the data query" as drafted recites a mental process. One can mentally evaluate/judge the quality of a query and then assign a classification using pen and paper.

"assigning of the score is further based on a computing resource usage associated with the data query or the at least one query parameter" as drafted recites a mental process and/or mathematical concept. One can mentally evaluate/judge the quality of a query and then assign a classification using pen and paper.

At Step 2A, Prong Two: The claims recite the following additional elements:

That the method and system are performed by "a computer", "a server system", "a distributed database management platform of an enterprise", "a Hadoop distributed database management platform", and "SQL queries", which is a high-level recitation of generic computer components and represents mere instructions to apply on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application.

"the computing query optimization action comprises reducing an ability of the at least one user group to submit data queries to the distributed database management platform" is recited as mere outputting of data or providing/presenting data. Specifically, Applicant's specification [0027], [0029], and [0030] states that the computing query optimization action comprises presenting a notification, such as an alert sent to, and displayed on, a user interface of the user device. This does not provide integration into a practical application.
"reducing the ability of the at least one user group to submit data queries to the distributed database management platform comprises reducing computing resources of the enterprise available to accounts associated with the at least one user group" is recited as mere outputting of data or providing/presenting data. Specifically, Applicant's specification [0027], [0029], and [0030] states that the computing query optimization action comprises presenting a notification, such as an alert sent to, and displayed on, a user interface of the user device. This does not provide integration into a practical application.

"reducing the ability of the at least one user group to submit data queries to the distributed database management platform comprises placing accounts associated with the at least one user group in a longer query submission queue time for the distributed database management platform" is recited as mere outputting of data or providing/presenting data. Specifically, Applicant's specification [0027], [0029], and [0030] states that the computing query optimization action comprises presenting a notification, such as an alert sent to, and displayed on, a user interface of the user device. This does not provide integration into a practical application.

"the computing query optimization action comprises an actionable notification, wherein the actionable notification comprises at least one of providing an option to amend a data query to a recommended data query, information on excess resources used by a data query, an alert that a data query is not optimized, or an alert that a data query has received a specific categorization" is recited as mere outputting of data or providing/presenting data. Specifically, Applicant's specification [0027], [0029], and [0030] states that the computing query optimization action comprises presenting a notification, such as an alert sent to, and displayed on, a user interface of the user device. This does not provide integration into a practical application.

"the computing query optimization action comprises displaying an interactive UI for the at least one user group, wherein the interactive UI allows the at least one user group to navigate through a history of previously submitted data queries from the at least one user group, recommendations on improvements to the previously submitted data queries, or an interactive query training module" is recited as mere outputting of data or providing/presenting data. This does not provide integration into a practical application.

Viewing the additional limitations together and the claim as a whole, nothing provides integration into a practical application.

At Step 2B: The conclusions for the mere implementation using a computer are carried over and do not provide significantly more.

With respect to the "reducing" limitations identified as extra-solution activity in Step 2A, Prong Two, when re-evaluated at Step 2B these limitations are well-understood, routine, and conventional and remain insignificant extra-solution activity. See MPEP 2106.05(d)(II): "i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)." To the extent this is a request for "input" and "output" on records, that is well-understood, routine, and conventional. See MPEP 2106.05(d)(II): "iii. Electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining 'shadow accounts'); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log)."

With respect to the "computing query optimization action" identified as extra-solution activity in Step 2A, Prong Two, when re-evaluated at Step 2B this limitation is well-understood, routine, and conventional and remains insignificant extra-solution activity. See MPEP 2106.05(d)(II): "ii. Performing repetitive calculations, Flook, 437 U.S. at 594, 198 USPQ2d at 199 (recomputing or readjusting alarm limit values); Bancorp Services v. Sun Life, 687 F.3d 1266, 1278, 103 USPQ2d 1425, 1433 (Fed. Cir. 2012) ('The computer required by some of Bancorp's claims is employed only for its most basic function, the performance of repetitive calculations, and as such does not impose meaningful limits on the scope of those claims.')."

Looking at the claims as a whole does not change this conclusion and the claims are ineligible.

Allowable Subject Matter

With respect to claims 1-20, there is no prior art rejection.
Regarding independent claims 1 and 15:

Prior art relied upon: SECK et al. (US 20250209071 A1); Blake et al. (US 20230231854 A1).

The above prior art fails to teach: "generating, by the server system, based on the scores for the data queries, a general score for at least one user group, wherein the at least one user group comprises one or more users that are associated with the enterprise; and automatically undertaking a computing query optimization action for the at least one user group based on the general score for the at least one user group".

SECK lacks any discussion of generating, based on the scores for the data queries, a general score for at least one user group within an enterprise and automatically undertaking a computing query optimization action for the user group based on the general score for the user group. Instead, SECK discloses a method for query optimization that may include capturing a proposed database query input into a user interface. The method may further include providing the proposed database query to a machine-learning model. The machine-learning model may have been trained, using one or more gathered and/or simulated sets of query execution overhead data and one or more gathered and/or simulated sets of database queries, to determine a potential execution overhead of a database query and output a query execution score. The method may further include outputting, by the machine-learning model, the query execution score based on the proposed database query. The method may further include determining that the query execution score exceeds a query execution score threshold. The method may further include triggering a corrective action based on the query execution score exceeding the query execution score threshold (See Abstract).
Blake lacks any discussion of generating, based on the scores for the data queries, a general score for at least one user group within an enterprise and automatically undertaking a computing query optimization action for the user group based on the general score for the user group. Instead, Blake discloses identifying users within an enterprise who pose heightened security risks to the enterprise. A method can include receiving, by a computing system, information about users in the enterprise, grouping the users into groups based on at least one grouping feature and the user information, the at least one grouping feature including, for each of the users, behavior, activity, role, department, region, role-based risk score, event-based risk score, and/or composite risk score, identifying, for each group, normalized behavior of users in the group, generating, for each user in each group, a composite risk score based on deviation of the user's activity from the normalized behavior of the group, identifying, for each group, a subset of users in the group to be added to a watch list, and adding the subset of users to the watch list (See Abstract).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Ma et al. (US 12242479 B1) - Systems and methods described herein relate to automatic index recommendations for improved database query performance. Candidate indexes are identified. The candidate indexes are associated with a database query that is classified as a slow query. A feature vector is generated for each candidate index to represent statement features and statistical features associated with the candidate index. The feature vectors are provided to one or more machine learning models to obtain an index recommendation value for each candidate index. An index recommendation is presented at a user device. The index recommendation identifies a first index of the candidate indexes based at least partially on the index recommendation value obtained for the first index. User input indicative of a user selection of the first index is received. A database schema is updated to include the first index in response to the user input.

Seck et al. (US 12222938 B1) - A method for query optimization may include capturing a proposed database query input into a user interface. The method may further include providing the proposed database query to a machine-learning model. The machine-learning model may have been trained, using one or more gathered and/or simulated sets of query execution overhead data and one or more gathered and/or simulated sets of database queries, to determine a potential execution overhead of a database query and output a query execution score. The method may further include outputting, by the machine-learning model, the query execution score based on the proposed database query. The method may further include determining that the query execution score exceeds a query execution score threshold. The method may further include triggering a corrective action based on the query execution score exceeding the query execution score threshold.

Ignatyev et al. (US 20180336247 A1) - Operations include estimating, in real time, the runtime of a query. The query optimization system receives a set of query definitions for defining a target query. The system uses the set of query definition elements to determine an estimated runtime for the target query. If the estimated runtime exceeds some acceptable threshold value, then the system determines a modification to the set of query definition elements. The system uses the modification to generate a modified query corresponding to a lower estimated runtime.
Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JARED M BIBBEE, whose telephone number is (571) 270-1054. The examiner can normally be reached Monday-Thursday, 8AM-6PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, APU MOFIZ, can be reached at (571) 272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JARED M BIBBEE/
Primary Examiner, Art Unit 2161
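The claim limitations at issue describe a concrete pipeline: score each query against parameter criteria to identify query patterns, aggregate a general score per user group, and trigger an optimization action when the group score falls below a threshold. A minimal sketch of that flow, in which all pattern criteria, weights, and the threshold are hypothetical illustrations rather than values from the application:

```python
import re
from statistics import mean

# Hypothetical parameter criteria: each pairs a pattern over the query text
# with a score adjustment, loosely following the claim language ("comparing
# at least one query parameter ... to one or more parameter criteria to
# identify one or more query patterns").
PATTERNS = [
    (re.compile(r"select\s+\*", re.I), -30),  # unscoped projection
    (re.compile(r"\bwhere\b", re.I), +20),    # filter present
    (re.compile(r"\blimit\b", re.I), +10),    # bounded result set
]

def score_query(sql: str) -> int:
    """Assign a quality score to one query: base 50, adjusted per pattern."""
    score = 50
    for pattern, delta in PATTERNS:
        if pattern.search(sql):
            score += delta
    return max(0, min(100, score))

def group_score(queries: list[str]) -> float:
    """General score for a user group: mean of its members' query scores."""
    return mean(score_query(q) for q in queries)

def optimization_action(g_score: float, threshold: float = 60.0) -> str:
    """Threshold-triggered action; the spec describes notifications and
    throttling as possible actions, so the label here is illustrative."""
    return "notify_group" if g_score < threshold else "no_action"
```

For example, a group that submits `SELECT * FROM t` (score 20) and `SELECT id FROM t WHERE id = 1 LIMIT 10` (score 80) gets a general score of 50, which falls below the 60 threshold and triggers the action.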

Prosecution Timeline

Mar 27, 2025: Application Filed
Mar 05, 2026: Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596742: METHOD AND SYSTEM FOR CURATING MEDIA CONTENT (2y 5m to grant; granted Apr 07, 2026)
Patent 12596747: NATURAL LANGUAGE SEARCH OVER SECURITY VIDEOS (2y 5m to grant; granted Apr 07, 2026)
Patent 12572427: PARALLELIZATION OF INCREMENTAL BACKUPS (2y 5m to grant; granted Mar 10, 2026)
Patent 12572578: CONTENT COLLABORATION PLATFORM WITH DYNAMICALLY-POPULATED TABLES (2y 5m to grant; granted Mar 10, 2026)
Patent 12566747: RECURSIVE ENDORSEMENTS FOR DATABASE ENTRIES (2y 5m to grant; granted Mar 03, 2026)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview (+13.7%): 94%
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 660 resolved cases by this examiner. Grant probability derived from career allow rate.
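The 94% with-interview figure is consistent with treating the +13.7% interview lift as an additive percentage-point adjustment to the 80% baseline (an assumption; the page does not define how the lift is applied):

```python
baseline = 80.0        # career allow rate, in percent
interview_lift = 13.7  # percentage points (assumed additive)
with_interview = baseline + interview_lift
print(round(with_interview))  # 94
```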
