DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/16/2025 has been entered.
Response to Amendment
The amendment filed 12/16/2025 has been entered. Applicant has amended claims 1, 12, and 20. Claims 1-20 are currently pending in the instant application.
Response to Arguments
Applicant’s arguments, see pages 9-10, filed 12/16/2025, with respect to the rejection(s) of claim(s) 1-20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Pandey et al. (US 2020/0042647). Pandey teaches the amended limitation as seen below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US 2018/0314735) in view of Pandey et al. (US 2020/0042647).
Regarding claim 1, Liu teaches A computing system comprising: one or more processors; and one or more memory resources storing instructions executable by the one or more processors to perform operations, the operations comprising (see Figure 8, a diagram of the system): receiving a query, via the computer system, the query associated with one or more requests executable by the computing system ([0037] As an example, suppose that the available memory of a computer system running a Gauss200 OLAP Data Warehouse system is 64 gigabytes (GB) and that three queries, Q1, Q2, and Q3, arrive at the system one after the other. With current technology, the estimated memory cost of each of the three queries is 20 GB (i.e., <Q1, 20 GB>, <Q2, 20 GB> and <Q3, 20 GB>). Q1 is admitted into the database system first and starts executing because its memory cost is 20 GB, which is less than the system's current available memory of 64 GB. Thus, the query is accommodated.); generating, via the computer system, using a machine-learned model, a query cost for the query, wherein the query cost is indicative of an estimated performance of the computing system ([0049] The predictive model module 110 is configured to train the co-predictive model to generate a predictive trained model. In one embodiment, given the predictive trained model and arriving queries without execution, the predictive model module 110 outputs the estimated resource cost and detected peak values (extreme event), if any, at each time unit. The details of each component are discussed below; see also [0062]).
Liu does not explicitly teach cancelling, by the computer system, the query before completing execution of the one or more requests, based on the query cost and a priority of the query, wherein the priority is indicative of whether the query can be cancelled, and cancelling the query comprises terminating execution of the query.
Pandey teaches cancelling, by the computer system, the query before completing execution of the one or more requests, based on the query cost and a priority of the query, wherein the priority is indicative of whether the query can be cancelled, and cancelling the query comprises terminating execution of the query ([0028] and [0025] - FIG. 3 is an illustration of various aspects of a cognitive analysis system for facilitating actions related to requested queries, utilized in embodiments of the present invention. Based on the machine learning performed by the program code, embodiments of the present invention include a “smart” system 310, which mines intermediate data 320, including statistics produced in all phases of query execution, as illustrated in FIG. 2, to track all queries, both successful and unsuccessful, executed on a database resource 330. The smart system 310 can include cognitive analysis algorithms, which the program code utilizes to analyze collected data. The program code updates the smart system 310 upon execution and/or cancellation of subsequently requested queries, such that the machine learning and cognitive analysis aspects of the program code improve with the additional data.).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Liu to include, based on the query cost, cancelling the query before completing execution of the one or more requests by the computing system, as taught by Pandey. It would be advantageous because it promotes system stability by preventing problematic queries that surpass current system capacity from executing on the database, as taught by the cited sections of Pandey.
Regarding claim 2, Liu in view of Pandey teaches the computing system of claim 1. Liu further teaches wherein generating a query cost for the query comprises: accessing index data indicative of a location of data ingested by the computing system, wherein the data is associated with the query; and identifying a field and an operator for the query, wherein the field is indicative of the location of the data based on the index data and the operator is indicative of an action to be taken on the data ([0056] FIG. 4 is a schematic diagram illustrating a process 400 for generating a query plan feature vector in accordance with an embodiment of the present disclosure. In one embodiment, the process 400 may be used by the predictive ML process 300 to generate the feature vector xk. As depicted in FIG. 4, the process 400 receives a query at block 402. The query specifies what is to be selected (e.g., SELECTED n.name), from where (e.g., FROM tbNation n, tbRegion r), the conditions for selection (e.g., WHERE n.regionkey=r.regionkey AND r.name=‘EUROPE’) and the order to return the selected data (e.g., ORDER BY n.name). The process 400 at block 404 parses the query down into its operators to produce a query plan. An operator is a reserved word or a character used in a query to perform operation(s), such as comparisons and arithmetic operations. An operator may be used to specify a condition or to serve as conjunctions for multiple conditions in a statement. The process 400 at block 406 generates a query plan feature vector that includes the number of instances each operator appears in the query and the sum of cardinalities for each instance of the operator. In one embodiment, the sum of cardinalities indicates the actual data size to be processed corresponding to an operator. For example, if a sort operator appears twice in a query plan with cardinalities 3000 and 45000, the query plan feature vector includes a “sort instance count” element containing the value 2 and a “sort cardinality sum” element containing the value 48000).
Regarding claim 3, Liu in view of Pandey teaches the computing system of claim 2. Liu further teaches wherein generating a query cost for the query comprises: predicting a static query cost based on the field and the operator, wherein the static query cost is indicative of computing resources to be consumed by executing the query; receiving performance metrics associated with the computing system; and generating the query cost based on the static query costs and the performance metrics ([0082] Additionally, in certain embodiments, the process 700 may be simultaneously processing or managing more than one query at a time. For example, while a query is being executed, the process 700 may begin processing additional queries that it receives. For instance, using the above example, after Q1 begins executing, the process 700 also initiates execution of Q2 as the available memory of the computer system is 40 GB, which is greater than the estimated system memory cost for <Q2, 24 GB>. While Q1 and Q2 are executing, the system has 16 GB of system memory available. Thus, the system's currently available memory of 16 GB is insufficient for initiating execution of Q3, <Q3, 24 GB>, and thus, Q3 is queued in the wait queue to avoid system OOM occurring; see also [0084]).
Regarding claim 4, Liu in view of Pandey teaches the computing system of claim 3. Liu further teaches wherein the performance metrics are indicative of at least one of: (i) an associated latency, (ii) a CPU utilization, (iii) an associated running time, or (iv) a queue of queries to be executed ([0082] Additionally, in certain embodiments, the process 700 may be simultaneously processing or managing more than one query at a time. For example, while a query is being executed, the process 700 may begin processing additional queries that it receives. For instance, using the above example, after Q1 begins executing, the process 700 also initiates execution of Q2 as the available memory of the computer system is 40 GB, which is greater than the estimated system memory cost for <Q2, 24 GB>. While Q1 and Q2 are executing, the system has 16 GB of system memory available. Thus, the system's currently available memory of 16 GB is insufficient for initiating execution of Q3, <Q3, 24 GB>, and thus, Q3 is queued in the wait queue to avoid system OOM occurring; see also [0084]).
Regarding claim 5, Liu in view of Pandey teaches the computing system of claim 1. Liu further teaches wherein the operations further comprise: determining, based on the query cost, a probability of decreasing performance of the computing system by executing the query ([0082] Additionally, in certain embodiments, the process 700 may be simultaneously processing or managing more than one query at a time. For example, while a query is being executed, the process 700 may begin processing additional queries that it receives. For instance, using the above example, after Q1 begins executing, the process 700 also initiates execution of Q2 as the available memory of the computer system is 40 GB, which is greater than the estimated system memory cost for <Q2, 24 GB>. While Q1 and Q2 are executing, the system has 16 GB of system memory available. Thus, the system's currently available memory of 16 GB is insufficient for initiating execution of Q3, <Q3, 24 GB>, and thus, Q3 is queued in the wait queue to avoid system OOM occurring.).
Regarding claim 6, Liu in view of Pandey teaches the computing system of claim 1. Pandey further teaches wherein the operations further comprise: determining, based on the query cost, a probability of increasing performance of the computing system by cancelling the query ([0039] The target table(s) of a requested query can also be relevant to the program code of the smart system (e.g., FIG. 3, 310) in predicting the success of the query. As understood by one of skill in the art, there will be many situations where the target table is in the deadlock state. In situations any query referring to the same table will also either fail or will enter the deadlock state. When an optimizer (e.g., FIG. 2, 230) generates different plans for a query, the optimizer does not take the state of target tables into consideration. Thus, the optimizer would not factor the state of the table into its considerations, even if the table which the SQL query is referring to is using a deadlock state. In this situation, in some embodiments of the present invention, the program code will cause the proposed analytical engine to wait for the table to get out of the deadlock state, or will stop the query from executing altogether, based on the state of the table).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Liu to include determining, based on the query cost, a probability of increasing performance of the computing system by cancelling the query, as taught by Pandey. It would be advantageous because it promotes system stability by cancelling queries that surpass current system capacity, as taught by the cited sections of Pandey.
Regarding claim 7, Liu in view of Pandey teaches the computing system of claim 1. Pandey further teaches wherein the machine-learned model is configured to: receive feedback data associated with a cancellation score, wherein the cancellation score is associated with one or more cancelled queries; and determine, based on the feedback data, one or more updated query costs associated with respective queries ([0027] It is a compilation of intermediate data from past queries that provides intelligence to some embodiments of the smart system 310 of FIG. 3. For example, when an application requests a given query, and this query is executed, but encounters issues during execution, such as taking an extended period of time to execute when compared to other queries, program code in some embodiments of the present invention analyzes the given query to determine the reason for the extended execution time, which could be an issue with the query itself and/or a concurrent issues with a computing resource, etc. Once the program code determines the cause of the delay, the program code will retain the query, as well as the cause of the issue, for use in future query analyses. In embodiments of the present invention, the program code of the smart system 310 monitors and examines individual queries and/or all the queries, in order to generate a (regularly-updated) data resource for utilization in cognitive analyses).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Liu to include receiving feedback data associated with a cancellation score, wherein the cancellation score is associated with one or more cancelled queries, and determining, based on the feedback data, one or more updated query costs associated with respective queries, as taught by Pandey. It would be advantageous because it promotes system stability by cancelling queries that surpass current system capacity, as taught by the cited sections of Pandey.
Regarding claim 8, Liu in view of Pandey teaches the computing system of claim 7. Pandey further teaches wherein the cancellation score is associated with an actual performance of the computing system in response to the one or more cancelled queries ([0026] The program code of the smart system 310 continuously learns which queries will adversely affect a computing system, including a database resource 330, by collecting and utilizing intermediate data 320 that includes, but not limited to: 1) query plans of queries; 2) actual time of query execution; 3) a deadlock scenario's query text, optimizer plans, disk problems, and/or parsing anomalies; 4) crash statistics, including parser data, compiler information, optimizer statistics, execution statistics, and/or meta data; 5) unknown errors during a vulnerable and learnable pattern in queries; 6) network related statistics at times of failure; and/or 6) schema-related statistics during execution of queries where errors or other technical issues were experienced.).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Liu to include wherein the cancellation score is associated with an actual performance of the computing system in response to the one or more cancelled queries, as taught by Pandey. It would be advantageous because it promotes system stability by cancelling queries that surpass current system capacity, as taught by the cited sections of Pandey.
Regarding claim 9, Liu in view of Pandey teaches the computing system of claim 1. Pandey further teaches wherein the operations comprise: generating a cancellation trigger based on the query cost; and invoking the cancellation trigger, wherein invoking the cancellation trigger cancels the query ([0028] Returning to FIG. 3, the program code generates and maintains the cognitive repository 340 by analyzing all the query executions (or a given sample of the queries, depending on implementations) on the database resource 330 through the current time/date and progressively builds the cognitive repository 340, including intermediate data, as a learning catalogue of the queries, including failing and successful queries. Utilizing the cognitive repository 340, when an application requests a query, the program code utilizes the data in the cognitive repository 340 to predict whether the requested query, submitted at a given time, will fail or succeed. Based on this prediction, the program code takes an action related to the query, including but not limited to, cancelling execution of the query, delaying execution of the query, and/or executing the query. In some embodiments of the present invention, the program code alerts a user of the prediction and solicits input from the user (e.g., through a user interface 150, FIG. 1), enabling the user to override or accept the recommendations given by the program code.).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Liu to include generating a cancellation trigger based on the query cost and invoking the cancellation trigger, wherein invoking the cancellation trigger cancels the query, as taught by Pandey. It would be advantageous because it promotes system stability by cancelling queries that surpass current system capacity, as taught by the cited sections of Pandey.
Regarding claim 10, Liu in view of Pandey teaches the computing system of claim 9. Pandey further teaches wherein invoking the cancellation trigger comprises: determining a type of cancellation trigger, wherein the type of cancellation trigger is associated with a signal to cancel the query; and determining a cancellation controller from a plurality of cancellation controllers, based on the type of cancellation trigger ([0044] Returning to FIG. 5, based on the prediction regarding the success of the query, the program code initiates an action to preserve performance of the database resource (530). These actions may include, but are not limited to, executing the query, pre-empting the query, cancelling execution of the query, delaying execution of the query, executing the query with a warning, and/or causing the query execution to fail. For example, in some embodiments of the present invention, the program code pre-empts the requested query based on determining that, if executed, the query will hog system resources and thus, the program code pre-empts the query to prevent denials of service to other queries. In some embodiments of the present invention, the program code determines which action to take based on the prediction, which may be a binary value, a quantitative ranked value, or a qualitative result. In some embodiments of the present invention, the action by the program code can include sending an alarm or a notification to a user, alerting the user to the request to execute a bad query (e.g., a query that can lead to a production issues, tax the system, etc.). In some embodiments of the present invention, whether the program code sends an alarm is dictated based on whether the prediction meets or exceeds a given threshold; the threshold can be a default threshold or a user-defined threshold. The program code updates resources utilized for the cognitive analysis, for use in future analyses, based on the action related to the requested query (540)).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Liu to include wherein invoking the cancellation trigger comprises: determining a type of cancellation trigger, wherein the type of cancellation trigger is associated with a signal to cancel the query; and determining a cancellation controller from a plurality of cancellation controllers, based on the type of cancellation trigger, as taught by Pandey. It would be advantageous because it promotes system stability by cancelling queries that surpass current system capacity, as taught by the cited sections of Pandey.
Regarding claim 11, Liu in view of Pandey teaches the computing system of claim 10. Pandey further teaches wherein the plurality of cancellation controllers comprises at least one of: (i) an early termination controller, (ii) a user-context-based controller, or (iii) a utilization controller ([0044] Returning to FIG. 5, based on the prediction regarding the success of the query, the program code initiates an action to preserve performance of the database resource (530). These actions may include, but are not limited to, executing the query, pre-empting the query, cancelling execution of the query, delaying execution of the query, executing the query with a warning, and/or causing the query execution to fail. For example, in some embodiments of the present invention, the program code pre-empts the requested query based on determining that, if executed, the query will hog system resources and thus, the program code pre-empts the query to prevent denials of service to other queries. In some embodiments of the present invention, the program code determines which action to take based on the prediction, which may be a binary value, a quantitative ranked value, or a qualitative result. In some embodiments of the present invention, the action by the program code can include sending an alarm or a notification to a user, alerting the user to the request to execute a bad query (e.g., a query that can lead to a production issues, tax the system, etc.). In some embodiments of the present invention, whether the program code sends an alarm is dictated based on whether the prediction meets or exceeds a given threshold; the threshold can be a default threshold or a user-defined threshold. The program code updates resources utilized for the cognitive analysis, for use in future analyses, based on the action related to the requested query (540)).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Liu to include wherein the plurality of cancellation controllers comprises at least one of: (i) an early termination controller, (ii) a user-context-based controller, or (iii) a utilization controller, as taught by Pandey. It would be advantageous because it promotes system stability by cancelling queries that surpass current system capacity, as taught by the cited sections of Pandey.
Regarding claim 12, Liu in view of Pandey teaches the computing system of claim 10. Pandey further teaches wherein the signal to cancel the query is indicative of a cancellation status associated with the query ([0044] Returning to FIG. 5, based on the prediction regarding the success of the query, the program code initiates an action to preserve performance of the database resource (530). These actions may include, but are not limited to, executing the query, pre-empting the query, cancelling execution of the query, delaying execution of the query, executing the query with a warning, and/or causing the query execution to fail. For example, in some embodiments of the present invention, the program code pre-empts the requested query based on determining that, if executed, the query will hog system resources and thus, the program code pre-empts the query to prevent denials of service to other queries. In some embodiments of the present invention, the program code determines which action to take based on the prediction, which may be a binary value, a quantitative ranked value, or a qualitative result. In some embodiments of the present invention, the action by the program code can include sending an alarm or a notification to a user, alerting the user to the request to execute a bad query (e.g., a query that can lead to a production issues, tax the system, etc.). In some embodiments of the present invention, whether the program code sends an alarm is dictated based on whether the prediction meets or exceeds a given threshold; the threshold can be a default threshold or a user-defined threshold. The program code updates resources utilized for the cognitive analysis, for use in future analyses, based on the action related to the requested query (540)).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Liu to include wherein the signal to cancel the query is indicative of a cancellation status associated with the query, as taught by Pandey. It would be advantageous because it promotes system stability by cancelling queries that surpass current system capacity, as taught by the cited sections of Pandey.
Claims 13-20 are rejected using reasoning similar to that set forth in the rejection of claims 1-12, as they recite similar limitations but are directed towards a method and non-transitory computer-readable media.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL SHARPLESS whose telephone number is (571)272-1521. The examiner can normally be reached M-F, 7:30 AM-3:30 PM (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALEKSANDR KERZHNER can be reached at 571-270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.C.S./Examiner, Art Unit 2165
/ALEKSANDR KERZHNER/ Supervisory Patent Examiner, Art Unit 2165