Prosecution Insights
Last updated: April 19, 2026
Application No. 18/661,239

DATABASE MANAGEMENT SYSTEM PERFORMANCE ISSUE CORRECTION

Non-Final OA (§101, §103)

Filed: May 10, 2024
Examiner: ELIAS, EARL L
Art Unit: 2169
Tech Center: 2100 — Computer Architecture & Software
Assignee: SAP SE
OA Round: 1 (Non-Final)

Grant Probability: 57% (Moderate)
OA Rounds: 1-2
To Grant: 3y 5m
With Interview: 80%

Examiner Intelligence

Career Allow Rate: 57% (grants 57% of resolved cases; 56 granted / 99 resolved; +1.6% vs TC avg)
Interview Lift: +23.5% for resolved cases with interview (strong)
Avg Prosecution: 3y 5m (typical timeline); 19 currently pending
Total Applications: 118 across all art units (career history)

Statute-Specific Performance

§101: 28.7% (-11.3% vs TC avg)
§103: 52.9% (+12.9% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)

Tech Center average figures are estimates • Based on career data from 99 resolved cases

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action has been issued in response to Applicant’s Communication of application S/N 18/661,239 filed on May 10, 2024. Claims 1-20 are pending in the application.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. With respect to claim 1, the limitation directed towards comparing the first query execution signature data to second query execution signature data describing execution of a second query at the database management system is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind and certain methods of organizing human activity but for the recitation of generic computer components. 
That is, other than reciting a testing computing system for testing a database management system, comprising: at least one processor programmed to perform operations comprising: accessing first performance data describing a plurality of operations executed by the database management system to implement a first query; executing a graph neural network using the first performance data to generate a graph neural network output; generating first query execution signature data describing the execution of the first query at the database management system, the generating of the first query execution signature data being based at least in part on the graph neural network output; and based on the comparing, storing an indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent, nothing in the claim precludes these steps from practically being performed in the mind and/or by a human with pen and paper. 
For example, but for the limitations stating a testing computing system for testing a database management system, comprising: at least one processor programmed to perform operations comprising: accessing first performance data describing a plurality of operations executed by the database management system to implement a first query; executing a graph neural network using the first performance data to generate a graph neural network output; generating first query execution signature data describing the execution of the first query at the database management system, the generating of the first query execution signature data being based at least in part on the graph neural network output; and based on the comparing, storing an indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent, the mention of comparing the first query execution signature data to second query execution signature data describing execution of a second query at the database management system, in the context of this claim, encompasses mentally determining whether or not queries are equivalent. Because this limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. The judicial exception is not integrated into a practical application by additional elements. 
In particular, a testing computing system for testing a database management system, comprising: at least one processor programmed to perform operations comprising: accessing first performance data describing a plurality of operations executed by the database management system to implement a first query; executing a graph neural network using the first performance data to generate a graph neural network output; generating first query execution signature data describing the execution of the first query at the database management system, the generating of the first query execution signature data being based at least in part on the graph neural network output; and based on the comparing, storing an indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent is recited at a high level of generality (i.e., as a generic computer performing a generic computer function of search) such that it amounts to no more than mere instructions to apply the exception. 
A testing computing system for testing a database management system, comprising: at least one processor programmed to perform operations comprising: accessing first performance data describing a plurality of operations executed by the database management system to implement a first query; executing a graph neural network using the first performance data to generate a graph neural network output; generating first query execution signature data describing the execution of the first query at the database management system, the generating of the first query execution signature data being based at least in part on the graph neural network output; and based on the comparing, storing an indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent is considered by the examiner to be mere data gathering such that it amounts to no more than insignificant extra-solution activity. These elements do not integrate the abstract idea into a practical application because they do not impose a meaningful limit on the judicial exception and merely confine the claim to a particular technological environment or field of use for data gathering in conjunction with the abstract idea. This claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. 
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements, a testing computing system for testing a database management system, comprising: at least one processor programmed to perform operations comprising: accessing first performance data describing a plurality of operations executed by the database management system to implement a first query; executing a graph neural network using the first performance data to generate a graph neural network output; generating first query execution signature data describing the execution of the first query at the database management system, the generating of the first query execution signature data being based at least in part on the graph neural network output; and based on the comparing, storing an indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent, are recited at a high level of generality to apply the exception using generic components. 
The additional elements, a testing computing system for testing a database management system, comprising: at least one processor programmed to perform operations comprising: accessing first performance data describing a plurality of operations executed by the database management system to implement a first query; executing a graph neural network using the first performance data to generate a graph neural network output; generating first query execution signature data describing the execution of the first query at the database management system, the generating of the first query execution signature data being based at least in part on the graph neural network output; and based on the comparing, storing an indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent, are interpreted to be well-understood, routine, and conventional activity (receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec; see MPEP 2106.05(d)). Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. 
To further elaborate, the additional limitations of a testing computing system for testing a database management system, comprising: at least one processor programmed to perform operations comprising: accessing first performance data describing a plurality of operations executed by the database management system to implement a first query; executing a graph neural network using the first performance data to generate a graph neural network output; generating first query execution signature data describing the execution of the first query at the database management system, the generating of the first query execution signature data being based at least in part on the graph neural network output; and based on the comparing, storing an indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent do not impose a meaningful limit on the judicial exception and merely confine the claim to a particular technological environment or field of use. Claim 1 is not patent eligible. Claims 10 and 19 are similarly rejected because they are similar in scope.

With respect to claims 2, 11, and 20, the limitations are directed towards selecting a first portion of the plurality of operations having higher execution times than a second portion of the plurality of operations; and generating a key operations graph, the key operations graph comprising a plurality of graph elements corresponding to the first portion of the plurality of operations, the executing of the graph neural network being based at least in part on the key operations graph. These additional elements are interpreted to merely confine the claim to a particular technological environment. Therefore, claims 2, 11, and 20 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception. 
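For illustration only, the selection step recited in claims 2, 11, and 20 (choosing a first portion of operations with higher execution times than the rest) might be sketched as follows; the operation names, timings, and the `select_key_operations` helper are hypothetical and not taken from the application:

```python
# Hypothetical sketch: rank a query's operations by execution time and
# keep the slowest ones as the "first portion" used to build a key
# operations graph. Names and numbers are illustrative only.

def select_key_operations(operations, top_n=3):
    """operations: list of (operation_name, execution_time_ms) pairs."""
    ranked = sorted(operations, key=lambda op: op[1], reverse=True)
    return ranked[:top_n]  # the portion with the higher execution times

ops = [("TableScan", 120.0), ("HashJoin", 340.0), ("Filter", 15.0),
       ("Sort", 210.0), ("Project", 4.0)]
key_ops = select_key_operations(ops)
print(key_ops)  # [('HashJoin', 340.0), ('Sort', 210.0), ('TableScan', 120.0)]
```

The retained operations would then supply the graph elements of the claimed key operations graph.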
With respect to claims 3 and 12, the limitations are directed towards determining a number of common graph elements between the key operations graph and a second key operations graph comprising a second plurality of graph elements corresponding to operations executed by the database management system to implement the second query, the storing of the indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent also being based at least in part on the number of common graph elements. These elements directed to determining a number of common graph elements between the key operations graph and a second key operations graph comprising a second plurality of graph elements corresponding to operations executed by the database management system to implement the second query further elaborate the abstract idea, and the human mind and/or a human with pen and paper can determine a number of common graph elements between the key operations graph and a second key operations graph comprising a second plurality of graph elements corresponding to operations executed by the database management system to implement the second query. The additional elements directed to the storing of the indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent also being based at least in part on the number of common graph elements merely confine the claim to a particular technological environment. Therefore, claims 3 and 12 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception. 
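A minimal sketch of the determination recited in claims 3 and 12, counting graph elements (modeled here as edges) shared by two key operations graphs; the graphs shown are hypothetical:

```python
# Hypothetical sketch: count the graph elements (modeled as directed
# edges, i.e., pairs of operation names) common to two key operations
# graphs represented as edge sets.

def common_element_count(graph_a, graph_b):
    """Each graph is represented as a set of directed edges."""
    return len(graph_a & graph_b)

first_graph = {("TableScan", "HashJoin"), ("HashJoin", "Sort"), ("Sort", "Project")}
second_graph = {("TableScan", "HashJoin"), ("HashJoin", "Sort"), ("Filter", "Project")}

shared = common_element_count(first_graph, second_graph)
print(shared)  # 2
```

An equivalence indication could then additionally be conditioned on this count, since the claims recite the stored indication being based at least in part on the number of common graph elements.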
With respect to claims 4 and 13, the limitations are directed towards the selecting of the first portion of the plurality of operations comprising ranking of the plurality of operations by execution time. These elements further elaborate the abstract idea, and the human mind and/or a human with pen and paper can rank the plurality of operations by execution time. Therefore, claims 4 and 13 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception.

With respect to claims 5 and 14, the limitations are directed towards the graph neural network being a Siamese graph neural network comprising a graph attention convolutional branch and a graph convolutional branch. These elements further elaborate the abstract idea, and the human mind and/or a human with pen and paper can use a Siamese graph neural network comprising a graph attention convolutional branch and a graph convolutional branch. Therefore, claims 5 and 14 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception.

With respect to claims 6 and 15, the limitations are directed towards the graph neural network output being based at least in part on a concatenation of the graph attention convolutional branch and an output of the graph convolutional branch. These additional elements are interpreted to merely confine the claim to a particular technological environment. Therefore, claims 6 and 15 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception. 
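The concatenation recited in claims 6 and 15 (combining the outputs of two network branches into one output) can be sketched in miniature. The tiny linear-plus-tanh "branches" below merely stand in for the claimed graph attention convolutional and graph convolutional branches; all weights and features are hypothetical:

```python
import math

# Hypothetical stand-ins for two network branches: each applies a tiny
# linear layer followed by tanh to every node's feature vector.

def branch(features, weight_columns):
    """features: list of per-node feature vectors; weight_columns: list
    of weight vectors, one per output dimension."""
    return [[math.tanh(sum(x * w for x, w in zip(row, col)))
             for col in weight_columns] for row in features]

node_features = [[0.1, 0.4], [0.9, -0.2], [0.5, 0.5]]  # 3 nodes, 2 features
attention_weights = [[1.0, 0.0], [0.0, 1.0]]           # 2 output dims
convolution_weights = [[0.5, 0.5], [-0.5, 0.5]]        # 2 output dims

att_out = branch(node_features, attention_weights)
conv_out = branch(node_features, convolution_weights)

# The network output concatenates the two branch outputs per node.
gnn_output = [a + c for a, c in zip(att_out, conv_out)]
print(len(gnn_output), len(gnn_output[0]))  # 3 4
```

Each node's output vector is simply the attention-branch vector followed by the convolution-branch vector, which is the shape of combination the claim language describes.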
With respect to claims 7 and 16, the limitations are directed towards the operations further comprising executing a fully connected neural network using the graph neural network output, the first query execution signature data also being based at least in part on an output of the fully connected neural network. These additional elements are interpreted to merely confine the claim to a particular technological environment. Therefore, claims 7 and 16 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception.

With respect to claims 8 and 17, the limitations are directed towards the comparing comprising generating a cosine similarity between the first query execution signature data and the second query execution signature data. These elements further elaborate the abstract idea, and the human mind and/or a human with pen and paper can compare by generating a cosine similarity between the first query execution signature data and the second query execution signature data. Therefore, claims 8 and 17 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception.

With respect to claims 9 and 18, the limitations are directed towards the operations further comprising executing a large language model based at least in part on the first performance data to generate a large language model output, the storing of the indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent also being based at least in part on the large language model output. These additional elements are interpreted to merely confine the claim to a particular technological environment. 
Therefore, claims 9 and 18 do not recite additional limitations which tie the abstract idea into a practical application and do not amount to significantly more than the identified judicial exception.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 2, 4, 7, 10, 11, 13, 16, 19, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Anand et al. (U.S. Publication No. US 20200356462 A1), hereinafter Anand, in view of Park et al. (U.S. Publication No. US 20240185033 A1), hereinafter Park.

As to claim 1: Anand discloses: A testing computing system for testing a database management system [Paragraph 0046 teaches some of the training data 408 may be used to initially train the model 418, and some may be held back as a validation subset 414. 
The portion of the training data 408 not including the validation subset 414 may be used to train the model 418, whereas the validation subset 418 may be held back and used to test the trained model 418 to verify that the model 418 is able to generalize its predictions to new data. Note: A computing apparatus (computing system) associated with validating database performance reads on the claims.], comprising: at least one processor programmed to perform operations comprising: accessing first performance data describing a plurality of operations executed by the database management system to implement a first query [Paragraph 0030 teaches logs raw information relating to database queries and accesses (e.g., the time and date at which a connection was established, what was searched for in a query and when, the originator of the query, etc.). Paragraph 0031 teaches the metrics… may include, for example, a number of queries 204… the rate of queries 206 over a period of time, the number of connections 208 in existence at a given time or over a given period of time, the size of the database 210 at a given time, the latency 212 in responding to queries. Note: Accessing performance metrics associated with a data retrieval method that includes the use of a query, wherein comparing performance metrics includes first and second performance metrics, and wherein what was searched for and when (plurality of executions) reads on the claims.]; executing a neural network using the first performance data to generate a neural network output [Paragraph 0047 teaches the training data 408 may be applied to train a model 418. Depending on the particular application, different types of models 418 may be suitable for use. 
For instance, in the depicted example, an artificial neural network (ANN) may be particularly well-suited to learning associations between performance metrics 410 and the database settings 412 that gave rise to the performance metrics 410.]; comparing the first query execution signature data to second query execution signature data describing execution of a second query at the database management system [Paragraph 0022 teaches queries or accesses to the databases may be logged… time stamps associated with the queries may be used to determine a rate at which queries to the database(s) are being made/processed… performance information may be used to determine performance data, which may be associated with relevant times keyed to the performance data. Paragraph 0044 teaches the database performance characteristics at the time t.sub.1; and a change in the performance characteristics at some time t.sub.2. Note: Comparing T1 and T2 describing two different query execution times (query execution signature data) reads on the claims.]; and based on the comparing, storing an indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent [Paragraph 0044 teaches the database performance characteristics at the time t.sub.1; and a change in the performance characteristics at some time t.sub.2. Paragraph 0065 and Figure 6C teach If not, then processing may proceed to block 662, where the system may receive a request (e.g., from an administrator, or programmatically from the administrator device). Figure 6C and Paragraph 0066 teaches where the system determines if the conditions have evolved so as to warrant an alert or notification. Note: Determining that conditions associated with query executions T1 and T2 have not evolved or changed (equivalent) reads on the claims.] 
Anand discloses most of the limitations as set forth in claim 1 but does not appear to expressly disclose executing a graph neural network to generate a graph neural network output, generating first query execution signature data describing the execution of the first query at the database management system, and the generating of the first query execution signature data being based at least in part on the graph neural network output. Park discloses: executing a graph neural network to generate a graph neural network output [Paragraph 0040 teaches the memory 130 may store a graph neural network training program 200 and data necessary to execute the graph neural network training program 200. Paragraph 0056 teaches the first graph neural network GNN1 may receive graph data G and generate first node embeddings NE1 which represents nodes in the graph data G as vectors, and the second graph neural network GNN1 may receive the graph data G and generate second node embeddings NE2 which represents the nodes in the graph data G as vectors.] generating first query execution signature data describing the execution of the first query at the database management system [Paragraph 0008 teaches determining a predetermined first number of neighbor nodes closest to the query node using a node embedding corresponding to the query node among the first node embeddings and node embeddings corresponding to other nodes in the training graph data among the second node embeddings. Paragraph 0056 and Fig. 3 teaches the first graph neural network GNN1 may receive graph data G and generate first node embeddings NE1 which represents nodes in the graph data G as vectors. Note: Generating first node embeddings indicative (describing) query node data in graph data, wherein the query is used (executed) in training the GNN.], the generating of the first query execution signature data being based at least in part on the graph neural network output [Paragraph 0043 teaches FIG. 
3 is a conceptual diagram showing operations performed by the graph neural network training program to train a graph neural network. Paragraph 0056 teaches the first graph neural network GNN1 may receive graph data G and generate first node embeddings NE1 which represents nodes in the graph data G as vectors.] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Anand, by incorporating utilizing a graph neural network to output data based on data describing queries (see Park Paragraph 0040, 0043, 0056, and Figure 3), because both applications are directed to neural network utilization; incorporating utilizing a graph neural network to output data based on data describing queries to improve the accuracy of training of the graph neural network (see Park Paragraph 0021). Claims 10 and 19 are similarly rejected because they are similar in scope. As to claim 2: Anand discloses: The testing computing system of claim 1, the operations further comprising: selecting a first portion of the plurality of operations having higher execution times than a second portion of the plurality of operations [Paragraph 0022 teaches the system may compare the time between when a query is made and when a reply is sent in order to determine a latency associated with the query. Paragraph 0030 teaches the performance metrics may be stored directly in the relational databases, which logs raw information relating to database queries and accesses (e.g., the time and date at which a connection was established, what was searched for in a query and when, the originator of the query, etc). Paragraph 0031 teaches a number of queries 204 in a given period of time (or since the last time the performance metrics were checked, or since the beginning of tracking of the performance metrics). 
Paragraph 0044 teaches the database performance characteristics at the time t.sub.1; and a change in the performance characteristics at some time t.sub.2 a sufficient time after t.sub.1. Note: A change in latency (execution times) between query execution t1 and t2, wherein the latency for one of the query execution times is interpreted to be higher or lower than the other, reads on the claims.]; and Anand and Park disclose all of the limitations as set forth in claim 1. Park also discloses: generating a key operations graph, the key operations graph comprising a plurality of graph elements corresponding to the first portion of the plurality of operations, the executing of the graph neural network being based at least in part on the key operations graph [Paragraph 0040 teaches the memory 130 may store a graph neural network training program 200 and data necessary to execute the graph neural network training program 200. Paragraph 0056 teaches the first graph neural network GNN1 may receive graph data G and generate first node embeddings NE1 which represents nodes in the graph data G as vectors, and the second graph neural network GNN1 may receive the graph data G and generate second node embeddings NE2 which represents the nodes in the graph data G as vectors.] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Anand, by incorporating utilizing a graph neural network to output data based on data describing queries (see Park Paragraph 0040, 0043, 0056, and Figure 3), because both applications are directed to neural network utilization; incorporating utilizing a graph neural network to output data based on data describing queries to improve the accuracy of training of the graph neural network (see Park Paragraph 0021). Claims 11 and 20 are similarly rejected because they are similar in scope. 
As to claim 4: Anand discloses: The testing computing system of claim 2, the selecting of the first portion of the plurality of operations comprising ranking of the plurality of operations by execution time [Paragraph 0022 teaches the system may compare the time between when a query is made and when a reply is sent in order to determine a latency associated with the query. Paragraph 0030 teaches the performance metrics may be stored directly in the relational databases, which logs raw information relating to database queries and accesses (e.g., the time and date at which a connection was established, what was searched for in a query and when, the originator of the query, etc.). Paragraph 0031 teaches a number of queries 204 in a given period of time (or since the last time the performance metrics were checked, or since the beginning of tracking of the performance metrics). Paragraph 0044 teaches the database performance characteristics at the time t.sub.1; and a change in the performance characteristics at some time t.sub.2 a sufficient time after t.sub.1. Note: A change in latency (execution times) between query execution t1 and t2, wherein the latency for one of the query execution times is interpreted to be higher or lower than the other, reads on the claims.] Claim 13 is similarly rejected because it is similar in scope. As to claim 7: Anand and Park disclose all of the limitations as set forth in claim 1. 
Park also discloses: The testing computing system of claim 1, the operations further comprising executing a fully connected neural network using the graph neural network output, the first query execution signature data also being based at least in part on an output of the fully connected neural network [Paragraph 0008 teaches determining a predetermined first number of neighbor nodes closest to the query node using a node embedding corresponding to the query node among the first node embeddings and node embeddings corresponding to other nodes in the training graph data among the second node embeddings. Paragraph 0048 and Figure 3 teach the real positive determination unit 210 may determine neighbor nodes using a k-NN algorithm. Paragraph 0051 and Figure 3 teach the real positive determination unit 210 may cluster all nodes into the second number of clusters using a k-means clustering algorithm. Paragraph 0056 teaches the first graph neural network GNN1 may receive graph data G and generate first node embeddings NE1 which represents nodes in the graph data G as vectors, and the second graph neural network GNN1 may receive the graph data G and generate second node embeddings NE2 which represents the nodes in the graph data G as vectors. Note: Executing a K-NN or K-nearest Neural Network (fully connected neural network) that is used to create a plurality of clusters associated with query nodes reads on the claims.] 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Anand, by incorporating utilizing a graph neural network to output data based on data describing queries (see Park Paragraph 0040, 0043, 0056, and Figure 3), because both applications are directed to neural network utilization; incorporating utilizing a graph neural network to output data based on data describing queries to improve the accuracy of training of the graph neural network (see Park Paragraph 0021). Claim 16 is similarly rejected because it is similar in scope. Claim(s) 3 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Anand et al. (U.S. Publication No.: US 20200356462 A1) hereinafter Anand, in view of Park et al. (U.S. Publication No.: US 20240185033 A1) hereinafter Park, and further in view of Shveidel et al. (U.S. Publication No.: US 20200034277 A1) hereinafter Shveidel. As to claim 3: Anand discloses: The testing computing system of claim 2, the operations further comprising operations executed by the database management system to implement the second query [Paragraph 0022 teaches queries or accesses to the databases may be logged… time stamps associated with the queries may be used to determine a rate at which queries to the database(s) are being made/processed… performance information may be used to determine performance data, which may be associated with relevant times keyed to the performance data. Paragraph 0044 teaches the database performance characteristics at the time t.sub.1; and a change in the performance characteristics at some time t.sub.2. Note: Comparing T1 and T2 describing two different query execution times (query execution signature data) reads on the claims.] 
Anand and Park disclose most of the limitations as set forth in claim 1 but do not appear to expressly disclose determining a number of common graph elements between the key operations graph and a second key operations graph comprising a second plurality of graph elements corresponding to operations executed by the database management system to implement the second query, the storing of the indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent also being based at least in part on the number of common graph elements.

Shveidel discloses: determining a number of common graph elements between the key operations graph and a second key operations graph comprising a second plurality of graph elements [Paragraph 0034 teaches monitoring performance of tasks using directed-graphs (diagrams). The performance data may be collected in one or more points-of-interest into performance data containers. Performance data containers may be presented as nodes and edges of the directed-graph related to a specific task. Paragraph 0074 teaches if the first graph includes five edges that connect a pair of first nodes, and the second graph includes six edges that connect a pair of second nodes that correspond to the pair of first nodes, the sixth edge in the second graph may be regarded as one that does not have a matching counterpart in the first graph.]
corresponding to operations executed by the database management system [Paragraph 0003 teaches identifying a first subset of the first set of performance data, the first subset corresponding to an execution of one or more first thread instances, the first thread instances being instantiated using the first set of files.], the storing of the indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent also being based at least in part on the number of common graph elements [Paragraph 0039 teaches may employ performance counters to collect data for each directed-graph node, and the performance counters may include counters for accumulating a number of accesses, accumulating a number of requested units (for cases when a single access contains a batch of requested units (e.g., data blocks)). Paragraph 0074 teaches if the first graph includes five edges that connect a pair of first nodes, and the second graph includes six edges that connect a pair of second nodes that correspond to the pair of first nodes, the sixth edge in the second graph may be regarded as one that does not have a matching counterpart in the first graph. Note: Performance data for requests for data (queries) stored in two different graphs, wherein both graphs are analyzed to count similarities between the graphs and similarities between query performance data reads on the claims.] 
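The mapped limitation, counting the graph elements two key operations graphs share and treating the count as evidence of equivalent executions, can be illustrated with a minimal sketch in which each graph is modeled as a set of labeled edges. The operation names and the threshold are hypothetical, invented for illustration only.

```python
# Each "key operations graph" is modeled as a set of directed edges
# (operation_from, operation_to); the names are illustrative, not from
# any cited reference.
graph_a = {("scan", "filter"), ("filter", "join"), ("join", "project")}
graph_b = {("scan", "filter"), ("filter", "join"), ("join", "sort")}

common = graph_a & graph_b      # graph elements present in both graphs
print(len(common))              # 2

# Assumed decision rule: executions are deemed equivalent when the
# number of common elements meets a (hypothetical) threshold.
THRESHOLD = 2
equivalent = len(common) >= THRESHOLD
print(equivalent)               # True
```

A real key operations graph would also carry node attributes and edge multiplicities; this sketch keeps only the counting idea.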
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Anand and Park by incorporating performance data for requests for data (queries) stored in two different graphs, wherein both graphs are analyzed to count similarities between the graphs and similarities between query performance data (see Shveidel Paragraphs 0003, 0034, 0039, and 0074), because the three applications are directed to data analysis; incorporating performance data for requests for data (queries) stored in two different graphs, wherein both graphs are analyzed to count similarities between the graphs and similarities between query performance data, improves efficiency and/or removes software bugs that have caused increased resource consumption and/or degradation in system performance (see Shveidel Paragraph 0074). Claim 12 is similarly rejected because it is similar in scope.

Claim(s) 5, 6, 14, and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Anand et al. (U.S. Publication No. US 20200356462 A1), hereinafter Anand, in view of Park et al. (U.S. Publication No. US 20240185033 A1), hereinafter Park, and further in view of Fan et al. (Graph Neural Networks for Social Recommendation, 2019 World Wide Web Conference, 2019, pp. 417-426), hereinafter Fan.

As to claim 5: Anand and Park disclose all of the limitations as set forth in claim 1 but do not appear to expressly disclose the graph neural network being a Siamese graph neural network comprising a graph attention convolutional branch and a graph convolutional branch. Fan discloses: The testing computing system of claim 1, the graph neural network being a Siamese graph neural network [2.2 An Overview of the Proposed Framework teaches the architecture of the proposed model is shown in Figure 2.
The model consists of three components: user modeling, item modeling, and rating prediction. Note: Utilizing user modeling and item modeling (Siamese graph) to produce output in a graph neural network as shown in Figure 2 reads on the claims.] comprising a graph attention convolutional branch [2.2 An Overview of the Proposed Framework teaches the architecture of the proposed model is shown in Figure 2. The model consists of three components: user modeling, item modeling, and rating prediction. 2.3 User Modeling – Social Aggregation teaches we perform an attention mechanism with a two-layer neural network to extract these users that are important to influence ui, and model their tie strengths. Note: Utilizing a graph as shown in Figure 2, attached to a model whose components include user modeling (graph attention convolutional branch), wherein the user modeling incorporates an attention mechanism and aggregation and is interpreted to be convolutional, reads on the claims.] and a graph convolutional branch [2.2 An Overview of the Proposed Framework teaches the architecture of the proposed model is shown in Figure 2. The model consists of three components: user modeling, item modeling, and rating prediction. 2.3 User Modeling – Item Aggregation teaches one popular aggregation function for Aggreitems is the mean operator where we take the element-wise mean of the vectors in {xia, ∀a ∈ C(i)}. This mean-based aggregator is a linear approximation of a localized spectral convolution [15]. Note: Utilizing a graph as shown in Figure 2, attached to a model whose components include item modeling (graph convolutional branch), wherein the item modeling incorporates aggregation, which is interpreted to be convolutional, reads on the claims.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Anand and Park by incorporating utilization of user modeling and item modeling (Siamese graph) to produce output in a graph neural network as shown in Figure 2 (see Fan Figure 2, An Overview of the Proposed Framework, 2.3 User Modeling – Item Aggregation, and 2.3 User Modeling – Social Aggregation), because the three publications are directed to data analysis; utilizing user modeling and item modeling (Siamese graph) to produce output in a graph neural network as shown in Figure 2 improves performance (see Fan 3.3 Model Analysis – Opinions in Interaction). Claim 14 is similarly rejected because it is similar in scope.

As to claim 6: Anand, Park, and Fan disclose all of the limitations as set forth in claims 1 and 5. Fan also discloses: The testing computing system of claim 5, the graph neural network output being based at least in part on a concatenation of the graph attention convolutional branch and an output of the graph convolutional branch [2.2 An Overview of the Proposed Framework teaches it is intuitive to obtain user latent factors by combining information from both item space and social space. Table 1: Notation teaches ⊕ the concatenation operator of two vectors. 2.5 Rating Prediction teaches we apply the proposed GraphRec model for the recommendation task of rating prediction. With the latent factors of users and items (i.e., hi and zj), we can first concatenate them hi ⊕ zj. Note: Concatenating output from both subgraphs or branches of the GraphRec model as shown in Figure 2 reads on the claim.]
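The two-branch structure Fan describes, attention-weighted aggregation in one branch and mean-based (convolution-like) aggregation in the other, with the branch outputs concatenated (hi ⊕ zj), can be sketched roughly as follows. The function names, toy feature vectors, and dot-product attention scoring are assumptions for illustration, not Fan's exact formulation.

```python
import numpy as np

def mean_branch(neighbor_feats):
    # Mean aggregation: Fan notes this is a linear approximation of a
    # localized spectral convolution.
    return neighbor_feats.mean(axis=0)

def attention_branch(neighbor_feats, query_feat):
    # Attention aggregation: weight neighbors by (softmaxed) similarity
    # to the query node's features.
    scores = neighbor_feats @ query_feat
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ neighbor_feats

# Toy neighbor features and query features (hypothetical values).
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
query = np.array([1.0, 0.0])

# Concatenate the two branch outputs, analogous to hi ⊕ zj.
output = np.concatenate([attention_branch(feats, query), mean_branch(feats)])
print(output.shape)  # (4,)
```

A Siamese arrangement in the full sense would also share or pair weights between branches; this sketch keeps only the two-branch-plus-concatenation shape.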
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Anand and Park by incorporating utilization of user modeling and item modeling (Siamese graph) to produce output in a graph neural network as shown in Figure 2 (see Fan Figure 2, An Overview of the Proposed Framework, 2.3 User Modeling – Item Aggregation, 2.3 User Modeling – Social Aggregation, and 2.5 Rating Prediction), because the three publications are directed to data analysis; utilizing user modeling and item modeling (Siamese graph) to produce output in a graph neural network as shown in Figure 2 improves performance (see Fan 3.3 Model Analysis – Opinions in Interaction). Claim 15 is similarly rejected because it is similar in scope.

Claim(s) 8 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Anand et al. (U.S. Publication No. US 20200356462 A1), hereinafter Anand, in view of Park et al. (U.S. Publication No. US 20240185033 A1), hereinafter Park, and further in view of Safronov et al. (U.S. Publication No. US 20200192961 A1), hereinafter Safronov.

As to claim 8: Anand and Park disclose all of the limitations as set forth in claim 1 but do not appear to expressly disclose generating a cosine similarity between the first query execution signature data and the second query execution signature data. Safronov discloses: The testing computing system of claim 1, the comparing comprising generating a cosine similarity between the first query execution signature data and the second query execution signature data [Paragraph 0136 teaches the first similarity parameter 442 may be generated by determining a cosine similarity between the first query vector 444 and the second query vector 446.]
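The cosine-similarity comparison Safronov maps onto the claim is a standard vector operation; a minimal sketch follows. The signature vectors here are hypothetical values, not data from any cited reference.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two signature vectors: 1.0 means the
    same direction (similar executions), 0.0 means orthogonal."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

first_signature = np.array([0.2, 0.8, 0.5])   # hypothetical signature vectors
second_signature = np.array([0.4, 1.6, 1.0])  # a scaled copy of the first
print(cosine_similarity(first_signature, second_signature))  # 1.0, up to rounding
```

Because cosine similarity ignores magnitude, two executions whose signature vectors differ only by overall scale compare as identical, which is the usual motivation for choosing it over Euclidean distance.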
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Anand and Park by incorporating determining a cosine similarity between the first query vector 444 and the second query vector 446 (see Safronov Paragraph 0136), because the three publications are directed to data analysis; determining a cosine similarity between the first query vector 444 and the second query vector 446 improves the quality of data analysis (see Safronov Paragraph 0012). Claim 17 is similarly rejected because it is similar in scope.

Claim(s) 9 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Anand et al. (U.S. Publication No. US 20200356462 A1), hereinafter Anand, in view of Park et al. (U.S. Publication No. US 20240185033 A1), hereinafter Park, in view of Preston et al. (U.S. Publication No. US 20250233802 A1), hereinafter Preston, and further in view of Kierzyk (U.S. Publication No. US 20250190456 A1), hereinafter Kierzyk.

As to claim 9: Anand and Park disclose all of the limitations as set forth in claim 1 but do not appear to expressly disclose executing a large language model based at least in part on the first performance data to generate a large language model output, the storing of the indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent also being based at least in part on the large language model output. Preston discloses: The testing computing system of claim 1, the operations further comprising executing a large language model based at least in part on the first performance data to generate a large language model output [Paragraph 0051 teaches inputting a completed meta-prompt 210 and performance metrics from a chat-based database 220 into a large language model (LLM 240).
Paragraph 0054 teaches the chat-based database 220 stores the content performance metrics as vectors or embeddings, which are then input into the LLM 240. Paragraph 0057 teaches the meta-prompt construction includes queries to the backend (the chat-based database 220) for the most up-to-date versions of the support content.] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Anand and Park by incorporating inputting a completed meta-prompt 210 and performance metrics from a chat-based database 220 into a large language model (see Preston Paragraphs 0051, 0054, and 0057), because the three publications are directed to data analysis; inputting a completed meta-prompt 210 and performance metrics from a chat-based database 220 into a large language model improves performance metrics (see Preston Paragraph 0066).

Anand, Park, and Preston disclose all of the limitations as set forth in claim 1 and some of claim 9 but do not appear to expressly disclose the storing of the indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent also being based at least in part on the large language model output. Kierzyk discloses: the storing of the indication that the execution of the first query at the database management system and the execution of the second query at the database management system are equivalent also being based at least in part on the large language model output [Paragraph 0064 teaches the electronic processor 200 may transmit (or otherwise provide) the outputs from the first LLM query and the second LLM query to the embedding server 110.
The embedding server 110 may generate, using the embedding model(s) 155, the corresponding LLM query embeddings (e.g., the first LLM query embedding, the second LLM query embedding). Paragraph 0065 teaches after generating the LLM query embeddings (e.g., the first LLM query embedding and the second LLM query embedding), the electronic processor 200 may determine a similarity metric between the LLM query embeddings (at block 535). As used herein, the similarity metric may represent a degree of similarity between embeddings or vectors. Note: Using the output of LLM queries (execution of first and second queries) to generate embeddings based on the output, and determining a similarity metric based on those embeddings, reads on the claims.] It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Anand, Park, and Preston by incorporating using the output of LLM queries (execution of first and second queries) to generate embeddings based on the output, and determining a similarity metric based on those embeddings (see Kierzyk Paragraph 0064), because the publications are directed to data analysis; incorporating using the output of LLM queries (execution of first and second queries) to generate embeddings based on the output, and determining a similarity metric based on those embeddings, provides a technical solution (see Kierzyk Paragraph 0005). Claim 18 is similarly rejected because it is similar in scope.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EARL LEVI ELIAS, whose telephone number is (571) 272-9762. The examiner can normally be reached Monday - Friday (IFP). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sherief Badawi, can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EARL LEVI ELIAS/
Examiner, Art Unit 2169

/SHERIEF BADAWI/
Supervisory Patent Examiner, Art Unit 2169

Prosecution Timeline

May 10, 2024
Application Filed
Feb 03, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591699: ESTABLISHING COMMUNICATION STREAM WITH DATABASE CONTROL AGENT OVER WHICH DATABASE COMMANDS ARE DISPATCHED FOR EXECUTION AGAINST DATABASE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12572538: UNIFIED QUERY OPTIMIZATION FOR SCALE-OUT QUERY PROCESSING (granted Mar 10, 2026; 2y 5m to grant)
Patent 12547903: HUMAN-COMPUTER INTERACTION METHOD AND APPARATUS, STORAGE MEDIUM AND ELECTRONIC DEVICE (granted Feb 10, 2026; 2y 5m to grant)
Patent 12511267: METHOD AND SYSTEM FOR CREATING AND REMOVING A TEMPORARY SUBUSER OF A COMPUTING DEVICE (granted Dec 30, 2025; 2y 5m to grant)
Patent 12493645: TAGGING TELECOMMUNICATION INTERACTIONS (granted Dec 09, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 57%
With Interview: 80% (+23.5%)
Median Time to Grant: 3y 5m
PTA Risk: Low

Based on 99 resolved cases by this examiner. Grant probability derived from career allow rate.
