Prosecution Insights
Last updated: April 19, 2026
Application No. 18/887,463

DYNAMIC ANALYTICAL MODEL OPTIMIZATIONS

Non-Final OA §101 §103
Filed
Sep 17, 2024
Examiner
ROSTAMI, MOHAMMAD S
Art Unit
2154
Tech Center
2100 — Computer Architecture & Software
Assignee
SAP SE
OA Round
1 (Non-Final)
67%
Grant Probability
Favorable
1-2
OA Rounds
3y 10m
To Grant
93%
With Interview

Examiner Intelligence

Grants 67% — above average
67%
Career Allow Rate
425 granted / 635 resolved
+11.9% vs TC avg
Strong +26% interview lift
+26.3%
Interview Lift
resolved cases with interview
Typical timeline
3y 10m
Avg Prosecution
37 currently pending
Career history
672
Total Applications
across all art units

Statute-Specific Performance

§101
21.3%
-18.7% vs TC avg
§103
54.9%
+14.9% vs TC avg
§102
9.7%
-30.3% vs TC avg
§112
4.4%
-35.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 635 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are pending, of which claims 1, 8, and 15 are in independent form. Claims 1-20 are rejected under 35 U.S.C. 101 (abstract idea). Claims 1-20 are rejected under 35 U.S.C. 103.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claim(s) recite(s) text/summary generation using LLM models enhanced by a retrieval augmented generation (RAG) system. With respect to step 1 of the patent subject matter eligibility analysis, the claims are directed to a process, machine, manufacture, or composition of matter. Independent claim 1 is directed to a system which includes a memory and one or more processing devices, which is directed to one of the four statutory subject matters. Independent claim 8 is directed to a method, which is a process. Independent claim 15 is directed to a non-transitory machine-readable medium, which is directed to one of the four statutory subject matters. All other claims depend on claims 1, 8, and 15. As such, claims 1-20 are directed to a statutory category. Regarding claims 1, 8, and 15: With respect to step 2A, prong one (Judicial Exception), the claims recite an abstract idea, law of nature, or natural phenomenon. Specifically, the following limitations recite mathematical concepts and/or mental processes and/or certain methods of organizing human activity.
The claim recites a sequence of operations that amounts to information organization, searching, evaluation, and ranking directed to an abstract idea: receiving a call to a database; identifying a data model including joins of data structures; generating a hash of the data model; determining whether the hash matches a stored hash; identifying a database engine; generating engine-level optimizations based on predicted data volume and entity type; and processing the call using the optimization. These steps fall into recognized abstract idea groupings. Mental process: identifying a data model; determining whether hashes match; selecting engines; generating optimizations. Mathematical concept/algorithm: hash functions. These operations correspond to: hash function, evaluation, comparison, and decision making, performed on generic off-the-shelf technology. There are no steps performed that provide a technical improvement to the computing system itself. Thus, the claims recite an abstract idea (mental process/mathematical concepts/information organization). With respect to step 2A, Prong Two (Particular Application), the claims do not recite additional elements that integrate the judicial exception into a practical application. The following limitations are considered "additional elements," and explanation will be given as to why these "additional elements" do not integrate the judicial exception into a practical application. The claims add: a hardware processor; a non-transitory machine-readable medium; a database. The claims do not improve: database engine architecture, hashing technology, the query execution mechanism, or computer functionality itself. Instead, the claims merely analyze a data model, compare hashes, generate optimization decisions, and apply the results to a database engine. These components merely use conventional computer components as tools to execute the abstract idea.
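For context, the sequence of operations that the examiner characterizes as abstract can be sketched roughly as follows. This is a minimal illustration only, not the applicant's implementation: every name (`process_call`, `OPTIMIZATION_STORE`, the 100,000-row volume threshold) is invented here for illustration and appears in neither the claims nor the cited references.

```python
import hashlib
import json

# Hypothetical optimization data store: data-model hash -> per-engine optimizations.
OPTIMIZATION_STORE: dict[str, dict[str, str]] = {}


def hash_data_model(data_model: dict) -> str:
    """Generate a stable hash of the data model (its joins of data structures)."""
    canonical = json.dumps(data_model, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()


def generate_engine_optimization(engine: str, predicted_volume: int, entity_type: str) -> str:
    """Placeholder per-engine optimization keyed on predicted data volume and entity type."""
    strategy = "hash-join" if predicted_volume > 100_000 else "nested-loop"
    return f"{engine}:{strategy}:{entity_type}"


def process_call(data_model: dict, engines: list[str],
                 predicted_volume: int, entity_type: str) -> dict[str, str]:
    """Hash the data model; on a store miss, generate and cache per-engine optimizations."""
    key = hash_data_model(data_model)
    if key not in OPTIMIZATION_STORE:
        # Miss: generate a different engine-level optimization for each engine.
        OPTIMIZATION_STORE[key] = {
            engine: generate_engine_optimization(engine, predicted_volume, entity_type)
            for engine in engines
        }
    # Hit: reuse the previously generated optimizations for this data model.
    return OPTIMIZATION_STORE[key]
```

The sketch makes the examiner's characterization concrete: each step (hashing, lookup, rule-based generation) is a generic computation with no engine-internal change.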
This is simply using computers as tools to perform data analysis and optimization (which is abstract as explained above). The limitations fail to transform the exception into a practical application. There are also no technical improvements such as: a new embedding technique; a new retrieval algorithm; an improved NN architecture; an improvement to computer processing. Instead, the claims recite conventional and generic computer functions performed in a routine manner (optimizing queries and database queries), which does not amount to a practical application. With respect to Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recited components are merely generic computer/database elements performing their routine, well-understood, and conventional functions. See Alice; MPEP 2106.05(d). The steps mentioned in the independent claims merely constitute: generic processors, generic database systems, generic data storage, generic hash functions. These components perform well-understood, routine, conventional functions, such as: hashing data, comparing values, selecting processing engines, executing database queries. There is no unconventional database architecture, hashing mechanism, or query processing improvement. Courts have consistently held that such high-level information management operations are conventional. The claims provide no new algorithm, architecture, data structure, specialized hardware, and/or technical improvements. All are routine, conventional operations and business/marketplace logic. Considering the claims as a whole, the ordered combination of elements also reflects nothing more than the typical workflow of distributed systems, and therefore DOES NOT add "significantly more" than the abstract idea.
Such generic, high-level, and nominal involvement of a computer or computer-based elements for carrying out the invention merely serves to tie the abstract idea to a particular technological environment, which is not enough to render the claims patent-eligible, as noted at pg. 74624 of Federal Register/Vol. 79, No. 241, citing Alice, which in turn cites Mayo. Further, see, e.g., Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2359-60, 110 USPQ2d 1976, 1984 (2014). See also OIP Techs. v. Amazon.com, 788 F.3d 1359, 1364, 115 USPQ2d 1090, 1093-94 (Fed. Cir. 2015) ("Just as Diehr could not save the claims in Alice, which were directed to 'implement[ing] the abstract idea of intermediated settlement on a generic computer', it cannot save OIP's claims directed to implementing the abstract idea of price optimization on a generic computer.") (citations omitted). See also Affinity Labs of Texas LLC v. DirecTV LLC, 838 F.3d 1253, 1257-1258 (Fed. Cir. 2016) (mere recitation of a GUI does not make a claim patent-eligible); Intellectual Ventures I LLC v. Capital One Bank, 792 F.3d 1363, 1370 (Fed. Cir. 2015) ("the interactive interface limitation is a generic computer element"). The additional elements are broadly applied to the abstract idea at a high level of generality ("similar to how the recitation of the computer in the claims in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer," as explained in MPEP § 2106.05(f)) and they operate in a well-understood, routine, and conventional manner. MPEP § 2106.05(d)(II) sets forth the following: The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity:
• Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec ...; TLI Communications LLC v. AV Auto. LLC ...; OIP Techs., Inc. v. Amazon.com, Inc. ...; buySAFE, Inc. v. Google, Inc. ...;
• Performing repetitive calculations, Flook ...; Bancorp Services v. Sun Life ...;
• Electronic recordkeeping, Alice Corp. ...; Ultramercial ...;
• Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc. ...;
• Electronically scanning or extracting data from a physical document, Content Extraction and Transmission, LLC v. Wells Fargo Bank ...; and
• A web browser's back and forward button functionality, Internet Patents Corp. v. Active Network, Inc. ... .
Courts have held computer-implemented processes not to be significantly more than an abstract idea (and thus ineligible) where the claim as a whole amounts to nothing more than generic computer functions merely used to implement an abstract idea, such as an idea that could be done by a human analog (i.e., by hand or by merely thinking). In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present as when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide conventional computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a practical application of the abstract idea, nor does the ordered combination amount to significantly more than the abstract idea itself.
The dependent claims have been fully considered as well; however, similar to the findings for the claims above, these claims are similarly directed to the "Mental Processes" grouping of abstract ideas set forth in the 2019 PEG, without integrating it into a practical application and with, at most, a general purpose computer that serves to tie the idea to a particular technological environment, which does not add significantly more to the claims. The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present as when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to significantly more than the abstract idea. Looking at the claim as a whole does not change this conclusion, and the claim is ineligible. Regarding claims 2, 9, and 16 (Messaging/Requesting Optimization): The claim recites sending a message to the first entity requesting a separate engine-level optimization, where the message includes the hash. These limitations merely involve transmitting information between entities regarding the optimization parameters. Sending a request message containing a hash value is a data communication activity that does not change how the database or processor operates (insignificant extra-solution activity). These claims do not: improve networking technology, improve database engine structure, improve the hashing technique, or introduce a new protocol or communication mechanism. The claims simply gather or transmit information used in the abstract optimization process. This does not change the nature of the abstract idea.
It does not add a technical improvement to an abstract idea, such as improving computer functionality, data structure, or processing architecture. There is no practical application and no inventive step; the claims are still considered abstract. Regarding claims 3, 10, and 17 (Query Construction/Integration of Optimizations): The claim recites stitching each engine-level optimization into an information access query sent to the database. These limitations merely incorporate optimization parameters into a query structure before execution. Constructing or modifying a database query based on selected parameters is a routine data processing operation commonly performed by query optimizers (Data Manipulation/Data Processing). These claims do not specify: a new execution architecture, a new query language, an improved data structure, or a new database processing mechanism. The claims simply insert optimization instructions into a query, which is a conventional database operation. This does not change the nature of the abstract idea. It does not add a technical improvement to an abstract idea, such as improving computer functionality, data structure, or processing architecture. There is no practical application and no inventive step; the claims are still considered abstract. Regarding claims 4, 11, and 18 (Prioritizing Optimization Types): The claim recites determining whether a server-level database optimization exists and prioritizing engine-level optimization when conflicts occur. These limitations merely involve evaluating two sets of optimization rules and selecting which one should be prioritized. Such rule-based decision making corresponds to evaluating conditions and selecting between alternatives (Mental Process). These claims do not specify: how the conflict resolution improves database execution, a specific algorithm for resolving conflicts, or any new database architecture.
The claims simply apply prioritized rules, which is a common logical operation. This does not change the nature of the abstract idea. It does not add a technical improvement to an abstract idea, such as improving computer functionality, data structure, or processing architecture. There is no practical application and no inventive step; the claims are still considered abstract. Regarding claims 5, 12, and 19 (Particular Database Environment): The claim recites that the database is an in-memory database. Simply limiting the environment to an in-memory database does not change the nature of the claimed operation. The claims still perform the same abstract operations: hashing the data model, comparing hashes, generating an optimization decision (field-of-use). Specifying a particular computing environment is generally considered a field-of-use limitation and does not transform the abstract idea into patent-eligible subject matter. This does not change the nature of the abstract idea. It does not add a technical improvement to an abstract idea, such as improving computer functionality, data structure, or processing architecture. There is no practical application and no inventive step; the claims are still considered abstract. Regarding claims 6, 13, and 20 (ML Optimization Generation): The claim recites passing the hash and entity indications to a machine learning model trained by a machine learning algorithm to generate optimizations. These limitations merely apply a generic ML model. Using machine learning to perform data analysis or predictions does not itself confer eligibility when the model is used for abstract decision making (Mental Process Implemented Using ML Tools). These claims do not specify: the structure of the model, the training technique, the model architecture, or how the ML improves database engine operation. Instead, the model is used as a black-box decision tool for generating optimization recommendations.
Thus, the claims merely use machine learning as a tool to perform the abstract optimization analysis. This does not change the nature of the abstract idea. It does not add a technical improvement to an abstract idea, such as improving computer functionality, data structure, or processing architecture. There is no practical application and no inventive step; the claims are still considered abstract. Regarding claims 7 and 14 (Reusing Previously Generated Optimization): The claim recites retrieving previously used engine-level database optimizations associated with a stored hash and applying them. These limitations merely involve retrieving stored information and reusing previously determined optimization parameters. Storing and retrieving previously generated data corresponds to data storage and retrieval operations, which are conventional computer functions (Data Retrieval and Reprocessing). These claims do not describe: a new caching architecture, an improved memory architecture, or an improved database execution mechanism. The claims simply use previously stored optimization data. This does not change the nature of the abstract idea. It does not add a technical improvement to an abstract idea, such as improving computer functionality, data structure, or processing architecture. There is no practical application and no inventive step; the claims are still considered abstract.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Noh; Jaeyun et al. (US 20180349404 A1) [Noh] in view of Chadha; Karan et al. (US 12277117 B1) [Chadha] in view of Beitchman; Marc Howard et al. (US 11055352 B1) [Beitchman]. Regarding claims 1, 8, and 15, Noh discloses a system comprising: at least one hardware processor; and a computer-readable medium storing instructions that, when executed by the at least one hardware processor, cause the at least one hardware processor to perform operations (see Fig. 11) comprising: receiving a call to a database from a first entity (The queries provided by different users can include one or more same data entity identifiers, but the entity identifiers can resolve to different data entities that are associated with the different users. In some cases, the different database users may be associated with different numbers of data entity records for a data entity targeted by the query ¶ [0017]. In at least some embodiments, the query processor 122 can be configured to use the database query .... In such an embodiment, the query processor 122 can be configured to generate a hash of the database query 142 and to search the shared execution plan cache 110 for a hashed database query that matches the generated hash of the database query 142 ¶ [0025]); determining whether the hash of the data model matches a hash stored in an optimization data store (In a particular embodiment, the one or more pre-compiled execution plans are associated with hashes of the one or more database queries. In such an embodiment, the query processor 122 can be configured to generate a hash of the database query 142 and to search the shared execution plan cache 110 for a hashed database query that matches the generated hash of the database query 142 ¶ [0025].
Also see ¶ [0060], [0061], and [0070]); in response to a determination that the hash of the data model does not match a hash stored in the optimization data store (In at least some embodiments, the query processor 122 can be configured to use the database query 142 to retrieve the pre-compiled execution plan 112 from the shared execution plan cache 110. The shared execution plan cache 110 can comprise one or more pre-compiled execution plans. The one or more pre-compiled execution plans can be associated in the shared execution plan cache with one or more database queries that were previously used to generate the one or more pre-compiled execution plans. For example, the one or more database queries can be stored in the shared execution plan cache 110 as look-up keys that can be used to retrieve pre-compiled execution plans associated with the database queries. The query processor 122 can be configured to compare the database query 142 to the one or more database queries in the shared execution plan cache 110 and to retrieve one of the one or more pre-compiled execution plans associated with a query that matches the database query 142. In the example scenario depicted in FIG. 1, the query processor 122 matches the database query 142 with a database query (not shown) associated with the pre-compiled execution plan 112. In a particular embodiment, the one or more pre-compiled execution plans are associated with hashes of the one or more database queries. In such an embodiment, the query processor 122 can be configured to generate a hash of the database query 142 and to search the shared execution plan cache 110 for a hashed database query that matches the generated hash of the database query 142 ¶ [0025]. Also see ¶ [0060], [0061]. At 350, the multi-user execution plan cache is searched with the compiled execution plan. In at least some embodiments, a database server can comprise multiple query processors capable of processing separate database queries in parallel. 
For example, the multiple query processors can comprise one or more different processes and/or one or more different threads in a multithreaded execution environment. In such an embodiment, it is possible for one query processor to search the multi-user execution plan cache, not find an execution plan associated with a received query, and proceed to compile an execution plan for the received query. Meanwhile, a separate query processor may complete compilation of an execution plan for the same query (received as part of a separate database query request), and add the compiled execution plan to the multi-user execution plan cache ¶ [0074]-[0079]. Examiner specifies that "does not match a hash" follows the same logic as compiling an execution plan for the query and storing the plan in the cache; thus, the system performs additional operations in response to determining that the generated hash does not match a stored hash). However, Noh does not explicitly facilitate identifying a data model to handle the call, the data model identifying a joining of data structures in the database; using a hash function to generate a hash of the data model; and processing the call using the data model by applying each different engine-level optimization to a corresponding database engine. Chadha discloses identifying a data model to handle the call, the data model identifying a joining of data structures in the database (relational join is a data processing operation in a relational data management system [col. 10, ll. 5-22]); using a hash function to generate a hash of the data model (As discussed further below, stored plan cache 414 is a local in-memory cache (e.g., provided by a given compute service manager instance), which, in an embodiment, includes additional capabilities to improve latency, throughput and reduce cost for at least OLTP style workloads.
In an example, query coordinator 250 compiles the query plan once for a certain query hash, stores the compiled plan in stored plan cache 414, and reuses the stored plan for subsequent executions. Thus, stored plan cache 414 is a performance sensitive feature [col. 25, ll. 60-col. 26, ll. 2]); processing the call using the data model by applying each different engine-level optimization to a corresponding database engine (As discussed further below, stored plan cache 414 is a local in-memory cache (e.g., provided by a given compute service manager instance), which, in an embodiment, includes additional capabilities to improve latency, throughput and reduce cost for at least OLTP style workloads. In an example, query coordinator 250 compiles the query plan once for a certain query hash, stores the compiled plan in stored plan cache 414, and reuses the stored plan for subsequent executions. Thus, stored plan cache 414 is a performance sensitive feature [col. 25, ll. 60-col. 26, ll. 2]). It would have been obvious to one of ordinary skill in the art at the time of the filing of the present invention to combine the teachings of the cited references because Chadha's system would have allowed Noh to facilitate identifying a data model to handle the call, the data model identifying a joining of data structures in the database; using a hash function to generate a hash of the data model; and processing the call using the data model by applying each different engine-level optimization to a corresponding database engine. The motivation to combine is apparent in Noh's reference, because there is a need to improve optimizing the performance of tasks with databases.
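The plan-cache mechanism the examiner maps from Noh (¶ [0025]) and Chadha (col. 25-26) — compile a plan once per query hash, store it, and reuse it for subsequent identical queries — can be sketched as follows. The class and method names are invented for illustration; they do not come from either reference.

```python
import hashlib


class PlanCache:
    """Minimal sketch of a hash-keyed execution plan cache (assumed design)."""

    def __init__(self):
        self._plans: dict[str, str] = {}  # query hash -> compiled plan
        self.compilations = 0             # counts how often compilation runs

    def _key(self, query: str) -> str:
        """Hash the query text to form the cache look-up key."""
        return hashlib.sha256(query.encode()).hexdigest()

    def get_plan(self, query: str) -> str:
        key = self._key(query)
        if key not in self._plans:
            # Cache miss: compile the plan once and store it under the hash.
            self.compilations += 1
            self._plans[key] = f"plan({query})"
        # Cache hit: reuse the previously compiled plan.
        return self._plans[key]
```

The point of contention in prosecution is whether this hash-lookup-then-reuse pattern, applied per database engine, amounts to more than routine caching.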
However, neither Noh nor Chadha explicitly facilitates identifying a plurality of database engines for processing the data model; for each of the plurality of database engines, generating a different engine-level optimization based on a predicted volume of data to be used to process the call using the data model and based on the type of the first entity, each different engine-level optimization indicating a change in process flow for a corresponding database engine. Beitchman discloses identifying a plurality of database engines for processing the data model (To process queries 102, query engines 120 may submit requests, 104a, 104b, and 104c, to get an optimized query plan for the query 102 from engine independent query plan optimizer 140, in various embodiments. Engine independent query plan optimizer 140 may implement query engine identification 142 to detect or otherwise identify the type of query engine for which the query is to be performed. In some embodiments, query engine identification 142 may identify multiple query engine types for a query (e.g., if queries 102a, 102b, and 102c were sub queries of a single query) in order to perform federated query processing [col. 3, ll. 59-col. 4, ll. 2]); for each of the plurality of database engines, generating a different engine-level optimization based on a predicted volume of data to be used to process the call using the data model and based on the type of the first entity, each different engine-level optimization indicating a change in process flow for a corresponding database engine (FIG. 5 is a sequence diagram for managed execution of queries utilizing a resource planner, according to some embodiments. Query 530 may be received at managed query service control plane 320 which may submit the query 532 to query optimization service 292. Query optimization service 292 may generate an optimized query plan to process the query based on metadata requested 534 and received 536 from data catalog service 280.
Query optimization service 292 may determine an optimized query and translate the optimized query into an engine-specific format for the engine implemented at provisioned cluster 510. Query optimization service 292 may then submit the query optimization plan 538 to query tracker 340. Query tracker 340 may obtain a lease 540 on a cluster 542 from the resource manager service and then initiate execution of the query 544 according to the optimized query plan at the provisioned cluster 510, sending a query execution instruction to a managed query agent 512 [col. 12, ll. 15-34]. Also see [col. 12, ll. 57-col. 13, ll. 2]. The number and size of the computing clusters 920 in the warm cluster pool 910 can be determined based upon a variety of factors including, but not limited to, historical and/or expected volumes of query requests, the price of the computing resources utilized to implement the computing clusters 920, and/or other factors or considerations, in some embodiments [col. 16, ll. 26-32]. Examiner specifies that the optimized query represents: execution steps, join order, scan strategy, and execution sequence, which implies a change in process flow; additionally, examiner specifies that optimization is based on the metadata from the data catalog; metadata used in query optimization includes: table statistics, row counts, cardinality estimates, and data size, which are all used by the optimizer to estimate the volume of data processed by a query).
It would have been obvious to one of ordinary skill in the art at the time of the filing of the present invention to combine the teachings of the cited references because Beitchman's system would have allowed Noh and Chadha to facilitate identifying a plurality of database engines for processing the data model and, for each of the plurality of database engines, generating a different engine-level optimization based on a predicted volume of data to be used to process the call using the data model and based on the type of the first entity, each different engine-level optimization indicating a change in process flow for a corresponding database engine. The motivation to combine is apparent in the Noh and Chadha references, because there is a need for improving techniques that can optimize the performance of a query independent of the type of query engine processing the query. Regarding claims 2, 9, and 16, the combination of Noh, Chadha and Beitchman discloses sending a message to the first entity requesting a separate engine-level database optimization for each of the plurality of database engines, [the message including the hash] (Beitchman: FIG. 5 is a sequence diagram for managed execution of queries utilizing a resource planner, according to some embodiments. Query 530 may be received at managed query service control plane 320 which may submit the query 532 to query optimization service 292. Query optimization service 292 may generate an optimized query plan to process the query based on metadata requested 534 and received 536 from data catalog service 280. Query optimization service 292 may determine an optimized query and translate the optimized query into an engine-specific format for the engine implemented at provisioned cluster 510. Query optimization service 292 may then submit the query optimization plan 538 to query tracker 340.
Query tracker 340 may obtain a lease 540 on a cluster 542 from the resource manager service and then initiate execution of the query 544 according to the optimized query plan at the provisioned cluster 510, sending a query execution instruction to a managed query agent 512 [col. 12, ll. 15-34]. Also see [col. 12, ll. 57-col. 13, ll. 2]); the message including the hash (Noh: In a particular embodiment, the one or more pre-compiled execution plans are associated with hashes of the one or more database queries. In such an embodiment, the query processor 122 can be configured to generate a hash of the database query 142 and to search the shared execution plan cache 110 for a hashed database query that matches the generated hash of the database query 142 ¶ [0025]. Also see ¶ [0060], [0061], and [0070]). Regarding claims 3, 10, and 17, the combination of Noh, Chadha and Beitchman discloses wherein the processing the call includes stitching each different engine-level database optimization to an information access query to be sent to the database (Beitchman: generating an optimized query plan and translating the optimized query into an engine-specific format [col. 12, ll. 15-34]. Also see [col. 12, ll. 57-col. 13, ll. 2]. Examiner specifies that a query optimization service generates an optimized query plan and translates the optimized query into an engine-specific format). Regarding claims 4, 11, and 18, the combination of Noh, Chadha and Beitchman discloses determining whether a server level database optimization exists for the data model and prioritizing each different engine-level optimization over the server level database optimization if there are any conflicts (Chadha: check if the plan requires re-compilation due to data dependent optimization [col. 17, ll. 38-47]; determining whether the particular query plan requires re-compilation based on the data dependent optimization comprises: ....
determining whether the data property constraint still holds based on the set of data properties, wherein the data property constraint comprises a condition that is met based on a set of source tables or files associated with the particular query plan [col. 23, ll. 35-49]. Examiner specifies that these steps correspond to evaluating and prioritizing different optimization strategies). Regarding claims 5, 12, and 19, the combination of Noh, Chadha and Beitchman discloses, wherein the database is an in-memory database (Chadha: Perform full compilation Generate a compiled query plan (e.g., a stored plan) and store it in stored plan cache 414 (e.g., local in-memory cache) [col. 17, ll. 51-54]. Also see [col. 22, ll. 4-10], [col., 25, ll. 60-col. 26, ll. 2]). Regarding claims 6, 13, and 20, the combination of Noh, Chadha and Beitchman discloses, passing the hash and an indication of the first entity to a machine learning model trained by a machine learning algorithm to generate different engine-level optimization based on a predicted volume of data to be used to process the call using the data model and based on type of the first entity (Beitchman: adaptive query optimization and optimization based on metadata from a data catalog service [col. 12, ll. 15-34]. Also see [col. 12, ll. 57-col. 13, ll. 2]. Examiner specifies that the optimization service generates optimized query plans based on metadata received from a data catalog). 
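The hash-keyed plan-cache lookup that Noh is cited for in the claims 2/9/16 mapping above can be sketched as follows. This is a minimal illustration, not code from any cited reference; the names `plan_cache`, `query_hash`, and `compile_plan` are invented for the example:

```python
import hashlib

# Illustrative shared execution-plan cache keyed by a hash of the query text,
# in the style of Noh's hashed-query lookup (¶ [0025], [0060]-[0061]).
plan_cache: dict = {}

def query_hash(sql: str) -> str:
    """Hash a normalized form of the query text to use as the cache key."""
    return hashlib.sha256(sql.strip().lower().encode("utf-8")).hexdigest()

def compile_plan(sql: str) -> dict:
    """Stand-in for full compilation; a real engine emits an executable plan."""
    return {"sql": sql, "steps": ["scan", "filter", "project"]}

def get_execution_plan(sql: str) -> dict:
    """Reuse a cached plan on a hash match; otherwise compile and cache one."""
    key = query_hash(sql)
    if key in plan_cache:       # hash match: reuse the stored execution plan
        return plan_cache[key]
    plan = compile_plan(sql)    # cache miss: full compilation
    plan_cache[key] = plan
    return plan
```

Two textually different but equivalent submissions (differing only in case or surrounding whitespace) hash to the same key, so the second lookup returns the already-compiled plan rather than recompiling.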
Regarding claims 7 and 14, the combination of Noh, Chadha and Beitchman discloses wherein, in response to a determination that the hash of the data model matches a hash stored in the optimization data store: retrieving one or more previously used engine-level database optimizations corresponding to the hash stored in the optimization data store; and processing the call using the data model by applying each previously used different engine-level optimization to a corresponding database engine (Noh: execution plans are stored in shared execution plan cache ¶ [0025]; the cache may be searched using a hashed database query ¶ [0060]-[0061]; when a matching execution plan is found, the query processor uses the cached execution plan ¶ [0074]; cached plans can be used for subsequent queries ¶ [0075]. The examiner specifies that these sections clearly teach hash-match detection, retrieval of stored optimizations, and reuse of cached execution plans).

Conclusion

The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.

When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD S ROSTAMI, whose telephone number is (571) 270-1980. The examiner can normally be reached Mon-Fri from 9 a.m. to 5 p.m.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Boris Gorney, can be reached at (571) 270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

3/6/2026
/MOHAMMAD S ROSTAMI/
Primary Examiner, Art Unit 2154
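The per-engine plan translation the Office Action attributes to Beitchman (one optimized logical plan translated into an engine-specific format for whichever engine a provisioned cluster runs) can be sketched roughly as below. The engine names and translator functions are illustrative assumptions, not drawn from the reference:

```python
# Hypothetical per-engine translators: one optimized logical plan,
# emitted in an engine-specific format.
def to_presto(plan: dict) -> dict:
    return {"engine": "presto", "stages": plan["steps"]}

def to_spark(plan: dict) -> dict:
    return {"engine": "spark", "dag": plan["steps"]}

TRANSLATORS = {"presto": to_presto, "spark": to_spark}

def optimize_and_translate(plan: dict, engine: str) -> dict:
    """Apply an engine-agnostic optimization pass, then translate the result
    into the format of the target engine."""
    optimized = {**plan, "steps": [s for s in plan["steps"] if s != "noop"]}
    if engine not in TRANSLATORS:
        raise ValueError(f"no translator for engine {engine!r}")
    return TRANSLATORS[engine](optimized)
```

The design point in the cited passage is the separation of concerns: optimization happens once against a logical plan, and only the final translation step is engine-specific.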

Prosecution Timeline

Sep 17, 2024
Application Filed
Mar 06, 2026
Non-Final Rejection — §101, §103
Apr 02, 2026
Applicant Interview (Telephonic)
Apr 04, 2026
Examiner Interview Summary
Apr 06, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596705
CHANGE CONTROL AND VERSION MANAGEMENT OF DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12579127
DETECTING LABELS OF A DATA CATALOG INCORRECTLY ASSIGNED TO DATA SET FIELDS
2y 5m to grant Granted Mar 17, 2026
Patent 12561392
RELATIVE FUZZINESS FOR FAST REDUCTION OF FALSE POSITIVES AND FALSE NEGATIVES IN COMPUTATIONAL TEXT SEARCHES
2y 5m to grant Granted Feb 24, 2026
Patent 12561360
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12561312
DISTRIBUTED STREAM-BASED ACID TRANSACTIONS
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
67%
Grant Probability
93%
With Interview (+26.3%)
3y 10m
Median Time to Grant
Low
PTA Risk
Based on 635 resolved cases by this examiner. Grant probability derived from career allow rate.
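The headline figures above fit together arithmetically, assuming the interview lift is applied as additive percentage points on top of the career allow rate:

```python
granted, resolved = 425, 635          # examiner's career grant record
interview_lift = 26.3                 # percentage points, per interview data

allow_rate_pct = round(100 * granted / resolved)             # -> 67
with_interview_pct = round(allow_rate_pct + interview_lift)  # -> 93
```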
