Prosecution Insights
Last updated: April 19, 2026
Application No. 19/207,178

DATABASE SYSTEMS WITH A SET OF SUBSYSTEMS

Non-Final OA (§101, §103)
Filed: May 13, 2025
Examiner: SOMERS, MARC S
Art Unit: 2159
Tech Center: 2100 — Computer Architecture & Software
Assignee: Ocient Holdings LLC
OA Round: 1 (Non-Final)

Grant Probability: 65% (Moderate)
Expected OA Rounds: 1-2
Median Time to Grant: 4y 0m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (grants 65% of resolved cases; 364 granted / 563 resolved; +9.7% vs TC avg)
Interview Lift: +34.6% for resolved cases with an interview (strong)
Typical Timeline: 4y 0m average prosecution; 36 applications currently pending
Career History: 599 total applications across all art units
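The headline allow rate above can be reproduced from the raw counts; a minimal sketch, assuming the dashboard simply rounds the granted/resolved ratio to the nearest percent:

```python
# Career allow rate from the raw counts shown above (364 granted / 563 resolved).
granted, resolved = 364, 563
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 64.7%, displayed as the rounded 65%
```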

Statute-Specific Performance

§101: 18.0% allow rate (-22.0% vs TC avg)
§103: 47.9% allow rate (+7.9% vs TC avg)
§102: 10.1% allow rate (-29.9% vs TC avg)
§112: 15.1% allow rate (-24.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 563 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

With regard to claim 1: Step 2A, Prong One: The claim recites the following limitations, which are drawn towards an abstract idea: "collectively execute a set of local query operational instructions on at least a portion of the ingested data set to produce a local partial query response" (recites mental process steps of performing evaluations/analysis/calculations on data which can include mathematical functions); and "a query planning subsystem operable to: generate the set of local query operational instructions, the set of intermediate query operational instructions, and the set of global query operational instructions" (recites mental process steps of evaluation and judgement including formulating a plan or order of operations/instructions to perform). As seen from above, the identified limitations recite concepts associated with an abstract idea, and thus the respective claim recites a judicial exception (see MPEP 2106.04(a)) and requires further analysis as discussed below.

Step 2A, Prong Two: The following limitations have been identified as additional elements, as discussed below.
a database system comprises: a load and store sub-system that includes: a data input module operable to ingest a data set (recites insignificant extrasolution activity of data gathering or receiving information, see MPEP 2106.05(g)); short term storage operable to temporarily store the data set to produce an ingested data set; and long term storage operable to store the ingested data set (recites insignificant extrasolution activity of storing data in memory, see MPEP 2106.05(g)); a query execution sub-system that includes: a plurality of local query engines (recites apply-it limitations of reciting generic computer components as a tool to implement the abstract idea, see MPEP 2106.05(f)) operable to: collectively obtain the ingested data set (recites insignificant extrasolution activity of data gathering or receiving information, see MPEP 2106.05(g)); a plurality of intermediate query engines … (recites apply-it limitations of reciting generic computer components as a tool to implement the abstract idea, see MPEP 2106.05(f)); and a global query engine … (recites apply-it limitations of reciting generic computer components as a tool to implement the abstract idea, see MPEP 2106.05(f)); assign the set of local query operational instructions to a set of local query engines of the plurality of local query engines; assign the set of intermediate query operational instructions to a set intermediate query engines of the plurality of intermediate query engines; and assign the set of global query operational instructions to the global query engine (recites insignificant extrasolution activity of transmitting information, see MPEP 2106.05(g)). As seen from the above discussion, the identified limitations did not integrate the judicial exception into a practical application (see MPEP 2106.04(d)). 
This judicial exception is not integrated into a practical application because the additional elements recite generic computer elements at a high level of generality to perform/implement the abstract idea as well as various generic functions of retrieving and storing information as well as transmitting information.

Step 2B: Below is the analysis of the claims: a database system comprises: a load and store sub-system that includes: a data input module operable to ingest a data set (recites well-understood, routine, and conventional activity of data gathering or receiving information, see MPEP 2106.05(d)); short term storage operable to temporarily store the data set to produce an ingested data set; and long term storage operable to store the ingested data set (recites well-understood, routine, and conventional activity of storing data in memory, see MPEP 2106.05(d)); a query execution sub-system that includes: a plurality of local query engines (recites apply-it limitations of reciting generic computer components as a tool to implement the abstract idea, see MPEP 2106.05(f)) operable to: collectively obtain the ingested data set (recites well-understood, routine, and conventional activity of data gathering or receiving information, see MPEP 2106.05(d)); a plurality of intermediate query engines … (recites apply-it limitations of reciting generic computer components as a tool to implement the abstract idea, see MPEP 2106.05(f)); and a global query engine … (recites apply-it limitations of reciting generic computer components as a tool to implement the abstract idea, see MPEP 2106.05(f)); assign the set of local query operational instructions to a set of local query engines of the plurality of local query engines; assign the set of intermediate query operational instructions to a set intermediate query engines of the plurality of intermediate query engines; and assign the set of global query operational instructions to the global query engine (recites well-understood,
routine, and conventional activity of transmitting information, see MPEP 2106.05(d)). As seen from above, the respective claim elements taken individually do not amount to significantly more than the judicial exception. When taken as a whole (in combination), the claim also does not amount to significantly more than the abstract idea because the additional elements recite generic computer elements at a high level of generality to perform/implement the abstract idea as well as various generic functions of retrieving and storing information as well as transmitting information.

With regard to claim 2, this claim recites wherein the data set comprises: a batch load data set; or a streaming data set (recites field of use limitations describing the intended means/format of the data that is gathered/acquired, see MPEP 2106.05(h), and adds no meaningful limitation beyond that of the abstract idea as discussed above).

With regard to claim 3, this claim recites wherein the data input module comprises: a batch interface for receiving the batch load data set; and a streaming interface for receiving the streaming data set (recites apply-it limitations of describing generic computer elements for generic computer functions of being able to receive particular data via computerized interfaces, see MPEP 2106.05(f)).

With regard to claim 4, this claim recites wherein the load and store sub-system further comprises: a short term storage processing module operable to process the data set to produce a formatted data set (recites mental process steps of converting data from one format to another), wherein the short term storage operable to temporarily store the formatted data set as the ingested data set (recites insignificant extrasolution activity of storing information which amounts to well-understood, routine, and conventional activity of storing information, see MPEP 2106.05(d)).
With regard to claim 5, this claim recites wherein the load and store sub-system further comprises: a long term storage processing module operable to process the ingested data set to produce a formatted ingested data set (recites mental process steps of converting data from one format to another), wherein the long term storage operable to store the formatted ingested data set (recites insignificant extrasolution activity of storing information which amounts to well-understood, routine, and conventional activity of storing information, see MPEP 2106.05(d)).

With regard to claim 6, this claim recites wherein the plurality of local query engines, the plurality of intermediate query engines, and the global query engine store one or more of: intermediate data, pipeline data, and result data (recites insignificant extrasolution activity of storing information which amounts to well-understood, routine, and conventional activity of storing information, see MPEP 2106.05(d)).

With regard to claim 7, this claim recites wherein the query planning subsystem is operable to generate the set of local query operational instructions, the set of intermediate query operational instructions, and the set of global query operational instructions by: obtaining a query (recites insignificant extrasolution activity of receiving information which amounts to well-understood, routine, and conventional activity of receiving information, see MPEP 2106.05(d)); generating an initial query plan; optimizing the initial query plan based at least in part on database system data to produce an optimized query plan; and parsing the optimized query plan to assign operational instructions of the optimized query plan as the set of local query operational instructions, the set of intermediate query operational instructions, and the set of global query operational instructions (recites mental process steps of evaluation and judgement including formulating a plan or order of operations/instructions to
perform).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Lang et al [US 2017/0083588 A1] in view of Oberbrekling et al [US 2018/0074786 A1].
With regard to claim 1, Lang teaches a database system comprises: a data input module operable to ingest a data set (see paragraphs [0045], [0047], [0060]-[0061], and [0063]; the system is able to have a sub-system component with memory that can receive/load data and be able to perform operations on the ingested data to form intermediate results; “For example, the query partitioner 310 may partition the query 104 based upon the number, kinds, and sequence of operators comprising the query 104; e.g., a query 104 specified in SQL may be partitioned into a first query portion 206 comprising a “SELECT NAME, DATE FROM RECORDS” operation that projects selected records from a data set;”, para 45; Examiner Note (EN): projecting from a dataset is similar to ingesting data since both relate to the receiving of data from a dataset); a query execution sub-system that includes: a plurality of local query engines operable to: collectively obtain the ingested data set; and collectively execute a set of local query operational instructions on at least a portion of the ingested data set to produce a local partial query response (see paragraphs [0045], [0047], and [0061]; the system can have multiple query engines that obtain portions of the data set and can execute respective query instructions to form local query response; “For instance, the data set 102 may be distributed over the node set 106, and respective nodes 106 may apply a query operator to the subset of the data set 102 that is stored by and/or accessible to the node 106. 
In this model, the nodes 108 selected from the node set 106 may be arranged as a processing chain or pipeline; e.g., a node 108 may receive a first intermediate result 214 produced by a previous selected node 108 by performing a previous query portion 206 of the query 104, may execute the query instruction set 212 over the first intermediate result 214 to produce a second intermediate query result 214, and may transmit the second intermediate query result 214 to a next selected node 322 of the node set 108.", para 61); a plurality of intermediate query engines operable to collectively execute a set of intermediate query operational instructions on at least a portion of the local partial query response to produce an intermediate query response (see paragraphs [0045], [0047], and [0061]; the system can have a processing chain/pipeline with earlier stages/nodes sending results to a second set of query engines/nodes that perform additional operations in accordance with an instruction set to produce an intermediate query response; "For instance, the data set 102 may be distributed over the node set 106, and respective nodes 106 may apply a query operator to the subset of the data set 102 that is stored by and/or accessible to the node 106.
In this model, the nodes 108 selected from the node set 106 may be arranged as a processing chain or pipeline; e.g., a node 108 may receive a first intermediate result 214 produced by a previous selected node 108 by performing a previous query portion 206 of the query 104, may execute the query instruction set 212 over the first intermediate result 214 to produce a second intermediate query result 214, and may transmit the second intermediate query result 214 to a next selected node 322 of the node set 108.", para 61); and a global query engine operable to execute a set of global query operational instructions on at least a portion of the intermediate partial query response to produce a query result (see paragraphs [0027], [0036], [0037], and [0061]; the system can have a processing chain/pipeline that can culminate in a final node that receives the intermediate results and forms the final query result/response; "For instance, the data set 102 may be distributed over the node set 106, and respective nodes 106 may apply a query operator to the subset of the data set 102 that is stored by and/or accessible to the node 106.
In this model, the nodes 108 selected from the node set 106 may be arranged as a processing chain or pipeline; e.g., a node 108 may receive a first intermediate result 214 produced by a previous selected node 108 by performing a previous query portion 206 of the query 104, may execute the query instruction set 212 over the first intermediate result 214 to produce a second intermediate query result 214, and may transmit the second intermediate query result 214 to a next selected node 322 of the node set 108.”, para 61; “The third selected node 108 receives the intermediate results 214 from the other selected nodes 108, executes the query instruction set 212 for the third selected node 108 that implements the ORDER BY operation on the collection of intermediate results 214, and provides a query result 118 that fulfills the query 104.”); and a query planning subsystem operable to: generate the set of local query operational instructions, the set of intermediate query operational instructions, and the set of global query operational instructions; assign the set of local query operational instructions to a set of local query engines of the plurality of local query engines; assign the set of intermediate query operational instructions to a set intermediate query engines of the plurality of intermediate query engines; and assign the set of global query operational instructions to the global query engine (see Figure 2 and paragraph [0036]; the system query instructions are generated and sent to the assigned/chosen node/query engine; “For the respective query portions 206 and the selected node 108 that is chosen therefor, a query instruction set 212 is generated 210, wherein the query instruction set 212, when executed by the selected node 108, causes the selected node 108 to implement the query portion 206 of the query 104. 
Additionally, if the query portion 206 produces an intermediate result 214—such as a selection of records (e.g., SQL WHERE, or MapReduce Map) to which a subsequent query portion 206 is to be applied (e.g., SQL SELECT, or MapReduce Reduce), the execution of the query instruction set 212 also causes the selected node 108 to transmit 216 the intermediate result 214 to a next selected node 108 that applies the subsequent query portion 206 to the intermediate result 214. After the custom instruction set 212 is generated 210 for a selected query portion 206, the query instruction set 210 is transmitted to the selected node 108. The selected nodes 108 are then instructed to invoke the query instruction sets 210, which causes the set of selected nodes 108 to execute the query instruction sets 210 that, together, cause the selected nodes 108 to perform the entire query 104 in a distributed manner”, para 36). Lang does not appear to explicitly teach: a load and store sub-system that includes: short term storage operable to temporarily store the data set to produce an ingested data set; and long term storage operable to store the ingested data set. Oberbrekling teaches a load and store sub-system that includes: short term storage operable to temporarily store the data set to produce an ingested data set; and long term storage operable to store the ingested data set (see paragraphs [0039], [0037], [0053], [0055], [0063], and [0068]; the system can have an ingest subsystem that allows for the short term storage of data as well as have long term storage for the data after it has been ingested; “In certain embodiments of the present disclosure, prior to loading data into a data warehouse (or other data target) the data is processed through a pipeline (also referred to herein as a semantic pipeline) which includes various processing stages. 
In some embodiments, the pipeline can include an ingest stage, prepare stage, profile stage, transform stage, and publish stage.", para 37; "The distributed storage system 105 provides a temporary storage space for ingested data files, which can then also provide storage of intermediate processing files, and for temporary storage of results prior to publication.", para 39; "The publishing sub-system can deliver the processed data to one or more data targets. A data target may correspond to a place where the processed data can be sent. The place may be, for example, a location in memory, a computing system, a database, or a system that provides a service.", para 53).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the distributed query processing system of Lang by including a load/ingest sub-system that uses both temporary and non-temporary storage as taught by Oberbrekling in order to allow the system to be able to receive new data and store that data in quicker volatile memory for temporary storage and any subsequent operations before sending the data to more permanent but slower storage for other downstream processes, while ensuring no data loss if the node loses power since ingested data is not left only in temporary or volatile storage.

With regard to claim 2, Lang in view of Oberbrekling teach wherein the data set comprises: a batch load data set; or a streaming data set (see Oberbrekling, paragraph [0179]; see Lang, paragraph [0033]; the data set can be from a data stream or batch data set).

With regard to claim 3, Lang in view of Oberbrekling teach wherein the data input module comprises: a batch interface for receiving the batch load data set; and a streaming interface for receiving the streaming data set (see Lang, paragraph [0033]; the system can receive batch or streaming data).
With regard to claim 4, Lang in view of Oberbrekling teach wherein the load and store sub-system further comprises: a short term storage processing module operable to process the data set to produce a formatted data set, wherein the short term storage operable to temporarily store the formatted data set as the ingested data set (see Oberbrekling, paragraph [0063]; the system can produce a formatted data set).

With regard to claim 5, Lang in view of Oberbrekling teach wherein the load and store sub-system further comprises: a long term storage processing module operable to process the ingested data set to produce a formatted ingested data set, wherein the long term storage operable to store the formatted ingested data set (see Oberbrekling, paragraphs [0039], [0040], [0053], and [0059]; the system includes other modules/sub-systems with means to process the ingested data to produce/transform/format the data and be able to store the results).

With regard to claim 6, Lang in view of Oberbrekling teach wherein the plurality of local query engines, the plurality of intermediate query engines, and the global query engine store one or more of: intermediate data, pipeline data, and result data (see Lang, paragraphs [0046] and [0027]; see Oberbrekling, paragraph [0053]; the various pipelined engines can store their resultant/intermediate data).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Lang et al [US 2017/0083588 A1] in view of Oberbrekling et al [US 2018/0074786 A1] in further view of McKenna [US 2015/0154256 A1].

With regard to claim 7, Lang in view of Oberbrekling teach all the claim limitations of claim 1 as discussed above.
Lang in view of Oberbrekling teach query execution but do not appear to explicitly teach: wherein the query planning subsystem is operable to generate the set of local query operational instructions, the set of intermediate query operational instructions, and the set of global query operational instructions by: obtaining a query; generating an initial query plan; optimizing the initial query plan based at least in part on database system data to produce an optimized query plan; and parsing the optimized query plan to assign operational instructions of the optimized query plan as the set of local query operational instructions, the set of intermediate query operational instructions, and the set of global query operational instructions.

McKenna teaches obtaining a query; generating an initial query plan; optimizing the initial query plan based at least in part on database system data to produce an optimized query plan (see Figures 2 and 3; see paragraphs [0047]-[0049], [0074], [0002], and [0038]; the system has means to obtain a query and generate a query plan that can be optimized).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the distributed query processing system of Lang in view of Oberbrekling by providing a query planner and query optimizer as taught by McKenna in order to determine the optimal or best possible sequence of instructions/operations to execute a query so that the system would be able to minimize processor processing time by having an optimal plan that performs/executes the query in an efficient manner.
Lang in view of Oberbrekling in further view of McKenna teach wherein the query planning subsystem is operable to generate the set of local query operational instructions, the set of intermediate query operational instructions, and the set of global query operational instructions by: obtaining a query; generating an initial query plan; optimizing the initial query plan based at least in part on database system data to produce an optimized query plan; and parsing the optimized query plan to assign operational instructions of the optimized query plan as the set of local query operational instructions, the set of intermediate query operational instructions, and the set of global query operational instructions (see McKenna, Figures 2 and 3 and paragraphs [0047]-[0049], [0074], [0002], and [0038]; see Lang, Figure 2 and paragraphs [0045] and [0036]; system query instructions are generated and sent to the assigned/chosen node/query engine).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Barsness et al [US 2017/0168748 A1] teaches at Figure 1 and paragraphs [0026] and [0029] that various compute nodes can be chained/pipelined together with results/output (partial response) from one stage being sent to another stage for further processing while considering processing resources for where various processing elements (engines) and respective operators should be allocated.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARC S SOMERS, whose telephone number is (571) 270-3567. The examiner can normally be reached M-F 11-8 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann Lo, can be reached at 571-272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARC S SOMERS/
Primary Examiner, Art Unit 2159
2/10/2026
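For readers less familiar with the claimed architecture, the three-tier execution flow that the rejection maps onto Lang's pipeline (local engines producing partial responses, intermediate engines combining them, and a global engine producing the final result) can be sketched as a toy distributed aggregation. Every name here, and the choice of a SUM query, is an illustrative assumption, not drawn from the application or the cited references:

```python
# Toy three-tier distributed query execution (illustrative only).

def local_engine(partition):
    # A local engine runs its assigned instructions on its share of the
    # ingested data set, producing a local partial query response.
    return sum(partition)

def intermediate_engine(partial_responses):
    # An intermediate engine combines local partial responses into an
    # intermediate query response.
    return sum(partial_responses)

def global_engine(intermediate_responses):
    # The global engine combines intermediate responses into the query result.
    return sum(intermediate_responses)

data = list(range(100))                      # ingested data set
partitions = [data[i::4] for i in range(4)]  # planner assigns 4 local engines
local_results = [local_engine(p) for p in partitions]
intermediate_results = [intermediate_engine(local_results[i:i + 2])
                        for i in (0, 2)]     # two intermediate engines
result = global_engine(intermediate_results)
assert result == sum(data)                   # same answer as a single-node sum
```

Here the "query planning subsystem" is implicit in the hard-coded partitioning and pairing; in a real system it would emit the per-tier instruction sets and assignments discussed in the claim 1 analysis.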

Prosecution Timeline

May 13, 2025
Application Filed
Feb 11, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579099: CONTROL LEVEL TAGGING METHOD AND SYSTEM (2y 5m to grant; granted Mar 17, 2026)
Patent 12561288: METHOD AND APPARATUS TO VERIFY FILE METADATA IN A DEDUPLICATION FILESYSTEM (2y 5m to grant; granted Feb 24, 2026)
Patent 12554681: SYSTEM AND METHOD OF UNDOING DATA BASED ON DATA FLOW MANAGEMENT (2y 5m to grant; granted Feb 17, 2026)
Patent 12541502: METHODS AND APPARATUSES FOR IMPROVING PROCESSING EFFICIENCY IN A DISTRIBUTED SYSTEM (2y 5m to grant; granted Feb 03, 2026)
Patent 12530365: SYSTEMS AND METHODS FOR A MACHINE LEARNING FRAMEWORK (2y 5m to grant; granted Jan 20, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 65%
With Interview: 99% (+34.6%)
Median Time to Grant: 4y 0m
PTA Risk: Low
Based on 563 resolved cases by this examiner. Grant probability derived from career allow rate.
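One plausible reading of how the interview-adjusted figure arises from the numbers above; the additive model and the 99% display cap are assumptions about the dashboard, not documented behavior:

```python
# Assumed model: base grant probability plus interview lift, capped for display.
base_probability = 65.0   # career allow rate, in percent
interview_lift = 34.6     # observed interview lift, in percentage points
with_interview = min(base_probability + interview_lift, 99.0)  # assumed 99% cap
print(with_interview)  # 99.0
```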
