DETAILED ACTION
Remarks
This communication is in response to the amendment and arguments filed on October 14, 2025, which have been fully considered. The rejection is made final. Claims 1-20 are pending and have been examined.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Examiner Notes
The examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or as disclosed by the examiner.
The examiner requests that, in response to this Office action, support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.
When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Information Disclosure Statement
As required by M.P.E.P. § 609(C), the applicant’s submission of the Information Disclosure Statement dated December 29, 2025 is acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. As required by M.P.E.P. § 609(C)(2), a copy of the PTOL-1449, initialed and dated by the examiner, is attached to the instant Office action.
Response to Amendment
The rejection of claims 1-9 under 35 U.S.C. § 112 in the previous Office action has been withdrawn in view of the amendment to the claims.
The objection to claims 1 and 9 made in the previous Office action is withdrawn in view of the amendment to the claims.
Response to Arguments
Applicant's arguments filed October 14, 2025 have been fully considered but they are not persuasive.
In response to Applicant’s argument on pages 10-11 that amended claim 1 patentably distinguishes over Pereira, Cruanes, and Lee because no combination of Pereira, Cruanes, and Lee recites "a database search system for searching for data in a distributed database system," where the database search system comprises "at least one memory separate from a memory of the distributed database system" and "at least one processor separate from one or more processors of the plurality of database management nodes" of the distributed database system, along with (1) "a plurality of query management nodes executed by the at least one processor" and (2) "a query routing module executed by the at least one processor" that is configured to "receive queries from the plurality of database management nodes" and "route each of the queries to at least one of the plurality of query management nodes for execution," the argument is acknowledged but is not deemed persuasive.
"a database search system for searching for data in a distributed database system".
Pereira, Abstract discloses an efficient large scale search system for video and multi-media content using a distributed database and search (i.e., a database search system for searching for data in a distributed database system). Therefore, Pereira teaches the above argued limitation of claim 1.
where the database search system comprises "at least one memory separate from a memory of the distributed database system".
Cruanes [0060] and Fig. 6 disclose operating environment 600 having multiple distributed virtual warehouses and virtual warehouse groups. Environment 600 includes resource manager 102, which communicates with virtual warehouse groups 604 and 606 through a data communication network 602 (i.e., a distributed database system). Cruanes [0064] and Fig. 7 disclose that the resource manager also distributes the multiple tasks to execution nodes in the execution platform. The execution nodes in the execution platform are implemented within virtual warehouses, see Fig. 6, 604, 606 (i.e., at least one memory separate from a memory of the distributed database system). Each execution node performs an assigned task and returns a task result to the resource manager. The execution nodes return the task results to the query coordinator. Therefore, Cruanes teaches the above argued limitation of claim 1.
and "at least one processor separate from one or more processors of the plurality of database management nodes" of the distributed database system.
Cruanes [0038] and Fig. 3 disclose that execution platform 112 includes multiple virtual warehouses 302, 304, and 306. Each virtual warehouse includes multiple execution nodes that each include a data cache and a processor. Virtual warehouses 302, 304, and 306 are capable of executing multiple queries (and other tasks) in parallel by using the multiple execution nodes (i.e., the virtual warehouses are separate nodes capable of executing queries and other tasks by themselves). See also Figs. 5-6 and [0055-0061]. Therefore, Cruanes teaches the above argued limitation of claim 1.
along with:
(1) "a plurality of query management nodes executed by the at least one processor."
Cruanes [0038] and Fig. 3 disclose that execution platform 112 includes multiple virtual warehouses 302, 304, and 306. Each virtual warehouse includes multiple execution nodes that each include a data cache and a processor. Virtual warehouses 302, 304, and 306 are capable of executing multiple queries (and other tasks) in parallel by using the multiple execution nodes (i.e., the virtual warehouses are separate nodes executed by a processor and capable of managing queries). See also Figs. 5-6 and [0055-0061]. Therefore, Cruanes teaches the above argued limitation of claim 1.
and
(2) "a query routing module executed by the at least one processor".
Lee [0068] discloses that an HA/DR system includes a primary system and a secondary system and is capable of load balancing between the primary system and the secondary system. The primary system 505 and the secondary system 510 include processors 545 and 560, respectively, Lee [0061-0062]. A query routed to the primary system in a load balancing effort will be executed before, during or after a particular transaction log is replayed (i.e., a query routing module executed by the at least one processor). Therefore, Lee teaches the above argued limitation of claim 1.
that is configured to "receive queries from the plurality of database management nodes" and "route each of the queries to at least one of the plurality of query management nodes for execution",
Lee [0062] discloses that a collection of clients may each maintain an open connection to both the primary system 505 and the secondary system 525 … a client 515 application may submit a query request to the primary system 505. A process control 555 load balancing process executing on processor 545 then may determine where the query should be executed and replies to the client 515 with instructions identifying which system the client 515 should issue the query to (i.e., receive queries from the plurality of database management nodes). Lee [0068] discloses that transaction logs are replicated and replayed at the secondary system 510 only after a transaction executes in the primary system 505. Secondary system 510, therefore, is always slightly behind an associated primary system 515. Also, there is no guarantee that a query routed to the primary system (i.e., routing the query) in a load balancing effort will be executed before, during or after a particular transaction log is replayed. Lee [0071] discloses a load balancing process executing on a processor within the index server 615 in the primary system 605 that may determine where the query should be executed and reply to the client 615 with instructions identifying the system to which the client 615 should issue the query (i.e., routing the query). See also Lee [0075] and [0080]. Therefore, Lee teaches the above argued limitation of claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 7-13 and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Pereira et al. (US Patent Publication No. 2012/0095958 A1, ‘Pereira’, hereafter) in view of Cruanes et al. (US Patent Publication No. 2020/0201881 A1, ‘Cruanes’, hereafter) and further in view of Lee et al. (US Patent Publication No. 2018/0150368 A1, ‘Lee’, hereafter).
Regarding 1. Pereira teaches a database search system for searching for data in a distributed database system (An efficient large scale search system for video and multi-media content using a distributed database and search, Pereira, Abstract), the distributed database system comprising a plurality of database management nodes, each of the plurality of database management nodes managing respective data stored in the distributed database system (A computer-readable storage medium may be coupled to the processor through local connections such that the processor can read information from, and write information to, the storage medium or through network connections such that the processor can download information from or upload information to the storage medium. … An efficient large scale search system for video and multi-media content using a distributed database and search, and tiered search servers is described, Pereira, Abstract, [0036], [0037]), the database search system comprising:
Pereira does not teach
at least one memory separate from a memory of the distributed database system, the at least one memory configured to store at least one index for data managed by the plurality of database management nodes, the at least one index comprising values of at least one field in the data managed by the plurality of database management nodes; and
at least one processor separate from one or more processors of the plurality of database management nodes;
a plurality of query management nodes executed by the at least one processor, the plurality of query management nodes each configured to:
receive a query for data stored in the distributed database;
execute, using the at least one index, the query to identify data targeted by the query stored in the distributed database; and
transmit, to at least one database management node of the plurality of database management nodes, information indicating the data requested by the query for locating the identified data in the distributed database;
However, Cruanes teaches
at least one memory separate from a memory of the distributed database system, the at least one memory (Cruanes [0060] and Fig. 6 disclose operating environment 600 having multiple distributed virtual warehouses and virtual warehouse groups. Environment 600 includes resource manager 102, which communicates with virtual warehouse groups 604 and 606 through a data communication network 602 (i.e., a distributed database system). Cruanes [0064] and Fig. 7 disclose that the resource manager also distributes the multiple tasks to execution nodes in the execution platform. The execution nodes in the execution platform are implemented within virtual warehouses, see Fig. 6, 604, 606 (i.e., at least one memory separate from a memory of the distributed database system). Each execution node performs an assigned task and returns a task result to the resource manager. The execution nodes return the task results to the query coordinator) configured to store at least one index for data managed by the plurality of database management nodes, the at least one index comprising values of at least one field in the data managed by the plurality of database management nodes (accessing data from a cache in an execution node, retrieving data from a remote storage device, updating data in a cache, storing data in a remote storage device, and the like. The resource manager also distributes the multiple tasks to execution nodes in the execution platform, Cruanes [0033], [0063-0064]); and
at least one processor separate from one or more processors of the plurality of database management nodes (Cruanes [0038] and Fig. 3 disclose that execution platform 112 includes multiple virtual warehouses 302, 304, and 306. Each virtual warehouse includes multiple execution nodes that each include a data cache and a processor. Virtual warehouses 302, 304, and 306 are capable of executing multiple queries (and other tasks) in parallel by using the multiple execution nodes (i.e., the virtual warehouses are separate nodes capable of executing queries and other tasks by themselves). See also Figs. 5-6 and [0055-0061]);
a plurality of query management nodes executed by the at least one processor, the plurality of query management nodes (Cruanes [0038] and Fig. 3 disclose that execution platform 112 includes multiple virtual warehouses 302, 304, and 306. Each virtual warehouse includes multiple execution nodes that each include a data cache and a processor. Virtual warehouses 302, 304, and 306 are capable of executing multiple queries (and other tasks) in parallel by using the multiple execution nodes (i.e., the virtual warehouses are separate nodes executed by a processor and capable of managing queries). See also Figs. 5-6 and [0055-0061]) each configured to:
receive a query for data stored in the distributed database system (Cruanes [0025], [0033], [0063-0064]);
processing, using the at least one query management node, the query (Cruanes [0038], [0055-0061], Figs. 3 and 5-6), the processing comprising:
identify, using the at least one index, the data targeted by the query stored in the distributed database system (Cruanes [0025], [0033], [0063-0064]); and
transmit, to at least one database management node of the plurality of database management nodes, information indicating the data requested by the query for locating the identified in the distributed database (Cruanes [0035], [0053]);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pereira and Cruanes before him/her, to modify Pereira with the teaching of Cruanes’s caching systems and methods. One would have been motivated to do so for the benefit of improving data storage and data retrieval in a manner that alleviates the limitations of existing systems, such as a bottleneck that slows data read and data write operations and that is further aggravated by the addition of more processing nodes (Cruanes, Abstract and [0003]).
Pereira and Cruanes do not teach
a query routing module executed by the at least one processor, the query routing module configured to:
receive queries from the plurality of database management nodes; and
route each of the queries to at least one of the plurality of query management nodes for execution.
However, Lee teaches
a query routing module executed by the at least one processor, the query routing module (Lee [0061-0062] discloses that an HA/DR system includes a primary system and a secondary system and is capable of load balancing between the primary system and the secondary system; the primary system 505 and the secondary system 510 include processors 545 and 560, respectively. A query routed to the primary system in a load balancing effort will be executed before, during or after a particular transaction log is replayed, Lee [0068]) configured to:
receive queries from the plurality of database management nodes; and route each of the queries to at least one of the plurality of query management nodes for execution (Lee [0062] discloses that a collection of clients may each maintain an open connection to both the primary system 505 and the secondary system 525 … a client 515 application may submit a query request to the primary system 505. A process control 555 load balancing process executing on processor 545 then may determine where the query should be executed and replies to the client 515 with instructions identifying which system the client 515 should issue the query to (i.e., receive queries from the plurality of database management nodes). Lee [0068] discloses that transaction logs are replicated and replayed at the secondary system 510 only after a transaction executes in the primary system 505. Secondary system 510, therefore, is always slightly behind an associated primary system 515. Also, there is no guarantee that a query routed to the primary system (i.e., routing the query) in a load balancing effort will be executed before, during or after a particular transaction log is replayed. Lee [0071] discloses a load balancing process executing on a processor within the index server 615 in the primary system 605 that may determine where the query should be executed and reply to the client 615 with instructions identifying the system to which the client 615 should issue the query (i.e., routing the query). See also Lee [0075] and [0080]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pereira, Cruanes and Lee before him/her, to further modify Pereira with the teaching of Lee’s workload shifting in a database system using hint-based routing. One would have been motivated to do so for the benefit of providing increased average throughput for a database system during high workloads, thereby reducing the likelihood that a request to the database system for data may be queued, buffered or rejected until sufficient system resources are available to complete the request (Lee, Abstract and [0012]).
Regarding 2. Pereira as modified teaches, wherein each of the plurality of query management nodes is configured to execute queries using at least one virtual machine (VM) (Cruanes [0035]).
Regarding 3. Pereira as modified teaches, wherein the at least one virtual machine is at least one compute-optimized VM (Cruanes [0035]).
Regarding 4. Pereira as modified teaches, wherein the plurality of components comprises an index construction module executed by the at least one processor, the index construction module configured to:
construct the at least one index stored in the at least one memory (Cruanes [0064], [0068-0069]); and
update the at least one index based on updates to values of the at least one field in the data managed by the plurality of database management nodes (Cruanes [0064], [0068-0069]).
Regarding 7. Pereira as modified teaches, wherein the distributed database stores a plurality of replicated datasets each managed by a respective one of the plurality of database management nodes (Lee [0066], [0068]).
Regarding 8. Pereira as modified teaches, wherein the distributed database stores a plurality of data partitions each managed by a respective one of the plurality of database management nodes (Pereira, Abstract, [0056-0057], [0092]).
Regarding 9. Pereira as modified teaches, further comprising a plurality of processors including the at least one processor, the plurality of processors distributed across multiple geographic regions, and each of the query management nodes is configured for execution by one or more processors in a respective one of the multiple geographic regions (Cruanes [0027], [0030]).
Regarding 10-13, the system steps of claims 1-4 substantially encompass the method recited in claims 10-13. Therefore, claims 10-13 are rejected for at least the same reason as claims 1-4 above.
Regarding 16. Pereira teaches a non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor (A computer-readable storage medium may be coupled to the processor through local connections such that the processor can read information from, and write information to, the storage medium or through network connections such that the processor can download information from or upload information to the storage medium, Pereira, [0036]), causes the at least one processor of a database search system to perform a method of searching for data hosted by a distributed database system (An efficient large scale search system for video and multi-media content using a distributed database and search, and tiered search servers is described, Pereira, Abstract, [0037]), the at least one processor being separate from one or more processors of the distributed database system (The search system can be tuned to the desired speed of multimedia matching by centralized and distributed systems, by replication of individual search machines or search machine clusters, … a distributed search system may be operable on a variety of distributed networks, such as a peer to peer (P2P) system. Each of the user sites, 102 and 103, remote user device 114, and server 106 may include a processor complex having one or more processors, Pereira, [0037], [0047], [0084]), the method comprising:
Although claim 16 is directed to a medium, it is similar in scope to claim 1. The system steps of claim 1 substantially encompass the medium recited in claim 16. Therefore, claim 16 is rejected for at least the same reason as claim 1 above.
Regarding 17-19, the system steps of claims 2-4 substantially encompass the medium recited in claims 17-19. Therefore, claims 17-19 are rejected for at least the same reason as claims 2-4 above.
Claims 5-6, 14-15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pereira et al. in view of Cruanes et al. in view of Lee and further in view of Martin et al. (US Patent Publication No. 2023/0177054 A1, ‘Martin’, hereafter).
Regarding 5. Pereira, Cruanes and Lee do not teach, wherein the query routing module is further configured to perform load balancing of query processing across the plurality of query management nodes.
However, Martin teaches wherein the query routing module is further configured to perform load balancing of query processing across the plurality of query management nodes (Martin [0034], [0037], [0039]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Pereira, Cruanes, Lee and Martin before him/her, to further modify Pereira with the teaching of Martin’s reduced-latency query processing. One would have been motivated to do so for the benefit of fast, efficient query processing (Martin, Abstract).
Regarding 6. Pereira as modified teaches, wherein the query routing module is further configured to:
receive a first query from a first database management node of the plurality of database management nodes (Martin [0004-0006], [0034-0037]);
identify a first one of the plurality of query management nodes to execute a first query (Martin [0004-0006], [0034-0037]);
determine that the first query management node does not have a processing bandwidth designated for execution of the first query (Martin [0037], [0039]); and
route the first query to a second one of the plurality of query management nodes in response to determining that the first query management node does not have the processing bandwidth designated for execution of the first query (Lee [0080]).
Regarding 14-15, the system steps of claims 5-6 substantially encompass the method recited in claims 14-15. Therefore, claims 14-15 are rejected for at least the same reason as claims 5-6 above.
Regarding 20, the system steps of claim 6 substantially encompass the medium recited in claim 20. Therefore, claim 20 is rejected for at least the same reason as claim 6 above.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HASANUL MOBIN, whose telephone number is (571) 270-1289. The examiner can normally be reached from 9:00 AM to 6:00 PM EST, Monday through Friday.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones can be reached at 571-272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HASANUL MOBIN/
Primary Examiner, Art Unit 2168