Prosecution Insights
Last updated: April 19, 2026
Application No. 18/518,916

DYNAMICALLY SCALING A DISTRIBUTED DATABASE ACCORDING TO A CLUSTER-WIDE RESOURCE ALLOCATION

Status: Non-Final OA (§103)
Filed: Nov 24, 2023
Examiner: HOANG, KEN
Art Unit: 2168
Tech Center: 2100 — Computer Architecture & Software
Assignee: Amazon Technologies, Inc.
OA Round: 3 (Non-Final)

Grant Probability: 72% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (277 granted / 383 resolved; +17.3% vs TC avg, above average)
Interview Lift: +31.6% in resolved cases with an interview
Typical Timeline: 3y 5m average prosecution (28 applications currently pending)
Career History: 411 total applications across all art units

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§103: 60.8% (+20.8% vs TC avg)
§102: 7.2% (-32.8% vs TC avg)
§112: 7.3% (-32.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 383 resolved cases.
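As a sanity check, the headline figures above can be recomputed from the raw counts. This is an illustrative sketch only; the function names are not part of any real analytics API, and the Tech Center average is merely backed out of the reported delta.

```python
# Recompute the headline examiner statistics from the raw counts.
# Names here are illustrative, not any real analytics API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

def implied_tc_average(rate: float, delta_vs_tc: float) -> float:
    """Back out the Tech Center average from the reported delta."""
    return rate - delta_vs_tc

rate = allow_rate(277, 383)               # about 72.3%, reported as 72%
tc_avg = implied_tc_average(rate, 17.3)   # about 55.0%
```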

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/07/2025 has been entered.

Examiner Notes

(1) In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution. MPEP 714.02 recites: “Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121 (b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” Amendments not pointing to specific support in the disclosure may be deemed not to comply with the provisions of 37 C.F.R. 1.121 (b), (c), (d), and (h) and therefore held not fully responsive. Generic statements such as "Applicants believe no new matter has been introduced" may be deemed insufficient.
(2) Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Response to Arguments

Applicant’s arguments with respect to claims 1, 5, and 14 have been considered but are moot in view of the new ground(s) of rejection (see new reference of Anand).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-5, 11, 13-14, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Saxena et al. (U.S. Patent No. 10,922,316 B2) in view of Anand et al. (U.S. Pub. No. 2024/0152404 A1), further in view of Wu et al. (U.S. Patent No. 11,030,169 B1).

Regarding claim 1, Saxena teaches a system, comprising: at least one processor; and a memory, storing program instructions that when executed by the at least one processor, cause the at least one processor to implement a database system, configured to: monitor performance metrics collected from a cluster of different query processing nodes of the database system assigned to handle access to different shards of a system-managed table for a scaling event (col.
10, compute nodes may receive instructions specific to the shards or partitions of the data to which the compute node has access; compute nodes may implement metrics collection in order to obtain various performance metrics that may be collected for performing granular performance analysis for database queries; also see col. 7, line 38-50, control plane may implement cluster performance monitoring, which may track, store, organize and/or evaluate performance metrics collected from the queries performed at the processing cluster); responsive to detecting the scaling event: determine a scaling decision for a query processing node of the different query processing nodes based, at least in part, on an evaluation of the respective performance metrics, wherein the scaling decision selects one of a plurality of scaling operations, wherein the plurality of scaling operations comprise: increasing or decreasing database capacity units (DCUs) allocated to the query processing node from a total number of DCUs that are a cluster-wide allocation of DCUs for the table across the different query processing nodes (col. 7, line 47-56, performance monitoring may evaluate processing cluster performance in order to trigger the performance of various control plane operations (node replacement or failover operations); cluster scaling may be implemented as part of the control plane to respond to user requests to add or remove a node from a processing cluster or automatically triggered requests/events to add or remove nodes (e.g., based on utilization thresholds for processing, storage, network, or other cluster resources); also see Fig. 8, col. 11, line 65-67 and col. 12, line 1-17, wherein the number of reserved query execution slots may be expanded by obtaining execution slots from the general query execution resources).
Saxena does not explicitly disclose: wherein individual DCUs of the total number of DCUs are assignable differently to individual ones of the different query processing nodes but a sum total of DCUs assigned across the different query processing nodes does not exceed the cluster-wide allocation of DCUs for the table.

Anand teaches: wherein individual DCUs of the total number of DCUs are assignable differently to individual ones of the different query processing nodes but a sum total of DCUs assigned across the different query processing nodes does not exceed the cluster-wide allocation of DCUs for the table (paragraph [0056]-[0058], a cross-cluster capacity component 210 at each of a plurality of clusters, where each cluster has a scalable number of nodes in the form of physical or virtual containers; the cross-cluster capacity component broadcasts available or required capacity for its own cluster, the broadcast providing information on capacity availability or capacity requirements from other clusters relating to their nodes; also see paragraph [0064], determining availability versus requirement of capacity; for example, a specific cluster (Cluster A) has extra capacity and at the same time another specific cluster (Cluster B) requires more capacity; the cross-cluster capacity component running on Cluster A deallocates node(s) from Cluster A and the cross-cluster capacity component running on Cluster B reallocates the same node(s) to Cluster B; note, as each cluster comprises its own capacity requirement with a scalable number of nodes, and Cluster B requires more capacity for processing, the extra capacity (nodes) from Cluster A is deallocated and allocated to Cluster B; thus, the capacity across clusters is unchanged, which reads on “query processing nodes does not exceed the cluster-wide allocation of DCUs for the table” as claimed).
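The limitation mapped to Anand above amounts to a simple invariant: per-node DCU assignments may differ, but their sum never exceeds the cluster-wide allocation, and moving capacity between nodes leaves the total unchanged. The following is a minimal Python sketch of that bookkeeping, with illustrative names only; it is not the applicant's or Anand's implementation.

```python
# Sketch of the claimed invariant: DCUs are assignable unevenly across
# query processing nodes, but the sum never exceeds the cluster-wide cap.

class ClusterAllocation:
    def __init__(self, total_dcus: int):
        self.total_dcus = total_dcus      # cluster-wide allocation for the table
        self.node_dcus: dict[str, int] = {}

    def assign(self, node: str, dcus: int) -> bool:
        """Assign DCUs to a node only if the cluster-wide cap is respected."""
        proposed = sum(self.node_dcus.values()) - self.node_dcus.get(node, 0) + dcus
        if proposed > self.total_dcus:
            return False
        self.node_dcus[node] = dcus
        return True

    def reassign(self, src: str, dst: str, dcus: int) -> None:
        """Move capacity between nodes; the cluster-wide sum is unchanged."""
        self.node_dcus[src] -= dcus
        self.node_dcus[dst] = self.node_dcus.get(dst, 0) + dcus

cluster = ClusterAllocation(total_dcus=16)
cluster.assign("node-a", 10)
cluster.assign("node-b", 6)
cluster.reassign("node-a", "node-b", 4)   # now 6 / 10; sum still 16
```

This mirrors the Anand mapping: deallocating nodes from Cluster A and reallocating them to Cluster B changes per-cluster shares while the overall capacity stays constant.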
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the scaling operations of Saxena the feature wherein individual DCUs of the total number of DCUs are assignable differently to individual ones of the different query processing nodes but a sum total of DCUs assigned across the different query processing nodes does not exceed the cluster-wide allocation of DCUs for the table. The motivation to do so would be to automate application deployment, scaling, and management (Anand, paragraph [0003], line 9-10).

Saxena and Anand do not explicitly disclose: adding a new query processing node to handle a sub-portion split from the shard of the table assigned to the query processing node.

Wu teaches: adding a new query processing node to handle a sub-portion split from the shard of the table assigned to the query processing node (col. 14; also see Fig. 6, col. 15, line 20-37, the split operation is local because it is performed at the shard, and a portion of the data currently stored in the node hosting the shard being split remains stored in the same node after the local split operation is performed; once a subset of shards is selected to be split, a local split operation may be performed wherein a first portion of a shard remains stored in a node hosting the shard and a second portion of the shard is caused to be stored in a second node; a split key value within a key range of a shard that is to be split is determined; the second portion of the shard is assigned to a new shard; also see col.
4, line 41-49, one or more additional nodes may be included in a provider network and may receive a split second portion of a shard of a sub-set; additional nodes, such as nodes A and B, may be added as demand for the provider network service increases). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the scaling operations of Saxena the step of adding a new query processing node to handle a sub-portion split from the shard of the table assigned to the query processing node. The motivation to do so would be to perform a re-sharding operation for a set of shards without re-balancing data across the full set of shards (Wu, col. 2, line 12-14).

Saxena as modified by Anand and Wu further teaches: wherein the new query processing node is allocated one or more DCUs from the total number of DCUs (Saxena, col. 7, line 47-56, performance monitoring may evaluate processing cluster performance in order to trigger the performance of various control plane operations (node replacement or failover operations); cluster scaling may be implemented as part of the control plane to respond to user requests to add or remove a node from a processing cluster or automatically triggered requests/events to add or remove nodes (e.g., based on utilization thresholds for processing, storage, network, or other cluster resources); also see Fig. 8, col. 11, line 65-67 and col. 12, line 1-17, wherein the number of reserved query execution slots may be expanded by obtaining execution slots from the general query execution resources); and perform the scaling operation selected by the database system with respect to the query processing node (Saxena, col.
7, line 47-56, performance monitoring may evaluate processing cluster performance in order to trigger the performance of various control plane operations (node replacement or failover operations); cluster scaling may be implemented as part of the control plane to respond to user requests to add or remove a node from a processing cluster or automatically triggered requests/events to add or remove nodes (e.g., based on utilization thresholds for processing, storage, network, or other cluster resources); also see Fig. 8, col. 11, line 65-67 and col. 12, line 1-17, wherein the number of reserved query execution slots may be expanded by obtaining execution slots from the general query execution resources).

Regarding claim 4, Saxena as modified by Anand and Wu teaches all claimed limitations as set forth in the rejection of claim 1, and further teaches: wherein the database system is a relational database service implemented as part of a provider network and wherein the system-managed table is created in response to a request received at the database service to create the system-managed table (Saxena, Fig. 3, col. 7, line 16-55, the database service may be implemented by a large collection of computing devices; different subsets of these computing devices may be controlled by the control plane; the control plane may implement cluster performance monitoring, which may track, store, organize and/or evaluate performance metrics collected for queries performed at processing clusters; performance monitoring may receive reported metrics with regard to Fig. 3 and store them in a common storage location for the database).

As per claims 5 and 14, these claims are rejected on grounds corresponding to the same rationales given above for rejected claim 1 and are similarly rejected.

Regarding claim 11, Saxena as modified by Anand and Wu teaches all claimed limitations as set forth in the rejection of claim 5, and further teaches: receiving, at the database system, a request to perform a split operation for the table (Wu, col.
12, line 23-30, an admin console may provide an option for a re-sharding operation to be performed in response to a user input, such as a request to perform a re-sharding operation); and responsive to the request, adding the new query processing node to handle the sub-portion split from the portion of the table assigned to the query processing node (Wu, col. 14; also see Fig. 6, col. 15, line 20-37, the split operation is local because it is performed at the shard, and a portion of the data currently stored in the node hosting the shard being split remains stored in the same node after the local split operation is performed; once a subset of shards is selected to be split, a local split operation may be performed wherein a first portion of a shard remains stored in a node hosting the shard and a second portion of the shard is caused to be stored in a second node; a split key value within a key range of a shard that is to be split is determined; the second portion of the shard is assigned to a new shard; also see col. 4, line 41-49, one or more additional nodes may be included in a provider network and may receive a split second portion of a shard of a sub-set; additional nodes, such as nodes A and B, may be added as demand for the provider network service increases), wherein the new query processing node is allocated one or more DCUs from the total number of DCUs (Saxena, col. 7, line 47-56, performance monitoring may evaluate processing cluster performance in order to trigger the performance of various control plane operations (node replacement or failover operations); cluster scaling may be implemented as part of the control plane to respond to user requests to add or remove a node from a processing cluster or automatically triggered requests/events to add or remove nodes (e.g., based on utilization thresholds for processing, storage, network, or other cluster resources); also see Fig. 8, col. 11, line 65-67 and col.
12, line 1-17, wherein the number of reserved query execution slots may be expanded by obtaining execution slots from the general query execution resources).

Regarding claim 13, Saxena as modified by Anand and Wu teaches all claimed limitations as set forth in the rejection of claim 5, and further teaches: wherein the plurality of query processing nodes comprise at least one distributed transaction node and at least one data access node and wherein the query processing node is the at least one distributed transaction node (Saxena, Fig. 3, col. 7, line 16-65, Fig. 3 illustrates leader node 310 and compute nodes 320a-320n for data processing; also see col. 8, line 20-51).

Regarding claim 19, Saxena as modified by Anand and Wu teaches all claimed limitations as set forth in the rejection of claim 14, and further teaches: wherein an initial distribution of at least some of the total number of DCUs is made across the plurality of query processing nodes responsive to creating the table as a system-managed table or responsive to a request enabling the table to be a system-managed table (Saxena, Fig. 3, col. 7, line 16-55, the database service may be implemented by a large collection of computing devices; different subsets of these computing devices may be controlled by the control plane; the control plane may implement cluster performance monitoring, which may track, store, organize and/or evaluate performance metrics collected for queries performed at processing clusters; performance monitoring may receive reported metrics with regard to Fig. 3 and store them in a common storage location for the database; also see Saxena, col.
7, line 47-56, performance monitoring may evaluate processing cluster performance in order to trigger the performance of various control plane operations (node replacement or failover operations); cluster scaling may be implemented as part of the control plane to respond to user requests to add or remove a node from a processing cluster or automatically triggered requests/events to add or remove nodes (e.g., based on utilization thresholds for processing, storage, network, or other cluster resources); also see Fig. 8, col. 11, line 65-67 and col. 12, line 1-17, wherein the number of reserved query execution slots may be expanded by obtaining execution slots from the general query execution resources).

Regarding claim 20, Saxena as modified by Anand and Wu teaches all claimed limitations as set forth in the rejection of claim 14, and further teaches: wherein the database system is a database service implemented as part of a provider network and wherein the table is created as a system-managed table in response to a request received at the database service (Saxena, Fig. 3, col. 7, line 16-55, the database service may be implemented by a large collection of computing devices; different subsets of these computing devices may be controlled by the control plane; the control plane may implement cluster performance monitoring, which may track, store, organize and/or evaluate performance metrics collected for queries performed at processing clusters; performance monitoring may receive reported metrics with regard to Fig. 3 and store them in a common storage location for the database).

Claims 2, 6-7 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Saxena et al. (U.S. Patent No. 10,922,316 B2) in view of Anand et al. (U.S. Pub. No. 2024/0152404 A1) and Wu et al. (U.S. Patent No. 11,030,169 B1), further in view of PAL et al. (U.S. Pub. No. 2018/0089324 A1, referred to as “PAL”).
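The "local split" relied on from Wu in the rejections above reduces to: determine a split key inside a shard's key range, keep the first portion on the hosting node, and assign the second portion to a new shard on a newly added node. A hedged Python sketch of that operation follows; the types and names are hypothetical illustrations, not Wu's disclosed implementation.

```python
# Sketch of a Wu-style local split: the first portion of the key range
# stays on the hosting node; the second portion goes to a new node.

from dataclasses import dataclass

@dataclass
class Shard:
    key_lo: int   # inclusive lower bound of the shard's key range
    key_hi: int   # exclusive upper bound
    node: str     # query processing node hosting the shard

def local_split(shard: Shard, split_key: int, new_node: str) -> tuple[Shard, Shard]:
    """Split at split_key; return (remaining shard, newly created shard)."""
    assert shard.key_lo < split_key < shard.key_hi
    first = Shard(shard.key_lo, split_key, shard.node)   # stays in place
    second = Shard(split_key, shard.key_hi, new_node)    # assigned to new node
    return first, second

first, second = local_split(Shard(0, 100, "node-1"), split_key=60, new_node="node-2")
```

The split is "local" in that only the second portion moves; no re-balancing of the remaining shards is required, which matches the motivation cited from Wu.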
Regarding claim 2, Saxena as modified by Anand and Wu teaches all claimed limitations as set forth in the rejection of claim 1, but does not explicitly disclose: wherein increasing the DCUs allocated to the query processing node comprises: increasing a local DCU maximum at the query processing node by a number of DCUs.

PAL teaches: increasing a local DCU maximum at the query processing node by a number of DCUs (paragraph [0711], the query coordinator can monitor the partitions in the search layers and dynamically adjust the number of partitions in each depending on the status of individual partitions, the status of the nodes, the status of the query, etc.; the query coordinator can allocate additional partitions 3606; also see paragraph [0557], the platform’s ubiquitous, on-demand access to a shared pool of configurable computing resources; also see paragraph [0606], the maximum capacity that can be run at any one time; also see paragraphs [0667], [0705]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the scaling operations of Saxena the feature wherein increasing the DCUs allocated to the query processing node comprises increasing a local DCU maximum at the query processing node by a number of DCUs. The motivation to do so would be to provide the results within a particular amount of time (PAL, paragraph [0666]).
Saxena as modified by Anand, Wu and PAL further teaches: removing a corresponding number of DCUs from a reserve pool of DCUs in the cluster-wide allocation of DCUs (PAL, paragraph [0711], the query coordinator can monitor the partitions in the search layers and dynamically adjust the number of partitions in each depending on the status of individual partitions, the status of the nodes, the status of the query, etc.; the query coordinator can allocate additional partitions 3606; also see paragraph [0557], the platform’s ubiquitous, on-demand access to a shared pool of configurable computing resources; note, adding additional partitions from the pool of computing resources indicates that the resource from the pool is removed to be allocated to the needed layer; also see paragraph [0638], the node can determine how many processors to allocate to different tasks; each processor of the node can be used as a partition to intake, process and collect data according to the task, which reads on the claim); identifying one or more other query processing nodes of the plurality of query processing nodes to remove the corresponding number of DCUs (PAL, paragraph [0711], the query coordinator can monitor the partitions in the search layers and dynamically adjust the number of partitions in each depending on the status of individual partitions, the status of the nodes, the status of the query, etc.; the query coordinator can allocate additional partitions 3606; if the query coordinator 3304 determines that some of the partitions are underutilized, it can deallocate them from a particular layer and make them available for other queries, or assign them to a different layer, etc.; also see paragraph [0557], the platform’s ubiquitous, on-demand access to a shared pool of configurable computing resources); lowering respective local DCU maximums at the identified other query processing nodes to remove the corresponding number of DCUs (PAL, paragraph [0711], if the query coordinator 3304 determines that some of
the partitions are underutilized, then it can deallocate them from a particular layer and make them available for other queries, or assign them to a different layer, etc.; also see paragraph [0557], the platform’s ubiquitous, on-demand access to a shared pool of configurable computing resources); and returning the corresponding number of DCUs to the reserve pool of DCUs (PAL, paragraph [0711], if the query coordinator 3304 determines that some of the partitions are underutilized, it can deallocate them from a particular layer and make them available for other queries, or assign them to a different layer, etc.; also see paragraph [0557], the platform’s ubiquitous, on-demand access to a shared pool of configurable computing resources).

As per claim 6, this claim is rejected on grounds corresponding to the same rationales given above for rejected claim 2 and is similarly rejected.

Regarding claim 7, Saxena as modified by Anand and Wu teaches all claimed limitations as set forth in the rejection of claim 5, but does not explicitly disclose: wherein decreasing the DCUs allocated to the query processing node comprises: decreasing a local DCU maximum at the query processing node by a number of DCUs; and adding a corresponding number of DCUs to a reserve pool of DCUs in the cluster-wide allocation of DCUs.
PAL teaches: wherein decreasing the DCUs allocated to the query processing node comprises: decreasing a local DCU maximum at the query processing node by a number of DCUs (paragraph [0711], the query coordinator can monitor the partitions in the search layers and dynamically adjust the number of partitions in each depending on the status of individual partitions, the status of the nodes, the status of the query, etc.; the query coordinator can allocate additional partitions 3606; if the query coordinator 3304 determines that some of the partitions are underutilized, it can deallocate them from a particular layer and make them available for other queries, or assign them to a different layer, etc.; also see paragraph [0557], the platform’s ubiquitous, on-demand access to a shared pool of configurable computing resources); and adding a corresponding number of DCUs to a reserve pool of DCUs in the cluster-wide allocation of DCUs (paragraph [0711], if the query coordinator 3304 determines that some of the partitions are underutilized, it can deallocate them from a particular layer and make them available for other queries, or assign them to a different layer, etc.; also see paragraph [0557], the platform’s ubiquitous, on-demand access to a shared pool of configurable computing resources).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into the scaling operations of Saxena the feature wherein decreasing the DCUs allocated to the query processing node comprises decreasing a local DCU maximum at the query processing node by a number of DCUs and adding a corresponding number of DCUs to a reserve pool of DCUs in the cluster-wide allocation of DCUs.
The motivation to do so would be to provide the results within a particular amount of time (PAL, paragraph [0666]).

Regarding claim 15, Saxena as modified by Anand and Wu teaches all claimed limitations as set forth in the rejection of claim 14, but does not explicitly disclose: wherein increasing the DCUs allocated to the query processing node comprises: increasing a local DCU maximum at the query processing node by a number of DCUs.

PAL teaches: increasing a local DCU maximum at the query processing node by a number of DCUs (paragraph [0711], the query coordinator can monitor the partitions in the search layers and dynamically adjust the number of partitions in each depending on the status of individual partitions, the status of the nodes, the status of the query, etc.; the query coordinator can allocate additional partitions 3606; also see paragraph [0557], the platform’s ubiquitous, on-demand access to a shared pool of configurable computing resources; also see paragraph [0606], the maximum capacity that can be run at any one time; also see paragraphs [0667], [0705]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate increasing a local DCU maximum at the query processing node by a number of DCUs into the scaling operations of Saxena. The motivation to do so would be to provide the results within a particular amount of time (PAL, paragraph [0666]).
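The reserve-pool mechanic recited in claims 2, 7, and 15 (raising a node's local DCU maximum draws from a reserve pool; lowering it returns DCUs to the pool) can be sketched as follows. This assumes, for illustration only, that the pool plus the per-node maxima together make up the cluster-wide allocation; the class and method names are hypothetical.

```python
# Sketch of the claimed reserve-pool bookkeeping: the cluster-wide
# allocation is the reserve pool plus the per-node local DCU maxima.

class ReservePool:
    def __init__(self, reserve: int, node_max: dict[str, int]):
        self.reserve = reserve            # unassigned DCUs in the pool
        self.node_max = dict(node_max)    # local DCU maximum per node

    def increase_local_max(self, node: str, dcus: int) -> bool:
        """Raise a node's local maximum by removing DCUs from the pool."""
        if dcus > self.reserve:
            return False                  # pool cannot cover the increase
        self.reserve -= dcus
        self.node_max[node] += dcus
        return True

    def decrease_local_max(self, node: str, dcus: int) -> None:
        """Lower a node's local maximum and return the DCUs to the pool."""
        self.node_max[node] -= dcus
        self.reserve += dcus

pool = ReservePool(reserve=4, node_max={"node-a": 6, "node-b": 6})
pool.increase_local_max("node-a", 2)   # reserve shrinks, node-a max grows
pool.decrease_local_max("node-b", 3)   # node-b max shrinks, reserve grows
```

In every state the reserve plus the per-node maxima sum to the original cluster-wide allocation, which is the invariant the claims rely on.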
Saxena as modified by Anand, Wu and PAL further teaches: removing a corresponding number of DCUs from a reserve pool of DCUs in the cluster-wide allocation of DCUs (PAL, paragraph [0711], the query coordinator can monitor the partitions in the search layers and dynamically adjust the number of partitions in each depending on the status of individual partitions, the status of the nodes, the status of the query, etc.; the query coordinator can allocate additional partitions 3606; also see paragraph [0557], the platform’s ubiquitous, on-demand access to a shared pool of configurable computing resources; note, adding additional partitions from the pool of computing resources indicates that the resource from the pool is removed to be allocated to the needed layer; also see paragraph [0638], the node can determine how many processors to allocate to different tasks; each processor of the node can be used as a partition to intake, process and collect data according to the task, which reads on the claim).

As per claim 16, this claim is rejected on grounds corresponding to the same rationales given above for rejected claim 7 and is similarly rejected.

Claims 3, 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Saxena et al. (U.S. Patent No. 10,922,316 B2) in view of Anand et al. (U.S. Pub. No. 2024/0152404 A1) and Wu et al. (U.S. Patent No. 11,030,169 B1), further in view of Ghosh et al. ("Morphus: Supporting Online Reconfigurations in Sharded NoSQL Systems"; 2015 IEEE 12th International Conference on Autonomic Computing).

Regarding claim 3, Saxena as modified by Anand and Wu teaches all claimed limitations as set forth in the rejection of claim 1, but does not explicitly disclose: wherein adding the new query processing node to handle the sub-portion split from the shard of the table assigned to the query processing node comprises: creating a copy of the shard for the new query processing node to access.
Ghosh teaches: creating a copy of the shard for the new query processing node to access (servers are grouped into disjoint replica sets; each replica set contains the same number of servers, which are exact replicas of each other). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include creating a copy of the shard for the new query processing node to access into the scaling operations of Saxena. Motivation to do so would be to address a major problem that directly affects the life of a system administrator: without an automated reconfiguration primitive, reconfiguration operations today are laborious and manual, consume a significant amount of time, and open the room for human errors during the reconfiguration (Ghosh, page 2, lines 11-16).

Saxena as modified by Anand, Wu and Ghosh further teach: adding the new query processing node to a cluster of query processing nodes comprising the plurality of query processing nodes (Wu, col. 4, lines 41-49, one or more additional nodes may be included in a provider network and may receive a split second portion of a shard of a sub-set; additional nodes, such as nodes A and B, may be added as demand for provider network service increases); determining a split point in the shard that identifies the sub-portion of the shard to reassign to the new query processing node (Wu, Fig. 6, col. 15, lines 20-37, the split operation is local because it is performed at the shard, and a portion of the data currently stored in the node hosting the shard being split remains stored in the same node after the local split operation is performed; once a subset of shards is selected to be split, a local split operation may be performed wherein a first portion of a shard remains stored in a node hosting the shard and a second portion of the shard is caused to be stored in a second node; a split key value within a key range of a shard that is to be split is determined; the second portion of the shard is assigned to a new shard; also see col. 14, lines 35-65); blocking updates to the shard at the query processing node (Ghosh, page 3, left column, create new chunks and set split points for new chunks; disable background process; in the isolation phase, oplog replay is disabled at the selected secondaries; collecting a time stamp in order to know where to restart replaying the oplog in the future); updating mapping information to indicate that the sub-portion of the shard is assigned to the new query processing node (Wu, col. 13, lines 13-27, a directory may be updated to indicate newly added shards, to indicate which pieces of data are associated with different shards, and to indicate storage locations of different shards; also see col. 4, lines 41-49, one or more additional nodes may be included in a provider network and may receive a split second portion of a shard of a sub-set); and unblocking the updates to the shard (Ghosh, page 3, right column, each primary forwards each item in the oplog to its appropriate new secondary, based on the new chunk ranges; this secondary can be located from our placement plan in the execution phase, if the operation involved the new shard key; the write throttle is required because of the atomic commit phase that follows right afterward), wherein the query processing node is assigned a remaining portion of the shard and the new query processing node is assigned the sub-portion (Wu, Fig. 6, col. 15, lines 20-37, as quoted above; also see col. 14, lines 35-65).

As per claims 8 and 17, these claims are rejected on grounds corresponding to the same rationales given above for rejected claim 3 and are similarly rejected.

Claims 9, 10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Saxena et al. (U.S. Patent No. 10,922,316 B2) in view of Anand et al. (U.S. Pub. No. 2024/0152404 A1) and Wu et al. (U.S. Patent No. 11,030,169 B1), further in view of Geiger et al. (U.S. Patent No. 10,754,691 B2).
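To orient the reader, the claim 3 split sequence mapped above onto Wu and Ghosh (determine a split point, block updates, reassign the sub-portion in the mapping information, unblock) might be sketched as follows. This is a hypothetical illustration only, not from the application or the cited references; every name in it is invented.

```python
# Hypothetical sketch of the claim 3 split sequence: pick a split key,
# block writes, reassign the sub-portion to a new node in the shard map,
# then unblock. All identifiers are illustrative.

def split_shard(shard_map: dict, shard_id: str, new_node: str,
                keys_sorted: list):
    entry = shard_map[shard_id]
    # Determine a split point within the shard's key range
    # (cf. Wu's split key value; here, simply the median key).
    split_key = keys_sorted[len(keys_sorted) // 2]
    # Block updates to the shard while the move is in flight
    # (cf. Ghosh's isolation phase / write throttle).
    entry["blocked"] = True
    sub_portion = [k for k in keys_sorted if k >= split_key]
    # Update mapping information: the sub-portion is assigned to the
    # new node; the remaining portion stays on the original node.
    shard_map[shard_id + "-split"] = {
        "node": new_node,
        "keys": sub_portion,
        "blocked": False,
    }
    entry["keys"] = [k for k in keys_sorted if k < split_key]
    # Unblock updates once the mapping is consistent.
    entry["blocked"] = False
    return split_key
```

The design point worth noting is that only the directory entry changes atomically; the bulk of the data never leaves the original node for the remaining portion, which is what makes the split "local" in Wu's sense.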
Regarding claim 9, Saxena as modified by Anand and Wu teach all claimed limitations as set forth in the rejection of claim 5, but do not explicitly disclose: wherein the total number of DCUs is specified in a request received at the database system.

Geiger teaches: wherein the total number of DCUs is specified in a request received at the database system (Fig. 6, col. 10, lines 39-55, the scaling requests a 25% increase in memory). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include wherein the total number of DCUs is specified in a request received at the database system into the scaling operations of Saxena. Motivation to do so would be to provide application workloads with the capability to dynamically scale resources across multiple cloud providers, using an approach that can be executed on an information handling system implementing a policy-based request/approval system to scale resources across multiple cloud environments/providers (Geiger, col. 7, lines 14-19).

Regarding claim 10, Saxena as modified by Anand and Wu teach all claimed limitations as set forth in the rejection of claim 5, but do not explicitly disclose: wherein the total number of DCUs is increased from a prior total number according to a request to increase DCUs received at the database system.

Geiger teaches: wherein the total number of DCUs is increased from a prior total number according to a request to increase DCUs received at the database system (Fig. 6, col. 10, lines 39-55, the scaling requests a 25% increase in memory). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include wherein the total number of DCUs is increased from a prior total number according to a request to increase DCUs received at the database system into the scaling operations of Saxena.
Motivation to do so would be the same as for claim 9: to provide application workloads with the capability to dynamically scale resources across multiple cloud providers, using an approach that can be executed on an information handling system implementing a policy-based request/approval system to scale resources across multiple cloud environments/providers (Geiger, col. 7, lines 14-19).

As per claim 18, this claim is rejected on grounds corresponding to the same rationales given above for rejected claim 9 and is similarly rejected.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Saxena et al. (U.S. Patent No. 10,922,316 B2) in view of Anand et al. (U.S. Pub. No. 2024/0152404 A1) and Wu et al. (U.S. Patent No. 11,030,169 B1), further in view of Greenwood et al. (U.S. Pub. No. 2019/0158422 A1).

Regarding claim 12, Saxena as modified by Anand and Wu teach all claimed limitations as set forth in the rejection of claim 5, and further teach: evaluating, by the database system, further respective performance metrics collected from the plurality of different query processing nodes of the database system (Saxena, col. 7, lines 46-55, performance monitoring may evaluate processing cluster performance in order to trigger the performance of various control plane operations; cluster scaling may be implemented as part of the control plane), wherein the evaluating determines a further scaling decision selecting a further scaling operation for a different query processing node of the different query processing nodes (Saxena, col. 7, lines 47-56, performance monitoring may evaluate processing cluster performance in order to trigger the performance of various control plane operations (node replacement or failover operations); cluster scaling may be implemented as part of the control plane in response to user requests to add or remove a node from a processing cluster, or automatically triggered requests/events to add or remove nodes (e.g., based on utilization thresholds for processing, storage, network, or other cluster resources); also see Fig. 8, col. 11, lines 65-67 and col. 12, lines 1-17, wherein the number of reserved query execution slots may be expanded by obtaining execution slots from the general query execution resources), but do not explicitly disclose: determining that a minimum number of DCUs from the total number of DCUs are not available to perform the further scaling operation.

Greenwood teaches: determining that a minimum number of DCUs from the total number of DCUs are not available to perform the further scaling operation (paragraph [0056], the fragmentation measure may indicate that 200 Terabytes of storage amongst resource hosts in a data zone is unavailable; the total amount of unavailable storage may then be subtracted from the total storage to determine the available capacity of 100 Terabytes; also see Fig. 8, paragraph [0018]). Noted, "for" indicates intended use; see Minton v. Nat'l Ass'n of Securities Dealers, Inc., 336 F.3d 1373, 1381, 67 USPQ2d 1614, 1620 (Fed. Cir. 2003) ("a whereby clause in a method claim is not given weight when it simply expresses the intended result of a process step positively recited"). Examples of claim language, although not exhaustive, that may raise a question as to the limiting effect of the language in a claim are: (A) "adapted to" or "adapted for" clauses; (B) "wherein" clauses; and (C) "whereby" clauses. Therefore, intended-use limitations are not required to be taught; see MPEP 2111.04 [R-3].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include determining that a minimum number of DCUs from the total number of DCUs are not available to perform the further scaling operation into the scaling operations of Saxena. Motivation to do so would be to provide consistent and reliable on-demand virtual computing resources (Greenwood, paragraph [0003], lines 14-15).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEN HOANG, whose telephone number is (571) 272-8401. The examiner can normally be reached M-F 7:30am-5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Charles Rones, can be reached at (571) 272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEN HOANG/
Examiner, Art Unit 2168

Prosecution Timeline

Nov 24, 2023
Application Filed
Apr 02, 2025
Non-Final Rejection — §103
Jul 07, 2025
Response Filed
Aug 04, 2025
Final Rejection — §103
Oct 07, 2025
Response after Non-Final Action
Nov 07, 2025
Request for Continued Examination
Nov 16, 2025
Response after Non-Final Action
Nov 24, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596751
IMAGE SYNTHESIS BASED ON PREDICTIVE ANALYTICS
2y 5m to grant Granted Apr 07, 2026
Patent 12579118
SYSTEM AND METHODS FOR AUTOMATED STANDARDIZATION OF HETEROGENEOUS DATA USING MACHINE LEARNING
2y 5m to grant Granted Mar 17, 2026
Patent 12531138
PARAMETERIZED TEMPLATE FOR CLINICAL RESEARCH STUDY SYSTEMS
2y 5m to grant Granted Jan 20, 2026
Patent 12481898
SCALABLE INTEGRATED INFORMATION STRUCTURE SYSTEM
2y 5m to grant Granted Nov 25, 2025
Patent 12475469
FRAUD DETECTION SYSTEMS AND METHODS
2y 5m to grant Granted Nov 18, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
99%
With Interview (+31.6%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 383 resolved cases by this examiner. Grant probability derived from career allow rate.
