Prosecution Insights
Last updated: April 19, 2026
Application No. 19/155,413

METADATA LOAD BALANCING METHOD, APPARATUS, AND DEVICE, AND NON-VOLATILE READABLE STORAGE MEDIUM

Non-Final OA (§101, §102, §103)
Filed: Aug 11, 2025
Examiner: TSAI, SHENG JEN
Art Unit: 2139
Tech Center: 2100 (Computer Architecture & Software)
Assignee: IEIT Systems Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 6m
Grant Probability with Interview: 83%

Examiner Intelligence

Career Allow Rate: 70% (above average; 556 granted / 790 resolved; +15.4% vs TC avg)
Interview Lift: +13.0% (moderate) on resolved cases with interview
Avg Prosecution: 3y 6m (typical timeline)
Currently Pending: 25 applications
Total Applications: 815 (career, across all art units)

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§102: 24.7% (-15.3% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)
Tech Center averages are estimates; based on career data from 790 resolved cases.

Office Action

Rejection grounds: §101, §102, §103
DETAILED ACTION

1. This Office Action is taken in response to Applicant's application 19/155,413 filed on 8/11/2025. Claims 1-3, 6-17, and 19-23 are pending for consideration.

2. Examiner's Note

(1) When amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification that support the structure relied on for proper interpretation, and to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution. MPEP 714.02 recites: "Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714." Amendments that do not point to specific support in the disclosure may be deemed not to comply with the provisions of 37 C.F.R. 1.121(b), (c), (d), and (h) and therefore held not fully responsive. Generic statements such as "Applicants believe no new matter has been introduced" may be deemed insufficient.

(2) Examiner has cited particular column/paragraph and line numbers in the references applied to the claims for the convenience of Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, Applicant is respectfully requested to consider each reference in its entirety as potentially teaching all or part of the claimed invention, together with the context of each passage as taught by the prior art or cited by the Examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

3. Claims 1-3, 6, 8-14, and 19-23 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Kumar et al. (US Patent 10,909,094, hereinafter Kumar).

As to claim 1, Kumar teaches A metadata load balancing method [In some embodiments, the data storage service 120 may be a cloud-based service that hosts data stores, such as block- or chunk-based data volumes, of clients … In some embodiments, data storage service 120 may be configured as a number of distinct systems (e.g., in a cluster topology) implementing load balancing and other request management features configured to dynamically manage large-scale web services request processing loads (c5 L33-56)], comprising: acquiring metadata load pressure information corresponding to respective metadata services [performing autoscaling to create new cells when an original cell is overloaded with too many metadata records -- as shown in figure 1, where metadata records/files (132a, 132b) are distributed among a plurality of metadata storage locations (130, 134), which provide metadata services; as shown in figures 2B and 2C, where metadata records/files are distributed among a plurality of cells (cell I 280, cell II 290, cell III 295), which provide metadata services; FIG.
2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)] in a distributed file storage cluster [… For example, the components of the data storage service 120 may be implemented by a distributed system including a number of computing nodes (or simply, nodes), such as computing systems described below. In some embodiments, the functionality of a given storage service system component may be implemented by a particular computing node or may be distributed across several computing nodes. 
In some embodiments, a given computing node may implement the functionality of more than one storage service system component (c6 L3-12); In some embodiments, the data storage service 120 may be a cloud-based service that hosts data stores, such as block- or chunk-based data volumes, of clients … In some embodiments, data storage service 120 may be configured as a number of distinct systems (e.g., in a cluster topology) implementing load balancing and other request management features configured to dynamically manage large-scale web services request processing loads (c5 L33-56)]. determining a metadata migration time [Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs. In embodiments, when it is determined that a metadata record is to be migrated to a different storage location, the scheduler determines a time to migrate the metadata record. The migration time may lie within a migration window, selected based on an expected migration time needed for the metadata record and the collected time data in order to reduce a probability that record mutations will occur during the migration. In embodiments, the jobs may be snapshot jobs that modify a snapshot record, and the migration may be performed as a result of a cell partitioning operation occurring within the snapshotting system (abstract)], target metadata services scheduled for metadata migration [for example, newly created cells become the target destination receiving the migrated metadata records -- In some embodiments, migrations may occur for a large number of records within a relatively short period of time. 
For example, in some embodiments, many migrations may be caused by a single event, such as a partition within the data store or snapshotting system. For example, in a partition, a cell of computing resources is automatically partitioned into two or more cells in response to a scaling event. In that case, a subset of the metadata records on the original cell must be scheduled for migration to a new cell within a short period of time … (c4 L9-25); FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32)], and a quantity of to-be-migrated metadata between the target metadata services according to the metadata load pressure information [as shown in figure 2C, where metadata record C (292) is migrated from cell II (290) to cell III (295); Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. 
In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs… (abstract); The systems and methods described herein may be employed in various combinations and in various embodiments to implement a metadata record migration system that schedules migrations of the metadata records based on observed mutations of the metadata records … (c2 L60-67); FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32); As shown, the system next selects at time t3 record D 540 to migrate over record C 530, which also has a migration window 544 available at the same time. In this case, the system may select to migrate record D first, because the migration window 534 for record C is relatively large. The system may determine by migrating record D first, there will still be time left to migrate record C after record D in window 534. Indeed, in this example, after the migration of record D is completed, there is sufficient time remaining in window 534 to migrate record C, and record C is migrated at time t4. 
Accordingly, in this manner, the system may schedule the migration of many records in the system very quickly, based on the available migration windows and available resources, so that the migrations are performed as speedily and safely as possible (c17 L66 to c18 L13)]; acquiring migration parameters corresponding to respective sub-tree partitions in the target metadata services [the corresponding “migration parameter” is the “time data of mutations of metadata records” -- Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs … (abstract); To address such difficulties, embodiments of the disclosed system may monitor the jobs that mutate the metadata records over a period of time, and collect time data for any mutations, in order to learn the mutation behavior of the metadata record. The collected time data may then be programmatically analyzed to determine an appropriate time to perform the migration of the metadata record … (c3 L22-38); In some embodiments, the time data tracker 160 may output a set of record mutation time data. In some embodiments, the output data may be maintained in volatile memory … (c7 L52-67); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. 
In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)] and a workload input/output (I/O) mode of the distributed file storage cluster [as shown in figure 1, where clients (110) perform I/O operations, via a network (115), through jobs (140), which perform mutation operations (142a, 142b) that change/update metadata records, and where metadata records/files (132a, 132b) are distributed among a plurality of metadata storage locations (130, 134), which provide metadata services; as shown in figures 2B and 2C, where metadata records/files are distributed among a plurality of cells (cell I 280, cell II 290, cell III 295), which provide metadata services; In some embodiments, the data storage service 120 may be a cloud-based service that hosts data stores, such as block- or chunk-based data volumes, of clients. In some embodiments, the data storage service 120 may implement other types of data storage, such as for example a database, a file system, or the like … In some embodiments, data storage service 120 may be configured as a number of distinct systems (e.g., in a cluster topology) implementing load balancing and other request management features configured to dynamically manage large-scale web services request processing load (c5 L32-56); In some embodiments, the mutation operations 142 may occur only at certain points in the job process … For example, when a long running job such as a snapshotting job finally completes, the metadata record may be updated a final time but without immediately alerting the client or user that initiated the job … (c7 L8-33); … If the client expects to have a steady-state workload that requires an instance to be up most of the time, the client may reserve a High Uptime Ratio instance and potentially pay an even lower hourly usage fee … In some embodiments, compute instance configurations may also
include compute instances with a general or specific purpose, such as computational workloads for compute intensive applications (e.g., high-traffic web applications … memory intensive workloads … storage optimized workloads … (c11 L21-57); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)] through load analysis components deployed in the metadata services [as shown in figure 6, step 620, “Select a time to perform the migration based on the analysis of time data and the expected migration duration of the record …;” At operation 620, a selection of the migration time for the record is made. The selection may be based on an analysis of the collected time data and an expected migration duration of the record. The selection is performed so as to reduce the likelihood that updates to the record will occur during the migration. In some embodiments, the selection may be performed by for example the migration schedule 180, as discussed in connection with FIG. 1. 
In some embodiments, the collected time data may be analyzed to determine a set of possible migration windows to perform the migration … (c18 L57-67)]; when determining that the workload I/O mode is metadata intensive I/O [as shown in figure 1, where clients (110) perform I/O operations, via a network (115), through jobs (140), which perform mutation operations (142a, 142b) that change/update metadata records, and where metadata records/files (132a, 132b) are distributed among a plurality of metadata storage locations (130, 134); Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs … (abstract); To address such difficulties, embodiments of the disclosed system may monitor the jobs that mutate the metadata records over a period of time, and collect time data for any mutations, in order to learn the mutation behavior of the metadata record. The collected time data may then be programmatically analyzed to determine an appropriate time to perform the migration of the metadata record … (c3 L22-38); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two.
In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)], determining exporter sub-tree partitions and importer sub-tree partitions according to the migration parameters through sub-tree selection components deployed in the metadata services [as shown in figure 2B and 2C, where cell I (280), cell II (290), and cell III (295) form a sub-tree structure; FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. 
In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)]; and migrating metadata in the quantity of to-be-migrated metadata from the exporter sub-tree partitions to the importer sub-tree partitions when the metadata migration time arrive [as shown in figure 2B and 2C, where cell I (280), cell II (290), and cell III (295) form a sub-tree structure; FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32); As shown, the system next selects at time t3 record D 540 to migrate over record C 530, which also has a migration window 544 available at the same time. In this case, the system may select to migrate record D first, because the migration window 534 for record C is relatively large. The system may determine by migrating record D first, there will still be time left to migrate record C after record D in window 534. Indeed, in this example, after the migration of record D is completed, there is sufficient time remaining in window 534 to migrate record C, and record C is migrated at time t4. 
Accordingly, in this manner, the system may schedule the migration of many records in the system very quickly, based on the available migration windows and available resources, so that the migrations are performed as speedily and safely as possible (c17 L66 to c18 L13); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)].

As to claim 2, Kumar teaches The metadata load balancing method according to claim 1, wherein the acquiring metadata load pressure information corresponding to respective metadata services in a distributed file storage cluster comprises: acquiring the metadata load pressure information corresponding to the respective metadata services [performing autoscaling to create new cells (i.e., metadata services) when an original cell is overloaded with too many metadata records -- FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes.
When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)] through load monitors deployed in the metadata services of the distributed file storage cluster [To address such difficulties, embodiments of the disclosed system may monitor the jobs that mutate the metadata records over a period of time, and collect time data for any mutations, in order to learn the mutation behavior of the metadata record. The collected time data may then be programmatically analyzed to determine an appropriate time to perform the migration of the metadata record … (c3 L22-38); … For example, the components of the data storage service 120 may be implemented by a distributed system including a number of computing nodes (or simply, nodes), such as computing systems described below. In some embodiments, the functionality of a given storage service system component may be implemented by a particular computing node or may be distributed across several computing nodes. In some embodiments, a given computing node may implement the functionality of more than one storage service system component (c6 L3-12)]. 
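The autoscaling behavior the Examiner repeatedly cites (Kumar c14 L18-32 and c20 L16-28, the partition of cell II into cell III) can be sketched in a few lines for readers unfamiliar with cell-based metadata stores. This is an illustrative sketch only; the function name, cell names, and half-split policy are hypothetical and appear in neither Kumar nor the application.

```python
def autoscale(cells, max_records):
    """Partition any overloaded cell: keep half of its metadata records
    and hand the other half to a newly provisioned cell, dividing the
    data between the two (cf. Kumar's partition of cell II into cell III).
    `cells` maps a cell name to its list of metadata record ids."""
    out = {}
    new_count = 0
    for name, records in cells.items():
        if len(records) > max_records:
            mid = len(records) // 2
            new_count += 1
            out[name] = records[:mid]
            out[f"{name}-part{new_count}"] = records[mid:]  # hypothetical naming scheme
        else:
            out[name] = records
    return out
```

Under this sketch, a four-record cell with a three-record threshold splits into two cells of two records each, while cells under the threshold pass through unchanged.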

As to claim 3, Kumar teaches The metadata load balancing method according to claim 1, wherein the determining a metadata migration time, target metadata services scheduled for metadata migration, and a quantity of to-be-migrated metadata between the target metadata services according to the metadata load pressure information comprises: determining the metadata migration time [Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs. In embodiments, when it is determined that a metadata record is to be migrated to a different storage location, the scheduler determines a time to migrate the metadata record. The migration time may lie within a migration window, selected based on an expected migration time needed for the metadata record and the collected time data in order to reduce a probability that record mutations will occur during the migration. In embodiments, the jobs may be snapshot jobs that modify a snapshot record, and the migration may be performed as a result of a cell partitioning operation occurring within the snapshotting system (abstract)], the target metadata services scheduled for metadata migration [In some embodiments, migrations may occur for a large number of records within a relatively short period of time. For example, in some embodiments, many migrations may be caused by a single event, such as a partition within the data store or snapshotting system. For example, in a partition, a cell of computing resources is automatically partitioned into two or more cells in response to a scaling event.
In that case, a subset of the metadata records on the original cell must be scheduled for migration to a new cell within a short period of time … (c4 L9-25)], and the quantity of to-be-migrated metadata between the target metadata services [as shown in figure 2B and 2C, where cell I (280), cell II (290), and cell III (295) form a sub-tree structure; Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs… (abstract); The systems and methods described herein may be employed in various combinations and in various embodiments to implement a metadata record migration system that schedules migrations of the metadata records based on observed mutations of the metadata records … (c2 L60-67); As shown, the system next selects at time t3 record D 540 to migrate over record C 530, which also has a migration window 544 available at the same time. In this case, the system may select to migrate record D first, because the migration window 534 for record C is relatively large. The system may determine by migrating record D first, there will still be time left to migrate record C after record D in window 534. Indeed, in this example, after the migration of record D is completed, there is sufficient time remaining in window 534 to migrate record C, and record C is migrated at time t4. 
Accordingly, in this manner, the system may schedule the migration of many records in the system very quickly, based on the available migration windows and available resources, so that the migrations are performed as speedily and safely as possible (c17 L66 to c18 L13)] according to the metadata load pressure information through a metadata migration initiator set in a pre-selected metadata service [as shown in figure 2C, where metadata migration is initiated in Cell II (290) by partitioning cell III (295); Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs. In embodiments, when it is determined that a metadata record is to be migrated to a different storage location, the scheduler determines a time to migrate the metadata record. The migration time may lie within a migration window, selected based on an expected migration time needed for the metadata record and the collected time data in order to reduce a probability that record mutations will occur during the migration. In embodiments, the jobs may be snapshot jobs that modify a snapshot record, and the migration may be performed as a result of a cell partitioning operation occurring within the snapshotting system (abstract); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)]. 
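Kumar's migration-window scheduling, which the rejection of claim 3 leans on (abstract; the record C / record D example at c17 L66 to c18 L13), can be approximated with a greedy scheduler: migrate the record with the scarcest quiet time first, so records with larger windows still fit afterwards. This is a minimal sketch under assumed data shapes; nothing here is taken from Kumar's actual implementation.

```python
def schedule_migrations(records):
    """Greedily order record migrations by window scarcity.  `records`
    maps a record name to (expected_duration, list of (start, end) quiet
    windows during which no mutations are expected).  Records with the
    least total quiet time go first, mirroring Kumar's choice to migrate
    record D before record C because C's window was larger."""
    schedule = []
    clock = 0
    order = sorted(records, key=lambda r: sum(e - s for s, e in records[r][1]))
    for name in order:
        duration, windows = records[name]
        for start, end in windows:
            begin = max(start, clock)
            if begin + duration <= end:  # migration fits before the window closes
                schedule.append((name, begin))
                clock = begin + duration
                break
    return schedule
```

With record D holding a narrow window (3, 6) and record C a wide one (3, 10), each needing 2 time units, D is scheduled at t=3 and C still fits at t=5, matching the ordering in Kumar's example.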

As to claim 6, Kumar teaches The metadata load balancing method according to claim 1, wherein the acquiring the migration parameters corresponding to the respective sub-tree partitions in the target metadata services through load analysis components deployed in the metadata services comprises: collecting statistics on historical workloads corresponding to the respective sub-tree partitions through the load analysis components deployed in the metadata services [To address such difficulties, embodiments of the disclosed system may monitor the jobs that mutate the metadata records over a period of time, and collect time data for any mutations, in order to learn the mutation behavior of the metadata record. The collected time data may then be programmatically analyzed to determine an appropriate time to perform the migration of the metadata record … (c3 L22-38); as shown in figure 6, step 620, “Select a time to perform the migration based on the analysis of time data and the expected migration duration of the record …;” At operation 620, a selection of the migration time for the record is made. The selection may be based on an analysis of the collected time data and an expected migration duration of the record. The selection is performed so as to reduce the likelihood that updates to the record will occur during the migration. In some embodiments, the selection may be performed by for example the migration schedule 180, as discussed in connection with FIG. 1.
In some embodiments, the collected time data may be analyzed to determine a set of possible migration windows to perform the migration … (c18 L57-67)]; determining metadata access differences of the sub-tree partitions according to the historical workloads [as shown in figure 2B and 2C, where cell I (280), cell II (290), and cell III (295) form a sub-tree structure; and cell I(280) has only one record A (282) which is accessed one time, while cell II (290) has two records B (291) and C (292), each is accessed one time for a total two times; FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. 
In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)]; and determining the migration parameters corresponding to the respective sub-tree partitions according to the metadata access differences [the corresponding “migration parameter” is the “time data of mutations behaviors of metadata records” -- Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs … (abstract); To address such difficulties, embodiments of the disclosed system may monitor the jobs that mutate the metadata records over a period of time, and collect time data for any mutations, in order to learn the mutation behavior of the metadata record. The collected time data may then be programmatically analyzed to determine an appropriate time to perform the migration of the metadata record … (c3 L22-38); In some embodiments, the time data tracker 160 may output a set of record mutation time data. In some embodiments, the output data may be maintained in volatile memory … (c7 L52-67); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)]. 
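The claim-6 flow mapped above (collect statistics on historical workloads per sub-tree partition, determine metadata access differences, and derive migration parameters from those differences) can be sketched as follows. All names are hypothetical; this is one illustrative reading of the claim language, not code from the application or from Kumar:

```python
# Hypothetical sketch: tally historical accesses per sub-tree partition,
# compute each partition's access difference against the mean, and treat
# the signed difference as that partition's migration parameter.
from collections import Counter
from typing import Dict, List

def migration_parameters(access_log: List[str]) -> Dict[str, float]:
    counts = Counter(access_log)               # historical workload per partition
    mean = sum(counts.values()) / len(counts)  # average load across partitions
    # positive value -> hotter than average (candidate exporter partition),
    # negative value -> colder than average (candidate importer partition)
    return {part: n - mean for part, n in counts.items()}
```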
As to claim 8, Kumar teaches The metadata load balancing method according to claim 6, wherein after the determining the exporter sub-tree partitions and importer sub-tree partitions according to the migration parameters, the metadata load balancing method further comprises: when a historical metadata access request table that reflects spatial locality exists among historical metadata access request tables maintained in the metadata services, selecting target sub-tree partitions from the sub-tree partitions at same levels as the exporter sub-tree partitions [as shown in figure 2B and 2C, where cell I (280), cell II (290), and cell III (295) form a sub-tree structure; FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. 
In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)]; and increasing the migration parameters of the target sub-tree partitions by a preset value [the corresponding “migration parameter” is the “time data of mutations behaviors of metadata records” -- Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs … (abstract); To address such difficulties, embodiments of the disclosed system may monitor the jobs that mutate the metadata records over a period of time, and collect time data for any mutations, in order to learn the mutation behavior of the metadata record. The collected time data may then be programmatically analyzed to determine an appropriate time to perform the migration of the metadata record … (c3 L22-38); In some embodiments, the time data tracker 160 may output a set of record mutation time data. In some embodiments, the output data may be maintained in volatile memory … (c7 L52-67)]. As to claim 9, Kumar teaches The metadata load balancing method according to claim 1, wherein the acquiring metadata load pressure information corresponding to respective metadata services in a distributed file storage cluster comprises: acquiring a number of metadata requests processed per unit duration corresponding to the respective metadata services in the distributed file storage cluster [To address such difficulties, embodiments of the disclosed system may monitor the jobs that mutate the metadata records over a period of time, and collect time data for any mutations, in order to learn the mutation behavior of the metadata record. 
The collected time data may then be programmatically analyzed to determine an appropriate time to perform the migration of the metadata record … (c3 L22-38); In some embodiments, the time data tracker 160 may output a set of record mutation time data. In some embodiments, the output data may be maintained in volatile memory … (c7 L52-67)]; and determining the metadata load pressure information corresponding to the respective metadata services according to the number of metadata requests processed per unit duration corresponding to the respective metadata services [as shown in figure 2B and 2C, where cell I (280), cell II (290), and cell III (295) form a sub-tree structure; FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. 
In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)]. As to claim 10, Kumar teaches The metadata load balancing method according to claim 9, wherein the determining the metadata load pressure information corresponding to the respective metadata services according to the number of metadata requests processed per unit duration corresponding to the respective metadata services comprises: determining the metadata load pressure information corresponding to the respective metadata services according to a statistical number of metadata requests processed per unit duration corresponding to the respective metadata services within a preset duration [To address such difficulties, embodiments of the disclosed system may monitor the jobs that mutate the metadata records over a period of time, and collect time data for any mutations, in order to learn the mutation behavior of the metadata record. The collected time data may then be programmatically analyzed to determine an appropriate time to perform the migration of the metadata record … (c3 L22-38); In some embodiments, the time data tracker 160 may output a set of record mutation time data. In some embodiments, the output data may be maintained in volatile memory … (c7 L52-67); as shown in figure 2B and 2C, where cell I (280), cell II (290), and cell III (295) form a sub-tree structure; FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. 
In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)]. As to claim 11, Kumar teaches The metadata load balancing method according to claim 1, wherein the determining a metadata migration time, target metadata services scheduled for metadata migration, and a quantity of to-be-migrated metadata between the target metadata services according to the metadata load pressure information comprises: determining metadata load balance values corresponding to the respective metadata services according to the metadata load pressure information [as shown in figure 2B and 2C, where cell I (280), cell II (290), and cell III (295) form a sub-tree structure; FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. 
In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)]; and determining the metadata migration time [Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs. In embodiments, when it is determined that a metadata record is to be migrated to a different storage location, the scheduler determines a time to migrate the metadata record. The migration time may lie within a migration window, selected based on an expected migration time needed for the metadata record and the collected time data in order to reduce a probability that record mutations will occur during the migration. 
In embodiments, the jobs may be snapshot jobs that modify a snapshot record, and the migration may be performed as a result of a cell partitioning operation occurring within the snapshotting system (abstract)], the target metadata services scheduled for metadata migration [In some embodiments, migrations may occur for a large number of record within a relatively short period of time. For example, in some embodiments, many migrations may be caused by a single event, such as a partition within the data store or snapshotting system. For example, in a partition, a cell of computing resources is automatically partitioned into two or more cells in response to a scaling event. In that case, a subset of the metadata records on the original cell must be scheduled for migration to a new cell within a short period of time … (c4 L9-25)], and the quantity of to-be-migrated metadata between the target metadata services according to the metadata load pressure information according to the metadata load balance values [Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs… (abstract); The systems and methods described herein may be employed in various combinations and in various embodiments to implement a metadata record migration system that schedules migrations of the metadata records based on observed mutations of the metadata records … (c2 L60-67); As shown, the system next selects at time t3 record D 540 to migrate over record C 530, which also has a migration window 544 available at the same time. In this case, the system may select to migrate record D first, because the migration window 534 for record C is relatively large. 
The system may determine by migrating record D first, there will still be time left to migrate record C after record D in window 534. Indeed, in this example, after the migration of record D is completed, there is sufficient time remaining in window 534 to migrate record C, and record C is migrated at time t4. Accordingly, in this manner, the system may schedule the migration of many records in the system very quickly, based on the available migration windows and available resources, so that the migrations are performed as speedily and safely as possible (c17 L66 to c18 L13)]. As to claim 12, Kumar teaches The metadata load balancing method according to claim 11, wherein the determining the metadata migration time, the target metadata services scheduled for metadata migration, and the quantity of to-be-migrated metadata between the target metadata services according to the metadata load balance values comprises: determining whether a metadata load balance value of the metadata load balance values that exceeds a preset threshold exists [as shown in figure 2B and 2C, where cell I (280), cell II (290), and cell III (295) form a sub-tree structure; Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs… (abstract); The systems and methods described herein may be employed in various combinations and in various embodiments to implement a metadata record migration system that schedules migrations of the metadata records based on observed mutations of the metadata records … (c2 L60-67); FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. 
In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)]; and when the metadata load balance value that exceeds the preset threshold exists, performing the step of determining the metadata migration time [Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs. In embodiments, when it is determined that a metadata record is to be migrated to a different storage location, the scheduler determines a time to migrate the metadata record.
The migration time may lie within a migration window, selected based on an expected migration time needed for the metadata record and the collected time data in order to reduce a probability that record mutations will occur during the migration. In embodiments, the jobs may be snapshot jobs that modify a snapshot record, and the migration may be performed as a result of a cell partitioning operation occurring within the snapshotting system (abstract)], the target metadata services scheduled for metadata migration [In some embodiments, migrations may occur for a large number of record within a relatively short period of time. For example, in some embodiments, many migrations may be caused by a single event, such as a partition within the data store or snapshotting system. For example, in a partition, a cell of computing resources is automatically partitioned into two or more cells in response to a scaling event. In that case, a subset of the metadata records on the original cell must be scheduled for migration to a new cell within a short period of time … (c4 L9-25)], and the quantity of to-be-migrated metadata between the target metadata services according to the metadata load pressure information according to the metadata load balance values [Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. 
In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs… (abstract); The systems and methods described herein may be employed in various combinations and in various embodiments to implement a metadata record migration system that schedules migrations of the metadata records based on observed mutations of the metadata records … (c2 L60-67); As shown, the system next selects at time t3 record D 540 to migrate over record C 530, which also has a migration window 544 available at the same time. In this case, the system may select to migrate record D first, because the migration window 534 for record C is relatively large. The system may determine by migrating record D first, there will still be time left to migrate record C after record D in window 534. Indeed, in this example, after the migration of record D is completed, there is sufficient time remaining in window 534 to migrate record C, and record C is migrated at time t4. Accordingly, in this manner, the system may schedule the migration of many records in the system very quickly, based on the available migration windows and available resources, so that the migrations are performed as speedily and safely as possible (c17 L66 to c18 L13)]. 
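The claim-11/12 decision mapped above (derive a load-balance value per metadata service from its load pressure, and schedule a migration only when some value exceeds a preset threshold) can be sketched as follows. This is a hedged illustration under assumed semantics for "load balance value"; the names and the half-imbalance heuristic are invented for the example:

```python
# Illustrative sketch: compute load-balance values as deviations from the
# mean pressure; if the hottest service exceeds the preset threshold,
# plan a migration of half the imbalance toward the coldest service.
from typing import Dict, Optional, Tuple

def plan_migration(pressure: Dict[str, float],
                   threshold: float) -> Optional[Tuple[str, str, float]]:
    mean = sum(pressure.values()) / len(pressure)
    balance = {svc: p - mean for svc, p in pressure.items()}  # load-balance values
    hottest = max(balance, key=balance.get)
    if balance[hottest] <= threshold:
        return None                      # cluster is balanced enough; no migration
    coldest = min(balance, key=balance.get)
    # quantity of to-be-migrated metadata: half the hottest/coldest imbalance
    quantity = (pressure[hottest] - pressure[coldest]) / 2
    return hottest, coldest, quantity
```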
As to claim 13, Kumar teaches The metadata load balancing method according to claim 1, wherein the determining a metadata migration time, target metadata services scheduled for metadata migration, and a quantity of to-be-migrated metadata between the target metadata services according to the metadata load pressure information comprises: obtaining load differences that the respective metadata services tolerate [as shown in figure 2B and 2C, where cell I (280), cell II (290), and cell III (295) form a sub-tree structure; and cell I(280) has only one record A (282) which is accessed one time, while cell II (290) has two records B (291) and C (292), each is accessed one time for a total two times; Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs… (abstract); The systems and methods described herein may be employed in various combinations and in various embodiments to implement a metadata record migration system that schedules migrations of the metadata records based on observed mutations of the metadata records … (c2 L60-67); FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event.
For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)]; and determining the metadata migration time [Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs. In embodiments, when it is determined that a metadata record is to be migrated to a different storage location, the scheduler determines a time to migrate the metadata record. The migration time may lie within a migration window, selected based on an expected migration time needed for the metadata record and the collected time data in order to reduce a probability that record mutations will occur during the migration. 
In embodiments, the jobs may be snapshot jobs that modify a snapshot record, and the migration may be performed as a result of a cell partitioning operation occurring within the snapshotting system (abstract)], the target metadata services scheduled for metadata migration [In some embodiments, migrations may occur for a large number of record within a relatively short period of time. For example, in some embodiments, many migrations may be caused by a single event, such as a partition within the data store or snapshotting system. For example, in a partition, a cell of computing resources is automatically partitioned into two or more cells in response to a scaling event. In that case, a subset of the metadata records on the original cell must be scheduled for migration to a new cell within a short period of time … (c4 L9-25)], and the quantity of to-be-migrated metadata between the target metadata services by combining the metadata load pressure information and the load differences [as shown in figure 2B and 2C, where cell I(280) has only one record A (282) which is accessed one time, while cell II (290) has two records B (291) and C (292), each is accessed one time for a total two times; Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs… (abstract); The systems and methods described herein may be employed in various combinations and in various embodiments to implement a metadata record migration system that schedules migrations of the metadata records based on observed mutations of the metadata records … (c2 L60-67); As shown, the system next selects at time t3 record D 540 to migrate over record C 530, which also has a migration window 544 available at the same time. 
In this case, the system may select to migrate record D first, because the migration window 534 for record C is relatively large. The system may determine by migrating record D first, there will still be time left to migrate record C after record D in window 534. Indeed, in this example, after the migration of record D is completed, there is sufficient time remaining in window 534 to migrate record C, and record C is migrated at time t4. Accordingly, in this manner, the system may schedule the migration of many records in the system very quickly, based on the available migration windows and available resources, so that the migrations are performed as speedily and safely as possible (c17 L66 to c18 L13)]. As to claim 14, Kumar teaches The metadata load balancing method according to claim 1, wherein the acquiring metadata load pressure information corresponding to respective metadata services in a distributed file storage cluster comprises: acquiring a number of input/output operations per second performed by the metadata services in the distributed file storage cluster, respectively [as shown in figure 1, where clients (110) performs I/O operations, via a network (115), through jobs (140), which perform mutation operations (142a, 142b) that change/update metadata records, and where metadata records/files (132a, 132b) are distributed among a plurality of metadata storage locations (130, 134), which provide metadata services; as shown in figures 2B and 2C, where metadata records/files are distributed among a plurality of cells (cell I 280, cell II 290, cell III 295), which provide metadata services; To address such difficulties, embodiments of the disclosed system may monitor the jobs that mutate the metadata records over a period of time, and collect time data for any mutations, in order to learn the mutation behavior of the metadata record. 
The collected time data may then be programmatically analyzed to determine an appropriate time to perform the migration of the metadata record … (c3 L22-38); In some embodiments, the time data tracker 160 may output a set of record mutation time data. In some embodiments, the output data may be maintained in volatile memory … (c7 L52-67); In some embodiments, the data storage service 120 may be a cloud-based service that hosts data stores, such as block- or chunk-based data volumes, of clients. In some embodiments, the data storage service 120 may implement other types of data storage, such as for example a database, a file system, or the like … In some embodiments, data storage service 120 may be configured as a number of distinct systems (e.g., in a cluster topology) implementing load balancing and other request management features configured to dynamically manage large-scale web services request processing load (c5 L32-56); In some embodiments, the mutation operations 142 may occur only at certain points in the job process … For example, when a long running job such as a snapshotting job finally completes, the metadata record may be updated a final time but without immediately alerting the client or user that initiated the job … (c7 L8-33)]; and determining the metadata load pressure information corresponding to the respective metadata services according to the number of input/output operations per second performed by the metadata services [Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. 
In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs… (abstract); The systems and methods described herein may be employed in various combinations and in various embodiments to implement a metadata record migration system that schedules migrations of the metadata records based on observed mutations of the metadata records … (c2 L60-67); FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)]. As to claim 19, it recites substantially the same limitations as in claim 1, and is rejected for the same reasons set forth in the analysis of claim 1. 
Refer to “As to claim 1” presented earlier in this Office Action for details. Further regarding claim 19, Kumar teaches a memory [memory, figure 10, 1020], configured to store a computer program [program instructions, figure 10, 1025]; and a processor, configured to implement steps of a metadata load balancing method [processors, figure 10, 1010a-1010n]. As to claim 20, it recites substantially the same limitations as in claim 19, and is rejected for the same reasons set forth in the analysis of claim 19. Refer to “As to claim 19” presented earlier in this Office Action for details. Further regarding claim 20, Kumar teaches a non-volatile readable storage medium [System memory 1020 may be configured to store instructions and data accessible by processor(s) 1010. In various embodiments, system memory 1020 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions and data implementing one or more desired functions, such as those methods, techniques, and data described above, are shown stored within system memory 1020 as code 1025 and data 1035 (c23 L62 to c24 L5)]. As to claim 21, it recites substantially the same limitations as in claim 2, and is rejected for the same reasons set forth in the analysis of claim 2. Refer to “As to claim 2” presented earlier in this Office Action for details. As to claim 22, it recites substantially the same limitations as in claim 3, and is rejected for the same reasons set forth in the analysis of claim 3. Refer to “As to claim 3” presented earlier in this Office Action for details. As to claim 23, it recites substantially the same limitations as in claim 6, and is rejected for the same reasons set forth in the analysis of claim 6. Refer to “As to claim 6” presented earlier in this Office Action for details. 
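As a reading aid only (this sketch is not part of the Office Action record, and the code appears in neither the application nor Kumar), the IOPS-based load-pressure determination mapped to claim 14 above can be illustrated as follows; all identifiers are hypothetical.

```python
# Illustrative sketch only; MetadataService and load_pressure are
# hypothetical names, not taken from the application or from Kumar.
from dataclasses import dataclass

@dataclass
class MetadataService:
    name: str
    iops: int  # observed input/output operations per second for this service

def load_pressure(services):
    """Derive per-service metadata load pressure from IOPS counts,
    normalized against the cluster-wide total (one plausible reading of
    the claim 14 mapping)."""
    total = sum(s.iops for s in services) or 1  # avoid division by zero
    return {s.name: s.iops / total for s in services}

cluster = [MetadataService("mds-a", 900), MetadataService("mds-b", 100)]
pressure = load_pressure(cluster)  # {"mds-a": 0.9, "mds-b": 0.1}
```

Under this reading, "mds-a" would be identified as the pressured service and become a candidate exporter in the later claims.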
Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 4. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kumar et al. (US Patent 10,909,094, hereinafter Kumar), and in view of Muniswamy et al. (US Patent 10,275,489, hereinafter Muniswamy). As to claim 7, Kumar teaches The metadata load balancing method according to claim 6, wherein the determining the migration parameters corresponding to the respective sub-tree partitions according to the metadata access differences comprises: acquiring a preset metadata throughput [Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs … (abstract); To address such difficulties, embodiments of the disclosed system may monitor the jobs that mutate the metadata records over a period of time, and collect time data for any mutations, in order to learn the mutation behavior of the metadata record. 
The collected time data may then be programmatically analyzed to determine an appropriate time to perform the migration of the metadata record … (c3 L22-38)]; and when determining that a historical workload greater than the preset metadata throughput exists, determining the migration parameters corresponding to the respective sub-tree partitions according to the metadata access differences [as shown in figures 2B and 2C, where cell I (280), cell II (290), and cell III (295) form a sub-tree structure; FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32)]. Regarding claim 7, Kumar does not expressly teach a maximum metadata throughput. However, Muniswamy specifically teaches a maximum metadata throughput for a Query Accelerator Node (QAN) which accesses metadata records in a database system [The QANs of a given QAF may differ not only in their capabilities in some embodiments, as shown in FIG. 10, but also in their workloads. FIG. 
11 illustrates an example of intelligent client-side routing of I/O (input/output) requests, resulting in a potentially non-uniform workload distribution among query accelerator nodes of a fleet, according to at least some embodiments… The IRMs may, for example, be provided metadata regarding the differences between the QANs 1180 and/or the rules to be followed when selecting a destination for a given query. Such metadata may include rules indicating that certain types of queries should be directed only to master QANs, for example, or to specific QANs which are known to generate and cache code for those types of queries. In some embodiments, the workload directed to any given QAN may depend on the subset or partition of the application's data set to which a query is directed … (c23 L16-45); In at least one embodiment, an additional mechanism for dynamically rate-limiting the back-end writes 1641 may be implemented. Respective dynamic capacity indicators 1667 such as token buckets may be initialized, e.g., by the write coordinator (as indicated by arrow 1629) based on the write throughput limits 1663 for one or more data item collections … The available capacity may be increased based on a refill rate setting corresponding to the most recent write throughput limit —e.g., if the write throughput limit is 100 writes per second for a particular table, a corresponding token bucket may be refilled at the rate of 10 tokens every 100 milliseconds (up to a maximum bucket population of 100). In some embodiments, write capacity indicators may not be us … (c31 L66 to c32 L6); Write coordinators 1784A and 1784B of QAFs QAF1 and QAF2 may each attempt to discover their write throughput limits using a discovery protocol in the depicted embodiment … The maximum sustainable level based on the gradual increase may be considered the back-end write throughput limit for some period of time (or until additional throttling or error messages are received). 
The limits may be re-checked periodically in some embodiments, e.g., once every T seconds. The write coordinators may update their respective back-end write throughput limit metadata records 1763 (e.g., 1763A or 1763B) as and when new information is obtained via the discovery protocol. In the depicted embodiment, the metadata 1763 may include an indication of the most recent timestamp 1745 (e.g., 1745A or 1745B) at which the discovery protocol was executed, the total number of write-back worker threads 1747 (e.g., 1747A or 1747B) instantiated concurrently … (c32 L39-67); In one simple example scenario, to support a steady load of 100 back-end writes per second, bucket 2002 may be configured with an initial population of 100 tokens, a maximum allowable population of 100 tokens and a minimum of zero tokens. The refill rate may be set to 100 tokens per second, and one token may be added for refill purposes (assuming the maximum population limit is not exceeded) once every 10 milliseconds. If a steady state workload of 100 writes per second, uniformly distributed during each second, is experienced at the QAN, the refill rate and the write request rate may balance each other … (c35 L32-52)]. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to set a maximum metadata throughput limit, as specifically demonstrated by Muniswamy, and to incorporate it into the existing scheme disclosed by Kumar, because Muniswamy teaches doing so allows maintaining a steady load in a distributed storage system with a plurality of QANs [In one simple example scenario, to support a steady load of 100 back-end writes per second, bucket 2002 may be configured with an initial population of 100 tokens, a maximum allowable population of 100 tokens and a minimum of zero tokens. 
The refill rate may be set to 100 tokens per second, and one token may be added for refill purposes (assuming the maximum population limit is not exceeded) once every 10 milliseconds. If a steady state workload of 100 writes per second, uniformly distributed during each second, is experienced at the QAN, the refill rate and the write request rate may balance each other … (c35 L32-52)]. 5. Claims 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Kumar et al. (US Patent 10,909,094, hereinafter Kumar), and in view of Tai et al. (US Patent Application Publication 20200026643, hereinafter Tai). As to claim 15, Kumar teaches The metadata load balancing method according to claim 1, wherein the determining exporter sub-tree partitions and importer sub-tree partitions according to the migration parameters comprises: sorting the sub-tree partitions according to the migration parameters to obtain sorting results [the corresponding “migration parameter” is the “time data of mutations of metadata records” -- Systems and methods are provided to implement a metadata record migration system that schedules the migrations of metadata records that are frequently mutated. In embodiments, the scheduler collects timing data of jobs that modify the metadata records, including the timing of various mutation operations within the jobs … (abstract); To address such difficulties, embodiments of the disclosed system may monitor the jobs that mutate the metadata records over a period of time, and collect time data for any mutations, in order to learn the mutation behavior of the metadata record. The collected time data may then be programmatically analyzed to determine an appropriate time to perform the migration of the metadata record … (c3 L22-38); In some embodiments, the time data tracker 160 may output a set of record mutation time data. 
In some embodiments, the output data may be maintained in volatile memory … (c7 L52-67); Tai more expressly teaches the sorting process – records are sorted according to their number/count of access/read/write (i.e., the wear levels or load levels for each record, in order to achieve wear leveling, or load balancing, across all records) as shown in figure 2; One technique of managing the endurance of memory components is wear leveling. A wear leveling operation can attempt to evenly distribute the physical wear across the data units of memory components … Wear leveling techniques often use a sorting process to find the data unit(s) with a maximum read or write count and the data unit(s) with a minimum read count or write count. Data of a data unit having a maximum read or write count can be swapped with data of a data unit having a minimum read or write count in an attempt to evenly distribute the wear across the data units of memory components (¶ 0011); At process 200B of selection process 200, wear leveling management component 113 sorts data units 210 in an order based on the wear metric associated with each of the data units 210. In the particular example, the data units 210 are sorted from the highest write count to the lowest write count (e.g., “WC=867” is first, followed by “WC=683,” and so forth). As noted above, in other implementations a different wear metric can be used. It can also be noted that in other implementations, data units 210 can be sorted in an opposite order (e.g., sorted from lowest wear metric to highest wear metric) … (¶ 0038-0039); At block 415, processing logic sorts the data units 210 in a first order based on a wear metric associated with the data units 210. The wear metric is indicative of a level of physical wear of the data units. Wear metrics can include write count, read count, or a combination of write count and read count. 
For example, the wear metric can be a write count associated with the data units 210, and the data units 210 are sorted in an order from highest write count to lowest write count (¶ 0060)]; and selecting, according to the sorting results, a first preset quantity of sub-tree partitions from an end with large migration parameters as the exporter sub-tree partitions [At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28); Tai more expressly teaches this limitation – One technique of managing the endurance of memory components is wear leveling. A wear leveling operation can attempt to evenly distribute the physical wear across the data units of memory components … Wear leveling techniques often use a sorting process to find the data unit(s) with a maximum read or write count and the data unit(s) with a minimum read count or write count. Data of a data unit having a maximum read or write count can be swapped with data of a data unit having a minimum read or write count in an attempt to evenly distribute the wear across the data units of memory components (¶ 0011); At block 460, processing logic performs the wear leveling operation using entries of the record. For example, processing logic can perform the wear leveling operation using the first entry (e.g., data unit with the maximum wear metric of the sorted record) and the last entry (e.g., data unit with minimum wear metric of the sorted record). 
For example, processing logic can swap the data of the candidate data unit of the first entry having the highest wear metric with the data of the candidate data unit of the last entry having the lowest wear metric. In implementations, a wear leveling operation can be performed on any pair of entries of the record (¶ 0069)], and a second preset quantity of sub-tree partitions from an end with small migration parameters as the importer sub-tree partitions [for example, new cells have full, unused capacity and no existing migration parameters -- In some embodiments, migrations may occur for a large number of record within a relatively short period of time. For example, in some embodiments, many migrations may be caused by a single event, such as a partition within the data store or snapshotting system. For example, in a partition, a cell of computing resources is automatically partitioned into two or more cells in response to a scaling event. In that case, a subset of the metadata records on the original cell must be scheduled for migration to a new cell within a short period of time … (c4 L9-25); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28); Tai more expressly teaches this limitation – One technique of managing the endurance of memory components is wear leveling. 
A wear leveling operation can attempt to evenly distribute the physical wear across the data units of memory components … Wear leveling techniques often use a sorting process to find the data unit(s) with a maximum read or write count and the data unit(s) with a minimum read count or write count. Data of a data unit having a maximum read or write count can be swapped with data of a data unit having a minimum read or write count in an attempt to evenly distribute the wear across the data units of memory components (¶ 0011); At block 460, processing logic performs the wear leveling operation using entries of the record. For example, processing logic can perform the wear leveling operation using the first entry (e.g., data unit with the maximum wear metric of the sorted record) and the last entry (e.g., data unit with minimum wear metric of the sorted record). For example, processing logic can swap the data of the candidate data unit of the first entry having the highest wear metric with the data of the candidate data unit of the last entry having the lowest wear metric. In implementations, a wear leveling operation can be performed on any pair of entries of the record (¶ 0069)]. Regarding claim 15, Kumar does not expressly teach sorting partitions according to the migration parameters to obtain a sorting result, and then selecting exporter and importer entities accordingly. However, Tai specifically teaches sorting records according to the wear levels (i.e., migration parameters) to obtain a sorting result [records are sorted according to their number/count of access/read/write (i.e., the wear levels or load levels for each record, in order to achieve wear leveling, or load balancing, across all records) as shown in figure 2; One technique of managing the endurance of memory components is wear leveling. 
A wear leveling operation can attempt to evenly distribute the physical wear across the data units of memory components … Wear leveling techniques often use a sorting process to find the data unit(s) with a maximum read or write count and the data unit(s) with a minimum read count or write count. Data of a data unit having a maximum read or write count can be swapped with data of a data unit having a minimum read or write count in an attempt to evenly distribute the wear across the data units of memory components (¶ 0011); At process 200B of selection process 200, wear leveling management component 113 sorts data units 210 in an order based on the wear metric associated with each of the data units 210. In the particular example, the data units 210 are sorted from the highest write count to the lowest write count (e.g., “WC=867” is first, followed by “WC=683,” and so forth). As noted above, in other implementations a different wear metric can be used. It can also be noted that in other implementations, data units 210 can be sorted in an opposite order (e.g., sorted from lowest wear metric to highest wear metric) … (¶ 0038-0039); At block 415, processing logic sorts the data units 210 in a first order based on a wear metric associated with the data units 210. The wear metric is indicative of a level of physical wear of the data units. Wear metrics can include write count, read count, or a combination of write count and read count. For example, the wear metric can be a write count associated with the data units 210, and the data units 210 are sorted in an order from highest write count to lowest write count (¶ 0060)], and then selecting exporter and importer entities accordingly [At block 460, processing logic performs the wear leveling operation using entries of the record. 
For example, processing logic can perform the wear leveling operation using the first entry (e.g., data unit with the maximum wear metric of the sorted record) and the last entry (e.g., data unit with minimum wear metric of the sorted record). For example, processing logic can swap the data of the candidate data unit of the first entry having the highest wear metric with the data of the candidate data unit of the last entry having the lowest wear metric. In implementations, a wear leveling operation can be performed on any pair of entries of the record (¶ 0069)]. Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to sort records/partitions according to the wear levels (i.e., migration parameters) to obtain a sorting result, and then select exporter and importer entities accordingly, as specifically demonstrated by Tai, and to incorporate it into the existing scheme disclosed by Kumar, because Tai teaches doing so enhances the endurance and lifespan of the storage devices [One technique of managing the endurance of memory components is wear leveling. A wear leveling operation can attempt to evenly distribute the physical wear across the data units of memory components … (¶ 0011)]. As to claim 16, Kumar in view of Tai teaches The metadata load balancing method according to claim 15, wherein after the selecting, according to the sorting results, a first preset quantity of sub-tree partitions from an end with large migration parameters as the exporter sub-tree partitions, the metadata load balancing method further comprises: determining remaining sub-tree partitions, except for the exporter sub-tree partitions, as invalid migration candidates [Kumar -- as shown in figure 2C, where partition is performed from cell II (290) instead of cell I (280); FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. 
As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32)]. As to claim 17, Kumar teaches The metadata load balancing method according to claim 15, wherein the selecting a second preset quantity of sub-tree partitions from an end with small migration parameters as the importer sub-tree partitions comprises: counting a quantity of sub-tree partitions with spare capacities greater than or equal to a preset capacity value among the sub-tree partitions [Kumar: new cells have full, unused capacities -- In some embodiments, migrations may occur for a large number of record within a relatively short period of time. For example, in some embodiments, many migrations may be caused by a single event, such as a partition within the data store or snapshotting system. For example, in a partition, a cell of computing resources is automatically partitioned into two or more cells in response to a scaling event. In that case, a subset of the metadata records on the original cell must be scheduled for migration to a new cell within a short period of time … (c4 L9-25); In some embodiments, as shown, the metadata record migration manager 150 may manage the migration of metadata records, such as metadata records 132a and 132b for the data storage service 120. 
As shown, in some embodiments, the metadata records may be stored in a metadata storage location 130 … As shown, in some embodiments, there may be multiple storage locations to store metadata records, such as for example new metadata storage location 134 … (c6 L13-31); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)]; when the quantity of sub-tree partitions with spare capacities greater than or equal to the preset capacity value is greater than or equal to the second preset quantity [Kumar: new cells have full, unused capacities, and the number of new cells created is greater than or equal to the second preset quantity -- In some embodiments, migrations may occur for a large number of record within a relatively short period of time. For example, in some embodiments, many migrations may be caused by a single event, such as a partition within the data store or snapshotting system. For example, in a partition, a cell of computing resources is automatically partitioned into two or more cells in response to a scaling event. In that case, a subset of the metadata records on the original cell must be scheduled for migration to a new cell within a short period of time … (c4 L9-25); In some embodiments, as shown, the metadata record migration manager 150 may manage the migration of metadata records, such as metadata records 132a and 132b for the data storage service 120. 
As shown, in some embodiments, the metadata records may be stored in a metadata storage location 130 … As shown, in some embodiments, there may be multiple storage locations to store metadata records, such as for example new metadata storage location 134 … (c6 L13-31); At operation 724, a cell is partitioned into at least one new cell. As discussed, in some embodiments, when it is determined that one cell is handling too much traffic or hosting too much data, that cell may be automatically partitioned into two more cells, which may divide the traffic or data between the two. In some embodiments, the system may provision a new cell of computing resources as a new cell and transfer some of the data stores assigned to the original cell to the new cell … (c20 L16-28)], selecting the second preset quantity of sub-tree partitions from the end with small migration parameters as the importer sub-tree partitions [Kumar: load balancing between two existing cells -- In some embodiments, the data storage service 120 may be a cloud-based service that hosts data stores, such as block- or chunk-based data volumes, of clients … In some embodiments, data storage service 120 may be configured as a number of distinct systems (e.g., in a cluster topology) implementing load balancing and other request management features configured to dynamically manage large-scale web services request processing loads (c5 L33-56); Tai more expressly teaches this limitation -- At block 460, processing logic performs the wear leveling operation using entries of the record. For example, processing logic can perform the wear leveling operation using the first entry (e.g., data unit with the maximum wear metric of the sorted record) and the last entry (e.g., data unit with minimum wear metric of the sorted record). 
For example, processing logic can swap the data of the candidate data unit of the first entry having the highest wear metric with the data of the candidate data unit of the last entry having the lowest wear metric. In implementations, a wear leveling operation can be performed on any pair of entries of the record (¶ 0069)]; and when the quantity of sub-tree partitions with spare capacities greater than or equal to the preset capacity value is less than the second preset quantity, determining the sub-tree partitions with spare capacities greater than or equal to the preset capacity value as the importer sub-tree partitions [Kumar: new cells have full, unused capacities -- as shown in figure 2B and 2C, where cell I (280), cell II (290), and cell III (295) form a sub-tree structure; as shown in figure 2C, where metadata record C (292) is migrated from cell II (290), to cell III (295); FIG. 2C depicts a partitioning process of the cells, which causes a record migration from one cell to another. As shown, at some point in time, the snapshot service 230 may partition 293 cell II 290 to create a new cell III 295. In some embodiments, the new cell may be created using a subset of computing resources from the original cell. In some embodiments, the new cell may be created from a set of newly provisioned computing resources. In some embodiments, the partition may occur as a result of an autoscaling event. For example, the snapshotting service may determine at some point that cell II 290 is handling too many snapshotting jobs or is hosting too many snapshots for too many data volumes. When this condition is detected, the snapshot service may automatically cause cell II to be partitioned into additional cells (c14 L18-32)]. Claim Rejections - 35 USC § 101 6. Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. 
Claim 20 recites “A computer non-volatile readable storage medium …” and the Specification of the current Application does not explicitly exclude a data signal embodied in a carrier wave as this type of storage medium. As such, claim 20 may be directed to a computer readable medium including a data signal embodied in a carrier wave. This subject matter does not fall within a statutory category of invention because it is neither a process, machine, manufacture, nor a composition of matter. Instead, it is directed to a form of energy. Forms of energy do not fall within a statutory category since they are clearly not a series of steps or acts to constitute a process, not a machine, not a tangible physical article or object which is some form of matter to be a product and constitute a manufacture, and not a composition of two or more substances to constitute a composition of matter. Applicant is recommended to change the wording of “non-volatile readable storage medium” to “non-transitory readable storage medium” to overcome the 101 rejection. Conclusion 7. Claims 1-3, 6-17, and 19-23 are rejected as explained above. 8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHENG JEN TSAI whose telephone number is 571-272-4244. The examiner can normally be reached on Monday-Friday, 9-6. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon, can be reached on 571-272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. 
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). /SHENG JEN TSAI/Primary Examiner, Art Unit 2139
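As a reading aid only (this sketch is not part of the Office Action record, and the code appears in neither the application, Kumar, nor Tai), the sort-and-select scheme discussed for claims 15-17 above can be illustrated as follows: sort sub-tree partitions by a migration parameter (analogous to Tai's wear-metric sort), take exporters from the large end, and take importers from the small end subject to a spare-capacity threshold (the claim 17 fallback). Every identifier is hypothetical.

```python
# Illustrative sketch only; every identifier here is hypothetical and
# appears in neither the application, Kumar, nor Tai.
def select_exporters_importers(partitions, n_export, n_import, min_spare):
    """partitions: list of (name, migration_param, spare_capacity) tuples.
    Sort by migration parameter, take the first n_export names from the
    large end as exporters, then walk the small end and keep up to
    n_import partitions whose spare capacity meets the preset threshold.
    If fewer than n_import partitions qualify, all qualifying partitions
    become importers (the claim 17 fallback)."""
    ranked = sorted(partitions, key=lambda p: p[1], reverse=True)
    exporters = [p[0] for p in ranked[:n_export]]
    # Candidates from the small-parameter end with enough spare capacity.
    eligible = [p for p in reversed(ranked[n_export:]) if p[2] >= min_spare]
    importers = [p[0] for p in eligible[:n_import]]
    return exporters, importers

parts = [("p1", 90, 5), ("p2", 10, 50), ("p3", 70, 20), ("p4", 30, 40)]
ex, im = select_exporters_importers(parts, n_export=1, n_import=2, min_spare=30)
# ex == ["p1"]; im == ["p2", "p4"]
```

Under this reading, the hottest partition "p1" exports load, while the two coolest partitions with sufficient spare capacity import it; whether this matches the claimed method is exactly what the §103 combination of Kumar and Tai is asserted to cover.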

Prosecution Timeline

Aug 11, 2025
Application Filed
Feb 22, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596490
MEMORY MANAGEMENT USING A REGISTER
2y 5m to grant Granted Apr 07, 2026
Patent 12585387
Clock Domain Phase Adjustment for Memory Operations
2y 5m to grant Granted Mar 24, 2026
Patent 12579075
USING RETIRED PAGES HISTORY FOR INSTRUCTION TRANSLATION LOOKASIDE BUFFER (TLB) PREFETCHING IN PROCESSOR-BASED DEVICES
2y 5m to grant Granted Mar 17, 2026
Patent 12572474
SPARSITY COMPRESSION FOR INCREASED CACHE CAPACITY
2y 5m to grant Granted Mar 10, 2026
Patent 12561070
AUTONOMOUS BATTERY RECHARGE CONTROLLER
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
83%
With Interview (+13.0%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 790 resolved cases by this examiner. Grant probability derived from career allow rate.
