DETAILED ACTION
This Office Action is in response to claims filed on 10/30/2025.
Claims 1, 3, 5-19, and 21-23 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see page 10 of the remarks filed 10/30/2025, with respect to the rejections of claims 2-4 and 17-20 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejections set forth in the Office Action mailed 07/30/2025 have been withdrawn.
Applicant's arguments filed 10/30/2025 have been fully considered but they are not persuasive. Applicant argues in substance:
(a) Applicant respectfully submits that the independent claims, as presented for reconsideration, are neither anticipated nor rendered obvious by Certain, Hallgren, Li, and Drapeau, either singly or in combination with any reference of record. For example, Applicant has amended independent claim 1 to recite:
exposing a first deletion application programming interface (API) on the first computing system partition; and
receiving a call on a deletion method of the first deletion API on the first computing system partition;
…
exposing a second deletion API on the second computing system partition;
receiving a call on a deletion method of the second deletion API on the second computing system partition.
…
Applicant submits that neither Certain, Hallgren, Li, nor Drapeau, alone or in combination, discloses, teaches, or suggests this claim language. Certain is directed to a “scalable architecture for propagating updates [that] may be implemented for data replicated from a data set” and in particular to a “conditional atomic operation to apply the update to the item in the replicated portion of the data set.” Certain, Abstract. The Office Action on page 15 notes that “Certain does not explicitly teach the chained partition structure incorporating a second and third computing system partition.” Additionally, Certain does not disclose a first deletion API operating locally on a first computing system partition that calls a second deletion API operating locally on a second computing system partition. Instead, Certain uses propagation nodes to push conditional atomic updates to secondary indexes, not the claimed first deletion API calling the claimed second deletion API. Therefore, Certain does not disclose all of the elements of claims 1-4, 6, 8, 15, 17, and 19-20. Thus, Applicant respectfully requests reconsideration and withdrawal of the rejections of claims 1-4, 6, 8, 15, 17, and 19-20 based on Certain.
With respect to point (a), Examiner respectfully disagrees. Under the broadest reasonable interpretation of the claims, Certain teaches the disputed limitations: Certain is directed to “multiple tiers of propagation nodes (nodes 530 and 540) propagating updates,” wherein “updates may include or cause the deletion of items from a secondary index (or partition thereof).” Certain, Col. 15. Certain further discloses that a database API may be used to perform the actions described therein (see Certain, Col. 13).
Applicant refers to page 15 of the Office Action mailed 07/30/2025 and asserts that the recited deficiencies of the Certain reference demonstrate that the reference does not teach the claimed limitations. However, Applicant’s argument omits the statement of the reference’s general teaching, also expressed on page 15 of the Office Action, which recites: “Certain reasonably teaches the method of source and target partitions connected and accessible through a multi-layered propagation mechanism for distributing deletion requests (Certain, Col. 4).” In this regard, Examiner reasserts that Certain reasonably describes performing a delete operation at a source partition and propagating the deletion request to one or more subsequent destination partitions. Certain reasonably discloses a plurality of processing nodes 330, each managing a particular storage partition and capable of propagating changes to other processing nodes (Certain, Col. 8). The noted deficiencies merely reflect that Certain does not explicitly disclose the specific numerically labeled partition pairs (e.g., first, second, third, etc.). However, the current claim language does not substantially distinguish the claimed partitions from one another, differing merely by trivial numerical labeling, such that propagating a request between a first and second partition is substantially similar to propagating a request between a second and third partition. Additionally, Certain itself states that “previous descriptions of implementing a scalable architecture for propagating updates to replicated data are not intended to be limiting, but are merely provided as logical examples. The number of nodes or partitions of data set 100 may be different as may be the number of nodes storing replicated portion of data set 130 or propagation nodes 142, for example” (Certain, Col. 5).
Accordingly, Certain reasonably teaches all instances of general propagation of a deletion request from any partition n to any subsequent partition n+1 within the disclosed multi-layered distributed storage system.
Hallgren reasonably teaches the multi-ordered propagation mechanism applicable between successive partitions, including a second and a third partition. Li reasonably teaches the generation of propagation records within a propagation table. Drapeau reasonably teaches aggregation of deletion status results from a plurality of computing partitions.
Further, Applicant’s amendments set forth that each partition may contain a local deletion API operating on that partition, independent of all other deletion APIs on other partitions. While the combination of references does not explicitly discuss local deletion APIs, Tsang et al. teaches these limitations and is applied below.
Accordingly, Applicant’s arguments have not been found persuasive.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 6, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Certain et al., U.S. Patent No. 11,314,717 B1 (hereinafter Certain), in view of Tsang et al., U.S. Patent Application Publication No. 2017/0366624 A1 (hereinafter Tsang).
With regard to claim 1, Certain teaches a computer implemented method, comprising (Col. 2, lines 33-36, The … methods described herein may be employed in various combinations and in various embodiments to implement a scalable architecture for propagating updates to replicated data, according to some embodiments):
receiving a deletion request at a first computing system partition (Col. 3, lines 12-19, For example, as illustrated in FIG. 1, different nodes, such as nodes 110a, 110b, and 110c may store data that is part of data set 100, such as data 112a, 112b, and 112c respectively, in one embodiment; Col. 3, lines 52-54 and lines 63-65, For example, as illustrated in Fig. 1, updates, such as updates 102a, 102b, and 102c may be received and processed at nodes 110a, 110b, and 110c; Col. 15, lines 52-55, Updates may include or cause the deletion of items from a secondary index (or partition thereof). Deletion requests may, for instance, remove an attribute or item from a database table) that stores data for a first user (Col. 8, lines 4-7 and lines 15-21, Fig. 3, is a logical block diagram illustrating a database service that may implement a scalable architecture for propagating updates to replicated data, according to some embodiments … In one embodiment, database service 210 may also implement a plurality of processing nodes 330, each of which may manage one or more partitions 370 of a data set (e.g., a database) on behalf of clients/users), the deletion request identifying an item of information (Col. 11, lines 3-11, In some embodiments, the name of an attribute may always be a string, but its value may be a string, number, string set, or number set. The following are all examples of attributes: "ImageID"=1, "Title"="flower", "Tags"={"flower", "jasmine", "white" }, "Ratings"={3, 4, 2}. The items may be managed by assigning each item a primary key value (which may include one or more attribute values), and this primary key value may also be used to uniquely identify the item) to be deleted from the first computing system partition (Col. 19, lines 60-67, For example, item A may have multiple attributes (e.g., Attribute AA, BB, CC, DD, EE, FF, and so on). 
A secondary index may include items where the value of Attribute AA=”2017” and may also include the values of Attributes DD and EE. If the update item has changed the value of AA, DD, or EE, then the update may be applicable (including updates that would result in the removal of an item from the secondary index), wherein receiving the deletion request at the first computing system partition comprises:
exposing a first deletion application programming interface (API) on the first computing system partition (Col. 11, lines 65-67, and Col. 12, lines 1-3 and lines 25-30, Database serviced 210 may provide an application programming interface (API) for requesting various operations targeting tables, indexes, items, and/or attributes maintained on behalf of storage service clients. In some embodiments, the service (and/or the underlying system) may provide both control plane APIs and data plane APIs … The data plane APIs provided by database service 210 (and/or the underlying system) may be used to perform item-level operations such as storing, deleting, retrieving, and/or updating items and/or their attributes, or performing index-based search-type operations across multiple items in a table, such as queries or scans); and
receiving a call on a deletion method of the first deletion API on the first computing system partition (Col. 20, lines 60-64, As indicated in 1120, updates to the partition of the source table may be received. As discussed above with regard to FIG. 3, updates, may be received from a client of a database table via an API or other interface, describing the changes to be performed as part of the update);
accessing a first propagation table in the first computing system partition, that stores a first propagation record (Col. 11, lines 29-39, In at least some embodiments, database service 210 may implement propagation nodes 380. Propagation nodes 380 may receive indications of updates (or the updated items) from processing nodes 330, in some embodiments. Propagation nodes 380 may obtain (e.g., from control plane 320) mapping information that identifies the secondary index(es) associated with a table, and the format, organization, or other specification that describes the contents of the secondary indexes, including mapping information that maps items stored in a secondary index to respective partitions of the secondary index), to determine that the item of information has been propagated to a second computing system partition based on the first propagation record (Col. 3, lines 28-38, Replicated portion(s) of data set 120 may also be maintained for access, in various embodiments. For example, nodes, such as nodes 130a, 130b, and 130c may respectively store data 132a, 132b, and 132c, which may be a portion of one or more different parts of data set 100, in one embodiment. As discussed below with regard to FIGS. 2-8 and 10-13, data 132 may be a secondary index, projection, or other view of data ( or partitions thereof) that represents a subset of data set 100, in one embodiment, which may be stored according to a different format, schema, or other arrangement than data set 100; Col. 4, lines 13-22, Update(s) 102 performed at nodes 110 may be propagated to replicated portion(s) of data set 120, in some embodiments. 
For example, propagation mesh 140 may implement a scalable architecture including one or more propagation nodes, such as propagation nodes 142a, 142b, 142c, 142d, and 142e, which may receive update(s) committed to data set 110, such as update(s) 104a, 104b, and 104c, from nodes 110, identify or determine which updates need to be replicated (as not all updates may be appropriate for otherwise applicable to replicated portions); and
propagating the deletion request to the second computing system partition (Col. 4, lines 22-25, and send the identified updates, such as replicated update(s) 106a, 106b, 106c, to the appropriate nodes 130 of replicated portions of data set 120, in some embodiments), wherein propagating the deletion request to the second computing system partition comprises (Col. 5, lines 32-38, Please note that previous descriptions of implementing a scalable architecture for propagating updates to replicated data are not intended to be limiting, but are merely provided as logical examples. The number of nodes or partitions of data set 100 may be different as may be the number of nodes storing replicated portion of data set 130 or propagation nodes 142, for example):
exposing a second deletion API (Col. 21, lines 22-31, FIG. 12 is a high-level flowchart illustrating various methods and techniques to process a conditional atomic request to apply an update to replicated portion of a data set, according to some embodiments. As indicated at 1210, a conditional, atomic update request for an item may be received from a propagation node, in some embodiments. The request may be formatted according to an API or other interface format (as discussed above with regard to FIG. 3) which may indicate the update is conditional) on the second computing system partition (Fig. 1, Replicated Portions of dataset 120; Col. 4, lines 13-15, Update(s) 102 performed at nodes 110 may be propagated to replicated portion(s) of data set 120, in some embodiments);
receiving a call on a deletion method of the second deletion API on the second computing system partition (Col. 21, lines 38-51, As indicated at 1220, a comparison of version identifier of the request and the current version identifier may be performed, in some embodiments … As indicated by a positive exit from 1230, if the version identifier is later than the current version identifier, then the update may be applied to the item (e.g., the item may be overwritten with the updated version of the item, the item may be inserted, or the item may be deleted or marked for deletion with a tombstone marker), as indicated at 1240, in some embodiments; Col. 20, lines 61-64, As discussed above with regard to FIG. 3, updates may be received from a client of a database table via an API or other interface, describing the changes to be performed as part of the update).
Certain reasonably teaches that a plurality of APIs enabled to perform deletion operations may be provided in association with an underlying system (Col. 12). However, Certain does not explicitly teach a second deletion application programming interface (API) that is local to a second computing system partition.
Tsang teaches a second deletion application programming interface (API) ([0015], In examples described herein, the cluster layer may be located between an API layer and a service layer of a node. In such examples the clustering layer may intercept an API call from the API layer to the service layer, determine whether the API call is at least one of a request to create, to retrieve, or to delete a resource, and based (at least in part) on the determination, may shard the database and/or forward the API call) on the second computing system partition ([0023], In some examples, each node 110 owns or stores a shard (e.g., partition) of database 160. As described herein, a shard of a database may involve a horizontal partition of data in a database. Each database shard may refer to an individual partition of the database; [0024], Nodes 110 of multi-node cluster 100 each comprise an application programming interface (API) layer 120, a clustering layer 130, a service layer 140, and a data access object layer 150; [0025], In some examples, node 110 may receive an API call. As used herein, an API call may represent a request operation, function, or routine to be performed by an application implemented by the multi-node cluster and that is recognized by the API layer of the application.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Tsang to the teachings of Certain in order to provide a method in which computing system partition nodes maintain a local API to perform database operations, including resource deletion. The motivation for applying the teaching of Tsang to the teaching of Certain is to provide a method that allows deletion operations to be propagated across distributed nodes while enabling each node to locally manage a particular storage partition through an API layer that is agnostic to the database structure, thereby supporting flexible data partitioning (Tsang, [0015]). Certain and Tsang are analogous art directed towards data partitioning. Therefore, it would have been obvious for one of ordinary skill in the art to combine Tsang with Certain to teach the claimed invention in order to provide local partition APIs that enable each node to manage deletion and propagation operations for its respective data partition independent of the database structure.
With regard to claim 3, Certain teaches the computer implemented method of claim 1 and further comprising:
deleting the item of information from the first computing system partition using the first deletion API on the first computing system partition (Col. 19, lines 35-40, an update to an item that has been committed to a partition of a table may be received. As discussed above, the updates may … delete items, entries, values, attributes, or other information in the partition of the table; Col. 20, lines 64-66, As indicated in 1130, the updates to the partition of the source table may be performed, in some embodiments).
However, Certain does not explicitly teach that the deletion API is local to the first computing system partition.
Tsang teaches using the first deletion API on the first computing system partition ([0073], Based (at least in part) on a determination that the API call is the request to delete the database resource and the determination that the location of the database resource is node 410, as described above in relation to instructions 456 and 458 of FIG. 4, instructions 568 may forward the API call to service layer 470. Service layer 470 may then forward the request to delete the database resource to data access object layer 480 to delete the resource from database 490).
The rationale applied to claim 1 above is applied here as well.
With regard to claim 6, Certain teaches the computer implemented method of claim 1 wherein accessing the propagation table in the first computing system partition comprises:
identifying the first propagation record as identifying propagation of the item of information (Col. 20, lines 7-13, As indicated at 1030, node(s) storing partition(s) of the secondary index(es) of the table to apply the update to the item(s) in the partition(s) of the secondary index(es) may be identified in some embodiments. Mapping information or a partitioning scheme (e.g., a hashing technique) may identify which nodes host the partitions including respective copies of the item, in some embodiments); and
determining from the first propagation record that the item of information has been propagated to the second computing system partition (Col. 20, lines 14-23, As indicated at 1040, request(s) may be sent to the identified node(s) to perform conditional atomic operations to apply the update to the item may be sent, in various embodiments. As noted above, the request may include a condition that compares the first version identifier associated with the update to respective second version identifier(s) for the item at the identified node(s). For example, as discussed below with regard to FIG. 12, if the version identifier of the update is later than the current version, the update may be performed).
With regard to claim 8, Certain teaches the computer implemented method of claim 1 and further comprising:
receiving at the first deletion API on the first computing system partition a deletion status request (Col. 15, lines 10-25, Propagation node 540 may send one or more conditional update requests 554 to processing node 522 to apply the identified updates to the appropriate items in partition 524 of secondary index(es) 520 … In at least some embodiments, propagation node 540 may track the status of outstanding update requests (e.g., what nodes have been sent a request, what response have been received, etc.). Processing nodes 522 may send acknowledgements of successful completion of the request or failures 556 to propagation node 540, in some embodiments); and
providing a deletion status response indicative of a status of the deletion request on the first computing system partition (Col. 15, lines 25-32, Based on the results of the acknowledgements or failures 556, the propagation node 540 may determine whether the update was successful. As discussed with regard to FIG. 10, in scenarios where the same update needs to be applied to multiple secondary indexes, the update may not be considered successful unless all secondary indexes acknowledge the successful completion of the update).
Claims 5 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Certain in view of Tsang as applied to claim 1 above, and further in view of Hallgren et al., U.S. Patent Application Publication No. 2018/0357235 A1 (hereinafter Hallgren).
With regard to claim 5, Certain teaches the computer implemented method of claim 1 further comprising:
accessing a second propagation table in the second computing system partition (Col. 4, In at least some embodiments, nodes 130a, 130b, and 130c may provide access to data 132a, 132b, and 132c as part of replicated portion(s) of data set 120; Col. 11, lines 29-39, Propagation nodes 380 may obtain (e.g., from control plane 320) mapping information that identifies the secondary index(es) associated with a table, and the format, organization, or other specification that describes the contents of the secondary indexes, including mapping information that maps items stored in a secondary index to respective partitions of the secondary index) that stores a second propagation record (Col. 11, Propagation nodes may be selected or assigned responsibility for propagating updates, as discussed below with regard to FIG. 13, in some embodiments. Propagation nodes 380 may access propagation state 382, which may be a data store separate from propagation nodes 380 … Propagation state 382 may include various information for tracking the state of operations to propagate updates), to determine whether the item of information has been propagated to a third computing system partition based on the second propagation record (Col. 4, lines 13-22, Update(s) 102 performed at nodes 110 may be propagated to replicated portion(s) of data set 120, in some embodiments. For example, propagation mesh 140 may implement a scalable architecture including one or more propagation nodes, such as propagation nodes 142a, 142b, 142c, 142d, and 142e, which may receive update(s) committed to data set 110, such as update(s) 104a, 104b, and 104c, from nodes 110, identify or determine which updates need to be replicated (as not all updates may be appropriate for otherwise applicable to replicated portions); and
propagating the deletion request to the third computing system partition if the item of data was propagated to a third computing partition (Col. 4, lines 22-25, and send the identified updates, such as replicated update(s) 106a, 106b, 106c, to the appropriate nodes 130 of replicated portions of data set 120, in some embodiments).
Certain reasonably teaches the method of source and target partitions connected and accessible through a multi-layered propagation mechanism for distributing deletion requests (Certain, Col. 4), such that it is not limited by the number of nodes or partitions of a dataset (Certain, Col. 5). However, Certain does not explicitly teach the chained partition structure incorporating a second and third computing system partition.
Hallgren teaches a second propagation table … that stores a second propagation record (FIG. 2, Raw datasets 202, 204, 206 propagate to 210 derived dataset of a second partition, which further propagates to 220 derived dataset of a third partition; [0058], In block 324, the process is programmed, based on provenance metadata that is managed in the distributed database system to traverse relationships that link the raw datasets to one or more derived datasets, reaching each derived dataset associated with the raw datasets … The provenance data may be managed in separate metadata tables or files.) … a third computing system partition ([0040], In the example of FIG. 2, three (3) raw datasets 202, 204, 206 (Examiner notes: first partition) are stored using the distributed database system 180. In one implementation, datasets FIG. 2 may represent tables of a relational database system and/or materialized view that are derived from the tables. All the datasets 202, 204, 206 contribute, according to a first derivation function or relationship, to a first derived dataset 210 as indicated by arrows connecting the datasets 202, 204, 206 to the first derived dataset 210 (Examiner notes: second partition). Furthermore, a first raw dataset 202 and the first derived dataset 210 contribute, based on a second derivation function or relationship, to a second derived dataset 220 (Examiner notes: third partition). Therefore, the five (5) datasets 202, 204, 206, 210, 220 are arranged in a directed graph in which datasets are nodes and derivation functions or relationships comprise paths)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Hallgren to the teachings of Certain in order to provide a method that teaches a directed graph of dataset partitions representing the flow of connected partitions. The motivation for applying the teaching of Hallgren to the teaching of Certain is to allow the propagation method disclosed by Certain to be applied to every related partition, disclosed by Hallgren, containing identified data to be deleted, in order to ensure the deletion is processed downstream for every derived dataset (Hallgren, [0042]-[0043]). Certain and Hallgren are analogous art directed towards database structures. Therefore, it would have been obvious for one of ordinary skill in the art to combine Hallgren with Certain to teach the claimed invention in order to provide a distributed deletion path such that deletion request propagation cascades to all related partitions.
With regard to claim 21, Certain teaches the computer implemented method of claim 5 wherein propagating the deletion request to the third computing system partition comprises:
exposing a third deletion API (Col. 21, lines 22-31, FIG. 12 is a high-level flowchart illustrating various methods and techniques to process a conditional atomic request to apply an update to replicated portion of a data set, according to some embodiments. As indicated at 1210, a conditional, atomic update request for an item may be received from a propagation node, in some embodiments. The request may be formatted according to an API or other interface format (as discussed above with regard to FIG. 3) which may indicate the update is conditional) on the third computing system partition (Fig. 1, Replicated Portions of dataset 120; Col. 4, lines 13-15, Update(s) 102 performed at nodes 110 may be propagated to replicated portion(s) of data set 120, in some embodiments); and
calling a deletion method on the third deletion API on the third computing system partition (Col. 21, lines 38-51, As indicated at 1220, a comparison of version identifier of the request and the current version identifier may be performed, in some embodiments … As indicated by a positive exit from 1230, if the version identifier is later than the current version identifier, then the update may be applied to the item (e.g., the item may be overwritten with the updated version of the item, the item may be inserted, or the item may be deleted or marked for deletion with a tombstone marker), as indicated at 1240, in some embodiments; Col. 20, lines 61-64, As discussed above with regard to FIG. 3, updates may be received from a client of a database table via an API or other interface, describing the changes to be performed as part of the update).
Certain reasonably teaches that a plurality of APIs enabled to perform deletion operations may be provided in association with an underlying system (Col. 12). However, Certain does not explicitly teach a third deletion application programming interface (API).
Tsang teaches a third deletion application programming interface (API) ([0015], In examples described herein, the cluster layer may be located between an API layer and a service layer of a node. In such examples the clustering layer may intercept an API call from the API layer to the service layer, determine whether the API call is at least one of a request to create, to retrieve, or to delete a resource, and based (at least in part) on the determination, may shard the database and/or forward the API call).
The rationale applied to claim 1 above is applied here as well.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Certain in view of Tsang as applied to claim 1 above, and further in view of Li et al., U.S. Patent Application Publication No. 2018/0225353 A1 (hereinafter Li).
With regard to claim 7, Certain teaches the computer implemented method of claim 1 and further comprising:
detecting the propagation of the item of information to the second computing system partition (Col. 19, lines 53-60, As indicated at 1020, a determination may be made as to whether the update is applicable to a secondary index, in some embodiments. Secondary index schema, for instance, may be evaluated with respect to the update to the item. If the updated item has an attribute, value, or other information included by the secondary index schema to be stored as part of a secondary index, then the update may be applicable, in some embodiments); and
generating a first propagation record in the first propagation table in the first computing system partition (Col. 20, lines 33-38, As indicated at 1070, propagation state may be updated to identify the update as committed to the partition(s) of the secondary index(es), in some embodiments. The propagation state may be propagation state maintained on propagation nodes and/or in separate propagation store) indicative of the propagation of the item of information to the second computing system partition (Col. 22, lines 3-7, As indicated at 1320, a propagation state data store may be accessed to obtain a list committed version identifier for updates to the partition(s) of the table performed at the partition(s) of the secondary index(es), in some embodiments).
Certain teaches updating a propagation record in the propagation table indicative of the propagation of the item of information across different partitions. However, Certain does not explicitly teach generating a propagation record in the propagation table.
Li teaches generating a propagation record in the propagation table ([0110], The metadata of the partition routing table is data for describing the partition routing table. A partition routing table (Examiner notes: propagation table) associated with the data table may be determined according to the metadata of the partition routing table. Specifically, a mark is configured on the application, to record that the data table tbl_user includes the partition routing table tbl_route_msdn(msdn, userId), and the data table tbl_user_order includes the partition routing table tbl_route_order(orderNo, orderId), so as to insert a corresponding partition routing table record (Examiner notes: propagation record) when a partition table data (Examiner notes: item of information) record is inserted, and perform partition routing table matching when a partition table data record is queried)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Li to the teachings of Certain in order to provide a method that teaches propagation record generation. The motivation for combining Li's teaching with Certain's teaching is to provide a method that allows the association between partition data and its storage location to be stored and quickly accessed, thereby improving data query efficiency (Li, [0120]). Certain and Li are analogous art directed towards database structures and data partitioning. Therefore, it would have been obvious for one of ordinary skill in the art to combine Li with Certain to teach the claimed invention in order to provide propagation records to quickly identify the locations of requested data across a plurality of partitions of a distributed database system.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Certain in view of Tsang as applied to claim 8 above, and further in view of Drapeau et al. Patent No. US 11,914,732 B2 (hereinafter Drapeau).
With regard to claim 9, Drapeau teaches the computer implemented method of claim 8 wherein providing a deletion status response comprises:
aggregating deletion status results from the first computing system partition and the second computing system partition (Col. 11, lines 33-36, Then, as discussed herein, the data deletion records may be aggregated and provided to the requesting user as proof the data deletion operations being performed; Col. 12, lines 57-62, In response to the request, processing logic of the commerce platform system compiles data deletion completion data for the user identifier (processing block 618)); and
providing the aggregated deletion status results as the deletion status response (Col. 12, lines 60-64, Serves an update data privacy user interface with information indicating data deletion performed for the user based on the user identifier (processing block 620). The processing logic of the end user system may then render the updated data privacy user interface (processing block 622)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Drapeau to the teachings of Certain in order to provide a method that teaches aggregation and reporting of deletion status results from a plurality of computing system partitions. The motivation for combining Drapeau's teaching with Certain's teaching is to provide a method that allows data deletion records with operation metadata to be generated such that the data deletion records can be aggregated and compiled into a report. This enables the system to furnish verifiable proof to a particular user of the data deletion operations executed, thereby ensuring compliance with data protection regulations (Drapeau, Col. 6-Col. 7). Certain and Drapeau are analogous art directed towards information retrieval and database structures. Therefore, it would have been obvious for one of ordinary skill in the art to combine Drapeau with Certain to teach the claimed invention in order to provide deletion status aggregation to support system transparency and comply with data protection regulations.
Claims 10-12, 14, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Certain et al. Patent No. US 11,314,717 B1 (hereinafter Certain) in view of Tsang et al. Pub. No. US 2017/0366624 A1 (hereinafter Tsang) in view of Li et al. Pub. No. US 2018/0225353 A1 (hereinafter Li) in view of Hallgren et al. Pub. No. US 2018/0357235 A1 (hereinafter Hallgren).
With regard to claim 10, Certain teaches a computer system, comprising (Col. 2, lines 33-36, The systems … described herein may be employed in various combinations and in various embodiments to implement a scalable architecture for propagating updates to replicated data, according to some embodiments):
a first computing system partition that stores data for a first user (Col. 3, lines 46-51, Nodes, such as nodes 110a, 110b, 110c, 130a, 130b, and 130c may be one or more virtual or physical storage devices, processing devices, servers, or other computing systems, such as computing system 2000 discussed below with regard to FIG. 14 that may store data for data set 100 and replicated portion of data set 120, in various embodiments), the first computing system partition comprising:
a first deletion application programming interface (API) exposed by the first computing system partition (Col. 11, lines 65-67, and Col. 12, lines 1-3 and lines 25-30, Database service 210 may provide an application programming interface (API) for requesting various operations targeting tables, indexes, items, and/or attributes maintained on behalf of storage service clients. In some embodiments, the service (and/or the underlying system) may provide both control plane APIs and data plane APIs … The data plane APIs provided by database service 210 (and/or the underlying system) may be used to perform item-level operations such as storing, deleting, retrieving, and/or updating items and/or their attributes, or performing index-based search-type operations across multiple items in a table, such as queries or scans);
a first propagation component that propagates an item of information from the first computing system partition to a second computing system partition (Col. 11, lines 58-61, Propagation management 390 may assign propagation responsibility in response to receive requests from processing nodes 330 for a propagation endpoint. In some embodiments, propagation management 390 may assign propagation responsibility for propagation node(s) 380 to propagate updates to secondary index(es) (or partitions thereof), in some embodiments);
a first propagation table (Col. 11, lines 29-39, In at least some embodiments, database service 210 may implement propagation nodes 380. Propagation nodes 380 may receive indications of updates (or the updated items) from processing nodes 330, in some embodiments. Propagation nodes 380 may obtain (e.g., from control plane 320) mapping information that identifies the secondary index(es) associated with a table, and the format, organization, or other specification that describes the contents of the secondary indexes, including mapping information that maps items stored in a secondary index to respective partitions of the secondary index); and
a first propagation table update component that generates a first propagation record in the first propagation table, the first propagation record being indicative of the propagation of the item of information to the second computing system partition (Col. 11, Propagation nodes 380 may access propagation state 382, which may be a data store separate from propagation nodes 380 (e.g. another data store system within database service 210 or implemented as part of another storage service in provider network 200). Propagation state 382 may include various information for tracking state of operations to propagate updates); and
the second computing system partition that stores data for the first user (Col. 3-Col. 4, In at least some embodiments, nodes 130a, 130b, 130c, may provide access to data 132a, 132b, and 132c, as part of replicated portion(s) of data set 120.), the second computing system partition comprising:
a second deletion application programming interface (API) exposed by the second computing system partition (Col. 21, lines 22-31, FIG. 12 is a high-level flowchart illustrating various methods and techniques to process a conditional atomic request to apply an update to replicated portion of a data set, according to some embodiments. As indicated at 1210, a conditional, atomic update request for an item may be received from a propagation node, in some embodiments. The request may be formatted according to an API or other interface format (as discussed above with regard to FIG. 3) which may indicate the update is conditional);
a second propagation component that propagates the item of information from the second computing system partition to a third computing system partition (Col. 11, lines 58-61, Propagation management 390 may assign propagation responsibility in response to receive requests from processing nodes 330 for a propagation endpoint. In some embodiments, propagation management 390 may assign propagation responsibility for propagation node(s) 380 to propagate updates to secondary index(es) (or partitions thereof), in some embodiments);
a second propagation table (Col. 11, lines 29-39, In at least some embodiments, database service 210 may implement propagation nodes 380. Propagation nodes 380 may receive indications of updates (or the updated items) from processing nodes 330, in some embodiments. Propagation nodes 380 may obtain (e.g., from control plane 320) mapping information that identifies the secondary index(es) associated with a table, and the format, organization, or other specification that describes the contents of the secondary indexes, including mapping information that maps items stored in a secondary index to respective partitions of the secondary index); and
a second propagation table update component that generates a second propagation record in the second propagation table, the second propagation record being indicative of the propagation of the item of information to the third computing system partition (Col. 11, Propagation nodes 380 may access propagation state 382, which may be a data store separate from propagation nodes 380 (e.g. another data store system within database service 210 or implemented as part of another storage service in provider network 200). Propagation state 382 may include various information for tracking state of operations to propagate updates).
Certain reasonably teaches that a plurality of APIs enabled to perform deletion operations may be provided in association with an underlying system (Col. 12). However, Certain does not explicitly teach a second deletion application programming interface (API).
Tsang teaches a second deletion application programming interface (API) ([0015], In examples described herein, the cluster layer may be located between an API layer and a service layer of a node. In such examples the clustering layer may intercept an API call from the API layer to the service layer, determine whether the API call is at least one of a request to create, to retrieve, or to delete a resource, and based (at least in part) on the determination, may shard the database and/or forward the API call) exposed by the second computing system partition ([0023], In some examples, each node 110 owns or stores a shard (e.g., partition) of database 160. As described herein, a shard of a database may involve a horizontal partition of data in a database. Each database shard may refer to an individual partition of the database; [0024], Nodes 110 of multi-node cluster 100 each comprise an application programming interface (API) layer 120, a clustering layer 130, a service layer 140, and a data access object layer 150; [0025], In some examples, node 110 may receive an API call. As used herein, an API call may represent a request operation, function, or routine to be performed by an application implemented by the multi-node cluster and that is recognized by the API layer of the application.)
The rationale applied to claim 1 applies here.
Certain teaches updating a propagation record in the propagation table indicative of the propagation of the item of information found in different partitions (Certain, Col. 20). However, Certain and Tsang do not explicitly teach a propagation table update component that generates a propagation record in the propagation table.
Li teaches a first propagation table update component that generates a propagation record in the propagation table (Fig. 8, DDS inserts the partition routing table data to the partition routing table 811; [0036], The running architecture 200 of the distributed database system includes … a distributed data service (DDS) middleware 202 … When receiving an access request sent by the application 201, the DDS 202 needs to determine a distributed database (that is, the DB1, the DB2, or the DB3) to which the access request is to be sent; [0118], Step 809 to step 811: When inserting the partition table data by using the DDS, the application checks whether configuration of an associated routing table exists, and if the configuration exists, generates and inserts corresponding partition routing table data)
a second propagation table update component that generates a second propagation record in the second propagation table ([0110], It should be noted that, in this embodiment of the present invention, the partition routing table is created by the user according to a specific service requirement so as to avoid a case in which there is a large volume of partition routing table data that has been created but is not used; [0113], Step 804 to step 806: The application requests, from the DDS, a logical partition value corresponding to a partition field, and the DDS obtains the corresponding logical partition value, and returns the logical partition value to the application program (Examiner notes: such that requesting a logical partition value incorporates a plurality of values, which includes a second propagation table update component for generation of the second propagation record)).
The rationale applied to claim 7 applies here.
Certain reasonably teaches the method of source and target partitions connected and accessible through a multi-layered propagation mechanism for distributing deletion requests (Certain, Col. 4) such that it is not limited by the number of nodes or partitions of a dataset (Certain, Col. 5). However, Certain, Tsang, and Li do not explicitly teach the chained partition structure incorporating a second and a third computing system partition.
Hallgren teaches a second propagation table … that stores a second propagation record (FIG. 2, Raw datasets 202, 204, 206 propagate to 210 derived dataset of a second partition, which further propagates to 220 derived dataset of a third partition; [0058], In block 324, the process is programmed, based on provenance metadata that is managed in the distributed database system to traverse relationships that link the raw datasets to one or more derived datasets, reaching each derived dataset associated with the raw datasets … The provenance data may be managed in separate metadata tables or files.) … a third computing system partition ([0040], In the example of FIG. 2, three (3) raw datasets 202, 204, 206 (Examiner notes: first partition) are stored using the distributed database system 180. In one implementation, datasets FIG. 2 may represent tables of a relational database system and/or materialized view that are derived from the tables. All the datasets 202, 204, 206 contribute, according to a first derivation function or relationship, to a first derived dataset 210 as indicated by arrows connecting the datasets 202, 204, 206 to the first derived dataset 210 (Examiner notes: second partition). Furthermore, a first raw dataset 202 and the first derived dataset 210 contribute, based on a second derivation function or relationship, to a second derived dataset 220 (Examiner notes: third partition). Therefore, the five (5) datasets 202, 204, 206, 210, 220 are arranged in a directed graph in which datasets are nodes and derivation functions or relationships comprise paths).
The rationale applied to claim 5 applies here.
With regard to claim 11, Certain teaches the computer system of claim 10 wherein the first deletion API of the first computing system partition is configured to receive a deletion request to delete the item of information, to delete the item of information from the first computing system partition, and to access the first propagation table to identify, from the first propagation record (Col. 11, lines 29-39, In at least some embodiments, database service 210 may implement propagation nodes 380. Propagation nodes 380 may receive indications of updates (or the updated items) from processing nodes 330, in some embodiments. Propagation nodes 380 may obtain (e.g., from control plane 320) mapping information that identifies the secondary index(es) associated with a table, and the format, organization, or other specification that describes the contents of the secondary indexes, including mapping information that maps items stored in a secondary index to respective partitions of the secondary index), that the item of information has been propagated to the second computing system partition (Col. 4, lines 13-22, Update(s) 102 performed at nodes 110 may be propagated to replicated portion(s) of data set 120, in some embodiments. For example, propagation mesh 140 may implement a scalable architecture including one or more propagation nodes, such as propagation nodes 142a, 142b, 142c, 142d, and 142e, which may receive update(s) committed to data set 110, such as update(s) 104a, 104b, and 104c, from nodes 110, identify or determine which updates need to be replicated (as not all updates may be appropriate for otherwise applicable to replicated portions)).
With regard to claim 12, Certain teaches the computer system of claim 11 wherein the first deletion API of the first computing system partition is configured to, in response to identifying from the first propagation record that the item of information has been propagated to the second computing system partition (Col. 4, lines 13-22, Update(s) 102 performed at nodes 110 may be propagated to replicated portion(s) of data set 120, in some embodiments. For example, propagation mesh 140 may implement a scalable architecture including one or more propagation nodes, such as propagation nodes 142a, 142b, 142c, 142d, and 142e, which may receive update(s) committed to data set 110, such as update(s) 104a, 104b, and 104c, from nodes 110, identify or determine which updates need to be replicated (as not all updates may be appropriate for otherwise applicable to replicated portions)), propagate the deletion request to the second computing system partition (Col. 4, lines 22-25, and send the identified updates, such as replicated update(s) 106a, 106b, 106c, to the appropriate nodes 130 of replicated portions of data set 120, in some embodiments).
With regard to claim 14, Certain teaches the computer system of claim 10, wherein the first propagation table update component of the first computing system partition is configured to generate the first propagation record (Col. 11, Propagation state 382 may include various information for tracking the state of operations to propagate updates) including an item identifier identifying the item of information (Col. 16, lines 3-8, While a version identifier may be maintained for each item, for deleted item(s) 622, a current version identifier 624 may be maintained along with a tombstone marker 626, which may indicate that the item has been deleted and should not be visible to queries to the processing node(s) 620) and a target partition identifier identifying the second computing system partition to which the item of information is propagated (Col. 16, lines 10-14, Propagation node 610 may maintain local state 612 which tracks the committed index partition version identifier(s) 614 for each partition of each secondary index to which the propagation nodes sends updates).
Certain teaches the structure of a propagation record in the propagation table indicative of the propagation of the item of information found in different partitions. However, Certain does not explicitly teach a propagation table update component that generates a propagation record.
Li teaches the first propagation table update component of the first computing system partition is configured to generate the propagation record (Fig. 8, DDS inserts the partition routing table data to the partition routing table 811; [0036], The running architecture 200 of the distributed database system includes … a distributed data service (DDS) middleware 202 … When receiving an access request sent by the application 201, the DDS 202 needs to determine a distributed database (that is, the DB1, the DB2, or the DB3) to which the access request is to be sent; [0118], Step 809 to step 811: When inserting the partition table data by using the DDS, the application checks whether configuration of an associated routing table exists, and if the configuration exists, generates and inserts corresponding partition routing table data) which is substantially similar to claim 10 and therefore rejected with similar rationale.
Examiner notes: it would be obvious to one of ordinary skill in the art to recognize that the limitation of claim 10 is being substantially recited again as the limitation for claim 14.
With regard to claim 22, Certain teaches the computer system of claim 10 further comprising:
the third computing system partition that stores data for the first user (Col. 3, lines 12-19, For example, as illustrated in FIG. 1, different nodes, such as nodes 110a, 110b, and 110c may store data that is part of data set 100, such as data 112a, 112b, and 112c respectively, in one embodiment; Col. 3, lines 52-54 and lines 63-65, For example, as illustrated in Fig. 1, updates, such as updates 102a, 102b, and 102c may be received and processed at nodes 110a, 110b, and 110c; Col. 15, lines 52-55, Updates may include or cause the deletion of items from a secondary index (or partition thereof). Deletion requests may, for instance, remove an attribute or item from a database table), the third computing system partition comprising:
a third deletion application programming interface (API) exposed by the third computing system partition (Col. 11, lines 65-67, and Col. 12, lines 1-3 and lines 25-30, Database service 210 may provide an application programming interface (API) for requesting various operations targeting tables, indexes, items, and/or attributes maintained on behalf of storage service clients. In some embodiments, the service (and/or the underlying system) may provide both control plane APIs and data plane APIs … The data plane APIs provided by database service 210 (and/or the underlying system) may be used to perform item-level operations such as storing, deleting, retrieving, and/or updating items and/or their attributes, or performing index-based search-type operations across multiple items in a table, such as queries or scans);
a third propagation component that propagates the item of information from the third computing system partition (Col. 11, lines 58-61, Propagation management 390 may assign propagation responsibility in response to receive requests from processing nodes 330 for a propagation endpoint. In some embodiments, propagation management 390 may assign propagation responsibility for propagation node(s) 380 to propagate updates to secondary index(es) (or partitions thereof), in some embodiments);
a third propagation table (Col. 11, lines 29-39, In at least some embodiments, database service 210 may implement propagation nodes 380. Propagation nodes 380 may receive indications of updates (or the updated items) from processing nodes 330, in some embodiments. Propagation nodes 380 may obtain (e.g., from control plane 320) mapping information that identifies the secondary index(es) associated with a table, and the format, organization, or other specification that describes the contents of the secondary indexes, including mapping information that maps items stored in a secondary index to respective partitions of the secondary index); and
a third propagation table update component that generates a third propagation record in the third propagation table, the third propagation record being indicative of the propagation of the item of information from the third computing system partition (Col. 11, Propagation nodes 380 may access propagation state 382, which may be a data store separate from propagation nodes 380 (e.g. another data store system within database service 210 or implemented as part of another storage service in provider network 200). Propagation state 382 may include various information for tracking state of operations to propagate updates).
Certain reasonably teaches the method of source and target partitions connected and accessible through a multi-layered propagation mechanism for distributing deletion requests (Certain, Col. 4) such that it is not limited by the number of nodes or partitions of a dataset (Certain, Col. 5). However, Certain does not explicitly teach the chained partition structure incorporating a third computing system partition.
Hallgren teaches a third propagation table … that stores a third propagation record (FIG. 2, Raw datasets 202, 204, 206 propagate to 210 derived dataset of a second partition, which further propagates to 220 derived dataset of a third partition; [0058], In block 324, the process is programmed, based on provenance metadata that is managed in the distributed database system to traverse relationships that link the raw datasets to one or more derived datasets, reaching each derived dataset associated with the raw datasets … The provenance data may be managed in separate metadata tables or files.) … a third computing system partition ([0040], In the example of FIG. 2, three (3) raw datasets 202, 204, 206 (Examiner notes: first partition) are stored using the distributed database system 180. In one implementation, datasets FIG. 2 may represent tables of a relational database system and/or materialized view that are derived from the tables. All the datasets 202, 204, 206 contribute, according to a first derivation function or relationship, to a first derived dataset 210 as indicated by arrows connecting the datasets 202, 204, 206 to the first derived dataset 210 (Examiner notes: second partition). Furthermore, a first raw dataset 202 and the first derived dataset 210 contribute, based on a second derivation function or relationship, to a second derived dataset 220 (Examiner notes: third partition). Therefore, the five (5) datasets 202, 204, 206, 210, 220 are arranged in a directed graph in which datasets are nodes and derivation functions or relationships comprise paths)
The rationale applied to claim 5 applies here.
Certain teaches updating a propagation record in the propagation table indicative of the propagation of the item of information found in different partitions (Certain, Col. 20). However, Certain does not explicitly teach a propagation table update component that generates a propagation record in the propagation table.
Li teaches a propagation table update component that generates a propagation record in the propagation table (Fig. 8, DDS inserts the partition routing table data to the partition routing table 811; [0036], The running architecture 200 of the distributed database system includes … a distributed data service (DDS) middleware 202 … When receiving an access request sent by the application 201, the DDS 202 needs to determine a distributed database (that is, the DB1, the DB2, or the DB3) to which the access request is to be sent; [0118], Step 809 to step 811: When inserting the partition table data by using the DDS, the application checks whether configuration of an associated routing table exists, and if the configuration exists, generates and inserts corresponding partition routing table data)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Li to the teachings of Certain in order to provide a system that teaches a propagation table update component for propagation record generation. The motivation for combining Li's teaching with Certain's teaching is to provide a system that allows a component to associate database data with the location where it is stored, such that future data queries can quickly be routed, improving data query efficiency (Li, [0120]). Certain and Li are analogous art directed towards database structures and data partitioning. Therefore, it would have been obvious for one of ordinary skill in the art to combine Li with Certain to teach the claimed invention, providing a propagation managing component that efficiently and properly routes queries across the plurality of partitions of a distributed database system.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Certain in view of Tsang in view of Li in view of Hallgren as applied to claim 12 above, and further in view of Drapeau et al. Patent No. US 11,914,732 B2 (hereinafter Drapeau).
With regard to claim 13, Certain teaches the computer system of claim 12 wherein the first deletion API of the first computing system partition is configured to receive a deletion status request (Col. 15, lines 10-25, Propagation node 540 may send one or more conditional update requests 554 to processing node 522 to apply the identified updates to the appropriate items in partition 524 of secondary index(es) 520 … In at least some embodiments, propagation node 540 may track the status of outstanding update requests (e.g., what nodes have been sent a request, what response have been received, etc.). Processing nodes 522 may send acknowledgements of successful completion of the request or failures 556 to propagation node 540, in some embodiments).
Certain teaches deletion request acknowledgements for a plurality of computing system partitions (Certain, Col. 20). However, Certain does not explicitly teach an aggregation of deletion acknowledgements as the deletion status response.
Drapeau teaches aggregate deletion status results from the first computing system partition and the second computing system partition (Col. 11, lines 33-36, Then, as discussed herein, the data deletion records may be aggregated and provided to the requesting user as proof the data deletion operations being performed; Col. 12, lines 57-62, In response to the request, processing logic of the commerce platform system compiles data deletion completion data for the user identifier (processing block 618)), and provide the aggregated deletion status results in response to the deletion status requests (Col. 12, lines 60-64, Serves an update data privacy user interface with information indicating data deletion performed for the user based on the user identifier (processing block 620). The processing logic of the end user system may then render the updated data privacy user interface (processing block 622)), which is substantially similar to claim 9 and is therefore rejected with similar rationale.
Examiner notes: it would be obvious to one of ordinary skill in the art to recognize that the method of claim 9 is being substantially recited again as limitations for the computer system of claim 13.
Claims 15, 17, 19, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Certain et al. Patent No. US 11,314,717 B1 (hereinafter Certain) in view of Tsang et al. Pub. No. US 2017/0366624 A1 (hereinafter Tsang) in view of Hallgren et al. Pub. No. US 2018/0357235 A1 (hereinafter Hallgren).
With regard to claim 15, Certain teaches a computer system, comprising (Col. 22, lines 63-67, The methods described herein may in various embodiments be implemented by any combination of hardware and software. For example, in one embodiment, the methods may be implemented by a computer system (e.g., a computer system as in FIG. 14)):
at least one processor (Fig. 14, Processors 2010a-n; Col. 23, lines 26-27, In the illustrated embodiment, computer system 2000 includes one or more processors 2010); and
a data store storing computer executable instructions which, when executed by the at least one processor, cause the at least one processor to perform steps, comprising (Col. 23, lines 1-4, one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors. The program instructions may implement the functionality described herein):
receiving a deletion request at a first computing system partition (Col. 3, lines 12-19, For example, as illustrated in FIG. 1, different nodes, such as nodes 110a, 110b, and 110c may store data that is part of data set 100, such as data 112a, 112b, and 112c respectively, in one embodiment; Col. 3, lines 52-54 and lines 63-65, For example, as illustrated in Fig. 1, updates, such as updates 102a, 102b, and 102c may be received and processed at nodes 110a, 110b, and 110c; Col. 15, lines 52-55, Updates may include or cause the deletion of items from a secondary index (or partition thereof). Deletion requests may, for instance, remove an attribute or item from a database table) that stores data for a first user (Col. 8, lines 4-7 and lines 15-21, Fig. 3, is a logical block diagram illustrating a database service that may implement a scalable architecture for propagating updates to replicated data, according to some embodiments … In one embodiment, database service 210 may also implement a plurality of processing nodes 330, each of which may manage one or more partitions 370 of a data set (e.g., a database) on behalf of clients/users), the deletion request identifying an item of information to be deleted from the first computing system partition (Col. 19, lines 60-67, For example, item A may have multiple attributes (e.g., Attribute AA, BB, CC, DD, EE, FF, and so on). A secondary index may include items where the value of Attribute AA=”2017” and may also include the values of Attributes DD and EE. If the update item has changed the value of AA, DD, or EE, then the update may be applicable (including updates that would result in the removal of an item from the secondary index));
exposing a first deletion application programming interface (API) on the first computing system partition (Col. 11, lines 65-67, and Col. 12, lines 1-3 and lines 25-30, Database service 210 may provide an application programming interface (API) for requesting various operations targeting tables, indexes, items, and/or attributes maintained on behalf of storage service clients. In some embodiments, the service (and/or the underlying system) may provide both control plane APIs and data plane APIs … The data plane APIs provided by database service 210 (and/or the underlying system) may be used to perform item-level operations such as storing, deleting, retrieving, and/or updating items and/or their attributes, or performing index-based search-type operations across multiple items in a table, such as queries or scans);
calling a deletion method of the first deletion API (Col. 20, lines 60-64, As indicated in 1120, updates to the partition of the source table may be received. As discussed above with regard to FIG. 3, updates, may be received from a client of a database table via an API or other interface, describing the changes to be performed as part of the update);
accessing a first propagation table in the first computing system partition, that stores a first propagation record (Col. 11, lines 29-39, In at least some embodiments, database service 210 may implement propagation nodes 380. Propagation nodes 380 may receive indications of updates (or the updated items) from processing nodes 330, in some embodiments. Propagation nodes 380 may obtain (e.g., from control plane 320) mapping information that identifies the secondary index(es) associated with a table, and the format, organization, or other specification that describes the contents of the secondary indexes, including mapping information that maps items stored in a secondary index to respective partitions of the secondary index), to determine that the item of information has been propagated to a second computing system partition based on the first propagation record (Col. 3, lines 28-38, Replicated portion(s) of data set 120 may also be maintained for access, in various embodiments. For example, nodes, such as nodes 130a, 130b, and 130c may respectively store data 132a, 132b, and 132c, which may be a portion of one or more different parts of data set 100, in one embodiment. As discussed below with regard to FIGS. 2-8 and 10-13, data 132 may be a secondary index, projection, or other view of data ( or partitions thereof) that represents a subset of data set 100, in one embodiment, which may be stored according to a different format, schema, or other arrangement than data set 100; Col. 4, lines 13-22, Update(s) 102 performed at nodes 110 may be propagated to replicated portion(s) of data set 120, in some embodiments. 
For example, propagation mesh 140 may implement a scalable architecture including one or more propagation nodes, such as propagation nodes 142a, 142b, 142c, 142d, and 142e, which may receive update(s) committed to data set 110, such as update(s) 104a, 104b, and 104c, from nodes 110, identify or determine which updates need to be replicated (as not all updates may be appropriate or otherwise applicable to replicated portions));
propagating the deletion request to the second computing system partition (Col. 4, lines 22-25, and send the identified updates, such as replicated update(s) 106a, 106b, 106c, to the appropriate nodes 130 of replicated portions of data set 120, in some embodiments);
exposing a second deletion API (Col. 21, lines 22-31, FIG. 12 is a high-level flowchart illustrating various methods and techniques to process a conditional atomic request to apply an update to replicated portion of a data set, according to some embodiments. As indicated at 1210, a conditional, atomic update request for an item may be received from a propagation node, in some embodiments. The request may be formatted according to an API or other interface format (as discussed above with regard to FIG. 3) which may indicate the update is conditional) on the second computing system partition (Fig. 1, Replicated Portions of dataset 120; Col. 4, lines 13-15, Update(s) 102 performed at nodes 110 may be propagated to replicated portion(s) of data set 120, in some embodiments);
calling a deletion method of the second deletion API (Col. 21, lines 38-51, As indicated at 1220, a comparison of version identifier of the request and the current version identifier may be performed, in some embodiments … As indicated by a positive exit from 1230, if the version identifier is later than the current version identifier, then the update may be applied to the item (e.g., the item may be overwritten with the updated version of the item, the item may be inserted, or the item may be deleted or marked for deletion with a tombstone marker), as indicated at 1240, in some embodiments; Col. 20, lines 61-64, As discussed above with regard to FIG. 3, updates may be received from a client of a database table via an API or other interface, describing the changes to be performed as part of the update); and
accessing a second propagation table in the second computing system partition, that stores a second propagation record (Col. 11, lines 29-39, In at least some embodiments, database service 210 may implement propagation nodes 380. Propagation nodes 380 may receive indications of updates (or the updated items) from processing nodes 330, in some embodiments. Propagation nodes 380 may obtain (e.g., from control plane 320) mapping information that identifies the secondary index(es) associated with a table, and the format, organization, or other specification that describes the contents of the secondary indexes, including mapping information that maps items stored in a secondary index to respective partitions of the secondary index), to determine whether the item of information has been propagated to a third computing system partition based on the second propagation record in the second computing system partition (Col. 3, lines 28-38, Replicated portion(s) of data set 120 may also be maintained for access, in various embodiments. For example, nodes, such as nodes 130a, 130b, and 130c may respectively store data 132a, 132b, and 132c, which may be a portion of one or more different parts of data set 100, in one embodiment. As discussed below with regard to FIGS. 2-8 and 10-13, data 132 may be a secondary index, projection, or other view of data ( or partitions thereof) that represents a subset of data set 100, in one embodiment, which may be stored according to a different format, schema, or other arrangement than data set 100; Col. 4, lines 13-22, Update(s) 102 performed at nodes 110 may be propagated to replicated portion(s) of data set 120, in some embodiments. 
For example, propagation mesh 140 may implement a scalable architecture including one or more propagation nodes, such as propagation nodes 142a, 142b, 142c, 142d, and 142e, which may receive update(s) committed to data set 110, such as update(s) 104a, 104b, and 104c, from nodes 110, identify or determine which updates need to be replicated (as not all updates may be appropriate or otherwise applicable to replicated portions)).
However, Certain does not explicitly teach a second deletion API.
Tsang teaches a second deletion application programming interface (API) ([0015], In examples described herein, the cluster layer may be located between an API layer and a service layer of a node. In such examples the clustering layer may intercept an API call from the API layer to the service layer, determine whether the API call is at least one of a request to create, to retrieve, or to delete a resource, and based (at least in part) on the determination, may shard the database and/or forward the API call).
The rationale applied to claim 1 applies here.
However, Certain does not explicitly teach the chained partition structure incorporating a second and a third computing system partition.
Hallgren teaches the second computing system partition to determine whether the item of data was propagated to a third computing system partition ([0040], In the example of FIG. 2, three (3) raw datasets 202, 204, 206 (Examiner notes: first partition) are stored using the distributed database system 180. In one implementation, the datasets of FIG. 2 may represent tables of a relational database system and/or materialized views that are derived from the tables. All the datasets 202, 204, 206 contribute, according to a first derivation function or relationship, to a first derived dataset 210 as indicated by arrows connecting the datasets 202, 204, 206 to the first derived dataset 210 (Examiner notes: second partition). Furthermore, a first raw dataset 202 and the first derived dataset 210 contribute, based on a second derivation function or relationship, to a second derived dataset 220 (Examiner notes: third partition). Therefore, the five (5) datasets 202, 204, 206, 210, 220 are arranged in a directed graph in which datasets are nodes and derivation functions or relationships comprise paths).
The rationale applied to claim 5 applies here.
With regard to claim 17, Certain teaches the computer system of claim 15 wherein the steps further comprise:
receiving at the first deletion API on the first computing system partition a deletion status request (Col. 15, lines 10-25, Propagation node 540 may send one or more conditional update requests 554 to processing node 522 to apply the identified updates to the appropriate items in partition 524 of secondary index(es) 520 … In at least some embodiments, propagation node 540 may track the status of outstanding update requests (e.g., what nodes have been sent a request, what responses have been received, etc.). Processing nodes 522 may send acknowledgements of successful completion of the request or failures 556 to propagation node 540, in some embodiments); and
providing a deletion status response indicative of a status of the deletion request on the first computing system partition (Col. 15, lines 25-32, Based on the results of the acknowledgements or failures 556, the propagation node 540 may determine whether the update was successful. As discussed with regard to FIG. 10, in scenarios where the same update needs to be applied to multiple secondary indexes, the update may not be considered successful unless all secondary indexes acknowledge the successful completion of the update).
With regard to claim 19, Certain teaches the computer system of claim 17 wherein the steps further comprise:
deleting the item of information from the first computing system partition using the first deletion API (Col. 19, lines 35-40, an update to an item that has been committed to a partition of a table may be received. As discussed above, the updates may … delete items, entries, values, attributes, or other information in the partition of the table; Col. 20, lines 64-66, As indicated in 1130, the updates to the partition of the source table may be performed, in some embodiments); and
deleting the item of information from the second computing system partition using the second deletion API (Col. 19, lines 35-40, an update to an item that has been committed to a partition of a table may be received. As discussed above, the updates may … delete items, entries, values, attributes, or other information in the partition of the table; Col. 20, lines 64-66, As indicated in 1130, the updates to the partition of the source table may be performed, in some embodiments).
Certain reasonably teaches that a plurality of APIs enabled to perform deletion operations may be provided in association with an underlying system (Col. 12). However, Certain does not explicitly teach that the plurality of APIs is local to a second computing system partition.
Tsang teaches a second deletion application programming interface (API) ([0015], In examples described herein, the cluster layer may be located between an API layer and a service layer of a node. In such examples the clustering layer may intercept an API call from the API layer to the service layer, determine whether the API call is at least one of a request to create, to retrieve, or to delete a resource, and based (at least in part) on the determination, may shard the database and/or forward the API call).
The rationale applied to claim 1 applies here.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Certain in view of Tsang and Hallgren as applied to claim 15 above, and further in view of Li et al., Pub. No. US 2018/0225353 A1 (hereinafter Li).
With regard to claim 16, Certain teaches the computer system of claim 15 wherein the steps further comprise:
detecting the propagation of the item of information to the second computing system partition (Col. 19, lines 53-60, As indicated at 1020, a determination may be made as to whether the update is applicable to a secondary index, in some embodiments. Secondary index schema, for instance, may be evaluated with respect to the update to the item. If the updated item has an attribute, value, or other information included by the secondary index schema to be stored as part of a secondary index, then the update may be applicable, in some embodiments); and
generating a propagation record in a propagation table in the first computing system partition (Col. 20, lines 33-38, As indicated at 1070, propagation state may be updated to identify the update as committed to the partition(s) of the secondary index(es), in some embodiments. The propagation state may be propagation state maintained on propagation nodes and/or in separate propagation store), the propagation record being indicative of the propagation of the item of information to the second computing system partition (Col. 22, lines 3-7, As indicated at 1320, a propagation state data store may be accessed to obtain a list of committed version identifiers for updates to the partition(s) of the table performed at the partition(s) of the secondary index(es), in some embodiments).
Certain teaches updating a propagation record in the propagation table indicative of the propagation of the item of information found in different partitions. However, Certain does not explicitly teach the generation of a propagation record in the propagation table.
Li teaches generating a propagation record in the propagation table ([0110], The metadata of the partition routing table is data for describing the partition routing table. A partition routing table (Examiner notes: propagation table) associated with the data table may be determined according to the metadata of the partition routing table. Specifically, a mark is configured on the application, to record that the data table tbl_user includes the partition routing table tbl_route_msdn(msdn, userId), and the data table tbl_user_order includes the partition routing table tbl_route_order(orderNo, orderId), so as to insert a corresponding partition routing table record (Examiner notes: propagation record) when a partition table data (Examiner notes: item of information) record is inserted, and perform partition routing table matching when a partition table data record is queried) which is substantially similar to claim 7 and therefore rejected with similar rationale.
Examiner notes: it would be obvious to one of ordinary skill in the art to recognize that the method of claim 7 is being substantially recited again as limitations for the computer system of claim 16.
Claims 18 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Certain in view of Tsang and Hallgren as applied to claims 15 and 17 above, and further in view of Drapeau et al., Patent No. US 11,914,732 B2 (hereinafter Drapeau).
With regard to claim 18, Certain in view of Tsang and Hallgren teaches the computer system of claim 17. Drapeau teaches wherein providing a deletion status response comprises:
aggregating deletion status results from the first computing system partition and the second computing system partition (Col. 11, lines 33-36, Then, as discussed herein, the data deletion records may be aggregated and provided to the requesting user as proof of the data deletion operations being performed; Col. 12, lines 57-62, In response to the request, processing logic of the commerce platform system compiles data deletion completion data for the user identifier (processing block 618)); and
providing the aggregated deletion status results as the deletion status response (Col. 12, lines 60-64, Serves an updated data privacy user interface with information indicating data deletion performed for the user based on the user identifier (processing block 620). The processing logic of the end user system may then render the updated data privacy user interface (processing block 622)), which is substantially similar to claim 9 and therefore rejected with similar rationale.
Examiner notes: it would be obvious to one of ordinary skill in the art to recognize that the method of claim 9 is being substantially recited again as limitations for the computer system of claim 18.
With regard to claim 23, Certain teaches the computer system of claim 15 further comprising:
propagating the deletion request to the third computing system partition (Col. 4, lines 22-25, and send the identified updates, such as replicated update(s) 106a, 106b, 106c, to the appropriate nodes 130 of replicated portions of data set 120, in some embodiments);
exposing a third deletion API (Col. 21, lines 22-31, FIG. 12 is a high-level flowchart illustrating various methods and techniques to process a conditional atomic request to apply an update to replicated portion of a data set, according to some embodiments. As indicated at 1210, a conditional, atomic update request for an item may be received from a propagation node, in some embodiments. The request may be formatted according to an API or other interface format (as discussed above with regard to FIG. 3) which may indicate the update is conditional) on the third computing system partition (Fig. 1, Replicated Portions of dataset 120; Col. 4, lines 13-15, Update(s) 102 performed at nodes 110 may be propagated to replicated portion(s) of data set 120, in some embodiments);
calling a deletion method on the third deletion API on the third computing system partition (Col. 21, lines 38-51, As indicated at 1220, a comparison of version identifier of the request and the current version identifier may be performed, in some embodiments … As indicated by a positive exit from 1230, if the version identifier is later than the current version identifier, then the update may be applied to the item (e.g., the item may be overwritten with the updated version of the item, the item may be inserted, or the item may be deleted or marked for deletion with a tombstone marker), as indicated at 1240, in some embodiments; Col. 20, lines 61-64, As discussed above with regard to FIG. 3, updates may be received from a client of a database table via an API or other interface, describing the changes to be performed as part of the update); and
accessing a third propagation table in the third computing system partition (Col. 4, In at least some embodiments, nodes 130a, 130b, and 130c may provide access to data 132a, 132b, and 132c as part of replicated portion(s) of data set 120; Col. 11, lines 29-39, Propagation nodes 380 may obtain (e.g., from control plane 320) mapping information that identifies the secondary index(es) associated with a table, and the format, organization, or other specification that describes the contents of the secondary indexes, including mapping information that maps items stored in a secondary index to respective partitions of the secondary index), that stores a third propagation record (Col. 11, Propagation nodes may be selected or assigned responsibility for propagating updates, as discussed below with regard to FIG. 13, in some embodiments. Propagation nodes 380 may access propagation state 382, which may be a data store separate from propagation nodes 380 … Propagation state 382 may include various information for tracking the state of operations to propagate updates), to determine whether the item of information has been propagated based on the third propagation record in the third computing system partition (Col. 4, lines 13-22, Update(s) 102 performed at nodes 110 may be propagated to replicated portion(s) of data set 120, in some embodiments. For example, propagation mesh 140 may implement a scalable architecture including one or more propagation nodes, such as propagation nodes 142a, 142b, 142c, 142d, and 142e, which may receive update(s) committed to data set 110, such as update(s) 104a, 104b, and 104c, from nodes 110, identify or determine which updates need to be replicated (as not all updates may be appropriate or otherwise applicable to replicated portions)).
Certain reasonably teaches that a plurality of APIs enabled to perform deletion operations may be provided in association with an underlying system (Col. 12). However, Certain does not explicitly teach that the plurality of APIs is local to a third computing system partition.
Tsang teaches a third deletion application programming interface (API) ([0015], In examples described herein, the cluster layer may be located between an API layer and a service layer of a node. In such examples the clustering layer may intercept an API call from the API layer to the service layer, determine whether the API call is at least one of a request to create, to retrieve, or to delete a resource, and based (at least in part) on the determination, may shard the database and/or forward the API call).
The rationale applied to claim 1 applies here.
Certain reasonably teaches a method in which source and target partitions are connected and accessible through a multi-layered propagation mechanism for distributing deletion requests (Certain, Col. 4), such that it is not limited by the number of nodes or partitions of a dataset (Certain, Col. 5). However, Certain does not explicitly teach the chained partition structure incorporating a third computing system partition.
Hallgren teaches a third propagation table in the third computing system partition (FIG. 2, Raw datasets 202, 204, 206 propagate to 210 derived dataset of a second partition, which further propagates to 220 derived dataset of a third partition; [0058], In block 324, the process is programmed, based on provenance metadata that is managed in the distributed database system to traverse relationships that link the raw datasets to one or more derived datasets, reaching each derived dataset associated with the raw datasets … The provenance data may be managed in separate metadata tables or files.), that stores a third propagation record ([0040], a first raw dataset 202 and the first derived dataset 210 contribute, based on a second derivation function or relationship, to a second derived dataset 220 (Examiner notes: third partition). Therefore, the five (5) datasets 202, 204, 206, 210, 220 are arranged in a directed graph in which datasets are nodes and derivation functions or relationships comprise paths).
The rationale applied to claim 5 applies here.
However, Certain, Tsang, and Hallgren do not explicitly teach aggregation of deletion status results from a third computing system partition.
Drapeau teaches wherein providing the deletion status response further comprises:
aggregating deletion status results from the third computing system partition (Col. 11, lines 33-36, Then, as discussed herein, the data deletion records may be aggregated and provided to the requesting user as proof of the data deletion operations being performed; Col. 12, lines 57-62, In response to the request, processing logic of the commerce platform system compiles data deletion completion data for the user identifier (processing block 618)).
The rationale applied to claim 9 applies here.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 2019/0132391 A1 teaches a Storage Architecture for Heterogenous Multimedia Data.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVAN A CASTANEDA whose telephone number is (571)272-0465. The examiner can normally be reached Monday-Friday 9:30AM-5:30PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/I.A.C./Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195