Prosecution Insights
Last updated: April 19, 2026
Application No. 18/367,917

SYSTEM AND METHOD FOR CAPTURE OF CHANGE DATA FROM DISTRIBUTED DATA SOURCES, FOR USE WITH HETEROGENEOUS TARGETS

Final Rejection §103
Filed: Sep 13, 2023
Examiner: MAY, ROBERT F
Art Unit: 2154
Tech Center: 2100 — Computer Architecture & Software
Assignee: Oracle International Corporation
OA Round: 4 (Final)
Grant Probability: 76% (Favorable)
Predicted OA Rounds: 5-6
Time to Grant: 3y 3m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; 216 granted / 286 resolved; +20.5% vs TC avg)
Interview Lift: +29.7% (strong; resolved cases with interview)
Typical Timeline: 3y 3m average prosecution; 41 applications currently pending
Career History: 327 total applications across all art units

Statute-Specific Performance

§101: 19.3% (-20.7% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 18.0% (-22.0% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 286 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The Action is responsive to the Amendments and Remarks filed on 11/28/2025. Claims 1-4, 6-14, and 16-23 are pending claims. Claims 1, 11, and 21 are written in independent form. Claims 5 and 15 are cancelled claims.

Claim Objections

Claims 1, 11, and 21 are objected to because of the following informalities: Claims 1, 11, and 21 recite the typographical error of “generating…a token…” and “building a cache that includes…an association of tokens…” without clearly indicating that the generated tokens and the tokens in the cached association of tokens are the same tokens. The claims are understood as intending to recite “building a cache that includes, for each change data record, an association of the generated tokens…”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6, 8, 11-13, 16, 18, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (U.S. Pre-Grant Publication No. 2010/0274768, hereinafter referred to as Wang), and further in view of Abrams et al. (U.S. Pre-Grant Publication No. 2006/0235715, hereinafter referred to as Abrams) and Anglin et al. (U.S. Pre-Grant Publication No.
2013/0054524, hereinafter referred to as Anglin).

Regarding Claim 1:

Wang teaches a system for capture of change data from a distributed data source, for use with heterogeneous targets, comprising:

A computer that includes a processor, and a change data capture process manager executing thereon, wherein the change data capture process manager is configured to capture change data from a distributed data source, using a capture process, for use with one or more target systems,

Wang teaches “replication of a database with multiple logs may begin with starting a log scanner on each available transaction log” and “each log scanner extracts from its log the data changes in the given logical time range” (Para. [0046]), where the database is replicated to remote databases (Figure 3 & Paras. [0038]-[0039]). Wang further teaches a plurality of nodes by teaching data replicated from the database 305, comprising data distributed over nodes and database fragments, to remote databases 310 as the target databases (Paras. [0038]-[0039] & Figure 3). Wang also teaches a computer 110 that includes a processing unit 120 (Para. [0017]) and log scanners that extract from their logs the data changes for a given logical time range (Para. [0046]).

Wherein the distributed data source comprises a plurality of nodes that are associated with a distributed source topology and that store data;

Wang teaches “distributed computing environments that include any of the above systems or devices” (Para. [0015]), where “In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices” (Para. [0016]), “The nodes 205-211 may include database components 225-231. The various entities may be located relatively close to each other or may be distributed across the world.” (Para.
[0028]), and “Different tables of a database may be distributed on different database fragments and different data records of the same table may be distributed on different database fragments.” (Para. [0001]).

Wherein the distributed data source uses a mechanism to distribute and store the data within a plurality of partitions within the plurality of nodes;

Wang teaches “distributed computing environments that include any of the above systems or devices” (Para. [0015]), where “In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices” (Para. [0016]), “The nodes 205-211 may include database components 225-231. The various entities may be located relatively close to each other or may be distributed across the world.” (Para. [0028]), and “Different tables of a database may be distributed on different database fragments and different data records of the same table may be distributed on different database fragments.” (Para. [0001]).

Wherein the distributed data source includes, for each node of the plurality of nodes, an associated source change trace entity, wherein the changes to the data stored at the node are committed to the source change trace entity at the node;

Wang teaches “distributed computing environments that include any of the above systems or devices” (Para. [0015]), where “In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices” (Para. [0016]), “The nodes 205-211 may include database components 225-231. The various entities may be located relatively close to each other or may be distributed across the world.” (Para. [0028]), and “Different tables of a database may be distributed on different database fragments and different data records of the same table may be distributed on different database fragments.” (Para. [0001]).
Wang further teaches “Each change to a duplicated database record may be recorded in multiple logs.” (Para. [0003]) and providing access to log scanners at nodes of the distributed data source, where each log scanner extracts data changes from its log, “each batch replicates data changes of transactions that were committed in a given logical time range” (Para. [0046]), where “For de-duplicating entries regarding data changes in multiple logs, each log record for a data change may be associated with a logical timestamp. This may be done, for example, by including the timestamp in the log record, by including the timestamp in the corresponding transaction's commit log record” (Para. [0070]). Therefore, Wang is understood as teaching each node having modules including an associated source change trace entity, with changes to the data stored at the node being committed to the source change trace entity at the node.

Said capture process including:

Determining the distributed source topology associated with the plurality of nodes in the distributed data source,

Wang teaches “the stores 215-221 may be accessed via components of a database management system (DBMS)…compris[ing] one or more programs that control organization, storage, management, and retrieval of data in a database” (Para. [0033]), the database being discussed being exemplified by the distributed nodes depicted in Figure 2. Wang further teaches “data stored on the stores 215-221 may comprise a…hierarchical database” (Para. [0032]), thereby teaching the database management system controlling the organization/topology of the hierarchical database.
Fetching, from the distributed data source, an indication of the mechanism used by the distributed data source to distribute and store the data within the plurality of partitions within the plurality of nodes;

Wang teaches fetching information about the mechanism used by a distributed live source node to distribute and store the data within the partitions within the node by teaching “aggregate usage information may be recorded and cached” and “this can be located on some or all nodes. Querying the usage information of duplication schemas needs to query both the cache and DuplicationSchemaHistory” (Para. [0063]).

Accessing the source change trace entities at the plurality of nodes, to determine the data changes at the distributed data source, for use with the one or more target systems;

It is noted that the limitation recites intended use language of “…to determine…” and “…for use with…” and is therefore not being given patentable weight. The scope of the limitation is understood as reciting “accessing the source change trace entities”. However, for purposes of compact prosecution, the limitation is being addressed below as if the intended use language were recited as a positive step. Wang teaches “a centralized node may receive available logs and may remove duplicates in creating a single data change stream to be exported to another database” (Para. [0105]), thereby teaching accessing the data change logs at each of the plurality of nodes, which are then sent to the centralized node to determine the data changes across the distributed nodes for exporting to one or more other remote databases, as depicted in Figure 3.
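As background for the mapping above, the log-scanner mechanism the Office Action attributes to Wang (per-node scanners extracting changes for a logical time range, merged and de-duplicated into a single change stream) can be sketched roughly as follows. This is an illustrative reconstruction only; `LogRecord`, `scan_log`, and `merge_streams` are invented names, not terms from Wang or the claims.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LogRecord:
    record_id: str      # identifies the changed database record
    timestamp: int      # logical timestamp of the committing transaction
    payload: str        # the change itself

def scan_log(log, start, end):
    """Extract the changes one node's log holds for the range [start, end]."""
    return [r for r in log if start <= r.timestamp <= end]

def merge_streams(per_node_logs, start, end):
    """Merge per-node scans into a single change stream, dropping duplicates
    (the same record change may appear in several fragment logs), then
    ordering by logical time."""
    seen = set()
    merged = []
    for log in per_node_logs:
        for rec in scan_log(log, start, end):
            key = (rec.record_id, rec.timestamp)
            if key not in seen:
                seen.add(key)
                merged.append(rec)
    merged.sort(key=lambda r: r.timestamp)
    return merged
```

Two logs sharing a duplicated change would thus yield one time-ordered stream with the duplicate removed.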
Generating, for each change data record read from the source change trace entities and indicative of the data changes at the distributed data source, an indicator indicative of a partition and node within the distributed data source providing that change data record;

Wang further teaches “database fragments are associated with different logs” (Abstract) and “Each database fragment is logically associated with its own log although physically one log may include the changes from multiple database fragments hosted by a single node. Each log indicates changes made, if any, in its associated database fragment or fragments. Furthermore, the history includes time ranges in which the schemas are or were valid.” (Para. [0099]). Therefore, Wang teaches generating, for each log indicating changes, an indicator of a fragment and node within the distributed data source providing that change data record (the fragments/partitions connected to nodes are also depicted visually in Fig. 2, showing DB frag(s) 215-221 connected to nodes 205-211).

Building a cache that includes, for each change data record, an association of indicators and nodes associated with that change data record; and

Wang teaches “To detect whether logs of online database fragments contain all data changes in a given logical time range and to filter duplicates along with log scanning, a history of duplication schemas may be recorded.” (Para. [0046]), where “aggregate usage information may be recorded and cached” (Para. [0063]) and “an in-memory data structure may be maintained to cache a set of (DuplicationSchemaID, MostRecentUsageTimeStamp). With every data change, its duplication schema's most recent usage timestamp is updated in the cache” (Para.
[0063]). Wang further teaches “database fragments are associated with different logs” (Abstract) and “Each database fragment is logically associated with its own log although physically one log may include the changes from multiple database fragments hosted by a single node. Each log indicates changes made, if any, in its associated database fragment or fragments. Furthermore, the history includes time ranges in which the schemas are or were valid.” (Para. [0099]). Therefore, Wang teaches building a cache that includes an association of identifiers indicating a partition and node within the distributed data source providing/associated with the change data record.

Monitoring for a presence of new nodes, or an unavailability of one or more nodes within the distributed data source;

Wang teaches “in case a database fragment becomes unavailable and thus data record copies in this database fragment become inaccessible, copies of the data records may still be available on other database fragments” (Para. [0036]), thereby teaching monitoring for the unavailability of the database fragment and, when that situation is detected, selecting another database fragment from which to obtain change data records when it is determined that one source is unavailable. It is noted that monitoring is being given its broadest reasonable interpretation of any detection method that is capable of detecting or receiving notification of “a presence of new nodes or unavailability of one or more nodes”.

Wherein the change data capture process manager, in response to a source node is determined as unavailable, performs a recovery process that selects, from within the plurality of replica nodes that the distributed data source, based on a recovery position information, a replica node and position associated with the replica node from which to obtain the change data for use with the one or more target systems.
Wang teaches “in case a database fragment becomes unavailable and thus data record copies in this database fragment become inaccessible, copies of the data records may still be available on other database fragments” (Para. [0036]), thereby teaching, in response to a source being determined as unavailable, selecting, based on information indicating available copies of the data records, another database fragment, and thus the position of change data records on that other database fragment, from which to obtain change data records.

Wang teaches all of the elements of the claimed invention as stated above except:

Generating, for each change data record read from the source change trace entities and indicative of the data changes at the distributed data source, a token indicative of a partition and node within the distributed data source providing that change data record;

Wherein the change data capture process manager, in response to a source node is determined as unavailable, performs a recovery process that selects, from within the plurality of replica nodes that the distributed data source, based on the tokens in the cache, including a token indicative of the partition within the distributed data source providing a particular change data record, and a recovery position information, a replica node and position associated with the replica node from which to obtain the change data for use with the one or more target systems.
However, in the related field of endeavor of data collection in a distributed storage environment, Abrams teaches:

Generating, for each change data record read from the source change trace entities and indicative of the data changes at the distributed data source, a token indicative of a partition and node within the distributed data source providing that change data record;

Abrams teaches associating data sets with source profiles that include authentication tokens “which the utility can use to verify that the dataset originated with the expected source” (Para. [0199]), thereby teaching associating extracted data records with a token indicative of the source providing that record when extracting/accessing the data sets. Abrams further teaches “Information in a source profile includes authentication tokens, which the utility can use to verify that the dataset originated with the expected source” (Para. [0199]) and “FIG. 7A shows an example of a method for managing information and associated source based entitlements in a multi-source multi-tenant data repository. This figure represents a high level overview of the advantageous processes needed to form, maintain and operate the repository. In FIG. 7A, box 1100 represents the overall method. Within it, box 1101 represents the initial step of forming the repository with the necessary information element structures in place (described in detail in FIG. 8A, 8B, 8C, 8D). In addition to these, the repository is used to store other items that reside in a data store. These additional items are business (value added functions, business documents, etc.) or functional/operational (rule sets, log records, etc.) in nature” (Para. [0292]). Abrams further teaches “The reference data utility assures the data sources, through audit log support” (Para. [0167]). Wang explicitly teaches logs as a commit/transaction log by teaching “each database fragment is associated with a transaction log.
In implementation, one or more database fragments in a single store may share a transaction log or each database fragment may have its own transaction log” (Para. [0037]) and “including the timestamp in the log record, by including the timestamp in the corresponding transaction's commit log record, or by some other technique that associates a data change with the logical timestamp.” (Para. [0070]). Therefore, Abrams in combination with Wang teaches generating a partition token from every record read from a transaction/commit log.

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Abrams and Wang at the time that the claimed invention was effectively filed, to have combined the association of acquired data with a token describing the source of the acquired data, as taught by Abrams, with the system and method for capturing changes in logs and transferring the captured changes to remote databases, as taught by Wang. One would have been motivated to make such a combination because Abrams teaches “information in a source profile includes authentication tokens, which the utility can use to verify that the dataset [associated with the token] originated with the expected source” (Para. [0199]), and it would have been obvious to a person having ordinary skill in the art that verifying the data sources of received data would improve the security of the contents of the stream of changes taught by Wang prior to sending the stream of changes to the remote databases.
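For concreteness, the token-generation and cache-building limitation that the Office Action maps to Abrams in view of Wang might look something like the minimal sketch below. The `node:partition` token format and all function names are assumptions made here for illustration; the claims do not prescribe any particular token format or cache structure.

```python
def make_token(partition, node):
    # Hypothetical token format: any value identifying (partition, node)
    # within the distributed source would satisfy the limitation as recited.
    return f"{node}:{partition}"

def build_cache(change_records):
    """change_records: iterable of (record_id, partition, node) tuples, one
    per change data record read from a node's change trace entity.
    Returns a cache associating each record with its generated token."""
    cache = {}
    for record_id, partition, node in change_records:
        cache[record_id] = make_token(partition, node)
    return cache
```

Under this reading, the cache ties every captured change back to the partition and node that produced it, which is what the later dedup and recovery limitations consume.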
Wang and Abrams teach all of the elements of the claimed invention as stated above except:

Wherein the change data capture process manager, in response to a source node is determined as unavailable, performs a recovery process that selects, from within the plurality of replica nodes that the distributed data source, based on the tokens in the cache, including a token indicative of the partition within the distributed data source providing a particular change data record, and a recovery position information, a replica node and position associated with the replica node from which to obtain the change data for use with the one or more target systems.

However, in the related field of endeavor of data replication from a source to a target, Anglin in combination with Wang and Abrams teaches:

Wherein the change data capture process manager, in response to a source node is determined as unavailable, performs a recovery process that selects, from within the plurality of replica nodes that the distributed data source, based on the tokens in the cache, including a token indicative of the partition within the distributed data source providing a particular change data record, and a recovery position information, a replica node and position associated with the replica node from which to obtain the change data for use with the one or more target systems.

Wang teaches “in case a database fragment becomes unavailable and thus data record copies in this database fragment become inaccessible, copies of the data records may still be available on other database fragments” (Para. [0036]), thereby teaching selecting another database fragment from which to obtain change data records in response to determining that one source is unavailable. Anglin teaches each object at the source and target servers having unique attributes that can be compared to determine if the target “has objects matching those at the source, such as signature, unique file name, hash value, etc.” (Para. [0039]).
Abrams teaches an attribute of acquired data as being an authentication token “which the utility can use to verify that the dataset originated with the expected source” (Para. [0199]). Therefore, Anglin in combination with Abrams and Wang teaches selecting a replica node for a recovery process based on an authentication token stored as an attribute of the source object, to be compared with a target object, which, based on the comparison of the tokens as attributes, determines whether to capture the source object to be sent to the target server(s) for the recovery. It is noted that the claims do not specify how the tokens are used in selecting a replica node, merely that the selection is “based on the tokens in the cache”.

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Anglin, Abrams, and Wang at the time that the claimed invention was effectively filed, to have combined the comparison of source and target object attributes prior to sending the objects from the source to the target, as taught by Anglin, with the association of acquired data with a token describing the source of the acquired data, as taught by Abrams, and the system and method for capturing changes in logs and transferring the captured changes to remote databases, as taught by Wang. One would have been motivated to make such a combination because Anglin teaches that “the replication between the source and target seeks to minimize the amount of data transmitted for objects sent to the target server” by comparing the data on the source to be sent to the target with data already at the target (Para. [0053]), and it would have been obvious to a person having ordinary skill in the art that minimizing the amount of data transmission would improve the speed of transmission as well as reduce the cost of each transmission.
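The recovery limitation addressed above (on source-node unavailability, selecting a replica node and a position on it from the cached tokens plus recovery position information) can likewise be sketched. The `node:partition` token format, the structure of the replica and position maps, and the tie-breaking rule are all illustrative assumptions, not anything recited in the claims or the cited art.

```python
def select_replica(cache, record_id, replicas_by_partition, recovery_positions):
    """cache: record_id -> 'node:partition' token (as built by the capture process)
    replicas_by_partition: partition -> list of nodes holding a replica
    recovery_positions: (replica, partition) -> last-applied log position
    Returns (replica_node, resume_position) for re-reading change data."""
    node, partition = cache[record_id].split(":")
    # Candidate replicas are the nodes holding this partition, excluding the
    # unavailable source node named in the token.
    candidates = [r for r in replicas_by_partition.get(partition, []) if r != node]
    if not candidates:
        raise RuntimeError(f"no replica holds partition {partition}")
    # One plausible policy: prefer the replica whose recovery position is
    # furthest along, so the least change data must be re-read.
    best = max(candidates, key=lambda r: recovery_positions.get((r, partition), 0))
    return best, recovery_positions.get((best, partition), 0)
```

The claim leaves the selection policy open ("based on the tokens in the cache"), so the furthest-position heuristic here is only one possible reading.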
Regarding Claim 2:

Wang, Anglin, and Abrams further teach:

Wherein the distributed data source is one of a distributed database, or a distributed data stream, or other distributed data source, and wherein the one or more target systems include one or more of a database, a message queue, or other target.

Wang teaches replicating data from the database 305, comprising data distributed over nodes and database fragments, to remote databases 310 as the target databases (Paras. [0038]-[0039] & Figure 3).

Regarding Claim 3:

Wang, Anglin, and Abrams further teach:

Wherein the change data capture process manager performs a change data capture process that converts the change data read from the distributed data source, into a canonical format output of the change data, for consumption by the one or more target systems.

It is noted that the limitation recites intended use language of “…for consumption by…” and is therefore not being given patentable weight. The scope of the limitation is understood as reciting “wherein the change data capture process manager performs a change data capture process that converts the change data read from the distributed data source, into a canonical format output of the change data”. However, for purposes of compact prosecution, the limitation is being addressed below as if the intended use language were recited as a positive step. Wang teaches “data changes exported from all log scanners may be merged to form a single data change stream before they are applied to the remote database” (Para. [0046]), thereby converting the change data into a canonical format for consumption by the remote databases.
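The canonical-format idea discussed for claim 3, and the per-target conversion taken up for claim 4 further below, can be illustrated roughly as follows. The canonical field names, the target kinds, and the output shapes are entirely hypothetical; neither the claims nor the cited art specifies them.

```python
import json

def to_canonical(raw_change, source_node):
    """Normalize a raw per-node change into one canonical record shape."""
    return {
        "op": raw_change["operation"],     # e.g. "insert", "update", "delete"
        "table": raw_change["table"],
        "key": raw_change["key"],
        "values": raw_change.get("values", {}),
        "origin": source_node,             # which node provided the change
    }

def to_target_format(canonical, target_kind):
    """Convert the canonical record into a format a particular target consumes."""
    if target_kind == "message_queue":
        return json.dumps(canonical, sort_keys=True)   # e.g. a JSON message
    if target_kind == "database":
        cols = ", ".join(canonical["values"])
        return f'APPLY {canonical["op"]} ON {canonical["table"]} ({cols})'
    raise ValueError(f"unknown target kind: {target_kind}")
```

The point of the two-step shape is the one claim 4 relies on: one canonical intermediate, many target-specific converters.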
Regarding Claim 6:

Wang, Anglin, and Abrams further teach:

Wherein the change data capture process manager performs a deduplication process that provides automatic deduplication of the data provided by the distributed data source,

Wang teaches “when multiple logs are involved in the source database where duplicate changes may be included in these logs, the duplicates need to be removed…in creating the stream of changes” (Para. [0040]), thereby teaching automatic deduplication of the data.

including that when a new row is processed, the change data capture process manager checks the cache for a token match, and,

Wang teaches “aggregate usage information may be recorded and cached” (Para. [0063]) and “each store may include zero or more database fragments (sometimes referred to herein simply as "fragments"). A fragment may include one or more records of a database. In relational databases, a record may comprise a row of a table, for example. As used herein, a record is to be read broadly as to include any data that may be included in a database of any type.” (Para. [0035]). Abrams further teaches “The source profile contains control information needed to cleanse, quality enhance and transform data from that source into repository entity fields. This includes authentication tokens to validate a source as the origin of arriving data, formats, encodings and protocols for receiving datasets from the source, contact arrangements for correction interactions, reporting arrangements, data access and updated authorizations granted to agents acting for the source.” (Para. [0212]).

if the token exists, checks the origin node of a source row,

Abrams further teaches “The source profile contains control information needed to cleanse, quality enhance and transform data from that source into repository entity fields.
This includes authentication tokens to validate a source as the origin of arriving data, formats, encodings and protocols for receiving datasets from the source, contact arrangements for correction interactions, reporting arrangements, data access and updated authorizations granted to agents acting for the source.” (Para. [0212]), thereby teaching checking whether a token exists and checking the origin of a source of a record/row.

if the origin node of the source row matches the node in the cache, this row data is passed, otherwise the row is filtered out.

Abrams further teaches “The source profile contains control information needed to cleanse, quality enhance and transform data from that source into repository entity fields. This includes authentication tokens to validate a source as the origin of arriving data, formats, encodings and protocols for receiving datasets from the source, contact arrangements for correction interactions, reporting arrangements, data access and updated authorizations granted to agents acting for the source.” (Para. [0212]) and “All source datasets received, validated, normalized, cleansed and prepared as target datasets, along with any attribute values enhanced through cross-source comparison and/or cleansing processes, are stored separately in the ETSDT repository. Each of these datasets of reference data values has clearly understood sourcing” (Para. [0399]), thereby teaching validating a source by matching the source as the origin node of arriving data and thus passing on the data. Abrams further teaches “If the source is invalid the dataset is recorded and the entire dataset is sent to manual processing for source validation” (Para. [0421]), thereby teaching filtering out the invalid rows of the dataset for manual processing for source validation.
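The claim 6 deduplication flow recited above (check the cache for a token match; if the token exists, compare the row's origin node to the cached node; pass on match, otherwise filter the row out) could be sketched as below. The claim does not say how a first-seen token is handled; the choice here (record it and pass the row) is one plausible reading, flagged as such in the comments.

```python
def dedup_filter(rows, cache):
    """rows: iterable of dicts with 'token' and 'origin_node' keys.
    cache: token -> node the capture process first saw that change from.
    Returns only the rows whose origin matches the cached node, so a change
    replicated into several fragment logs is emitted once."""
    passed = []
    for row in rows:
        cached_node = cache.get(row["token"])
        if cached_node is None:
            # First sighting: remember which node provided it and pass it.
            # (Assumption: the claim leaves this case open.)
            cache[row["token"]] = row["origin_node"]
            passed.append(row)
        elif row["origin_node"] == cached_node:
            passed.append(row)   # origin matches the cached node: pass the row
        # else: the same change arriving from a different node's log, filter it
    return passed
```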
Regarding Claim 8:

Wang, Anglin, and Abrams further teach:

whereupon a change to the distributed source topology associated with the distributed data source system, including one or more nodes being added to or removed from the distributed source topology, the deduplication process detects the change to the distributed source topology.

Wang teaches removing a node from the source topology by determining the node to report incompleteness to address the possibility of node failure (Para. [0069]), thereby teaching detecting the removal of the node based on the incompleteness report.

Regarding Claim 11: All of the limitations herein are similar to some or all of the limitations of Claim 1.

Regarding Claim 12: All of the limitations herein are similar to some or all of the limitations of Claim 2.

Regarding Claim 13: All of the limitations herein are similar to some or all of the limitations of Claim 3.

Regarding Claim 16: All of the limitations herein are similar to some or all of the limitations of Claim 6.

Regarding Claim 18: All of the limitations herein are similar to some or all of the limitations of Claim 8.

Regarding Claim 21: Some of the limitations herein are similar to some or all of the limitations of Claim 1. Wang, Anglin, and Abrams further teach: a non-transitory computer readable storage medium, including instructions stored thereon which when read and executed by one or more computers causes the one or more computers to perform a method (Wang - Para. [0019]).
Regarding Claim 22:

Wang, Anglin, and Abrams further teach:

wherein the cache of data records extracted from the distributed data source includes, for each record within the cache, the token indicative of the node within the distributed data source providing that record; and

Anglin teaches each object at the source and target servers having unique attributes that can be compared to determine if the target “has objects matching those at the source, such as signature, unique file name, hash value, etc.” (Para. [0039]). Abrams teaches an attribute of acquired data as being an authentication token “which the utility can use to verify that the dataset originated with the expected source” (Para. [0199]). Therefore, Anglin in combination with Abrams teaches storing the authentication token as an attribute of the source object to be compared with a target object, which, based on the comparison of the tokens as attributes, determines whether to capture the source object to be sent to the target server(s). Wang teaches performing the comparison for every available log in a cache by teaching temporary storage of received available logs for performing duplicate comparison for creation of a single data stream (Para. [0105]).

wherein the system determined, based on comparison of the tokens in the cache, to capture a data change associated with a particular record, as provided by a particular node of the distributed data source,

Anglin teaches each object at the source and target servers having unique attributes that can be compared to determine if the target “has objects matching those at the source, such as signature, unique file name, hash value, etc.” (Para. [0039]). Abrams teaches an attribute of acquired data as being an authentication token “which the utility can use to verify that the dataset originated with the expected source” (Para. [0199]).
Therefore, Anglin in combination with Abrams teaches storing the authentication token as an attribute of the source object to be compared with a target object, which, based on the comparison of the tokens as attributes, determines whether to capture the source object to be sent to the target server(s).

wherein a determination is made that the node indicated by the token in the cache matches the source node for the particular record then passing the particular record to the capture process.

Anglin teaches each object at the source and target servers having unique attributes that can be compared to determine if the target “has objects matching those at the source, such as signature, unique file name, hash value, etc.” (Para. [0039]), which is done before capturing the metadata of the target objects to be passed to the target. Abrams teaches an attribute of acquired data as being an authentication token “which the utility can use to verify that the dataset originated with the expected source” (Para. [0199]). Therefore, Anglin in combination with Abrams teaches making the determination to capture the source object when the attribute of the source object has been authenticated based on comparison of tokens as attributes. While the newly amended claim limitation as a whole merely recites two parts in a particular order, performing the determination that the token in the cache matches the source node, then passing the particular record to the capture process, Anglin further teaches the first causing the second by teaching the authentication step narrowing down determined objects (step 118), which then causes the capturing and sending of metadata only for the determined authenticated objects (step 120).

Regarding Claim 23: All of the limitations herein are similar to some or all of the limitations of Claim 22.

Claims 4, 10, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang, Anglin, and Abrams, and further in view of Padmanabhan et al. (U.S.
Pre-Grant Publication No. 2013/0246376, hereinafter referred to as Padmanabhan).

Regarding Claim 4: Wang, Anglin, and Abrams teach all of the elements of the claimed invention as stated above except:

whereupon based on a target system to which the change data will be communicated, the canonical format output of the change data is converted to a format used by the target system.

However, in the related field of endeavor of extracting data from a source for loading into a target, Padmanabhan teaches: whereupon based on a target system to which the change data will be communicated, the canonical format output of the change data is converted to a format used by the target system. Padmanabhan teaches the data intake management computing device 12 using file generation filtering rules, based on retrieved file definition rules and a name and location of one of the target application servers, to apply to the transformed source files to generate one or more load ready files (Paras. [0037]-[0038]), outputting "the generated load ready files to the corresponding one of the target application servers…based on the obtained name and location" (Para. [0041]).

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Padmanabhan, Anglin, Abrams, and Wang at the time that the claimed invention was effectively filed, to have combined the processing of intake data for transfer to target computers, as taught by Padmanabhan, with the comparison of source and target object attributes prior to sending the objects from the source to the target, as taught by Anglin, the association of acquired data with a token describing the source of the acquired data, as taught by Abrams, and the system and method for capturing changes in logs and transferring the captured changes to remote databases, as taught by Wang. 
One would have been motivated to make such a combination because Padmanabhan teaches "a number of advantages…that more efficiently and effectively manage the intake any kind of incoming data" where "with this technology, received data files can be automatically processed in any custom file format required by applications executing at requesting target computing devices" (Para. [0007]). It would have been obvious to a person having ordinary skill in the art that incorporating a data intake management computing device to automatically process source data into "any custom file format required" by the target would create a more robust and flexible system.

Regarding Claim 10: Padmanabhan, Wang, Anglin, and Abrams further teach:

wherein when more than one replica node is associated with a record,

Wang teaches "the single data stream is ordered by logical times at which changes occurred to database records associated with the batch of changes" (Para. [0094]), thereby teaching the distributed data source including nodes that store and provide associated records.

wherein a history queue that includes a set of last records read from one or more source nodes is used to select, based on a record history, which replica node to provide the record, and

Wang teaches "in case a database fragment becomes unavailable and thus data record copies in this database fragment become inaccessible, copies of the data records may still be available on other database fragments" (Para. [0036]), where when a database fragment is determined to be offline, another database fragment matching the offline database fragment is selected for providing the data change based on a maximum ID when multiple database fragments are available (Para. [0078]). Wang further teaches using a duplicationschemahistory intersected with a set of currently online database fragment IDs and then finding the maximum ID of the intersection (Para. 
[0078]), thereby teaching using a history queue to select a replica based in part on the duplicationschemahistory. Padmanabhan also teaches a history queue of last records read by teaching a data intake management computing device that "may automatically store information on which incoming source files were successfully validated and which incoming source files were not successfully validated in the audit information database 40, although other types of information can be stored" (Para. [0025]).

wherein a replica node with a maximum record history is selected to feed a partition token found in a last record processed by the unavailable node.

It is noted that the limitation recites intended use language of "…to feed…" and is therefore not being given patentable weight. The scope of the limitation is understood as reciting "a replica node with a maximum record history is selected". However, for purposes of compact prosecution, the limitation is being addressed below as if the intended use language were recited as a positive step. Wang teaches "in case a database fragment becomes unavailable and thus data record copies in this database fragment become inaccessible, copies of the data records may still be available on other database fragments" (Para. [0036]), where when a database fragment is determined to be offline, another database fragment matching the offline database fragment is selected for providing the data change based on a maximum ID when multiple database fragments are available (Para. [0078]). Wang further teaches using a duplicationschemahistory intersected with a set of currently online database fragment IDs and then finding the maximum ID of the intersection (Para. [0078]), thereby teaching using a history queue to select a replica with the maximum record history determined based in part on the duplicationschemahistory.

Regarding Claim 14: All of the limitations herein are similar to some or all of the limitations of Claim 4. 
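The selection step the Action attributes to Wang (Para. [0078]), intersecting the duplicationschemahistory with the set of currently online database fragment IDs and taking the maximum ID of the intersection, can be illustrated with a minimal sketch. The function and variable names below are hypothetical and are not drawn from Wang, the claims, or the application:

```python
def select_replica(duplication_schema_history, online_fragment_ids):
    """Return the online fragment with the maximum ID among those that
    appear in the duplication schema history, or None when no online
    fragment holds a copy of the record (illustrative sketch only)."""
    candidates = set(duplication_schema_history) & set(online_fragment_ids)
    return max(candidates) if candidates else None

# Fragments 3 and 7 hold copies of the record; fragments 5 and 7 are online.
select_replica([3, 7], [5, 7])  # -> 7
```

The sketch only models the intersect-then-take-maximum step; it says nothing about how Wang actually maintains the history or detects offline fragments.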
Regarding Claim 20: All of the limitations herein are similar to some or all of the limitations of Claim 10.

Claims 7, 9, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wang, Anglin, and Abrams, and further in view of Tom et al. (U.S. Pre-Grant Publication No. 2005/0165858, hereinafter referred to as Tom).

Regarding Claim 7: Wang, Anglin, and Abrams further teach:

wherein the change data capture process manager provides access to one or more distributed source change trace entity at nodes of the distributed data source system.

Wang teaches "a centralized node may receive available logs and may remove duplicates in create a single data change stream to be exported to another database" (Para. [0105]), thereby teaching access being provided to the data change logs at each node to determine the data changes across the distributed nodes for exporting to one or more other remote databases, as depicted in Figure 3.

Wang, Anglin, and Abrams teach all of the elements of the claimed invention as stated above except:

wherein the change data capture process manager performs automatic discovery of the distributed source topology associated with the distributed data source system,

However, in the related field of endeavor of extracting data from a source for loading into a target, Tom teaches: wherein the change data capture process manager performs automatic discovery of the distributed source topology associated with the distributed data source system. Tom teaches "as nodes are added to a peer-to-peer network, it is possible to validate if data replication requirements are being met" (Para. [0042]), thereby teaching automatic discovery of the distributed topology as nodes are added to the peer-to-peer network and providing access to the nodes. 
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Tom, Anglin, Abrams, and Wang at the time that the claimed invention was effectively filed, to have combined the ring topology, as taught by Tom, with the comparison of source and target object attributes prior to sending the objects from the source to the target, as taught by Anglin, the association of acquired data with a token describing the source of the acquired data, as taught by Abrams, and the system and method for capturing changes in logs and transferring the captured changes to remote databases, as taught by Wang. One would have been motivated to make such a combination because Tom teaches that an architecture of a ring topology increases fault tolerance of the system (Para. [0031]), and increasing fault tolerance improves the reliability of the system.

Regarding Claim 9: Tom, Wang, Anglin, and Abrams further teach:

whereupon the change data capture process manager determines that a particular node in the distributed data source system, wherein said particular node had been providing records, becomes unavailable, the change data capture process manager performs a recovery process that selects a replica node at which to obtain records.

Tom teaches synchronizing nodes where "since all nodes stay synchronized, and all nodes can act as publishers, there now exists multiple sites for failovers, write and read scalability, etc." (Para. [0031]), thereby teaching selecting a replica node as a failover node when a node becomes unavailable.

Regarding Claim 17: All of the limitations herein are similar to some or all of the limitations of Claim 7.

Regarding Claim 19: All of the limitations herein are similar to some or all of the limitations of Claim 9.

Response to Amendment

Applicant's Amendments, filed on 11/28/2025, are acknowledged and accepted. In light of the Amendments filed on 11/28/2025, the claim objection to claims 1, 11, and 21 is withdrawn. 
Response to Arguments

On pages 14-15 of the Remarks filed on 11/28/2025, Applicant argues that "although Wang and Anglin generally appear to describe various means for data replication, while Abrams appears to describe the use of an authentication token which a multi-source multi-tenant reference data utility can use to verify that a dataset originated with an expected source; neither of the cited references, when considered alone or in combination, appear to describe, for example, wherein each data record, when extracted from the distributed data source, is associated with a token indicative of a partition and node within the distributed data source providing that record; or wherein in response to a source node is determined as being unavailable, a recovery process selects, from within the plurality of replica nodes at the distributed data source, based on the token indicative of partition within the distributed data source providing a record, a replica node from which to obtain the change data records" because "Abrams appears to describe that 'a source profile contains information characterizing the behavior of a data source used by a reference data utility' and that 'information in a source profile includes authentication tokens, which the utility can use to verify that the dataset originated with the expected source,' which appears to indicate that the authentication token described therein is included in the source profile associated with a data source and is used to verify that a dataset originated with an expected source; but does not appear to describe, nor render obvious when combined with the additional cited references, the operation of a change data capture process manager in generating tokens from records read from source trace entities, for use in determining replica nodes from which to obtain change data records." 
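The limitation in dispute (each extracted record carries a token indicative of the partition and node that provided it, and a recovery process uses that token to pick a replica node when the source node becomes unavailable) can be sketched minimally. This is an illustration only: the `Token`, `build_token_cache`, and `select_replica_for` names and the data shapes are assumptions, not taken from the application or any cited reference:

```python
from collections import namedtuple

# Hypothetical token tagging each record with its source partition and node.
Token = namedtuple("Token", ["partition", "node"])

def build_token_cache(records):
    """Associate each extracted change data record with its token."""
    return {r["id"]: Token(r["partition"], r["node"]) for r in records}

def select_replica_for(token, replicas_by_partition, online_nodes):
    """On source-node failure, pick an online replica node for the
    partition named in the record's token (illustrative only)."""
    for node in replicas_by_partition.get(token.partition, []):
        if node in online_nodes:
            return node
    return None

cache = build_token_cache([{"id": "r1", "partition": 0, "node": "A"}])
# Node "A" is down; replica "C" for partition 0 is still online.
select_replica_for(cache["r1"], {0: ["A", "C"]}, {"C"})  # -> "C"
```

The sketch is only meant to make the claim language concrete; it takes no position on whether the cited references teach this arrangement.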
Applicant's argument is not convincing for at least the reason(s) that the amended claims do not specify or necessitate how the tokens are being used in the selection of the replica node, just that the selection is "based on the tokens in the cache". Therefore, when read using the broadest reasonable interpretation of the amended claims, Wang, Abrams, and Anglin are understood as teaching the scope of the amended claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Porter (U.S. Patent No. 7,321,939) teaches a method for delivering information to information targets within a computing environment having multiple platforms that includes extracting information from an information source, transforming the extracted information, and isolating the transformed information by wrapping the transformed information into a message envelope having a standard format, the message envelope being routed to at least one information target where the message envelope is targeted to an information target on the same platform as the router. Dolan (U.S. Pre-Grant Publication No. 2012/0303559) teaches developing, training, validating, and deploying discovery avatars embodying mathematical models that may be used for document and data discovery and deployed within large data repositories. Todd (U.S. Patent No. 7,716,181) teaches capturing source database change transactions, batching them together for efficient transfer to a target replica system, and applying the batched change transactions to the target replica system. Kuroide (U.S. 2010/020515) teaches "when a failure occurred in any way of the matching nodes, the data matching switching part 33 records the identification information of the failure node into a history list of failure nodes" and switching to the backup data of the other matching nodes (Para. [9934]). Bulkowski (U.S. Pre-Grant Publication No. 
2018/0004777) teaches a method of data distribution across nodes of a Distributed Database System (DDBS) that includes the step of hashing a primary key of a record into a digest, wherein the digest is part of a digest space of the DDBS. The method includes the step of partitioning the digest space of the DDBS into a set of non-overlapping partitions. The method includes the step of implementing a partition assignment algorithm. The partition assignment algorithm includes the step of generating a replication list for the set of non-overlapping partitions. The replication list includes a permutation of a cluster succession list. A first node in the replication list comprises a master node for that partition. A second node in the replication list comprises a first replica. The partition assignment algorithm includes the step of using the replication list to generate a partition map. Kan et al. (U.S. Pre-Grant Publication No. 2013/0346365) teaches a distributed storage system including a plurality of data nodes coupled via a network and respectively including data storage units, where at least two of the data nodes hold in the respective data storage units thereof replicas of a plurality of types of data structures that are logically identical but are physically different between the data nodes. Park et al. (U.S. Pre-Grant Publication No. 2017/0147638) teaches efficiently providing transaction-consistent snapshots of data stored in or associated with a database stored within a database management system. An embodiment operates by receiving, at a source database, an update request to update data associated with a table stored at the source database. 
The embodiment continues by modifying a value of a modification-in-progress data structure corresponding to the table to indicate that a modification is in progress for the table, and that cached data associated with the table is invalid while the modification is in progress for the table, and performing the table update based, at least, on information received in the update request. The embodiment further continues by updating a value of a commit identification counter, and subsequently a table time stamp associated with the table, to indicate that all cached data associated with the table having a time stamp older than the updated time stamp are invalid. The embodiment further continues by modifying the value of the modification-in-progress counter to indicate the completion of table modification. Lee et al. (U.S. Pre-Grant Publication No. 2016/0371356) teaches facilitating transaction processing within a database environment having a coordinator node, a first worker node, and at least a second worker node. The first worker node sends a request to the coordinator node for at least a first synchronization token maintained by the coordinator node. The first worker node receives the at least a first synchronization token from the coordinator node. The first worker node assigns the at least a first synchronization token to a snapshot as a snapshot ID value. The snapshot is executed at the first worker node. The first worker node forwards the snapshot ID value to the at least a second worker node.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT F MAY whose telephone number is (571) 272-3195. The examiner can normally be reached Monday-Friday, 9:30am to 6:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Boris Gorney, can be reached on 571-270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBERT F MAY/
Examiner, Art Unit 2154
2/19/2026

/BORIS GORNEY/
Supervisory Patent Examiner, Art Unit 2154

Prosecution Timeline

Sep 13, 2023
Application Filed
May 18, 2024
Non-Final Rejection — §103
Oct 23, 2024
Response Filed
Jan 21, 2025
Final Rejection — §103
May 23, 2025
Response after Non-Final Action
Jul 25, 2025
Request for Continued Examination
Jul 30, 2025
Response after Non-Final Action
Aug 21, 2025
Non-Final Rejection — §103
Nov 28, 2025
Response Filed
Feb 19, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586145
METHOD AND APPARATUS FOR EDITING VIDEO IN ELECTRONIC DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12468740
CATEGORY RECOMMENDATION WITH IMPLICIT ITEM FEEDBACK
2y 5m to grant Granted Nov 11, 2025
Patent 12367197
Pipelining a binary search algorithm of a sorted table
2y 5m to grant Granted Jul 22, 2025
Patent 12360955
Data Compression and Decompression Facilitated By Machine Learning
2y 5m to grant Granted Jul 15, 2025
Patent 12347550
IMAGING DISCOVERY UTILITY FOR AUGMENTING CLINICAL IMAGE MANAGEMENT
2y 5m to grant Granted Jul 01, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+29.7%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 286 resolved cases by this examiner. Grant probability derived from career allow rate.
