Prosecution Insights
Last updated: April 19, 2026
Application No. 18/982,526

Data Management Ecosystem for Databases

Final Rejection §103

Filed: Dec 16, 2024
Examiner: LE, MIRANDA
Art Unit: 2153
Tech Center: 2100 — Computer Architecture & Software
Assignee: Capital One Services LLC
OA Round: 2 (Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% — above average (368 granted / 492 resolved; +19.8% vs TC avg)
Interview Lift: +77.1% — strong (allow rate among resolved cases with an interview vs without)
Typical Timeline: 3y 11m average prosecution; 19 applications currently pending
Career History: 511 total applications across all art units

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 69.2% (+29.2% vs TC avg)
§102: 4.4% (-35.6% vs TC avg)
§112: 3.8% (-36.2% vs TC avg)

Comparisons are against a Tech Center average estimate. Based on career data from 492 resolved cases.
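A quick consistency check on the panel above: subtracting each statute's "vs TC avg" delta from its rate yields the same implied Tech Center baseline, 40%, in all four cases, suggesting the dashboard measures every statute against a single estimated average. The snippet below is written for this note, not taken from the product, and simply verifies that arithmetic:

```python
# Figures copied from the Statute-Specific Performance panel.
# Each entry is (examiner rate %, delta vs Tech Center average %).
rates = {"101": (16.5, -23.5), "103": (69.2, 29.2),
         "102": (4.4, -35.6), "112": (3.8, -36.2)}

# rate - delta recovers the baseline each delta was computed against.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
# implied == {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}
```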

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This communication is responsive to the Amendment filed 12/01/2025. Claims 1-20 are pending in this application. This action is made Final.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Demla et al. (US Pub No. 2019/0171633), in view of Schulman et al. (US Pub No. 2018/0302421).

As to claims 1, 12, and 19, Demla teaches a computing device comprising:

one or more processors (i.e. Computer system 400 also includes a main memory 406, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 402 for storing information and instructions to be executed by processor 404, [0184]); and

memory storing instructions that, when executed by the one or more processors, cause the computing device to (i.e. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 404 for execution, [0190]):

scan a plurality of queries in a sandbox to determine one or more data objects to be migrated to a production environment (i.e. The commit logs of the first database system are scanned and converted into an event stream, [0026]; During the processing of the commit logs, an event stream is generated. For each event represented in the event stream, there is information about the event that may be referred to as the "event payload" ... the event payload for event created by the update to the loan app table ... to extract the person data for the corresponding update that the person application needs to make to the person app table, [0042]; queries over the audit log data store are more efficient than similar queries over a database that stores all changes made within the multiple systems, [0029]; assume that there are 10 investors in that loan, that payment operation may trigger 10 child business operations, each of which is payment of one of the ten investors ... the operation touches 10 tables, [0045]);

determine, based on a plurality of queries, lineage information for one or more data objects, wherein the lineage information comprises an origin of each of the one or more data objects, and a frequency with which each of the one or more data objects has been accessed (i.e. a request to perform a multi-system operation (MSO1) is received ... the first service application 202 that provides the first service ... first changes to be made in a first database system 206 associated with the first service, [0031]; The second service application 204 recognizes MSO1 as a type of operation that involves the second service, [0035]; A single high-level business operation may spawn multiple child operations ... trigger 10 child business operations ... a parent operation and all child operations are treated as one business operation, with a single correlation identifier even if the operation touches 10 tables, [0045]);

generate a tokenization for at least a portion of sensitive data, wherein the tokenization replaces the portion of the sensitive data with a non-sensitive token without altering a type or length of the portion of the sensitive data (i.e. reconciliation system 310 identifies records to be stored in an audit log data store based on a set of audit log rules. These rules may be used to encode information about changes that are of particular interest to an administrator, [0132]; Record changes that are made in a particular column in a particular table (e.g., changes to a "SSN" column of a "Users" table), [0135]);

generate, based on the lineage information and the tokenization, a migration plan path of the one or more data objects from the sandbox to the production environment (i.e. the log records recorded in the commit logs, and made into event streams by the streaming modules, includes the correlation context stored by each of the services 1-N ... all of the change records sent out into the event streams for the operation instance are able to be associated using the correlation identifier for the operation instance, [0147]; the event payload for event created by the update to the loan app table (in the creation of a new loan application) should have enough information for person service to extract the person data for the corresponding update that the person application needs to make to the person app table, [0042]);

provide, as input to a trained machine learning model, the migration plan, wherein the trained machine learning model is trained based on historical scheduling of fresh data load to source tables associated with the one or more data objects (i.e. based on the event information stored for prior executions of operation type X, a machine learning system may learn the average time that it takes for each action to happen, [0098]; a loan system may get a payment. This payment could be processed in system A, and then two days later the record of the event (including the correlation context) can be propagated to system B to cause system B to perform something as part of the same multi-system operation ... The time that passes between two systems performing their respective parts of the same multi-system operation can be seconds, minutes, hours, or even days, [0050]);

receive, from the trained machine learning model and based on the input, a recommendation on a scheduling time to execute the migration plan (i.e. Based on an expectation model for a particular type of multi-system operation (created manually or by a trained machine learning engine) it is possible to know how much time normally lapses between events involved in that type of multi-system operation. For example, since each one of the INSERTS in the BizOp table will also have the create date, it is possible to know how long it takes on average between the time that (a) a loan app is created by the loan app service and (b) the person is inserted into the person table in the database used by the person service. Based on this knowledge, events that occur in one system involved in a multi-system operation may trigger the sending of expectation messages to one or more other systems involved in the multi-system transaction, [0103]);

add a sub-domain associated with a business organization to at least one of a plurality of data storages (i.e. a separate service is provided for each "domain", and each domain has a highest-level object that it manages. The highest-level object managed by a service is referred to as the aggregate root of the service, [0112]; the aggregate root ID associated with each multi-system operation is inserted, as part of the correlation context metadata, into the BIZ-OP table (e.g. MSOT1), [0113]; a context attribute of a rule may span across services, database schemas, and even domains of services, [0114]; the second service application 204 performs its part of the multi-system operation based on an event stream produced from the commit log 212 of database system 206, [0034]);

add a schema to the at least one of the plurality of data storages (i.e. In addition to the second changes, as part of the same second transaction TX2, the second service application 204 also causes entries to be added to a second multi-system operation table (MSOT2) that is managed by a second database server 220. Similar to the entries added to MSOT1, the entries that are added to MSOT2 include the correlation identifier CI1 and metadata related to the multi-system operation MSO1, [0036]);

provision a plurality of computing resources available to one or more servers that provide the at least one of the plurality of data storages (i.e. A single high-level business operation may spawn multiple child operations ... trigger 10 child business operations ... a parent operation and all child operations are treated as one business operation, with a single correlation identifier even if the operation touches 10 tables, [0045]; the presence of the correlation context in the logs/events/messages of each system involved in a multi-system operation enables an analyzer system to use the data from the correlation context of the multi-system operation to build a communication graph that illustrates the communications that occurred as a result of execution of the multi-system operation, [0092]; a communication graph can be used to see what kind of activity is happening in the system, and based on the graph, it may be evident what triggered a loan, and that the loan creation impacted five other services external to the loan app service itself, [0093]);

cause the at least one of the plurality of data storages to execute the migration plan (i.e. an analysis system builds a "live flow graph" which can be used both for debugging and for discovery (auto-discovery and/or delay discovery for activity in the systems), [0093]; communication graphs 1-N are created based on the information obtained from the logs of systems 1-4. As explained above, each of those communication graphs corresponds to a distinct correlation identifier, and is built from the correlation context information, associated with that correlation identifier, that is stored in the logs of systems 1-4, [0096]).

Although Demla does not seem to specifically teach "sandbox," Schulman teaches this limitation (i.e. Any collected data (e.g., sandbox data) can be used to draw conclusions on the specific attack(s) and to develop stronger detection and prevention functionality, [0057]). It would have been obvious to one of ordinary skill in the art, having the teachings of Demla and Schulman before the effective filing date of the claimed invention, to modify the system of Demla to include the limitations as taught by Schulman. One of ordinary skill in the art would be motivated to make this combination in order to stream collected data to a monitoring server, in view of Schulman ([0052]), as doing so would give the added benefit of keeping track of received monitored network transaction data and forming complete transactions for analysis, as taught by Schulman ([0052]).

As to claims 2 and 13, Demla teaches that the plurality of data storages comprise one or more of a database or a data warehouse (i.e. information about all events are stored in an event table, and information about which events have been consumed, and by whom, are stored in a consumption table, [0045]).

As per claim 3, Demla teaches that the sensitive data comprises one or more of: payment card information (PCI), or nonpublic personal information (NPI) (i.e. any change that results in a value in a "Salary" column of an "Employment" table to go over 100,000, [0138]).
As to claims 4 and 14, Demla teaches the instructions, when executed by the one or more processors, cause the computing device to: prior to determining the lineage information, receive, from a user device, a request to migrate the one or more data objects from the sandbox to the production environment of the at least one of the plurality of the data storages in a cloud computing environment, wherein computing resources are dynamically allocated to the at least one of the plurality of data storages (i.e. a request to perform a multi-system operation (MSO1) is received, [0031]; executing multi-system operations based on information stored in the commit logs of database systems ... The commit logs of the first database system are scanned and converted into an event stream, [0026]; A single high-level business operation may spawn multiple child operations ... assume that there are 10 investors in that loan, that payment operation may trigger 10 child business operations, each of which is payment of one of the ten investors. However, a parent operation and all child operations are treated as one business operation, with a single correlation identifier even if the operation touches 10 tables. That is, the child operations inherit the correlation ID of the parent, [0045]).

As to claims 5 and 15, Demla teaches the instructions, when executed by the one or more processors, cause the computing device to: prior to receiving the request to migrate, receive a request to provision the one or more data objects in the at least one of the plurality of data storages (i.e. a request to perform a multi-system operation (MSO1) is received, [0031]; executing multi-system operations based on information stored in the commit logs of database systems ... The commit logs of the first database system are scanned and converted into an event stream, [0026]; A single high-level business operation may spawn multiple child operations ... assume that there are 10 investors in that loan, that payment operation may trigger 10 child business operations, each of which is payment of one of the ten investors, [0045]); and provision access to the one or more data objects in the production environment (i.e. a parent operation and all child operations are treated as one business operation, with a single correlation identifier even if the operation touches 10 tables. That is, the child operations inherit the correlation ID of the parent, [0045]).

As to claims 6 and 16, Demla teaches the instructions, when executed by the one or more processors, cause the computing device to: prior to generating the tokenization, provide, to a user device, one or more options to tokenize the sensitive data, wherein the tokenization is based on a user selection of the one or more options (i.e. the rules governing storage of data in the audit log are maintained in a database table accessible to reconciliation system 310. The rules may be changed by a database administrator by updating the table and then restarting an audit log listener 312, of reconciliation system 310, [0133]; reconciliation system 310 identifies records to be stored in an audit log data store based on a set of audit log rules. These rules may be used to encode information about changes that are of particular interest to an administrator, [0132]).

As to claims 7, 17, and 20, Demla teaches the instructions, when executed by the one or more processors, cause the computing device to: prior to an execution of the migration plan, update a data catalog of the at least one of the plurality of data storages with the lineage information (i.e. Reconciliation system 310 is configured to receive the event streams from any number of streaming modules (e.g. streaming modules 1, 2, N), each of which is associated with a separate service, [0088]; streaming module 214 generates an event stream 217 based on information from the commit log 212 associated with the first service application 202, [0043]).

As to claims 8 and 18, Demla teaches the instructions, when executed by the one or more processors, cause the computing device to: determine a plurality of dependent jobs to be executed prior to the execution of the migration plan (i.e. It shall be further assumed that the multi-system operation requires first changes to be made in a first database system 206 associated with the first service (e.g. decrementing the number of seats available on the flight), and second changes to be made in a second database system 228 associated with a second service (e.g. reducing the account balance to reflect the ticket purchase), [0031]); and cause the at least one of the plurality of data storages to execute the migration plan by causing the at least one of the plurality of data storages to execute the migration plan after the dependent jobs have been executed successfully (i.e. A single high-level business operation may spawn multiple child operations ... that payment operation may trigger 10 child business operations, each of which is payment of one of the ten investors ... the operation touches 10 tables. That is, the child operations inherit the correlation ID of the parent, [0045]).

As per claim 9, Demla teaches the instructions, when executed by the one or more processors, cause the computing device to validate the migration plan based on a data quality check using metadata information (i.e. under these circumstances, the validation analysis for the OP-TYPE-X operation is only ripe after two hours have elapsed since reconciliation system 310 has seen a correlation ID for an occurrence of OP-TYPE-X from streaming module 1, [0090]).

As per claim 10, Demla teaches the instructions, when executed by the one or more processors, cause the computing device to: determine a historical operating status of at least one of the plurality of data storages (i.e. based on historic records, the machine learning system may know that other business operations of the same type end up with multiple entries in multiple databases of multiple services, and it knows how much time for all these to happen, [0091]); select, based on the historical operating status, a time period to execute the migration plan, wherein causing the at least one of the plurality of data storages to execute the migration plan comprises causing the at least one of the plurality of data storages to execute the migration during the time period (i.e. based on the event information stored for prior executions of operation type X, a machine learning system may learn the average time that it takes for each action to happen, [0098]; a loan system may get a payment. This payment could be processed in system A, and then two days later the record of the event (including the correlation context) can be propagated to system B to cause system B to perform something as part of the same multi-system operation ... The time that passes between two systems performing their respective parts of the same multi-system operation can be seconds, minutes, hours, or even days, [0050]).

As per claim 11, Demla teaches the instructions, when executed by the one or more processors, cause the computing device to: train a machine learning model based on historical operating status of the plurality of data storages that indicates a degree of utilization of each of the plurality of data storages (i.e. based on historic records, the machine learning system may know that other business operations of the same type end up with multiple entries in multiple databases of multiple services, and it knows how much time for all these to happen, [0091]).
Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot in view of the new ground(s) of rejection.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MIRANDA LE, whose telephone number is (571) 272-4112. The examiner can normally be reached M-F, 7 AM-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kavita Stanley, can be reached at 571-272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MIRANDA LE/
Primary Examiner, Art Unit 2153
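The same claim recites determining lineage information, namely the origin of each data object and how often it is accessed, by scanning a plurality of queries. The sketch below illustrates what such query scanning could look like; the regexes, the `lineage_from_queries` helper, and the example queries are all invented for illustration and are not drawn from the application:

```python
import re
from collections import Counter

def lineage_from_queries(queries):
    """Scan SQL text for table references to build simple lineage info:
    the origin of each object (first CREATE TABLE / INSERT INTO seen)
    and its access frequency (count of FROM/JOIN references)."""
    origin = {}
    access = Counter()
    for i, q in enumerate(queries):
        for table in re.findall(r'(?:INSERT INTO|CREATE TABLE)\s+(\w+)', q, re.I):
            origin.setdefault(table, f"query #{i}")  # first writer wins
        for table in re.findall(r'(?:FROM|JOIN)\s+(\w+)', q, re.I):
            access[table] += 1
    return origin, dict(access)

queries = [
    "CREATE TABLE loans (id INT, amount DECIMAL)",
    "INSERT INTO loans SELECT * FROM staging_loans",
    "SELECT * FROM loans JOIN investors ON loans.id = investors.loan_id",
]
origin, freq = lineage_from_queries(queries)
# origin records where 'loans' was created; freq counts reads per table
```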

Prosecution Timeline

Dec 16, 2024: Application Filed
Aug 30, 2025: Non-Final Rejection — §103
Oct 21, 2025: Applicant Interview (Telephonic)
Dec 01, 2025: Response Filed
Dec 10, 2025: Examiner Interview Summary
Jan 18, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591565: PREDICTING PURGE EFFECTS IN HIERARCHICAL DATA ENVIRONMENTS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12547635: METHOD AND APPARATUS FOR SPATIAL DATA PROCESSING (granted Feb 10, 2026; 2y 5m to grant)
Patent 12517907: GRAPH-BASED QUERY ENGINE FOR AN EXTENSIBILITY PLATFORM (granted Jan 06, 2026; 2y 5m to grant)
Patent 12517929: MAPPING DISPARATE DATASETS (granted Jan 06, 2026; 2y 5m to grant)
Patent 12488015: SYSTEMS AND METHODS FOR INTERACTIVE ANALYSIS (granted Dec 02, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
Grant Probability With Interview: 99% (+77.1% lift)
Median Time to Grant: 3y 11m
PTA Risk: Moderate

Based on 492 resolved cases by this examiner. Grant probability derived from career allow rate.
