Prosecution Insights
Last updated: April 19, 2026
Application No. 17/357,202

SEAMLESS MICRO-SERVICES DATA SOURCE MIGRATION WITH MIRRORING

Status: Non-Final OA (§103)
Filed: Jun 24, 2021
Examiner: MILLS, PAUL V
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: Charter Communications Operating LLC
OA Round: 5 (Non-Final)

Grant Probability: 53% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 4y 2m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 53% (grants 53% of resolved cases; 185 granted / 351 resolved; -2.3% vs TC avg)
Interview Lift: +39.6% (strong lift among resolved cases with interview)
Typical Timeline: 4y 2m average prosecution; 22 applications currently pending
Career History: 373 total applications across all art units

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§103: 47.8% (+7.8% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 24.7% (-15.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 351 resolved cases.

Office Action

§103
DETAILED ACTION

Status of Claims
This action is in reply to the communication filed on 12/17/2025. Claims 1-19, 21, and 22 have been cancelled. Claims 23-44 have been added. Claims 20 and 23-44 are currently pending and have been examined.

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2025 has been entered.

Response to Arguments
On pg. 8-9 of the Remarks, Applicant essentially argues regarding the newly added claims of the instant application: "As explained in the "New Claims" section infra, Applicant has by this paper cancelled each of the rejected claims without prejudice and added, inter alia, new independent Claims 29 and 39, each of which correspond generally and without limitation or estoppel to subject matter of allowed independent Claim 20. Accordingly, Applicant believes each of the now-pending claims is in condition for allowance...New independent Claims 29 and 39 each correspond generally and without limitation or estoppel to subject matter of allowed independent Claim 20...Additionally, Applicant submits that new Claims 23-44 distinguish over the art of record, and therefore are believed to be in condition for allowance."

Examiner respectfully disagrees that new independent Claims 29 and 39 are in condition for allowance for the reasons detailed in the rejections below. While claims 29 and 39 do "correspond generally" to the subject matter of claim 20, their scope is not equivalent. Most particularly, the last limitations of claims 29 and 39 recite: "wherein at least one of the time periods and the proportions is adapted based on a percentage of production requests meeting or exceeding a threshold level of criticality," which is not equivalent to the final limitations recited in claim 20: "wherein apportionment of the production request stream between the first request stream and second request stream is adapted based on at least a percentage of production requests therein determined to meet or exceed a threshold level of criticality; and wherein the plurality of time periods are each adapted based on at least a percentage of production requests therein determined to meet or exceed a threshold level of criticality."

Allowable Subject Matter
Claim 20 stands allowed as specified in the 07/21/2025 Office Action. New claims 23-28 are allowable at least for being dependent upon an allowed claim.

Claim Objections
Claims 29-38 are objected to because of the following informalities: Claim 29 recites "identifying an identified microservice in a production environment configured to process…" which should be written "identifying a".
Claims 30-38 depend upon an objected claim. Appropriate correction is required.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 29-37, 39-40 and 42-44 are rejected under 35 U.S.C. 103 as being unpatentable over "Understanding and Validating Database System Administration", 2006 (hereafter "Oliveira"), in view of "Testing Database Changes the Right Way", 2018 (hereafter "Heap"), in further view of "Bifrost – Supporting Continuous Deployment with Automated Enactment of Multi-Phase Live Testing Strategies", 2016 (hereafter "Bifrost"), in view of Patil et al. (US 2020/0314615 A1).

Claims 29 and 39:
Oliveira discloses (pg. 213-214, § 1, para. 1, 5, 6; pg. 219, § 4) an apparatus for servicing application component (container and microservice discussed below in view of Bifrost) requests in a production environment, wherein: the production environment (online environment/slice) comprising compute and memory resources configured as one or more clusters of [application components] including an identified [component] operatively coupled to an existing data source (online/live database), the existing data source scheduled to be replaced by a new data source (masked database under validation) (pg. 220; and pg. 223, col. 1);
the apparatus comprising: network services provider equipment comprising one or more server apparatus and one or more non-transitory storage apparatus, the one or more server apparatus and one or more non-transitory storage apparatus configured to provide: compute and memory resources (pg. 220, Fig. 5; pg. 224, sect. 5.1);

compute and memory resources, in the mirroring environment (validation environment/slice), configured as a second [masked component or a proxy component] operatively coupled to the new data source (see at least pg. 220, § 4.1) disclosing "the components under validation, which we shall call masked components for simplicity…we actually host the validation environment on the online system itself. In particular, we divide the components into two logical slices: an online slice that hosts the online components and a validation slice where components can be validated before being integrated into the online slice…a script places the set of components to be worked on in the validation environment, effectively masking them from the live service."

route a sequence of [service/component] requests to each of the first [live component] and the second [masked component] in the mirroring environment for contemporaneous processing to generate respective first output data and second output data; compare the first output data and the second output data to determine a level of correlation therebetween (pg. 214, col. 1, para. 1; pg. 220; and pg. 223, col. 1): "Replica-based validation involves designating each masked component as a "mirror" of a live component. All requests sent to the live component are then duplicated and also sent to the mirrored, masked component. Results from the masked component are compared against those produced by the live component" (pg. 214, col. 1); "a comparator function might determine if the streams of requests and replies going across the pair of connections labeled (A) and (B) are similar enough to declare the masked database as working correctly" (pg. 220, col. 2).

in response to the level of correlation exceeding a threshold level of correlation (pass validation), qualify the identified [component] for processing requests using the new data source and invoke a migration of request processing toward an instantiation of the component coupled to the new data source (pg. 220, col. 2; pg. 223, col. 2): "If the masked components pass this validation, the script calls a migration function that fully integrates the component into the live service." (pg. 220, col. 2).

As noted above, Oliveira compares responses/outputs of the masked components in the mirrored validation environment to responses/outputs produced in the live/online environment, and accordingly does not specifically disclose a first [component] configured to use the existing data source in the mirroring environment and comparing the first and second [mirror component] output data to determine a level of correlation therebetween. Heap, however, discloses an analogous DB testing method including instantiating, in an environment ("shadow prod") mirroring at least a portion of the production environment, a [control/baseline machine] configured to use the existing data source, and a second [experimental machine] operatively coupled to the new (different configuration) data source, in at least pg. 2-3, § "Enter Shadow Prod" and "Populating a Shadow Prod Machine". Heap further discloses comparing the [control and experimental machine] output data to determine a level of correlation therebetween in at least pg. 3-4, § "Analyzing the Results". Exemplary quotations: "You can think of shadow prod as a staging environment for data-layer changes.
Each machine in shadow prod is a mirror of a machine in production, but with a different configuration that we want to evaluate. All reads and writes that go to the production machine are mirrored to the shadow prod machine. This means the shadow prod machine is experiencing the exact same workload as a production machine…Shadow prod gives us a place where we can test database changes in an environment exactly like production"; "Whenever we are testing a change, we spin up two copies of the same machine. We designate one as the control machine and the other as an experimental machine. The control machine has the same configuration as production. By setting up the experiment in this way, we have a baseline we can compare queries on the experimental machines to. This protects us from the possibility that whatever we did to populate the machine affected the resulting performance."

It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify Oliveira to utilize a newly instantiated control/baseline component that duplicates the production component as a reference to assess the component being validated, rather than the actual/live component, to reduce the risk of transients or noise tainting the output data, thus increasing the result accuracy (pg. 2-3, § "Enter Shadow Prod" and "Populating a Shadow Prod Machine").

Oliveira further discloses (pg. 223, col. 2) an exemplary two-stage process to migrate the live service from the old database to the new, but does not specifically employ an incremental rollout and does not disclose divide a production request stream for the identified microservice into a first request stream processed using the existing data source and a second request stream processed using the new data source. Also, as noted above, Oliveira does not disclose exemplary components being run on containers or that follow the microservice architectural style.
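The replica-based validation pattern attributed to Oliveira and Heap above (duplicate each request to a baseline and a candidate component, compare the outputs, and pass validation when the correlation clears a threshold) can be sketched roughly as follows. The function names, the dict-free request model, and the exact-match comparator are illustrative assumptions, not code from any cited reference:

```python
def exact_match_comparator(baseline_out, candidate_out):
    """Strict comparator, in the style of Oliveira's 'exact content
    matching' example. Returns 1.0 on a match, 0.0 otherwise."""
    return 1.0 if baseline_out == candidate_out else 0.0

def mirror_request(request, baseline, candidate, comparator):
    """Duplicate one request to both components and score the outputs."""
    return comparator(baseline(request), candidate(request))

def validate(requests, baseline, candidate, comparator, threshold=0.99):
    """Pass validation when the mean correlation over the mirrored
    request stream meets or exceeds the threshold."""
    scores = [mirror_request(r, baseline, candidate, comparator)
              for r in requests]
    return sum(scores) / len(scores) >= threshold
```

With identical components every mirrored request matches and validation passes; a candidate that diverges on a meaningful fraction of requests falls below a 0.99 threshold and is held back from migration.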
Bifrost, however, discloses a framework providing automation support for live testing strategies, including (pg. 2, § 2.2) dark/shadow launching analogous to Oliveira/Heap, wherein the "prototype specifically targets microservice-based applications" (pg. 5, § 4), including a microservice-based case study where "Every service of the case study application resides in its own Docker container" (pg. 8, § 5.1.2, para. 3). Bifrost further controls testing and migration for new microservice versions with multi-stage release/rollout strategies, including divide a production request stream for the identified microservice into a first request stream processed using the existing data source and a second request stream processed using the new data source, in at least Bifrost pg. 2, § 2.2; pg. 3, § 2.3; pg. 8, col. 2, disclosing "a concrete rollout strategy may consist of initial dark launching" (route a sequence of microservices requests to each of the first container and the second container in the mirroring environment for contemporaneous processing) "followed, if successful, by a gradual rollout over a defined period of time" where requests/traffic is divided into respective request streams for processing between the stable (production environment) version and the new version (second microservice) under test (mirroring environment), and the new version begins to provide live responses (use processing by the second container as production processing for the second request stream).

Bifrost further employs threshold-based checks, analogous to Oliveira's validations (pg. 220, col. 2, para. 1-3; pg. 5; pg. 8, col. 2), between stages for "continuing the rollout strategy if the tested services behave as expected" (pg. 5, col. 2) (when contemporaneous processing of the second request stream by the first container and the second container continues to result in correlation exceeding the threshold level), and provides "Gradual Rollout… starting with 5% traffic up to 100%, increasing traffic 5% every 10 seconds, for 200 seconds duration in total" (pg. 7, col. 2) (after each of a plurality of time periods, increase a proportion of the production request stream allocated to the second request stream).

It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify Oliveira/Heap to utilize the Bifrost toolkit as it "allows developers to define and automatically enact complex live testing strategies" (pg. 1, Abstract), providing increased deployment flexibility and reduced development effort (Bifrost pg. 7, § 5; pg. 12), and to adopt microservice design principles because the "architectural concept has its advantages for the adoption of live testing methods" (pg. 2, § 2.1).

Oliveira/Heap/Bifrost do not specifically disclose wherein at least one of the time periods and the proportions is adapted based on a percentage of production requests meeting or exceeding a threshold level of criticality. Patil, however, discloses (¶0017-0020, 0027) various policies for dividing traffic between a stable (existing) service version, A/B test instances and/or Canaries, analogous to Bifrost and the claimed invention, where the proportion of requests routed to the different service instances is adapted based on a percentage of production requests meeting or exceeding a threshold level of criticality. Exemplary quotation: "Emergency calls should not pass through a Canary release, because a Canary release has a higher risk of failure than a version proven to be stable over a long test duration. In another example, transactions that are related to high priority users or services should not pass-through Canary releases" (¶0027).
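Taken together, Bifrost's gradual rollout (5% traffic steps up to 100%) and Patil's criticality policy (high-priority traffic must not pass through a canary) suggest a routing rule like the sketch below. This is an illustration of the combined teaching under assumed names and a dict-based request model, not code from any cited reference:

```python
import random

def rollout_fractions(step_pct=5):
    """Bifrost-style schedule: traffic to the new version starts at 5%
    and increases 5% per period until it reaches 100%."""
    return [pct / 100 for pct in range(step_pct, 101, step_pct)]

def route(request, new_version_fraction, criticality_threshold):
    """Return which stream handles this request. Requests meeting or
    exceeding the criticality threshold never reach the new version,
    mirroring Patil's rule that emergency/high-priority traffic must
    not pass through a canary release."""
    if request["criticality"] >= criticality_threshold:
        return "existing"
    return "new" if random.random() < new_version_fraction else "existing"
```

Walking the schedule while applying `route` per request reproduces the claimed behavior: the second stream's share grows each period, but the division adapts to how much of the traffic is above the criticality threshold, since that portion is pinned to the existing version.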
It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify Oliveira/Heap/Bifrost to divide traffic based on priority as taught by Patil to prevent more important requests from encountering errors (Patil ¶0027).

Claims 30 and 40:
The combination of Oliveira/Heap/Bifrost/Patil discloses the limitations as shown in the rejections above. Oliveira (pg. 220) in view of Heap (pg. 2-3) and Bifrost (pg. 2, § 2.2; pg. 8, col. 1) further discloses wherein the sequence of microservices requests routed to the first container (Heap baseline/control instance) and the second container in the mirroring environment is also routed to the identified microservice in the production environment for processing using the existing data source, under the same rationale described above for claims 29 and 39.

Claims 31, 32, and 34:
The combination of Oliveira/Heap/Bifrost/Patil discloses the limitations as shown in the rejections above. Furthermore, Bifrost discloses wherein the first request stream is at least ten times larger than the second request…wherein the first request stream comprises approximately ninety-five percent of the production request stream and the second request stream comprises approximately five percent of the production request stream, in at least the example in pg. 3, § 2.3: "gradually rolled out to more and more users, first to 5%, then 10%, 20%...".

Claim 33:
The combination of Oliveira/Heap/Bifrost/Patil discloses (¶0017-0020, 0027) the limitations as shown in the rejections above.
Furthermore, as described in the rejections to claims 29 and 39 above, Patil discloses wherein apportionment of the production request stream between the first request stream and the second request stream is adapted in response to at least one of an overall volume of production request traffic and the percentage of production requests meeting or exceeding the threshold level of criticality. See also Bifrost pg. 3, § 2.3: "gradually rolled out to more and more users, first to 5%, then 10%, 20%..."; percentage-based distributions adapt in response to the overall volume.

Claim 35:
The combination of Oliveira/Heap/Bifrost/Patil discloses the limitations as shown in the rejections above. Furthermore, Bifrost discloses in response to the proportion of the production request stream allocated to the second request stream reaching substantially one hundred percent, configuring the identified microservice in the production environment to process requests using the new data source; see at least pg. 8, #4; pg. 2, § 2.2: "The amount of users testing the newest feature or functionality is gradually increased (e.g., increase traffic to the new version in 5% steps) until the previous version is completely replaced."

Claim 36:
The combination of Oliveira/Heap/Bifrost/Patil discloses the limitations as shown in the rejections above. Furthermore, Oliveira discloses terminating the processing of requests for the identified microservice by the mirroring environment (pg. 223, col. 2; pg. 222, col. 2, para. 1-3). See also Bifrost (pg. 8, col. 2; pg. 5, col. 1).

Claim 37:
The combination of Oliveira/Heap/Bifrost/Patil discloses the limitations as shown in the rejections above.
Furthermore, Oliveira discloses wherein configuring the identified microservice in the production environment to process requests using the new data source is performed during a portion of a day having a request activity level below a threshold level ("low load") (pg. 223, col. 2).

Claim 42:
The combination of Oliveira/Heap/Bifrost/Patil discloses the limitations as shown in the rejections above. Furthermore, Heap (pg. 2-3) in view of Bifrost (pg. 2, § 2.2; pg. 8, col. 1) further discloses wherein the identified microservice (production instance/microservice) in the production environment is hosted in a container separate from the first container (Heap baseline/control instance) and the second container (shadow instance/container under test) in the mirroring environment, in view of Bifrost (dark/shadow testing for microservice containers), under the same rationale described above for claims 29 and 39.

Claim 43:
The combination of Oliveira/Heap/Bifrost/Patil discloses the limitations as shown in the rejections above. Furthermore, Bifrost discloses wherein the compute and memory resources configured to divide the production request stream comprise a request proxy configured to selectively route microservices requests between the production environment and the mirroring environment (pg. 6, col. 1; pg. 7, col. 1; pg. 8, col. 1).

Claim 44:
The combination of Oliveira/Heap/Bifrost/Patil discloses the limitations as shown in the rejections above. Furthermore, Bifrost discloses wherein using processing by the second container as production processing for the second request stream is conditioned on the level of correlation exceeding the threshold level for at least one complete time period of the plurality of time periods (pg. 8). See also Oliveira pg. 220, col. 2.

Claim 38 is rejected under 35 U.S.C. 103 as being unpatentable over Oliveira in view of Heap in further view of Bifrost in view of Patil in further view of Moniz et al. (US 9,836,388 B1).
Claim 38:
The combination of Oliveira/Heap/Bifrost/Patil discloses the limitations as shown in the rejections above. Oliveira's validation "uses a set of comparator functions, which compute whether some set of observations of the validation service match a set of criteria…a comparator function might determine if the streams of requests and replies…are similar enough to declare the masked database as working correctly" (pg. 220), including for example "a strict comparator, such as exact content matching" (pg. 223), and accordingly teaches wherein determining the level of correlation comprises evaluating output data fields associated with user experience or billing accuracy, but does not describe excluding data portions/fields from analysis and does not explicitly disclose a comparator for excluding output data fields not relevant to user experience or billing accuracy.

Moniz, however, discloses (col. 7, li. 16-36; col. 17, li. 28-57; FIG. 1) an analogous shadow/dark testing system (FIG. 1) with a comparator for determining the level of correlation between responses of a new candidate and an existing/authority service, evaluating output data fields associated with user experience or billing accuracy while excluding output data fields not relevant to user experience or billing accuracy. Exemplary quotations: "comparator module 208 may receive the candidate response 136 and authority response 138 and…compares the response 136 to the response…the comparator may tag or classify differences which are specified to be important or unacceptable to the functioning of the software system…comparator module may allow differences based on planned functionality changes in the candidate stack 114 to be suppressed (e.g. ignored)" (col. 7, li. 16-36); "select the fields of the response structure to make the comparison on as well as which fields to include in the request log report. For example, in some cases, the dashboard user knows that some fields will be changed due to a change in function or the fields may be randomly generated. In such a case, the user may wish to have one or more such fields excluded from the analysis (by not being analyzed or by continuing to analyze and store information about the field but excluding the field from reporting)…service 118 may provide the user with an interface to select or exclude fields of the requests and/or responses to be tested as the requests are being replayed" (col. 17, li. 28-57).

It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify the comparator of Oliveira/Heap/Bifrost/Patil with the field exclude/ignore capability of Moniz's comparator to increase the accuracy and usability of difference analysis by focusing on important differences (Moniz col. 17, li. 28-57; col. 7, li. 16-36).

Claim 41 is rejected under 35 U.S.C. 103 as being unpatentable over Oliveira in view of Heap in further view of Bifrost in view of Patil in further view of Beck et al. ("Simulation-based Evaluation of Resilience Antipatterns in Microservice Architectures").

Claim 41:
The combination of Oliveira/Heap/Bifrost/Patil discloses the limitations as shown in the rejections above. Oliveira further describes (pg. 222, col. 1; pg. 223, col. 1) trace-based validation, but the combination of Oliveira/Heap/Bifrost/Patil does not specifically disclose utilization of a tracing mechanism. Beck, however, discloses utilization of a tracing mechanism (Jaeger) to: (i) trace transactions of the identified microservice and generate tracing transaction data based thereon, and (ii) generate a visualization of the transactions of the identified microservice based on the tracing transaction data, the visualization enabling organization of the transactions of the identified microservice for debugging and optimization thereof (pg. 24-25; pg. 46-48).
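Stepping back to claim 38: the field-selective comparison Moniz describes (diff candidate vs. authority responses while suppressing fields known to vary, such as timestamps or randomly generated IDs) amounts to something like the following sketch. The dict-based response model and the function name are assumptions for illustration, not Moniz's actual implementation:

```python
def compare_fields(candidate, authority, excluded=()):
    """Field-by-field diff of two response dicts, skipping excluded
    fields. An empty result means the responses match on every
    relevant field."""
    diffs = {}
    for field in set(candidate) | set(authority):
        if field in excluded:
            continue  # suppress differences in fields known to vary
        if candidate.get(field) != authority.get(field):
            diffs[field] = (candidate.get(field), authority.get(field))
    return diffs
```

Excluding a request ID field, for example, lets two otherwise identical billing responses compare as equal, which is the claimed behavior of evaluating the relevant fields while ignoring the irrelevant ones.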
It would have been obvious to one of ordinary skill in the art prior to the filing date of the invention to modify Oliveira/Heap/Bifrost/Patil to employ the Jaeger microservice tracing tool disclosed by Beck because Jaeger "has a low overhead, supports dynamic sampling and is easily scalable…it allows for span tags, a benefit that will prove valuable in the developed architectural model extraction approach. And even more, Jaeger offers client libraries for six different programming languages, which means it can be used to instrument pretty much every microservice regardless of the employed technology stack and, therefore, most microservice applications can be instrumented with it in preparation for our architecture extraction approach. These advantages let to the decision to make Jaeger the tracing tool of our choice" (Beck pg. 46).

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Each of the following is directed to validating/testing new service versions: US 20210182247 A1; US 20210117310 A1; US 20200241864 A1. "Safe Velocity: A Practical Guide to Software Deployment at Scale Using Controlled Rollout" is directed to strategies for selecting the length of the rollout period.

Any inquiry of a general nature or relating to the status of this application or concerning this communication or earlier communications from the Examiner should be directed to Paul Mills, whose telephone number is 571-270-5482. The Examiner can normally be reached on Monday-Friday, 11:00am-8:00pm. If attempts to reach the examiner by telephone are unsuccessful, the Examiner's supervisor, April Blair, can be reached at 571-270-1014. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/P. M./
Paul Mills
02/18/2026
/APRIL Y BLAIR/
Supervisory Patent Examiner, Art Unit 2196

Prosecution Timeline

Jun 24, 2021
Application Filed
Feb 19, 2024
Non-Final Rejection — §103
May 23, 2024
Response Filed
Jun 03, 2024
Final Rejection — §103
Sep 09, 2024
Notice of Allowance
Sep 09, 2024
Response after Non-Final Action
Oct 29, 2024
Response after Non-Final Action
Dec 02, 2024
Request for Continued Examination
Dec 11, 2024
Response after Non-Final Action
Dec 30, 2024
Non-Final Rejection — §103
Apr 03, 2025
Response Filed
Jul 15, 2025
Final Rejection — §103
Oct 10, 2025
Notice of Allowance
Oct 10, 2025
Response after Non-Final Action
Nov 13, 2025
Response after Non-Final Action
Dec 17, 2025
Request for Continued Examination
Jan 02, 2026
Response after Non-Final Action
Feb 20, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572385
DYNAMIC SYSTEM POWER LOAD MANAGEMENT
2y 5m to grant Granted Mar 10, 2026
Patent 12547456
FLEXIBLE LIMITERS FOR MANAGING RESOURCE DISTRIBUTION
2y 5m to grant Granted Feb 10, 2026
Patent 12530215
PROCESSING SYSTEM, RELATED INTEGRATED CIRCUIT, DEVICE AND METHOD FOR CONTROLLING COMMUNICATION OVER A COMMUNICATION SYSTEM HAVING A PHYSICAL ADDRESS RANGE
2y 5m to grant Granted Jan 20, 2026
Patent 12519865
MULTIPLE MODEL INJECTION FOR A DEPLOYMENT CLUSTER
2y 5m to grant Granted Jan 06, 2026
Patent 12481522
USER-LEVEL THREADING FOR SIMULATING MULTI-CORE PROCESSOR
2y 5m to grant Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 53%
With Interview: 92% (+39.6%)
Median Time to Grant: 4y 2m
PTA Risk: High
Based on 351 resolved cases by this examiner. Grant probability derived from career allow rate.
