Prosecution Insights
Last updated: April 19, 2026
Application No. 18/658,179

METHODS AND APPARATUSES FOR MANAGING MULTI-ZONE DATA CENTER FAILURES

Non-Final OA (§102, §103)

Filed: May 08, 2024
Examiner: LI, ALBERT
Art Unit: 2113
Tech Center: 2100 — Computer Architecture & Software
Assignee: Naver Corporation
OA Round: 3 (Non-Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87%, above average (48 granted / 55 resolved; +32.3% vs TC avg)
Interview Lift: +19.3%, a strong lift, measured over resolved cases with an interview
Avg Prosecution: 2y 1m, a fast prosecutor (14 applications currently pending)
Career History: 69 total applications across all art units

Statute-Specific Performance

§101: 15.4% (-24.6% vs TC avg)
§103: 42.8% (+2.8% vs TC avg)
§102: 20.4% (-19.6% vs TC avg)
§112: 19.7% (-20.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 55 resolved cases.

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 8-10, 13, and 16-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US Patent Application Publication No. 20060031594 ("Kodama").

Regarding claim 1, Kodama teaches a method for managing data center failures, the method being executed by one or more processors of a leader management node (Fig. 5, [0047], [0048], [0131]: a virtual volume module configures the replication of data storage systems), the method comprising:

allocating a first data node among a first plurality of data nodes as a master data node, the first plurality of data nodes being in a first data center (Fig. 5, [0048], [0133], [0135]: storage system A of storage systems in a first data center is designated as a primary storage system);

allocating one or more second data nodes among the first plurality of data nodes as one or more first backup data nodes (Fig. 5, [0048], [0133], [0135]: storage system B of storage systems in a first data center is designated as a secondary storage system);

allocating one among a second plurality of data nodes as a second backup data node, the second plurality of data nodes being in a second data center, and the first data center and the second data center being located in different regions (Fig. 5, [0048], [0133], [0136]: storage system D of storage systems in a second data center at a different location is designated as a secondary storage system); and

setting a data replication mode between the master data node, the one or more first backup data nodes and the second backup data node, wherein the data replication mode is set to either a first mode or a second mode (Fig. 5, [0047], [0048], [0135], [0136]: configuring replication between storage system A and storage system B to be synchronous and replication between storage system A and storage system D to be synchronous or asynchronous),

wherein the data replication between the master data node and all among the one or more first backup data nodes is performed in a synchronous replication mode, and the data replication between the master data node and the second backup data node is performed in a synchronous replication mode or an asynchronous replication mode according to the first mode or the second mode (Fig. 5, [0047], [0048], [0135], [0136]).

Regarding claim 2, Kodama further teaches wherein the first mode prioritizes data stability over minimization of latency when providing a service in response to occurrence of a failure ([0124]: synchronous replication prioritizes data stability over IO performance in response to a failure).

Regarding claim 8, Kodama further teaches wherein the second mode prioritizes minimization of latency in providing a service over data stability in response to occurrence of a failure ([0124]: asynchronous replication prioritizes IO performance over data stability in response to a failure).

Regarding claim 9, Kodama further teaches wherein the setting comprises setting the data replication mode as the second mode, including: setting the one or more first backup data nodes as one or more slave type backup data nodes (Fig. 5, [0047], [0048], [0135]: configuring replication between storage system A and storage system B to be synchronous); and setting the second backup data node as a learner type backup data node (Fig. 5, [0047], [0048], [0136]: configuring replication between storage system A and storage system D to be asynchronous).

Regarding claim 10, Kodama further teaches wherein data replication between the learner type backup data node and the master data node is performed based on asynchronous replication (Fig. 5, [0047], [0048], [0136]).

Regarding claim 13, Kodama further teaches automatically changing one of the one or more first backup data nodes to a new master data node without administrator input in response to determining that the master data node has failed while the data replication mode is set as the second mode ([0039], [0140]: the virtual volume module triggers failover from storage system A to storage system B).

Regarding claim 16, Kodama further teaches wherein the leader management node is connected to one or more backup management nodes, the leader management node and the one or more backup management nodes being located in different data centers ([0133], [0135]: a standby virtual volume module in the data center at the other location).

Regarding claim 17, Kodama further teaches wherein one of the one or more backup management nodes is changed to a new leader management node in response to occurrence of a failure in the leader management node ([0142]: failover from the active virtual volume module to the standby virtual volume module).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3, 5-6, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 20060031594 ("Kodama") in view of US Patent Application Publication No. 20160203202 ("Merriman").

Regarding claim 3, Kodama further teaches wherein the setting comprises setting the data replication mode as the first mode, including: setting the one or more first backup data nodes as one or more first slave type backup data nodes; and setting the second backup data node as a second slave type backup data node, data replication between each of the one or more first slave type backup data nodes and the second slave type backup data node with the master data node being performed based on synchronous replication (Fig. 5, [0047], [0048], [0135], [0136]: configuring replication between storage system A and storage system B to be synchronous and replication between storage system A and storage system D to be synchronous).

Kodama does not further teach the remaining limitations. Merriman teaches each of the one or more first slave type backup data nodes and the second slave type backup data node being included as a new master candidate for a master election performed in response to failure of the master data node ([0049]: master election in response to a failed master includes slaves in the same data center as the failed master and slaves in data centers in different geographic locations). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Merriman's election with Kodama's failover. One of ordinary skill in the art prior to the effective filing date would have been motivated to make the combination to automatically select the best candidate for failover (Merriman, [0048]).

Regarding claim 5, Kodama in view of Merriman further teaches automatically changing one of the one or more first slave type backup data nodes or the second slave type backup data node to a new master data node without an administrator input in response to determining that the master data node has failed while the data replication mode is set as the first mode (Kodama, [0039], [0140]: the virtual volume module triggers failover from storage system A to storage system B).

Regarding claim 6, Kodama in view of Merriman further teaches wherein the automatically changing includes assigning priority to the one or more first backup data nodes in the master election in response to determining that the one or more first backup data nodes are set as the first slave type backup data nodes located in the same data center as the master data node, and the second backup data node is set as the second slave type backup data node located in a data center different from the master data node (Merriman, [0049]: slaves in the same data center as the failed master are given priority during election over slaves in different geographic locations).

Regarding claim 19, Kodama does not further teach the remaining limitations. Merriman teaches a non-transitory computer-readable recording medium storing instructions that, when executed by one or more processors, cause performance of the method according to claim 1 ([0148]-[0150]). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Merriman's medium with Kodama's method. One of ordinary skill in the art prior to the effective filing date would have been motivated to make the combination because software-implemented methods can be stored in memory (Merriman, [0148]-[0150]).

Regarding claim 20, the management node that implements the method of claim 1 is rejected on the same grounds as claim 1.
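The two-mode topology recited in claims 1, 3, and 9-10 (local backups always synchronous; the remote backup either a synchronous slave or an asynchronous learner, depending on the selected mode) can be sketched in a few lines. The sketch below is purely illustrative: the names (`ReplicationMode`, `DataNode`, `configure_cluster`) are invented, and nothing in it comes from the application or the cited references.

```python
from dataclasses import dataclass
from enum import Enum

class ReplicationMode(Enum):
    FIRST = "first"    # claim 2: prioritize data stability
    SECOND = "second"  # claim 8: prioritize low latency

@dataclass
class DataNode:
    name: str
    data_center: str
    role: str = "backup"  # becomes "master", "slave", or "learner"

def configure_cluster(local_nodes, remote_nodes, mode):
    """Allocate a master, local backups, and one remote backup (claim 1)."""
    master, *first_backups = local_nodes
    master.role = "master"
    for node in first_backups:
        node.role = "slave"  # local backups always replicate synchronously
    second_backup = remote_nodes[0]
    # Claim 3 vs claims 9-10: the remote backup is a synchronous "slave"
    # in the first mode and an asynchronous "learner" in the second mode.
    second_backup.role = "slave" if mode is ReplicationMode.FIRST else "learner"
    return master, first_backups, second_backup

# Example: one master and one backup in dc1, one remote backup in dc2.
master, local_backups, remote_backup = configure_cluster(
    [DataNode("a", "dc1"), DataNode("b", "dc1")],
    [DataNode("d", "dc2")],
    ReplicationMode.SECOND,
)
```

Under `ReplicationMode.FIRST` the same call would mark the remote node as a slave, making it electable per claim 3; as a learner it would be excluded from elections per claim 12.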
Kodama in view of Merriman further teaches a management node comprising: a memory storing one or more computer-readable programs; and one or more processors connected to the memory and configured to execute the one or more computer-readable programs to cause the management node to … (Kodama, Fig. 5, [0047], [0048]: a virtual volume module; Merriman, [0148]-[0150]: a system comprising one or more processors and a non-transitory computer-readable recording medium storing instructions that, when executed by one or more processors, cause performance of the methods).

Claims 4 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 20060031594 ("Kodama") in view of US Patent Application Publication No. 20160203202 ("Merriman") and US Patent No. 11061603 ("Bankar").

Regarding claim 4, Kodama in view of Merriman further teaches wherein the synchronous replication includes: transmitting, by the master data node, a replication…to both the first backup data node and the second backup data node in response to receiving a request (Kodama, [0135], [0136]: in response to a request, replication between storage system A and storage system B is synchronous and replication between storage system A and storage system D is synchronous); receiving, by the master data node, an acknowledgement (ACK) from each of the first backup data node and the second backup data node (Kodama, [0125], [0135], [0136]: in synchronous replication, a response from the replication destination is required); and outputting, by the master data node, a response associated with the request in response to receiving the ACK from each of the first backup data node and the second backup data node (Kodama, [0125], [0135], [0136]: in synchronous replication, a response from the replication destination is required to respond to the host).

Kodama in view of Merriman does not teach a replication log. Bankar teaches a replication log (Col. 8, Lines 55-67, Col. 9, Lines 1-30). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Bankar's replication log with Kodama's replication. One of ordinary skill in the art prior to the effective filing date would have been motivated to make the combination to ensure that data is properly recovered after a disaster (Bankar, Col. 8, Lines 55-67, Col. 9, Lines 1-30).

Regarding claim 7, Kodama in view of Merriman further teaches: receiving an input for changing the data replication mode…to the second mode; and setting the data replication mode to be the second mode in response to the receiving of the input (Kodama, [0047], [0048], [0124]: users can select between synchronous and asynchronous replication). Kodama in view of Merriman does not teach changing the data replication mode from the first mode to the second mode. Bankar teaches changing the data replication mode from the first mode to the second mode (Col. 12, Lines 55-67: switch from synchronous replication to asynchronous replication). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Bankar's replication mode switch with Kodama's replication. One of ordinary skill in the art prior to the effective filing date would have been motivated to make the combination to ensure that service requirements are met (Bankar, Col. 13, Lines 1-20).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 20060031594 ("Kodama") in view of US Patent Application Publication No. 20160203202 ("Merriman").

Regarding claim 11, Kodama further teaches wherein the asynchronous replication includes: transmitting, by the master data node, a replication…to the first backup data node and the second backup data node in response to receiving a request ([0135], [0136]: in response to a request, replication between storage system A and storage system B is synchronous and replication between storage system A and storage system D is asynchronous); receiving, by the master data node, an acknowledgement (ACK) from the first backup data node ([0125], [0135]: in synchronous replication, a response from the replication destination is required); and outputting, by the master data node, a response associated with the request in response to the receiving of the ACK regardless of whether an ACK is received from the second backup data node ([0125], [0135], [0136]: in synchronous replication, a response from the replication destination is required, and in asynchronous replication, a response from the replication destination is not required to respond to the host).

Kodama does not teach a replication log. Bankar teaches a replication log (Col. 8, Lines 55-67, Col. 9, Lines 1-30). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Bankar's replication log with Kodama's replication. One of ordinary skill in the art prior to the effective filing date would have been motivated to make the combination to ensure that data is properly recovered after a disaster (Bankar, Col. 8, Lines 55-67, Col. 9, Lines 1-30).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 20060031594 ("Kodama") in view of US Patent Application Publication No. 20220206900 ("Zad Tootaghaj"). Regarding claim 12, Kodama does not further teach the remaining limitations.
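Claims 4 and 11 differ only in when the master may answer the host: after ACKs from every backup (synchronous, claim 4) or after the local backup's ACK alone (asynchronous, claim 11). Below is a hypothetical sketch of that decision rule, with the backups modeled as callables that return True on ACK; all names are invented for illustration and do not come from the application or the references.

```python
def replicate(entry, first_backup, second_backup, synchronous):
    """Return True once the master may respond to the client's request.

    first_backup / second_backup stand in for the local and remote
    backup nodes; each returns True when it has ACKed the entry.
    """
    ack_local = first_backup(entry)    # local backup always replicates synchronously
    ack_remote = second_backup(entry)  # remote backup; its ACK may lag in async mode
    if synchronous:
        # Claim 4: respond only after an ACK from each backup.
        return ack_local and ack_remote
    # Claim 11: respond once the local backup ACKs, regardless of
    # whether the remote (learner) backup has acknowledged yet.
    return ack_local
```

With a lagging remote backup, the synchronous rule blocks the response while the asynchronous rule does not, which is exactly the stability-versus-latency trade-off of claims 2 and 8.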
Zad Tootaghaj teaches wherein the learner type backup data node is excluded as a new master candidate for a master election that is performed in response to a failure of the master data node (Fig. 1, [0042], [0053], [0077]: excluding a node in a non-local location from consideration in leader election). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Zad Tootaghaj's location-aware election with Kodama's failover. One of ordinary skill in the art prior to the effective filing date would have been motivated to make the combination to reduce latency between the leader and other nodes (Zad Tootaghaj, [0079]).

Claims 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 20060031594 ("Kodama") in view of US Patent Application Publication No. 20240160539 ("Khatri").

Regarding claim 14, Kodama does not further teach the remaining limitations. Khatri teaches transmitting a message querying whether or not to change the second backup data node to a new master data node in response to determining that the master data node and the one or more first backup data nodes have failed while the data replication mode is set as the second mode (Fig. 1, [0019], [0023]: after a failure of a first region that includes a primary node and secondary nodes, vote on whether one of the secondary nodes in a second region should become the primary database). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Khatri's cross-regional failover with Kodama's cross-regional replication. One of ordinary skill in the art prior to the effective filing date would have been motivated to make the combination to provide streamlined cross-regional failover (Khatri, [0014]).
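Read together, claims 6 and 12 describe a location-aware election policy: learner type nodes are excluded outright, and slaves in the failed master's data center outrank slaves elsewhere. The following hypothetical sketch mirrors that claim language only (the function name and dictionary fields are invented; it is not any reference's disclosure).

```python
def elect_new_master(candidates, failed_master_dc):
    """Pick a new master after the current master fails."""
    # Claim 12: a learner type backup data node is excluded as a
    # new-master candidate in the election.
    eligible = [c for c in candidates if c["role"] != "learner"]
    if not eligible:
        return None  # no eligible slave remains; election cannot proceed
    # Claim 6: slaves in the failed master's data center are given
    # priority over slaves located in a different data center.
    # (False sorts before True, so same-DC candidates come first.)
    eligible.sort(key=lambda c: c["dc"] != failed_master_dc)
    return eligible[0]["name"]

nodes = [
    {"name": "d", "role": "learner", "dc": "dc2"},  # remote learner
    {"name": "b", "role": "slave", "dc": "dc1"},    # local slave
]
```

When only remote learners remain, the function returns None, which corresponds to the fallback of claim 14: querying whether to promote the remote backup rather than electing it automatically.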
Regarding claim 15, Kodama in view of Khatri further teaches changing the second backup data node to the new master data node in response to receiving a corresponding request (Khatri, [0021], [0023]: change votes for each node such that a new primary node is elected that resides in a different region than the failing region).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 20060031594 ("Kodama") in view of US Patent Application Publication No. 20220207053 ("Mankad").

Regarding claim 18, Kodama does not further teach the remaining limitations. Mankad teaches wherein the leader management node and the one or more backup management nodes are synchronized using a Raft protocol (Mankad, [0084]: etcd consensus for administration database state transitions). etcd uses the Raft algorithm for consensus, and Mankad therefore inherently discloses using a Raft protocol; see pg. 1 of the Non-Patent Literature etcd README. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine Mankad's redundant administration consensus with Kodama's redundant management. One of ordinary skill in the art prior to the effective filing date would have been motivated to make the combination because etcd ensures consistency between management nodes (Mankad, [0084]).

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALBERT LI, whose telephone number is (571) 272-5721. The examiner can normally be reached M-F 8:00AM-4:00PM PT.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bryce Bonzo, can be reached at (571) 272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.L./ Examiner, Art Unit 2113
/PHILIP GUYTON/ Primary Examiner, Art Unit 2113

Prosecution Timeline

May 08, 2024: Application Filed
Aug 12, 2025: Non-Final Rejection (§102, §103)
Nov 13, 2025: Response Filed
Dec 01, 2025: Final Rejection (§102, §103)
Jan 29, 2026: Examiner Interview Summary
Jan 29, 2026: Applicant Interview (Telephonic)
Mar 03, 2026: Request for Continued Examination
Mar 12, 2026: Response after Non-Final Action
Mar 23, 2026: Non-Final Rejection (§102, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596614: RESET TECHNIQUES FOR PROTOCOL LAYERS OF A MEMORY SYSTEM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12572422: Delayed Log Write of Input/Outputs Using Persistent Memory (granted Mar 10, 2026; 2y 5m to grant)
Patent 12530256: SYSTEMS AND METHODS FOR IN-SYSTEM DETECTION AND RECOVERY OF A BIT CORRUPTION EVENT (granted Jan 20, 2026; 2y 5m to grant)
Patent 12511204: METHOD AND SYSTEM FOR MANAGING GEO-REDUNDANT CLOUD SERVERS IN COMMUNICATION SYSTEMS (granted Dec 30, 2025; 2y 5m to grant)
Patent 12511206: METHOD AND APPARATUS FOR PROCESSING STORAGE MEDIUM FAILURE AND SOLID STATE DRIVE (granted Dec 30, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87% (99% with interview; +19.3% lift)
Median Time to Grant: 2y 1m
PTA Risk: High
Based on 55 resolved cases by this examiner. Grant probability derived from career allow rate.
