Prosecution Insights
Last updated: April 19, 2026
Application No. 18/201,098

FAILURE BEHAVIOR OF STRETCHED CLUSTERS

Non-Final OA §102

Filed: May 23, 2023
Examiner: GUSTAFSON, MATHEW DONALD
Art Unit: 2113
Tech Center: 2100 — Computer Architecture & Software
Assignee: VMware, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 100% (Favorable)
OA Rounds: 1-2
To Grant: 1y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (above average; 2 granted / 2 resolved; +45.0% vs TC avg)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Avg Prosecution: 1y 10m (fast prosecutor; 19 currently pending)
Total Applications: 21 (career history, across all art units)

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 48.2% (+8.2% vs TC avg)
§102: 35.8% (-4.2% vs TC avg)
§112: 1.2% (-38.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 2 resolved cases

Office Action

§102
Detailed Action

This action is in response to the application filed on 05/23/2023. Claims 1-20 are pending and have been fully examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-20 are rejected under 35 U.S.C. 102.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) and 35 U.S.C. 102(a)(2) as being anticipated by Konka et al. (U.S. Publication No. 2020/0026625 A1), hereinafter referred to as Konka.

Regarding Claim 1, Konka teaches:

A method comprising: detecting a virtual computing instance (VCI) operating on a first node in a first fault domain in a multi-fault domain storage cluster (Fig. 3, [0026]; regarding, “The nodes of cluster 102 can comprise various virtualized entities or VEs (e.g., user virtual machines or user VMs, virtualized disks or vDisks, virtualized network interface cards or vNICs, executable containers, etc.)”);

the first fault domain comprising the first node, a second fault domain comprising a second node, and a witness fault domain comprising a witness node (Fig. 3, [0045]; regarding, “the system 300 comprises a cluster 102 comprising two nodes (e.g., node 104.sub.1 and node 104.sub.2) that have multiple tiers of storage in a storage pool 106. Each node can be associated with one server, multiple servers, or portions of a server. The nodes can be associated (e.g., logically and/or physically) with the clusters.”; [0046]; regarding, “Furthermore, cluster 102 is associated with a failure domain 320.sub.1 whereas node 104.sub.3 is associated with a failure domain 320.sub.2…. The separate failure domains of cluster 102 and node 104.sub.3 indicate that a failure endemic to cluster 102 will not affect node 104.sub.3.”);

automatically registering the first fault domain as a preferred fault domain for the VCI ([0031]; regarding, “The aforementioned disaster recovery framework further comprises a protocol for accessing an arbiter to facilitate selection of a leader node that in turn performs DR operations in the event of a failure at the two-node cluster.”);

determining, at the second fault domain, whether a loss of communication over an inter-fault domain network link between the first fault domain and the second fault domain is due to a failure of the first fault domain or a failure of the inter-fault domain network link ([0040]; regarding, “Detecting one or more failures (e.g., failure event 202)… For example, failure event 202 might be associated with a node failure, an intra-cluster communication failure, an external communication failure, an arbiter (or witness VM) access failure, and/or any other failure and/or combination thereof.”);

and in response to the failure of the first fault domain: restarting, on the second node of the second fault domain, the VCI ([0064]; regarding, “a failover migration of a VM hosted on failed node 512 might be performed to bring up a replica VM 506.sub.11 at the remote cluster 502 so that user 504 can continue working.”); and automatically registering the second fault domain as the preferred fault domain for the VCI ([0040]; regarding, “In response to detecting the one or more failures, a protocol is invoked to communicate with the arbiter to establish (e.g., select) a leader node from one of the two nodes of the cluster (step 240). As earlier mentioned, an arbiter service managed by a witness VM might execute one or more atomic operations to select a leader node…”).

Regarding Claim 2, Konka teaches the method of Claim 1 as referenced above. Konka further teaches: further comprising sending, from the first fault domain and the second fault domain, heartbeat messages to the witness node ([0019]; regarding, “In any of the foregoing cases, upon detecting the failure, one or both nodes… invoke a protocol to communicate with a witness node…”; [0050]; regarding, “the controller VMs of the nodes in system 300 interact using external communications (e.g., external communications 334.sub.1 and external communications 334.sub.2) over external network 130.”); and wherein determining whether the loss of communication over the inter-fault domain network link between the first fault domain and the second fault domain is due to the failure of the first fault domain or the failure of the inter-fault domain network link further comprises receiving an indication from the witness node of whether the first fault domain has sent a heartbeat message to the witness node within a time period ([0019]; [0050]; as quoted above; [0032]; regarding, “node 104.sub.1 and node 104.sub.2 might both detect a failure (operation 2) in response to a heartbeat signal not being received at either node after a certain period of time (e.g., 10 seconds or five consecutive unsuccessful heartbeat pings).”).

Regarding Claim 3, Konka teaches the method of Claim 1 as referenced above. Konka further teaches: further comprising: detecting a failure of the inter-fault domain network link between the first fault domain and the second fault domain ([0032]; regarding, “node 104.sub.1 and node 104.sub.2 might both detect a failure (operation 2) in response to a heartbeat signal not being received at either node… This situation might occur when the intra-cluster communication between nodes of cluster 102 has failed.”); and maintaining the VCI at the second fault domain based on the second fault domain being registered as the preferred fault domain for the VCI ([0032]; regarding, “In this case, node 104.sub.1 and node 104.sub.2 will invoke the protocol with the arbiter 122 by issuing a respective leadership request over an external network 130 (e.g., the Internet) to the arbiter 122 (operation 3 and operation 4).”; [0033]; regarding, “Upon receiving the one or more leadership requests, an atomic operation 124 is executed at arbiter 122 to select a leader node (operation 5)… the leader node is selected based at least in part on at least one outcome of at least one atomic leadership election operation (e.g., atomic operation 124)”).

Regarding Claim 4, Konka teaches the method of Claim 1 as referenced above. Konka further teaches: further comprising: detecting a loss of communication over a second inter-fault domain network link between the second fault domain and the witness node ([0071]; regarding, “Other disaster recovery scenarios in two-node computing clusters as facilitated by the herein disclosed techniques are possible. For example, if the external communication between the witness VM and one of the nodes of the two-node cluster fails…”); and maintaining the VCI at the second fault domain based on the second fault domain being registered as the preferred fault domain for the VCI ([0071]; regarding, “if the external communication between the witness VM and one of the nodes of the two-node cluster fails, an alert is issued to the node unable to access the witness VM. In this case, the cluster is otherwise unaffected, and no user intervention is required.”).

Regarding Claim 5, Konka teaches the method of Claim 1 as referenced above. Konka further teaches: further comprising: detecting a loss of communication over a second inter-fault domain network link between the second fault domain and the witness node and a second occurrence of the loss of communication over the inter-fault domain network link between the first fault domain and the second fault domain ([0063]; regarding, “in node failure scenario 584 of FIG. 5A, a failure is detected at cluster 102 (operation C). As can be observed, the failure in node failure scenario 584 pertains to a failed node 512.”); restarting, in the first fault domain, the VCI ([0064]; regarding, “a failover migration of a VM hosted on failed node 512 might be performed to bring up a replica VM 506.sub.11 at the remote cluster 502 so that user 504 can continue working.”); and automatically registering the first fault domain as the preferred fault domain for the VCI ([0063]; regarding, “The active node 514 in cluster 102 issues a leadership request to the witness VM 120 in accordance with the herein disclosed techniques (operation D). The active node 514 is selected as the leader node (e.g., by an arbiter) by witness VM 120 (operation E) and a leadership lock is received at the active node 514 (operation F).”).

Regarding Claim 6, Konka teaches the method of Claim 1 as referenced above. Konka further teaches: further comprising: detecting the VCI being restarted at the first fault domain when the second fault domain is operational ([0066]; regarding, “In the case of an intra-cluster communication failure, the nodes at cluster 102 might not know the state of the other node so that each will issue a leadership request to witness VM 120 at remote cluster 502 (operation M).”; [0064]; regarding, “a failover migration of a VM hosted on failed node 512 might be performed to bring up a replica VM 506.sub.11 at the remote cluster 502 so that user 504 can continue working.”); and automatically registering the first fault domain as the preferred fault domain for the VCI ([0066]; regarding, “According to the herein disclosed techniques, a single leader node is selected at the witness VM 120 even in the presence of the multiple leadership requests (operation N). The selected leader node receives the leadership lock from the witness VM 120 (operation O).”).

Regarding Claim 7, Konka teaches the method of Claim 1 as referenced above. Konka further teaches: wherein the first fault domain comprises a first site, the second fault domain comprises a second site, and the witness fault domain comprises a third site (Fig. 3, [0045]; regarding, “the system 300 comprises a cluster 102 comprising two nodes (e.g., node 104.sub.1 and node 104.sub.2)… Each node can be associated with one server, multiple servers, or portions of a server. The nodes can be associated (e.g., logically and/or physically) with the clusters.”; [0031]; regarding, “The witness VM and its arbiter service can be hosted in a different cluster that is geographically proximal or geographically distal from cluster 102.”; [0046]; regarding, “cluster 102 is associated with a failure domain 320.sub.1 whereas node 104.sub.3 is associated with a failure domain 320.sub.2. As used herein, a failure domain or availability domain is a logical collection of hardware components (e.g., nodes, switches, racks, etc.) that are affected by failures within the collection. As an example, a failure domain might comprise a single physical node appliance or a rack of node appliances. The separate failure domains of cluster 102 and node 104.sub.3 indicate that a failure endemic to cluster 102 will not affect node 104.sub.3.”).

Regarding Claim 8, Konka teaches:

A multi-fault domain storage cluster comprising (Fig. 3): a first fault domain comprising a first node; a second fault domain comprising a second node; and a witness fault domain comprising a witness node (Fig. 3, [0045]; regarding, “the system 300 comprises a cluster 102 comprising two nodes (e.g., node 104.sub.1 and node 104.sub.2) that have multiple tiers of storage in a storage pool 106. Each node can be associated with one server, multiple servers, or portions of a server. The nodes can be associated (e.g., logically and/or physically) with the clusters.”; [0046]; regarding, “Furthermore, cluster 102 is associated with a failure domain 320.sub.1 whereas node 104.sub.3 is associated with a failure domain 320.sub.2…. The separate failure domains of cluster 102 and node 104.sub.3 indicate that a failure endemic to cluster 102 will not affect node 104.sub.3.”);

wherein the witness node is configured to: detect a virtual computing instance (VCI) operating on the first node ([0026]; regarding, “The nodes of cluster 102 can comprise various virtualized entities or VEs (e.g., user virtual machines or user VMs, virtualized disks or vDisks, virtualized network interface cards or vNICs, executable containers, etc.)”); automatically register the first fault domain as a preferred fault domain for the VCI; and in response to the VCI restarting on the second node, automatically register the second fault domain as the preferred fault domain for the VCI ([0031]; regarding, “The aforementioned disaster recovery framework further comprises a protocol for accessing an arbiter to facilitate selection of a leader node that in turn performs DR operations in the event of a failure at the two-node cluster.”);

and wherein the second node is configured to: determine whether a loss of communication over an inter-fault domain network link between the first fault domain and the second fault domain is due to a failure of the first fault domain or a failure of the inter-fault domain network link ([0040]; regarding, “Detecting one or more failures (e.g., failure event 202)… For example, failure event 202 might be associated with a node failure, an intra-cluster communication failure, an external communication failure, an arbiter (or witness VM) access failure, and/or any other failure and/or combination thereof.”); and in response to the failure of the first fault domain, restart the VCI on the second node ([0064]; regarding, “a failover migration of a VM hosted on failed node 512 might be performed to bring up a replica VM 506.sub.11 at the remote cluster 502 so that user 504 can continue working.”).

Claims 9-14 are rejected under 35 U.S.C. 102 under the same grounds of rejection as claims 2-7, respectively.

Regarding Claim 15, Konka teaches:

One or more non-transitory computer-readable media storing instructions that, when executed by processors of a multi-fault domain storage cluster, cause the processors to ([0088]): detect a virtual computing instance (VCI) operating on a first node in a first fault domain in the multi-fault domain storage cluster (Fig. 3, [0026]; regarding, “The nodes of cluster 102 can comprise various virtualized entities or VEs (e.g., user virtual machines or user VMs, virtualized disks or vDisks, virtualized network interface cards or vNICs, executable containers, etc.)”); the first fault domain comprising the first node, a second fault domain comprising a second node, and a witness fault domain comprising a witness node (Fig. 3, [0045]; regarding, “the system 300 comprises a cluster 102 comprising two nodes (e.g., node 104.sub.1 and node 104.sub.2) that have multiple tiers of storage in a storage pool 106. Each node can be associated with one server, multiple servers, or portions of a server. The nodes can be associated (e.g., logically and/or physically) with the clusters.”; [0046]; regarding, “Furthermore, cluster 102 is associated with a failure domain 320.sub.1 whereas node 104.sub.3 is associated with a failure domain 320.sub.2…. The separate failure domains of cluster 102 and node 104.sub.3 indicate that a failure endemic to cluster 102 will not affect node 104.sub.3.”); automatically register the first fault domain as a preferred fault domain for the VCI ([0031]; regarding, “The aforementioned disaster recovery framework further comprises a protocol for accessing an arbiter to facilitate selection of a leader node that in turn performs DR operations in the event of a failure at the two-node cluster.”); determine, at the second fault domain, whether a loss of communication over an inter-fault domain network link between the first fault domain and the second fault domain is due to a failure of the first fault domain or a failure of the inter-fault domain network link ([0040]; regarding, “Detecting one or more failures (e.g., failure event 202)… For example, failure event 202 might be associated with a node failure, an intra-cluster communication failure, an external communication failure, an arbiter (or witness VM) access failure, and/or any other failure and/or combination thereof.”); and in response to the failure of the first fault domain: restart, on the second node of the second fault domain, the VCI ([0064]; regarding, “a failover migration of a VM hosted on failed node 512 might be performed to bring up a replica VM 506.sub.11 at the remote cluster 502 so that user 504 can continue working.”); and automatically register the second fault domain as the preferred fault domain for the VCI ([0040]; regarding, “In response to detecting the one or more failures, a protocol is invoked to communicate with the arbiter to establish (e.g., select) a leader node from one of the two nodes of the cluster (step 240). As earlier mentioned, an arbiter service managed by a witness VM might execute one or more atomic operations to select a leader node…”).

Claims 16-20 are rejected under 35 U.S.C. 102 under the same grounds of rejection as claims 2-6, respectively.
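The claim mappings above describe a small state machine: each data fault domain heartbeats a witness, and on loss of the inter-domain link the surviving domain asks the witness whether the peer has heartbeated recently; only a missing heartbeat is treated as a domain failure, triggering a restart of the VCI and re-registration of the preferred fault domain. A minimal Python sketch of that logic follows; the names (WitnessNode, Cluster, on_link_loss) are hypothetical, and the 10-second window is taken from Konka's quoted [0032] as an illustrative value. This is a sketch of the claimed behavior, not the applicant's or Konka's actual implementation.

```python
from dataclasses import dataclass, field

# Illustrative timeout: Konka [0032] mentions "10 seconds or five
# consecutive unsuccessful heartbeat pings" as an example window.
HEARTBEAT_TIMEOUT = 10.0  # seconds

@dataclass
class WitnessNode:
    # Maps fault-domain name -> timestamp of its last heartbeat.
    last_heartbeat: dict = field(default_factory=dict)

    def record_heartbeat(self, domain: str, now: float) -> None:
        self.last_heartbeat[domain] = now

    def domain_alive(self, domain: str, now: float) -> bool:
        # A domain is presumed alive if it heartbeated within the window.
        ts = self.last_heartbeat.get(domain)
        return ts is not None and (now - ts) <= HEARTBEAT_TIMEOUT

@dataclass
class Cluster:
    witness: WitnessNode
    # Maps VCI name -> its currently registered preferred fault domain.
    preferred_domain: dict = field(default_factory=dict)

    def register_preferred(self, vci: str, domain: str) -> None:
        # "Automatically registering" a fault domain as preferred for the VCI.
        self.preferred_domain[vci] = domain

    def on_link_loss(self, vci: str, local: str, peer: str, now: float) -> str:
        # The surviving domain consults the witness: if the peer is still
        # heartbeating, only the inter-domain link failed and the VCI stays
        # put; otherwise the peer domain failed, so the VCI restarts locally
        # and the local domain becomes the new preferred fault domain.
        if self.witness.domain_alive(peer, now):
            return f"link failure: keep {vci} in {self.preferred_domain[vci]}"
        self.register_preferred(vci, local)
        return f"domain failure: restart {vci} in {local}"
```

In the claimed three-site arrangement the witness would sit in its own fault domain and the heartbeat and arbitration traffic would travel over an external network, as the quoted passages from Konka describe.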
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATHEW GUSTAFSON whose telephone number is (571) 272-5273. The examiner can normally be reached Monday-Friday 8:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bryce Bonzo, can be reached at (571) 272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.D.G./
Examiner, Art Unit 2113

/BRYCE P BONZO/
Supervisory Patent Examiner, Art Unit 2113

Prosecution Timeline

May 23, 2023: Application Filed
Sep 24, 2025: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572400: DATABASE SWITCHOVER IN A DISTRIBUTED DATABASE SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12461830: RESOURCE-AWARE WORKLOAD REALLOCATION ACROSS CLOUD ENVIRONMENTS
Granted Nov 04, 2025 (2y 5m to grant)

Patent 12332719: POWER SUPPLY REDUNDANCY CONTROL SYSTEM AND METHOD FOR GPU SERVER AND MEDIUM
Granted Jun 17, 2025 (2y 5m to grant)
Study what changed to get past this examiner, based on the 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 99% (+0.0%)
Median Time to Grant: 1y 10m
PTA Risk: Low

Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
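The note above explains the derivation: grant probability is taken from the examiner's career allow rate, and interview lift compares allow rates with and without an interview. A minimal sketch of that arithmetic, assuming grant probability simply equals the career allow rate and that lift is the allow-rate gap between interviewed and non-interviewed resolved cases; the function and field names are hypothetical, not this dashboard's actual code:

```python
# Hypothetical sketch of the projection arithmetic described above.
# Assumptions: grant probability = career allow rate; interview lift =
# allow-rate difference between resolved cases with and without an interview.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage (0.0 when nothing is resolved yet)."""
    return 100.0 * granted / resolved if resolved else 0.0

def interview_lift(cases: list[dict]) -> float:
    """Allow-rate gap between interviewed and non-interviewed resolved cases."""
    def rate(group: list[dict]) -> float:
        return allow_rate(sum(c["granted"] for c in group), len(group))
    with_iv = [c for c in cases if c["interview"]]
    without_iv = [c for c in cases if not c["interview"]]
    return rate(with_iv) - rate(without_iv)

# For this examiner, 2 granted of 2 resolved gives a 100.0% rate, which is
# why a 2-case sample produces the dashboard's 100% grant probability.
grant_probability = allow_rate(granted=2, resolved=2)
```

With only 2 resolved cases, both figures carry wide error bars; the sketch makes clear why a perfect allow rate and a +0.0% lift can coexist in such a small sample.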
