Prosecution Insights
Last updated: April 19, 2026
Application No. 18/399,497

SOFTWARE BASED VALIDATION OF MEMORY FOR FAULT TOLERANT SYSTEMS

Status: Non-Final Office Action (§103), Round 2
Filed: Dec 28, 2023
Examiner: XU, MICHAEL
Art Unit: 2113
Tech Center: 2100 — Computer Architecture & Software
Assignee: Stratus Technologies Ireland Ltd.

Predictions: 77% grant probability (favorable) • 2-3 expected OA rounds • 2y 8m to grant • 99% grant probability with interview

Examiner Intelligence

Career allow rate: 77%, above average (95 granted / 124 resolved; +21.6% vs TC avg)
Interview lift: +23.0% across resolved cases with interview
Typical timeline: 2y 8m avg prosecution; 18 currently pending
Career history: 142 total applications across all art units

Statute-Specific Performance

§101: 17.9% (-22.1% vs TC avg)
§103: 57.0% (+17.0% vs TC avg)
§102: 13.7% (-26.3% vs TC avg)
§112: 1.7% (-38.3% vs TC avg)

Deltas are relative to a Tech Center average estimate; based on career data from 124 resolved cases.

Office Action (§103)

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-3, 5-7, 13-14, 16, 18, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20210342232 A1 (Gopalan) and further in view of US 20200050523 A1 (Pawlowski).

Regarding claim 1, Gopalan teaches, A computer comprising: a network device; (fig 1:Host 1-9; par 66 “As shown in FIG. 1, a typical cluster consists of multiple racks of physical machines. Machines within a rack are connected to a top-of-the-rack (TOR) switch.”) a storage device; (fig 1:network storage; par 36 “The plurality of servers may host a plurality of virtual machines, each virtual machine having an associated memory space comprising memory pages.
At least one virtual machine may use network attach storage.” ) and at least two compute nodes, wherein one compute node is designated an active node and the other compute node is designated a standby node,(par 27 “A distributed duplicate tracking phase identifies and tracks identical memory content across VMs running on same/different physical machines in a cluster, including nonmigrating VMs running on the target machines.” Gopalan does source to target live migrations, and does not seem to differentiate between active and standby.) each compute node comprising a dedicated memory with a per-node controller (par 51 “wherein each of the source server rack comprises a deduplication server which determines a hash of each memory page in the respective source server rack, storing the hashes of the memory pages in a hash table along with a list of duplicate pages, and controls a deduplicating of the memory pages or sub-pages within the source server rack before the transferring to the server rack.”; par 84 “Per-node controllers are responsible for managing the deduplication of outgoing and incoming VMs.
We call the controller component managing the outgoing VMs as the source side and the component managing the incoming VMs as the target side.”; par 73 “In each machine, a per-node controller process coordinates the tracking of identical pages among all VMs in the machine”), an operating system memory, and a firmware reserved memory(par 36 “The plurality of servers may host a plurality of virtual machines, each virtual machine having an associated memory space comprising memory pages.”; par 73 “The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM' s memory image, perform content based hashing and record identical pages.”) and the active node per-node controller further comprises an availability driver; (par 51 “wherein each of the source server rack comprises a deduplication server which determines a hash of each memory page in the respective source server rack, storing the hashes of the memory pages in a hash table along with a list of duplicate pages, and controls a deduplicating of the memory pages or sub-pages within the source server rack before the transferring to the server rack.”; par 78 “Per-node controllers perform this forwarding of identical pages among hosts in the target rack.” par 69 “The pre-copy [5] VM migration technique transfers the memory of a running VM over the network by performing iterative passes over its memory. Each successive round transfers the pages that were dirtied by the VM in the previous iteration. Such iterations are carried out until a very small number of dirty pages are left to be transferred. Given the throughput of the network, if the time required to transfer the remaining pages is smaller than a pre-determined threshold, the VM is paused and its CPU state and the remaining dirty pages are transferred. 
Upon completion of this final phase, the VM is resumed at the target.”) wherein the availability driver of the active node disables or suspends all processes that alter operating system memory and transfers all operation system memory of the active node to the operating system memory of the standby node;( par 84 “Per-node controllers are responsible for managing the deduplication of outgoing and incoming VMs. We call the controller component managing the outgoing VMs as the source side and the component managing the incoming VMs as the target side.”; par 69 “The pre-copy [5] VM migration technique transfers the memory of a running VM over the network by performing iterative passes over its memory. Each successive round transfers the pages that were dirtied by the VM in the previous iteration. Such iterations are carried out until a very small number of dirty pages are left to be transferred. Given the throughput of the network, if the time required to transfer the remaining pages is smaller than a pre-determined threshold, the VM is paused and its CPU state and the remaining dirty pages are transferred. Upon completion of this final phase, the VM is resumed at the target.”) the isolated per-node controller of the active node executes a code to generate an active validation array set of all operating system memory, (par 72 “We use content hashing to detect identical pages. The pages having the same content yield the same hash value.”; par 73 “The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM' s memory image, perform content based hashing and record identical pages.”) the active validation array is then transferred to the standby node. (par 78 “The deduplication server at the target rack monitors the pages within hosted VMs and synchronizes this information with other deduplication servers. 
Per-node controllers perform this forwarding of identical pages among hosts in the target rack.”) However, Gopalan does not specifically teach an availability driver, an isolated utility executive, or each compute node comprising a dedicated memory …, an operating system memory, and a firmware reserved memory. On the other hand, Pawlowski teaches, A computer comprising: a network device; a storage device; (fig 1; par 23 “In brief overview, a high reliability fault tolerant computer 10 constructed in accordance with the disclosure includes, in one embodiment, a plurality of CPU nodes (generally 14) interconnected to at least two IO domains (generally 26) through a mesh fabric network 30 as shown in FIG. 1. At least one of the nodes 14C of the plurality of nodes 14 is a standby node and does not execute applications unless one of the other CPU nodes 14, 14A, 14B either begins to fail or actually does fail. When a failure occurs, standby CPU node 14C acquires the state of the failing CPU node (for example, CPU node 14) and continues to execute the applications that were executing on the failing CPC node 14.”) and at least two compute nodes, wherein one compute node is designated an active node and the other compute node is designated a standby node, (par 13 “In one embodiment, the fault tolerant computer system includes a plurality of CPU nodes, each CPU node including a processor and a memory, wherein one of the CPU nodes is designated as a standby CPU node and the remainder are designated as active CPU nodes”) each compute node comprising a dedicated memory with an isolated utility executive, (fig 5A:522 “FT Kernel mode driver”; par 60 “The FT Management Layer 536 causes the FT Kernel Mode Driver (FT Driver) 522 to begin processing a command to enter Mirrored Execution. 
The FT Kernel Mode Driver 522 loads or writes the program and data code of the FT Virtual Machine Monitor (FTVMM) code 580, the FTVMM data 584, the SLAT LO 588, and the VMCS-L0 Array 592 into the Reserved Memory Region.”; par 70 “In more detail during blackout, the FT driver executes driver code on all processors on the active but failing CPU 14 concurrently and copies the final set of dirtied pages to the Standby CPU 14C. The FT Driver causes all processors on CPU 14 to disable system interrupt processing on each processor so as to prevent other programs in the Fault tolerant computer system from generating more Dirty Page Bits.”; par 71 “In one embodiment, the FT Driver then copies the set of physical memory pages that are identified in the Dirty Page Bit Map into the corresponding physical memory addresses in the Second Subsystem.”;) an operating system memory,(par 44 “The OS bootloader loads the OS image into memory and begins OS execution.”) and a firmware reserved memory(fig 5A:504,508; par 57 “Referring to FIG. 5A, in normal, non-mirrored, operation, the layers in the fault tolerant computer system include …; a server firmware layer 504 including the system Universal Extensible Firmware Interface (UEFI) BIOS 508; and a zero layer reserved memory region 512 … . The zero layer reserved memory 512 is reserved by the BIOS 508 at boot time. Although most of the memory of the fault tolerant computer system is available for use by the Operating System and software, the reserved memory 512 is not.”) and the active node operating system memory further comprises an availability driver;(fig 5A:536; par 59 “Non-virtualized software components 534 include an FT Management Layer 536. Each Virtual Machine Guest (VM) includes a VM Guest Operating System (VM OS) 542, 542A, and a SLAT table associated with the VM (SLAT L2) 546, 546A. 
Also included in each VM 538, 538A is one or more Virtual Machine Control Structures associated with the VM (VMCS-N), generally 550, 550A, one for each of the virtual processors 0-N that are allocated to that VM.”) wherein the availability driver of the active node disables or suspends all processes that alter operating system memory and transfers all operation system memory of the active node to the operating system memory of the standby node; (par 70 “In more detail during blackout, the FT driver executes driver code on all processors on the active but failing CPU 14 concurrently and copies the final set of dirtied pages to the Standby CPU 14C. The FT Driver causes all processors on CPU 14 to disable system interrupt processing on each processor so as to prevent other programs in the Fault tolerant computer system from generating more Dirty Page Bits.”; par 71 “In one embodiment, the FT Driver then copies the set of physical memory pages that are identified in the Dirty Page Bit Map into the corresponding physical memory addresses in the Second Subsystem.”) the CPU and primary management processor of the active node executes a code to transfer resource mapping to the standby node(par 74 “To complete the failover, once all steps up to this point have been completed successfully, the active but failing CPU sends a command to the Primary Management Processor (which will coordinate with the Secondary Management Processors and handle any error cases in this step) to swap all of the resource mapping (Step 350) between the host ports for the two CPU nodes 14, 14A which are participating in the failover operation.”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Gopalan to incorporate the parallel processing of Pawlowski. 
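As context for the claim 1 mapping above, the claimed "validation array" (a per-page content hash of operating system memory, generated on the active node and verified on the standby node) can be sketched in a few lines. This is an editorial illustration only, not code from either reference or from the application; the function names are invented, and the choice of SHA-1 is an assumption suggested by the 160-bit hash values Gopalan's par 74 describes.

```python
import hashlib

PAGE_SIZE = 4096  # typical x86 page size; an assumption for this sketch


def validation_array(memory: bytes, page_size: int = PAGE_SIZE) -> list[bytes]:
    """Hash every page of a memory image (content hashing, cf. Gopalan par 72-73)."""
    return [
        hashlib.sha1(memory[off:off + page_size]).digest()  # 160-bit per-page hash
        for off in range(0, len(memory), page_size)
    ]


def verify(active: list[bytes], standby: list[bytes]) -> list[int]:
    """Return indices of pages whose hashes differ between the two nodes."""
    return [i for i, (a, s) in enumerate(zip(active, standby)) if a != s]


# The active node hashes its OS memory and sends only the hash array to the
# standby node, which hashes its own copy and compares.
active_mem = bytes(PAGE_SIZE * 4)          # four identical zeroed pages
standby_mem = bytearray(active_mem)
standby_mem[PAGE_SIZE] ^= 0xFF             # corrupt one byte inside page 1

mismatches = verify(validation_array(active_mem),
                    validation_array(bytes(standby_mem)))
print(mismatches)  # → [1]: the page that would need re-copying before failover
```

In Gopalan, matching hashes drive deduplication during gang migration; in the claims at issue, the same style of comparison validates that standby memory matches active memory before the transfer completes, which is the distinction the applicant is likely to argue.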
One of ordinary skill in the art would have been motivated to remedy the shortcomings of Gopalan -- a need for how to handle copying state and memory from active to standby node -- with Pawlowski providing a known method to solve a similar problem. Pawlowski provides “In another embodiment, if one of: a failure, a beginning of a failure and a predicted failure occurs in an active node, the state and memory of the active CPU node is transferred through the switching fabric to the standby CPU node and the standby CPU node becomes the new active node, taking over for the previously failing node.”(Pawlowski par 9)

Regarding claim 2, Gopalan and Pawlowski teaches, The computer system of claim 1, Gopalan further teaches, wherein the isolated utility executive of the standby node executes a code to generate a standby validation array set of all operating system memory that is verified against the active validation array. (par 72 “We use content hashing to detect identical pages. The pages having the same content yield the same hash value.”; par 73 “The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM's memory image, perform content based hashing and record identical pages.”)

Regarding claim 3, Gopalan teaches, The computer system of claim 2, Gopalan further teaches, wherein the isolated utility executive of the standby node(par 78 “The deduplication server at the target rack monitors the pages within hosted VMs and synchronizes this information with other deduplication servers. Per-node controllers perform this forwarding of identical pages among hosts in the target rack.”) signals to the availability driver to complete or abort the VM being used by the active node to the standby node.(par 69 “Given the throughput of the network, if the time required to transfer the remaining pages is smaller than a pre-determined threshold, the VM is paused and its CPU state and the remaining dirty pages are transferred.
Upon completion of this final phase, the VM is resumed at the target. For GMGD each VM is migrated independently with the pre-copy migration technique.”) However, Gopalan does not specifically teach transfer of a network and storage device. On the other hand, Pawlowski further teaches, A high reliability fault tolerant computer system, which has an active node and a standby node, where when a failure occurs in the active node, the standby node acquires the state of the failing active node and resumes operation (fig 1; par 23 “In brief overview, a high reliability fault tolerant computer 10 constructed in accordance with the disclosure includes, in one embodiment, a plurality of CPU nodes (generally 14) interconnected to at least two IO domains (generally 26) through a mesh fabric network 30 as shown in FIG. 1. At least one of the nodes 14C of the plurality of nodes 14 is a standby node and does not execute applications unless one of the other CPU nodes 14, 14A, 14B either begins to fail or actually does fail. When a failure occurs, standby CPU node 14C acquires the state of the failing CPU node (for example, CPU node 14) and continues to execute the applications that were executing on the failing CPC node 14.”). wherein the isolated utility executive of the standby node signals to the availability driver to complete or abort transfer of a network and storage device being used by the active node to the standby node.(par 75 “Both CPU nodes 14, 14C read the token from their mailbox mechanism showing their new respective states (swapped from the original active and standby designations). Software on the new Active CPU node then performs any final cleanup as required. For example, it may be necessary to … train the switching fabric to map transactions from the new Active CPU node (Step 360) and perform a Resume from System Management (RSM) instruction to return control to the operating system and resume the interrupted instruction. 
The Standby CPU node can reactivate the previously quiesced devices and allow transactions to flow through the fabric to and from the Standby CPU node.”)

Regarding claim 5, Gopalan and Pawlowski teaches, The computer system of claim 1, Gopalan further teaches, wherein the active validation array and standby validation array are generated by a hash function. (par 72 “We use content hashing to detect identical pages. The pages having the same content yield the same hash value.”; par 73 “The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM's memory image, perform content based hashing and record identical pages.”)

Regarding claim 6, Gopalan and Pawlowski teaches, The computer system of claim 1, Gopalan further teaches, wherein the isolated utility executive is a virtual machine monitor which executes in the dedicated memory(par 78 “The deduplication server at the target rack monitors the pages within hosted VMs and synchronizes this information with other deduplication servers.”) to perform the validation array generation procured on the standby node.(par 74 “The per-rack deduplication servers maintain a hash table, which is populated by carrying out a rack-wide content hashing of the 160-bit hash values pre-computed by per-node controllers.
Each hash is also associated with a list of hosts in the rack containing the corresponding pages.”)

Regarding claim 7, Gopalan teaches, The computer system of claim 5, Gopalan further teaches, wherein the isolated utility executive of the active node and of the standby node execute code on all processors of the computer system for each virtual machine to generate the active validation array and the standby validation array.(par 39 “and the determination of the content redundancy in the memory across the plurality of servers may comprise determining, for each virtual machine, a hash for each memory page or sub-page used by the respective virtual machine.”) However, Gopalan does not specifically teach execute code on all processors of the computer system in parallel. On the other hand, Pawlowski further teaches, wherein the isolated utility executive of the active node and of the standby node execute code on all processors of the computer system in parallel to process the memory pages.(par 70 “In more detail during blackout, the FT driver executes driver code on all processors on the active but failing CPU 14 concurrently and copies the final set of dirtied pages to the Standby CPU 14C. The FT Driver causes all processors on CPU 14 to disable system interrupt processing on each processor so as to prevent other programs in the Fault tolerant computer system from generating more Dirty Page Bits.”; par 71 “In one embodiment, the FT Driver then copies the set of physical memory pages that are identified in the Dirty Page Bit Map into the corresponding physical memory addresses in the Second Subsystem.”)

Regarding claim 13, Gopalan teaches, A computer system configured to migrate a PC Server(par 9 “The present invention relates to gang migration, i.e.
the simultaneous live migration of multiple VMs that run on multiple physical machines in a cluster.”; par 4 “Live migration of a virtual machine (VM) refers to the transfer of a running VM over the network from one physical machine to another.”), the computer system comprising: a network device;(fig 1:Host 1-9; par 66 “As shown in FIG. 1, a typical cluster consists of multiple racks of physical machines. Machines within a rack are connected to a top-of-the-rack (TOR) switch.”) a storage device; (fig 1:network storage; par 36 “The plurality of servers may host a plurality of virtual machines, each virtual machine having an associated memory space comprising memory pages. At least one virtual machine may use network attach storage.” ) and at least two compute nodes, wherein one compute node is designated an active node and the other compute node is designated a standby node, (par 27 “A distributed duplicate tracking phase identifies and tracks identical memory content across VMs running on same/different physical machines in a cluster, including nonmigrating VMs running on the target machines.” Gopalan does source to target live migrations, and does not seem to differentiate between active and standby.) each compute node comprising a dedicated memory with an isolated utility executive, (par 51 “wherein each of the source server rack comprises a deduplication server which determines a hash of each memory page in the respective source server rack, storing the hashes of the memory pages in a hash table along with a list of duplicate pages, and controls a deduplicating of the memory pages or sub-pages within the source server rack before the transferring to the server rack.”; par 84 “Per-node controllers are responsible for managing the deduplication of outgoing and incoming VMs. 
We call the controller component managing the outgoing VMs as the source side and the component managing the incoming VMs as the target side.”) an operating system memory, the operating system memory including a plurality of files or data structures that are accessed by a driver running within the operating system, and a firmware reserved memory; (par 36 “The plurality of servers may host a plurality of virtual machines, each virtual machine having an associated memory space comprising memory pages.”; par 73 “The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM' s memory image, perform content based hashing and record identical pages.”) and the active node operating system memory further comprises an availability driver wherein the availability driver of the active node disables or suspends all processes that alter operating system memory and transfers all operation system memory of the active node to the operating system memory of the standby node; (par 69 “The pre-copy [ 5] VM migration technique transfers the memory of a running VM over the network by performing iterative passes over its memory. Each successive round transfers the pages that were dirtied by the VM in the previous iteration. Such iterations are carried out until a very small number of dirty pages are left to be transferred. Given the throughput of the network, if the time required to transfer the remaining pages is smaller than a pre-determined threshold, the VM is paused and its CPU state and the remaining dirty pages are transferred. Upon completion of this final phase, the VM is resumed at the target.”) the isolated utility executive of the active node executes a code to generate an active validation array set of all operating system memory(par 72 “We use content hashing to detect identical pages. 
The pages having the same content yield the same hash value.”; par 73 “The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM' s memory image, perform content based hashing and record identical pages.”), the active validation array is then transferred to the standby node;(par 78 “The deduplication server at the target rack monitors the pages within hosted VMs and synchronizes this information with other deduplication servers. Per-node controllers perform this forwarding of identical pages among hosts in the target rack.”) the isolated utility executive of the standby node executes a code to generate a standby validation array set of all operating system memory that is verified against the active validation array; (par 72 “We use content hashing to detect identical pages. The pages having the same content yield the same hash value.”; par 73 “The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM' s memory image, perform content based hashing and record identical pages.”) However, Gopalan does not specifically teach the isolated utility executive of the standby node signals to the availability driver to complete or abort transfer of a network and storage device being used by the active node to the standby node. On the other hand, Pawlowski teaches, A computer system configured to migrate a PC Server, the computer system comprising: a network device; a storage device; (fig 1; par 23 “In brief overview, a high reliability fault tolerant computer 10 constructed in accordance with the disclosure includes, in one embodiment, a plurality of CPU nodes (generally 14) interconnected to at least two IO domains (generally 26) through a mesh fabric network 30 as shown in FIG. 1. At least one of the nodes 14C of the plurality of nodes 14 is a standby node and does not execute applications unless one of the other CPU nodes 14, 14A, 14B either begins to fail or actually does fail. 
When a failure occurs, standby CPU node 14C acquires the state of the failing CPU node (for example, CPU node 14) and continues to execute the applications that were executing on the failing CPC node 14.”) and at least two compute nodes, wherein one compute node is designated an active node and the other compute node is designated a standby node, (par 13 “In one embodiment, the fault tolerant computer system includes a plurality of CPU nodes, each CPU node including a processor and a memory, wherein one of the CPU nodes is designated as a standby CPU node and the remainder are designated as active CPU nodes”) each compute node comprising a dedicated memory with an isolated utility executive,(par 70 “In more detail during blackout, the FT driver executes driver code on all processors on the active but failing CPU 14 concurrently and copies the final set of dirtied pages to the Standby CPU 14C. The FT Driver causes all processors on CPU 14 to disable system interrupt processing on each processor so as to prevent other programs in the Fault tolerant computer system from generating more Dirty Page Bits.”; par 71 “In one embodiment, the FT Driver then copies the set of physical memory pages that are identified in the Dirty Page Bit Map into the corresponding physical memory addresses in the Second Subsystem.”) an operating system memory, the operating system memory including a plurality of files or data structures that are accessed by a driver running within the operating system, and a firmware reserved memory;(par 59 “Non-virtualized software components 534 include an FT Management Layer 536. Each Virtual Machine Guest (VM) includes a VM Guest Operating System (VM OS) 542, 542A, and a SLAT table associated with the VM (SLAT L2) 546, 546A. 
Also included in each VM 538, 538A is one or more Virtual Machine Control Structures associated with the VM (VMCS-N), generally 550, 550A, one for each of the virtual processors 0-N that are allocated to that VM.”) and the active node operating system memory further comprises an availability driver wherein the availability driver of the active node disables or suspends all processes that alter operating system memory and transfers all operation system memory of the active node to the operating system memory of the standby node;(par 70 “In more detail during blackout, the FT driver executes driver code on all processors on the active but failing CPU 14 concurrently and copies the final set of dirtied pages to the Standby CPU 14C. The FT Driver causes all processors on CPU 14 to disable system interrupt processing on each processor so as to prevent other programs in the Fault tolerant computer system from generating more Dirty Page Bits.”; par 71 “In one embodiment, the FT Driver then copies the set of physical memory pages that are identified in the Dirty Page Bit Map into the corresponding physical memory addresses in the Second Subsystem.”) the isolated utility executive of the standby node signals to the availability driver to complete or abort transfer of a network and storage device being used by the active node to the standby node. (par 74 “To complete the failover, once all steps up to this point have been completed successfully, the active but failing CPU sends a command to the Primary Management Processor (which will coordinate with the Secondary Management Processors and handle any error cases in this step) to swap all of the resource mapping (Step 350) between the host ports for the two CPU nodes 14, 14A which are participating in the failover operation.” par 75 “Both CPU nodes 14, 14C read the token from their mailbox mechanism showing their new respective states (swapped from the original active and standby designations). 
Software on the new Active CPU node then performs any final cleanup as required. For example, it may be necessary to … train the switching fabric to map transactions from the new Active CPU node (Step 360) and perform a Resume from System Management (RSM) instruction to return control to the operating system and resume the interrupted instruction. The Standby CPU node can reactivate the previously quiesced devices and allow transactions to flow through the fabric to and from the Standby CPU node.”) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Gopalan to incorporate the transfer of a network and storage device of Pawlowski. One of ordinary skill in the art would have been motivated to remedy the shortcomings of Gopalan -- a need for how to handle resuming operations on the failover target -- with Pawlowski providing a known method to solve a similar problem. Pawlowski provides “In another embodiment, if one of: a failure, a beginning of a failure and a predicted failure occurs in an active node, the state and memory of the active CPU node is transferred through the switching fabric to the standby CPU node and the standby CPU node becomes the new active node, taking over for the previously failing node.”(Pawlowski par 9)

Regarding claim 14, Gopalan and Pawlowski teaches, The computer system of claim 13, Pawlowski further teaches, wherein the isolated utility executive of the active node and of the standby node become completely idle after the availability driver is signaled.(par 60 “Referring now to FIG. 5B, at the start of mirroring, the fault tolerant computer system is operating in non-mirrored mode.
The FT Management Layer 536 causes the FT Kernel Mode Driver (FT Driver) 522 to begin processing a command to enter Mirrored Execution.”; par 61 “The FT driver initializes the VMCS LO for each processor and causes the FTVMM to be installed and to execute”; par 62 “Once this is complete, the FT Driver generates a System Management Interrupt, and all processors execute in Firmware UEFI BIOS and Firmware SMM Module which generate an SMI, request the MPs 38 and 38A to change the host ports on switches 34, 34A, 34B and 34C to the standby CPU 14C, after which operation resumes on CPU 14C which is now the new Online CPU, and no longer a Standby CPU. The Firmware SMM performs a Resume to the FT Driver, and FT Driver completes the Blackout phase, unloads the FTVMM, releases the processors that were paused, enables interrupts, and completes its handling of the request for CPU failover.” None of these steps involve the component that does the validation array generation (the isolated utility executive))

Regarding claim 16, Gopalan and Pawlowski teaches, The computer of claim 1, Gopalan further teaches, wherein the availability driver is a kernel mode driver of the operating system.(fig 2; par 103; par 100 “A per-QEMU/KVM process thread, called a remote thread, periodically traverses the list, and checks for each entry whether the page corresponding to the identifier has been added into the target shared memory.” KVM is explained in par 82-84, par 84 “Per-node controllers are responsible for managing the deduplication of outgoing and incoming VMs. We call the controller component managing the outgoing VMs as the source side and the component managing the incoming VMs as the target side.
The controller sets up a shared memory region that is accessible only by other QEMU/KVM processes” KVM stands for “Kernel-based Virtual Machine” and is similar to a kernel mode driver) However, although Gopalan teaches implementing the monitor on a Kernel-based Virtual Machine, Gopalan does not specifically teach wherein the availability driver is a kernel mode driver of the operating system. On the other hand, Pawlowski further teaches, A high reliability fault tolerant computer system, which has an active node and a standby node, where when a failure occurs in the active node, the standby node acquires the state of the failing active node and resumes operation (fig 1; par 23 “In brief overview, a high reliability fault tolerant computer 10 constructed in accordance with the disclosure includes, in one embodiment, a plurality of CPU nodes (generally 14) interconnected to at least two IO domains (generally 26) through a mesh fabric network 30 as shown in FIG. 1. At least one of the nodes 14C of the plurality of nodes 14 is a standby node and does not execute applications unless one of the other CPU nodes 14, 14A, 14B either begins to fail or actually does fail. When a failure occurs, standby CPU node 14C acquires the state of the failing CPU node (for example, CPU node 14) and continues to execute the applications that were executing on the failing CPU node 14.”). wherein the availability driver is a kernel mode driver of the operating system.
(fig 5A:522; par 58,60 “The FT Management Layer 536 causes the FT Kernel Mode Driver (FT Driver) 522 to begin processing a command to enter Mirrored Execution.”) Regarding claim 18, Gopalan and Pawlowski teach, The computer system of claim 1, Gopalan further teaches, wherein bios and boot firmware of the standby node are configured similarly to bios and boot firmware of the active node.(par 16 “VMs within a cluster often have similar memory content, given that they may execute the same operating system, libraries, and applications.”) However, Gopalan does not specifically teach wherein bios and boot firmware of the standby node are configured to be identically or substantially identically configured to bios and boot firmware of the active node. Gopalan does not go into detail about how identical the bios and boot firmware of the VMs are. On the other hand, Pawlowski further teaches, wherein bios and boot firmware of the standby node are configured to be identically or substantially identically configured to bios and boot firmware of the active node.(par 42 “The previous steps allow the user to configure the system to his or her needs.
However, for users that do not wish to make use of this ability, a default configuration of replicated resources is allocated to each active compute node 14 (active CPU), but the user can modify the allocation via a provisioning service, if desired.”; par 44 “The boot process for a given CPU 18 in a multi-node platform is the same as it would be for any standard server with standard BIOS and standard OS.” par 92) Regarding claim 20, Gopalan and Pawlowski teach, The computer of claim 1, Gopalan further teaches, wherein the standby node further comprises software configured to detect details of memory that fails validation and characterizes pages or sub-pages validation failures that occurred using the detected details of memory.(par 41 “The transferring may comprise selectively suppressing a transfer of memory pages or sub-pages already stored in the rack by a process comprising: computing in real time hashes of the memory pages or sub-pages in the rack; storing the hashes in a hash table; receiving a hash representing a memory page or sub-page of a virtual machine to be migrated to the server rack; comparing the received hash to the hashes in the hash table; if the hash does not correspond to a hash in the hash table and adding the hash of the memory page or sub-page of a virtual machine to be migrated to the server rack to the hash table, transferring the copy of the memory page or sub-page of a virtual machine to be migrated to the server rack; and if the hash corresponds to a hash in the hash table, duplicating the unique memory page or sub-page within the server rack associated with the entry in the hash table and suppressing the transferring of the copy of the memory page or sub-page of a virtual machine to be migrated to the server rack.”; par 73 “In each machine, a per-node controller process coordinates the tracking of identical pages among all VMs in the machine”) However, Gopalan does not specifically teach characterizing a failure mode.
On the other hand, Pawlowski further teaches, wherein the standby node further comprises software configured to detect details of memory that fails validation and characterizes a failure mode that occurred using the detected details of memory.(par 48 “In overview, the active CPU node 14 which is experiencing either a large number of correctable errors above a predetermined threshold or other degraded capability indicates to the MP 38, associated with the node's IO domain 26, that the node 14 has reached this degraded state, and a failover to the non-failing standby CPU node 14C should commence.”; par 79 “In more detail, the MP 38, 38A enables DPC (Downstream Port Containment) Failover Triggers in the fabric-mode switch component (generally 34) for each downstream port connecting a CPU node to a device. Potential Failover Triggers include Link-Down errors, and uncorrectable and fatal errors, in addition to intentional software triggers; for example to shut down and remove an IO domain 26.”) Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20210342232 A1 (Gopalan) and US 20200050523 A1 (Pawlowski) as applied to claim 3 above, and further in view of US 11372969 B1 (Sundahl). Regarding claim 4, Gopalan and Pawlowski teach, The computer system of claim 3, Gopalan further teaches, wherein the compute nodes further comprise stack memory in the dedicated memory,(par 28 “This problem is important because a VM is particularly vulnerable to failure during live migration. ….
During this time, a VM's state at the source and the destination nodes may be inconsistent, its state may be distributed across multiple nodes, and the software stack of a VM, including its virtual disk contents, may be in different stages of migration.”) wherein the stack memory can be verified through a hash value.(par 30 “It is a further object to provide a method of tracking duplication of memory content in a plurality of servers, each server having a memory pool comprising a plurality of memory pages and together residing in a common rack, comprising: computing a hash value for each of the plurality of memory pages or sub-pages in each server, communicating the hash values to a deduplication server process executing on a server in the common rack;”) However, Gopalan and Pawlowski do not specifically teach wherein the stack memory can be verified through a data token. On the other hand, Sundahl teaches, A stack protection system(col 2 ln 43-50 “This application discloses improved systems and methods of providing computer security and countering attacks on computing systems by protecting control data such as a return address from being disclosed or modified. The disclosed technology includes techniques for detecting attempts to overwrite a return address on a call stack, and preventing any overwrite of a return address on the call stack from succeeding.”) wherein the compute nodes further comprise stack memory in the dedicated memory, wherein the stack memory can be verified through a data token.(col 11 ln 47-60 “In block 825, the operational routine 850 retrieves (e.g., from the shadow stack) the dummy return address that the operational routine 800 saved for verification. In block 835, the operational routine 850 compares the retrieved dummy return address of block 825 against the dummy control data popped off the stack with the function return of block 815.
In block 845, the operational routine 850 conditionally branches based on the result of the comparison. In block 855, if the dummy return addresses match, then the verification succeeds and the operational routine 850 proceeds to retrieve and decode the encoded return address from the shadow stack. Then in block 865, the operational routine 850 continues the return from the called function as normal, sending control flow to the decoded return address.”; col 6 ln 2-6 “As a result, the overwritten stack canary 330 no longer matches the known value, and fails the verification. Therefore, the computing system knows that the canary value 230 has been corrupted, so the integrity of the control data 220 is suspect.”) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Gopalan and Pawlowski to incorporate the stack verification method of Sundahl. One of ordinary skill in the art would have been motivated to remedy the shortcomings of Gopalan and Pawlowski -- a need for how to verify that memory is still valid -- with Sundahl providing a known method to solve a similar problem. Sundahl provides “improved systems and methods of providing computer security and countering attacks on computing systems by protecting control data such as a return address from being disclosed or modified. The disclosed technology includes techniques for detecting attempts to overwrite a return address on a call stack, and preventing any overwrite of a return address on the call stack from succeeding. By enhancing randomization of a stack canary and using a shadow stack to encode and conceal a return address, the disclosed technology enhances security of a computing system against stack smashing, ROP attacks, and JIT-ROP attacks.”(Sundahl col 2 ln 43-54) Claim(s) 8-10,12,19 is/are rejected under 35 U.S.C.
103 as being unpatentable over US 20210342232 A1 (Gopalan), in view of US 20200050523 A1 (Pawlowski), and US 20220156734 A1 (Iwato). Regarding claim 8, Gopalan teaches, A method for verifying memory contents in a fault tolerant computer system(par 16 “During normal execution, a duplicate tracking mechanism keeps track of identical pages across different VMs in the cluster.” par 17 “The present technology therefore seeks to identify and track identical memory pages across VMs running on different physical machines in a cluster, including non-migrating VMs running on the target machines.”) comprising: providing the fault tolerant system, the fault tolerant system comprising a first compute node comprising a first isolated utility executive and a second compute node comprising a second isolated utility executive,(par 84 “Per-node controllers are responsible for managing the deduplication of outgoing and incoming VMs. We call the controller component managing the outgoing VMs as the source side and the component managing the incoming VMs as the target side.”; par 73 “In each machine, a per-node controller process coordinates the tracking of identical pages among all VMs in the machine. The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM's memory image, perform content based hashing and record identical pages.”) the compute nodes having memory;(fig 1:Host 1-9; par 66 “As shown in FIG. 1, a typical cluster consists of multiple racks of physical machines. Machines within a rack are connected to a top-of-the-rack (TOR) switch.” par 36 “The plurality of servers may host a plurality of virtual machines, each virtual machine having an associated memory space comprising memory pages.
At least one virtual machine may use network attach storage.”) suspending system execution on the first compute node, using an availability driver, to prevent changes in memory on the first compute node, wherein the availability driver is a kernel mode driver; (par 69 “The pre-copy [5] VM migration technique transfers the memory of a running VM over the network by performing iterative passes over its memory. Each successive round transfers the pages that were dirtied by the VM in the previous iteration. Such iterations are carried out until a very small number of dirty pages are left to be transferred. Given the throughput of the network, if the time required to transfer the remaining pages is smaller than a pre-determined threshold, the VM is paused and its CPU state and the remaining dirty pages are transferred. Upon completion of this final phase, the VM is resumed at the target.”; fig 2; par 103; par 100 “A per-QEMU/KVM process thread, called a remote thread, periodically traverses the list, and checks for each entry whether the page corresponding to the identifier has been added into the target shared memory.” KVM is explained in par 82-84, par 84 “Per-node controllers are responsible for managing the deduplication of outgoing and incoming VMs. We call the controller component managing the outgoing VMs as the source side and the component managing the incoming VMs as the target side. The controller sets up a shared memory region that is accessible only by other QEMU/KVM processes” KVM stands for “Kernel-based Virtual Machine” and is similar to a kernel mode driver) generating, using the first isolated utility executive, a first array for every page of memory on the first compute node; (par 72 “We use content hashing to detect identical pages.
The pages having the same content yield the same hash value.”; par 73 “The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM's memory image, perform content based hashing and record identical pages.”) generating, using the second isolated utility executive, a second array for every page of memory on the second compute node; (par 72 “We use content hashing to detect identical pages. The pages having the same content yield the same hash value.”; par 73 “The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM's memory image, perform content based hashing and record identical pages.”) verifying consistency between the first arrays on the first compute node and the second arrays on the second compute node; (par 72 “We use content hashing to detect identical pages. The pages having the same content yield the same hash value.”; par 73 “The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM's memory image, perform content based hashing and record identical pages.”) resuming system execution on the second compute node. (par 69 “… the VM is paused and its CPU state and the remaining dirty pages are transferred. Upon completion of this final phase, the VM is resumed at the target.”) However, Gopalan does not specifically teach transferring disk and network access to the second compute node. On the other hand, Pawlowski teaches, A method for performing memory transfer in a fault tolerant computer system (fig 1; par 23 “In brief overview, a high reliability fault tolerant computer 10 constructed in accordance with the disclosure includes, in one embodiment, a plurality of CPU nodes (generally 14) interconnected to at least two IO domains (generally 26) through a mesh fabric network 30 as shown in FIG. 1.
At least one of the nodes 14C of the plurality of nodes 14 is a standby node and does not execute applications unless one of the other CPU nodes 14, 14A, 14B either begins to fail or actually does fail. When a failure occurs, standby CPU node 14C acquires the state of the failing CPU node (for example, CPU node 14) and continues to execute the applications that were executing on the failing CPU node 14.”) comprising: providing the fault tolerant system, the fault tolerant system comprising a first compute node comprising a first isolated utility executive and a second compute node comprising a second isolated utility executive(fig 5A:536,522,550; par 58,59 “Non-virtualized software components 534 include an FT Management Layer 536. Each Virtual Machine Guest (VM) includes a VM Guest Operating System (VM OS) 542, 542A, and a SLAT table associated with the VM (SLAT L2) 546, 546A. Also included in each VM 538, 538A is one or more Virtual Machine Control Structures associated with the VM (VMCS-N), generally 550, 550A, one for each of the virtual processors 0-N that are allocated to that VM.”), the compute nodes having memory;(par 13 “In one embodiment, the fault tolerant computer system includes a plurality of CPU nodes, each CPU node including a processor and a memory, wherein one of the CPU nodes is designated as a standby CPU node and the remainder are designated as active CPU nodes”) suspending system execution on the first compute node, using an availability driver, to prevent changes in memory on the first compute node(par 69 “After the Brownout copy phase is complete, the active but failing CPU 14 signals its drivers, which are tracking DMA memory access, to pause all DMA traffic. (Step 330) This is the beginning of the Blackout phase. CPU threads are then all paused to prevent further modification of memory pages.
At this time, the final list of pages modified by either CPU access or DMA access is copied to the Standby CPU 14C.”;) wherein the availability driver is a kernel mode driver;(fig 5A:522; par 58,60 “The FT Management Layer 536 causes the FT Kernel Mode Driver (FT Driver) 522 to begin processing a command to enter Mirrored Execution.”) transferring disk and network access to the second compute node;(par 75 “Both CPU nodes 14, 14C read the token from their mailbox mechanism showing their new respective states (swapped from the original active and standby designations). Software on the new Active CPU node then performs any final cleanup as required. For example, it may be necessary to … train the switching fabric to map transactions from the new Active CPU node (Step 360) and perform a Resume from System Management (RSM) instruction to return control to the operating system and resume the interrupted instruction. The Standby CPU node can reactivate the previously quiesced devices and allow transactions to flow through the fabric to and from the Standby CPU node.”) and resuming system execution on the second compute node. (par 75 “… perform a Resume from System Management (RSM) instruction to return control to the operating system and resume the interrupted instruction. The Standby CPU node can reactivate the previously quiesced devices and allow transactions to flow through the fabric to and from the Standby CPU node.”) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Gopalan to incorporate the transfer of a network and storage device of Pawlowski. One of ordinary skill in the art would have been motivated to remedy the shortcomings of Gopalan -- a need for how to handle resuming operations on the failover target -- with Pawlowski providing a known method to solve a similar problem.
Pawlowski provides “In another embodiment, if one of: a failure, a beginning of a failure and a predicted failure occurs in an active node, the state and memory of the active CPU node is transferred through the switching fabric to the standby CPU node and the standby CPU node becomes the new active node, taking over for the previously failing node.”(Pawlowski par 9) However, Gopalan and Pawlowski do not specifically teach wherein the full memory bandwidth operates in parallel. On the other hand, Iwato teaches, A device for computing a plurality of hash values used in a blockchain or similar storage in parallel(par 1 “The embodiments discussed herein are related … to an information processing device suitable for computing a plurality of hash values used in a blockchain or the like in parallel.”) generating a first array for every page of memory on the first compute node wherein the full memory bandwidth operates in parallel;(par 34 “An information processing device according to an embodiment of the present invention performs computation of a hash function using a processor, a high-speed memory (for example, a HBM), and a semiconductor integrated circuit such as an ASIC or an FPGA. The memory is configured to include multiple banks. The semiconductor integrated circuit performing computation of a hash function employs multiple threads in order to shield an access latency of the memory and performs parallel computation that can make full use of the bandwidth of the memory.”) generating a second array for every page of memory on the second compute node wherein the full memory bandwidth operates in parallel;(par 34 “An information processing device according to an embodiment of the present invention performs computation of a hash function using a processor, a high-speed memory (for example, a HBM), and a semiconductor integrated circuit such as an ASIC or an FPGA. The memory is configured to include multiple banks.
The semiconductor integrated circuit performing computation of a hash function employs multiple threads in order to shield an access latency of the memory and performs parallel computation that can make full use of the bandwidth of the memory.”) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Gopalan and Pawlowski to incorporate the memory usage technique of Iwato. One of ordinary skill in the art would have been motivated to remedy the shortcomings of Gopalan and Pawlowski -- a need for how to quickly process VM memory data(Gopalan par 12 “Live VM migration mechanisms must move active VMs as quickly as possible and with minimal impact on the applications and the cluster infrastructure. These requirements translate into reducing the total migration time, downtime, application degradation, and cluster resource overheads such as network traffic, computation, memory, and storage overheads. Even though a large body of work in both industry and academia has advanced these goals, several challenges related to performance, robustness, and security remain to be addressed.”; Iwato par 5 “an algorithm for computation of a hash function for finding out a nonce includes a … few gigabyte size dataset called DAG that has been calculated beforehand, and the speed of access to a memory where the DAG is stored limits the processing speed of proof-of-work (PoW).”) -- with Iwato providing a known method to solve a similar problem. 
Iwato provides “An embodiment of the present invention has been made in view of the above problems and an object thereof is to execute control not to cause conflict of bank accesses from a large number of hash computation circuits to a memory including a plurality of banks, thereby making the most of a memory bandwidth effectively.”(Iwato par 18) Regarding claim 9, Gopalan, Pawlowski, and Iwato teach, The method of claim 8, Gopalan further teaches, wherein the arrays on the first compute node and the arrays on the second compute node are generated by a checksum or hash function. (par 72 “We use content hashing to detect identical pages. The pages having the same content yield the same hash value.”; par 73 “The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM's memory image, perform content based hashing and record identical pages.”) Regarding claim 10, Gopalan, Pawlowski, and Iwato teach, The method of claim 8, Gopalan further teaches, further comprising detecting an immediate or impending failure of the first compute node.(par 9 “Datacenter administrators may need to perform gang migration to handle resource reallocation for peak workloads, imminent failures, cluster maintenance, or powering down of several physical machines to save energy.”; abstract “Gang migration refers to the simultaneous live migration of multiple Virtual Machines (VMs) from one set of physical machines to another in response to events such as load spikes and imminent failures.”) Regarding claim 12, Gopalan, Pawlowski, and Iwato teach, The method of claim 8, Gopalan further teaches, further comprising transferring the memory of the first compute node to a second compute node via a virtual machine monitor.
(par 78 “The deduplication server at the target rack monitors the pages within hosted VMs and synchronizes this information with other deduplication servers.”; par 41 “The transferring may comprise selectively suppressing a transfer of memory pages or sub-pages already stored in the rack by a process comprising: computing in real time hashes of the memory pages or sub-pages in the rack; …, transferring the copy of the memory page or sub-page of a virtual machine to be migrated to the server rack; ….”) Regarding claim 19, Gopalan, Pawlowski, and Iwato teach, The method of claim 9, Pawlowski further teaches, further comprising copying processor state memory from the active node to the standby node, wherein the processor state memory comprises processor specific register states and Local Apic (Local Advanced Programmable Interrupt Controller) states.(par 62 “The remaining dirty pages and the active processor state are then copied to the standby computer memory.”; par 37 “In one embodiment, using a Windows operating system and zero-copy direct memory access (DMA), the switching fabric 30 can transfer the processor state and memory contents from one CPU node 14 to another at about 56 GB/sec.”; par 72 “Once all the memory of the active but failing CPU node 14 has been copied, the active but failing CPU node 14 saves the internal state of its processors (Step 340) (including its registers, local Advanced Programmable Interrupt Controller, High Precision Event Timer, etc.) to a memory location, copies that data to the Standby CPU node, where it is subsequently restored into the corresponding registers of the Standby CPU node 14C.”) Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20210342232 A1 (Gopalan), US 20200050523 A1 (Pawlowski) and US 20220156734 A1 (Iwato) as applied to claim 8 above, and further in view of US 11372969 B1 (Sundahl).
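The per-page checksum/hash validation mapped onto claims 8 and 9 above (each node's isolated utility executive hashes every page of its memory, and the resulting arrays are compared for consistency) can be sketched as follows. This is an illustrative sketch only; the 4 KiB page size, the choice of SHA-256, and the function names are assumptions for illustration, not drawn from the cited references.

```python
import hashlib

PAGE_SIZE = 4096  # assumed page size for illustration


def validation_array(memory: bytes) -> list:
    # One hash entry per page, analogous to the claimed per-page
    # validation array (cf. Gopalan's content hashing of memory pages).
    return [hashlib.sha256(memory[i:i + PAGE_SIZE]).hexdigest()
            for i in range(0, len(memory), PAGE_SIZE)]


def mismatched_pages(active: bytes, standby: bytes) -> list:
    # Indices of pages whose hashes differ between the two nodes' arrays.
    active_arr, standby_arr = validation_array(active), validation_array(standby)
    return [i for i, (a, s) in enumerate(zip(active_arr, standby_arr)) if a != s]


# Two 3-page memory images that differ only in page 1.
node_a = bytes(PAGE_SIZE * 3)
node_b = bytes(PAGE_SIZE) + b"\x01" * PAGE_SIZE + bytes(PAGE_SIZE)
print(mismatched_pages(node_a, node_b))  # prints [1]
```

Gopalan's sub-page granularity would follow the same pattern with a smaller chunk size; the mismatch indices correspond to the "detected details of memory" that claim 20 characterizes.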
Regarding claim 11, Gopalan, Pawlowski, and Iwato teach, The method of claim 8, Gopalan further teaches, wherein the memory further includes stack memory, (par 28 “This problem is important because a VM is particularly vulnerable to failure during live migration. …. During this time, a VM's state at the source and the destination nodes may be inconsistent, its state may be distributed across multiple nodes, and the software stack of a VM, including its virtual disk contents, may be in different stages of migration.”) and wherein each stack memory page is validated using a hash value.(par 30 “It is a further object to provide a method of tracking duplication of memory content in a plurality of servers, each server having a memory pool comprising a plurality of memory pages and together residing in a common rack, comprising: computing a hash value for each of the plurality of memory pages or sub-pages in each server, communicating the hash values to a deduplication server process executing on a server in the common rack;”) However, Gopalan, Pawlowski, and Iwato do not specifically teach wherein each stack memory page is validated using a data token. On the other hand, Sundahl teaches, A stack protection system(col 2 ln 43-50 “This application discloses improved systems and methods of providing computer security and countering attacks on computing systems by protecting control data such as a return address from being disclosed or modified. The disclosed technology includes techniques for detecting attempts to overwrite a return address on a call stack, and preventing any overwrite of a return address on the call stack from succeeding.”) wherein the memory further includes stack memory, and wherein each stack memory page is validated using a data token.(col 11 ln 47-60 “In block 825, the operational routine 850 retrieves (e.g., from the shadow stack) the dummy return address that the operational routine 800 saved for verification.
In block 835, the operational routine 850 compares the retrieved dummy return address of block 825 against the dummy control data popped off the stack with the function return of block 815. In block 845, the operational routine 850 conditionally branches based on the result of the comparison. In block 855, if the dummy return addresses match, then the verification succeeds and the operational routine 850 proceeds to retrieve and decode the encoded return address from the shadow stack. Then in block 865, the operational routine 850 continues the return from the called function as normal, sending control flow to the decoded return address.”; col 6 ln 2-6 “As a result, the overwritten stack canary 330 no longer matches the known value, and fails the verification. Therefore, the computing system knows that the canary value 230 has been corrupted, so the integrity of the control data 220 is suspect.”) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Gopalan, Pawlowski, and Iwato to incorporate the stack verification method of Sundahl. One of ordinary skill in the art would have been motivated to remedy the shortcomings of Gopalan, Pawlowski, and Iwato -- a need for how to verify that memory is still valid -- with Sundahl providing a known method to solve a similar problem. Sundahl provides “improved systems and methods of providing computer security and countering attacks on computing systems by protecting control data such as a return address from being disclosed or modified. The disclosed technology includes techniques for detecting attempts to overwrite a return address on a call stack, and preventing any overwrite of a return address on the call stack from succeeding.
By enhancing randomization of a stack canary and using a shadow stack to encode and conceal a return address, the disclosed technology enhances security of a computing system against stack smashing, ROP attacks, and JIT-ROP attacks.”(Sundahl col 2 ln 43-54) Claim(s) 15,17 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20210342232 A1 (Gopalan) and US 20200050523 A1 (Pawlowski) as applied to claim 13 above, and further in view of US 11372969 B1 (Sundahl). Regarding claim 15, Gopalan and Pawlowski teach, The computer system of claim 14, Gopalan further teaches, wherein the memory further includes stack memory, (par 28 “This problem is important because a VM is particularly vulnerable to failure during live migration. …. During this time, a VM's state at the source and the destination nodes may be inconsistent, its state may be distributed across multiple nodes, and the software stack of a VM, including its virtual disk contents, may be in different stages of migration.”) and wherein each stack memory page is validated using a hash value.(par 30 “It is a further object to provide a method of tracking duplication of memory content in a plurality of servers, each server having a memory pool comprising a plurality of memory pages and together residing in a common rack, comprising: computing a hash value for each of the plurality of memory pages or sub-pages in each server, communicating the hash values to a deduplication server process executing on a server in the common rack;”) However, Gopalan and Pawlowski do not specifically teach wherein each stack memory page is validated using a data token. On the other hand, Sundahl teaches, A stack protection system(col 2 ln 43-50 “This application discloses improved systems and methods of providing computer security and countering attacks on computing systems by protecting control data such as a return address from being disclosed or modified.
The disclosed technology includes techniques for detecting attempts to overwrite a return address on a call stack, and preventing any overwrite of a return address on the call stack from succeeding.”) wherein the memory further includes stack memory, and wherein each stack memory page is validated using a data token. (col 11 ln 47-60 “In block 825, the operational routine 850 retrieves (e.g., from the shadow stack) the dummy return address that the operational routine 800 saved for verification. In block 835, the operational routine 850 compares the retrieved dummy return address of block 825 against the dummy control data popped off the stack with the function return of block 815. In block 845, the operational routine 850 conditionally branches based on the result of the comparison. In block 855, if the dummy return addresses match, then the verification succeeds and the operational routine 850 proceeds to retrieve and decode the encoded return address from the shadow stack. Then in block 865, the operational routine 850 continues the return from the called function as normal, sending control flow to the decoded return address.”; col 6 ln 2-6 “As a result, the overwritten stack canary 330 no longer matches the known value, and fails the verification. Therefore, the computing system knows that the canary value 230 has been corrupted, so the integrity of the control data 220 is suspect.”) Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Gopalan and Pawlowski to incorporate the stack verification method of Sundahl. One of ordinary skill in the art would have been motivated to remedy the shortcomings of Gopalan and Pawlowski -- a need for how to verify that memory is still valid -- with Sundahl providing a known method to solve a similar problem.
Sundahl provides “improved systems and methods of providing computer security and countering attacks on computing systems by protecting control data such as a return address from being disclosed or modified. The disclosed technology includes techniques for detecting attempts to overwrite a return address on a call stack, and preventing any overwrite of a return address on the call stack from succeeding. By enhancing randomization of a stack canary and using a shadow stack to encode and conceal a return address, the disclosed technology enhances security of a computing system against stack smashing, ROP attacks, and JIT-ROP attacks.”(Sundahl col 2 ln 43-54)

Regarding claim 17, Gopalan, Pawlowski, and Sundahl teach, The computer system of claim 15, Pawlowski further teaches, wherein the availability driver is a kernel mode driver of the operating system. (fig 5A:522; par 58,60 “The FT Management Layer 536 causes the FT Kernel Mode Driver (FT Driver) 522 to begin processing a command to enter Mirrored Execution.”)

Response to Arguments

Applicant’s arguments, see pg. 6-8, filed 01/30/2026, with respect to the rejections under 35 U.S.C. 101 have been fully considered and are persuasive. The rejections under 35 U.S.C. 101 of 10/30/2025 have been withdrawn. Applicant’s arguments, see pg. 8-10, filed 01/30/2026, with respect to the rejection(s) of claim(s) 1,2,5-6 under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by US 20210342232 A1 (Gopalan) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of 35 U.S.C. 103 as being unpatentable over US 20210342232 A1 (Gopalan), in view of US 20200050523 A1 (Pawlowski).
With respect to the independent claims, the applicant has argued that Gopalan does not teach limitations “and the active node operating system memory further comprises an availability driver; wherein the availability driver of the active node disables or suspends all processes that alter operating system memory and transfers all operation system memory of the active node to the operating system memory of the standby node;”. Applicant explains that Gopalan does not teach an availability driver. Pawlowski, however, teaches an availability driver in the cited (fig 5A:536; par 58,59 “Non-virtualized software components 534 include an FT Management Layer 536.”), which examiner interprets as the limitation “and the active node operating system memory further comprises an availability driver;”. Pawlowski further teaches in the cited (par 69 “After the Brownout copy phase is complete, the active but failing CPU 14 signals its drivers, which are tracking DMA memory access, to pause all DMA traffic. (Step 330) This is the beginning of the Blackout phase. CPU threads are then all paused to prevent further modification of memory pages. At this time, the final list of pages modified by either CPU access or DMA access is copied to the Standby CPU 14C.”). Examiner interprets this as “wherein the availability driver of the active node disables or suspends all processes that alter operating system memory and transfers all operation system memory of the active node to the operating system memory of the standby node;”.

With respect to the independent claims, the applicant has argued that Gopalan does not teach limitations “each compute node comprising a dedicated memory with an isolated utility executive, … the isolated utility executive of the active node executes a code to generate an active validation array set of all operating system memory,”. Applicant explains that Gopalan does not teach an isolated utility executive.
The newly cited Pawlowski teaches, in the cited (fig 5A:522 “FT Kernel mode driver”; par 60 “The FT Management Layer 536 causes the FT Kernel Mode Driver (FT Driver) 522 to begin processing a command to enter Mirrored Execution. The FT Kernel Mode Driver 522 loads or writes the program and data code of the FT Virtual Machine Monitor (FTVMM) code 580, the FTVMM data 584, the SLAT LO 588, and the VMCS-L0 Array 592 into the Reserved Memory Region.”; par 70 “In more detail during blackout, the FT driver executes driver code on all processors on the active but failing CPU 14 concurrently and copies the final set of dirtied pages to the Standby CPU 14C. The FT Driver causes all processors on CPU 14 to disable system interrupt processing on each processor so as to prevent other programs in the Fault tolerant computer system from generating more Dirty Page Bits.”; par 71 “In one embodiment, the FT Driver then copies the set of physical memory pages that are identified in the Dirty Page Bit Map into the corresponding physical memory addresses in the Second Subsystem.”). Examiner interprets this as limitation “each compute node comprising a dedicated memory with an isolated utility executive, … the isolated utility executive of the active node executes a code”. The remaining limitation, “generate an active validation array set of all operating system memory”, was not argued to be missing, but nevertheless, Gopalan teaches the “generate an active validation array set of all operating system memory” limitation in the cited (par 72 “We use content hashing to detect identical pages. The pages having the same content yield the same hash value.”; par 73 “The per-node controller instructs a user-level QEMU/KVM process associated with each VM to scan the VM’s memory image, perform content based hashing and record identical pages.”), though Gopalan does this through its per-node controller.
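Gopalan's content-hashing step (pars 72-73) amounts to grouping memory pages by a hash of their contents, so identical pages land in the same hash bucket. The sketch below is a minimal illustration under assumed parameters: 4 KB pages and SHA-256, neither of which is specified in the quoted passages, and the function name is invented:

```python
import hashlib
from collections import defaultdict

PAGE_SIZE = 4096  # assumed page size; Gopalan does not fix one in the quoted text

def find_identical_pages(memory: bytes) -> dict:
    """Group page indices by content hash: pages with the same content
    yield the same hash value (Gopalan par 72)."""
    groups = defaultdict(list)
    for offset in range(0, len(memory), PAGE_SIZE):
        digest = hashlib.sha256(memory[offset:offset + PAGE_SIZE]).hexdigest()
        groups[digest].append(offset // PAGE_SIZE)
    # only buckets holding two or more pages represent duplicated content
    return {h: idxs for h, idxs in groups.items() if len(idxs) > 1}

mem = b"\x00" * PAGE_SIZE + b"\xab" * PAGE_SIZE + b"\x00" * PAGE_SIZE
dupes = find_identical_pages(mem)
# pages 0 and 2 hash identically; page 1 is unique and is dropped
```

The same per-page digests could equally serve as a validation array: recomputing and comparing them on the standby node would flag any page that changed in transit.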
The combination of Gopalan and Pawlowski covers the limitation of “each compute node comprising a dedicated memory with an isolated utility executive, … the isolated utility executive of the active node executes a code to generate an active validation array set of all operating system memory,”.

With respect to the independent claims, the applicant has argued that Gopalan does not teach limitations “at least two compute nodes, wherein one compute node is designated an active node and the other compute node is designated a standby node, each compute node comprising a dedicated memory …, an operating system memory, and a firmware reserved memory …;”. Applicant explains that Gopalan relates to virtual machines and not two hardware nodes. The newly cited Pawlowski teaches these features in the cited (par 13 “In one embodiment, the fault tolerant computer system includes a plurality of CPU nodes, each CPU node including a processor and a memory, wherein one of the CPU nodes is designated as a standby CPU node and the remainder are designated as active CPU nodes”). Examiner interprets this as limitation “at least two compute nodes, wherein one compute node is designated an active node and the other compute node is designated a standby node,”. Pawlowski also teaches an operating system memory in the cited (par 44 “The OS bootloader loads the OS image into memory and begins OS execution.”), and a firmware reserved memory in the cited (fig 5A:504,508; par 57 “Referring to FIG. 5A, in normal, non-mirrored, operation, the layers in the fault tolerant computer system include …; a server firmware layer 504 including the system Universal Extensible Firmware Interface (UEFI) BIOS 508; and a zero layer reserved memory region 512 … . The zero layer reserved memory 512 is reserved by the BIOS 508 at boot time. Although most of the memory of the fault tolerant computer system is available for use by the Operating System and software, the reserved memory 512 is not.”).
Examiner interprets these as limitations “an operating system memory, and a firmware reserved memory”.

Applicant’s arguments filed 01/30/2026 pg. 8-10 regarding the rejections of Claim(s) 8-10,12, rejected under 35 U.S.C. 103 as being unpatentable over US 20210342232 A1 (Gopalan), in view of US 20200050523 A1 (Pawlowski), and US 20220156734 A1 (Iwato); and claims 3,7,13,14, rejected under 35 U.S.C. 103 as being unpatentable over US 20210342232 A1 (Gopalan) in view of US 20200050523 A1 (Pawlowski); have been fully considered but they are not persuasive.

With respect to the independent claims, the applicant has argued that Gopalan and Pawlowski are not properly combinable. Applicant explains that Gopalan acts on virtual nodes and Pawlowski acts on physical hardware, so it would not make sense to incorporate techniques from Pawlowski into Gopalan’s system. In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. [see MPEP 2143] In this case, one of ordinary skill in the art would have been motivated to remedy the shortcomings of Gopalan -- a need for a way to handle resuming operations on the failover target -- with Pawlowski providing a known method to solve a similar problem. Pawlowski provides “In another embodiment, if one of: a failure, a beginning of a failure and a predicted failure occurs in an active node, the state and memory of the active CPU node is transferred through the switching fabric to the standby CPU node and the standby CPU node becomes the new active node, taking over for the previously failing node.”(Pawlowski par 9).
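The hand-off Pawlowski describes (par 9, with the brownout/blackout mechanics of pars 69-71) reduces to copying the pages flagged in a dirty-page bitmap to the standby node once DMA and CPU writes have been paused. The following sketch uses invented names (`PAGE`, `copy_dirty_pages`) and models memory as byte arrays purely for illustration:

```python
PAGE = 4096  # assumed page size

def copy_dirty_pages(active: bytearray, standby: bytearray, dirty: list) -> None:
    """Blackout-phase transfer: with DMA traffic and CPU threads paused
    (so the bitmap can no longer grow), copy only the pages flagged in
    the dirty-page bitmap into the standby node's memory."""
    for i, is_dirty in enumerate(dirty):
        if is_dirty:
            standby[i * PAGE:(i + 1) * PAGE] = active[i * PAGE:(i + 1) * PAGE]
            dirty[i] = False  # page is now identical on both nodes

active = bytearray(b"\x11" * PAGE + b"\x22" * PAGE)  # active node's memory
standby = bytearray(2 * PAGE)                        # stale copy on standby
copy_dirty_pages(active, standby, [True, False])
# only page 0 is transferred; page 1 was never dirtied
```

Copying only flagged pages is what keeps the blackout window short: the bulk of memory has already been mirrored during brownout, and only the residual dirty set must move while execution is frozen.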
Gopalan and Pawlowski are both generally about fault tolerant node architectures, and a person of ordinary skill in the art would have been motivated to combine these teachings to improve the reliability of failover systems.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US 20240176739 A1 - Alden - DMA memory copy from failing active node to standby node.
US 20220068421 A1 - Thommana - checks volatile memory when loading containers; uses a checksum and a digest (cryptographic hash).
US 8271700 B1 - Annem - teaches DMA concurrent memory access with modern system buses in par 98.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL XU whose telephone number is (571)272-5688. The examiner can normally be reached Monday-Friday 8:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bryce Bonzo, can be reached at (571) 272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL XU/
Examiner, Art Unit 2113

Prosecution Timeline

Dec 28, 2023
Application Filed
Oct 28, 2025
Non-Final Rejection — §103
Jan 30, 2026
Response Filed
Apr 08, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572503
APPLICATION LEVEL TO SHARE LEVEL REPLICATION POLICY TRANSITION FOR FILE SERVER DISASTER RECOVERY SYSTEMS
2y 5m to grant Granted Mar 10, 2026
Patent 12547498
POWER RECOVERY IN A NON-BOOTING INFORMATION HANDLING SYSTEM
2y 5m to grant Granted Feb 10, 2026
Patent 12468609
FAILOVER OF DOMAINS
2y 5m to grant Granted Nov 11, 2025
Patent 12380015
PREDICTING TESTS BASED ON CHANGE-LIST DESCRIPTIONS
2y 5m to grant Granted Aug 05, 2025
Patent 12360874
SYSTEMS AND METHODS FOR GOVERNING CLIENT-SIDE SERVICES
2y 5m to grant Granted Jul 15, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

2-3
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+23.0%)
2y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 124 resolved cases by this examiner. Grant probability derived from career allow rate.
