DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the claims filed 06/12/2023.
Claims 1-20 are pending.
Claim Objections
Claims 11 and 19 are objected to because of the following informalities: Claim 11 recites, in part, “but neither the one or more physical host functions nor the physical host function”; Claim 11 appears to restate itself after “nor”. Claim 19 recites “thereby allowing fewer traffic types to be completed solely by the VDI emulation that without the simultaneous communication”, which contains a grammatical error (“that” appears where “than” may have been intended). Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 10 recites the limitation "the suspension" in line 13. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 4-9 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal et al. Pub. No. US 20190114196 A1 (hereafter Aggarwal) in view of Liu et al. Pub. No. US 20210334124 A1 (hereafter Liu).
With regard to claim 1, Aggarwal teaches a method of operating a data system, the method comprising: maintaining first and second virtualized instantiations in operation for at least a period of time during a migration of a process from a first domain associated with the first virtualized instantiation to a second domain associated with the second virtualized instantiation (¶ [0121] states “the present invention provide a mechanism for performing live migration of a VM from one server to another server. With S-IOV, the work of creating an emulated interface and a process for migration is implemented in VDCM 402”. FIG. 8 shows the emulation is within Host OS 150 and is done to support Guest VM 1 802. Examiner’s Note: the emulated interface is part of the virtualized instantiation. The VM is interpreted to be a process and is also considered part of the virtualized instantiation).
Aggarwal does not explicitly teach a second domain that is associated with a second virtualized instantiation that is maintained during the migration of a process.
However, in an analogous art, Liu teaches a method of operating a data system, the method comprising: maintaining first and second virtualized instantiations in operation for at least a period of time during a migration of a process from a first domain associated with the first virtualized instantiation to a second domain associated with the second virtualized instantiation (¶ [0003] states “A virtual machine live migration can begin by creating a target virtual machine on a target computer and put to the target virtual machine in a pause state”. ¶ [0045] states “data in the ingress traffic received at server computer 104 for source virtual machine 130 is sent to both source virtual machine 130 and target virtual machine 132 when port mirroring rule 134 is put into place. As depicted, port mirroring rule 134 is used during the entire process of the virtual machine live migration”. Examiner’s Note: the target virtual machine is considered part of the second virtualized instantiation. The port mirroring rule that sends traffic to both source and target virtual machines demonstrates that both virtual instantiations exist at the same time).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the simultaneous operation of the source and target virtual machines of Liu with the emulated interface during the virtual machine migration process of Aggarwal. A person having ordinary skill in the art would have been motivated to make this combination for the purpose of “[increasing] the performance of computer system 212 by maintaining the network connectivity between the source virtual machine and the target virtual machine with all other virtual machines in communication with the source virtual machine using a port mirroring rule” (Liu ¶ [0069]). The virtual machines on the host and target servers are considered to be part of the virtual instantiations. Additionally, it would be obvious to one of ordinary skill in the art that the emulated interface of Aggarwal could also be present on the target host and part of the second virtual instantiation. Aggarwal’s computing platform design in FIG. 1 shows a communications link 155, and ¶ [0029] states “there are other endpoint devices coupled to communications link 155 (e.g., PCIe interconnect) that support Scalable IOV capabilities”.
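For illustration only, the port-mirroring behavior Liu describes — delivering ingress traffic to both the source and target virtual machines for the duration of a live migration — can be sketched as follows. All class and variable names here are hypothetical and are not drawn from the cited references.

```python
# Hypothetical sketch of a port-mirroring rule (per Liu's description):
# each ingress packet is delivered to every registered destination queue,
# so both virtualized instantiations see the same traffic during migration.

class PortMirror:
    """Delivers each ingress packet to every registered destination queue."""

    def __init__(self):
        self.destinations = []

    def add_destination(self, queue):
        self.destinations.append(queue)

    def ingress(self, packet):
        # While the mirroring rule is in place, both the source and the
        # target virtual machine receive a copy of the packet.
        for queue in self.destinations:
            queue.append(packet)


source_vm_queue, target_vm_queue = [], []
mirror = PortMirror()
mirror.add_destination(source_vm_queue)   # first virtualized instantiation
mirror.add_destination(target_vm_queue)   # second, maintained concurrently

mirror.ingress("pkt-1")
mirror.ingress("pkt-2")
# Both instantiations now hold identical copies of the ingress traffic.
```

The point of the sketch is only that both queues are populated simultaneously, which is the property the rejection relies on.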
With regard to claim 4, Aggarwal and Liu teach the method of claim 1, wherein the maintaining of the first and the second virtualized instantiations in operation for at least the period of time during the migration of the process comprises the following. Aggarwal also teaches enabling a physical host of the second domain to send or receive protocol transactions to the second virtualized instantiation (Aggarwal ¶ [0059] states “Virtual Device (VDEV) 404 is the abstraction through which a shared physical device (e.g., Scalable IOV device 230) is exposed to software in guest VM 422. VDEVs 404 are exposed to guest VM 422 as virtual PCI Express enumerated devices, with virtual resources such as virtual Requester-ID, virtual configuration space registers, virtual memory BARs, virtual MSI-X table, etc.”. Examiner’s Note: interacting with configuration space registers and the MSI-X table involves sending or receiving PCIe transaction layer protocol packets. Although Aggarwal is describing the host scalable IOV device, it would be obvious that a destination device could also be a scalable IOV device and therefore the host of the second domain could send or receive protocol transactions with the virtualized instantiation), while simultaneously enabling a physical host of the first domain to perform one or more functions or process on the first virtualized instantiation (¶ [0122] states “VDCM 402 detects when a VM is being live migrated and switches the data path (e.g., fast path 806) of VDev 1 804 in guest VM 1 802 to an emulated interface (e.g., slow path SW emulation 808)”; ¶ [0126] states “The emulated interface takes over the data path and the processing of the data traffic remains uninterrupted while the VM is migrated”. Examiner’s Note: the switching of traffic from the fast path to the slow path is an exemplary function in the first virtualized instantiation. This function persists while the VM is migrated).
Aggarwal does not teach the aforementioned actions happening simultaneously at the first and second domains.
However, Liu teaches enabling a physical host of the second domain to send or receive protocol transactions to the second virtualized instantiation, while simultaneously enabling a physical host of the first domain to perform one or more functions or process on the first virtualized instantiation (¶ [0106] states “For example, step 904 can be performed prior to step 902 or prior to step 900. In another example, step 902 can be performed prior to step 900. In other illustrative examples, these three steps can be performed simultaneously”. ¶ [0097] states “The process begins by creating an egress tunnel rule in a target computer to each of the other virtual machines for direct egress connectivity to remote computers for remote virtual machines communicating with a source virtual machine (step 900). In step 900, these tunnel rules are set up in a network interface card for the target computer. In this example, the network interface card is a smart NIC”. ¶ [0098] states “In step 902, data packets in the ingress traffic that show up at the virtual network interface cards are delivered to both the source virtual machine and the target virtual machine during the entire virtual machine live migration”. ¶ [0099] states “The process creates a target virtual machine in a target computer (step 904)”. Examiner’s Note: the simultaneous execution of these steps shows that operations at the source and target is possible).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the individual operations of sending or receiving protocol transactions between host and virtualized instantiation and performing actions involving functions or processes of Aggarwal with the simultaneous operation of source and target hosts of Liu. A person having ordinary skill in the art would have been motivated to make this combination for the purpose of supporting the goals of live migration. Liu ¶ [0003] states “A virtual machine live migration can be performed in which the virtual machine being migrated still operates during the migration from the current server computer to another server computer. Live migrations are desirable to reduce downtime and unavailability of virtual machines to process requests”. By executing these steps simultaneously, the live migration process is completed faster, which can lead to shorter downtimes.
With regard to claim 5, Aggarwal and Liu teach the method of claim 4, wherein the enabling of the physical host of the second domain to send or receive protocol transactions to the second virtualized instantiation comprises the following. Aggarwal also teaches enabling reading or writing registers (¶ [0059] states “Virtual Device (VDEV) 404 is the abstraction through which a shared physical device (e.g., Scalable IOV device 230) is exposed to software in guest VM 422. VDEVs 404 are exposed to guest VM 422 as virtual PCI Express enumerated devices, with virtual resources such as virtual Requester-ID, virtual configuration space registers, virtual memory BARs, virtual MSI-X table, etc.”. ¶ [0064] states “VDEV registers that are read frequently and have no read side-effects, but require VDCM intercept and emulation on write accesses, may be mapped as read-only to backing memory pages provided by VCDM”).
With regard to claim 6, Aggarwal and Liu teach the method of claim 4, wherein the enabling of the physical host of the second domain to send or receive protocol transactions to the second virtualized instantiation comprises the following. Aggarwal also teaches enabling transmission or receipt of interrupts (¶ [0066] states “VDEVs 404 expose a virtual MSI or virtual MSI-X capability that is emulated by VDCM 402. Guest driver 424 requests VDEV interrupt resources normally through guest VM 422 interfaces, and the guest VM may service this by programming one or more Interrupt Messages through the virtual MSI or virtual MSI-X capability of VDEV 404”).
With regard to claim 7, Aggarwal and Liu teach the method of claim 4, wherein the enabling of the physical host of the second domain to send or receive protocol transactions to the second virtualized instantiation comprises the following. Aggarwal also teaches the physical host of the second domain accepting one or more upstream message types associated with one or more detectable side-effects of the second virtualized instantiation (¶ [0064] states VDEVs “supports high performance read accesses to these registers along with virtualizing their write side-effects by intercepting on guest write accesses”. Examiner’s Note: any scalable IOV device could be considered part of the second domain and second virtualized instantiation if different from the first device and connected to the first device by the communication channel 150 in FIG. 1. A write operation is considered to be one of the upstream message types. The write operation causes side effects of the virtualized instantiation).
With regard to claim 8, Aggarwal and Liu teach the method of claim 1. Aggarwal also teaches further comprising providing indication of a completed transaction (¶ [0132] states “Once the live migration is complete, if the new server running guest VM 802 has an S-IOV capable device, then the administrator can “hot-add” (e.g., connect) VDev 1 804 back into the guest VM. This action can be trapped by the VDCM 402 triggering an interrupt as a result of the “hot-add””. Examiner’s Note: the migration of the virtual machine is considered to be the transaction. The interrupt generated in response to connecting the VDev is considered the indication).
With regard to claim 9, Aggarwal and Liu teach the method of claim 8. Aggarwal also teaches wherein the providing of the indication of the completed transaction comprises causing the second virtualized instantiation to elect to pass through one or more transactions to a physical endpoint device to be completed thereat (¶ [0121] states “When live migration is complete, VDCM 402 switches the data path back to fast path 806”. ¶ [0042] states “Fast path accesses typically include data path operations involving work submission and work completion processing. With this organization, slow path accesses to the virtual device from a guest VM are trapped and emulated by device-specific host software while fast path accesses are directly mapped on to the physical device”. Examiner’s Note: the fast path actions are considered to be passed-through to the endpoint. After the VM migration is complete, the indication occurs and then the fast-path is activated for passed-through operations).
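For illustration only, the data-path switching Aggarwal describes — a VDCM-like manager routing guest traffic through an emulated slow path during migration, then restoring the direct fast path and signaling completion via a “hot-add” interrupt — can be sketched as follows. All names are hypothetical and not drawn from the reference.

```python
# Hypothetical sketch of VDCM-style path switching (per Aggarwal's
# description): the slow emulated path carries traffic during migration;
# completion restores the fast (pass-through) path and raises an indication.

class VdcmSketch:
    def __init__(self):
        self.path = "fast"          # direct pass-through to the physical device
        self.indications = []

    def begin_migration(self):
        # Traffic processing continues uninterrupted on the emulated path.
        self.path = "slow-emulated"

    def complete_migration(self):
        # The "hot-add" of the virtual device back into the guest serves as
        # the indication of the completed transaction; fast path resumes.
        self.indications.append("hot-add interrupt")
        self.path = "fast"


vdcm = VdcmSketch()
vdcm.begin_migration()
path_during = vdcm.path   # emulated path while the VM is migrated
vdcm.complete_migration()
path_after = vdcm.path    # transactions pass through to the endpoint again
```

This mirrors the mapping applied above: the slow-path interval corresponds to the maintained emulation, and the post-completion fast path corresponds to transactions passed through to the physical endpoint.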
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Aggarwal and Liu, and further in view of Tsirkin et al. Pub. No. US 20210124602 A1 (hereafter Tsirkin).
With regard to claim 2, Aggarwal and Liu teach the method of claim 1, wherein the maintaining of the first and the second virtualized instantiations in operation for at least the period of time during the migration of the process comprises.
Aggarwal and Liu do not explicitly teach an entity of the second domain detecting or accessing functions, entities, or processes of the first domain.
However, in an analogous art, Tsirkin teaches enabling at least one of detection or access of one or more functions, entities, or processes of the first domain by at least one entity of the second domain (¶ [0025] states “the destination hypervisor 123 may include migration module 124”. ¶ [0027] states “migration module 124 may begin by determining a total amount of memory (e.g., memory 117) associated with VM 111 on source host 110. In some implementations, migration module 124 may make this determination by sending a request to source hypervisor 113 for the amount of memory associated with source VM”. Examiner’s Note: the migration module within the destination host and hypervisor is an entity of the second domain. The source host is part of the first domain).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the migration module accessing and detecting functions, entities, or processes of the first domain of Tsirkin with the maintaining of the first and second virtual machines during a migration of a virtual machine of Aggarwal and Liu. The result is that the “Migration module 124 can manage the destination-side tasks for migration of a source VM 111 to destination VM 121 in order to ensure that sufficient resources (e.g., a sufficient amount of memory) are present on destination host 120 to support the migration” (Tsirkin ¶ [0025]). A person having ordinary skill in the art would have been motivated to make this combination “to support successful migration of the VM's memory from the source to the destination” (¶ [0015]) and to “significantly improve the efficiency of post-copy live migrations, particularly with respect to VMs that use large amounts of memory, reducing the likelihood of VM failure after migration has been initiated” (¶ [0016]). Additional benefits of doing so can be found in ¶ [0016].
With regard to claim 3, Aggarwal, Liu, and Tsirkin teach the method of claim 2, wherein the enabling of the at least one of detection or access of the one or more functions, entities, or processes of the first domain by the at least one entity of the second domain comprises the following. Tsirkin also teaches enabling a destination physical host of the second domain to perform at least one of (i) detection or (ii) access of the first virtualized instantiation of the first domain (¶ [0025] states “the destination hypervisor 123 may include migration module 124”. ¶ [0027] states “migration module 124 may begin by determining a total amount of memory (e.g., memory 117) associated with VM 111 on source host 110. In some implementations, migration module 124 may make this determination by sending a request to source hypervisor 113 for the amount of memory associated with source VM”. Examiner’s Note: the migration module is within the destination hypervisor, which is within the destination physical host. The destination physical host is part of the second domain. The source host is part of the first domain).
Claims 10 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Freking et al. Pub. No. US 20140351484 A1 (hereafter Freking) in view of Goggin et al. Pub. No. US 20120042034 A1 (hereafter Goggin).
With regard to claim 10, Freking teaches a computer readable apparatus having a non-transitory storage medium, the non-transitory storage medium comprising at least one computer program having a plurality of instructions, the plurality of instructions configured to, when executed on a digital processor apparatus, cause a computerized apparatus to (¶ [0041] states “These computer program instructions may also be stored in a computer-readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture, including instructions that implement the function/act specified by the flowchart and/or block diagram block or blocks”):
block I/O traffic during a virtual machine (VM) migration process (¶ [0092] states “In one embodiment, the PCIe broadcast component 825.sub.1 on the sub-switch module 820.sub.1 is configured to store incoming network traffic for the first port in a buffer, while the network traffic for the first port is suspended”),
the suspension comprising suspension of the I/O traffic from at least one of (i) a physical endpoint device function to one or more physical host functions, or (ii) the one or more physical host functions to the physical endpoint device function (¶ [0092] states “In one embodiment, the PCIe broadcast component 825.sub.1 on the sub-switch module 820.sub.1 is configured to store incoming network traffic for the first port in a buffer, while the network traffic for the first port is suspended”. ¶ [0088] states “For example, the sub-switch 820.sub.1 could receive PCIe traffic from the host device 810 via the PCIe link 850”. ¶ [0088] adds “The sub-switch 820.sub.2 could receive the converted traffic and could re-convert the traffic to conform with the PCIe protocol, before transmitting the traffic to the peripheral device 840 over the PCIe link 860”. ¶ [0095] states “the PCIe communications between the peripheral devices connected to the distributed switch may permit these devices to communicate with multiple host devices at the same time. Thus, using these techniques, a particular one of the peripheral devices 840 could communicate with the host device 810 and other host devices, using a single PCIe communication link”. FIG. 8 shows there are multiple devices in Peripheral Devices 840. Examiner’s Note: the peripheral devices are interpreted to include the physical endpoint device functions. The traffic suspension mentioned in ¶ [0092] is the traffic between the host and peripheral devices described in ¶ [0088]. ¶ [0095] explains how it is possible for there to be multiple devices connected to the same distributed switch. FIG. 8 also shows how there can be multiple peripheral devices connected to the distributed switch. Therefore, it would be obvious the traffic suspension could include traffic from multiple hosts or peripheral devices).
Freking does not teach that the suspension of I/O traffic occurs during a virtual machine migration process.
However, in an analogous reference, Goggin teaches block I/O traffic during a virtual machine (VM) migration process (¶ [0062] states “Referring again to FIG. 4, live migration of a virtual machine, such as VM 304, from the source host machine 300 to the destination host machine 404-2 involves temporarily suspending all operation of the VM 304 in the course of the migration”. ¶ [0058] states “PF API modules 394-1 and 394-2 enable/disable either the first request queue 380-1 or the first response queue 380-2”. Examiner’s Note: Goggin also describes a type of blocking in the form of disabling request or response queues. The disabling of queues occurs during the VM migration (¶ [0056] – [0058])).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the blocking of request and response queues during a virtual machine migration process of Goggin with the suspension of PCIe traffic between one or more hosts and peripheral devices of Freking. A person having ordinary skill in the art would have been motivated to make this combination “to avoid loss of SCSI IO requests and/or SCSI IO responses that may be in flight during the time interval when the VM operations are temporarily suspended” (Goggin ¶ [0062]). One of ordinary skill in the art would recognize the data integrity benefits of preserving all I/O traffic between the host and peripheral device.
With regard to claim 12, Freking and Goggin teach the computer readable apparatus of claim 10, wherein the plurality of instructions are further configured to, when executed on the digital processor apparatus, cause the computerized apparatus to do the following. Freking also teaches the unblock of the I/O traffic such that the I/O traffic is allowed to flow to at least one of (i) the physical endpoint device function, or (ii) the one or more physical host functions (¶ [0092] states “Upon resuming network traffic for the first port, the PCIe broadcast component 825.sub.1 could transmit the stored network traffic from the buffer to corresponding downstream switch modules in the plurality of downstream switch modules”. Examiner’s Note: it is understood that the stored traffic would eventually reach its destination peripheral device or host after being routed through one or more switches in the distributed switch).
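For illustration only, the suspend-and-buffer behavior Freking attributes to the PCIe broadcast component — traffic arriving while a port is suspended is held in a buffer, then forwarded in arrival order on resume — can be sketched as follows. All names are hypothetical and not drawn from the reference.

```python
# Hypothetical sketch of Freking-style port suspension: while a port is
# suspended its incoming traffic is buffered; resuming the port flushes the
# buffer downstream in order, so no traffic is lost by the blockage.

class PortBuffer:
    def __init__(self, downstream):
        self.suspended = False
        self.buffer = []
        self.downstream = downstream   # list standing in for a sub-switch path

    def receive(self, pkt):
        if self.suspended:
            self.buffer.append(pkt)    # hold traffic while the port is blocked
        else:
            self.downstream.append(pkt)

    def resume(self):
        self.suspended = False
        while self.buffer:             # flush buffered traffic in arrival order
            self.downstream.append(self.buffer.pop(0))


delivered = []
port = PortBuffer(delivered)
port.receive("a")
port.suspended = True                  # blockage during the migration window
port.receive("b")
port.receive("c")
port.resume()                          # unblock: buffered traffic flows onward
```

The sketch shows the property the claim 12 mapping relies on: after the unblock, the previously stored traffic reaches its destination.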
With regard to claim 13, Freking and Goggin teach the computer readable apparatus of claim 10, wherein the one or more physical host functions comprise the following. Freking also teaches an originating host and a destination host (FIG. 3 displays that origin compute element 100-1 and destination compute element 100-2 are connected via connection mesh fabric 155),
the originating host and the destination host having different identifications associated therewith (¶ [0056] states “FIG. 4 depicts a block diagram of an example data structure and values for a routing table 145-1A in an origin compute element … the routing table 145-1A comprises example entries or rows 401 and 402, each of which includes a virtual PTP bridge identifier field 409, a south chip identifier field 410, a secondary bus identifier field 411, a subordinate bus identifier field 412, and an MMIO bus address range field 414”. Examiner’s Note: the routing table in FIG. 4 describes routing within I/O element 132-1. There are multiple identifiers, such as bus IDs and virtual bridge IDs. By examining FIG. 3, the origin compute element 100-1 and destination compute element 100-2 also have a PTP component and buses connecting them to the connection mesh fabric. It would be obvious that the origin and destination compute elements could also be identified by PTP component or bus IDs and that those IDs could be different);
Freking does not teach the draining of non-address-routed I/O transactions.
However, Goggin teaches and the blockage of the I/O traffic comprises use of traffic routing block/pause function that causes a flow control condition, wherein the flow control condition allows non-address-routed I/O transactions to drain from an I/O subsystem of the I/O fabric and complete before all remaining traffic is blocked using the traffic routing block/pause function, thereby ensuring that only address-routed traffic is present in the I/O subsystem when traffic is blocked using the traffic routing block/pause function (¶ [0065] states “The process 500 calls module 394-1 which disables the VF 316 from attempting to de-queue any SCSI IO requests from the first SCSI IO request queue 380-1. Thus, operation of the first SCSI IO request queue 380-1 is suspended … If there are SCSI IO requests that the VF 316 previously de-queued from the first request queue 380-1 and sent to storage 308, but for which it has not yet received completions, then the decision module 398 awaits completions for all such outstanding SCSI IO storage requests”. Examiner’s Note: read completions are known in the art to not be address routed. When the decision module is awaiting completions of outstanding IO storage requests, it is letting all the non-address-routed I/O transactions drain from the I/O system).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the draining of non-address-routed traffic of Goggin with the originating and destination compute elements having different identifications of Freking. Additionally, combining Goggin’s drainage of non-address-routed traffic with the traffic suspension of Freking would create a system in which the only traffic present in Freking’s switch would be address-routed traffic. A person having ordinary skill in the art would have been motivated to make this combination because “Graceful migration of the VM 304 involves suspending its operation in a manner such that operations of the VM 304 are not disrupted despite the temporary suspension of its operation. In accordance with some embodiments, when a VM 304 is relocated in the course of a direct storage access via a VF 316, precautions are taken to avoid loss of SCSI IO requests and/or SCSI IO responses that may be in flight during the time interval when the VM operations are temporarily suspended” (Goggin ¶ [0062]). In other words, the benefit is to avoid the loss of traffic between the origination, destination, peripheral device, or any combination of devices in a network. One of ordinary skill in the art would recognize the benefits of data integrity in a network.
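For illustration only, the drain step Goggin describes — disabling the request queue so no new requests are de-queued, then awaiting completions for every previously issued request before the remaining traffic is blocked — can be sketched as follows. All names are hypothetical and not drawn from the reference.

```python
# Hypothetical sketch of Goggin-style quiescing: the queue is disabled,
# in-flight requests are allowed to complete (drain), and only once nothing
# is outstanding is it safe to block the remaining traffic.

class RequestQueueSketch:
    def __init__(self):
        self.enabled = True
        self.outstanding = set()       # requests issued but not yet completed

    def issue(self, req_id):
        if self.enabled:
            self.outstanding.add(req_id)

    def complete(self, req_id):
        self.outstanding.discard(req_id)

    def drain_then_block(self, completions):
        self.enabled = False           # stop de-queuing new requests
        for req_id in completions:     # await the in-flight completions
            self.complete(req_id)
        return len(self.outstanding) == 0   # safe to block only when drained


q = RequestQueueSketch()
q.issue("io-1")
q.issue("io-2")
drained = q.drain_then_block(["io-1", "io-2"])
q.issue("io-3")                        # ignored: the queue is already disabled
```

In the mapping above, the awaited completions stand in for the non-address-routed transactions that must drain before the block/pause function is applied.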
With regard to claim 14, Freking and Goggin teach the computer readable apparatus of claim 10, wherein the one or more physical host functions comprise the following. Freking also teaches an originating host and a destination host (FIG. 3 displays that origin compute element 100-1 and destination compute element 100-2 are connected via connection mesh fabric 155),
and the blockage of the I/O traffic comprises use of traffic routing block/pause function without regard for draining non-address-routed traffic before the blockage (¶ [0092] states “In one embodiment, the PCIe broadcast component 825.sub.1 on the sub-switch module 820.sub.1 is configured to store incoming network traffic for the first port in a buffer, while the network traffic for the first port is suspended”. Examiner’s Note: Freking does not differentiate between address-routed and non-address-routed traffic when suspending incoming network traffic).
Freking does not explicitly teach that the originating host and the destination host have a common identification associated therewith.
However, Freking implicitly teaches the originating host and the destination host having a common identification associated therewith (¶ [0083] states “FIG. 7 depicts a block diagram of an example data structure for a routing table 145-2B in the destination compute element, after the movement of the ownership of a device from the origin compute element to the destination compute element, according to an embodiment of the invention”. FIG. 3 shows the system’s network topology. Examiner’s Note: when examining the topology of South Chip D 143-2, there are two devices 160-3 and 160-4. The CHIP ID 710 is the same or common between devices 160-3 and 160-4 in rows 702 and 703).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to have the originating host and the destination host share a common identification. Additionally, depending on the configuration, the originating and destination hosts could share other common identifications besides chip ID, such as a bus ID or some other ID. A person having ordinary skill in the art would have been motivated to have the originating host and destination host share a common identification to account for network topologies where multiple hosts share a broad identifier. A broad identifier could be a switch, gateway, or any other network device that can have multiple inputs.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Freking and Goggin, and further in view of Huilgol et al. Pub. No. US 20220091872 A1 (hereafter Huilgol).
With regard to claim 11, Freking and Goggin teach the computer readable apparatus of claim 10. Freking also teaches wherein the blockage of the I/O traffic occurs inside an I/O traffic switch routing fabric, but neither the one or more physical host functions nor the physical host function (¶ [0092] states “In one embodiment, the PCIe broadcast component 825.sub.1 on the sub-switch module 820.sub.1 is configured to store incoming network traffic for the first port in a buffer, while the network traffic for the first port is suspended”. Examiner’s Note: FIG. 8 clearly shows the PCIe broadcast component that stores network traffic is within the distributed switch that is between the host and peripheral devices).
Freking and Goggin do not teach that physical endpoint device functions are only aware of pausing when a flow control mechanism notifies the physical endpoint device function to stop transmitting additional I/O traffic.
However, in an analogous art, Huilgol teaches such that an awareness by the physical endpoint device functions and one or more physical host functions of the blockage only occurs based on an invocation of one or more flow control mechanisms of the I/O traffic switch routing fabric to prevent the physical endpoint device function from transmission of additional I/O traffic (¶ [0117] states “A vendor enhanced NVMe driver can indicate to the SR-IOV NVMe card via a vendor specific mechanism that the driver supports a vendor specific pause/resume mechanism … At block 1706 the VF may send a vendor specific asynchronous notification to pause posting new requests and may set the ‘Processing Paused’ controller status register”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the pausing of an NVMe peripheral device of Huilgol with the blockage of I/O traffic in the PCIe switch of Freking for the purpose of blocking existing traffic and preventing the transmission of additional traffic. A person having ordinary skill in the art would have been motivated to make this combination because “doing so helps to ensure that no traffic flowing through the first switch module is lost” (Freking ¶ [0092]). One of ordinary skill in the art would recognize the data integrity benefits of pausing devices during the migration of VMs or VFs.
Claim(s) 15-17 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Iwamatsu et al. Pub. No. US 20120254866 A1 (hereafter Iwamatsu) in view of Freking.
With regard to claim 15, Iwamatsu teaches a system, comprising: a source host, the source host having one or more first virtualized domains or endpoints associated therewith; and a destination host, the destination host having one or more second virtualized domains or endpoints associated therewith (¶ [0010] states “A non-operating virtual machine is created on the destination computer, and the memory contents of the virtual machine running on the source computer are transferred to the destination computer and then copied to the created virtual machine”. FIG. 1 clearly shows the source computer 1,1-1 and destination computer 1,1-2 with a plurality of virtual machines within the computer. Examiner’s Note: the virtual machine is considered part of the virtualized domain or endpoint);
and operations are completed within the one or more data fabrics by a virtual device instantiation (VDI) emulation implementation instead of the physical fabric device (¶ [0039] states “The emulation controller 36b of the connection controller 36 selects a PCI device 2 to be emulated from among the PCI devices 2 connected to the downstream ports 32 and causes the device emulator 35 to emulate the selected PCI device 2. This allows the computer 1-1 to access the PCI device 2, as well as allows the computer 1-2 to communicate with the emulated PCI device 2”).
Iwamatsu does not explicitly teach a data fabric, pause points within a data fabric, the blocking of traffic, and the fabric device containing an emulation.
However, in an analogous reference, Freking teaches one or more data fabrics (¶ [0021] states “the mesh fabric that interconnects the different upstream and downstream ports in the distributed switch”);
the one or more pause points configured to implement a protocol whereby all types of traffic are blocked from arrival at a physical fabric device (¶ [0092] states “In one embodiment, the PCIe broadcast component 825.sub.1 on the sub-switch module 820.sub.1 is configured to store incoming network traffic for the first port in a buffer, while the network traffic for the first port is suspended”),
and operations are completed within the one or more data fabrics by a virtual device instantiation (VDI) emulation implementation instead of the physical fabric device (¶ [0021] states “the mesh fabric that interconnects the different upstream and downstream ports in the distributed switch”. The distributed switch is illustrated in FIG. 8).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the pause points that block traffic and the data fabric of Freking with the source and destination hosts with virtual machines and device emulation of Iwamatsu. Additionally, the data fabric and distributed switch mentioned in Freking share a similar domain. In other words, switches can be part of the data fabric. In Iwamatsu FIG. 1, the device emulator is within the PCI switch, so the device emulator can be said to be within the data fabric. A person having ordinary skill in the art would have been motivated to make this combination for the purpose of “[reducing] traffic between the apparatuses, thereby reducing the suspension time of the virtual machine as well as the time for live migration” (Iwamatsu ¶ [0041]). Additionally, Freking gives a benefit of buffering traffic at the switch, stating “ensure that no traffic flowing through the first switch module is lost, while the link for the host device is being reset” (Freking ¶ [0092]). In the combination of Freking and Iwamatsu, the buffering of traffic would instead occur during the migration of a virtual machine. The buffering would provide data integrity benefits that one of ordinary skill in the art would recognize.
With regard to claim 16, Iwamatsu and Freking teach the system of claim 15. Freking also teaches wherein the one or more data fabrics comprise a PCIe-compliant switch fabric (¶ [0017] states “embodiments could receive a request to reset a PCIe link for a first host device, connected to a plurality of downstream PCIe devices the distributed switch. Embodiments could determine one or more ports of the first switch module that correspond to the downstream PCIe devices and could suspend traffic for the first host device on these ports”. Examiner’s Note: the quotation provides evidence that the distributed switch is PCIe-compliant).
With regard to claim 17, Iwamatsu and Freking teach the system of claim 15. Freking also teaches wherein the source host comprises a server apparatus (¶ [0025] states “the compute element 100 and/or the service processor 133 are multi-user mainframe computer systems, single-user computer systems, or server computer systems”. Additionally, the compute element is represented in FIG. 3 in origin compute element 100-1).
With regard to claim 19, Iwamatsu and Freking teach the system of claim 15. Freking also teaches wherein the one or more pause points enable at least the one or more first virtualized domains or endpoints and the one or more second virtualized domains or endpoints to simultaneously communicate with a physical fabric device, thereby allowing fewer traffic types to be completed solely by the VDI emulation that without the simultaneous communication (¶ [0095] states “In some embodiments, the PCIe communications between the peripheral devices connected to the distributed switch may permit these devices to communicate with multiple host devices at the same time. Thus, using these techniques, a particular one of the peripheral devices 840 could communicate with the host device 810 and other host devices, using a single PCIe communication link. In such an embodiment, although the PCIe broadcast component 825.sub.1 could suspend and buffer the PCIe traffic for the host device 810, other PCIe traffic for the peripheral devices 840 could continue to flow through the sub-switch 820.sub.1, from other devices than the host device 810”. Examiner’s Notes: the pause points are part of the distributed switch. The distributed switch supports the simultaneous communication between hosts and peripheral devices. Iwamatsu has also already demonstrated that hosts can have a plurality of virtual machines which can be interpreted to be part of the virtualized domain or endpoint).
Claim(s) 18 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Iwamatsu and Freking in view of Aggarwal.
With regard to claim 18, Iwamatsu and Freking teach the system of claim 15.
Iwamatsu and Freking do not explicitly teach that supported operations of the VDI emulation can proceed at a source or destination host during a VM migration.
However, in an analogous art, Aggarwal teaches wherein supported types of operations of the VDI emulation are enabled to proceed at a respective one of the source host or destination host during a migration of a VM from one of the source host or the destination host to another one of the source host or the destination host (¶ [0121] states “Embodiments of the present invention provide a mechanism for performing live migration of a VM from one server to another server. With S-IOV, the work of creating an emulated interface and a process for migration is implemented in VDCM 402”. ¶ [0130] states “In this case, fast path 806 is made inactive and slow path SW emulation 808 is activated. Hence the data traffic going to guest VM 1 802 will continue to flow even though VDev 1 804 has been ejected”. Examiner’s Note: the slow path using emulation is considered part of the VDI emulation. The slow path supports all operations of the fast path. During the migration of a virtual machine, the slow path handles operations for the virtual machine. Both the source host and destination host could have supported operations given to them via the slow path if both the source and destination hosts are scalable IOV devices. In other words, the slow path (which is the emulation) can handle traffic between the host and guest VM, wherein the slow emulation is within the host. Additionally, the host can receive external traffic, as shown in FIG. 1 and explained in ¶ [0028]. It would be obvious that some external traffic could be routed to the guest VM using the slow path emulation).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the slow path emulation of Aggarwal, which lets operations proceed at a source or destination host during a virtual machine migration, with the system of Iwamatsu and Freking composed of pause points within a data fabric and source and destination hosts with virtual machines. A person having ordinary skill in the art would have been motivated to make this combination because “the data traffic going to guest VM 1 802 will continue to flow even though VDev 1 804 has been ejected” (Aggarwal ¶ [0130]). One of ordinary skill in the art would recognize the benefits of doing so, such as maintaining connections to other hardware internal or external to the host during a virtual machine migration.
With regard to claim 20, Iwamatsu and Freking teach the system of claim 15. Freking teaches the blocking of traffic involving a physical fabric device (¶ [0088] states “The system 800 includes a host device 810 and a plurality of peripheral devices 840, connected via a distributed switch 805”. ¶ [0092] states “In one embodiment, the PCIe broadcast component 825.sub.1 on the sub-switch module 820.sub.1 is configured to store incoming network traffic for the first port in a buffer, while the network traffic for the first port is suspended”. Examiner’s Note: the host device or the peripheral devices could be interpreted as the physical fabric device).
Iwamatsu and Freking do not explicitly teach that the physical fabric device is unaware of any blocking of the traffic.
However, in an analogous reference, Aggarwal teaches wherein the physical fabric device is unaware of any blocking of the traffic (¶ [0123] states “guest VM 1 802 does not need to be aware of the emulated interface (slow path SW emulation 808) for the failover path”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to combine the unawareness of traffic rerouting of Aggarwal with the blocking of traffic to the physical fabric device of Freking. In this case, the traffic rerouting would be substituted with the blocking of traffic. A person having ordinary skill in the art would have been motivated to make this substitution for the purpose of not having to take any additional action when there is a blockage of network traffic (Aggarwal ¶ [0123] states “there is no additional action needed by the user in guest VM 1 802 to create a team interface between the emulated interface 808 and the fast path interface 806”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER L YUAN whose telephone number is (571)272-5737. The examiner can normally be reached Mon-Fri 7:30am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PETER LI YUAN/Examiner, Art Unit 2197
/KENNETH TANG/Primary Examiner, Art Unit 2197