DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to claims filed 12/8/2025.
Claims 1-20 are pending.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-10 and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kutch et al. US 20180181420 A1 (Kutch) in view of Bshara et al. US 9836421 B1 (Bshara), and further in view of Kumar et al. US 20120167082 A1 (Kumar).
Bshara was cited in the IDS filed on 7/11/2023.
Regarding claim 1, Kutch teaches A virtualized system comprising: (Fig. 1, [5]: “FIG. 1 is a high-level functional block diagram illustrating an example virtualization system having a SR-IOV architecture that may serve as a setting in which aspects of the embodiments may be implemented.”) a processor (Fig. 2 element 202; [19]: “Example host machine 200 includes at least one processor 202 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both, processor cores, compute nodes, etc.)…”) configured to provide a function for a virtualization environment; (Claim 1: “computing hardware, including a processor coupled to a data store and an input/output (I/O) device interfaced with the processor, the computing hardware to: execute a hypervisor; instantiate the VM to execute under supervision of the hypervisor.”; [13]: “According to one aspect of the embodiments, a SR-IOV architecture is used to bypass a hypervisor's involvement in data movement by providing independent memory space, interrupts, and direct-memory access (DMA) streams for virtual machines. SR-IOV architecture allows a device to support multiple Virtual Functions (VFs). SR-IOV facilitates two function types: physical functions (PFs), and virtual functions (VFs). PFs are PCI-express (PCIe) functions that include the SR-IOV extended capability, which may be used to configure and manage the SR-IOV functionality. VFs are “lightweight” PCIe functions that include resources for facilitating data movement but have a carefully minimized set of configuration resources.”) a host operating system (OS) configured to run on the virtualization environment; (Fig. 1: 112; [14]: “As depicted, guest operating systems (OS's) 102A and 102B are executed along-side management OS 112 in distinct virtual machines, over hypervisor 120, which in turn is executed on a computing architecture described in greater detail below with reference to FIGS. 2-3.”; Examiner notes [9] of instant specification: “According to an example embodiment…The host operating system runs on a host virtual machine of the virtualization environment. Further, examiner notes from [38] of instant specification that host OS, like management OS from Kutch, functions as a guest OS with expanded privileges: “A scheme or manner of controlling the at least one hardware input/output device 500 may be changed (or may vary) according to whether a subject or an agent controlling the at least one hardware input/output device 500 is the host operating system 200 or the at least one guest operating system 300.”) at least one guest operating system configured to run on at least one virtual machine of the virtualization environment; (Fig. 1: 106A, 106B, 112, 120; Fig 3, [28]: “Each VM 320A, 320B includes a guest operating system 322A, 322B, and application programs 324A, 324B.”) a hypervisor configured to implement the virtualization environment (Fig. 1: 120; [2]: “In virtual computing, a hypervisor provides the virtualization of a computer system…”) using the function of the processor, (Fig. 2: 202, [18]: “FIG. 2 is a block diagram illustrating a host machine platform, which may implement all, or portions of, the virtualization system of FIG. 
1 according to some embodiments.”; [19]: “Example host machine 200 includes at least one processor 202 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both, processor cores, compute nodes, etc.)…”; [13]: “According to one aspect of the embodiments, a SR-IOV architecture is used to bypass a hypervisor's involvement in data movement by providing independent memory space, interrupts, and direct-memory access (DMA) streams for virtual machines. SR-IOV architecture allows a device to support multiple Virtual Functions (VFs). SR-IOV facilitates two function types: physical functions (PFs), and virtual functions (VFs). PFs are PCI-express (PCIe) functions that include the SR-IOV extended capability, which may be used to configure and manage the SR-IOV functionality. VFs are “lightweight” PCIe functions that include resources for facilitating data movement but have a carefully minimized set of configuration resources.”) and configured to generate and control the at least one virtual machine of the virtualization environment; (Fig. 1: 120; [14]: “As depicted, guest operating systems (OS's) 102A and 102B are executed along-side management OS 112 in distinct virtual machines, over hypervisor 120, which in turn is executed on a computing architecture described in greater detail below with reference to FIGS. 2-3.”; [37]: “VM and guest OS 402 is executed over hypervisor 420…”; Abstract: “Management of access to input/output devices by a virtual machine (VM) includes executing a hypervisor, and instantiating the VM to execute under supervision of the hypervisor.”) at least one hardware input/output (I/O) device controlled by the host operating system and the at least one guest operating system; (Fig. 1; [12]: “Various aspects of the embodiments are directed to managing access to input/output (I/O) devices by virtual machines (VMs). Input/output devices include network interface devices (NIDs), I/O ports (e.g., universal serial bus (USB) controllers, peripherals (e.g., keyboard, touchscreen, mouse, game controller), video adapters, or any other device that interfaces with a peripheral component interconnect (PCI) bus or equivalent, for example.”; Examiner notes, VMs of Kutch have guest or management operating system(s) which are managing the I/O devices, see at least [15-16] along with previous citations.) and at least one hardware interface device configured to support direct communication between the at least one guest operating system and the at least one hardware input/output device […] to enable the at least one guest operating system to control the at least one hardware input/output device. (Fig. 1, Fig. 5, [43]: “VF (virtual function) driver engine 502 configures the I/O device, and communicates data to and from the device.”; [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150.”; Fig. 4, Fig. 7, [53]: “the VM to include an I/O device-agnostic (IODA) driver that is configured to interface with the I/O device via a first path according to a set of operational parameters specific to the I/O device, and to interface with the hypervisor via a second path; execute the IODA driver to configure the operational parameters to comport with an operational protocol of the I/O device based on device-description information provided to the IODA driver via the second path”, Examiner notes, in Fig. 
4, the IODA DRV 404 supports direct communication from the VM 402 to the I/O device 450 (first path) without using the second path through the hypervisor)
Kutch does not specifically teach use of the VIRTIO specification, or that the at least one hardware interface device comprises a physical hardware device for communicating with the at least one hardware I/O device.
However, in analogous art Bshara teaches a virtualized system including a host device 102/402 executing a host operating system 104, where the host device supports multiple virtual machines 404, including first virtual machine 404A executing a first guest operating system 406. Importantly, Bshara discloses an Input/Output (I/O) adapter device 108/308/410 that is communicatively coupled to the host device 402 via a hardware host interface (PCIe, etc.), and which functions as the claimed hardware interface device for guest access to hardware input/output resources (Abstract; col. 3, lines 1-12 and 35-65; Figs 1-5). In addition, Bshara teaches that direct communication is performed in accordance with the Virtualization Input/Output (VirtIO) specification, disclosing that the I/O adapter device can emulate the functionality of para-virtualization (PV) backend drivers that are VirtIO compliant (col. 2, lines 5-9; col. 4, lines 14-33; col. 20, lines 14-20; Fig. 4).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the I/O adapter device and VIRTIO specification of Bshara with the systems and methods of Kutch, resulting in a system in which the communication with the I/O device in Kutch is performed through the I/O adapter device and in accordance with the VIRTIO specification as in Bshara. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, for the purpose of "allow[ing] the guest operating systems to execute standard PV drivers to communicate with backend drivers that are implemented in the driver domain" (see Bshara, col. 2, line 61 – col. 3, line 34), as well as having the I/O adapter device allow for "bypassing the hypervisor or driver domain" (col. 5, line 20). The predicted result would be "improved performance in terms of latency and bandwidth for transfer of data" (col. 5, lines 26-27).
Kumar provides further support by teaching the use of memory-mapped input/output (MMIO) (device 102 includes a driver application 118, a driver application 120, a device card 122, Memory-mapped Input/Output (MMIO) registers and GTT memory 124, a graphics aperture 126, a display interface 128, and a display interface 130; see at least ¶ [0014]).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the MMIO of Kumar with the systems and methods of Kutch and Bshara resulting in a system in which Kutch and Bshara utilize MMIO as in Kumar. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success for the purpose of: “achiev[ing] native I/O performance in a VM” (see Kumar ¶ [0023]).
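For purposes of illustration only, the following is a minimal sketch (in C) of the general memory-mapped I/O (MMIO) concept relied upon above, in which device registers are accessed through ordinary loads and stores at fixed addresses. The register layout and names are hypothetical and are not taken from Kumar, Kutch, or Bshara.

/* Minimal illustrative sketch of MMIO-style register access.
 * The register block is simulated in ordinary memory so the example is
 * self-contained; a real device would be mapped into the address space
 * by the operating system or hypervisor. All names are hypothetical. */
#include <stdint.h>
#include <stdio.h>

struct hypothetical_regs {
    volatile uint32_t control;   /* write 1 to start an operation */
    volatile uint32_t status;    /* bit 0 indicates completion */
    volatile uint32_t dma_addr;  /* guest-physical address of a data buffer */
};

static struct hypothetical_regs fake_device;   /* stands in for a mapped BAR */

int main(void) {
    struct hypothetical_regs *regs = &fake_device;

    regs->dma_addr = 0x1000u;    /* program the (hypothetical) buffer address */
    regs->control  = 1u;         /* MMIO write that would kick the device */
    fake_device.status = 1u;     /* a real device would set this bit itself */

    while ((regs->status & 1u) == 0u) {
        /* poll the status register via MMIO reads */
    }
    printf("simulated MMIO operation complete\n");
    return 0;
}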
Regarding claim 2, Kutch further teaches wherein the at least one hardware interface device operates independently from the at least one guest operating system and the hypervisor. (Fig. 1: 156A, 156B, [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150. In some examples, secure data exchange is facilitated respectively between the VF drivers 106A and 106B on the VM-side, with the VFs 156A and 156B on the NID side using a PCI-express interconnect 140…” Examiner notes, each VF is formed outside of the Guest OS and hypervisor; [13]: “a SR-IOV architecture is used to bypass a hypervisor's involvement in data movement by providing independent memory space, interrupts, and direct-memory access (DMA) streams for virtual machines.”)
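For purposes of illustration only, the following sketch (in C) shows how SR-IOV virtual functions of the kind relied upon above can be instantiated from the host side through the documented Linux sysfs attribute sriov_numvfs, underscoring that the VFs exist as device-provided resources rather than constructs of the guest OS or hypervisor. The PCI address used is hypothetical, and this mechanism is not drawn from Kutch.

/* Illustrative sketch only: enabling SR-IOV virtual functions (VFs) from the
 * host via the standard Linux sysfs attribute 'sriov_numvfs'. The PCI address
 * below is hypothetical; the attribute itself is part of the documented Linux
 * PCI sysfs interface. */
#include <stdio.h>

int main(void) {
    /* Hypothetical PF address; a real system would use the address of the
     * SR-IOV-capable device (e.g., the NID in Kutch). */
    const char *path = "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs";

    FILE *f = fopen(path, "w");
    if (f == NULL) {
        perror("fopen");           /* expected to fail without an SR-IOV device */
        return 1;
    }
    fprintf(f, "2\n");             /* request two VFs from the physical function */
    fclose(f);
    return 0;
}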
Regarding claim 3, Kutch further teaches wherein the at least one guest operating system (Fig. 1: 102A, 102B; Fig. 3: 322A, 322B) comprises: a guest virtualization driver (Fig. 1: 106A, 106B, Fig. 5: 404, 502; [34]: “Accordingly, in some examples, these device drivers of the guest OS's running in the VMs include I/O-device-agnostic (IODA) drivers. IODA drivers may be VF drivers for use with a SR-IOV architecture according to some embodiments.”) for performing an operation of the virtualization environment, (Fig. 3, [29]: “Each guest operating system (OS) 322A, 322B provides a kernel that operates via the resources provided by VMM 318 to control the hardware devices, manage memory access for programs in memory, coordinate tasks and facilitate multi-tasking, organize data to be stored, assign memory space and other resources, load program binary code into memory, initiate execution of the corresponding application program which then interacts with the user and with hardware devices, and detect and respond to various defined interrupts.”; Examiner notes [56] of instant specification: “For example, the guest virtualization driver vGDRV may control the processor PRC, the memory device MEM, the intellectual properties IP (e.g., the hardware input/output device 500 in FIG. 1) included in the system hardware 710 via the virtual processor vPRC, the virtual memory device vMEM and the virtual intellectual property vIP included in the virtual hardware.”.) and wherein the at least one hardware input/output device is controlled through the guest virtualization driver and the at least one hardware interface device. (Fig. 1: 106A, 106B (guest virtualization drivers), 156A, 156B (interface device), 150 (hardware I/O device); [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150. In some examples, secure data exchange is facilitated respectively between the VF drivers 106A and 106B on the VM-side, with the VFs 156A and 156B on the NID side using a PCI-express interconnect 140 and directed-I/O virtualization technology VT-d 130, with the latter providing such features as I/O device assignment, direct memory addressing (DMA) remapping, interrupt remapping, and various reliability features, such as error reporting.”)
Regarding claim 4, Kutch further teaches wherein the guest virtualization driver provides an interrupt directly to the at least one hardware interface device without being trapped by the hypervisor to control the at least one hardware input/output device. (Fig. 1, [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150. In some examples, secure data exchange is facilitated respectively between the VF drivers 106A and 106B on the VM-side, with the VFs 156A and 156B on the NID side using a PCI-express interconnect 140 and directed-I/O virtualization technology VT-d 130, with the latter providing such features as I/O device assignment, direct memory addressing (DMA) remapping, interrupt remapping, and various reliability features, such as error reporting.”; [13]: “According to one aspect of the embodiments, a SR-IOV architecture is used to bypass a hypervisor's involvement in data movement by providing independent memory space, interrupts, and direct-memory access (DMA) streams for virtual machines.”)
Regarding claim 5, Kutch further teaches further comprising: a shared memory (Fig. 2: 204, 206, 216 [20]: “The storage device 216 includes a machine-readable medium 222 on which is stored one or more sets of data structures and instructions 224 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 224 may also reside, completely or at least partially, within the main memory 204, static memory 206, and/or within the processor 202 during execution thereof by the host machine 200, with the main memory 204, static memory 206, and the processor 202 also constituting machine-readable media.”) shared by the host operating system and the at least one guest operating system, (Fig. 2: 204, 206, 216, 224; Fig. 3: 318, 322A, 322B; [18]: “FIG. 2 is a block diagram illustrating a host machine platform, which may implement all, or portions of, the virtualization system of FIG. 1 according to some embodiments.”; [23]: “FIG. 3 is a diagram illustrating an example computing hardware and software architecture of a computer system such as the one depicted in FIG. 2, in which various interfaces between hardware components and software components are shown.”; [23-24]: “Memory management device 304 provides mappings between virtual memory used by processes being executed, and the physical memory…Interconnect 306 includes a backplane such as memory, data, and control lines, as well as the interface with input/output devices... Memory 308…and non-volatile memory 309 such as flash memory…are interfaced with memory management device 304 and interconnect 306 via memory controller 310.”; Examiner notes [84] of instant specification: “For convenience of illustration, FIG. 7 illustrates that the shared memory 320 is included in the guest operating system 300a. However, example embodiments are not limited thereto. For example, the shared memory 320 may be disposed or located separately from the host operating system 200a and the guest operating system 300a, and may be shared by the host operating system 200a and the guest operating system 300a.”) and wherein the at least one guest operating system is configured to exchange data with the at least one hardware input/output device through the guest virtualization driver and the shared memory. (Fig. 1, [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150. In some examples, secure data exchange is facilitated respectively between the VF drivers 106A and 106B on the VM-side, with the VFs 156A and 156B on the NID side using a PCI-express interconnect 140 and directed-I/O virtualization technology VT-d 130, with the latter providing such features as I/O device assignment, direct memory addressing (DMA) remapping, interrupt remapping, and various reliability features, such as error reporting.”; [13]: “According to one aspect of the embodiments, a SR-IOV architecture is used to bypass a hypervisor's involvement in data movement by providing independent memory space, interrupts, and direct-memory access (DMA) streams for virtual machines.”)
Regarding claim 6, Kutch further teaches wherein the at least one hardware interface device (Fig. 1: 156A, 156B; [13]: "VFs are "lightweight" PCIe functions that include resources for facilitating data movement but have a carefully minimized set of configuration resources." Examiner notes, the VF is described as a type of interface with the NID (hardware I/O device) 150.) comprises: at least one hardware emulator ([15]: "VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B…"; Examiner notes, the virtual NIDs emulate the hardware of the physical NID 150, consistent with how [61] of the instant specification describes emulation: "The device emulator D_EML may allocate the physical components to the guest virtual machine 740, and may establish and manage the virtual hardware by emulating the allocated physical components.")
Kutch does not teach included in the hypervisor.
However, in analogous art, Bshara teaches included in the hypervisor. (Fig. 2: Examiner notes, device emulation taking place in hypervisor 210B; Col. 7, lines 7-11: “The hypervisor 210B or a virtual machine manager (VMM) can emulate a single device as multiple virtual devices in a virtualized environment. The virtual machines may be any suitable emulation of a computer system that may be managed by the hypervisor 210B.”)
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to move the virtual NIDs in Kutch (Kutch, Fig. 1: 104A, 104B) from the virtual machines to the hypervisor, resulting in hardware device emulation taking place in the hypervisor as is done in Bshara (Bshara, Fig. 2: 210B). A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, to standardize the interface expected by the virtual function drivers, thus avoiding potential conflicts. In [19], Bshara, while discussing a strength of implementing backend driver functionality in the I/O adapter device, also notes the conformity such an approach requires: "By implementing the backend driver functionality in the I/O adapter device, new I/O adapter devices can be rapidly introduced in the market, as long as the I/O adapter device conforms to the interface expected by the frontend driver." Thus, emulating the hardware device in the hypervisor allows the virtual system to ensure conformity between the emulated hardware I/O device and the virtual function driver in the Guest OS.
Regarding claim 7, Kutch further teaches wherein the at least one guest operating system comprises: a guest virtualization driver for performing an operation of the virtualization environment, (Fig. 1: 106A, 106B; [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150.”) and wherein the at least one hardware input/output device is controlled through the guest virtualization driver and the at least one hardware emulator. (Fig. 1, Fig. 5, Fig. 7, [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150.”)
Regarding claim 8, Kutch further teaches wherein a control of the at least one hardware input/output device by the guest virtualization driver (Fig. 1, Fig. 5, Fig. 7, [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150.”)
Kutch in view of Bshara does not teach is trapped by the hypervisor and is provided to the at least one hardware interface device.
However, in analogous art, Kumar teaches is trapped by the hypervisor (Fig. 2: 206, Fig. 4, [30]: “In some embodiments, a VM requests access to a device's resource (for example, the device's MMIO resource) at 202… If it is not a frequently accessed resource at 204, the request is trapped and emulated by a VMM device model at 206.”; Examiner notes, a virtual machine monitor (VMM) is a hypervisor.) and is provided to the at least one hardware interface device. (Fig. 2, Fig. 4, [30]: “Then the VMM device model ensures isolation and scheduling at 208. At 210 the VMM device model accesses device resources 212.”)
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to expand the trapping of interrupts in Kutch to the trapping of other control instructions sent by a guest VM to an emulated hardware I/O device. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, because full device emulation, which requires device accesses by VMs to be trapped by the hypervisor, allows the physical device to be independent of its emulated version and makes virtual machine migration simpler. Kumar states in at least [22]: "The virtual device emulated by the model can be independent of the physical device present in the system. This is a big advantage of this technique, and it makes VM migration simpler."
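For purposes of illustration only, the following is a simplified sketch (in C) of the trap-and-emulate flow discussed above, in which a hypervisor intercepts a guest access to an emulated device register and services it against a software device model. The structures and function names are hypothetical and do not reproduce the VMM device model of Kumar.

/* Simplified, hypothetical trap-and-emulate handler: when a guest access to
 * device MMIO space faults, the hypervisor decodes the access and replays it
 * against an in-memory device model instead of real hardware. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_REGS 4u

struct emulated_device {
    uint32_t regs[NUM_REGS];     /* software copy of the device registers */
};

/* Invoked by the (hypothetical) VMM on a trapped guest MMIO access. */
static uint32_t handle_mmio_trap(struct emulated_device *dev, uint64_t offset,
                                 bool is_write, uint32_t value) {
    uint32_t index = (uint32_t)(offset / sizeof(uint32_t)) % NUM_REGS;
    if (is_write) {
        dev->regs[index] = value;    /* emulate the register write */
        return 0u;
    }
    return dev->regs[index];         /* emulate the register read */
}

int main(void) {
    struct emulated_device dev = {{0}};

    handle_mmio_trap(&dev, 0x4u, true, 0xABCDu);                 /* trapped write */
    uint32_t readback = handle_mmio_trap(&dev, 0x4u, false, 0u); /* trapped read */
    printf("emulated register read back: 0x%X\n", readback);
    return 0;
}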
Regarding claim 9, Kutch further teaches wherein: the at least one guest operating system includes a first guest operating system configured to run on a first virtual machine of the virtualization environment, (Fig. 1: 106A; Fig 3, [28]: “Each VM 320A, 320B includes a guest operating system 322A, 322B, and application programs 324A, 324B.”) the at least one hardware input/output device includes a first hardware input/output device, (Fig. 1: 150; [12]: “Input/output devices include network interface devices (NIDs), I/O ports (e.g., universal serial bus (USB) controllers, peripherals (e.g., keyboard, touchscreen, mouse, game controller), video adapters, or any other device that interfaces with a peripheral component interconnect (PCI) bus or equivalent, for example.”) and the at least one hardware interface device includes a first hardware interface device (Fig. 1: 156A, 156B, 166; [13]: “SR-IOV facilitates two function types: physical functions (PFs), and virtual functions (VFs). PFs are PCI-express (PCIe) functions that include the SR-IOV extended capability, which may be used to configure and manage the SR-IOV functionality. VFs are “lightweight” PCIe functions that include resources for facilitating data movement but have a carefully minimized set of configuration resources.”) configured to support communication between the first guest operating system and the first hardware input/output device. (Fig. 1, Fig. 5, [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150. In some examples, secure data exchange is facilitated respectively between the VF drivers 106A and 106B on the VM-side, with the VFs 156A and 156B on the NID side using a PCI-express interconnect 140 and directed-I/O virtualization technology VT-d 130…”; [43]: “VF driver engine 502 configures the I/O device, and communicates data to and from the device.”)
Regarding claim 10, Kutch further teaches wherein: the at least one guest operating system further includes a second guest operating system configured to run on a second virtual machine of the virtualization environment (Fig. 1: 106B; Fig 3, [28]: “Each VM 320A, 320B includes a guest operating system 322A, 322B, and application programs 324A, 324B.”) and configured to operate independently from the first guest operating system, (Fig. 1, Fig. 3, [14]: “As depicted, guest operating systems (OS's) 102A and 102B are executed along-side management OS 112 in distinct virtual machines, over hypervisor 120…”) and the at least one hardware interface device further includes a second hardware interface device (Fig. 1: 156A, 156B, 166; [13]: “SR-IOV facilitates two function types: physical functions (PFs), and virtual functions (VFs). PFs are PCI-express (PCIe) functions that include the SR-IOV extended capability, which may be used to configure and manage the SR-IOV functionality. VFs are “lightweight” PCIe functions that include resources for facilitating data movement but have a carefully minimized set of configuration resources.”) configured to support communication between the second guest operating system and the first hardware input/output device. (Fig. 1, Fig. 5, [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150. In some examples, secure data exchange is facilitated respectively between the VF drivers 106A and 106B on the VM-side, with the VFs 156A and 156B on the NID side using a PCI-express interconnect 140 and directed-I/O virtualization technology VT-d 130…”; [43]: “VF driver engine 502 configures the I/O device, and communicates data to and from the device.”)
Regarding claim 12, Kutch teaches wherein the host operating system comprises: a host virtualization driver for performing an operation of the virtualization environment; (Kutch: Fig. 1: 112, 116; [16]: “Supervisory VM and management OS 112 performs configuration of I/O controller 114, including establishing, and managing, partitioning of multiple I/O paths, and assignment (and, in some embodiments, dynamic re-assignment) of I/O paths to respective VMs. It includes engine 114, which is configured to interact with physical functions PF 166 of NID 150 via PF driver 116 of hypervisor 120 and PCI-e interconnect 140.”; [13]: “…a SR-IOV architecture is used to bypass a hypervisor's involvement in data movement by providing independent memory space, interrupts, and direct-memory access (DMA) streams for virtual machines.”)
Kutch does not teach and a device driver configured to directly control the at least one hardware input/output device.
However, in analogous art, Bshara teaches and a device driver (Bshara: Fig. 1: 106, Fig. 2: 216) configured to directly control the at least one hardware input/output device, (Bshara: Col. 7, lines 42-48: “The driver domain 210A may also include a device driver 216 for communicating with the I/O adapter device 108. The device driver 216 may be specific to the I/O adapter device 108. In some instances, the device driver 216 may utilize a different protocol to communicate with the I/O adapter device 108 than the communication protocol used by the PV frontend and backend drivers.”).
Further, Kutch in view of Bshara teaches and wherein the at least one hardware input/output device is controlled through the host virtualization driver (Kutch, Fig. 1, 112, 116; [16]: “Supervisory VM and management OS 112 performs configuration of I/O controller 114, including establishing, and managing, partitioning of multiple I/O paths, and assignment (and, in some embodiments, dynamic re-assignment) of I/O paths to respective VMs. It includes engine 114, which is configured to interact with physical functions PF 166 of NID 150 via PF driver 116 of hypervisor 120 and PCI-e interconnect 140.”) and the device driver (Bshara, Fig. 1, 106, Fig. 2, 216) without using the at least one hardware interface device. (Kutch, Fig. 1, 112, 116, 166: Examiner notes, the PF driver may control hardware I/O device without utilizing virtual functions; Bshara, Fig. 1, Fig. 2, Col. 7, lines 42-48: “The driver domain 210A may also include a device driver 216 for communicating with the I/O adapter device 108. The device driver 216 may be specific to the I/O adapter device 108. In some instances, the device driver 216 may utilize a different protocol to communicate with the I/O adapter device 108 than the communication protocol used by the PV frontend and backend drivers.”; Examiner notes, device driver operates in both virtual (Fig. 2) and non-virtual (Fig. 1) implementations, thus device driver may operate hardware I/O device directly without virtual I/O implemented Virtual Functions (hardware device interfaces).)
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the management OS and PF driver in Kutch with the device driver in Bshara, allowing Kutch to use vendor-specific drivers having specific or proprietary code. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, to improve the compatibility of the virtual system with existing proprietary devices. Bshara states in Col. 6, lines 17-29: "Hence, in typical systems, different device drivers may be needed for different operating systems running on the host device 102 for different devices. For example, for Linux® operating system running on the host device 102, a Linux® NIC driver may be needed to communicate with the I/O adapter device 108, for Windows® operating system running on the host device 102, a Windows® NIC driver may be needed to communicate with the I/O adapter device 108, and so on. Similarly, if the I/O adapter device 108 is an audio card, different audio drivers may be needed for Linux® operating system, Windows® operating system, etc., that can be executing on the host device 102."
Regarding claim 13, Kutch further teaches further comprising: a memory device (Fig. 2: 204, 206, 216) into which the host operating system, the at least one guest operating system and the hypervisor are loaded. (Fig. 1: 102A, 102B, 112, 120; Fig. 2: 204, 206, 216, 224; Fig. 3: 304, 308, 318, 322A, 322B; [20]: “The storage device 216 includes a machine-readable medium 222 on which is stored one or more sets of data structures and instructions 224 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 224 may also reside, completely or at least partially, within the main memory 204, static memory 206, and/or within the processor 202 during execution thereof by the host machine 200, with the main memory 204, static memory 206, and the processor 202 also constituting machine-readable media.”; [23]: “Memory management device 304 provides mappings between virtual memory used by processes being executed, and the physical memory…”)
Regarding claim 14, Kutch further teaches further comprising: a storage device (Fig. 2: 216) configured to store the host operating system, the at least one guest operating system and the hypervisor. (Fig. 1: 102A, 102B, 112, 120; Fig. 2: 216, 224; [20]: “The storage device 216 includes a machine-readable medium 222 on which is stored one or more sets of data structures and instructions 224 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein.; Fig. 3, [7]: “FIG. 3 is a diagram illustrating an example hardware and software architecture of a computer system such as the one depicted in FIG. 2, in which various interfaces between hardware components and software components are shown.”)
Regarding claim 15, Kutch further teaches wherein, in response to the virtualized system being booted, the host operating system, the at least one guest operating system and the hypervisor that are stored in the storage device are loaded into the memory device. (Fig 1: 102A, 102B, 112, 120; Fig. 2: 204, 206, 216, 224; Fig. 3, Claim 16: “At least one machine-readable medium containing instructions for managing access to input/output devices by a virtual machine (VM), the instructions, when executed by computing hardware, cause the computing hardware to perform operations including: executing a hypervisor; instantiating the VM to execute under supervision of the hypervisor…”; [26]: “On the software side, a pre-operating system (pre-OS) environment 316, which is executed at initial system start-up and is responsible for initiating the boot-up of the operating system…Pre-OS environment 316 is responsible for initiating the launching of the operating system or virtual machine manager, but also provides an execution environment for embedded applications according to certain aspects of the invention.”)
Regarding claim 16, Kutch further teaches wherein the at least one hardware input/output device includes at least one of a memory device, a camera, a graphics processing unit (GPU), a neural processing unit (NPU), a peripheral component interconnect express (PCIe) device and a universal flash storage (UFS) device. ([12]: “Input/output devices include network interface devices (NIDs), I/O ports (e.g., universal serial bus (USB) controllers, peripherals (e.g., keyboard, touchscreen, mouse, game controller), video adapters, or any other device that interfaces with a peripheral component interconnect (PCI) bus or equivalent; [24]: “Interconnect 306 includes…the interface with input/output devices, e.g., PCI-e, USB, etc…I/O devices, including video and audio adapters, non-volatile storage, external peripheral links such as USB, personal-area networking (e.g., Bluetooth), etc., camera/microphone data capture devices, fingerprint readers and other biometric sensors, as well as network interface devices such as those communicating via Wi-Fi or LTE-family interfaces, are collectively represented as I/O devices...”)
Regarding claim 17, Kutch does not teach wherein the virtualized system is configured to operate based on a virtual I/O device (VIRTIO) specification.
However, in analogous art, Bshara teaches wherein the virtualized system is configured to operate based on a virtual I/O device (VIRTIO) specification. (Fig. 4, Col. 2, lines 5-9: “FIG. 4 illustrates a system comprising a host device configured to communicate with an I/O adapter device utilizing SR-IOV and Virtualization Input/Output (VirtIO) implementation, according to some embodiments of the disclosed technology.”)
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to implement the paravirtualization environment in Kutch with the VirtIO standard in Bshara to standardize the drivers used for each guest operating system. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, to achieve the performance benefits of paravirtualization without needing specific drivers for each OS. Bshara states in Col. 6, lines 38-49: "In some instances, running an operating system in a virtualized environment can provide the option of having a standard driver for each guest operating system executing inside a virtual machine running on the host device, e.g., using virtIO. VirtIO can provide a virtualization standard where the device driver for the guest operating system knows it is running in a virtual environment, and cooperates with the driver domain. This can provide most of the performance benefits of para-virtualization. Hence, PV drivers utilizing virtIO can overcome the problem of having device specific drivers for each operating system, as discussed with reference to FIG. 2."
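For purposes of illustration only, the following sketch (in C) shows the split-virtqueue descriptor layout defined in the publicly available VirtIO specification, to clarify the kind of standardized front-end/back-end interface referred to above. The buffer addresses and chaining shown are a hypothetical example and are not taken from Bshara.

/* Illustrative VirtIO split-virtqueue descriptors (layout per the public
 * VirtIO specification); the example chain below is hypothetical. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define VIRTQ_DESC_F_NEXT  1u   /* descriptor chains to the 'next' index */
#define VIRTQ_DESC_F_WRITE 2u   /* buffer is write-only for the device */

struct virtq_desc {
    uint64_t addr;    /* guest-physical address of the buffer */
    uint32_t len;     /* buffer length in bytes */
    uint16_t flags;   /* VIRTQ_DESC_F_* flags */
    uint16_t next;    /* index of the next descriptor when chained */
};

int main(void) {
    struct virtq_desc ring[2];

    /* Descriptor 0: device-readable request header, chained to descriptor 1. */
    ring[0] = (struct virtq_desc){ .addr = 0x10000u, .len = 16u,
                                   .flags = VIRTQ_DESC_F_NEXT, .next = 1u };

    /* Descriptor 1: device-writable buffer into which the device places data. */
    ring[1] = (struct virtq_desc){ .addr = 0x20000u, .len = 512u,
                                   .flags = VIRTQ_DESC_F_WRITE, .next = 0u };

    printf("chain head: addr=0x%" PRIx64 " len=%u\n", ring[0].addr, ring[0].len);
    return 0;
}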
Regarding claim 18, Kutch teaches A method of operating a virtualized system, (Fig. 7; Claim 13: “A method for managing access to input/output devices by a virtual machine (VM), the method being executed by computing hardware, and comprising: executing a hypervisor; instantiating the VM to execute under supervision of the hypervisor,”) the method comprising: generating a virtualization environment, on which a host operating system (OS), (Fig. 1: Examiner notes, Supervisory VM and MGMT OS (112); [14]: “As depicted, guest operating systems (OS's) 102A and 102B are executed along-side management OS 112 in distinct virtual machines, over hypervisor 120, which in turn is executed on a computing architecture described in greater detail below with reference to FIGS. 2-3.”; Examiner notes, that even when host OS, guest OS(s), and hypervisor are being executed to generate environment, host OS still runs on the virtual environment; [8] of instant specification states: “According to an example embodiment, in a method of operating a virtualized system, a virtualization environment is generated by executing a host operating system (OS), at least one guest operating system and a hypervisor using a processor…The host operating system runs on the virtualization environment.”) at least one guest operating system and a hypervisor are executed (Fig. 1: Examiner notes, Hypervisor (120), VM and Guest OS (102A, 102B).) using a processor, (Fig. 2: 202; [18]: “FIG. 2 is a block diagram illustrating a host machine platform, which may implement all, or portions of, the virtualization system of FIG. 1 according to some embodiments.”; [19]: “Example host machine 200 includes at least one processor 202 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both, processor cores, compute nodes, etc.)…”) the processor configured to provide a function for the virtualization environment, (Fig. 2: 202, [18]: “FIG. 2 is a block diagram illustrating a host machine platform, which may implement all, or portions of, the virtualization system of FIG. 1 according to some embodiments.”; [19]: “Example host machine 200 includes at least one processor 202 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both, processor cores, compute nodes, etc.)…”; [13]: “According to one aspect of the embodiments, a SR-IOV architecture is used to bypass a hypervisor's involvement in data movement by providing independent memory space, interrupts, and direct-memory access (DMA) streams for virtual machines. SR-IOV architecture allows a device to support multiple Virtual Functions (VFs). SR-IOV facilitates two function types: physical functions (PFs), and virtual functions (VFs). PFs are PCI-express (PCIe) functions that include the SR-IOV extended capability, which may be used to configure and manage the SR-IOV functionality. VFs are “lightweight” PCIe functions that include resources for facilitating data movement but have a carefully minimized set of configuration resources.”) the host operating system configured to run on the virtualization environment, the at least one guest operating system configured to run on at least one virtual machine of the virtualization environment, the hypervisor configured to implement the virtualization environment using the function of the processor (Fig. 1: Examiner notes, Supervisory VM and management OS (112), Guest VM and guest OS (102A, 102B), Hypervisor (120); Fig. 2, [18]: “FIG. 
2 is a block diagram illustrating a host machine platform, which may implement all, or portions of, the virtualization system of FIG. 1 according to some embodiments.”; [19]: “Example host machine 200 includes at least one processor 202 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both, processor cores, compute nodes, etc.)…”) and configured to generate and control the at least one virtual machine of the virtualization environment; (Fig. 1, 120; [14]: “As depicted, guest operating systems (OS's) 102A and 102B are executed along-side management OS 112 in distinct virtual machines, over hypervisor 120, which in turn is executed on a computing architecture described in greater detail below with reference to FIGS. 2-3.”; [37]: “VM and guest OS 402 is executed over hypervisor 420…”; Abstract: “Management of access to input/output devices by a virtual machine (VM) includes executing a hypervisor, and instantiating the VM to execute under supervision of the hypervisor.”) and when at least one hardware input/output (I/O) device ([12]: “Input/output devices include network interface devices (NIDs), I/O ports (e.g., universal serial bus (USB) controllers, peripherals (e.g., keyboard, touchscreen, mouse, game controller), video adapters, or any other device that interfaces with a peripheral component interconnect (PCI) bus or equivalent, for example.”) is to be controlled by the at least one guest operating system, controlling the at least one hardware input/output device using at least one hardware interface device, (Fig. 1; [12]: “Various aspects of the embodiments are directed to managing access to input/output (I/O) devices by virtual machines (VMs). Examiner notes, VMs of Kutch have guest or management operating system(s) which are managing the I/O devices; [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150.”) the at least one hardware input/output device being controlled by the host operating system ([16]: “Supervisory VM and management OS (host operating system) 112 performs configuration of I/O controller 114, including establishing, and managing, partitioning of multiple I/O paths, and assignment (and, in some embodiments, dynamic re-assignment) of I/O paths to respective VMs. It includes engine 114, which is configured to interact with physical functions PF 166 of NID 150 via PF driver 116 of hypervisor 120 and PCI-e interconnect 140”; Examiner notes, explanation of what a PF is found in at least [13]: “PFs are PCI-express (PCIe) functions that include the SR-IOV extended capability, which may be used to configure and manage the SR-IOV functionality.”) and the at least one guest operating system, ([15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150. 
In some examples, secure data exchange is facilitated respectively between the VF drivers 106A and 106B on the VM-side, with the VFs 156A and 156B on the NID side using a PCI-express interconnect 140 and directed-I/O virtualization technology VT-d 130, with the latter providing such features as I/O device assignment, direct memory addressing (DMA) remapping, interrupt remapping, and various reliability features, such as error reporting.”) the at least one hardware interface device configured to support direct communication between the at least one guest operating system and the at least one hardware input/output device (Fig. 1, Fig. 5, [43]: “VF (virtual function) driver engine 502 configures the I/O device, and communicates data to and from the device.”; [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150.”; Fig. 4, Fig. 7, [53]: “the VM to include an I/O device-agnostic (IODA) driver that is configured to interface with the I/O device via a first path according to a set of operational parameters specific to the I/O device, and to interface with the hypervisor via a second path; execute the IODA driver to configure the operational parameters to comport with an operational protocol of the I/O device based on device-description information provided to the IODA driver via the second path”, Examiner notes, in Fig. 4, the IODA DRV 404 supports direct communication from the VM 402 to the I/O device 450 (first path) without using the second path through the hypervisor)
Kutch does not specifically teach use of the VIRTIO specification, or that the at least one hardware interface device comprises a physical hardware device for communicating with the at least one hardware I/O device.
However, in analogous art Bshara teaches a virtualized system including a host device 102/402 executing a host operating system 104, where the host device supports multiple virtual machines 404, including first virtual machine 404A executing a first guest operating system 406. Importantly, Bshara discloses an Input/Output (I/O) adapter device 108/308/410 that is communicatively coupled to the host device 402 via a hardware host interface (PCIe, etc.), and which functions as the claimed hardware interface device for guest access to hardware input/output resources (Abstract; col. 3, lines 1-12 and 35-65; Figs 1-5). In addition, Bshara teaches that direct communication is performed in accordance with the Virtualization Input/Output (VirtIO) specification, disclosing that the I/O adapter device can emulate the functionality of para-virtualization (PV) backend drivers that are VirtIO compliant (col. 2, lines 5-9; col. 4, lines 14-33; col. 20, lines 14-20; Fig. 4).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the I/O adapter device and VIRTIO specification of Bshara with the systems and methods of Kutch, resulting in a system in which the communication with the I/O device in Kutch is performed through the I/O adapter device and in accordance with the VIRTIO specification as in Bshara. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, for the purpose of "allow[ing] the guest operating systems to execute standard PV drivers to communicate with backend drivers that are implemented in the driver domain" (see Bshara, col. 2, line 61 – col. 3, line 34), as well as having the I/O adapter device allow for "bypassing the hypervisor or driver domain" (col. 5, line 20). The predicted result would be "improved performance in terms of latency and bandwidth for transfer of data" (col. 5, lines 26-27).
Kumar provides further support by teaching the use of memory-mapped input/output (MMIO) (device 102 includes a driver application 118, a driver application 120, a device card 122, Memory-mapped Input/Output (MMIO) registers and GTT memory 124, a graphics aperture 126, a display interface 128, and a display interface 130; see at least ¶ [0014]).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the MMIO of Kumar with the systems and methods of Kutch and Bshara resulting in a system in which Kutch and Bshara utilize MMIO as in Kumar. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success for the purpose of: “achiev[ing] native I/O performance in a VM” (see Kumar ¶ [0023]).
Regarding claim 19, Kutch further teaches further comprising: when the at least one hardware input/output device is to be controlled by the host operating system, (Fig. 1: 112, 116, 150, 166; [16]: “Supervisory VM and management OS (host operating system) 112 performs configuration of I/O controller 114, including establishing, and managing, partitioning of multiple I/O paths, and assignment (and, in some embodiments, dynamic re-assignment) of I/O paths to respective VMs. It includes engine 114, which is configured to interact with physical functions PF 166 of NID 150 via PF driver 116 of hypervisor 120 and PCI-e interconnect 140.”; [13]: “PFs are PCI-express (PCIe) functions that include the SR-IOV extended capability, which may be used to configure and manage the SR-IOV functionality.”) controlling the at least one hardware input/output device without using the at least one hardware interface device. (Fig. 1: 112, 116, 150, 166; Examiner notes, PF driver and physical function are separate from virtual function drivers and virtual functions that guest OSes use, and allow management OS control over hardware input/output device without using virtual functions; [16]: “Supervisory VM and management OS (host operating system) 112 performs configuration of I/O controller 114, including establishing, and managing, partitioning of multiple I/O paths, and assignment (and, in some embodiments, dynamic re-assignment) of I/O paths to respective VMs. It includes engine 114, which is configured to interact with physical functions PF 166 of NID 150 via PF driver 116 of hypervisor 120 and PCI-e interconnect 140.”)
Regarding claim 20, Kutch teaches A virtualized system comprising: (Fig. 1, [5]: “FIG. 1 is a high-level functional block diagram illustrating an example virtualization system having a SR-IOV architecture that may serve as a setting in which aspects of the embodiments may be implemented.”) a processor (Fig. 2 element 202; [19]: “Example host machine 200 includes at least one processor 202 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both, processor cores, compute nodes, etc.)…”) configured to provide a function for a virtualization environment; (Claim 1: “computing hardware, including a processor coupled to a data store and an input/output (I/O) device interfaced with the processor, the computing hardware to: execute a hypervisor; instantiate the VM to execute under supervision of the hypervisor.”; [13]: “According to one aspect of the embodiments, a SR-IOV architecture is used to bypass a hypervisor's involvement in data movement by providing independent memory space, interrupts, and direct-memory access (DMA) streams for virtual machines. SR-IOV architecture allows a device to support multiple Virtual Functions (VFs). SR-IOV facilitates two function types: physical functions (PFs), and virtual functions (VFs). PFs are PCI-express (PCIe) functions that include the SR-IOV extended capability, which may be used to configure and manage the SR-IOV functionality. VFs are “lightweight” PCIe functions that include resources for facilitating data movement but have a carefully minimized set of configuration resources.”) a host operating system (OS) configured to run on a host virtual machine of the virtualization environment; (Fig. 1, element 112; [14]: “As depicted, guest operating systems (OS's) 102A and 102B are executed along-side management OS 112 in distinct virtual machines, over hypervisor 120, which in turn is executed on a computing architecture described in greater detail below with reference to FIGS. 2-3.”; Examiner notes [9] of instant specification: “According to an example embodiment… The host operating system runs on a host virtual machine of the virtualization environment. Further, examiner notes from [38] of instant specification that host OS, like management OS from Kutch, functions as a guest OS with expanded privileges: “A scheme or manner of controlling the at least one hardware input/output device 500 may be changed (or may vary) according to whether a subject or an agent controlling the at least one hardware input/output device 500 is the host operating system 200 or the at least one guest operating system 300.”).
a first guest operating system and a second guest operating system (Fig 1, Fig 3, [28]: “Each VM 320A, 320B includes a guest operating system 322A, 322B, and application programs 324A, 324B.”) configured to run independently from each other on a first guest virtual machine and a second guest virtual machine of the virtualization environment, respectively, and configured to run independently from the host operating system, (Fig 1, 102A, 102B, 112; Fig 3, 320A, 320B: Examiner notes, first and second virtual machines each have their own guest OS that runs independently between each other and host OS (MGMT OS 112); [29]: “Each guest operating system (OS) 322A, 322B provides a kernel that operates via the resources provided by VMM 318 to control the hardware devices, manage memory access for programs in memory, coordinate tasks and facilitate multi-tasking, organize data to be stored, assign memory space and other resources, load program binary code into memory, initiate execution of the corresponding application program which then interacts with the user and with hardware devices, and detect and respond to various defined interrupts.” Examiner notes, each Guest OS has its own kernel to run independently.) the first guest virtual machine and the second guest virtual machine being different from each other; (Fig 1, Fig 3: Examiner notes, first and second virtual machines 102A, 102B, 320A, 320B; [14]: “As depicted, guest operating systems (OS's) 102A and 102B are executed along-side management OS 112 in distinct virtual machines…”)
a hypervisor configured to implement the virtualization environment (Fig. 1: 120; [2]: “In virtual computing, a hypervisor provides the virtualization of a computer system…”) using the function of the processor, (Fig. 2, element 202, [18]: “FIG. 2 is a block diagram illustrating a host machine platform, which may implement all, or portions of, the virtualization system of FIG. 1 according to some embodiments.”; [19]: “Example host machine 200 includes at least one processor 202 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both, processor cores, compute nodes, etc.)…”) and configured to generate and control the host virtual machine, the first guest virtual machine and the second guest virtual machine of the virtualization environment; (Fig. 1: Examiner notes, Supervisory VM and management OS (112), Guest VM and guest OS (102A, 102B), Hypervisor (120); [14]: “As depicted, guest operating systems (OS's) 102A and 102B are executed along-side management OS 112 in distinct virtual machines, over hypervisor 120, which in turn is executed on a computing architecture described in greater detail below with reference to FIGS. 2-3.”; Abstract: “Management of access to input/output devices by a virtual machine (VM) includes executing a hypervisor, and instantiating the VM to execute under supervision of the hypervisor.”).
a hardware input/output (I/O) device controlled by the host operating system, ([16]: “Supervisory VM and management OS (host operating system) 112 performs configuration of I/O controller 114, including establishing, and managing, partitioning of multiple I/O paths, and assignment (and, in some embodiments, dynamic re-assignment) of I/O paths to respective VMs. It includes engine 114, which is configured to interact with physical functions PF 166 of NID 150 via PF driver 116 of hypervisor 120 and PCI-e interconnect 140”; Examiner notes, explanation of what a PF is found in at least [13]: “PFs are PCI-express (PCIe) functions that include the SR-IOV extended capability, which may be used to configure and manage the SR-IOV functionality.”) the first guest operating system and the second guest operating system; ([15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150. In some examples, secure data exchange is facilitated respectively between the VF drivers 106A and 106B on the VM-side, with the VFs 156A and 156B on the NID side using a PCI-express interconnect 140 and directed-I/O virtualization technology VT-d 130, with the latter providing such features as I/O device assignment, direct memory addressing (DMA) remapping, interrupt remapping, and various reliability features, such as error reporting.”).
a first hardware interface device configured to support direct communication between the first guest operating system and the hardware input/output device; and a second hardware interface device configured to support direct communication between the second guest operating system and the hardware input/output device (Fig. 1, Fig. 5, [43]: “VF (virtual function) driver engine 502 configures the I/O device, and communicates data to and from the device.”; [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150.”; Fig. 4, Fig. 7, [53]: “the VM to include an I/O device-agnostic (IODA) driver that is configured to interface with the I/O device via a first path according to a set of operational parameters specific to the I/O device, and to interface with the hypervisor via a second path; execute the IODA driver to configure the operational parameters to comport with an operational protocol of the I/O device based on device-description information provided to the IODA driver via the second path”, Examiner notes, in Fig. 4, the IODA DRV 404 supports direct communication from the VM 402 to the I/O device 450 (first path) without using the second path through the hypervisor), the second hardware interface device being different from the first hardware interface device, (Fig. 1: 156A, 156B; [13]: “According to one aspect of the embodiments, a SR-IOV architecture is used to bypass a hypervisor's involvement in data movement by providing independent memory space, interrupts, and direct-memory access (DMA) streams for virtual machines. SR-IOV architecture allows a device to support multiple Virtual Functions (VFs).”; Examiner notes, each Virtual Function operates independently.) wherein the first guest operating system comprises: a first guest virtualization driver for performing an operation of the virtualization environment, wherein the hardware input/output device is controlled through the first guest virtualization driver and the first hardware interface device, (Fig. 1 102A (first VM and Guest OS), 106A (guest virtualization driver), 156A (interface), 150 (hardware I/O device); [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150.”) wherein the second guest operating system comprises: a second guest virtualization driver for performing an operation of the virtualization environment, wherein the hardware input/output device is controlled through the second guest virtualization driver and the second hardware interface device, (Fig. 1 102B (second VM and Guest OS), 106B (guest virtualization driver), 156B (interface), 150 (hardware I/O device); [15]: “VMs with guest OS's 102A and 102B respectively implement virtual NIDs 104A. 104B, which utilize VF drivers 106A and 106B to operate VFs 156A and 156B facilitated by a SR-IOV-enabled NID 150.”) wherein the host operating system comprises: a host virtualization driver for performing an operation of the virtualization environment; (Fig. 1: 112, 116; [16]: “Supervisory VM and management OS 112 performs configuration of I/O controller 114, including establishing, and managing, partitioning of multiple I/O paths, and assignment (and, in some embodiments, dynamic re-assignment) of I/O paths to respective VMs. 
It includes engine 114, which is configured to interact with physical functions PF 166 of NID 150 via PF driver 116 of hypervisor 120 and PCI-e interconnect 140.”).
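For background on the SR-IOV physical function (PF)/virtual function (VF) arrangement relied upon above, the following minimal C sketch (illustrative only, and not drawn from Kutch) shows how a management OS on a Linux host might enable VFs on a PF through the standard sriov_numvfs sysfs attribute; the PCI address 0000:01:00.0 and the VF count of two are hypothetical.

    /* Minimal sketch: a management OS enabling SR-IOV virtual functions (VFs)
     * on a physical function (PF) via the Linux sysfs interface.
     * The PCI address 0000:01:00.0 and the VF count are hypothetical. */
    #include <stdio.h>

    int main(void)
    {
        const char *attr = "/sys/bus/pci/devices/0000:01:00.0/sriov_numvfs";
        FILE *f = fopen(attr, "w");
        if (!f) {
            perror("fopen");          /* PF missing or SR-IOV not supported */
            return 1;
        }
        fprintf(f, "2");              /* request two VFs from the PF */
        fclose(f);
        /* Each created VF appears as its own PCI function and can be assigned
         * to a distinct guest VM, which drives it with its own VF driver,
         * while the PF remains under control of the management OS. */
        return 0;
    }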
Kutch does not teach use of a VIRTIO specification, a device driver configured to directly control the hardware input/output device, and wherein the hardware input/output device is controlled through the host virtualization driver and the device driver without using the first hardware interface device or the second hardware interface device, and wherein the at least one hardware interface device comprises a physical hardware device for communicating with the at least one hardware I/O device.
However, in analogous art, Bshara teaches a virtualized system including a host device 102/402 executing a host operating system 104, where the host device supports multiple virtual machines 404, including a first virtual machine 404A executing a first guest operating system 406. Importantly, Bshara discloses an Input/Output (I/O) adapter device 108/308/410 that is communicatively coupled to the host device 402 via a hardware host interface (PCIe, etc.), and which functions as the claimed hardware interface device for guest access to hardware input/output resources (Abstract; col. 3, lines 1-12 and 35-65; Figs. 1-5). In addition, Bshara teaches that direct communication is performed in accordance with the Virtualization Input/Output (VirtIO) specification, disclosing that the I/O adapter device can emulate the functionality of para-virtualization (PV) backend drivers that are VirtIO-compliant (col. 2, lines 5-9; col. 4, lines 14-33; col. 20, lines 14-20; Fig. 4).
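For background on the VirtIO mechanism referenced above, the following minimal C sketch (illustrative only, and not drawn from Bshara) shows the split-virtqueue descriptor layout defined by the VirtIO specification, which a VirtIO-compliant backend (including one emulated by an I/O adapter device) consumes; the buffer address and length used in main() are hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    /* Descriptor layout of a split virtqueue as defined by the VirtIO
     * specification; a guest's PV frontend driver describes buffers with
     * entries of this form, and a VirtIO-compliant backend consumes them. */
    #define VIRTQ_DESC_F_NEXT   1   /* buffer continues in the 'next' descriptor */
    #define VIRTQ_DESC_F_WRITE  2   /* buffer is write-only for the device */

    struct virtq_desc {
        uint64_t addr;   /* guest-physical address of the buffer */
        uint32_t len;    /* length of the buffer in bytes */
        uint16_t flags;  /* VIRTQ_DESC_F_* flags */
        uint16_t next;   /* index of the chained descriptor, if F_NEXT is set */
    };

    int main(void)
    {
        /* Hypothetical example: describe one 4 KiB device-writable buffer. */
        struct virtq_desc d = { .addr = 0x100000, .len = 4096,
                                .flags = VIRTQ_DESC_F_WRITE, .next = 0 };
        printf("desc: addr=0x%llx len=%u flags=%u\n",
               (unsigned long long)d.addr, d.len, d.flags);
        return 0;
    }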
Moreover, Bshara teaches and a device driver (Fig. 1: 106, Fig. 2: 216) configured to directly control the hardware input/output device, (Col. 7, lines 42-48: “The driver domain 210A may also include a device driver 216 for communicating with the I/O adapter device 108. The device driver 216 may be specific to the I/O adapter device 108. In some instances, the device driver 216 may utilize a different protocol to communicate with the I/O adapter device 108 than the communication protocol used by the PV frontend and backend drivers.”).
Further, Kutch in view of Bshara teaches and wherein the hardware input/output device is controlled through the host virtualization driver (Kutch, Fig. 1, 112, 116; [16]: “Supervisory VM and management OS 112 performs configuration of I/O controller 114, including establishing, and managing, partitioning of multiple I/O paths, and assignment (and, in some embodiments, dynamic re-assignment) of I/O paths to respective VMs. It includes engine 114, which is configured to interact with physical functions PF 166 of NID 150 via PF driver 116 of hypervisor 120 and PCI-e interconnect 140.”) and the device driver (Bshara, Fig. 1, 106, Fig. 2, 216) without using the first hardware interface device or the second hardware interface device. (Kutch, Fig. 1, 112, 116, 166: Examiner notes that the PF driver may control the hardware I/O device without utilizing virtual functions; Bshara, Fig. 1, Fig. 2, Col. 7, lines 42-48: “The driver domain 210A may also include a device driver 216 for communicating with the I/O adapter device 108. The device driver 216 may be specific to the I/O adapter device 108. In some instances, the device driver 216 may utilize a different protocol to communicate with the I/O adapter device 108 than the communication protocol used by the PV frontend and backend drivers.”; Examiner notes that the device driver operates in both virtualized (Fig. 2) and non-virtualized (Fig. 1) implementations; thus, the device driver may operate without the virtual functions (the claimed hardware interface devices) used for virtualized I/O.)
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the I/O adapter device and VIRTIO specification of Bshara with the systems and methods of Kutch, resulting in a system in which the I/O communication of Kutch is performed through the I/O adapter device and in accordance with the VIRTIO specification as in Bshara. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, for the purpose of “allow[ing] the guest operating systems to execute standard PV drivers to communicate with backend drivers that are implemented in the driver domain” (see Bshara, col. 2, line 61 – col. 3, line 34) as well as having the I/O adapter device allow for “bypassing the hypervisor or driver domain” (col. 5, line 20). The predictable result would be “improved performance in terms of latency and bandwidth for transfer of data” (col. 5, lines 26-27).
In addition, it would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the management OS and PF driver in Kutch with the device driver in Bshara, allowing Kutch to use vendor-specific drivers having device-specific or proprietary code. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, to improve the compatibility of the virtual system with existing proprietary devices. Bshara states in Col. 6, lines 17-29: “Hence, in typical systems, different device drivers may be needed for different operating systems running on the host device 102 for different devices. For example, for Linux® operating system running on the host device 102, a Linux® NIC driver may be needed to communicate with the I/O adapter device 108, for Windows® operating system running on the host device 102, a Windows® NIC driver may be needed to communicate with the I/O adapter device 108, and so on. Similarly, if the I/O adapter device 108 is an audio card, different audio drivers may be needed for Linux® operating system, Windows® operating system, etc., that can be executing on the host device 102.”
Kumar provides further support by teaching memory-mapped I/O (MMIO) (device 102 includes a driver application 118, a driver application 120, a device card 122, Memory-mapped Input/Output (MMIO) registers and GTT memory 124, a graphics aperture 126, a display interface 128, and a display interface 130 in at least ¶ [0014]).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to combine the MMIO of Kumar with the systems and methods of Kutch and Bshara resulting in a system in which Kutch and Bshara utilize MMIO as in Kumar. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success for the purpose of: “achiev[ing] native I/O performance in a VM” (see Kumar ¶ [0023]).
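For background on memory-mapped I/O as referenced above, the following minimal C sketch (illustrative only, and not drawn from Kumar) shows a user-space view of MMIO on a Linux host, in which a device’s PCI BAR is mapped into the address space and its registers are accessed with volatile loads and stores; the PCI address, BAR resource file, and register offset 0x10 are hypothetical and device-specific.

    /* Minimal sketch of memory-mapped I/O (MMIO): a device register block
     * exposed through a PCI BAR is mapped into the caller's address space and
     * accessed with volatile loads/stores. Paths and offsets are hypothetical. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        uint32_t status = regs[0x10 / 4];   /* read a 32-bit register at offset 0x10 */
        regs[0x10 / 4] = status | 0x1;      /* write it back with one bit set */

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }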
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kutch et al. US 20180181420 A1 (Kutch) in view of Bshara et al. US 9836421 B1 (Bshara) as applied to claims 6 and 7 above, and further in view of Kumar et al US 20120167082 A1 (Kumar) in view of Hunter et al. US 20160077858 A1 (Hunter).
Regarding claim 11, Kutch further teaches wherein: the at least one hardware input/output device further includes a second hardware input/output device different from the first hardware input/output device, ([24]: “Interconnect 306 includes a backplane such as memory, data, and control lines, as well as the interface with input/output devices, e.g., PCI-e, USB, etc…are interfaced with memory management device 304 and interconnect 306 via memory controller 310. I/O devices, including video and audio adapters, non-volatile storage, external peripheral links such as USB, personal-area networking (e.g., Bluetooth), etc., camera/microphone data capture devices, fingerprint readers and other biometric sensors, as well as network interface devices such as those communicating via Wi-Fi or LTE-family interfaces, are collectively represented as I/O devices and networking 312, which interface with interconnect 306 via corresponding I/O controllers 314.”)
Kutch does not teach and the at least one hardware interface device further includes a second hardware interface device configured to support communication between the first guest operating system and the second hardware input/output device.
However, in analogous art, Hunter teaches and the at least one hardware interface device further includes a second hardware interface device (Fig. 5, Fig. 6, [126]: “In the embodiment shown, the I/O devices 614 in the hardware layer 610 include at least one SR-IOV-compliant device capable of being mapped from one physical function to a plurality of virtual functions.”; [127]: “In the embodiment shown, the interconnect service partition 602 is configured to manage SR-IOV devices by maintaining the physical functions, while virtual functions are distributed among partitions intended as users of particular I/O device functionality.”) configured to support communication between the first guest operating system and the second hardware input/output device. (Fig. 5, Fig. 6, [115]: “The I/O subsystem 222 further includes one or more communication connections 230. The communication connections 230 enable the computing device 1000 to send data to and receive data from a network of one or more such devices.”; [125]: “Each [Guest Partition 604] has…an operating system of that partition, such as a Guest OS 605.”)
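For background on mapping one physical function to a plurality of virtual functions as relied upon from Hunter, the following minimal C sketch (illustrative only, and not drawn from Hunter) enumerates, on a Linux host, the virtfnN links that the PCI core creates under an SR-IOV physical function; each link identifies a VF that may be assigned to a different guest partition. The PF address 0000:01:00.0 is hypothetical.

    /* Minimal sketch: enumerating the virtual functions that a single SR-IOV
     * physical function exposes, by reading the virtfnN symlinks under the
     * PF's sysfs directory. The PF address is hypothetical. */
    #include <stdio.h>
    #include <limits.h>
    #include <unistd.h>

    int main(void)
    {
        char link[PATH_MAX], target[PATH_MAX];

        for (int i = 0; ; i++) {
            snprintf(link, sizeof(link),
                     "/sys/bus/pci/devices/0000:01:00.0/virtfn%d", i);
            ssize_t n = readlink(link, target, sizeof(target) - 1);
            if (n < 0)
                break;                 /* no more VFs under this PF */
            target[n] = '\0';
            printf("VF %d -> %s\n", i, target);   /* e.g. ../0000:01:10.0 */
        }
        return 0;
    }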
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to supplement the capability of interfacing with many different hardware I/O devices in Kutch, Bshara, and Kumar with Hunter’s capability of interfacing with more than one hardware I/O device simultaneously. A person having ordinary skill in the art would have been motivated to make this combination, with a reasonable expectation of success, to improve the virtual system’s effectiveness for its intended purpose. Hunter states the intended purpose of virtualization in [2]: “Computer system virtualization allows multiple operating systems and processes to share the hardware resources of a host computer.” It is common for one computer system to have many hardware I/O devices; thus, allowing the guest VMs of Kutch to interface with more than one hardware device within a system would improve the virtual system’s sharing of “the hardware resources of a host computer.”
Response to Arguments
Applicant’s arguments have been fully considered but are moot in view of the new grounds of rejection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH TANG whose telephone number is (571)272-3772. The examiner can normally be reached Monday-Friday 7AM-3PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at 571-272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KENNETH TANG/Primary Examiner, Art Unit 2197