DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed on 10/22/2025 has been entered. Claims 1-5, 7-16, and 25 remain pending in this application.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 12-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As per claim 12, the claim recites “sending an indication of an input/output page fault and a descriptor indicating a location of the buffer into a fault buffer queue” and “when the fault buffer queue stores the descriptor: receive a stored descriptor indicating that the input/output page fault corresponding to the input/output payload has occurred to an input/output page fault queue.” As currently written, the phrase “a stored descriptor” introduces a new descriptor distinct from “the descriptor” stored in the fault buffer queue. However, because “a stored descriptor” is received after “the descriptor” is stored in the fault buffer queue, the two terms appear to reference the same descriptor; the claim identifies no other source from which “a stored descriptor” could originate. Accordingly, “a stored descriptor” should be amended to “the descriptor.” If the Examiner's reading of the claim is incorrect, Applicant is requested to explain the distinction in the next set of remarks.
As per claims 13-16, they depend from claim 12 and are therefore rejected for the same reasons.
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claims 8-9 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which they depend, or for failing to include all the limitations of the claim upon which they depend.
Because of the amendments to claim 1, claim 8 no longer further limits claim 1; the three limitations recited in claim 8 ("write," "store," and "write") are all included in the amended claim 1. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
As per claim 9, it depends from claim 8 and is therefore rejected for the same reasons.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 5, 8-9, and 11-16 are rejected under 35 U.S.C. 103 as being unpatentable over Tsirkin (US Pub. No. 2022/0121463 A1) in view of Sankaran et al. (US Pub. No. 2018/0011651 A1, hereinafter Sankaran).
As per claim 1, Tsirkin teaches a system comprising: a peripheral device to send payloads to a virtual machine (¶ [0021]-[0022], “As shown in FIG. 1, host computer system 110 is connected to a network 150. Host computer system 110 may be a server, a mainframe, a workstation, a personal computer (PC), a mobile phone, a palm-sized computing device, etc. Network 150 may be a private network (e.g., a local area network (LAN), a wide area network (WAN), intranet, etc.) or a public network (e.g., the Internet). Host computer system 110 may also include network accelerator device 180…In certain implementations, network accelerator device 180 may receive an incoming packet from network 150, e.g., to be consumed by a process running on Guest VM 130.”); and a processing device to run the virtual machine (¶ [0019], “Host computer system 110 may comprise one or more processors communicatively coupled to memory devices and input/output (I/O) devices. Host computer system 110 runs a host operating system (OS) 120, which can comprise software that manages the hardware resources of the computer system and that provides functions such as inter-process communication, scheduling, virtual memory management, and so forth. 
In some examples, host operating system 120 also comprises a hypervisor 125, which provides a virtual operating platform for guest virtual machine (VM) 130 and manages its execution, e.g., by abstracting the physical layer, including processors, memory, and I/O devices, and presenting this abstraction to the VM as virtual devices.”), wherein the processing device comprises: a plurality of buffers configured to receive payloads from the peripheral device (¶ [0014], “The driver may allocate a receive ring that includes a set of memory buffers for storing incoming packets from the network, to be processed by the network accelerator device.” ¶ [0022], “Page fault handling component 128 of network accelerator device 180 may select a buffer from a set of buffers of a receive ring that is allocated by ring buffer management component 129 of network accelerator device driver 133. The selected buffer may be the next buffer in the receive ring allocated by ring buffer management component 129.” ¶ [0032], “At operation 248, ring buffer management component 129 of network accelerator device driver 133 may allocate memory buffers buffer-1 211, buffer-2 212, buffer-3 213, and buffer-4 214.”); a fault buffer queue configured to store locations corresponding to the plurality of buffers (¶ [0032], “Network accelerator device 180 may store identifiers identifying buffers buffer-1 211, buffer-2 212, buffer-3 213, and buffer-4 214 in a data structure (e.g., a queue). Network accelerator device driver 133 may provide, to processing logic of a network accelerator device 180, the list of the buffer addresses (and/or buffer identifiers) at which to store incoming packets. While four buffers are depicted in FIG. 
2, it should be noted that network accelerator device driver may allocate more or fewer than four buffers.” ¶ [0034], “In some implementations, page fault handling component 128 may store a data structure (e.g., an ordered list or a queue, illustrated as buffer order list 221 in local memory 220) to indicate the order in which the packets were received, i.e., the order in which the buffers were used. In response to receiving a notification indicating the successful storage of the first incoming packet at buffer-2 212, processing logic may append buffer order list 221 to indicate that the first incoming packet is stored at buffer-2 212. The data structure may store an identifier identifying the buffer-2 212, and/or may store the memory address of buffer-2 212.”); an input/output page fault queue configured to store descriptions of page faults (¶ [0017], “The host system may handle the page faults of the memory buffers whose addresses are stored within the faulty buffer list. Handling a page fault may include copying the memory page that triggered the page fault from a backing store to the main memory. The host system may use a page request interface (PRI) handler to take the appropriate action to recover the affected pages, for example. Upon successful resolution (or handling) of the page fault, the buffer address of the faulty buffer may be removed from the faulty buffer list.” ¶ [0028], “Host computer system 110 may handle the page faults of the faulty buffers stored in the faulty buffer list. The host computer system 110 may handle a page fault by bringing the memory page that triggered the page fault from a backing store to the physical main memory. 
In one implementation, in order to detect when the page fault has been handled, ring buffer management component 129 may run a background thread to monitor the status of the page fault of the memory buffers in the faulty buffer list…Furthermore, page fault handling component 128 may remove the address (and/or identifier) of the newly restored buffer from the faulty buffer list.”); processing circuitry configured to: receive a request from the peripheral device to store a payload in a page of guest memory (¶ [0022], “In certain implementations, network accelerator device 180 may receive an incoming packet from network 150, e.g., to be consumed by a process running on Guest VM 130.”); generate an indication of an input/output page fault based on a failure to find the page of guest memory (¶ [0023], “Page fault handling component 128 may attempt to store the incoming packet at the selected buffer. Storing the incoming packet at the selected buffer may involve translating the buffer address associated with the selected buffer (e.g., translating the virtual address of the buffer to a corresponding physical address), followed by storing the incoming packet to a memory page identified by the translated address in the memory of the host computer system. The translation operation may cause a page fault (e.g., if the memory page identified by the translated address is not present in the main memory).” See also para. 0033-0034.); based on the indication of the input/output page fault: store the payload in a buffer of the plurality of buffers (¶ [0024], “Page fault handling component 128 may then attempt to store the incoming packet at another memory buffer of the set of memory buffers allocated by ring buffer management component 129. For example, page fault handling component 128 may attempt to store the incoming packet at the next buffer in the receive ring.” See also para. 
0033-0034.); write a descriptor to the fault buffer queue corresponding to a location of the buffer (¶ [0034], “In some implementations, page fault handling component 128 may store a data structure (e.g., an ordered list or a queue, illustrated as buffer order list 221 in local memory 220) to indicate the order in which the packets were received, i.e., the order in which the buffers were used. In response to receiving a notification indicating the successful storage of the first incoming packet at buffer-2 212, processing logic may append buffer order list 221 to indicate that the first incoming packet is stored at buffer-2 212. The data structure may store an identifier identifying the buffer-2 212, and/or may store the memory address of buffer-2 212.” See also para. 0033-0034.); write a descriptor to the input/output page fault queue corresponding to the input/output page fault (¶ [0028], “Host computer system 110 may handle the page faults of the faulty buffers stored in the faulty buffer list. The host computer system 110 may handle a page fault by bringing the memory page that triggered the page fault from a backing store to the physical main memory. In one implementation, in order to detect when the page fault has been handled, ring buffer management component 129 may run a background thread to monitor the status of the page fault of the memory buffers in the faulty buffer list…Furthermore, page fault handling component 128 may remove the address (and/or identifier) of the newly restored buffer from the faulty buffer list.” ¶ [0033], “At operation 252, processing logic may detect that buffer-1 211 is not present. For example, processing logic may detect the occurrence of a page fault, indicating that the translation operation (i.e., the translation of the virtual buffer address of buffer-1 211 to a corresponding physical address) failed because the memory page containing buffer-1 211, identified by the translated address, is not present in the main memory. 
Processing logic may store the address (and/or identifier) of buffer-1 211 in the faulty buffer list 222 in local memory 220.”); and generate a request to resolve the input/output page fault based on the descriptor of the input/output page fault stored in the input/output page fault queue, wherein the payload is stored in the buffer while the input/output page fault corresponding to the page of the guest memory is resolved (¶ [0018], “The overhead of handling the page fault is minimized because the network accelerator device can proceed to use another memory buffer to store the incoming packet without having to wait for the page fault of the first memory buffer to be handled.” ¶ [0028], “Host computer system 110 may handle the page faults of the faulty buffers stored in the faulty buffer list. The host computer system 110 may handle a page fault by bringing the memory page that triggered the page fault from a backing store to the physical main memory. In one implementation, in order to detect when the page fault has been handled, ring buffer management component 129 may run a background thread to monitor the status of the page fault of the memory buffers in the faulty buffer list. The background thread may then notify page fault handling component 128 that a page fault is handled, for example by placing the buffer identifier in the ring buffer. Page fault handling component 128 may wait for the notification before attempting to store a packet at a buffer that has previously resulted in a page fault. Additionally or alternatively, ring buffer management component 129 may make the newly restored memory buffer (i.e., for which the page fault has been handled) the next available memory buffer allocated to network accelerator device 180. Page fault handling component 128 may then use the newly restored buffer to store the next incoming packet. 
Furthermore, page fault handling component 128 may remove the address (and/or identifier) of the newly restored buffer from the faulty buffer list.”).
Tsirkin fails to explicitly teach peripheral devices accessing virtual machine memory using direct memory access (DMA).
Accordingly, Sankaran teaches the well-known technique of a peripheral device to access guest memory of a virtual machine using direct memory access (DMA) (¶ [0020], “The IOMMU (also referred to as I/O memory management circuitry) is a direct memory access (DMA) remapping hardware unit that accesses translation tables populated by a virtual machine monitor (VMM) of a virtual machine for purposes of translating addresses of shared virtual memory (SVM) for I/O devices.” ¶ [0023], “Accordingly, if an I/O device is compromised in allowing malicious software to modify the ATC contents, the I/O device can generate DMA requests with an HPA to any memory page in a platform that employs virtualization, including to other domains (applications, virtual machines or containers), or to virtual machine manager (VMM) code and to data pages in memory.”).
Tsirkin and Sankaran are considered to be analogous to the claimed invention because they are in the same field of virtual memory and page fault handling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tsirkin with the well-known technique of accessing guest memory of a virtual machine via DMA as taught by Sankaran to arrive at the claimed invention. This modification would have yielded predictable results and would have been reasonable under MPEP § 2143, as both references deal with resolving page faults that occur in the guest memory of a virtual machine.
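For expository purposes only, the fault-handling flow described in the cited passages of Tsirkin (attempt the store; on a page fault, park the payload in a side buffer while queuing descriptors so the fault can be resolved asynchronously, cf. ¶ [0023]-[0024], [0028], [0033]-[0034]) may be sketched as follows. This is a minimal illustrative model; all identifiers (e.g., FaultHandler, store_payload, the resident-page set) are hypothetical and are not drawn from either reference or from the claims.

```python
from collections import deque

class FaultHandler:
    """Illustrative model only: payloads destined for a non-resident guest
    page are parked in a side buffer, and descriptors are placed in a fault
    buffer queue (buffer location) and an I/O page fault queue (faulting
    page) for later resolution."""

    def __init__(self, num_buffers):
        self.buffers = [None] * num_buffers        # "plurality of buffers"
        self.free = deque(range(num_buffers))
        self.fault_buffer_queue = deque()          # descriptors: buffer locations
        self.io_page_fault_queue = deque()         # descriptors: faulting pages
        self.resident = {0x1000}                   # pages currently in guest memory
        self.guest_memory = {}

    def store_payload(self, page, payload):
        if page in self.resident:
            self.guest_memory[page] = payload      # normal store path
            return "stored"
        # Page-fault path: park the payload and queue both descriptors.
        idx = self.free.popleft()
        self.buffers[idx] = payload
        self.fault_buffer_queue.append(idx)
        self.io_page_fault_queue.append(page)
        return "faulted"

    def resolve_next_fault(self):
        # Resolving a fault makes the page resident; the parked payload is
        # then copied from the side buffer into guest memory and the buffer
        # is returned to the free pool.
        page = self.io_page_fault_queue.popleft()
        self.resident.add(page)
        idx = self.fault_buffer_queue.popleft()
        self.guest_memory[page] = self.buffers[idx]
        self.buffers[idx] = None
        self.free.append(idx)
        return page
```

Under this model, a store to a resident page completes immediately, while a store to a non-resident page returns without blocking and leaves one descriptor in each queue until resolve_next_fault is called, which mirrors the non-blocking behavior relied on in the mapping above.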
As per claim 2, Tsirkin and Sankaran teach the system of claim 1. Tsirkin teaches wherein the processing device is configured to generate a request to resolve the input/output page fault by running a driver to request resolution of the input/output page fault (¶ [0020], “Guest operating system 131 may run network accelerator device driver 133. Network accelerator device driver 133 may be a software component that enables network accelerator device 180 to communicate with guest operating system 131, as well as with other network accelerator devices and/or other network interface devices. Network accelerator device driver 133 may include ring buffer management component 129 that may facilitate page fault handling for network accelerator devices within host computer system 110. Ring buffer management component 129 may allocate a set of memory buffers within a data structure for storing incoming packets from network 150.”) and causing the payload to be copied from the buffer to a newly allocated page of the guest memory after the input/output page fault is resolved (¶ [0028], “In one implementation, in order to detect when the page fault has been handled, ring buffer management component 129 may run a background thread to monitor the status of the page fault of the memory buffers in the faulty buffer list. The background thread may then notify page fault handling component 128 that a page fault is handled, for example by placing the buffer identifier in the ring buffer. Page fault handling component 128 may wait for the notification before attempting to store a packet at a buffer that has previously resulted in a page fault. Additionally or alternatively, ring buffer management component 129 may make the newly restored memory buffer (i.e., for which the page fault has been handled) the next available memory buffer allocated to network accelerator device 180. Page fault handling component 128 may then use the newly restored buffer to store the next incoming packet.”).
As per claim 3, Tsirkin and Sankaran teach the system of claim 2. Sankaran teaches wherein the processing device is configured to run a virtual machine manager to manage the virtual machine (¶ [0028], “The system 100 may also include a virtual machine monitor (VMM) 115, e.g., a hosting OS for the system 100, an IOMMU 120 having DMA remapping hardware 121, and a number of I/O devices including Device A and Device B, which include an address translation cache (ATC) 124A and 124B, respectively.”), wherein the driver is configured to cause the payload to be copied by issuing a call to the virtual machine (¶ [0020], “As will be explained in detail, the translation requests sent to the IOMMU and the translation responses received back from the IOMMU allow the I/O devices to handle ATC misses, detect I/O page faults at the ATC, and report the page faults to software through the IOMMU before the transaction is issued on the I/O fabric. This software may be system software, which may include an operating system in a non-virtualized machine, or a VMM and/or OS running within a virtual machine.” See also Fig. 1. I/O devices communicate with virtual machine via device drivers operating on the virtual machines being monitored by a VMM. VMM communicates with IOMMU for servicing page faults.).
As per claim 5, Tsirkin and Sankaran teach the system of claim 2. Sankaran teaches wherein the driver is configured to communicate with an address translation engine of the peripheral device using Address Translation Service (ATS) messages (¶ [0020], “One way to achieve I/O page fault detection at the I/O device is to build an address translation cache (ATC), also referred to as a device translation lookaside buffer (device-TLB), at the I/O device that is capable of caching virtual address translations along with permissions and interacting with an I/O memory management unit (IOMMU) in completing address translation requests.” ¶ [0044]-[0045], “The IOMMU 220 may also send an invalidation request to the ATC 324 of the device 318 to invalidate a translation cached in the ATC 324 (336), e.g., because the translation is stale or determined to be invalid for other reasons, such as in response to an invalidation request from software running on the system 200 (350). After the translation has been cleared from the ATC 324, the ATC 324 may send an invalidation completion message to the Root-Complex 216 (340).”).
As per claim 8, Tsirkin and Sankaran teach the system of claim 1. Tsirkin teaches wherein the peripheral device, in response to a direct memory access (DMA) attempt that results in the input/output page fault, is configured to: write the descriptor to the input/output page fault queue corresponding to the input/output page fault (¶ [0028], “Host computer system 110 may handle the page faults of the faulty buffers stored in the faulty buffer list. The host computer system 110 may handle a page fault by bringing the memory page that triggered the page fault from a backing store to the physical main memory. In one implementation, in order to detect when the page fault has been handled, ring buffer management component 129 may run a background thread to monitor the status of the page fault of the memory buffers in the faulty buffer list…Furthermore, page fault handling component 128 may remove the address (and/or identifier) of the newly restored buffer from the faulty buffer list.” ¶ [0033], “At operation 252, processing logic may detect that buffer-1 211 is not present. For example, processing logic may detect the occurrence of a page fault, indicating that the translation operation (i.e., the translation of the virtual buffer address of buffer-1 211 to a corresponding physical address) failed because the memory page containing buffer-1 211, identified by the translated address, is not present in the main memory. Processing logic may store the address (and/or identifier) of buffer-1 211 in the faulty buffer list 222 in local memory 220.”); store the payload in the buffer (¶ [0024], “Page fault handling component 128 may then attempt to store the incoming packet at another memory buffer of the set of memory buffers allocated by ring buffer management component 129. For example, page fault handling component 128 may attempt to store the incoming packet at the next buffer in the receive ring.” See also para. 
0033-0034.); and write the descriptor to the fault buffer queue corresponding to the location of the buffer (¶ [0034], “In some implementations, page fault handling component 128 may store a data structure (e.g., an ordered list or a queue, illustrated as buffer order list 221 in local memory 220) to indicate the order in which the packets were received, i.e., the order in which the buffers were used. In response to receiving a notification indicating the successful storage of the first incoming packet at buffer-2 212, processing logic may append buffer order list 221 to indicate that the first incoming packet is stored at buffer-2 212. The data structure may store an identifier identifying the buffer-2 212, and/or may store the memory address of buffer-2 212.” See also para. 0033-0034.).
As per claim 9, Tsirkin and Sankaran teach the system of claim 8. Tsirkin teaches wherein the peripheral device, in response to a direct memory access (DMA) attempt to store another payload that results in another input/output page fault corresponding to another guest memory of another virtual machine is to: write another descriptor to the input/output page fault queue corresponding to the other input/output page fault (¶ [0028], “Host computer system 110 may handle the page faults of the faulty buffers stored in the faulty buffer list. The host computer system 110 may handle a page fault by bringing the memory page that triggered the page fault from a backing store to the physical main memory. In one implementation, in order to detect when the page fault has been handled, ring buffer management component 129 may run a background thread to monitor the status of the page fault of the memory buffers in the faulty buffer list…Furthermore, page fault handling component 128 may remove the address (and/or identifier) of the newly restored buffer from the faulty buffer list.” ¶ [0033], “At operation 252, processing logic may detect that buffer-1 211 is not present. For example, processing logic may detect the occurrence of a page fault, indicating that the translation operation (i.e., the translation of the virtual buffer address of buffer-1 211 to a corresponding physical address) failed because the memory page containing buffer-1 211, identified by the translated address, is not present in the main memory. Processing logic may store the address (and/or identifier) of buffer-1 211 in the faulty buffer list 222 in local memory 220.”); store the other payload in another buffer of the plurality of buffers (¶ [0024], “Page fault handling component 128 may then attempt to store the incoming packet at another memory buffer of the set of memory buffers allocated by ring buffer management component 129. 
For example, page fault handling component 128 may attempt to store the incoming packet at the next buffer in the receive ring.” See also para. 0033-0034.); and write another descriptor to the fault buffer queue corresponding to the location of the other buffer (¶ [0034], “In some implementations, page fault handling component 128 may store a data structure (e.g., an ordered list or a queue, illustrated as buffer order list 221 in local memory 220) to indicate the order in which the packets were received, i.e., the order in which the buffers were used. In response to receiving a notification indicating the successful storage of the first incoming packet at buffer-2 212, processing logic may append buffer order list 221 to indicate that the first incoming packet is stored at buffer-2 212. The data structure may store an identifier identifying the buffer-2 212, and/or may store the memory address of buffer-2 212.” See also para. 0033-0034.).
As per claim 11, Tsirkin and Sankaran teach the system of claim 1. Sankaran teaches wherein the processing device is configured to run a plurality of other virtual machines (¶ [0027], “FIG. 1 is a block diagram of a system 100 that provides hardware support for direct assignment of I/O devices, according to an embodiment of the present disclosure. The system 100 may include various virtual machines (VMs), for example a first VM 102A and a second VM 102N.” ¶ [0034], “The system 200 may include, among other components, one or more processor cores 201 each that may execute the one or more virtual machines 102A through 102 N of FIG. 1.”), at least one of which has other guest memory that is overcommitted with the guest memory (¶ [0073], “More specifically, referring to FIG. 9, the method 900 may start where the VMM, in response to a translation request, may determine whether a guest physical address (referred to as GPA1 for purposes of this explanation) needs to be paged out of memory due a page fault (e.g., a memory overcommit that demands that a HPA be paged in) (910).”).
As per claim 12, Tsirkin teaches an article of manufacture comprising one or more tangible, non-transitory machine-readable media comprising instructions (¶ [0050], “Data storage device 418 may include a non-transitory computer-readable storage medium 428 on which may store instructions 422 embodying any one or more of the methodologies or functions described herein (e.g., page fault handling component 128).”) that, when executed by a processing device, cause the processing device to: receive, from a network, a payload for storing data in a memory page (¶ [0022], “In certain implementations, network accelerator device 180 may receive an incoming packet from network 150, e.g., to be consumed by a process running on Guest VM 130.”); based on the memory page not being found (¶ [0023], “Page fault handling component 128 may attempt to store the incoming packet at the selected buffer. Storing the incoming packet at the selected buffer may involve translating the buffer address associated with the selected buffer (e.g., translating the virtual address of the buffer to a corresponding physical address), followed by storing the incoming packet to a memory page identified by the translated address in the memory of the host computer system. The translation operation may cause a page fault (e.g., if the memory page identified by the translated address is not present in the main memory).” See also para. 0033-0034.):
storing the input/output payload into a buffer (¶ [0024], “Page fault handling component 128 may then attempt to store the incoming packet at another memory buffer of the set of memory buffers allocated by ring buffer management component 129. For example, page fault handling component 128 may attempt to store the incoming packet at the next buffer in the receive ring.” See also para. 0033-0034.); and
sending an indication of an input/output page fault and a descriptor indicating a location of the buffer into a fault buffer queue (¶ [0034], “In some implementations, page fault handling component 128 may store a data structure (e.g., an ordered list or a queue, illustrated as buffer order list 221 in local memory 220) to indicate the order in which the packets were received, i.e., the order in which the buffers were used. In response to receiving a notification indicating the successful storage of the first incoming packet at buffer-2 212, processing logic may append buffer order list 221 to indicate that the first incoming packet is stored at buffer-2 212. The data structure may store an identifier identifying the buffer-2 212, and/or may store the memory address of buffer-2 212.” See also para. 0033-0034.); when the fault buffer queue stores the descriptor: receive a stored descriptor indicating that the input/output page fault corresponding to the input/output payload has occurred to an input/output page fault queue (¶ [0028], “Host computer system 110 may handle the page faults of the faulty buffers stored in the faulty buffer list. The host computer system 110 may handle a page fault by bringing the memory page that triggered the page fault from a backing store to the physical main memory. In one implementation, in order to detect when the page fault has been handled, ring buffer management component 129 may run a background thread to monitor the status of the page fault of the memory buffers in the faulty buffer list…Furthermore, page fault handling component 128 may remove the address (and/or identifier) of the newly restored buffer from the faulty buffer list.” ¶ [0033], “At operation 252, processing logic may detect that buffer-1 211 is not present. 
For example, processing logic may detect the occurrence of a page fault, indicating that the translation operation (i.e., the translation of the virtual buffer address of buffer-1 211 to a corresponding physical address) failed because the memory page containing buffer-1 211, identified by the translated address, is not present in the main memory. Processing logic may store the address (and/or identifier) of buffer-1 211 in the faulty buffer list 222 in local memory 220.”); and store the input/output payload while resolving the input/output page fault (¶ [0018], “The overhead of handling the page fault is minimized because the network accelerator device can proceed to use another memory buffer to store the incoming packet without having to wait for the page fault of the first memory buffer to be handled.”).
Tsirkin fails to explicitly teach the payload coming from a peripheral device.
Accordingly, Sankaran teaches the well-known technique of receiving, from a peripheral device, an input/output payload for storing in a memory page (¶ [0020], “The IOMMU (also referred to as I/O memory management circuitry) is a direct memory access (DMA) remapping hardware unit that accesses translation tables populated by a virtual machine monitor (VMM) of a virtual machine for purposes of translating addresses of shared virtual memory (SVM) for I/O devices.” ¶ [0023], “Accordingly, if an I/O device is compromised in allowing malicious software to modify the ATC contents, the I/O device can generate DMA requests with an HPA to any memory page in a platform that employs virtualization, including to other domains (applications, virtual machines or containers), or to virtual machine manager (VMM) code and to data pages in memory.”).
Tsirkin and Sankaran are considered to be analogous to the claimed invention because they are in the same field of virtual memory and page fault handling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Tsirkin with the well-known technique of receiving a payload from a peripheral device as taught by Sankaran to arrive at the claimed invention. This modification would have yielded predictable results and would have been reasonable under MPEP § 2143, as both references deal with resolving page faults that occur in the guest memory of a virtual machine.
As per claim 13, Tsirkin and Sankaran teach the article of manufacture of claim 12. Tsirkin teaches wherein the instructions, when executed by the processing device, cause the processing device to, after resolution of the input/output page fault results in a page of physical memory being newly allocated, use the descriptor from the fault buffer queue to copy the input/output payload from the buffer to the newly allocated page of physical memory (¶ [0028], “Host computer system 110 may handle the page faults of the faulty buffers stored in the faulty buffer list. The host computer system 110 may handle a page fault by bringing the memory page that triggered the page fault from a backing store to the physical main memory. In one implementation, in order to detect when the page fault has been handled, ring buffer management component 129 may run a background thread to monitor the status of the page fault of the memory buffers in the faulty buffer list. The background thread may then notify page fault handling component 128 that a page fault is handled, for example by placing the buffer identifier in the ring buffer. Page fault handling component 128 may wait for the notification before attempting to store a packet at a buffer that has previously resulted in a page fault.”).
As per claim 14, Tsirkin and Sankaran teach the article of manufacture of claim 13. Tsirkin teaches wherein the instructions, when executed by the processing device, cause the processing device to copy the input/output payload to memory of a virtual machine indicated in the descriptor in the input/output page fault queue (¶ [0028], “Host computer system 110 may handle the page faults of the faulty buffers stored in the faulty buffer list. The host computer system 110 may handle a page fault by bringing the memory page that triggered the page fault from a backing store to the physical main memory. In one implementation, in order to detect when the page fault has been handled, ring buffer management component 129 may run a background thread to monitor the status of the page fault of the memory buffers in the faulty buffer list. The background thread may then notify page fault handling component 128 that a page fault is handled, for example by placing the buffer identifier in the ring buffer. Page fault handling component 128 may wait for the notification before attempting to store a packet at a buffer that has previously resulted in a page fault.”). Sankaran teaches a guest physical address (GPA) of memory of a virtual machine (¶ [0043], “In one embodiment, the ATC 324 may also send a translation request to the IOMMU 220 with a virtual address (whether a guest virtual address (GVA) or an I/O virtual address (IOVA)) for access to a corresponding host physical address (HPA) (331). The IOMMU 220 may then generate an address translation based on a mapping between the GVA (or IOVA) and a guest physical address (GPA), and then a mapping between the GPA and the corresponding HPA, using various paging structures 310 (e.g., paging tables as will be discussed) and in relation to the pages 311 in memory. 
After the IOMMU 220 completes address mapping in response to the translation request, the IOMMU may respond to the ATC 324 with a translation completion message (332), which contains the HPA (assuming successful translation) and translation data that the device 318 may use to formulate a translated request. The ATC 324 may then send the translated request (e.g., a regular memory read, write, or atomics request with an AT field and the HPA) to the IOMMU 220 to obtain needed data stored at the HPA (334).”).
As per claim 15, Tsirkin and Sankaran teach the article of manufacture of claim 14. Sankaran teaches wherein the instructions, when executed by the processing device, cause the processing device to copy the input/output payload using a virtual machine manager (¶ [0020], “The IOMMU (also referred to as I/O memory management circuitry) is a direct memory access (DMA) remapping hardware unit that accesses translation tables populated by a virtual machine monitor (VMM) of a virtual machine for purposes of translating addresses of shared virtual memory (SVM) for I/O devices. As will be explained in detail, the translation requests sent to the IOMMU and the translation responses received back from the IOMMU allow the I/O devices to handle ATC misses, detect I/O page faults at the ATC, and report the page faults to software through the IOMMU before the transaction is issued on the I/O fabric. This software may be system software, which may include an operating system in a non-virtualized machine, or a VMM and/or OS running within a virtual machine.”).
As per claim 16, Tsirkin and Sankaran teach the article of manufacture of claim 13. Sankaran teaches wherein the instructions, when executed by the processing device, cause the processing device to send an entry corresponding to the newly allocated page to a device translation lookaside buffer on the peripheral device (¶ [0020], “One way to achieve I/O page fault detection at the I/O device is to build an address translation cache (ATC), also referred to as a device translation lookaside buffer (device-TLB), at the I/O device that is capable of caching virtual address translations along with permissions and interacting with an I/O memory management unit (IOMMU) in completing address translation requests.” ¶ [0064], “Under the RTRR mode, GPAs are now cached in an ATC 324 of an I/O device 318. Any VMM paging of guest physical addresses does not depend on an existing method of modifying the GPA-to-HPA mapping as not-present and performing IOTLB and ATC invalidation to page out a GPA. This is because performing these steps would cause non-recoverable faults to any currently-pending translated requests because these are subject to GPA-to-HPA translations.” See also para. 0041-0042).
Claims 4 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Tsirkin and Sankaran as applied to claim 2 above, and further in view of Biemueller et al. (US Patent No. 11,474,857 B1, hereinafter Biemueller).
As per claim 4, Tsirkin and Sankaran teach the system of claim 2. Tsirkin teaches the driver running on the processing device is configured to resolve the input/output page fault (¶ [0020], “Guest operating system 131 may run network accelerator device driver 133. Network accelerator device driver 133 may be a software component that enables network accelerator device 180 to communicate with guest operating system 131, as well as with other network accelerator devices and/or other network interface devices. Network accelerator device driver 133 may include ring buffer management component 129 that may facilitate page fault handling for network accelerator devices within host computer system 110. Ring buffer management component 129 may allocate a set of memory buffers within a data structure for storing incoming packets from network 150.”).
Tsirkin and Sankaran fail to teach sending an indication to the peripheral device that the input/output page fault is resolved.
However, Biemueller teaches after the input/output page fault is resolved, provide an indication to the peripheral device that the input/output page fault is resolved to enable the peripheral device to access the page of guest memory (Col. 16, lines 7-53, “At the DVS, the requested page may be written to the memory 313, and the PBM 360 may be updated to indicate that the page is now present at the DVS. The component at the DVS (e.g., a virtual CPU, or an I/O operation virtualization management component) which requested the page may be informed that the page has reached the DVS using a variety of techniques in different embodiments. For example, in one embodiment, the component may periodically examine a page fault completion queue, and detect when an entry for the requested page is placed in the completion queue. In another embodiment, an interrupt mechanism may be used to notify the waiting components when a requested page arrives at the DVS. In some embodiments, a sweeper process running at the DVS (e.g., at the offload cards) may generate spurious page faults for pages that have not yet been requested for the CI, thus speeding up the transfer of the remaining state information.”).
Tsirkin, Sankaran, and Biemueller are all considered to be analogous to the claimed invention because they are all in the same field of virtual memory and page fault handling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the page fault handling system of Tsirkin and Sankaran with the notification technique of Biemueller to arrive at the claimed invention. The motivation to modify Tsirkin and Sankaran with the teachings of Biemueller is that quickly notifying any waiting components that a page fault has been handled allows those waiting components to immediately access the newly available page.
As per claim 25, Tsirkin and Sankaran teach the system of claim 2. Tsirkin teaches cause the payload to be copied from the buffer to the newly allocated page of the guest memory based on the indication of the input/output page fault being resolved (¶ [0028], “In one implementation, in order to detect when the page fault has been handled, ring buffer management component 129 may run a background thread to monitor the status of the page fault of the memory buffers in the faulty buffer list. The background thread may then notify page fault handling component 128 that a page fault is handled, for example by placing the buffer identifier in the ring buffer. Page fault handling component 128 may wait for the notification before attempting to store a packet at a buffer that has previously resulted in a page fault. Additionally or alternatively, ring buffer management component 129 may make the newly restored memory buffer (i.e., for which the page fault has been handled) the next available memory buffer allocated to network accelerator device 180. Page fault handling component 128 may then use the newly restored buffer to store the next incoming packet.”).
Tsirkin and Sankaran fail to teach a completion queue configured to store an indication that the input/output page fault has been resolved.
However, Biemueller teaches wherein the processing device comprises a completion queue configured to store an indication that the input/output page fault has been resolved, the driver being configured to write the indication that the input/output page fault has been resolved to the completion queue (Col. 16, lines 7-53, “[T]he component at the DVS (e.g., a virtual CPU, or an I/O operation virtualization management component) which requested the page may be informed that the page has reached the DVS using a variety of techniques in different embodiments. For example, in one embodiment, the component may periodically examine a page fault completion queue, and detect when an entry for the requested page is placed in the completion queue. In another embodiment, an interrupt mechanism may be used to notify the waiting components when a requested page arrives at the DVS. In some embodiments, a sweeper process running at the DVS (e.g., at the offload cards) may generate spurious page faults for pages that have not yet been requested for the CI, thus speeding up the transfer of the remaining state information.” Col. 22, line 62 through col. 23, line 8, “An entry may be placed into a page-fault completion queue in various embodiments, along with the identifier token. In some embodiments, the SDH may be responsible for updating the present-pages bit map to indicate the presence of the requested page in the compute instance's memory, thereby completing the page fault response (element 823).”).
Refer to claim 4 for motivation to combine.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Tsirkin and Sankaran as applied to claim 1 above, and further in view of Osisek et al. (US Pub. No. 2010/0223612 A1, hereinafter Osisek).
As per claim 7, Tsirkin and Sankaran teach the system of claim 1. Tsirkin teaches the fault buffer queue (¶ [0034], “In some implementations, page fault handling component 128 may store a data structure (e.g., an ordered list or a queue, illustrated as buffer order list 221 in local memory 220) to indicate the order in which the packets were received, i.e., the order in which the buffers were used. In response to receiving a notification indicating the successful storage of the first incoming packet at buffer-2 212, processing logic may append buffer order list 221 to indicate that the first incoming packet is stored at buffer-2 212. The data structure may store an identifier identifying the buffer-2 212, and/or may store the memory address of buffer-2 212.” See also para. 0033-0034.) and the input/output page fault queue (¶ [0028], “Host computer system 110 may handle the page faults of the faulty buffers stored in the faulty buffer list. The host computer system 110 may handle a page fault by bringing the memory page that triggered the page fault from a backing store to the physical main memory. In one implementation, in order to detect when the page fault has been handled, ring buffer management component 129 may run a background thread to monitor the status of the page fault of the memory buffers in the faulty buffer list…Furthermore, page fault handling component 128 may remove the address (and/or identifier) of the newly restored buffer from the faulty buffer list.” ¶ [0033], “At operation 252, processing logic may detect that buffer-1 211 is not present. For example, processing logic may detect the occurrence of a page fault, indicating that the translation operation (i.e., the translation of the virtual buffer address of buffer-1 211 to a corresponding physical address) failed because the memory page containing buffer-1 211, identified by the translated address, is not present in the main memory. 
Processing logic may store the address (and/or identifier) of buffer-1 211 in the faulty buffer list 222 in local memory 220.”).
Tsirkin and Sankaran fail to teach buffers/memory being pinned.
However, Osisek teaches memory being pinned (¶ [0036], “Therefore, host real storage is allocated to the buffer to support the buffer. The host real storage is to be allocated to the guest buffer prior to storing data in the buffer. That is, the guest storage buffer is to be pinned into host real storage in order to avoid a page fault.”).
Tsirkin, Sankaran, and Osisek are all considered to be analogous to the claimed invention because they are all in the same field of virtual memory and handling page faults. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the input/output page fault queue and fault buffer queue of Tsirkin and Sankaran to be pinned as taught in Osisek to arrive at the claimed invention. The motivation to modify Tsirkin and Sankaran with the teachings of Osisek is that pinning an area of memory or a buffer prevents page faults from occurring for said memory area or buffer.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Tsirkin and Sankaran as applied to claim 1 above, and further in view of Kumar et al. (US Pub. No. 2019/0370050 A1, hereinafter Kumar).
As per claim 10, Tsirkin and Sankaran teach the system of claim 1. Sankaran teaches the peripheral device (¶ [0036], “In one embodiment, the I/O devices 218 include one or more integrated devices 218A (such as processor graphics), one or more discrete devices 218B (such as PCIe® devices or other attached devices), and/or one or more non-SVM devices 218C (such as legacy devices that do not support shared virtual memory). The I/O devices, furthermore, may include network controller devices, storage controller devices, peripheral controller devices (like Universal Serial Bus (USB) controllers), media controller devices, display controllers, and the like.”).
Tsirkin and Sankaran fail to teach the peripheral device being a scalable input/output virtualization (SIOV) device or a single-root input/output virtualization (SR-IOV) device.
However, Kumar teaches wherein the peripheral device comprises at least one of a scalable input/output virtualization (SIOV) device and a single-root input/output virtualization (SR-IOV) device (¶ [0042], “In various embodiments, an input/output (I/O) device may be configured to operate using at least one of a single-root input/output virtualization (SR-IOV) and a scalable input/output virtualization (S-IOV), such as, for example, an Intel Scalable Input/Output Virtualization from the Intel Scalable Input/Output Virtualization specification.”).
Tsirkin, Sankaran, and Kumar are all considered to be analogous to the claimed invention because they are all in the same field of virtual memory and handling page faults. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the I/O devices of Tsirkin and Sankaran with the SR-IOV or S-IOV device functionality of Kumar to arrive at the claimed invention. The motivation to modify Tsirkin and Sankaran with the teachings of Kumar is that SR-IOV and S-IOV allow a single physical input/output device to be efficiently shared among multiple virtual machines.
Response to Arguments
Applicant’s arguments with respect to claims 1-5, 7-16, and 25 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant has amended the claims with new limitations that change the scope of the claimed invention. Therefore, the amended claims necessitate new rejections, as addressed above. The amended claims are not allowable over the previously applied prior art in combination with the additional references, for the reasons indicated above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN ROBERT DAKITA EWALD whose telephone number is (703)756-1845. The examiner can normally be reached Monday-Friday: 9:00-5:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock, can be reached at (571) 272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.D.E./Examiner, Art Unit 2199
/LEWIS A BULLOCK JR/Supervisory Patent Examiner, Art Unit 2199