DETAILED ACTION
This Office Action is in response to the claims filed on 11/28/2025.
Claims 1-23 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see page 7 of remarks, filed 11/28/2025, with respect to claim objection of claim 2 have been fully considered and are persuasive. The objection of 08/27/2025 has been withdrawn.
Applicant’s arguments, see page 7 of remarks, filed 11/28/2025, with respect to 35 U.S.C. 112(b) rejection of claims 1-8 have been fully considered and are persuasive. The rejection of 08/27/2025 has been withdrawn.
Applicant’s arguments with respect to claims 1, 8, 15 and their respective dependent claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 22 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 22 recites the limitation "a descriptor queue" in lines 3 and 5. It is unclear if the descriptor queue of line 5 is the same descriptor queue recited in line 3 or a new, separate descriptor queue. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-8, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Kutch et al. Pub. No. US 2021/0232528 A1 (hereinafter Kutch) in view of Hart et al. Pub. No. US 2015/0052279 A1 (hereinafter Hart), and further in view of Chiang et al. Pub. No. US 2012/0204188 A1 (hereinafter Chiang).
With regard to claim 1, Kutch teaches an apparatus comprising (Abstract, Examples described herein relate to an apparatus comprising: a descriptor format translator accessible to a driver. In some examples, the driver and descriptor format translator share access to transmit and receive descriptors):
a host interface ([0030], A VEE (Examiner notes: a virtual execution environment) can include at least a virtual machine or a container. VEEs can execute in bare metal (e.g., single tenant) or hosted (e.g., multiple tenants) environments. A virtual machine (VM) can be software that runs an operating system and one or more applications; [0034], VEE 302 can utilize a same virtualized interface (e.g., VDEV (Examiner notes: Virtual Device) driver 304) no matter what the physical VF or SIOV NIC 330 is used for packet transmission or receipt)
circuitry, when coupled to a physical device, that is to ([0037], In some examples, PF Host driver 314 can initialize FDR (Examiner notes: Flexible Descriptor Representor) 320 and connect FDR 320 to NIC 330):
…, wherein:
the host interface is configured to route first communications to the circuitry instead of the physical device (FIG. 3, VEE 302 routing descriptors (Examiner notes: first communication) through host kernel 310 to FDR 320 circuitry; [0035], In system of FIG. 3, VDEV driver 304 (Examiner notes: Communication interface of VEE 302) communicates with FDR 320, which interacts with VDEV driver 304 as a NIC (or other device)) and route second communications to the physical device (FIG. 3, VEE 302 routing data (Examiner notes: second communication) through host kernel 310 to virtual function 334 of physical device 330; [0035], In some examples, VDEV Driver 304 can also communicate with NIC 330 to configure access to queues and descriptor rings)
the physical device is accessible as a virtual device via the host interface ([0007], Either VF (SR-IOV) or ADI (S-IOV) may be assigned to a container in a pass-through manner (full or mediation), which provide one virtual device associated with a physical device instance) and
the circuitry comprises a direct memory access (DMA) circuitry ([0039], For example, for packet receipt, NIC 330 can copy by direct memory access (DMA) data to destination location and provide an Rx descriptor to a descriptor ring managed by FDR 320 (Examiner notes: wherein FDR implicitly comprises DMA circuitry)), multiple cores ([0166], A multi-core architecture of the appliance 200, referred to as nCore or multi-core technology, allows the appliance in some embodiments to break the single core performance barrier and to leverage the power of multi-core CPUs).
However, Kutch does not explicitly teach the circuitry capable of performing hypervisor functions.
Hart teaches perform operations of a hypervisor ([0033], As denoted by the arrows, embodiments of the present disclosure are directed toward the use of common address ranges for different VFs … For instance, VFs with similar functions can be assigned to different virtual PCI busses, while each still has a common address range. The hypervisor 114 or 116 can apply address offsets to translate the access requests from the drivers to base addresses of the physical devices and corresponding VFs operating therein)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Hart with the teachings of Kutch in order to provide an apparatus that teaches circuitry configured to perform hypervisor operations. The motivation for applying Hart's teaching with Kutch's teaching is to provide an apparatus that allows for resource assignment and routing control over I/O devices, enabling a system to handle the configuration and management of one or more virtual functions for a physical I/O device (Hart, [0025]). Kutch and Hart are analogous art directed towards hypervisor-specific management and integration. Therefore, it would have been obvious for one of ordinary skill in the art to combine Hart with Kutch to teach the claimed invention in order to provide hypervisor capabilities coupled to physical I/O devices, thereby improving resource utilization by sharing hardware across multiple host devices through virtualization.
However, the combination does not explicitly teach circuitry configured to perform scheduling and load balancing of cores and threads.
Chiang teaches a thread manager to load balance among the multiple cores ([0036], Load balancing manager 180 assigns a home processor or home processor element to process tree 220 by assigning a home processor element identifier (HPEI) to process tree 220) and select a thread ([0037], load balancing manager 180 dispatches with thread dispatcher 230 each thread in process order within process tree 220 to ready queue 1 and then to run queue 1 that corresponds to processor 1).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Chiang with the teachings of Kutch and Hart in order to provide an apparatus that teaches multicore load balancing and thread execution selection. The motivation for applying Chiang's teaching with the teachings of Kutch and Hart is to provide an apparatus that allows for efficient load balancing across multiple processor resources, thereby reducing the system overhead associated with memory access while avoiding underutilization of available processor resources from strict processor designations (Chiang, [0010]-[0013]). Kutch, Hart, and Chiang are analogous art directed towards load rebalancing techniques in distributed systems. Therefore, it would have been obvious for one of ordinary skill in the art to combine Chiang with Kutch and Hart to teach the claimed invention in order to provide efficient load balancing across processor resources.
With regard to claim 2, Kutch teaches the apparatus of claim 1, wherein the operations of the hypervisor comprise performing descriptor format translation between one of multiple different device drivers and the physical device ([0037], In some examples, PF host driver 314 can initialize FDR 320 and connect FDR 320 to NIC 330. In some examples, FDR 320 can allocate Rx/Tx (Examiner notes: Receive/Transmit) descriptor rings for NIC 330. After initialization, FDR 320 can contain two copies of Rx/Tx rings, such as a Rx/Tx ring for NIC 330 and Rx/Tx ring for VDEV driver 304. FDR 320 can utilize descriptor conversion 322 to perform descriptor translation of Rx or Tx descriptors so that a descriptor in the Rx/Tx ring for NIC 330 is a translation of a corresponding Rx or Tx descriptor in the Rx/Tx ring for the VDEV driver 304. In some examples, FDR 320 can access NIC 330 as a VF or PF using SR-IOV or SIOV or NIC 330 can access FDR 320 as a VF or PF using SR-IOV or SIOV) and queue semantic translation ([0069], In some examples, if the device is a storage controller or storage device (e.g., with one or more non-volatile memory devices), for access to a storage device, a single virtqueue can be used to send requests and receive responses. The VEE can use a virtqueue to provide an avail ring index to pass a descriptor to the vhost target and the vhost target can update the virtqueue with a used ring index to the VEE. Writing to storage can be a write command, and reading from storage can be a read command. For a write or read command, a free entry in the descriptor table can be identified and filled with the command, indicating that write or read, where the data should be written to or read from. The descriptor can be identified at a tail entry of the avail ring via a virtqueue and then the vhost target notified of an available descriptor. 
After the vhost target completes the IO operation, it can write the result of the processing on the status, then update the used ring, and write the index value of the descriptor in the tail entry of the used ring then notify the VEE (Examiner notes: descriptor passed into queue in order to translate command semantics from driver to device, and vice versa) … In some examples, descriptor format conversion can be used to modify descriptors using embodiments described herein).
With regard to claim 3, Kutch teaches the apparatus of claim 1, wherein:
the first communications comprise an event ([0032], Virtual device (VDEV) driver 304 can send a configuration command to FDR 320 (Examiner notes: the first communication) to connect FDR 320 to a virtualized interface exposed by VEE 302; [0033], VDEV driver 304 for VEE 302 can allocate kernel memory for descriptors and system memory for packet buffers and program FDR 320 to access those descriptors. For example, VDEV driver 304 can indicate descriptor buffer locations (e.g., Tx or Rx) to FDR 320. VDEV driver 304 can communicate with FDR 320 instead of NIC 330 to provide descriptors for packet transmit (Tx) or access descriptors for packet receive (Rx)) and
the circuitry comprises at least two cores (FIG. 5, FDR 510 contains processing cores; [0073], FIG. 13 depicts an example system … System 1300 includes processor 1310 which provides processing, operation management, and execution of instructions for system 1300. Processor 1310 can include any type of microprocessor, central processing unit (CPU), graphics processing unit (GPU), processing core, or other processing hardware to provide processing for system 1300, or a combination of processors (Examiner notes: such that includes a plurality of two or more processing cores)).
However, the combination does not explicitly teach load balancing and scheduling event processing among the plurality of at least two cores.
Chiang teaches at least two cores ([0016], IHS 100 includes a processor group 105 that includes multiple processors, namely processor 1, processor 2, …, processor N, wherein N is the total number of processors in processor group 105. Processor group 105 may include multiple processors, processor cores, or other processor elements) and second circuitry to load balance and schedule event processing among the at least two cores ([0016], FIG. 1 shows an information handling system 100 with a load balancing manager 180 that employs the disclosed load balancing methodology).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Chiang with the teachings of Kutch and Hart in order to provide an apparatus that teaches event scheduling and load balancing among a plurality of cores. The motivation for applying Chiang's teaching with the teachings of Kutch and Hart is to provide an apparatus that allows a multicore system and load balancer to schedule events across multiple cores, thereby enabling efficient use of processing resources (Chiang, [0026]). Kutch, Hart, and Chiang are analogous art directed towards load rebalancing techniques in distributed systems. Therefore, it would have been obvious for one of ordinary skill in the art to combine Chiang with Kutch and Hart to teach the claimed invention in order to provide multicore hardware capable of performing parallel execution efficiently through the use of load balancing techniques.
With regard to claim 5, Kutch teaches the apparatus of claim 1, wherein the circuitry is configured to route first communications to the circuitry instead of the physical device ([0025], Various embodiments provide for compatibility between virtual interfaces with a variety of NICs … At least to provide compatibility between virtual interfaces with a variety of NICs, various embodiments provide for descriptor format conversion in connection with packet transmission or receipt so that a virtualized execution environment (VEE) can utilize a driver for a NIC other than a NIC used to transmit or receive packets) based on incompatibility between a processor-executed driver and the physical device ([0003], There are multiple NIC vendors with a variety of capabilities and functionalities. Different NICs can support different formats of descriptors. However, developers such as firewall vendors or virtual network functions (VNF) developers face challenges with changing or updated NICs from repeated updating and re-validation of products in order to address potential driver incompatibility or changes in interface technology … Updates to kernel firmware or drivers can result in incompatibility with VF drivers).
With regard to claim 6, Kutch does not explicitly teach the elements that comprise a host system.
Hart teaches the apparatus of claim 1, comprising a host system communicatively coupled to the circuitry by the host interface, wherein the host system comprises one or more processors to execute an operating system ([0031], One or more hypervisors (or virtual machine managers) 114 and 116 can manage communications from VMs 106-112 to external resources. This management can facilitate the running of multiple operating systems on shared hardware (e.g., hosts or CPUs 102, 104). The hypervisors 114, 116 can provide the different instances of operating systems with access to the memory, processor(s), and other resources of the hosts 102, 104) and a driver to access the physical device as the virtual device ([0060], As viewed by a device driver, a common PCI address space is used for each VF and its corresponding virtual PCI bus (e.g., 0x00000000-0x1F000000). The host operating systems and the device adapter(s) are presented with this address, as shown in block 804).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Hart with the teachings of Kutch in order to provide an apparatus that teaches the configuration of a host system including processors executing an operating system and a driver element to access a virtualized I/O device instance. The motivation for applying Hart's teaching with Kutch's teaching is to provide an apparatus that allows a host system to transmit, receive, and execute requests directed towards mapped I/O devices (Hart, [0032]). Kutch and Hart are analogous art directed towards hypervisor-specific management and integration. Therefore, it would have been obvious for one of ordinary skill in the art to combine Hart with Kutch to teach the claimed invention in order to provide a host system architecture integrating a processor executing an operating system and a driver functioning as an interface to coordinate communication with associated virtual functions.
With regard to claim 7, Kutch teaches the apparatus of claim 1, wherein the virtual device is accessible using virtualization based on one or more of: Single Root I/O Virtualization (SR-IOV), and/or Scalable Input/Output (I/O) Virtualization (S-IOV) ([0005], Intel® scalable IOV (S-IOV) and single root I/O virtualization (SR-IOV) may provide virtual machines and containers access to a device using isolated shared physical function (PF) resources and multiple virtual functions (VFs) and corresponding drivers).
With regard to claim 8, Kutch teaches the apparatus of claim 1, comprising the physical device communicatively coupled to the circuitry, wherein the physical device comprises one or more of: a protocol engine, a storage controller, a network interface device, a graphics processing unit, and/or accelerator ([0032], A physical PCIe connected NIC 330 (e.g., a SR-IOV VF, S-IOV VDEV, or a PF) can be selected as a device that will receive and transmit packets or perform work at the request of VEE 302 … Note that while reference is made to a NIC, in addition or alternatively, NIC 330 can include a storage controller, a storage device, an infrastructure processing unit (IPU), data processing unit (DPU), accelerators (e.g., FPGAs), or hardware queue manager (HQM)).
With regard to claim 21, Kutch teaches the apparatus of claim 1, wherein:
the first communications comprise communications to be processed prior to being provided to a protocol engine associated with the physical device ([0040], For example, for packet transmit, VDEV driver 304 can place a packet into a memory buffer and writes to a Tx descriptor … Where configured to translate a descriptor, FDR 320 can translate the Tx descriptor (Examiner notes: Processing prior to transmitting) to a format recognized and properly readable by NIC 330 … FDR 320 can monitor the Tx descriptors provided by VDEV driver 304, translate recently written Tx descriptor into a descriptor format used by NIC 330, include in the translated Tx descriptor address of the data buffer to be transmitted, and write the translated descriptor into a ring that NIC 330 is monitoring) and
the second communications comprise communications directed to a protocol engine ([0040], Although if no descriptor translation is needed, FDR 320 can allow the Tx descriptor to be available without translation (Examiner notes: Second communication path without prior processing) … NIC 330 can read the Tx descriptor from a descriptor ring managed by FDR 320 and NIC 330 can access packet data from a memory buffer identified in the … untranslated … Tx descriptor by a DMA copy operation).
With regard to claim 22, Kutch teaches the apparatus of claim 1, wherein:
the circuitry is to perform ([0025], Various embodiments provide for compatibility between virtual interfaces with a variety of NICs):
fetch one or more descriptors from a descriptor queue in host memory ([0033], VDEV driver 304 for VEE 302 can allocate kernel memory for descriptors and system memory for packet buffers and program FDR (Examiner notes: Flexible Descriptor Representor) 320 to access those descriptors … VDEV driver 304 can allocate memory for packet buffers and Rx or Tx descriptors rings, and descriptor rings (queues) can be accessible to FDR 320, and some descriptor rings can be accessible to NIC 330),
process the one or more descriptors ([0039], FDR 320 can determine when NIC 330 updates an Rx descriptor or adds an Rx descriptor to a ring managed by FDR 320. Where configured to translate a descriptor, FDR 320 can translate the Rx descriptor to a format recognized and properly readable by VDEV driver 304),
copy the one or more descriptors to a descriptor queue ([0039], FDR 320 can provide the translated Rx descriptor ring accessible to VDEV driver 304. VDEV driver 304 can determine that an Rx descriptor is available to process by VEE 302).
However, Kutch and Hart do not explicitly teach selection of a particular core to perform event scheduling among threads in a ready state and deallocation among threads upon completion.
Chiang teaches select an event based on an arbitration scheme ([0011], Load balancing managers of the IHS may group threads that share data into data sharing threads known as process trees; [0026], Process tree 220 provides thread information to a thread dispatcher 230 as shown in FIG. 2 (Examiner notes: such that facilitates the flow of execution)),
select a core to process the selected event ([0036], Load balancing manager 180 assigns a home processor or home processor element to process tree 220 by assigning a home processor element identifier (HPEI) to process tree 220, as per block 325.), wherein the selected core is to arbitrate among threads in a ready state to select a thread to process the event ([0036], In other words, each thread, such as thread 240 of process tree 220, corresponds to a particular HPEI, such as the HPEI for processor 1. In this manner, processor 1 is the “home processor” of a process tree 220 that includes multiple threads, such as thread 240; [0037], Load balancing manager 180 populates the ready queue of the home processor with tree threads, as per block 327.), and
based on completion of event processing by the selected thread, set the selected thread state to free for allocation ([0043], Load balancing manager 180 continues testing if all thread execution is complete, as per block 340. Once all execution is complete, OS 190 ends the particular application and deletes process tree 220, as per block 385.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Chiang with the teachings of Kutch and Hart in order to provide an apparatus that teaches a process of scheduling and load balancing received events within a core on a thread. The motivation for applying Chiang's teaching with the teachings of Kutch and Hart is to provide an apparatus that allows for a scheduling and load balancing lifecycle that enables parallel processing of events across multiple cores while maintaining efficient resource utilization and minimizing resource idle time (Chiang, [0042]). Kutch, Hart, and Chiang are analogous art directed towards load rebalancing techniques in distributed systems. Therefore, it would have been obvious for one of ordinary skill in the art to combine Chiang with Kutch and Hart to teach the claimed invention in order to provide efficient scheduling and load balancing of multiple processor resources.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Kutch in view of Hart and Chiang as applied to claim 1 above, and further in view of Zhu et al. Pub. No. US 2022/0382466 A1 (hereinafter Zhu).
With regard to claim 4, Kutch teaches the apparatus of claim 1, wherein:
the first communications comprise an event ([0032], Virtual device (VDEV) driver 304 can send a configuration command to FDR 320 (Examiner notes: the first communication) to connect FDR 320 to a virtualized interface exposed by VEE 302; [0033], VDEV driver 304 for VEE 302 can allocate kernel memory for descriptors and system memory for packet buffers and program FDR 320 to access those descriptors. For example, VDEV driver 304 can indicate descriptor buffer locations (e.g., Tx or Rx) to FDR 320. VDEV driver 304 can communicate with FDR 320 instead of NIC 330 to provide descriptors for packet transmit (Tx) or access descriptors for packet receive (Rx)) and
the circuitry comprises at least one core (FIG. 5, FDR 510 contains processing cores; [0073], FIG. 13 depicts an example system … System 1300 includes processor 1310 which provides processing, operation management, and execution of instructions for system 1300. Processor 1310 can include any type of microprocessor, central processing unit (CPU), graphics processing unit (GPU), processing core, or other processing hardware to provide processing for system 1300, or a combination of processors).
However, the combination does not explicitly teach the at least one core performing concurrent execution while waiting for completion of a DMA operation.
Zhu teaches the at least one core is to process the event while waiting for completion of a direct memory access (DMA) operation for another event ([0053], Instead of waiting for the completion of the DMA read operation in idle, queue manager 640 may advance queue 630 to execute the next operation in the queue. In this way, the next memory operation in queue 630 can be executed concurrently with the DMA read operation, effectively harvesting the available margin of power source Vcc).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Zhu with the teachings of Kutch, Hart, and Chiang in order to provide an apparatus that teaches concurrent execution of operations during the execution of a direct memory access (DMA) operation. The motivation for applying Zhu's teaching with the teachings of Kutch, Hart, and Chiang is to provide an apparatus that allows for parallel execution of multiple instructions on a core, enabling efficient use of a processing resource by reducing idle power consumption and improving data transfer throughput (Zhu, [0054]). Kutch, Hart, Chiang, and Zhu are analogous art directed towards peripheral adapted interface arrangements. Therefore, it would have been obvious for one of ordinary skill in the art to combine Zhu with Kutch, Hart, and Chiang to teach the claimed invention in order to provide concurrent execution of events, thereby improving power consumption and data transfer throughput.
Claims 9-11, 13, 15-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kutch et al. Pub. No. US 2021/0232528 A1 (hereinafter Kutch) in view of Chiang et al. Pub. No. US 2012/0204188 A1 (hereinafter Chiang).
With regard to claim 9, Kutch teaches a method comprising ([0099], Illustrative examples of the devices, systems, and methods disclosed herein are provided):
a host system utilizing a physical device by device virtualization ([0030], Specialized software, called a hypervisor, emulates the PC client or server’s CPU, memory, hard disk, network and other hardware resources completely, enabling virtual machines to share the resources. The hypervisor can emulate multiple virtual hardware platforms that are isolated from each other, allowing virtual machines to run Linux®, FreeBSD, VMWare, or Windows® Server operating systems on the same underlying physical host; [0032], A physical PCIe connected NIC 330 (e.g., a SR-IOV, S-IOV VDEV, or a PF) can be selected as a device that will receive and transmit packets or perform work at the request of VEE 302) and accessing an intermediary device to accelerate communication with the physical device ([0009], FIG. 2 provides an overview of a system that uses vhost or virtual data path acceleration (vDPA). vDPA allows a connection between a VM or container and device to be established using virtio to provide a data-plane between a virtio driver executing within a VM and a SR-IOV VF and control-plane that is managed by a vDPA application … Live migration of a container and VM accessing a device using vDPA can be supported; [0034], FDR 320 can perform descriptor format conversion so that VEE 302 can utilize the same virtual interface to communicate with a NIC used by another core), wherein the intermediary device performs event translation ([0039], Where configured to translate a descriptor, FDR 320 can translate the Rx descriptor to a format recognized and properly readable by VDEV driver 304) and comprises circuitry to perform direct memory access (DMA) operations ([0039], For example, for packet receipt, NIC 330 can copy by direct memory access (DMA) data to destination location and provide an Rx descriptor to a descriptor ring managed by FDR 320 (Examiner notes: wherein FDR implicitly comprises DMA circuitry)).
However, Kutch does not explicitly teach circuitry configured to schedule one or more processors of the intermediary device for execution.
Chiang teaches a thread manager to load balance among multiple processors of the intermediary device ([0036], Load balancing manager 180 assigns a home processor or home processor element to process tree 220 by assigning a home processor element identifier (HPEI) to process tree 220) and select a thread ([0037], load balancing manager 180 dispatches with thread dispatcher 230 each thread in process order within process tree 220 to ready queue 1 and then to run queue 1 that corresponds to processor 1), which is substantially similar to claim 1 and is therefore rejected with similar rationale.
Examiner notes: one of ordinary skill in the art would recognize that the limitations of the apparatus of claim 1 are substantially recited again as limitations of the method of claim 9.
With regard to claim 10, Kutch teaches the method of claim 9, wherein the intermediary device performs event translation comprises retrieving a descriptor ([0050], At 706, the virtual interface can setup descriptor translation to be performed by the descriptor format translator so that the descriptor format received by the NIC or read by the VEE or its virtual interface are properly read. The manner of descriptor translation can be specified to translate a source descriptor to destination descriptor at a bit-by-bit and/or field-by-field basis) and performing descriptor format translation ([0052], Descriptor formation translation can include one or more of: copying one or more fields from a first descriptor to a second descriptor; expanding or contracting content in one or more fields in a first descriptor and writing the expanded or contracted content to one or more fields in a second descriptor; filling-in content or leaving blank one or more fields of the second descriptor where one or more fields are not completed in the first descriptor; and so forth) between one of multiple different device drivers and the physical device ([0057], While examples described in FIGS. 7A-7C are with respect to a NIC or network interface device, various embodiments can apply to any workload descriptor format translation for a device such as an accelerator, hardware queue manager (HQM), queue management device (QMD), storage controller, storage device, accelerator, and so forth)
With regard to claim 11, Chiang teaches the method of claim 9, wherein the circuitry performs load balancing of events processing among the multiple processors ([0013], In one embodiment of the disclosed load balancing methodology, a load balancing manager may allow idle processors to share thread execution with the home processor.).
With regard to claim 13, Kutch teaches the method of claim 9, wherein the physical device comprises one or more of: a protocol engine, a storage controller, a network interface device, a graphics processing unit, and/or accelerator ([0032], A physical PCIe connected NIC 330 (e.g., a SR-IOV VF, S-IOV VDEV, or a PF) can be selected as a device that will receive and transmit packets or perform wok at the request of VEE 302 … Note that the while refence is made to a NIC, in addition or alternatively, NIC 330 can include a storage controller, a storage device, an infrastructure processing unit (IPU), data processing unit (DPU), accelerators (e.g., FPGAs), or hardware queue manager (HQM)).
With regard to claim 15, Kutch teaches at least one non-transitory computer-readable medium comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to ([0074], These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium product an article of manufacture including instructions which implement the function/act specified):
configure circuitry of a host interface between a host system and a physical device, accessible by device virtualization, to route events to a physical device or to an intermediary device, wherein the intermediary device performs event translation ([0005], In certain embodiments of the present disclosure, virtual PCI functions can be migrated in a manner that is substantially transparent to device drivers. For instance, a device driver can continue to use the same PCI bus number, function number and address range for a particular virtual function both before and after a migration. This can be accomplished by assigning a virtual PCI bus to each virtual function and by performing a translation that include mapping each virtual PCI bus to real PCI busses and adapters and can also include the use of address offsets to accommodate direct memory access (DMA) to main memory) and comprises direct memory access (DMA) circuitry ([0039], For example, for packet receipt, NIC 330 can copy by direct memory access (DMA) data to destination location and provide an Rx descriptor to a descriptor ring managed by FDR 320 (Examiner notes: wherein FDR implicitly comprises DMA circuitry)).
However, Kutch does not explicitly teach circuitry configured to schedule one or more processors of the intermediary device for execution.
Chiang teaches circuitry to perform a thread manager to load balance among at least one processor of the intermediary device ([0036], Load balancing manager 180 assigns a home processor or home processor element to process tree 220 by assigning a home processor element identifier (HPEI) to process tree 220) and select a thread ([0037], load balancing manager 180 dispatches with thread dispatcher 230 each thread in process order within process tree 220 to ready queue 1 and then to run queue 1 that corresponds to processor 1) which is substantially similar to claim 1 and therefore rejected with similar rationale.
Examiner notes: It would be obvious for one of ordinary skill in the art to recognize that the apparatus of claim 1 is being substantially recited again as limitations for the non-transitory computer-readable medium of claim 15.
With regard to claim 16, Kutch teaches the non-transitory computer-readable medium of claim 15, wherein the intermediary device performs event translation comprises performing descriptor format translation ([0052], Descriptor formation translation can include one or more of: copying one or more fields from a first descriptor to a second descriptor; expanding or contracting content in one or more fields in a first descriptor and writing the expanded or contracted content to one or more fields in a second descriptor; filling-in content or leaving blank one or more fields of the second descriptor where one or more fields are not completed in the first descriptor; and so forth) between one of multiple different device drivers and the physical device ([0057], While examples described in FIGS. 7A-7C are with respect to a NIC or network interface device, various embodiments can apply to any workload descriptor format translation for a device such as an accelerator, hardware queue manager (HQM), queue management device (QMD), storage controller, storage device, accelerator, and so forth).
With regard to claim 17, Kutch teaches the non-transitory computer-readable medium of claim 15, wherein at least one event of the events comprise one or more of ([0041], FIGS. 4A and 4B depict an example of descriptor format translations for receive descriptors but translation can apply to transmit descriptors … Various examples relate to VDEV driver providing an empty descriptor to an FDR or descriptor translator and FDR or descriptor translator providing a descriptor for a received packet to the VDEV driver): translation of descriptor format into protocol-engine descriptor format ([0043], An FDR or descriptor translator may convert the descriptor format 400 to Rx descriptor format 402 where an Intel® E800 NIC is used. A VDEV driver may provide a buffer address value in the bits [63:0]. Fields VLAN Tag, Errors, Status, Fragment Checksum and Length are initialized to zero and can be filled-in on packet receipt by the NIC (Examiner notes: wherein the fields of the descriptor are provided by the protocol-engine descriptor format of the NIC)) or translation of descriptor format from protocol-engine descriptor format to driver format ([0044], As shown in FIG. 4B, the NIC provides an Rx descriptor corresponding to a received packet back to the VDEV driver … Translation and mapping can be performed such as field’s length in bits changed and only valid bits copied. For example, information in L2TAG1 of descriptor 450 can be translated and conveyed in VLAN Tag of descriptor 452 (Examiner notes: wherein translation occurs for all fields required by the driver format)).
With regard to claim 19, Kutch teaches the non-transitory computer-readable medium of claim 15, wherein the physical device comprises one or more of: a protocol engine, a storage controller, a network interface device, a graphics processing unit, and/or accelerator ([0032], A physical PCIe connected NIC 330 (e.g., a SR-IOV VF, S-IOV VDEV, or a PF) can be selected as a device that will receive and transmit packets or perform wok at the request of VEE 302 … Note that the while refence is made to a NIC, in addition or alternatively, NIC 330 can include a storage controller, a storage device, an infrastructure processing unit (IPU), data processing unit (DPU), accelerators (e.g., FPGAs), or hardware queue manager (HQM)).
With regard to claim 20, Kutch teaches the non-transitory computer-readable medium of claim 15, wherein the device virtualization is based on one or more of: Single Root I/O Virtualization (SR-IOV), and/or Scalable Input/Output (I/O) Virtualization (S-IOV) ([0005], Intel® scalable IOV (S-IOV) and single root I/O virtualization (SR-IOV) may provide virtual machines and containers access to a device using isolated shared physical function (PF) resources and multiple virtual functions (VFs) and corresponding drivers).
With regard to claim 23, Kutch teaches the non-transitory computer-readable medium of claim 15, wherein the intermediary device provides translated events to a protocol engine ([0040], For example, for packet transmit, VDEV driver 304 can place a packet into a memory buffer and writes to a Tx descriptor … Where configured to translate a descriptor, FDR 320 can translate the Tx descriptor (Examiner notes: Processing prior to transmitting) to a format recognized and properly readable by NIC 330 … FDR 320 can monitor the Tx descriptors provided by VDEV driver 304, translate recently written Tx descriptor into a descriptor format used by NIC 330, include in the translated Tx descriptor address of the data buffer to be transmitted, and write the translated descriptor into a ring that NIC 330 is monitoring) and the protocol engine issues commands to be performed by the physical device (FIG. 3, FDR exchanging descriptor to NIC 330; [0040], For example, a transmit descriptor can include one or more of: packet buffer address (e.g., physical or virtual), layer 2 tag, VLAN tag, buffer size, offset, command, descriptor type, and so forth (Examiner notes: such that the command executes within the NIC)).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Kutch in view of Chiang as applied to claim 9 above, and further in view of Hart et al. Pub. No. US 2015/0052279 A1 (hereinafter Hart).
With regard to claim 14, Kutch discloses virtual functions maintaining a base address register (Kutch, [0006]). However, the combination does not explicitly teach routing events based on the base address register (BAR) range associated with the events.
Hart teaches the method of claim 9, comprising:
a host system executing a driver that provides events to a host interface and the host interface is configured to route particular events to the intermediary device based on a base address register (BAR) range associated with the particular events ([0042], As discussed herein, the use of offsets can be particularly useful for managing direct memory access (DMA) by the adapters and device drivers. For instance, the physical range for a particular function could begin at 0xKxxxxxxx. The hypervisor can be used to program a base mapping register in the bridge to map the function to 0xKxxxxxxxx. If driver requests to map page 0x0 of the particular function, the hypervisor can write an entry 0xK00000000 into the bridge’s TCE table (the hypervisor thereafter adjusts mapping requests based on the DMA base address). If the corresponding adapter function presents DMA address 0x0 on the PCI bus, the PCI bridge also adjusts it to 0xK0000000 before consulting its TCE table).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Hart to the teachings of Kutch and Chiang in order to provide a method that teaches event routing using base address register (BAR) ranges. The motivation for applying Hart's teachings to those of Kutch and Chiang is to provide a method that allows for mitigation of the adverse effects of manipulating virtual addresses mapped to particular virtual functions (Hart, [0020]). Through the use of base address register ranges and offsets, access to peripheral devices can be routed simply using the base address and a given offset. Kutch, Chiang, and Hart are analogous art directed towards multiprogram arrangements. Therefore, it would have been obvious for one of ordinary skill in the art to combine Hart with Kutch and Chiang to teach the claimed invention in order to provide a streamlined method of accessing a physical device through base address register ranges associated with events.
Claims 12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kutch in view of Chiang as applied to claims 9 and 15, respectively, above, and further in view of Zhu et al. Pub. No. US 2022/0382466 A1 (hereinafter Zhu).
With regard to claim 12, Kutch teaches the method of claim 9, wherein the intermediary device comprises multiple processors ([0073], FIG. 13 depicts an example system … System 1300 includes a processor 1310) and DMA circuitry ([0049], the descriptor format translator can be setup to provide access to descriptors to a NIC … Other setup operations can be performed for the device such as input-output memory management unit (IOMMU) configuration that connects a DMA-capable I/O bus to main memory, interrupt setup, and so forth).
However, the combination does not explicitly teach at least one of the multiple processors processing an event while waiting for completion of a DMA operation for another event.
Zhu teaches at least one of the multiple processors processes an event while waiting for completion of a DMA operation for another event ([0053], Instead of waiting for the completion of the DMA read operation in idle, queue manager 640 may advance queue 630 to execute the next operation in the queue. In this way, the next memory operation in queue 630 can be executed concurrently with the DMA read operation, effectively harvesting the available margin of power source Vcc).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Zhu to the teachings of Kutch and Chiang in order to provide an apparatus that teaches concurrent execution of operations during the execution of a direct memory access (DMA) operation. The motivation for applying Zhu's teachings to those of Kutch and Chiang is to provide an apparatus that allows for parallel execution of multiple instructions on a core, enabling efficient use of a processing resource by reducing idle power consumption and improving data transfer throughput (Zhu, [0054]). Kutch, Chiang, and Zhu are analogous art directed towards peripheral adapted interface arrangements. Therefore, it would have been obvious for one of ordinary skill in the art to combine Zhu with Kutch and Chiang to teach the claimed invention in order to provide concurrent execution of events, thereby improving power consumption and data transfer throughput.
With regard to claim 18, Kutch teaches the non-transitory computer-readable medium of claim 15, wherein the intermediary device comprises the at least one processor ([0073], FIG. 13 depicts an example system … System 1300 includes a processor 1310).
However, the combination does not explicitly teach the at least one processor processing an event while waiting for completion of a DMA operation for another event.
Zhu teaches the at least one processor processes an event while waiting for completion of a DMA operation for another event ([0053], Instead of waiting for the completion of the DMA read operation in idle, queue manager 640 may advance queue 630 to execute the next operation in the queue. In this way, the next memory operation in queue 630 can be executed concurrently with the DMA read operation, effectively harvesting the available margin of power source Vcc) which is substantially similar to claim 12 and therefore rejected with similar rationale.
Examiner notes: It would be obvious for one of ordinary skill in the art to recognize that the method of claim 12 is being substantially recited again as limitations for the non-transitory computer-readable medium of claim 18.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IVAN A CASTANEDA whose telephone number is (571)272-0465. The examiner can normally be reached Monday-Friday 9:30AM-5:30PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee Li can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/I.A.C./Examiner, Art Unit 2195
/Aimee Li/Supervisory Patent Examiner, Art Unit 2195