Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-20 are presented for examination.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Willis (US 2019/0044809 A1).
As per claim 1, Willis teaches a system, wherein the system comprises:
an integrated processor; (Willis Fig 12 Block 1214 (Block Interface Controller))
a first hardware device; (Willis Fig 12 Block 1202 (Compute Engine))
a first storage (Willis Fig 12 Block 1206 (Memory)), wherein:
the integrated processor comprises at least one processor, at least one memory, and a queue, the at least one processor is connected to the at least one memory and the queue through an internal bus (Willis Fig 14 [the arrows between the messaging queue manager 1412 and the SMP array 1422 represent internal buses])
the queue is connected to the first hardware device through a network (Willis [0073] Accordingly, messages and/or network packet data may be passed therebetween via one or more communication links, such as PCIe interconnects, to provide access to the host memory 1330 (e.g., the memory 1206 of the compute engine 1202 of FIG. 12));
the first hardware device is configured to send a first notification message to the queue (Willis [0076] Each of the event logic units 1404 is configured to receive a particular type, or set of types, of event messages, as well as interpret and queue messages corresponding to the received messages. Additionally, each of the event logic units 1404 is configured to build data structures based on an associated protocol which is readable by the SMP array 1422. For example, the fabric event logic unit 1406 may be configured to receive and interpret host fabric interface events, such as a message indicating a network packet is to be transmitted between the NIC 1214 and one of the host CPUs 1328. [0088] In block 1512, the host CPU 1328 [hardware device] transmits a notification (i.e., a descriptor notification) to a flexible host interface (e.g., the illustrative flexible host interface 1314 of FIG. 14) of the NIC 1214 which notifies the NIC 1214 that the one or more generated descriptors have been placed into the array of descriptors and the corresponding index(es) in the descriptor ring);
wherein the first notification message indicates that there is to-be-transmitted data in the first storage (Willis [0088] In block 1512, the host CPU 1328 transmits a notification (i.e., a descriptor notification) to a flexible host interface (e.g., the illustrative flexible host interface 1314 of FIG. 14) of the NIC 1214 which notifies the NIC 1214 that the one or more generated descriptors have been placed into the array of descriptors and the corresponding index(es) in the descriptor ring. See also Fig 15 Steps 1502 and 1504);
the queue is configured to receive the first notification message and store the first notification message in a first hardware queue in the queue; (Willis [0077] The messaging queue manager 1412 is configured to manage event message queues 1414 between the event logic units 1404 and the SMP array 1422. To do so, the messaging queue manager 1412 is configured to create/destroy the appropriate event message queues 1414 based on the corresponding event logic units 1404. In some embodiments, the messaging queue manager 1412 is additionally configured to enqueue and dequeue messages into/from the appropriate one of the event message queues 1414. The illustrative messaging queue manager 1412 includes one or more fabric event queues 1416 (i.e., for queueing messages received from the fabric event logic unit 1406))
the at least one memory stores programming instructions for execution by the at least one processor to:
obtain the first notification message from the first hardware queue; (Willis [0091] The SMP array 1422 determines whether to retrieve a message in a message queue (e.g., one of the event message queues 1414, the DMA queues 1432, the descriptor queues 1442, the offload queues 1450, etc.). [0092] If the SMP array 1422 determines that a message is to be retrieved, the method 1700 advances to block 1704, in which the SMP array 1422 retrieves the message from the appropriate message queue).
access the to-be-transmitted data in the first storage based on the first notification message. (Willis [0101] In data flow 1838, the DMA engine 1434 fetches the payload of the network packet from the packet buffer host address (i.e., in host memory). … In data flow 1842, the DMA engine 1434 forwards the message to the SMP array 1422. See paragraph [0102] for more details about message processing).
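For illustration only, the notification flow mapped above (a hardware device writes to-be-transmitted data into storage, posts a notification message to a hardware queue, and the processor dequeues the message and fetches the data) can be sketched as follows. This is a hypothetical model of the claimed arrangement, not Willis's implementation; all class and field names are assumptions.

```python
# Hypothetical model of the claimed notification flow (illustrative only).
from collections import deque


class HardwareQueue:
    """Models the 'first hardware queue' buffering notification messages."""
    def __init__(self):
        self._messages = deque()

    def enqueue(self, msg):
        self._messages.append(msg)

    def dequeue(self):
        return self._messages.popleft()


class Storage:
    """Models the 'first storage' (e.g., host memory 1330) holding data."""
    def __init__(self):
        self._cells = {}

    def write(self, location, data):
        self._cells[location] = data

    def read(self, location):
        return self._cells[location]


class HardwareDevice:
    """Models the 'first hardware device' (e.g., a host CPU posting descriptors)."""
    def __init__(self, device_id, storage, queue):
        self.device_id = device_id
        self.storage = storage
        self.queue = queue

    def notify(self, location, data):
        self.storage.write(location, data)            # place to-be-transmitted data
        self.queue.enqueue({"device": self.device_id,
                            "location": location})    # descriptor-style notification


class IntegratedProcessor:
    """Models the processor that obtains the message and accesses the data."""
    def __init__(self, storage, queue):
        self.storage = storage
        self.queue = queue

    def service(self):
        msg = self.queue.dequeue()                    # obtain the notification
        return self.storage.read(msg["location"])     # access the data it points to


storage, queue = Storage(), HardwareQueue()
HardwareDevice("cpu0", storage, queue).notify(0x1000, b"payload")
print(IntegratedProcessor(storage, queue).service())
```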
As per claim 2, Willis teaches wherein the integrated processor is obtained by encapsulating the at least one processor and the queue into a chip. (Willis [0065] Accordingly, the communication circuitry 1212 may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication. As noted previously, the illustrative communication circuitry 1212 includes the NIC 1214, which may also be referred to as a smart NIC or an intelligent/smart host fabric interface (HFI), and is described in further detail in FIGS. 13 and 14. The NIC 1214 may be embodied as one or more add-in-boards, daughter cards, network interface cards, controller chips, chipsets, or other devices that may be used by the compute device 1200 to transmit/receive network communications to/from another compute device).
As per claim 3, Willis teaches wherein the queue comprises a plurality of hardware queues, the first hardware queue is one of the plurality of hardware queues, and the first hardware queue is configured to store a notification message of the first hardware device. (Willis [0077] The messaging queue manager 1412 is configured to manage event message queues 1414 between the event logic units 1404 and the SMP array 1422. To do so, the messaging queue manager 1412 is configured to create/destroy the appropriate event message queues 1414 based on the corresponding event logic units 1404. In some embodiments, the messaging queue manager 1412 is additionally configured to enqueue and dequeue messages into/from the appropriate one of the event message queues 1414. The illustrative messaging queue manager 1412 includes one or more fabric event queues 1416 (i.e., for queueing messages received from the fabric event logic unit 1406), one or more MMIO event queues 1418 (i.e., for queueing messages received from the MMIO event logic unit 1408), and one or more signaling event queues 1420 (i.e., for queueing messages received from the signaling event logic unit 1410)).
As per claim 4, Willis teaches wherein the queue is configured to identify, in the plurality of hardware queues comprised in the queue, the first hardware queue associated with the first hardware device, and store the first notification message in the first hardware queue. (Willis [0077] The messaging queue manager 1412 is configured to manage event message queues 1414 between the event logic units 1404 and the SMP array 1422. To do so, the messaging queue manager 1412 is configured to create/destroy the appropriate event message queues 1414 based on the corresponding event logic units 1404. In some embodiments, the messaging queue manager 1412 is additionally configured to enqueue and dequeue messages into/from the appropriate one of the event message queues 1414. The illustrative messaging queue manager 1412 includes one or more fabric event queues 1416 (i.e., for queueing messages received from the fabric event logic unit 1406), one or more MMIO event queues 1418 (i.e., for queueing messages received from the MMIO event logic unit 1408), and one or more signaling event queues 1420 (i.e., for queueing messages received from the signaling event logic unit 1410)).
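For illustration, the per-source queue selection described for the messaging queue manager 1412 (Willis [0077]) — one queue per event source, with messages routed to the queue associated with their originating device — can be sketched as follows. This is an illustrative model under assumed names, not Willis's implementation.

```python
# Illustrative sketch of per-device queue routing (hypothetical names).
from collections import defaultdict, deque


class MessagingQueueManager:
    """Creates one queue per event source and routes each message to it,
    loosely modeling the fabric/MMIO/signaling event queues of Willis [0077]."""
    def __init__(self):
        self._queues = defaultdict(deque)   # queue created on first use per source

    def enqueue(self, message):
        # Identify the queue associated with the message's source device,
        # then store the message in that queue.
        self._queues[message["device"]].append(message)

    def dequeue(self, device):
        return self._queues[device].popleft()


mgr = MessagingQueueManager()
mgr.enqueue({"device": "fabric", "event": "packet_ready"})
mgr.enqueue({"device": "mmio", "event": "doorbell"})
```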
As per claim 5, Willis teaches wherein the first hardware device is further configured to generate the first notification message, wherein the first notification message comprises a location identifier and an identifier of the first hardware device, and wherein the location identifier indicates a storage location of the to-be-transmitted data in the first storage. (Willis [0087] As described previously, descriptors are associated with a network packet, or a particular portion thereof. Accordingly, for example, one descriptor may correspond to a payload of the network packet, while another descriptor may correspond to a header of the network packet. The descriptor additionally includes information (i.e., descriptor information) usable to identify a storage location of the associated portion of the network packet, as well as a network protocol associated with a network packet. In some embodiments, the descriptor may include additional information which indicates or is otherwise usable to identify a type of data associated with the network packet, a packet flow of the network packet, etc. Accordingly, the descriptor information may be used to identify one or more operations to be performed thereon. [0088] In block 1506, the host CPU 1328 stores the generated descriptor(s) into an array of descriptors. In some embodiments, the array of descriptors may be structured as a descriptor table stored in a cache memory accessible by the host CPU 1328. It should be appreciated that, in other embodiments, the descriptor(s) may be placed into an alternative cache data structure. In block 1508, the host CPU 1328 identifies the array index corresponding to each location of the stored descriptor(s). In block 1510, the host CPU 1328 places the identified index(es) into a descriptor ring, which may be stored in a cache memory accessible by the host CPU 1328. 
In block 1512, the host CPU 1328 transmits a notification (i.e., a descriptor notification) to a flexible host interface (e.g., the illustrative flexible host interface 1314 of FIG. 14) of the NIC 1214 which notifies the NIC 1214 that the one or more generated descriptors have been placed into the array of descriptors and the corresponding index(es) in the descriptor ring. Additionally, in some embodiments, in block 1514, the host CPU 1328 may include in the notification an indication of the number of indices placed into the descriptor ring).
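The descriptor mechanism cited above (Willis [0087]-[0088]: a descriptor carrying a storage location, stored into an array, its index placed into a ring, and a notification sent identifying the indices) can be sketched as follows. The structure and field names are assumptions for illustration, not Willis's data layout.

```python
# Hypothetical sketch of the descriptor-ring mechanism of Willis [0087]-[0088].
from dataclasses import dataclass


@dataclass
class Descriptor:
    host_address: int   # storage location of the packet portion (location identifier)
    length: int
    protocol: str       # network protocol associated with the network packet


descriptor_array = []   # descriptor table in cache memory accessible by the CPU
descriptor_ring = []    # ring of array indices


def post_descriptor(desc, device_id="host_cpu"):
    """Store a descriptor, place its index into the ring, and build the
    descriptor notification (which may also carry the index count)."""
    descriptor_array.append(desc)               # block 1506: store the descriptor
    index = len(descriptor_array) - 1           # block 1508: identify its index
    descriptor_ring.append(index)               # block 1510: place index in ring
    return {"device": device_id,                # block 1512: notification contents
            "indices": [index],
            "count": 1}                         # block 1514: number of indices


notification = post_descriptor(Descriptor(0x8000, 1500, "ethernet"))
```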
As per claim 6, Willis teaches the queue is configured to send the first notification message to a first processor core, wherein the first processor core is a processor core in the at least one processor; and the first processor core is configured to obtain the first notification message from the first hardware queue, and obtain the to-be-transmitted data from the first storage based on a location identifier comprised in the first notification message. (Willis [0091] Referring now to FIG. 17, in use, the compute device 1200, or more particularly an SMP array of a flexible host interface of the NIC 1214 (the SMP array 1422 of the illustrative flexible host interface 1314 of FIG. 14), may execute a method 1700 for processing messages received at the SMP array. The method 1700 begins in block 1702, in which the SMP array 1422 determines whether to retrieve a message in a message queue (e.g., one of the event message queues 1414, the DMA queues 1432, the descriptor queues 1442, the offload queues 1450, etc.). As described previously, the queued messages may be formatted based on various different protocols. As such, for example, the event logic units 1404 are configured to build messages having data structures based on an associated protocol which is readable by the SMP array 1422. Accordingly, it should be appreciated that the SMP array 1422 can dynamically support processing of multiple protocols, such as may be based on assignment at initialization (e.g., of a virtual function which has been mapped to a particular message queue). [0092] If the SMP array 1422 determines that a message is to be retrieved, the method 1700 advances to block 1704, in which the SMP array 1422 retrieves the message from the appropriate message queue. In block 1706, the SMP array 1422 identifies a core (e.g., one of the processor cores 1424 of FIG. 14) to process the retrieved message.
In block 1708, the SMP array 1422, or more particularly the identified core of the SMP array 1422, processes the retrieved message to identify whether any long-latency operations (e.g., a DMA fetch operation) need to be performed. In block 1710, the SMP array 1422 determines whether any long-latency operations were identified. If so, the method 1700 branches to block 1712, in which the SMP array 1422 identifies instructions to be performed as a function of the identified long-latency operation. For example, if the long-latency operation has been identified as a DMA operation, the instructions may include a fetch instruction with any information necessary to perform the DMA fetch operation. In block 1714, the SMP array 1422 generates a message which includes the identified instructions. In block 1716, the SMP array 1422 includes the next step to be performed into the message (e.g., an operation to be performed subsequent to the long-latency operation having completed). In block 1718, the SMP array 1422 transmits the generated message to an appropriate hardware unit scheduler (e.g., the DMA controller 1438 of the illustrative DMA engine 1434 of FIG. 14) to perform the long-latency operation. See also paragraphs [0101]-[0102]).
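The message-processing loop of method 1700 cited above (retrieve a message, identify a core, and hand long-latency operations such as DMA fetches off to a hardware scheduler) can be sketched as follows. This is a hedged, illustrative reading of FIG. 17 and [0091]-[0092]; the names and round-robin core selection are assumptions, not Willis's design.

```python
# Illustrative sketch of the method-1700 processing loop (hypothetical names).
from collections import deque

message_queue = deque()   # stands in for one of the event/DMA/descriptor queues
dma_scheduler = []        # stands in for the DMA controller 1438 hand-off


def process_messages(cores=4):
    """Drain the queue: pick a core per message (blocks 1704-1706) and route
    long-latency work to the scheduler (blocks 1708-1718)."""
    results = []
    core = 0
    while message_queue:
        msg = message_queue.popleft()        # block 1704: retrieve the message
        core = (core + 1) % cores            # block 1706: identify a core (assumed
                                             # round-robin for illustration)
        if msg.get("long_latency"):          # blocks 1708-1710: long-latency op?
            dma_scheduler.append({           # blocks 1712-1718: build and transmit
                "instruction": "dma_fetch",  # a message with the instructions and
                "next_step": "transmit",     # the step to perform on completion
                "core": core,
                **msg})
        else:
            results.append((core, msg["payload"]))
    return results


message_queue.append({"payload": "hdr", "long_latency": False})
message_queue.append({"location": 0x9000, "long_latency": True})
done = process_messages()
```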
As per claim 7, Willis teaches wherein the first storage comprises a main memory. (Willis Fig 12 Block 1206 (Memory) and [0101] In data flow 1838, the DMA engine 1434 fetches the payload of the network packet from the packet buffer host address (i.e., in host memory). In data flow 1840, upon the fetch operation having completed (i.e., the payload has been fetched from host memory and stored in a temporary buffer, such as the SRAM 1322 or the DDR SDRAM 1326), the DMA engine 1434 generates a message which indicates the payload is available for transmission and to transmit the associated network packet. In data flow 1842, the DMA engine 1434 forwards the message to the SMP array 1422. As noted previously, while not illustratively shown, it should be appreciated that the message is queued in one of the descriptor queues 1442, which allows for the message to be retrieved by the SMP array 1422 at its discretion).
As per claim 8, Willis teaches wherein the network comprises Ethernet or peripheral component interconnect express. (Willis [0073] The flexible host interface 1314 may be embodied as any type of host interface device capable of performing the functions described herein. The flexible host interface 1314 is configured to function as an interface between each of the host CPUs 1328 (e.g., each of the processors 1204 of the compute engine 1202 of FIG. 12) and the NIC 1214. As illustratively shown, the flexible host interface 1314 is configured to function as an interface between the host CPUs 1328 (e.g., one of the processors 1204 of the compute engine 1202 of FIG. 12) and the memory fabric 1306 (e.g., via the memory fabric interface 1304), as well as function as an interface between the host CPUs 1328 and the infrastructure 1316. Accordingly, messages and/or network packet data may be passed therebetween via one or more communication links, such as PCIe interconnects, to provide access to the host memory 1330 (e.g., the memory 1206 of the compute engine 1202 of FIG. 12)).
As per claim 9, Willis teaches wherein the system is used in one of a storage array, a server, or a switch. (Willis [0056] The compute device 1200 may be embodied as a server (e.g., a stand-alone server, a rack server, a blade server, etc.), a compute node, a storage node, a switch (e.g., a disaggregated switch, a rack-mounted switch, a standalone switch, a fully managed switch, a partially managed switch, a full-duplex switch, and/or a half-duplex communication mode enabled switch), a router, and/or a sled in a data center (e.g., one of the sleds 204, 404, 504, 1004, 1130, 1132, 1134), any of which may be embodied as one or more physical and/or virtual devices. As shown in FIG. 12, the illustrative compute device 1200 includes a compute engine 1202, an input/output (I/O) subsystem 1208, one or more data storage devices 1210, communication circuitry 1212, and, in some embodiments, one or more peripheral devices 1216. Of course, in other embodiments, the compute device 1200 may include other or additional components, such as those commonly found in a compute device (e.g., a power supply, cooling component(s), a graphics processing unit (GPU), etc.). It should be appreciated that the types of components may depend on the type and/or intended use of the compute device 1200. For example, in embodiments in which the compute device 1200 is embodied as a compute sled in a data center, the compute device 1200 may not include the data storage devices. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component).
As to claim 10, it is rejected for the same reasons as claim 1.
As to claim 11, it is rejected for the same reasons as claim 2.
As to claim 12, it is rejected for the same reasons as claim 3.
As to claim 13, it is rejected for the same reasons as claim 5.
As to claim 14, it is rejected for the same reasons as claim 6.
As to claim 15, it is rejected for the same reasons as claim 1.
As to claim 16, it is rejected for the same reasons as claim 9.
As to claim 17, it is rejected for the same reasons as claim 7.
As to claim 18, it is rejected for the same reasons as claim 1.
As to claim 19, it is rejected for the same reasons as claim 8.
As to claim 20, it is rejected for the same reasons as claim 3.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20210109681 – discloses a host connected to an NVMe controller through a PCIe bus, with the NVMe controller connected to a storage medium. The NVMe controller receives from the host a data packet that carries payload data and an association identifier. The association identifier associates the payload data with a write instruction. The NVMe controller obtains the write instruction according to the association identifier, and then writes the payload data into the storage medium according to the write instruction.
US 20180159963 A1 – discloses a method for reading or writing data by a computer device. In the computer device, a central processing unit (CPU) is connected to a cloud controller using a double data rate (DDR) interface. Because the DDR interface has a high data transmission rate, interruption of the CPU can be avoided. In addition, the CPU converts a read or write operation request into a control command and writes the control command into a transmission queue in the cloud controller. Because the cloud controller performs a read operation or a write operation on a network device according to operation information in the control command, after writing the control command into the transmission queue, the CPU does not need to wait for an operation performed by the cloud controller and can continue to perform other processes.
US 20130290984 A1 – discloses a low overhead method to handle inter process and peer to peer communication. A queue manager is used to create a list of messages with minimal configuration overhead. A hardware queue can be connected to another software task owned by the same core or a different processor core, or connected to a hardware DMA peripheral. There is no limitation on how many messages can be queued between the producer and consumer cores. The low latency interrupt generation to the processor cores is handled by an accumulator inside the QMSS which can be configured to generate interrupts based on a programmable threshold of descriptors in a queue. The accumulator thus removes the polling overhead from software and boosts performance by doing the descriptor pops and message transfer in the background.
US 6782537 B1 – discloses a deterministic, non-deadlocking technique to achieving distributed consensus in a multithreaded multiprocessing computing environment is provided. A communicator is established across multiple processes in the multithreaded computer environment notwithstanding that multiple groups of threads may be simultaneously trying to establish communicators. The technique includes communicating across the multiple processes to establish a candidate identifier for the communicator for a group of participating threads of the multiple processes; and communicating across the multiple processes to check at each participating thread of the multiple processes whether the candidate identifier can be claimed at its process, and if so, claiming the candidate identifier as the new identifier thereby establishing the communicator. As one example, the technique can be implemented via a subroutine call within a message passing interface (MPI) library.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAN KAMRAN whose telephone number is (571)272-3401. The examiner can normally be reached 9-5.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair can be reached on (571)270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MEHRAN KAMRAN/ Primary Examiner, Art Unit 2196