Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 7, 8, 10, 11, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over RUSSKIKH (US 20230262123) in view of ELZU (US 20040042458).
Regarding claims 1, 10, 16, RUSSKIKH (US 20230262123) teaches an apparatus comprising:
a network interface device (fig. 2) comprising:
circuitry to perform header splitting with payload reordering for one or more packets received at the network interface device from a sender network interface device (par. 26, The receiving NIC includes a receiving queue 220 formed similar to that of the transmit queue 212 to load received packets in a sequenced manner that corresponds to the ordered sequence of packets transmitted by the transmit NIC 112. For one embodiment, the receiving NIC 128 includes header-splitter hardware that, for each packet exiting the queue 220, automatically splits the packet header from the data payload. A storage controller employed by the header-splitter hardware manages loading of the split headers into a header buffer 222 and loading the split data segments sequentially into a data buffer 224…the sequential loading of the data segments into the data buffer is carried out to match the ordered sequencing of the as-received packets), wherein the payload reordering comprises:
store a first payload of a first packet of the one or more packets from the sender network interface device at a first location in a buffer based on a first sequence of the first packet (par. 26, 35, The receiving NIC includes a receiving queue 220 formed similar to that of the transmit queue 212 to load received packets in a sequenced manner that corresponds to the ordered sequence of packets transmitted by the transmit NIC 11),
store a second payload of a second packet of the one or more packets from the sender network interface device at a second location in the buffer based on a second sequence of the second packet (par. 26, 35, The receiving NIC includes a receiving queue 220 formed similar to that of the transmit queue 212 to load received packets in a sequenced manner that corresponds to the ordered sequence of packets transmitted by the transmit NIC 11), and
circuitry to copy headers and payloads associated with the one or more packets to at least one memory device (par. 26, load the remote memory 122 in a manner that fully mirrors or duplicates the transmitting device memory 106...A storage controller employed by the header-splitter hardware manages loading of the split headers into a header buffer 222 and loading the split data segments sequentially into a data buffer 224…the sequential loading of the data segments into the data buffer is carried out to match the ordered sequencing of the as-received packets).
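For illustration only, the header-splitting behavior quoted from RUSSKIKH par. 26 (split each packet into header and payload; load headers into a header buffer and payloads sequentially into a data buffer, matching the as-received order) may be sketched as follows. The function and variable names are hypothetical and appear in neither reference:

```python
# Illustrative sketch of header splitting with sequential payload
# placement, per RUSSKIKH par. 26. All names are hypothetical.

def split_packets(packets, header_len):
    """Split each received packet into a header and a payload.

    Headers are loaded into a header buffer; payloads are appended to a
    data buffer in arrival order, matching the transmitted sequence.
    """
    header_buffer = []
    data_buffer = bytearray()
    for pkt in packets:
        header_buffer.append(pkt[:header_len])   # split-off header
        data_buffer += pkt[header_len:]          # payload, in arrival order
    return header_buffer, bytes(data_buffer)
```

For example, two packets with 2-byte headers, `b"H1aaa"` and `b"H2bbb"`, yield headers `[b"H1", b"H2"]` and a data buffer `b"aaabbb"` in as-received order.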
However, RUSSKIKH does not teach store a first payload of a first packet based on a first sequence number of the first packet,
store a second payload of a second packet based on a second sequence number of the second packet,
for a sequence number gap between the first and second sequence numbers, reserve a region in the buffer, between the first and second payloads, for a payload of a third packet associated with a sequence number in the sequence number gap.
But, ELZU (US 20040042458) in a similar or same field of endeavor teaches wherein the payload reordering (fig. 3B, 5B, par. 68, 70) comprises:
store a first payload of a first packet of the one or more packets from the sender network interface device at a first location in a buffer based on a first sequence number of the first packet (fig. 2A, 3B, 5B, par. 60, 64, 65, The first byte of the buffer may correspond to a particular TCP sequence value. Other bytes in the TCP segment may be placed via offsets in the buffer that may correspond to respective deltas in the TCP sequence space with respect to the sequence value of the first byte…a TCP window may be defined in TCP sequence space. In one embodiment, the TCP window may have a left boundary at a TCP sequence value of RCV_NXT and a right boundary at a TCP sequence value of RCV_NXT+RCV_WIND. RCV_NXT may be a variable, for example, used to keep track of the next expected sequence number to be received by a receiver),
store a second payload of a second packet of the one or more packets from the sender network interface device at a second location in the buffer based on a second sequence number of the second packet (fig. 2A, 2B, 2C, 3B, 5B, par. 60, 64, 65, 66, the first byte of the buffer may correspond to a particular TCP sequence value. Other bytes in the TCP segment may be placed via offsets in the buffer that may correspond to respective deltas in the TCP sequence space with respect to the sequence value of the first byte… an out-of-order frame may be received by the network subsystem 50. The TEEC 70 may parse and may process the out-of-order frame… the data information of the out-of-order frame may be stored in the host memory), and
for a sequence number gap between the first and second sequence numbers, reserve a region in the buffer, between the first and second payloads, for a payload of a third packet associated with a sequence number in the sequence number gap (fig. 2B, 3B, 5B, par. 65, 66, 67, 68, The first hole may be defined by at least two variables: Hole_1_Start and Hole_1_End. Hole_1_Start may be defined, for example, as the TCP sequence value of the beginning of the first hole in TCP sequence space. Hole_1_End may be defined, for example, as the TCP sequence value of the ending of the first hole in TCP sequence space… the placement of the data information from the in-order frame may modify the first hole and the TCP window. The first hole may be reduced in size and one or more the first hole variables may be updated… an in-order frame may be received by the network subsystem 50. The placement of the data information from the in-order frame may completely plug the first hole and modify the TCP window).
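For illustration only, ELZU's placement of payloads at buffer offsets derived from TCP sequence numbers, with a hole reserved between out-of-order payloads for a missing segment, may be sketched as follows. The buffer size, sequence values, and names are hypothetical:

```python
# Illustrative sketch of sequence-number-based payload placement with a
# reserved region (a "hole") for a missing segment, in the manner ELZU
# describes. Buffer size and sequence values are hypothetical.

def place_payload(buffer, base_seq, seq, payload):
    """Write a payload at the offset given by its TCP sequence number.

    The offset is the delta between the segment's sequence number and
    the sequence number of the buffer's first byte, so an out-of-order
    segment leaves a hole that a later segment can plug.
    """
    offset = seq - base_seq
    buffer[offset:offset + len(payload)] = payload
    return buffer

buf = bytearray(12)                       # region covering seq 1000..1011
place_payload(buf, 1000, 1000, b"AAAA")   # in-order payload at offset 0
place_payload(buf, 1000, 1008, b"BBBB")   # out-of-order: seq 1004..1007 stays reserved
place_payload(buf, 1000, 1004, b"CCCC")   # late segment plugs the hole
```

After the third call the hole is plugged and the buffer reads `AAAACCCCBBBB`, matching the transmitted byte order.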
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the system or method as taught by ELZU in the system of RUSSKIKH to reorder payloads.
The motivation would have been to prevent unnecessary retransmissions and to reduce and optimize bandwidth usage.
Regarding claims 2, 11, 17, RUSSKIKH (US 20230262123) teaches the apparatus of claim 1, wherein the perform header splitting with payload reordering for one or more packets received at the network interface device comprises perform payload reordering into the buffer based on a transmitter-specified order (par. 26, The receiving NIC includes a receiving queue 220 formed similar to that of the transmit queue 212 to load received packets in a sequenced manner that corresponds to the ordered sequence of packets transmitted by the transmit NIC 112... A storage controller employed by the header-splitter hardware manages loading of the split headers into a header buffer 222 and loading the split data segments sequentially into a data buffer 224…the sequential loading of the data segments into the data buffer is carried out to match the ordered sequencing of the as-received packets).
Regarding claim 3, RUSSKIKH (US 20230262123) teaches the apparatus of claim 1, wherein the perform header splitting with payload reordering for one or more packets received at the network interface device (par. 26, the receiving NIC 128 includes header-splitter hardware that, for each packet exiting the queue 220, automatically splits the packet header from the data payload. A storage controller employed by the header-splitter hardware manages loading of the split headers into a header buffer 222 and loading the split data segments sequentially into a data buffer 224. For one embodiment, the data buffer is configured as a ring buffer, including multiple storage elements and a head pointer indicating a location of an oldest data entry, and a tail pointer indicating a location of a newest data entry. For such an embodiment, the sequential loading of the data segments into the data buffer is carried out to match the ordered sequencing of the as-received packets) comprises: split the one or more received packets into headers and payloads (par. 26, the receiving NIC 128 includes header-splitter hardware that, for each packet exiting the queue 220, automatically splits the packet header from the data payload. A storage controller employed by the header-splitter hardware manages loading of the split headers into a header buffer 222 and loading the split data segments sequentially into a data buffer 224) and store a header of the headers into a first buffer (par. 26, the receiving NIC 128 includes header-splitter hardware that, for each packet exiting the queue 220, automatically splits the packet header from the data payload. A storage controller employed by the header-splitter hardware manages loading of the split headers into a header buffer 222 and loading the split data segments sequentially into a data buffer 224).
Regarding claim 7, RUSSKIKH (US 20230262123) teaches the apparatus of claim 1, wherein the network interface device comprises one or more of: network interface controller (NIC), a remote direct memory access (RDMA)-enabled NIC, SmartNIC, router, switch, forwarding element, infrastructure processing unit (IPU), or data processing unit (DPU) (fig. 7, par. 26, 40, NIC, RDMA).
Regarding claim 8, RUSSKIKH (US 20230262123) teaches the apparatus of claim 1, comprising a server comprising a memory, wherein the server is communicatively coupled to the network interface device and wherein the memory comprises the at least one memory device (fig. 2, par. 21, 26, server with NIC and memory).
Regarding claim 15, RUSSKIKH teaches the at least one computer-readable medium of claim 10, wherein a driver is to configure the network interface device (par. 22, driver).
Claims 4, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over RUSSKIKH (US 20230262123) and ELZU (US 20040042458) as applied to claims 1, 11, 16 above, and further in view of PHILBRICK et al. (US 20040158640).
Regarding claims 4, 12, 18, RUSSKIKH teaches the apparatus of claim 1, wherein the perform header splitting with payload reordering for the one or more packets received at the network interface device (fig. 4, par. 15, 26, 32, 36, 39, detect the header segment based on identifying descriptor information indicating at least a destination location of the second memory location for the payload segment…and a tail pointer indicating a location of a newest data entry…The header descriptor information may include service information such as a current packet offset (buffer offset));
However, RUSSKIKH does not teach that the header splitting comprises: determine addresses in the buffer to which to copy portions of the one or more received packets based on a base address of a destination memory address and data of the packet.
But, PHILBRICK et al. (US 20040158640) in a similar or same field of endeavor teaches wherein the network interface device comprises determine addresses in the buffer to which to copy portions of the one or more received packets based on a base address of a destination memory address and data of the packet (fig. 17, par. 460, 461, 462, 476, the buffer field 404 of the header buffer 406 contains the address of the data SDBHANDLE 408 structure (0x1000) (address) with the bottom bit set to indicate that the data buffer is at offset 1 (address) within the two part data SDB (data buffer)… If the data resides in the header buffer, then we adjust the buffer descriptor such that it points to the data portion of the header buffer (beneath the status word, etc). Conversely, if the data resides in the data buffer, we use the buffer descriptor associated with the data buffer to point to the data buffer, and we use the packet descriptor associated with the header buffer to point to the data buffer descriptor).
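For illustration only, computing a destination buffer address from a base address plus packet-derived data (here, a buffer offset carried with the packet, in the manner suggested by the buffer descriptors quoted above) may be sketched as follows. The field name is hypothetical:

```python
# Illustrative sketch: destination address = base address + an offset
# derived from the packet's descriptor data. The "buffer_offset" field
# name is hypothetical and appears in no reference of record.

def destination_address(base_addr, packet):
    """Return the buffer address to which this packet's payload is copied."""
    return base_addr + packet["buffer_offset"]
```

For example, a base address of 0x1000 and a packet carrying a buffer offset of 0x40 would place the payload at 0x1040.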
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the system or method as taught by PHILBRICK in the system of RUSSKIKH and ELZU to determine where to store the payload.
The motivation would have been that the vast majority of network message data is moved directly from the INIC into its final destination, and the data may be moved in a single trip across the system memory bus (PHILBRICK par. 12).
Claims 5, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over RUSSKIKH (US 20230262123), ELZU (US 20040042458), and PHILBRICK et al. (US 20040158640) as applied to claims 4, 12, 18 above, and further in view of FAN (US 7512144).
Regarding claims 5, 13, 19, RUSSKIKH does not teach the apparatus of claim 4, wherein the data is based on one or more of: sequence numbers, length, line number, length, or a base sequence number.
But, FAN (US 7512144) in a similar or same field of endeavor teaches wherein the data is based on one or more of: sequence numbers, length, line number, length, or a base sequence number (claim 1, communicating said TCP sequence number to a local host that converts said TCP sequence number to a buffer descriptor index and a byte offset utilized for retrieving said data to be retransmitted from a memory within said host).
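For illustration only, the conversion FAN's claim 1 describes, from a TCP sequence number to a buffer descriptor index and byte offset, may be sketched as follows, assuming equally sized buffers. The buffer size, base sequence value, and names are hypothetical:

```python
# Illustrative sketch of mapping a TCP sequence number to a buffer
# descriptor index and byte offset, per FAN claim 1. The fixed buffer
# size and names are hypothetical assumptions.

BUF_SIZE = 2048  # bytes covered by each buffer descriptor (assumed)

def seq_to_location(seq, base_seq, buf_size=BUF_SIZE):
    """Map a sequence number to (descriptor index, byte offset)."""
    delta = seq - base_seq
    return delta // buf_size, delta % buf_size
```

For example, with a base sequence of 0 and 2048-byte buffers, sequence number 5000 falls in descriptor 2 at byte offset 904.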
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the system or method as taught by FAN in the system of RUSSKIKH, ELZU, and PHILBRICK to use the sequence number to determine the memory location.
The motivation would have been to track the packet sequence number and be able to determine missing packets.
Claims 6, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over RUSSKIKH (US 20230262123) and ELZU (US 20040042458) as applied to claims 1, 10, 16 above, and further in view of KHAN et al. (US 20200204657).
Regarding claims 6, 14, 20, RUSSKIKH does not explicitly teach the apparatus of claim 1, comprising processor-executed software to perform header reordering into at least one buffer for the one or more packets received at the network interface device.
But, KHAN et al. (US 20200204657) in a similar or same field of endeavor teaches comprising processor-executed software to perform header reordering into at least one buffer for the one or more packets received at the network interface device (par. 23, the specified packet segments associated with the packets are DMA copied by the CHM capable network interface controller (110), into header mbuf ring (121) [step 292] and the rest of the packet is mapped to payload mbuf ring (122)… The header mbuf rings (121) are directly manipulated by the processor (112) (e.g. Internet Protocol routing) to take decisions. The contiguous nature of the headers (that are much smaller than the entire packet) and regularity of strides of header access, more number of headers are packed into Layer 1 Data Cache (L1D Cache) due to automatic hardware prefetching).
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the system or method as taught by KHAN in the system of RUSSKIKH and ELZU to store the header.
The motivation would have been to improve storage and provide quick access to headers for processing.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over RUSSKIKH (US 20230262123) and ELZU (US 20040042458) as applied to claim 8 above, and further in view of LIAO et al. (US 20230156102).
Regarding claim 9, RUSSKIKH does not teach the apparatus of claim 8, comprising a datacenter, wherein the datacenter includes the server and the network interface device and a second network interface device that is to transmit packets to the network interface device and specify an order of payload storage in the at least one memory device.
But, LIAO et al. (US 20230156102) in a similar or same field of endeavor teaches comprising a datacenter, wherein the datacenter includes the server and the network interface device and a second network interface device that is to transmit packets to the network interface device and specify an order of payload storage in the at least one memory device (fig. 2, par. 22, 24, 149, server or data center).
Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement the system or method as taught by LIAO in the system of RUSSKIKH and ELZU to implement the system in a datacenter.
The motivation would have been to improve storage in the datacenter, as the overhead for memory accesses in the driver stack is reduced.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THINH D TRAN whose telephone number is (571) 270-3934. The examiner can normally be reached Monday-Friday, 9:00 AM-6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, FARUK HAMZA, can be reached at (571) 272-7969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THINH D TRAN/for /Thinh Tran/, Patent Examiner of Art Unit 2466 01/15/2026