DETAILED ACTION
This communication is responsive to Applicant’s amendment for application 16/049,216, dated 25 August 2025, filed in response to the 2 June 2025 Office Action rejecting claims 1-20.
Claims 1-20 remain pending in the application and have been fully considered by the examiner.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Examiner Notes
Examiner cites particular paragraphs or columns and lines in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or discussed by the examiner.
Response to Arguments
Applicant’s primary argument is directed to:
(A) Claim 1 is patentable over the cited references because they do not disclose “predicting that a networking command is to be issued from an APD to a network interface controller (“NIC”), the predicting including detecting that the APD converts an APD command queue from being inactive to being active”. In particular, Applicant argues that the pre-encoding of a frame and sending it to a NIC, as disclosed by the Raduchel reference, is not the same thing as “predicting that a network command is to be issued” (see Applicant’s remarks, pages 7-9). Applicant further argues that independent claims 10 and 19 are patentable for similar reasons, and that claims 2-9, 11-18, and 20 are patentable as depending from claims 1, 10, and 19, respectively.
Regarding (A), Examiner respectfully disagrees with this argument. As disclosed by Raduchel, pre-encoding (interchangeably referred to as pre-rendering) “requires some ability to predict future events…In this scenario, rendering can be performed arbitrarily ahead of time, in accordance with predicted future conditions…”; paragraphs [0125]-[0126]. “To save even more time, pre-rendered frames can also be pre-encoded when encoder bandwidth is available. Pre-encoded frames can be sent to the client device 520 ahead of time when network bandwidth over the network 105 is available. When predictions are successful, all network delays are effectively eliminated”; (emphasis added), paragraph [0128]. These passages clearly show a process of predicting that commands (the frames) will need to be sent and thus sending them ahead of time, thereby eliminating delays when the predictions are successful, disclosing the claimed “predicting that a networking command is to be issued from an APD to a network interface controller (“NIC”), the predicting including detecting that the APD converts an APD command queue from being inactive to being active”. Therefore, this argument is unpersuasive.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 10-15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Raduchel et al. (U.S. 2018/0063555) (Hereinafter Raduchel) in view of Ku, Kie Bong (U.S. 2012/0155205) (Hereinafter Ku), further in view of Han et al. (PacketShader: A GPU-Accelerated Software Router) (Hereinafter Han), further in view of Wang et al. (U.S. 7,307,998) (Hereinafter Wang), and further in view of Nakibly et al. (U.S. 10,228,869) (Hereinafter Nakibly – art made of record).
As per claim 1, Raduchel discloses a method for improving network-related performance for accelerated processing devices (“APDs”) (see for example Raduchel, this limitation is disclosed such that there is a system capable of providing performance enhancement when using network-enabled graphics processing with a GPU (i.e. accelerated processing device); paragraphs [0011]-[0012]), the method comprising:
predicting that a networking command is to be issued from an APD to a network interface controller (“NIC”) (see for example Raduchel, this limitation is disclosed such that GPU chip 310 transmits its output to the NIC while the output is being generated, which allows the host NIC to begin encapsulation and transmission even before rendering and/or encoding is complete; paragraphs [0087]-[0088]. Predicted pre-encoded frames are sent [from the GPU to the NIC] ahead of time when network bandwidth is available (i.e. predicting that a networking command is to be issued from an APD to a NIC); paragraph [0128]),
Although Raduchel discloses an accelerated processing device (APD), Raduchel does not explicitly teach predicting including detecting that a processing device converts a processing device command queue from being inactive to active, wherein the processing device command queue includes commands for execution by the processing device, and wherein the processing device executes commands from active processing device command queues and does not execute commands from inactive command queues.
However, Ku discloses detecting that a processing device converts a processing device command queue from being inactive to active (see for example Ku, this limitation is disclosed such that when a buffer (i.e. queue) control signal Buf_ctrl is activated after being deactivated, a command buffer which has been deactivated (i.e. inactive) is activated (i.e. converting a command queue from being inactive to active); paragraph [0064]. The command buffer is part of a semiconductor memory apparatus controlled by a buffer control unit; paragraph [0011]),
wherein the processing device command queue includes commands for execution by the processing device (see for example Ku, this limitation is disclosed such that the command buffer is configured to buffer external commands, which are used to generate and output internal commands of the semiconductor memory apparatus (i.e. the buffered commands are for execution by the semiconductor memory apparatus, being used to generate the internal commands); paragraph [0011]), and
wherein the processing device executes commands from active processing device command queues and does not execute commands from inactive command queues (see for example Ku, this limitation is disclosed such that the command buffer is configured to buffer the external command and output the internal command when the buffer control signal is activated (i.e. commands are only executed from active processing device command queues and not executed from inactive command queues); paragraph [0011]).
Raduchel in view of Ku is analogous art because they are from the same field of endeavor, processing management.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method as taught by Raduchel by using a command buffer that can be activated and deactivated to store commands as taught by Ku because it would enhance the teaching of Raduchel with an effective means of performing a self-refresh operation, preventing commands from being input while a buffer is deactivated, as well as reducing current consumption by circuits used by the buffer (as suggested by Ku, see for example paragraphs [0069]-[0070]).
Raduchel in view of Ku does not explicitly teach the limitation wherein an APD command queue includes a networking command to be issued from the APD to a NIC.
However, Han discloses the limitation wherein an APD command queue includes a networking command to be issued from the APD to a NIC (see for example Han, this limitation is disclosed such that GPUs (i.e. a GPU being an accelerated processing device (APD)) are used for general packet processing; p.196 section 2. A GPU-accelerated IPv4 table lookup runs in the following order. In a pre-shading step, a worker thread fetches a chunk of packets. The worker thread collects packets that require slow-path processing (e.g., destined to local, malformed, TTL expired, or marked as wrong IP checksum by NICs) and passes them onto a Linux TCP/IP stack. For the remaining packets the worker thread updates TTL and checksum fields, gathers destination IP addresses into a new buffer, and passes the pointer to a master thread. In a shading step, the master thread transfers the IP addresses into the GPU memory and launches the GPU kernel to perform the table lookup. The GPU kernel returns a pointer of the buffer holding the next-hop information for each packet. The master thread copies the result from device memory to host memory, and then passes it to the worker thread. In a post-shading step, the worker thread distributes packets into NIC ports based on the forwarding decision; p.203 section 6.2.1), and
Raduchel in view of Ku is analogous art with Han because they are from the same field of endeavor, processing management.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method as taught by Raduchel in view of Ku by pre-fetching and using GPU based networking as taught by Han because it would enhance the teaching of Raduchel in view of Ku with an effective means of using a GPU-accelerated software router framework with the benefit of low cost and high programmability (as suggested by Han, see for example p.195 col.2 ¶3).
Raduchel in view of Ku, further in view of Han does not explicitly teach, responsive to predicting, sending a pre-fetch request, to NIC, to pre-fetch a first network queue metadata into the NIC.
However, Wang discloses, responsive to predicting, sending a pre-fetch request, to the NIC, to pre-fetch a first network queue metadata into the NIC (see for example Wang, this limitation is disclosed such that a receive buffer ring is used to store receive buffer descriptors (RBDs). Each RBD consists of the address and length of a receive buffer in host memory, each receive buffer storing a network packet (i.e. a receive buffer descriptor (RBD) corresponds to claimed “first network queue metadata” that identifies a receive buffer (network command queue) that stores (is associated with) a network packet (network command)); col.1 lines 39-58. The RBDs in host memory are prefetched into a NIC’s RBD cache; col.10 lines 28-58. RBD management includes characteristics prediction; col.6 lines 1-28. The host driver software sets up the RBD rings in host memory in response to statistics about packet size gathered in the network interface, or other information useful in predicting characteristics of network traffic (i.e. responsive to predicting). Packets that qualify for a given ring, based on packet size, are identified in the interface; col.6 lines 23-28. Buffer rings are set up based on packet size; col.7 line 56 – col.8 line 8).
Raduchel in view of Ku, further in view of Han is analogous art with Wang because they are from the same field of endeavor, processing management.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method as taught by Raduchel in view of Ku, further in view of Han by pre-fetching packet descriptors identifying a host buffer into a NIC as taught by Wang because it would enhance the teaching of Raduchel in view of Ku, further in view of Han with an effective means of allowing fast descriptor access to transfer packets (as suggested by Wang, see for example col.10 lines 28-58).
Raduchel in view of Ku, further in view of Han, further in view of Wang does not explicitly teach the limitation wherein a first network queue metadata indicates a location of a network command buffer; and executing a network command from the network command buffer using the network queue metadata to access the network command in the network command buffer.
However, Nakibly discloses the limitation wherein a first network queue metadata indicates a location of a network command buffer; and executing a network command from the network command buffer using the network queue metadata to access the network command in the network command buffer (see for example Nakibly, this limitation is disclosed such that a network interface card (NIC) such as an Ethernet controller executes a set of packet processing tasks for transmitting and/or receiving data packets (i.e. executing a network command using the network queue metadata to access the network command). The packet processing tasks access shared context data (i.e. network queue metadata) that includes (i) a memory address pointing to where packet information is to be read from to generate a data packet for transmission, (ii) a network address associated with a source of the packet, (iii) protocol information associated with the network protocol of the packet, and (iv) a queue pointer such as a head pointer or a tail pointer of a queue associated with information used for processing the packet (i.e. indicating a location of a network command buffer). Context data can be updated when the packet processing tasks are executed. For example, the address of the memory location where packet payload is stored can be updated for the generation and/or processing of data packets that carry data stored in different memory address locations, and the head and tail pointers of a queue storing memory descriptors can be updated after a packet is retired; col.2 lines 5-31. Each context access request may include a read operation to the context data; col.12 lines 26-59).
Raduchel in view of Ku, further in view of Han, further in view of Wang is analogous art with Nakibly because they are from the same field of endeavor, processing management.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method as taught by Raduchel in view of Ku, further in view of Han, further in view of Wang by using context data for packet processing using memory and network addresses as taught by Nakibly because it would enhance the teaching of Raduchel in view of Ku, further in view of Han, further in view of Wang with an effective means of executing multiple packet processing tasks of transmitting and/or receiving data packets over a network (as suggested by Nakibly, see for example col.1 lines 6-23).
As per claim 2, Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly discloses the method of claim 1 (see rejection of claim 1 above), Raduchel further disclosing that a processing device is an APD (see for example Raduchel, this limitation is disclosed such that there is a GPU (i.e. accelerator) connected to a NIC; paragraph [0087]).
Although Raduchel discloses that a processing device is an APD, Raduchel does not explicitly teach that a processing device command queue is included in a set of processing device command queues, each of which are either active or inactive on the processing device, wherein the processing device executes commands from active processing device command queues but not from inactive processing device command queues.
However, Ku discloses that a processing device command queue is included in a set of processing device command queues, each of which are either active or inactive on the processing device, wherein the processing device executes commands from active processing device command queues but not from inactive processing device command queues (see for example Ku, this limitation is disclosed such that the semiconductor memory apparatus includes a plurality of buffers (i.e. a given command queue is part of a set of command queues); paragraph [0012]. When a buffer (i.e. queue) control signal Buf_ctrl is activated after being deactivated, a command buffer which has been deactivated (i.e. inactive) is activated (i.e. converting a command queue from being inactive to active); further, when the buffer control signal is deactivated, the command buffer is deactivated (i.e. command queues are either active or inactive on the processing device); paragraphs [0063]-[0064]. The command buffer is configured to buffer the external command and output the internal command when the buffer control signal is activated (i.e. commands are only executed from active processing device command queues and not executed from inactive command queues); paragraph [0011]).
As per claim 3, Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly discloses the method of claim 2 (see rejection of claim 2 above), Raduchel further disclosing that a processing device is an APD (see for example Raduchel, this limitation is disclosed such that there is a GPU (i.e. accelerator) connected to a NIC; paragraph [0087]).
Although Raduchel discloses that a processing device is an APD, Raduchel does not explicitly teach each processing device command is associated with a network command queue.
However, Wang discloses that each processing device command is associated with a network command queue (see for example Wang, this limitation is disclosed such that each receive buffer (i.e. network command queue) stores a network packet (i.e. each processing device command is associated with a network command queue); col.1 lines 39-58).
As per claim 4, Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly discloses the method of claim 1 (see rejection of claim 1 above), wherein the network interface controller (“NIC”) is in the same computer as the APD (see for example Raduchel, this limitation is disclosed such that there is a host with a GPU and NIC; paragraph [0087]).
As per claim 5, Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly discloses the method of claim 4 (see rejection of claim 4 above), Wang further disclosing in response to the pre-fetch request, pre-fetching network queue metadata into hardware slots of the NIC (see for example Wang, this limitation is disclosed such that RBDs in host memory are prefetched into a NIC’s RBD cache, initiating a transaction that initializes the NIC’s registers; col.5 lines 9-47, col.10 lines 28-58. RBD management includes characteristics prediction; col.6 lines 1-28).
As per claim 6, Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly discloses the method of claim 1, Raduchel further disclosing an APD (see for example Raduchel, this limitation is disclosed such that there is a GPU (i.e. accelerator) connected to a NIC; paragraph [0087]).
Raduchel does not explicitly teach that a device command queue is associated with a network command queue by one of an application, driver, or command processor.
However, Wang discloses that a device command queue is associated with a network command queue by one of an application, driver, or command processor (see for example Wang, this limitation is disclosed such that interactions with a NIC are controlled through a driver; col.5 lines 9-47, col.10 lines 28-58).
Regarding claim 10, it is a system claim having similar limitations cited in claim 1. Thus, claim 10 is also rejected under the same rationales as cited in the rejection of claim 1.
Regarding claim 11, it is a system claim having similar limitations cited in claim 2. Thus, claim 11 is also rejected under the same rationales as cited in the rejection of claim 2.
Regarding claim 12, it is a system claim having similar limitations cited in claim 3. Thus, claim 12 is also rejected under the same rationales as cited in the rejection of claim 3.
Regarding claim 13, it is a system claim having similar limitations cited in claim 4. Thus, claim 13 is also rejected under the same rationales as cited in the rejection of claim 4.
Regarding claim 14, it is a system claim having similar limitations cited in claim 5. Thus, claim 14 is also rejected under the same rationales as cited in the rejection of claim 5.
Regarding claim 15, it is a system claim having similar limitations cited in claim 6. Thus, claim 15 is also rejected under the same rationales as cited in the rejection of claim 6.
Regarding claim 19, it is a system claim having similar limitations cited in claim 1. Thus, claim 19 is also rejected under the same rationales as cited in the rejection of claim 1.
Regarding claim 20, it is a system claim having similar limitations cited in claims 2 and 4 or 6. Thus, claim 20 is also rejected under the same rationales as cited in the rejections of claims 2 and 4 or 6.
Claims 7-9 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Raduchel (U.S. 2018/0063555) in view of Ku (U.S. 2012/0155205), further in view of Han (PacketShader: A GPU-Accelerated Software Router), further in view of Wang (U.S. 7,307,998), further in view of Nakibly (U.S. 10,228,869) as applied to claims 1 and 10 above, respectively, and further in view of Manula et al. (U.S. 2014/0181241) (Hereinafter Manula).
As per claim 7, Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly discloses the method of claim 1 (see rejection of claim 1 above), but does not explicitly teach the limitation wherein first network queue metadata indicates to a NIC a location of a first network command queue.
However, Manula discloses the limitation wherein first network queue metadata indicates to a NIC a location of a first network command queue (see for example Manula, this limitation is disclosed such that metadata includes information about where to place data; paragraph [0047]).
Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly is analogous art with Manula because they are from the same field of endeavor, processing management.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method as taught by Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly by prefetching metadata about where to place data as taught by Manula because it would enhance the teaching of Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly with an effective means of optimizing send queue cache logic (as suggested by Manula, see for example paragraph [0048]).
As per claim 8, Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly, further in view of Manula discloses the method of claim 7, further comprising:
issuing, from the APD, network-related commands, to the NIC by placing the network-related commands into the first network command queue (see for example Manula, this limitation is disclosed such that commands are processed as work requests on queues of a queue pair; paragraph [0028]).
As per claim 9, Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly discloses the method of claim 1 (see rejection of claim 1 above), but does not explicitly teach the limitation wherein improving networking-related performance includes locating, by the NIC, the second network command using the second network queue metadata, and reading and executing commands from the second network command queue.
However, Manula discloses locating, by the NIC, the second network command using the second network queue metadata (see for example Manula, this limitation is disclosed such that metadata includes information about where to place data; paragraph [0047]); and
reading and executing commands from the second network command queue (see for example Manula, this limitation is disclosed such that work requests are executed; paragraph [0005]).
Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly is analogous art with Manula because they are from the same field of endeavor, processing management.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method as taught by Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly by prefetching metadata about where to place data as taught by Manula because it would enhance the teaching of Raduchel in view of Ku, further in view of Han, further in view of Wang, further in view of Nakibly with an effective means of optimizing send queue cache logic (as suggested by Manula, see for example paragraph [0048]).
Regarding claim 16, it is a system claim having similar limitations cited in claim 7. Thus, claim 16 is also rejected under the same rationales as cited in the rejection of claim 7.
Regarding claim 17, it is a system claim having similar limitations cited in claim 8. Thus, claim 17 is also rejected under the same rationales as cited in the rejection of claim 8.
Regarding claim 18, it is a system claim having similar limitations cited in claim 9. Thus, claim 18 is also rejected under the same rationales as cited in the rejection of claim 9.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN R LABUD whose telephone number is (571)270-5174. The examiner can normally be reached Monday - Thursday 10am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, APRIL BLAIR can be reached at (571)270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.R.L/ Examiner, Art Unit 2196
/APRIL Y BLAIR/ Supervisory Patent Examiner, Art Unit 2196