Prosecution Insights
Last updated: April 18, 2026
Application No. 18/432,986

LOW LATENCY COMMUNICATION CHANNEL OVER A COMMUNICATIONS BUS USING A HOST CHANNEL ADAPTER

Status: Non-Final OA (§103)
Filed: Feb 05, 2024
Examiner: OCAK, ADIL
Art Unit: 2426
Tech Center: 2400 — Computer Networks
Assignee: Mellanox Technologies Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 4m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 74% — above average (279 granted / 376 resolved; +16.2% vs TC avg)
Interview Lift: +18.3% on resolved cases with interview
Typical Timeline: 2y 4m average prosecution; 21 applications currently pending
Career History: 397 total applications across all art units

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 57.9% (+17.9% vs TC avg)
§102: 21.7% (-18.3% vs TC avg)
§112: 6.5% (-33.5% vs TC avg)

Comparison values are Tech Center average estimates; based on career data from 376 resolved cases.

Office Action

§103
DETAILED ACTION

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/16/2026 has been entered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Amendment

This Office Action is made in response to the amendment filed 2/17/2026. Claim 1 is amended.

Response to Arguments

Applicant's arguments (see "Remarks" made in the Amendment filed 2/17/2026) and those presented in the previous response have been fully considered but are not persuasive, for at least the reasons set forth in the Advisory Action mailed 3/4/2026, which is incorporated herein by reference. In the instant Office Action, the Examiner has modified the applied prior art by replacing secondary reference ZUR (Pub. No. US 2013/0147643) with Zur (U.S. Patent No. 12,141,093), which more clearly and explicitly teaches the amended limitation, including (i) writing a message into a Completion Queue (CQ), and/or (ii) writing a Completion Queue Entry (CQE) based on a received message. Zur teaches queue-based processing of work requests and completion handling, including storing work queue elements and updating queue indices and acknowledgements, which correspond to completion queue operations (see rejections below). The substitution of Zur does not change the underlying rationale of the rejection, but rather provides clearer evidence of the teachings relied upon. Accordingly, Applicant's arguments have been fully considered but remain unpersuasive, and the rejections are maintained.
Examiner's Suggestions for Possible Amendments

While the rejection is maintained, the Examiner notes that certain additional structural limitations may further distinguish over the cited art. The Examiner has verified that Applicant's specification supports these suggestions:
• Explicitly reciting that the exact message received by the Host Channel Adapter (HCA) is directly written, without transformation, into the Completion Queue (CQ) of the second device.
• Reciting that the second device performs the one or more tasks only in response to reading a Completion Queue Entry (CQE) from the CQ.
• Reciting additional structural or operational constraints on how completion information is generated and written into the CQ that are not taught by the cited art.
• Further clarifying the relationship between the received message and the resulting CQE written to the CQ, such as specifying that the CQE corresponds directly to the received message.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 5-9, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kagan et al., Pub. No. US 2011/0270917 (hereafter Kagan), and further in view of Zur et al., Pat. No. US 12,141,093 (hereafter ZUR).

Regarding Claim 1, Kagan discloses a first device [para.0019: Discloses a client device.] comprising: a control circuit controlling operation of the device [FIG.2, para.0076: Discloses that the execution unit (element 52 - a control circuit) is responsible for writing new entries to the LDB (element 68 - a local database) as it prepares request messages for transmission over fabric (element 26 - an InfiniBand architecture network). The execution unit (element 52) controls operations and message generation, corresponding to the claimed control circuit.], wherein, when the device is coupled with an intra-node communications bus [para.0054: Discloses a system implemented either as hardware circuits or as software processes running on a programmable processor, or as a combination of hardware- and software-implemented elements, with all of the elements of the HCA implemented in a single integrated circuit chip; and para.0055: Discloses a PCI bus as the intra-node communications bus, with both the host and the HCA sharing a local address space.], the control circuit causes the device to: write, via the intra-node communications bus, a message to a Host Channel Adapter (HCA) coupled with the intra-node communications bus [FIG.1, para.0051-0052: Discloses a Host Channel Adapter (HCA) on a communications bus; and para.0055: Discloses that host (element 24) posts work queue elements (WQEs) for a "queue pair" (QP) by writing work request descriptors (element 44) in memory (writing a message to the HCA).
After host (element 24) has prepared one or more descriptors, it "rings" a doorbell (element 50) of HCA (element 22) by writing to a corresponding doorbell address occupied by the HCA in the address space on the host bus. The doorbell thus serves as an interface between host (element 24) and HCA (element 22); and para(s).0076-0077: Discloses preparing request messages for transmission over fabrics… channel adapters… IB fabrics… Thus, Kagan teaches transmitting messages over a communication fabric to channel adapters, corresponding to writing via an intra-node bus to an HCA.]

Kagan does not explicitly disclose wherein the message comprises a predetermined number of bytes indicating a request for one or more tasks to be performed by a second device coupled with the communications bus, and wherein the HCA writes the message into a Completion Queue (CQ) of the second device.

However, in analogous art, ZUR discloses that a work queue can include multiple WQEs, each of which corresponds to a data transfer task (col.3 lines 48-49), and that a WQE corresponds to a data transfer task and can include information needed for the data transfer task, e.g., the address information (col.22, lines 31-34). ZUR further discloses that the work queue buffer includes a plurality of slots where work queue elements (WQEs) can be placed. A slot is a portion of the work queue buffer configured to store a WQE. Because each slot is configured to store a WQE, the WQE has a defined size corresponding to the slot size. Accordingly, each WQE comprises a predetermined number of bytes representing a task request. ZUR also discloses that a slot has an index indicating a position of the slot in the work queue. The index can also be the index of the WQE in the slot. A WQE includes information needed for sending data from the send buffer to the receive buffer. For instance, the WQE includes an address of the send buffer (also referred to as the "local address") and an address of the receive buffer (also referred to as the "remote address"). The work queue has a producer index (PI), which refers to the next slot where a new WQE can be placed. The PI may equal the index of the last WQE in the work queue plus 1. The work queue also has a consumer index (CI), which refers to the next WQE to be processed and completed. The CI may be the same as the index of the next WQE (col.3 lines 5-22). FIG.11 discloses storing the work queue element (WQE) in the slot based on the local producer index (block 1140). That each slot is configured to hold a WQE implies a defined size, and thus a predetermined number of bytes. Thus, ZUR teaches that WQEs are stored in predefined slots within a work queue buffer; because each slot is configured to store a WQE, the WQE must have a defined size corresponding to the slot size, and therefore comprises a predetermined number of bytes indicating a task request. ZUR also teaches queue insertion and completion tracking via consumer index updates and acknowledgments, corresponding to CQ write and completion behavior (FIG.11 steps 1140-1170).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kagan such that the message comprises a predetermined number of bytes indicating a request for one or more tasks to be performed by a second device coupled with the communications bus, and the HCA writes the message into a Completion Queue (CQ) of the second device, as taught by ZUR, in order to yield predictable results such as improved structured task representation and efficient queue-based processing with defined message formats (ZUR: col.1 lines 25-27).
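The slot-based work queue that the rejection attributes to ZUR — fixed-size WQE slots addressed by a producer index (PI) pointing at the next free slot and a consumer index (CI) pointing at the next WQE to process — can be sketched as a small ring-buffer model. This is an illustrative sketch only, not code from any cited reference; the names (`WorkQueue`, `post_wqe`, `complete_next`) and field layout are invented for illustration.

```python
# Illustrative model (not from any cited reference) of a slot-based work
# queue: each slot holds one fixed-size WQE, the producer index (PI)
# refers to the next free slot, and the consumer index (CI) refers to
# the next WQE to be processed and completed.

class WorkQueue:
    def __init__(self, num_slots):
        self.slots = [None] * num_slots   # fixed-size slots; WQE size is predetermined
        self.pi = 0                       # producer index: next slot for a new WQE
        self.ci = 0                       # consumer index: next WQE to process

    def post_wqe(self, local_addr, remote_addr):
        """Place a WQE (task request with send/receive buffer addresses) in the PI slot."""
        if self.pi - self.ci >= len(self.slots):
            raise RuntimeError("work queue full")
        wqe = {"local_addr": local_addr, "remote_addr": remote_addr}
        self.slots[self.pi % len(self.slots)] = wqe
        self.pi += 1                      # PI = index of last WQE + 1
        return wqe

    def complete_next(self):
        """Retire the WQE at the CI slot, advancing CI; None if nothing is outstanding."""
        if self.ci == self.pi:
            return None
        wqe = self.slots[self.ci % len(self.slots)]
        self.ci += 1
        return wqe

wq = WorkQueue(num_slots=4)
wq.post_wqe(local_addr=0x1000, remote_addr=0x2000)
wq.post_wqe(local_addr=0x1040, remote_addr=0x2040)
first = wq.complete_next()  # retires the oldest posted WQE
```

The only point the model is meant to make is the one the rejection relies on: a slot-per-WQE layout forces a fixed entry size, and PI/CI arithmetic alone tracks posting and completion.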
Regarding Claim 2, the combined teachings of Kagan and ZUR disclose the device of claim 1, and Kagan further discloses wherein the control circuit is provided on a Data Processing Unit (DPU) [para.0056: Discloses a send data engine (SDE - DPU) that gathers data to be sent from the locations in memory specified by the WQEs and places the data in output packets for transmission over network (element 26). The data packets prepared by the SDE are passed to an output port (element 56), which performs data link operations and other necessary functions.].

Regarding Claim 5, the combined teachings of Kagan and ZUR disclose the device of claim 1, and Kagan further discloses wherein the message comprises a pointer to a memory address [para.0055: Discloses that the data source information typically includes a "gather list," pointing to the locations in memory from which the data in the outgoing message are to be taken.].

Regarding Claim 6, the combined teachings of Kagan and ZUR disclose the device of claim 1, and Kagan further discloses wherein the message comprises code [para.0067: Discloses an opcode.].

Regarding Claim 7, the combined teachings of Kagan and ZUR disclose the device of claim 1, and Kagan further discloses wherein the message comprises a counter [para.0072: Discloses an LDB-use counter indicating the number of outstanding message records that the QP is currently holding in LDB memory. This counter is incremented each time an entry for the QP is pushed into the LDB memory, and decremented each time an entry is popped.].

Regarding Claim 8, the combined teachings of Kagan and ZUR disclose the device of claim 1, and Kagan further discloses wherein the message comprises a notification [para.0055: Discloses that when the host writes a WQE descriptor to the queue, it also rings the doorbell register; this is the notification that tells the HCA new work is available.].
Regarding Claim 9, Kagan discloses a Host Channel Adapter (HCA) [para.0014: Discloses an HCA comprising a local database (LDB) for holding context information regarding outstanding request messages sent by the HCA.] comprising: a control circuit controlling operation of the HCA [FIG.1, para.0051-0052: Discloses a Host Channel Adapter (HCA) on a communications bus; and para.0054: Discloses a system implemented as hardware circuits or as software processes running on a programmable processor, or as a combination of hardware- and software-implemented elements, with all of the elements of the HCA implemented in a single integrated circuit chip; and FIG.2, para.0076: Discloses that the execution unit (element 52 - a control circuit) is responsible for writing new entries to the LDB (element 68 - a local database) as it prepares request messages for transmission over fabric (element 26 - an InfiniBand architecture network). The execution unit (element 52) controls operations and message generation, corresponding to the claimed control circuit.], wherein, when the HCA is coupled with an intra-node communications bus [para.0055: Discloses a PCI bus as the intra-node communications bus, with both the host and the HCA sharing a local address space.], the control circuit causes the HCA to: receive, via the intra-node communications bus, a message from a first device coupled with the intra-node communications bus [para(s).0055, 0076-0077: Discloses that the HCA is coupled to the host via a PCI bus and that messages are transmitted and received over the communication fabric between devices and channel adapters. These disclosures collectively correspond to receiving a message via the intra-node communications bus.]

Kagan does not explicitly disclose the message indicating a request from the first device for one or more tasks to be performed by a second device coupled with the intra-node communications bus; and write a Completion Queue Entry (CQE) based on the received message to a Completion Queue (CQ) of the second device via the intra-node communications bus.

However, in analogous art, ZUR discloses that a work queue can include multiple work queue elements (WQEs), each corresponding to a data transfer task (col.3 lines 48-49), and that a WQE includes information needed for sending data, e.g., the address information (col.3 lines 19-22, col.22 lines 31-34). ZUR teaches that WQEs represent requests for tasks (data transfers) issued between devices, satisfying the claimed message indicating a request for tasks. ZUR further discloses that the work queue has a consumer index (CI), which refers to the next WQE to be processed and completed, and that acknowledgments are provided and the consumer index is updated accordingly (col.3 lines 19-22, FIG.11 steps 1140-1170, FIG.12 steps 1250-1260). ZUR teaches queue insertion and completion tracking via consumer index updates and acknowledgments, corresponding to CQ write and completion behavior. Thus, ZUR teaches processing received work requests and updating queue structures to reflect completion via consumer index updates and acknowledgments, corresponding to generating a completion entry (CQE) and writing it to a completion queue (CQ), as claimed.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kagan with these features, as taught by ZUR, in order to yield predictable results such as improved structured task representation and efficient queue-based processing with defined message formats (ZUR: col.1 lines 25-27).
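The completion path recited in claim 9 — an HCA receiving a message over the intra-node bus and writing a CQE derived from that message into the second device's CQ, which the second device then polls — can be modeled in a few lines. This is a hypothetical sketch of the claimed behavior, not code from Kagan or ZUR; all class, method, and field names are invented.

```python
# Hypothetical sketch (names and fields invented) of the claimed path:
# the HCA receives a message from a first device and writes a Completion
# Queue Entry (CQE), derived from that message, into the second device's
# Completion Queue (CQ); the second device reads the CQE and performs
# the requested task.

class SecondDevice:
    def __init__(self):
        self.cq = []          # Completion Queue: CQEs awaiting the reader
        self.performed = []   # tasks carried out after reading a CQE

    def poll_cq(self):
        """Read the next CQE from the CQ and perform the requested task."""
        if not self.cq:
            return None
        cqe = self.cq.pop(0)
        self.performed.append(cqe["task"])
        return cqe

class HCA:
    def __init__(self, second_device):
        self.second_device = second_device

    def receive_message(self, message):
        """Write a CQE based on the received message to the second device's CQ."""
        cqe = {"task": message["task"], "src": message["src"]}
        self.second_device.cq.append(cqe)
        return cqe

dev2 = SecondDevice()
hca = HCA(dev2)
hca.receive_message({"task": "copy_block", "src": "first_device"})
cqe = dev2.poll_cq()  # second device reads the CQE and performs the task
```

Note how the model separates the two steps the claim distinguishes: the HCA (not the first device) writes the CQE, and the second device acts only upon reading it — the distinction the Examiner's amendment suggestions above also turn on.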
Regarding Claim 13, Kagan discloses a system comprising: an intra-node communications bus [para.0055: Discloses a PCI bus as the intra-node communications bus, with both the host and the HCA sharing a local address space.]; a first device coupled with the intra-node communications bus, the first device comprising a control circuit controlling operation of the first device [para.0019: Discloses a client interface for coupling to a client device (a first device) so as to receive from the client device work requests to send the messages over the network using a plurality of transport service instances, each of the messages being associated with a respective one of the transport service instances; and para.0004: Discloses that the HCA is connected to the host bus; and para.0005: Discloses that after the client's work request is received, the channel adapter executes the work items; and FIG.1, para(s).0051-0052: Discloses system components coupled via a communications bus; and para.0054: Discloses implementation via hardware/software including programmable processors; and FIG.2, para.0076: Discloses execution unit (element 52) controlling operations, corresponding to a control circuit.]; a Host Channel Adapter (HCA) coupled with the intra-node communications bus, the HCA comprising a control circuit controlling operation of the HCA [para.0014: Discloses a Host Channel Adapter (HCA - i.e., a network interface/adapter device); and FIG.2, para.0076: Discloses that the execution unit (element 52) controls operations and message generation.]; and a second device coupled with the intra-node communications bus, the second device comprising a control circuit controlling operation of the second device [para.0058: Discloses handling work requests by host (element 24) to send outgoing request message packets over network (element 26); thus, the request message indicates one or more tasks to be performed; and para.0054: Discloses that processor (element 24) communicates via network (element 26) with other HCAs, such as remote HCA (element 28 - a second device); and FIG.1: Discloses multiple devices (i.e., first, second, third… devices) coupled via the bus; and para.0054: Discloses devices implemented with control circuitry.], wherein:

the control circuit controlling operation of the first device causes the first device to write a message to the HCA via the intra-node communications bus [para(s).0055, 0076-0077: Discloses that the HCA is coupled to the host via a PCI bus and that messages are transmitted and received over the communication fabric between devices and channel adapters, thus teaching devices communicating over the PCI bus/fabric and sending messages between devices.];

the control circuit controlling operation of the HCA causes the HCA to receive the message from the first device via the intra-node communications bus [para(s).0055, 0076-0077: Discloses that the HCA is coupled to the host via a PCI bus and that messages are transmitted and received over the communication fabric between devices and channel adapters. These disclosures collectively correspond to receiving a message via the intra-node communications bus.], and write a Completion Queue Entry (CQE) based on the received message to a Completion Queue (CQ) of the second device via the intra-node communications bus [para(s).0059-0060: Discloses generating and placing a CQE on a CQ, notifying completion.]; and

the control circuit controlling operation of the second device causes the second device to read the CQE from the CQ of the second device and perform the one or more tasks [para(s).0059-0060: Discloses completion notification and processing of completed work.].

Kagan does not explicitly disclose the message indicating a request for one or more tasks to be performed by the second device.
However, in analogous art, ZUR discloses that a work queue can include multiple work queue elements (WQEs), each corresponding to a data transfer task (col.3 lines 48-49), and that a WQE includes information needed for performing the data transfer, including source and destination address information (col.3 lines 19-22, col.22 lines 31-34). ZUR further discloses processing the WQE by executing an RDMA operation to transfer data (example 18: col.24 lines 41-54), thereby indicating that the requested tasks are carried out by a device other than the originating device; the processing corresponds to execution of the requested task. Accordingly, the WQE represents a message indicating a request for one or more tasks to be performed by another device, corresponding to the claimed second device.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kagan with these features, as taught by ZUR, in order to yield predictable results such as improved structured task representation and efficient queue-based processing with defined message formats (ZUR: col.1 lines 25-27).

Regarding Claim 20, the combined teachings of Kagan and ZUR disclose the system of claim 13, and Kagan further discloses wherein the CQ of the second device stores CQEs from a plurality of devices coupled with the intra-node communications bus [para(s).0005-0006: Discloses that the HCA writes a completion queue element (CQE) to a completion queue (memory), to be read by the client. A given HCA will serve simultaneously both as a requester, transmitting requests and receiving responses on behalf of local clients (plural), and as a responder, receiving requests from other channel adapters and returning responses accordingly.] and wherein the first device is one of the plurality of devices [para.0012: Discloses providing devices (plural) and methods for interfacing a host processor to a network, while affording enhanced efficiency in maintaining and accessing context information needed to process outstanding messages (plural).].

Claims 3-4 and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Kagan et al., Pub. No. US 2011/0270917 (hereafter Kagan), in view of Zur et al., Pat. No. US 12,141,093 (hereafter ZUR), and further in view of LeBeane et al., Pat. No. US 10,936,533 (hereafter LeBeane).

Regarding Claim 3, the combined teachings of Kagan and ZUR disclose the device of claim 1, but do not explicitly disclose wherein the control circuit is provided on a Central Processing Unit (CPU). However, in analogous art, LeBeane in FIG.2 and col.3, lines 55-67 discloses a CPU (element 210) that includes any suitable general purpose processing unit or processor core and a GPU (element 220) that includes any suitable graphics processing unit or graphics processor core. CPU (element 210) and GPU (element 220) can be disposed on separate dies or packages, or can be cores on the same die, such as in an accelerated processing unit (APU). CPU (element 210) and GPU (element 220) can be implemented, for example, on a single die as processor (element 102).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kagan and ZUR with wherein the control circuit is provided on a Central Processing Unit (CPU), as taught by LeBeane, in order to yield predictable results such as the CPU performing other functions during a data transfer between the main memory and the hardware subsystem, or between main memories of two computer systems (LeBeane: col.1, lines 27-30).
Regarding Claim 4, the combined teachings of Kagan and ZUR disclose the device of claim 1, but do not explicitly disclose wherein the control circuit is provided on a Graphics Processing Unit (GPU). However, in analogous art, LeBeane in FIG.2 and col.3, lines 55-67 discloses a CPU (element 210) that includes any suitable general purpose processing unit or processor core and a GPU (element 220) that includes any suitable graphics processing unit or graphics processor core. CPU (element 210) and GPU (element 220) can be disposed on separate dies or packages, or can be cores on the same die, such as in an accelerated processing unit (APU). CPU (element 210) and GPU (element 220) can be implemented, for example, on a single die as processor (element 102).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kagan and ZUR with wherein the control circuit is provided on a Graphics Processing Unit (GPU), as taught by LeBeane, in order to yield predictable results such as having the CPU and GPU on the same control circuit, with the CPU performing other functions during a data transfer between the main memory and the hardware subsystem, or between main memories of two computer systems (LeBeane: col.1, lines 27-30).

Regarding Claim 14, the combined teachings of Kagan and ZUR disclose the system of claim 13, but the combination does not explicitly disclose wherein the intra-node communications bus comprises a Peripheral Component Interconnect express (PCIe) bus. However, in analogous art, LeBeane discloses that the local interconnect can include any suitable bus or other medium for interconnecting peripheral devices within a computer, such as a Peripheral Component Interconnect Express (PCIe) bus.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kagan and ZUR with wherein the communications bus comprises a Peripheral Component Interconnect express (PCIe) bus, as taught by LeBeane, in order to yield predictable results such as providing any suitable computer communications network for communicating with a remote system (LeBeane: col.4, lines 11-12).

Regarding Claim 15, the combined teachings of Kagan, ZUR and LeBeane disclose the system of claim 14, and Kagan further discloses wherein the first device comprises a Data Processing Unit (DPU) [para.0056: Discloses a send data engine (SDE - DPU) that gathers data to be sent from the locations in memory specified by the WQEs and places the data in output packets for transmission over network (element 26). The data packets prepared by the SDE are passed to an output port (element 56), which performs data link operations and other necessary functions.]. This claim is rejected on the same grounds as claim 14.

Regarding Claim 16, the combined teachings of Kagan, ZUR and LeBeane disclose the system of claim 14, and LeBeane further discloses wherein the first device comprises a Central Processing Unit (CPU) [FIG.2 and col.3, lines 55-67: Discloses a CPU (element 210) that includes any suitable general purpose processing unit or processor core and a GPU (element 220) that includes any suitable graphics processing unit or graphics processor core. CPU (element 210) and GPU (element 220) can be disposed on separate dies or packages, or can be cores on the same die, such as in an accelerated processing unit (APU). CPU (element 210) and GPU (element 220) can be implemented, for example, on a single die as processor (element 102).]. This claim is rejected on the same grounds as claim 14.
Regarding Claim 17, the combined teachings of Kagan, ZUR, and LeBeane disclose the system of claim 14, and LeBeane further discloses wherein the control circuit is provided on a Graphics Processing Unit (GPU) [FIG.2 and col.3, lines 55-67: Discloses a CPU (element 210) that includes any suitable general purpose processing unit or processor core and a GPU (element 220) that includes any suitable graphics processing unit or graphics processor core. CPU (element 210) and GPU (element 220) can be disposed on separate dies or packages, or can be cores on the same die, such as in an accelerated processing unit (APU). CPU (element 210) and GPU (element 220) can be implemented, for example, on a single die as processor (element 102).]. This claim is rejected on the same grounds as claim 14.

Claims 10-12 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kagan et al., Pub. No. US 2011/0270917 (hereafter Kagan), in view of Zur et al., Pat. No. US 12,141,093 (hereafter ZUR), and further in view of Pope et al., Pub. No. US 2006/0174251 (hereafter Pope).

Regarding Claim 10, the combined teachings of Kagan and ZUR disclose the HCA of claim 9, but the combination does not explicitly disclose wherein the message is 4 to 8 bytes in length. However, in analogous art, Pope discloses [para.0149] that individual buffers (messages) may be either 4 k or 8 k bytes long, and they are chained together into logically contiguous sequences by means of physically contiguous descriptors in a buffer descriptor table (element 1310).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kagan and ZUR with wherein the message is 4 to 8 bytes in length, as taught by Pope, in order to yield predictable results such as providing less excessive data movement [Pope: para.0026].
Regarding Claim 11, the combined teachings of Kagan, ZUR and Pope disclose the HCA of claim 10, and Pope further discloses wherein writing the CQE to the CQ of the second device comprises concatenating together a plurality of messages and writing the concatenated messages to the CQ of the second device [para.0043: Discloses batching of transmit completion event notifications, reducing the time required by the host system for event handling, since a single traversal through the event handling loop handles multiple transmit buffer (message) completions; and para.0149: Discloses that individual buffers (messages) are chained (concatenated) together into logically contiguous sequences by means of physically contiguous descriptors in a buffer descriptor table (element 1310).]. This claim is rejected on the same grounds as claim 10.

Regarding Claim 12, the combined teachings of Kagan and ZUR disclose the HCA of claim 9, and Kagan further discloses wherein the control circuit controlling the HCA further causes the HCA to receive messages from a plurality of devices coupled with the intra-node communications bus and write a CQE for each received message into the CQ of the second device, wherein the first device is one of the plurality of devices [para.0012: Discloses providing devices (plural) and methods for interfacing a host processor to a network, while affording enhanced efficiency in maintaining and accessing context information needed to process outstanding messages (plural).]. The combination does not explicitly disclose wherein the CQE comprises a different address for each of the plurality of devices. However, in analogous art, Pope discloses [para.0149] that buffers (messages) are chained together into a single logically contiguous wrap-around space by the physically contiguous entries (elements 1332, 1334 and 1336) in the buffer descriptor table (element 1310). The buffer descriptor table (element 1310) is indexed by "buffer ID", and each of its entries identifies, among other things, the base address of the corresponding buffer in host memory (element 222). Thus, each message comprises a different address.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kagan and ZUR with the cited features above, as taught by Pope, in order to yield predictable results such as providing less excessive data movement [Pope: para.0026].

Regarding Claim 18, the combined teachings of Kagan and ZUR disclose the system of claim 13, but the combination does not explicitly disclose wherein the message is 4 to 8 bytes in length. However, in analogous art, Pope discloses [para.0149] that individual buffers (messages) may be either 4 k or 8 k bytes long, and they are chained together into logically contiguous sequences by means of physically contiguous descriptors in a buffer descriptor table (element 1310).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Kagan and ZUR with wherein the message is 4 to 8 bytes in length, as taught by Pope, in order to yield predictable results such as providing less excessive data movement [Pope: para.0026].
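The batching behavior mapped from Pope for claims 11-12 — several fixed-length messages, each carrying a distinct per-device address, concatenated and written to the CQ as one entry — can be sketched as follows. This is purely illustrative: the helper names, the 8-byte message size, and the device-id/payload layout are assumptions, not taken from Pope or the claims.

```python
import struct

# Illustrative sketch (names, 8-byte size, and field layout are
# assumptions) of concatenating fixed-length messages from several
# devices and writing them to a CQ as a single entry, with a distinct
# identifier/address per originating device.

MSG_LEN = 8  # fixed message length in bytes (the claims recite 4 to 8 bytes)

def pack_message(device_id, payload):
    """Pack one fixed-length message: 4-byte device id + 4-byte payload, little-endian."""
    return struct.pack("<II", device_id, payload)

def write_concatenated_cqe(cq, messages):
    """Concatenate the messages and append them to the CQ as a single entry."""
    entry = b"".join(messages)
    cq.append(entry)
    return entry

cq = []
msgs = [pack_message(1, 0xAAAA), pack_message(2, 0xBBBB)]
entry = write_concatenated_cqe(cq, msgs)  # one CQ write covers both messages
```

The fixed `MSG_LEN` is what makes the concatenated entry parseable again: a reader can split the entry at 8-byte boundaries and recover each device's message, which is the efficiency point (one completion handling pass for many messages) the Examiner draws from Pope's para.0043.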
Regarding Claim 19, the combined teachings of Kagan, ZUR, and Pope disclose the system of claim 18, and Pope further discloses wherein the CQE comprises a plurality of messages concatenated together [para. 0043: discloses batching of transmit completion event notifications, reducing the time required by the host system for event handling, since a single traversal through the event-handling loop handles multiple transmit buffer (message) completions; and para. 0149: discloses individual buffers (messages) chained (concatenated) together into logically contiguous sequences by means of physically contiguous descriptors in a buffer descriptor table (element 1310)]. This claim is rejected on the same grounds as claim 18.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Shahar et al. (US 11,966,355) discloses circuitry configured to read data to be processed from the specified source buffer, produce processed data by applying the specified data processing operation to the data read, and store the processed data in the first target address. In response to the RDMA write work item, the processing circuitry is configured to transmit the processed data in the first target address, via the network interface over the network, for storage in the second target address of the remote memory (col. 2, lines 29-37).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADIL OCAK, whose telephone number is (571) 272-2774. The examiner can normally be reached M-F, 8:00 AM - 5:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nasser Goodarzi, can be reached at 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADIL OCAK/
Primary Examiner, Art Unit 2426
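The completion-queue batching that runs through the claim 11 and 19 rejections (Pope, para. 0043: one traversal of the event-handling loop covering multiple completions) can be sketched as follows. This is a hypothetical illustration under assumed names (`cqe_t`, `cq_post_batch`, `cq_poll`) and an assumed fixed-size ring, not the claimed invention or any reference's actual design: several small completion messages are concatenated into a single CQE, so the consumer handles them all in one poll.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CQ_DEPTH     64  /* ring capacity, in CQEs */
#define CQE_MAX_MSGS 8   /* max messages concatenated per CQE */

/* Hypothetical CQE concatenating several small completion messages
 * (e.g. 8-byte work-request IDs) into one queue entry. */
typedef struct {
    uint8_t  n_msgs;             /* number of messages batched */
    uint64_t msgs[CQE_MAX_MSGS]; /* concatenated completion messages */
} cqe_t;

typedef struct {
    cqe_t    entries[CQ_DEPTH];
    uint32_t head, tail;         /* consumer / producer indices */
} cq_t;

/* Producer: write a batch of messages as a single CQE. */
static int cq_post_batch(cq_t *cq, const uint64_t *msgs, uint8_t n)
{
    if (n == 0 || n > CQE_MAX_MSGS)
        return -1;
    cqe_t *e = &cq->entries[cq->tail % CQ_DEPTH];
    e->n_msgs = n;
    memcpy(e->msgs, msgs, n * sizeof(uint64_t));
    cq->tail++;
    return 0;
}

/* Consumer: one traversal drains every message in the next CQE. */
static int cq_poll(cq_t *cq, uint64_t *out, uint8_t *n_out)
{
    if (cq->head == cq->tail)
        return -1; /* queue empty */
    const cqe_t *e = &cq->entries[cq->head % CQ_DEPTH];
    memcpy(out, e->msgs, e->n_msgs * sizeof(uint64_t));
    *n_out = e->n_msgs;
    cq->head++;
    return 0;
}
```

The batching amortizes the per-event handling cost: three posted completions cost the consumer a single `cq_poll` rather than three, which is the efficiency rationale the Examiner draws from Pope.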

Prosecution Timeline

Feb 05, 2024
Application Filed
Aug 26, 2025
Non-Final Rejection — §103
Oct 21, 2025
Interview Requested
Nov 05, 2025
Applicant Interview (Telephonic)
Nov 05, 2025
Examiner Interview Summary
Nov 26, 2025
Response Filed
Dec 11, 2025
Final Rejection — §103
Feb 17, 2026
Response after Non-Final Action
Mar 16, 2026
Request for Continued Examination
Mar 27, 2026
Response after Non-Final Action
Apr 02, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598348
METHODS AND APPARATUS TO CREDIT MEDIA SEGMENTS SHARED AMONG MULTIPLE MEDIA ASSETS
2y 5m to grant Granted Apr 07, 2026
Patent 12598334
LIVE-STREAMING STARTING METHOD, DEVICE AND PROGRAM PRODUCT
2y 5m to grant Granted Apr 07, 2026
Patent 12586039
Chat And Email Messaging Integration
2y 5m to grant Granted Mar 24, 2026
Patent 12574591
SYSTEM AND METHOD FOR PROVIDING ENHANCED AUDIO FOR STREAMING VIDEO CONTENT
2y 5m to grant Granted Mar 10, 2026
Patent 12572588
Local Public Notification Network Mediation
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
74%
Grant Probability
92%
With Interview (+18.3%)
2y 4m
Median Time to Grant
High
PTA Risk
Based on 376 resolved cases by this examiner. Grant probability derived from career allow rate.
