Prosecution Insights
Last updated: April 19, 2026
Application No. 17/571,292

NETWORK INTERFACE DEVICE

Final Rejection §103
Filed: Jan 07, 2022
Examiner: BARTELS, CHRISTOPHER A.
Art Unit: 2184
Tech Center: 2100 — Computer Architecture & Software
Assignee: Xilinx, Inc.
OA Round: 4 (Final)

Grant Probability: 66% (Favorable)
OA Rounds: 5-6
To Grant: 3y 5m
Grant Probability with Interview: 79%

Examiner Intelligence

Career Allow Rate: 66% — above average (364 granted / 547 resolved; +11.5% vs TC avg)
Interview Lift: +12.8% (moderate, roughly +13% lift on resolved cases with interview)
Avg Prosecution: 3y 5m (typical timeline)
Currently Pending: 40
Career History: 587 total applications, across all art units
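The headline figures above follow from simple arithmetic: the career allow rate is granted over resolved cases, and the with-interview estimate is the base rate plus the interview lift in percentage points (66% + 12.8 ≈ 79%). A minimal sketch; the function names are illustrative, not the analytics vendor's actual API:

```python
# Reproduce the dashboard's headline metrics from raw counts.
# Function names here are assumptions for illustration only.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pct_points: float) -> float:
    """Estimated grant probability after adding the interview lift."""
    return base_pct + lift_pct_points

career = allow_rate(364, 547)          # 66.5, displayed as 66%
boosted = with_interview(66.0, 12.8)   # 78.8, displayed as 79%
print(round(career, 1), round(boosted, 1))
```

Note that the 79% figure is consistent with simply adding the +12.8-point lift to the 66% base and rounding.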

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 66.9% (+26.9% vs TC avg)
§102: 23.9% (-16.1% vs TC avg)
§112: 3.6% (-36.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 547 resolved cases
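Each statute-specific delta is stated relative to the Tech Center average (the black line), so the implied TC baseline can be recovered as the examiner's rate minus the delta. A minimal sketch under that interpretation; the dictionary layout is an assumption, the numbers come from the figures above:

```python
# Recover the implied Tech Center average for each statute from the
# examiner's rate and its "vs TC avg" delta (rate - delta = TC average).
examiner = {
    "101": (2.1, -37.9),
    "103": (66.9, 26.9),
    "102": (23.9, -16.1),
    "112": (3.6, -36.4),
}

tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in examiner.items()}
print(tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

All four deltas back out to the same 40.0% baseline, suggesting the dashboard measures each statute against a single Tech Center average estimate rather than per-statute baselines.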

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office action is in response to the claim listing filed on October 31, 2025. Claims 1-3 and 5-21 are currently pending.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 5-21 are rejected under 35 U.S.C. 103 as being unpatentable over Walsh et al. (US Pat. No. 5754436 A, hereinafter Walsh) in view of Kaplan et al. (US Pat. No. 11409685 B1, hereinafter Kaplan), in further view of Lal et al. (USPGPUB No. 2021/0117246 A1, hereinafter Lal) and Sen et al. (USPGPUB No. 2020/0218684 A1, hereinafter Sen), and in further view of Trikalinou et al. (USPGPUB No. 2021/0117340 A1, hereinafter Trikalinou).

Referring to claim 1, Walsh discloses a network interface device comprising {“notebook computer 6”, see Fig. 2a, Col 7, lines 37-38}: a network interface configured to interface with a network {“local area network LAN connector,”, see Fig. 2a, Col 7, lines 65-67}, the network interface configured to receive data from the network and transmit to the network {said LAN connector part of “connectors 55 are physically mounted and electrically connected to Docking PCB” for sending/receiving data as claimed, see Fig. 2a, Col 7, lines 61-62}; a host interface configured to interface with a host device {host interface “docking station PCB” (see Figs. 2a and 3, Col 8, lines 44-47) to host device “docking station MPU and memory circuitry 74” (see Fig. 3, Col 8, lines 56-58)}, the host interface configured to receive data {“LAN circuit 79 provides two-way [data] communication between the docking station 7 and to other computers”, see Fig. 3, Col 8, lines 65-67} from the host device and transmit data to the host device {“SCSI interface 77… with bus 71 and can receive and send data for any suitable SCSI peripheral”, see Fig. 3, Col 8, lines 56-58}; and data path circuitry configured to cause data {“docking station PCB has a comprehensive connector 89”, see Figs. 2a, 3, and 4, Col 9, lines 51-53} to be at least one of moved into or out {“connectors 60-64 through connector 89 pass respectively…” for data movement as claimed, see Fig. 4, Col 9, lines 56-59} of the network interface device {“Comprehensive connector 89 not only accommodates lines from a bus to bus interface 90 [but also includes claimed network interface device]” connectors 55, see Figs. 2a and 4, Col 9, lines 61-64}, the data path circuitry comprising: first circuitry for providing one or more data processing operations {first circuitry “SCSI card 77”, see Fig. 4, Col 10, lines 17-21}; and interface circuitry supporting a plurality of channels {interface circuitry “ISA or EISA bus 83” (see Figs. 3 or 4, Col 10, lines 21-22)}, the plurality of channels comprising {“implements EISA-compatible edge/level interrupt channel control registers at I/O addresses 4D0h and 4D1h”, see Figs. 43 and 44, Col 121, lines 38-44}:

Walsh does not appear to explicitly disclose event channels providing respective command completion information to the plurality of data path circuitry user instances; and data channels providing the associated data.

However, Kaplan discloses event channels providing respective command completion information {“network adapter posts a [command completion information] CQE in the completion queue” (Col 4, lines 58-60) per channel as claimed} to the plurality of data path circuitry user instances {user instances “native application that is configured to run on user devices, which users may interact with” (Col 30, lines 47-49); such interacting including with “completion queue 322 of network adapter 308 can be mapped to an interrupt register 512 of hardware data processor 506” (see Fig. 6, Col 21, lines 51-53); the user devices connect over “network adapter 308” (“Each request can include a DMA descriptor to be executed by the DMA engine to perform a data transfer between local memory 314 and each of network adapter 308”, see Fig. 3a, Col 14, lines 20-23)}; and data channels providing the associated data {data channels “at least one read… and separate write channels of multiple memory banks” (Col 23, lines 2-5) provide associated data to a “local memory” mapped to said channels (see Figs. 3a and 7, Col 15, lines 58-59)}.

Walsh and Kaplan are analogous because they are from the same field of endeavor, communicating with networked device(s). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Walsh and Kaplan before him or her, to modify Walsh's “notebook computer 6” (see Fig. 2a, Col 7, lines 37-38) incorporating Kaplan's “network adapter” with “completion queue entry” storage/operations (see Fig. 6).
The suggestion/motivation for doing so would have been to implement remote DMA and CQE entries to reduce the host processor's involvement in the transfer of the data from the network adapter to the hardware data processor, which can reduce the data transfer latency between the network adapter and the hardware data processor (Kaplan, Col 3, lines 61-66). Therefore, it would have been obvious to combine Kaplan with Walsh to obtain the invention as specified in the instant claim(s).

Furthermore, Lal discloses provide command channels receiving command information {“provide a [command information] session key to a remote application 4820 over a secure channel”, see Fig. 48 [0651]} from a plurality of data path circuitry user instances {“autonomous FPGA 4830 using a central orchestration server 4810 to facilitate attestation and session setup [plurality of paths]” (see Fig. 48, [0651]) over user instances “execute different instances of the same [user] application, each instance having a separate context” (see Fig. 4b [0106])}, the command information indicating a path for associated data through the data path circuitry {“address of the assigned autonomous FPGA 4830 along with the session key (token) for [a path] establishing secure communication channel”, see Fig. 48 [0658], last sentence}.

Walsh/Kaplan and Lal are analogous because they are from the same field of endeavor, communicating with networked device(s). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Walsh/Kaplan and Lal before him or her, to modify Walsh/Kaplan's device incorporating Lal's “FPGA 4830” implementing a “remote application over a secure channel” (see Fig. 48 [0651]).
The suggestion/motivation for doing so would have been to implement disaggregated compute resources, such as CPUs, GPUs, and hardware accelerators, that are connected via a network instead of being on the same platform and connected via physical links such as peripheral component interconnect express; disaggregated computing in turn enables improved resource utilization and lowers ownership costs by enabling more efficient use of available resources (Lal [0003]). Therefore, it would have been obvious to combine Lal with Kaplan/Walsh to obtain the invention as specified in the instant claim(s).

Neither one of the group consisting of Walsh, Kaplan, and Lal appears to explicitly disclose provide command channels receiving command information from a plurality of data path circuitry user instances, the command information indicating a path for associated data through the data path circuitry and one or more parameters for the one or more data processing operations provided by the first circuitry; wherein the command information comprises one of: a program, which when run executes multiple commands, or a reference to a program stored on the network interface device.

Furthermore, Sen discloses provide command channels receiving command information {“RDMA connection [channel]”, see Fig. 3 [0071], 2nd sentence} from a plurality of data path circuitry user instances {user instances “host fabric interface 210” (see Figs. 2 and 3, [0071]); other examples}, the command information indicating a path for associated data through the data path circuitry {“utilizes RDMA kernel bypassing mechanisms to [indicate a path] access the RNIC 210-1 directly to achieve the aforementioned performance and resource utilization efficiencies”, see Fig. 8, [0082], last two sentences; another type of command information “sending commands to the hardware accelerator 212, getting and setting properties of the hardware accelerator 212”, see Fig. 4, [0048], last sentence} and one or more parameters {parameters “necessary parameters as required”, see Figs. 8 and 9, [0088], 2nd sentence} for the one or more data processing operations provided by the first circuitry {“by the [one or more data processing operations] particular protocol used for the [first circuitry] primary connection 831”, see Figs. 8 and 9, [0088], 2nd sentence}; wherein the command information comprises one of: a program, which when run executes multiple commands, or a reference to a program stored {“engine/utility compiles the [reference to a program stored] developed acceleration programs into the loadable accelerator images”, last sentence} on the network interface device {Examiner's note: by recitation of the “or” term this claim is a Markush claim, thus the reference need only disclose one element from the group to address the claim};

event channels {“event [channel] subscription and notification”, see Fig. 8, [0034], 3rd sentence} providing respective command completion information {“914 may involve performing the TCP three-way handshake for connection establishment [completion]”, [0092], 3rd sentence} to the plurality of data path circuitry user instances {“operation 1014, the accelerator manager 502 generates and sends, to the initiator 822”, [0099], 1st sentence}; and data channels providing the associated data {“connection establishment request message for the [data channel] for the primary connection 831”, see Figs. 8 and 9, [0089], 1st sentence} to the plurality of data path circuitry user instances {“obtain the session ID from the host fabric interface 210 for”, see Fig. 9, [0090], last sentence}.

Walsh/Kaplan/Lal and Sen are analogous because they are from the same field of endeavor, communicating with networked device(s).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Walsh/Kaplan/Lal and Sen before him or her, to modify Walsh/Kaplan's device incorporating Sen's “accelerator 212” and corresponding “RDMA connection”/channels (see Figs. 2 and 3). The suggestion/motivation for doing so would have been to implement an accelerator fabric, an architecture in which computing systems are communicatively coupled with hardware accelerators via a network fabric (e.g., Ethernet, Fibre Channel (FC), etc.) where individual computing systems can select a particular hardware accelerator from a pool of remote hardware accelerators to perform a suitable task (Sen [0017], 2nd sentence), which addresses the drawback that, depending on the particular task being performed, a hardware accelerator may experience a high level of use during some time periods and a low or no level of use at other times (Sen [0016], last sentence). Therefore, it would have been obvious to combine Sen with Kaplan/Walsh/Lal to obtain the invention as specified in the instant claim(s).

However, neither one of the group consisting of Walsh, Kaplan, and Sen appears to explicitly disclose a network interface controller comprising: a host data path unit (DPU) configured to communicate with the host device; a network DPU configured to communicate with the network; command channels configured to pass messages from user logic to the host DPU and the network DPU and for receiving command information from a plurality of data path circuitry user instances; and event channels configured for passing command completion messages from the host DPU and the network DPU to the user logic.

Furthermore, Trikalinou discloses a network interface controller comprising {“RDMA NICs may be located inside other accelerators”, [0098], last two sentences; an example NIC “IPU 730” (see Fig. 7)}: a host data path unit (DPU) {said “IPU 730” subcomponent DPU “processor 732”, see Fig. 7 [0101]} configured to communicate with the host device {host DPU “732” communicates with host device “host SoC 720”, see Fig. 7, [0101]}; a network DPU configured {network DPU “encryption engine 736”, see Fig. 7 [0102], 1st sentence} to communicate with the network {communicating over network “The network (e.g., 735, 835) may be a TCP/IP network”, see Fig. 7 [0101], 2nd sentence}; command channels {“via a secure [command] channel established”, see Fig. 6 [0094], 4th sentence} configured to pass messages from user logic {passing messages “PCIe/MCTP SPDM (Management Component Transport Protocol, Security Protocol and Data Model, respectively)”, see Fig. 6 [0094], 4th sentence} to the host DPU {“shared with [host] IO device SoC 620”, see Fig. 6 [0094], 4th sentence} and the network DPU and for receiving command information {“A key for encrypting/decrypting the IO data (e.g., a 64-bit cipher such as PRINCE, [command information] Galois/Counter Mode (GCM))”, see Figs. 6 and 7 [0094], 4th sentence} from a plurality of data path circuitry user instances {“IO device SoC 620 is responsible for IO data decryption” consisting of a plurality of data path circuitry user instances “IO device SoC 620 includes an encryption engine 616, which may perform one or more encryption/decryption functions”, “root port 612”, and “IOMMU 614” as a plurality of data paths as claimed, see Fig. 6 [0091]}; and event channels configured for passing command completion messages {“existing allocated completion buffer” in an appropriate [event] channel, see Fig. 4, [0086], last three sentences} from the host DPU {“encrypted via the encryption engine 416 inline in IO path” (see Fig. 4 [0086], last three sentences) or respective host DPU “732” (see Fig. 7)} and the network DPU to the user logic {“For multiple IO keys, e.g., per tenant or context keys, utilizing the existing device to context VT-d mapping, the IO key can be stored in a [user logic] PASID (Process Address Space ID) table entry or referenced via a pointer in that entry”, see Fig. 4 [0085]}.

Walsh/Kaplan/Lal/Sen and Trikalinou are analogous because they are from the same field of endeavor, communicating with networked device(s). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Walsh/Kaplan/Lal/Sen and Trikalinou before him or her, to modify Walsh/Kaplan/Lal/Sen's device incorporating Trikalinou's “IPU 730” in coupling with “Host SoC 710” (see Fig. 7). The suggestion/motivation for doing so would have been to implement/incorporate encrypted pointers that are sent to the IO device as part of control path operations while the corresponding memory is initialized, recognizing that the actual code/data encryption key (e.g., Gimli/GCM key) can stay the same, which facilitates IO (input/output) side protections from malicious/vulnerable CPU side accesses (Trikalinou [0081]). Therefore, it would have been obvious to combine Trikalinou with Kaplan/Walsh/Lal/Sen to obtain the invention as specified in the instant claim(s).

As per claim 2, the rejection of claim 1 is incorporated and Lal discloses wherein the plurality of data path circuitry user instances are provided by one or more of {“prepare the context [of one or more user instances]”, see Fig. 25, [0312]}: a central processing unit on the network interface controller {central processing unit “GPU remoting middleware layer”, see Fig. 28 [0341]}; a central processing unit in the host device {“GPU stack is partitioned between the userspace and kernel space components”, see Fig. 28 [0339]}; and programmable logic circuitry of the network interface controller {programmable logic “bridged across the [network] fabric by a middleware called GPU-over-Fabric (GoF)” (see Fig. 28 [0339]) via network interface device “connected over a fabric 2370 via NICs 2350a, 2350b” (see Fig. 23 [0294])}.

As per claim 3, the rejection of claim 1 is incorporated and Lal discloses wherein the data path circuitry comprises command scheduling circuitry configured to schedule commands for execution {“kernels and data structures utilized for execution.”, see Fig. 23 [0297]}, the commands being associated with the command information {“construct command buffers that initialize the GPU”, see Fig. 23 [0297]}, the command scheduling circuitry scheduling one of the commands {“The KMD 2230 is responsible for… memory and scheduling [commands] workloads on the GPU”, see Fig. 22 [0292]} when at least a part of the associated data is available {“the [available] associated data structures are to be relocated to the remote host memory of the remote platform 2304”, see Fig. 23 [0297]} and a data destination is reserved {data destination “remote platform 2304”, see Fig. 23 [0297]}.

As per claim 5, the rejection of claim 3 is incorporated and Lal discloses wherein the command scheduling circuitry is configured, when a command has been completed {“notifying host software when a workload [set of commands] is complete.”, see Fig. 4b [0104]}, to cause a command completion event to be provided to one of the event channels {“The command processors 457 can interrupt the one or more CPU(s) 446 when the submitted commands are complete.”, see Fig. 4D [0135], last sentence}; and Trikalinou discloses wherein the event channels are configured to provide command completion information {“existing allocated completion buffer” in an appropriate channel, see Fig. 4, [0086], last three sentences} to the plurality of data path circuitry user instances {“IO device SoC 620 is responsible for IO data decryption” consisting of a plurality of data path circuitry user instances “IO device SoC 620 includes an encryption engine 616, which may perform one or more encryption/decryption functions”, “root port 612”, and “IOMMU 614” receiving said completion information as claimed, see Fig. 6 [0091]}.

As per claim 6, the rejection of claim 21 is incorporated and Lal discloses wherein the program is configured, when run, to cause two or more commands to be executed {“The KMD 2230 is responsible for… memory and scheduling [a set of commands] workloads on the GPU”, see Fig. 22 [0292]}, each of the two or more commands being associated with a respective command completion event {“each MMIO requests to confirm it was completed successfully”, see Fig. 37 [0497]}.

As per claim 7, the rejection of claim 21 is incorporated and Lal discloses wherein the program is configured, when run, to cause two or more commands to be executed {“The KMD 2230 is responsible for… memory and scheduling [a set of commands] workloads on the GPU”, see Fig. 22 [0292]}, the executing of one of the two or more commands being dependent on an outcome of executing {one or more commands performed by “ray tracing cores 445 independently perform ray traversal and intersection and [one or more outcomes] return hit data”, see Fig. 4c [0120]} another of the two or more commands {such outcomes “a hit, no hit, multiple hit” sent to two or more commands “to the thread context” as well as “cores 443, 444 are freed to perform other graphics or compute work while the ray tracing cores 445 perform the traversal and intersection operations”, see Fig. 4c [0120]}.

As per claim 8, the rejection of claim 21 is incorporated and Lal discloses wherein the program is configured, when run, to support a loop {“parallel matrix multiplication work”, see Fig. 4c [0116]}, where the loop is repeated until {“entire matrix is loaded into tile registers and at least one column of a second matrix is loaded each cycle for N [loop] cycles. Each cycle, there are N dot products that are processed”, see Fig. 4c, [0116]} one or more conditions is satisfied {“potentially combining details from multiple frames, to construct a [satisfied condition] high-quality final image”, [0115] last sentence}.

As per claim 9, the rejection of claim 21 is incorporated and Lal discloses wherein the program is configured, when run, to call a function {“first set of specialized circuitry for performing bounding box tests (e.g., for traversal operations)”, see Fig. 4c [0120]} to cause one or more actions associated {actions “circuitry for performing depth testing and culling”, see Fig. 4c [0118], 3rd sentence} with that function to be executed {“to the [executed] thread context”, see Fig. 4c [0120], last two sentences}.

As per claim 10, the rejection of claim 1 is incorporated and Lal discloses wherein a barrier command {“the TEE and the accelerator 236 generate authentication tags (ATs)”, see Fig. 2 [0081]} is provided between a first command and a second command {“for the transferred data and may use those ATs to validate the [respective claimed set of commands] transactions.”, see Fig. 2 [0081]} to cause the first command to be executed before the second command {“code and data included in the secure enclave may be encrypted or otherwise [barrier] protected from being accessed”, see Fig. 2 [0083], 4th sentence}. Trikalinou discloses wherein the command information comprises the barrier command {barrier command “Secure Protocol & Data Model (SPDM) flows or via normal MMIO (links [barrier-]protected using PCIe/CXL IDE link encryption)”, see Fig. 6, [0092]}.
As per claim 11, the rejection of claim 1 is incorporated and Lal discloses wherein the data path circuitry comprises a data classifier configured to classify data received by the network interface and to provide {classified into secure and unsecured data “cryptographically protected from untrusted components of the computing device 100 (e.g., protected from software outside of the trusted code base of the tenant enclave).”, see Figs. 4a-4d [0090]}, in dependence on classifying of the data {“a circular buffer in which the elements are protected by [classified] authentication tags”, see Fig. 13b [0220]}, a reference to a program which when run {“initialize the GPU environment and reference various buffers, [programs] kernels and data structure”, see Fig. 23 [0297]} causes one or more commands to be performed {“kernels and data structures utilized for [to be] execution.”, see Fig. 23 [0297]}, the reference to the program being command information for the data received by the network interface {via network interface device “connected over a fabric 2370 via NICs 2350a, 2350b”, see Fig. 23 [0294]}.

As per claim 12, the rejection of claim 1 is incorporated and Lal discloses wherein the first circuitry for providing the one or more data processing operations comprises one or more data processing offload pipelines {“directly manage the remote device it may be offloading workload to.” (see Fig. 34 [0488]); said remote device “graphics processor core 419” including pipelines “a geometry and fixed function pipeline 431 [or 437]” ([0103], last sentence)}, the data processing offload pipelines comprising a sequence of one or more offload engines {“core 419 may have greater than or fewer than the illustrated sub-cores 421A-421F… For each set of N sub-cores, the graphics processor core 419 can also include memory 436, a geometry/fixed function pipeline 437”, see Fig. 4b [0105], 1st sentence}, each of the one or more offload engines being configured to perform a function {“shared function logic 435… as well as additional fixed function logic 438”, see Fig. 4b [0105], 1st sentence} with respect to data as it passes through a respective one of the data processing offload pipelines {“accelerate various graphics and compute [data] processing operations”, see Fig. 4b [0105], 1st sentence}.

As per claim 13, the rejection of claim 12 is incorporated and Lal discloses comprising one or more direct memory access adaptors {“RDMA Network Interface Controller (RDMA NIC or RNIC) interface directly”, see Fig. 10 [0182]} providing an input/output subsystem for the data path circuitry {I/O subsystem “The platform controller hub 130 can also connect to one or more Universal Serial Bus (USB) controllers 142 connect input devices, such as keyboard and mouse 143 combinations, a camera 144, or other USB input devices”, see Figs. 1 and 2 [0076], last sentence}, the one or more direct memory access adaptors interfacing with one or more of the data processing offload pipelines {“secure DMA engine 304 may intercept, filter, or otherwise process data traffic on one or more cache-coherent interconnects” (see Fig. 3 [0089], last sentence); such interconnects include “bus controller units 416 manage a set of peripheral buses” ([0094]) including pipelines (“430” and “431”, [0101])} to receive data from one or more data processing offload pipelines {“facilitate the decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data.”, see Fig. 4b [0102], last sentence} and/or deliver data to one or more of the data processing offload pipelines {delivering data “media operations via requests to compute or sampling logic within the sub-cores 421-421F.”, see Fig. 4b [0102], last sentence}.
As per claim 14, the rejection of claim 1 is incorporated and Lal discloses wherein different data path circuitry user instances are configured {“key used to encrypt the data should be shared between the [different user instances] local and remote applications”, see Figs. 11 and 13a [0227]}, in use, to issue commands {“form of identification may be a sequence number or unique value to” RDMA commands, see Figs. 11 and 13a [0226]} to a same command channel of the command channels {“crypted to protect confidentiality with the same or different key used to calculate the MAC” of a claimed channel, see Figs. 11 and 13a [0227]}.

As per claim 15, the rejection of claim 1 is incorporated and Lal discloses wherein one of the data path circuitry user instances is configured to take over providing a plurality of commands {“form of identification may be a sequence number or unique value to” RDMA commands (see Figs. 11 and 13a [0226]) taken over as “determine if it wants to offload its workload to the GPU [taking over]” ([0366])} via a same command channel from another of the data path instances {“crypted to protect confidentiality with the same or different key used to calculate the MAC” of a claimed channel, see Figs. 11 and 13a [0227]}.

As per claim 16, the rejection of claim 1 is incorporated and Lal discloses wherein the first circuitry comprises: a first host data processing part {“[first host] paired RNICs 1430, 1440”, see Figs. 19a and 19b [0269]}; and a second network data processing part {“paired RNICs 1430, [second network] 1440”, see Figs. 19a and 19b [0263]}.

As per claim 17, the rejection of claim 16 is incorporated and Lal discloses comprising a data path between the first host data processing part and the second network data processing part {data path “operation flow 1400 of integrity protection of RDMA SEND” between “paired RNICs 1430, 1440”, see Fig. 14 [0229], 1st sentence}, the data path being configured to transfer data from one of the first host data processing part and the second network data processing part {“RDMA transaction among a plurality of different components at a source and a sink.”, see Figs. 19a and 19b [0257], 2nd sentence} to the other of the first host data processing part and the second network data processing part {“consumerSink 1460 indicates to sinkNIC 1440 that the consumerSink 1460 is ready to receive messages 1901 from the sinkNIC 1440”, see Figs. 19a and 19b [0258]}.

As per claim 18, the rejection of claim 17 is incorporated and Lal discloses wherein the first host data processing part comprises a first set of buffers {“structured for workloads to be submitted through command buffers”, see Fig. 9 [0285]} and the second network data processing part comprises a second set of buffers {“include references to [second set] buffers in memory that contain user data”, see Fig. 9 [0285]}, the data path being provided between the first set of buffers and the second set of buffers {data path “operation flow 1400 of integrity protection of RDMA SEND” between “paired RNICs 1430, 1440”, see Fig. 14 [0229], 1st sentence}.

As per claim 19, the rejection of claim 17 is incorporated and Lal discloses comprising a network on chip {“fabric 685 may be a network on a chip interconnect”, see Fig. 6c [0151], last sentence}, the data path being provided by the network on chip {the data path “operation flow 1400 of integrity protection of RDMA SEND” between “paired RNICs 1430, 1440”, see Fig. 14 [0229], 1st sentence}; and Trikalinou discloses wherein the data channels are provided {“communicate either via [data] channels mapped on a communication medium”, see Figs. 2 and 3 [0046], 1st sentence} by the network on chip {“including network-on-chip (NoC) which requires the AFUs”, see Fig. 2 [0046]}.
Referring to claim 20, it is a method claim reciting functionality corresponding to the device claim 1 and is therefore rejected under the same rationale as claim 1 above.

As per claim 21, the rejection of claim 1 is incorporated and Trikalinou discloses wherein the command information comprises one of: a program, which when run executes multiple commands {“A key for encrypting/decrypting the IO data (e.g., a 64-bit cipher such as PRINCE, [command information] Galois/Counter Mode (GCM))”, see Figs. 6 and 7 [0094], 4th sentence; for a plurality of commands “all CC cryptographic operations may be self-contained inside the IO device”, see Fig. 6 [0094]} or wherein the program instructs the first circuitry to fetch and process the associated data {Examiner's interpretation: the recitation of “or” renders this dependent claim a Markush claim, thus the reference need only disclose one member to address the claim}.

Response to Arguments

Applicant's arguments filed on 10/31/2025 have been considered but are deemed moot in view of the new ground(s) of rejection.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following references are indicative of the current state of the art regarding claim 1's “data path unit”, “network interface controller”, or “command”/“event channel”: US 20250007689 A1, US 11983441 B2, US 20240089239 A1, US 20240061792 A1, US 11831550 B2, US 20220245072 A1, US 20190250850 A1, US 20190229925 A1, US 20190065417 A1, US 20180335971 A1, US 20150324118 A1, and US 7861026 B2.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER A. BARTELS, whose telephone number is (571) 270-3182. The examiner can normally be reached Monday-Friday, 9:00 am-5:30 pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dr. Henry Tsai, can be reached at (571) 272-4176. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C. B./
Examiner, Art Unit 2184

/HENRY TSAI/
Supervisory Patent Examiner, Art Unit 2184

Prosecution Timeline

Jan 07, 2022
Application Filed
Sep 20, 2024
Non-Final Rejection — §103
Dec 18, 2024
Response Filed
Apr 15, 2025
Final Rejection — §103
Jun 16, 2025
Response after Non-Final Action
Jul 10, 2025
Request for Continued Examination
Jul 17, 2025
Response after Non-Final Action
Jul 26, 2025
Non-Final Rejection — §103
Oct 31, 2025
Response Filed
Feb 17, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602339
STRAIN RELIEF FOR FLOATING CARD ELECTROMECHANICAL CONNECTOR
2y 5m to grant Granted Apr 14, 2026
Patent 12596662
METHOD FOR INTEGRATING INTO A DATA TRANSMISSION A NUMBER OF I/O MODULES CONNECTED TO AN I/O STATION, STATION HEAD FOR CARRYING OUT A METHOD OF THIS TYPE, AND SYSTEM HAVING A STATION HEAD OF THIS TYPE
2y 5m to grant Granted Apr 07, 2026
Patent 12579090
METHOD AND SYSTEM FOR SHIFTING DATA WITHIN MEMORY
2y 5m to grant Granted Mar 17, 2026
Patent 12572491
MEMORY WITH CACHE-COHERENT INTERCONNECT
2y 5m to grant Granted Mar 10, 2026
Patent 12572486
Subgraph segmented optimization method based on inter-core storage access, and application
2y 5m to grant Granted Mar 10, 2026
Based on the 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
66%
Grant Probability
79%
With Interview (+12.8%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 547 resolved cases by this examiner. Grant probability derived from career allow rate.
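
The projection figures above fit a simple reconstruction: a minimal sketch, assuming the base grant probability is the career allow rate (364 granted of 547 resolved) truncated to a whole percent, and the "With Interview" figure is that base plus the 12.8-point interview lift, rounded. The truncation choice is an assumption, not a documented formula.

```python
# Hypothetical reconstruction of the dashboard's projection arithmetic.
# Assumption: base % is the career allow rate truncated to a whole percent,
# and "With Interview" adds the interview lift in percentage points.
granted, resolved = 364, 547
interview_lift_pp = 12.8

career_allow_rate = granted / resolved            # ~0.6654
base_pct = int(career_allow_rate * 100)           # truncated -> 66
with_interview_pct = round(base_pct + interview_lift_pp)  # 66 + 12.8 -> 79

print(base_pct, with_interview_pct)               # 66 79
```

Under these assumptions the numbers match the dashboard (66% and 79%), though the site may compute the interview-adjusted rate from the interview-case subset rather than by simple addition.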
