Prosecution Insights
Last updated: April 19, 2026
Application No. 18/779,259

PACKET PROCESSING FOR CLUSTERED CONTAINERS USING INTERNAL BRIDGING AND AN OFFLOAD ARCHITECTURE

Status: Final Rejection (§103)
Filed: Jul 22, 2024
Examiner: LEE, CHUN KUAN
Art Unit: 2181
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 2 (Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 4m
Grant Probability With Interview: 71%

Examiner Intelligence

Career Allow Rate: 68% (above average; 455 granted / 669 resolved; +13.0% vs TC avg)
Interview Lift: +3.1% (minimal; resolved cases with vs. without interview)
Typical Timeline: 3y 4m average prosecution; 32 applications currently pending
Career History: 701 total applications across all art units

Statute-Specific Performance

§101: 1.7%  (-38.3% vs TC avg)
§102: 3.3%  (-36.7% vs TC avg)
§103: 79.4% (+39.4% vs TC avg)
§112: 3.5%  (-36.5% vs TC avg)

TC averages are estimates; based on career data from 669 resolved cases.

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

RESPONSE TO ARGUMENTS

Applicant's arguments filed 1/20/2026 have been fully considered but they are not persuasive. In response to applicant's arguments with regard to independent claim 1, rejected under 35 U.S.C. 103(a), that the combination of the references does not teach/suggest the claimed features recited in the independent claim: applicant's arguments have been fully considered, but are not found to be persuasive. The examiner respectfully disagrees, and to further clarify: by combining Sharma's selecting from a set of transmission interfaces (e.g. associated with virtual network interface (212), (213), and virtual function (27A) in Fig. 3 being selected/used for communication: [0114]; and [0122]), wherein the set of transmission interfaces comprises a virtual Ethernet interface and one of a plurality of virtual functions ([0114]; and [0122]), wherein external packets are transmitted over one of the plurality of virtual functions and are processed using resources on a data processing unit (DPU) (e.g. associated processing by Network Interface Card and transmitting over virtual functions (27A, 27B): [0114]), wherein the set of transmission interfaces are operating accordingly (e.g. associated with virtual network interface (212), (213), and virtual function (27A) in Fig. 3: [0114]; and [0122]), and wherein the transmission interface is the virtual Ethernet interface ([0072]; [0122]); operating with the virtual Ethernet interface, being associated with the virtual Ethernet interface ([0027]; [0122]) (Fig. 3; [0008]; [0072]; and [0110]-[0122]), and Gadi's operating with a data processing unit (DPU) (e.g. Fig. 2, ref. 109) (Fig. 2; [0030]; and [0037]-[0039]) with Tang's operating, by an application executing on a container in a pod on a physical host, a transmission interface over which to transmit a packet (e.g. associated with transmission of data between Pods in Fig. 3-4: [0026]-[0027]; and [0059]-[0063]) based on whether the packet is an internal packet (e.g. associated with intra-node data transfer: Fig. 3-4; [0059]-[0063]) or an external packet (e.g. associated with communication between nodes executing on different host computers: Fig. 7, [0071]; [0077]), wherein internal packets are destined for another pod on the physical host (e.g. associated with intra-node data transfer: Fig. 3-4; [0059]-[0063]) and external packets are destined for another pod on a different physical host (e.g. associated with communication between nodes executing on different host computers: Fig. 7, [0071]; [0077]), wherein internal packets are transmitted over the virtual Ethernet interface and are processed by a host bridge (e.g. associated with transfer over virtual Ethernet interface (362, 364) and OVS Bridge (330): Fig. 3-4; [0059]-[0063]), wherein external packets are transmitted accordingly (e.g. associated with communication between nodes executing on different host computers: Fig. 7, [0070]-[0071]; [0077]), wherein the transmission interface is associated with the pod (e.g. associated with Fig. 3-4, ref. 320, 322, 324: [0059]-[0063]), and receiving the packet by the host bridge in the physical host (e.g. associated with bridge (330) in Fig. 3-4), wherein the packet is destined for a second container on a second pod executing on the physical host (e.g. associated with bridge (330) receiving data transmitted by Pod1A (320) in Fig. 3-4: [0059]-[0063]); and transmitting, by the host bridge, the packet to a second interface, wherein the second pod is associated with the second interface (e.g. associated with bridge (330) transmitting the received data to corresponding Pod in Fig. 3-4: [0059]-[0063]) (Fig. 3-4; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0059]-[0063]; and [0069]-[0080]), the resulting combination of the references would further teach/suggest applicant's claimed features.

Additionally, applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.

In response to applicant's arguments with regard to independent claim 20, rejected under 35 U.S.C. 103(a), that the combination of the references does not teach/suggest the claimed features recited in the independent claim: applicant's arguments have been fully considered, but are not found to be persuasive. The examiner respectfully disagrees, and to further clarify: by combining Sharma's selecting from a set of transmission interfaces (e.g. associated with virtual network interface (212), (213), and virtual function (27A) in Fig. 3 being selected/used for communication: [0114]; and [0122]), wherein the set of transmission interfaces comprises a virtual Ethernet interface and one of a plurality of virtual functions (Fig. 3; [0072]; [0114]; and [0122]), wherein the virtual Ethernet interface is operating accordingly, wherein the virtual function is operating accordingly (Fig. 3; [0072]; [0114]; and [0122]), and wherein the set of transmission interfaces are operating accordingly (e.g. associated with virtual network interface (212), (213), and virtual function (27A) in Fig. 3: [0114]; and [0122]), and wherein the transmission interface is the virtual Ethernet interface ([0072]; [0122]); operating with the virtual Ethernet interface, being associated with the virtual Ethernet interface ([0027]; [0122]) (Fig. 3; [0008]; [0072]; and [0110]-[0122]) with Tang's operating, by an application executing on a container in a pod on a physical host, a transmission interface over which to transmit a packet (e.g. associated with transmission of data between Pods in Fig. 3-4: [0026]-[0027]; and [0059]-[0063]) based on whether the packet is an internal packet (e.g. associated with intra-node data transfer: Fig. 3-4; [0059]-[0063]) or an external packet (e.g. associated with communication between nodes executing on different host computers: Fig. 7, [0071]; [0077]), wherein the transmission interface is associated with the pod (e.g. associated with Fig. 3-4, ref. 320, 322, 324: [0059]-[0063]), and for internal network traffic, wherein internal network traffic is destined for another pod on the physical host (e.g. associated with intra-node data transfer: Fig. 3-4; [0059]-[0063]), and for external network traffic, wherein external network traffic is destined for another pod on a different physical host (e.g. associated with communication between nodes executing on different host computers: Fig. 7, [0071]; [0077]), and receiving the packet by a host bridge in the physical host (e.g. associated with bridge (330) in Fig. 3-4), wherein the packet is destined for a second container on a second pod executing on the physical host (e.g. associated with bridge (330) receiving data transmitted by Pod1A (320) in Fig. 3-4: [0059]-[0063]); and transmitting, by the host bridge, the packet to a second interface, wherein the second pod is associated with the second interface (e.g. associated with bridge (330) transmitting the received data to corresponding Pod in Fig. 3-4: [0059]-[0063]) (Fig. 3-4; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0059]-[0063]; and [0069]-[0080]), the resulting combination of the references would further teach/suggest applicant's claimed features.

Additionally, applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.

I. REJECTIONS BASED ON PRIOR ART

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over Tang et al. (US Pub.: 2023/0231827) in view of Sharma et al. (US Pub.: 2024/0422107) and Gadi et al. (US Pub.: 2024/0241728).

As per claim 1, Tang teaches/suggests a method for processing packets, comprising: operating, by an application executing on a container in a pod on a physical host, a transmission interface over which to transmit a packet (e.g. associated with transmission of data between Pods in Fig. 3-4: [0026]-[0027]; and [0059]-[0063]) based on whether the packet is an internal packet (e.g. associated with intra-node data transfer: Fig. 3-4; [0059]-[0063]) or an external packet (e.g. associated with communication between nodes executing on different host computers: Fig. 7, [0071]; [0077]), wherein internal packets are destined for another pod on the physical host (e.g. associated with intra-node data transfer: Fig. 3-4; [0059]-[0063]) and external packets are destined for another pod on a different physical host (e.g. associated with communication between nodes executing on different host computers: Fig. 7, [0071]; [0077]), wherein internal packets are transmitted over the virtual Ethernet interface and are processed by a host bridge (e.g. associated with transfer over virtual Ethernet interface (362, 364) and OVS Bridge (330): Fig. 3-4; [0059]-[0063]), wherein external packets are transmitted accordingly (e.g. associated with communication between nodes executing on different host computers: Fig. 7, [0070]-[0071]; [0077]), wherein the transmission interface is associated with the pod (e.g. associated with Fig. 3-4, ref. 320, 322, 324: [0059]-[0063]), and receiving the packet by the host bridge in the physical host (e.g. associated with bridge (330) in Fig. 3-4), wherein the packet is destined for a second container on a second pod executing on the physical host (e.g. associated with bridge (330) receiving data transmitted by Pod1A (320) in Fig. 3-4: [0059]-[0063]); and transmitting, by the host bridge, the packet to a second interface, wherein the second pod is associated with the second interface (e.g. associated with bridge (330) transmitting the received data to corresponding Pod in Fig. 3-4: [0059]-[0063]) (Fig. 3-4; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0059]-[0063]; and [0069]-[0080]).

Tang does not teach the method comprising: selecting from a set of transmission interfaces, wherein the set of transmission interfaces comprises a virtual Ethernet interface and one of a plurality of virtual functions, being transmitted over one of the plurality of virtual functions and are processed using resources on a data processing unit (DPU), wherein the set of transmission interfaces are operating accordingly, and wherein the transmission interface is the virtual Ethernet interface; operating with the virtual Ethernet interface, being associated with the virtual Ethernet interface.

Sharma teaches/suggests a method comprising: selecting from a set of transmission interfaces (e.g. associated with virtual network interface (212), (213), and virtual function (27A) in Fig. 3 being selected/used for communication: [0114]; and [0122]), wherein the set of transmission interfaces comprises a virtual Ethernet interface and one of a plurality of virtual functions ([0114]; and [0122]), wherein external packets are transmitted over one of the plurality of virtual functions and are processed using resources on a data processing unit (DPU) (e.g. associated processing by Network Interface Card and transmitting over virtual functions (27A, 27B): [0114]), wherein the set of transmission interfaces are operating accordingly (e.g. associated with virtual network interface (212), (213), and virtual function (27A) in Fig. 3: [0114]; and [0122]), and wherein the transmission interface is the virtual Ethernet interface ([0072]; [0122]); operating with the virtual Ethernet interface, being associated with the virtual Ethernet interface ([0027]; [0122]) (Fig. 3; [0008]; [0072]; and [0110]-[0122]).

Gadi teaches/suggests a method comprising: operating with a data processing unit (DPU) (e.g. Fig. 2, ref. 109) (Fig. 2; [0030]; and [0037]-[0039]).
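The interface-selection behavior mapped to claim 1 (internal packets over a virtual Ethernet interface to a host bridge, external packets over a virtual function processed on the DPU) can be illustrated with a minimal sketch. All names below (`select_interface`, `VETH`, `VF`, the host and pod identifiers) are hypothetical, chosen only to mirror the claim language; they are not drawn from the application or the cited references.

```python
# Illustrative sketch of the claimed interface selection: internal packets
# (destined for a pod on the same physical host) go over the virtual
# Ethernet interface to the host bridge; external packets go over a
# virtual function for processing using resources on the DPU.
# All identifiers here are hypothetical.

VETH = "virtual-ethernet"   # processed by the host bridge
VF = "virtual-function"     # processed using resources on the DPU

def select_interface(packet_dest_host: str, local_host: str) -> str:
    """Pick a transmission interface based on the packet's destination."""
    if packet_dest_host == local_host:
        return VETH   # internal packet: another pod on the same physical host
    return VF         # external packet: a pod on a different physical host

def bridge_forward(dest_pod: str, pod_interfaces: dict) -> str:
    """Host bridge step: hand an internal packet to the destination pod's
    interface (claim 1's "second interface")."""
    return pod_interfaces[dest_pod]
```

The sketch only captures the claimed decision point; the actual bridging and DPU offload are, of course, far more involved.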
It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to include Sharma's interfacing architecture and Gadi's DPU into Tang's method for the benefit of simplifying the configuration and reconfiguration of a virtual router (Sharma, [0012]) and offloading workload to the DPU (Gadi, [0030]), to obtain the invention as specified in claim 1.

As per claim 2, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 1 above, where Tang, Sharma, and Gadi further teach/suggest the method further comprising: selecting, by the application, a second transmission interface from the set of transmission interfaces over which to transmit a second packet, wherein the second transmission interface is the one of the plurality of virtual functions; receiving the second packet via a representor port on the DPU operatively connected to a physical host, wherein the physical host is connected to the DPU using a Peripheral Component Interconnect Express (PCIe) connection, wherein the pod is associated with one of the plurality of virtual functions, wherein the PCIe connection is associated with the one of the plurality of virtual functions, wherein the one of the plurality of virtual functions is mapped to the representor port; in response to receiving the packet, processing the packet using a hardware switch pipeline in the DPU; and initiating transmission of the packet towards its intended destination using the exact match flow entry (Tang, Fig. 3-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0059]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]).

As per claim 3, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 2 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein the intended destination of the second packet is external to the physical host and the DPU (Tang, Fig. 3-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0059]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]).

As per claim 4, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 2 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein the processing the packet comprises using a plurality of match/action tables to identify an exact match flow entry for the packet, wherein the plurality of match/action tables do not contain any flow entries that are not exact match flow entries (Tang, Fig. 3-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0059]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skill in the art that the resulting combination of the references further teaches the above claimed features, as data is properly communicated for processing.

As per claim 5, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 4 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein the plurality of match/action tables are organized in a hierarchical table structure (Tang, Fig. 3-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; and [0037]-[0039]), wherein it would have been obvious to one of ordinary skill in the art that the resulting combination of the references further teaches the above claimed features, as data is properly communicated for processing.
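Claim 2's association chain (pod to virtual function over the PCIe connection, virtual function to a representor port on the DPU) can be sketched as two simple lookups. The table contents and names (`pod_to_vf`, `vf_to_representor`) are hypothetical illustrations of the claimed mapping, not taken from the application or any reference.

```python
# Hypothetical sketch of the claim 2 association: each pod is bound to one
# virtual function over the PCIe connection, and each virtual function is
# mapped to a representor port on the DPU, where the hardware switch
# pipeline receives the pod's packets.

pod_to_vf = {"pod-1": "vf-0", "pod-2": "vf-1"}          # pod -> virtual function
vf_to_representor = {"vf-0": "rep-0", "vf-1": "rep-1"}  # VF -> representor port

def representor_for_pod(pod: str) -> str:
    """Resolve the DPU representor port that receives a given pod's packets."""
    return vf_to_representor[pod_to_vf[pod]]
```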
As per claim 6, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 5 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein the hierarchical table structure comprises a layer 2 source table, a layer 2 destination table, and a layer 3 routing flows table (Tang, Fig. 3-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skill in the art that the resulting combination of the references further teaches the above claimed features, as data is properly communicated for processing.

As per claim 7, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 4 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein the plurality of match/action tables are content addressable memory tables and wherein the exact match flow entry is stored on one of the content addressable memory tables (Tang, Fig. 3-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skill in the art that the resulting combination of the references further teaches the above claimed features, as data is properly communicated for processing.
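The hierarchical table structure recited in claim 6 (a layer 2 source table, a layer 2 destination table, and a layer 3 routing flows table) can be illustrated with a minimal, purely hypothetical sketch; the table entries, MAC/IP values, and action strings below are invented for illustration and do not appear in the record.

```python
# Illustrative sketch of claim 6's hierarchy: tables are consulted in
# order, and only exact match flow entries are held (per claim 4).
# All entries and action names are hypothetical.

l2_src_table = {"aa:aa:aa:aa:aa:01": "permit"}       # layer 2 source table
l2_dst_table = {"aa:aa:aa:aa:aa:02": "forward"}      # layer 2 destination table
l3_routing_flows = {"10.0.0.2/32": "port-3"}         # layer 3 routing flows table

def pipeline_lookup(src_mac: str, dst_mac: str, dst_ip: str):
    """Walk the table hierarchy; return the egress action, or None on a miss."""
    if l2_src_table.get(src_mac) != "permit":
        return None
    if l2_dst_table.get(dst_mac) != "forward":
        return None
    return l3_routing_flows.get(dst_ip + "/32")      # exact match only (/32)
```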
As per claim 8, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 2 above, where Tang, Sharma, and Gadi further teach/suggest the method further comprising: prior to receiving the second packet, receiving a third packet by the DPU; in response to receiving the third packet, making a first determination that there is no exact match flow entry in the hardware switch pipeline; in response to the first determination, classifying, in the DPU, the third packet as a data packet; in response to the classifying of the third packet: identifying, using a software data plane in the DPU, a forwarding information base (FIB) entry to be used to transmit the third packet towards its intended destination; identifying, using the software data plane, a flow associated with the third packet; initiating, using the software data plane, programming of the exact match flow entry in the hardware switch pipeline; and initiating, using the software data plane, transmission of the third packet towards its intended destination using the FIB entry and the hardware switch pipeline, wherein the packet is associated with the flow (Tang, Fig. 3-6; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skill in the art that the resulting combination of the references further teaches the above claimed features, as data is properly communicated for processing.
As per claim 9, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 8 above, where Tang, Sharma, and Gadi further teach/suggest the method further comprising: prior to receiving the third packet, receiving a fourth packet by the DPU; in response to receiving the fourth packet, making a second determination that there is no exact match flow entry in the hardware switch pipeline in the DPU; in response to the second determination, classifying, in the DPU, the fourth packet as a control plane packet, wherein the control plane packet comprises a Border Gateway Protocol (BGP) message; in response to the classifying of the fourth packet: processing, by a control plane in the DPU, the fourth packet to obtain a route entry, wherein the route entry is stored in a routing information base (RIB) in the control plane, and wherein information in the route entry is subsequently stored in the software data plane in the FIB entry (Tang, Fig. 3-6; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skill in the art that the resulting combination of the references further teaches the above claimed features, as data is properly communicated for processing.

As per claim 10, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 8 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein the exact match flow entry is not programmed in the hardware switch pipeline until after the packet is received by the DPU (Tang, Fig. 3-6; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skill in the art that the resulting combination of the references further teaches the above claimed features, as data is properly communicated for processing.

As per claim 11, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 8 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein programming of the exact match flow entry in the hardware switch pipeline comprises storing the exact match flow entry in any available storage location in the hardware switch pipeline (Tang, Fig. 3-6; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skill in the art that the resulting combination of the references further teaches the above claimed features, as data is properly communicated for processing.

As per claim 12, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 11 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein the programming of the exact match flow entry does not require any reordering of any previously stored exact match flow entries in the hardware switch pipeline (Tang, Fig. 3-6; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skill in the art that the resulting combination of the references further teaches the above claimed features, as data is properly communicated for processing.
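The slow-path/fast-path behavior recited in claims 8 and 10 (a packet with no exact match flow entry is handled by a software data plane, which resolves a FIB entry and then programs the exact match entry so later packets of the flow hit hardware) can be sketched as a simple cache-fill. Everything below is a hypothetical simplification (a Python dict stands in for the hardware switch pipeline, and the FIB holds a single invented /24 route); none of the values come from the record.

```python
# Hypothetical sketch of claims 8/10: first packet of a flow misses the
# (simulated) hardware switch pipeline, takes the software slow path,
# and causes an exact match flow entry (a /32 host route, per claim 19's
# 255.255.255.255 mask) to be programmed for subsequent packets.

hardware_pipeline = {}                 # flow key -> next hop (exact match only)
fib = {"10.0.0.0/24": "nexthop-A"}     # software data plane's FIB (invented route)

def forward(dst_ip: str) -> str:
    flow_key = dst_ip + "/32"          # exact match key: mask 255.255.255.255
    if flow_key in hardware_pipeline:  # fast path: entry already programmed
        return hardware_pipeline[flow_key]
    # Slow path: consult the FIB (here a single /24 for simplicity),
    # then program the exact match entry, per claim 10 only after the
    # first packet of the flow has been received.
    prefix = ".".join(dst_ip.split(".")[:3]) + ".0/24"
    next_hop = fib[prefix]
    hardware_pipeline[flow_key] = next_hop
    return next_hop
```

On this toy model the first call for a destination populates the pipeline and every later call for the same destination is a pure dictionary hit, mirroring the claimed hardware offload.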
As per claim 13, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 8 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein the classifying is performed using vector packet processing (VPP) (Tang, Fig. 3-6; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skilled in the art that the resulting combination of the references further teaches the above claimed features as data is proper communicated for processing. As per claim 14, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 13 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein a host control plane plug-in executing on the physical host configures the software data plane on the DPU to perform the VPP (Tang, Fig. 3-6; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skilled in the art that the resulting combination of the references further teaches the above claimed features as data is proper communicated for processing. As per claim 15, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 14 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein the host control plane plug-in associates the pod with the one of the plurality of virtual functions (Tang, Fig. 3-6; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 
2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skilled in the art that the resulting combination of the references further teaches the above claimed features as data is proper communicated for processing. As per claim 16, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 2 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein the pod comprises a plurality of containers, wherein the container is one of the plurality of containers, wherein the plurality of containers all use the one of the plurality of virtual functions to transmit packets to the DPU (Tang, Fig. 3-6; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skilled in the art that the resulting combination of the references further teaches the above claimed features as data is proper communicated for processing. As per claim 17, Tang, Sharma and Gadi teach/suggest all the claimed features of claim 1 above, where Tang, and Sharma further teach/suggest the method comprising wherein the selection of the transmission interface is determined based on an intended destination of the packet (Tang, Fig. 3-6; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; and Sharma, Fig. 3; [0008]; [0072]; [0110]-[0122]). As per claim 18, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 2 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein the exact match flow entry is a route associated with a destination that has a specific Internet Protocol (IP) address (Tang, Fig. 3-6; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 
3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skilled in the art that the resulting combination of the references further teaches the above claimed features as data is proper communicated for processing. As per claim 19, Tang, Sharma, and Gadi teach/suggest all the claimed features of claim 2 above, where Tang, Sharma, and Gadi further teach/suggest the method comprising wherein the exact match flow entry has a subnet mask of 255.255.255.255 (Tang, Fig. 3-6; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0050]-[0051]; [0057]-[0063]; [0069]-[0080]; Sharma, Fig. 3; [0008]; [0046]; [0072]; [0078]-[0079]; [0110]-[0122]; and Gadi, Fig. 2; [0030]; [0037]-[0039]), wherein it would have been obvious to one of ordinary skilled in the art that the resulting combination of the references further teaches the above claimed features as data is proper communicated for processing. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Tang et al. (US Pub.: 2023/0231827) in view of Sharma et al. (US Pub.: 2024/0422107). As per claim 20, Tang teaches/suggests a method for processing packets, comprising: operating, by an application executing on a container in a pod on a physical host, a transmission interface over which to transmit a packet (e.g. associated with transmission of data between Pods in Fig. 3-4: [0026]-[0027]; and [0059]-[0063]) based on whether the packet is an internal packet (e.g. associated with intra-node data transfer: Fig. 3-4; [0059]-[0063]) or an external packet (e.g. associated with communication between nodes executing on different host computers: Fig. 7, [0071]; [0077]), wherein the transmission interface is associated with the pod (e.g. associated with Fig. 3-4, ref. 320, 322, 324: [0059]-[0063]), and for internal network traffic and internal network traffic is destined or another pod on the physical host (e.g. 
associated with intra-node data transfer: Fig. 3-4; [0059]-[0063]), for external network traffic and external network traffic is destined for another pod on a different physical host (e.g. associated with communication between nodes executing on different host computers: Fig. 7, [0071]; [0077]), and receiving the packet by a host bridge in the physical host (e.g. associated with bridge (330) in Fig. 3-4), wherein the packet is destined for a second container on a second pod in executing on the physical host (e.g. associated with bridge (330) receiving data transmitted by Pod1A (320) in Fig. 3-4: [0059]-[0063]); and transmitting, by the host bridge, the packet to a second interface, wherein the second pod is associated with the second interface (e.g. associated with bridge (330) transmitting the received data to corresponding Pod in Fig. 3-4: [0059]-[0063]) (Fig. 3-4; Fig. 6-8; [0003]-[0009]; [0026]-[0030]; [0034]; [0059]-[0063]; and [0069]-[0080]). Tang do not teach the method comprising: selecting from a set of transmission interfaces, wherein the set of transmission interfaces comprises a virtual Ethernet interface and one of a plurality of virtual functions, wherein the set of transmission interfaces are operating accordingly, wherein the virtual Ethernet interface is operating accordingly, wherein the virtual function is operating accordingly, and wherein the transmission interface is the virtual Ethernet interface; operating with virtual Ethernet interface, being associated with the virtual Ethernet interface. Sharma teaches/suggests a method comprising: selecting from a set of transmission interfaces (e.g. associated with virtual network interface (212), (213), and virtual function (27A) in Fig. 3 being selected/used for communication: [0114]; and [0122]), wherein the set of transmission interfaces comprises a virtual Ethernet interface and one of a plurality of virtual functions (Fig. 
3; [0072]; [0114]; and [0122]), wherein the virtual Ethernet interface is operating accordingly, wherein the virtual function is operating accordingly (Fig. 3; [0072]; [0114]; and [0122]), wherein the set of transmission interfaces are operating accordingly (e.g. associated with virtual network interface (212), (213), and virtual function (27A) in Fig. 3: [0114]; and [0122]), and wherein the transmission interface is the virtual Ethernet interface ([0072]; [0122]); and operating with the virtual Ethernet interface, being associated with the virtual Ethernet interface ([0027]; [0122]) (Fig. 3; [0008]; [0072]; and [0110]-[0122]). It would have been obvious for one of ordinary skill in the art, before the effective filing date of the claimed invention, to include Sharma's interfacing architecture in Tang's method for the benefit of simplifying the configuration and reconfiguration of the virtual router (Sharma, [0012]), to obtain the invention as specified in claim 20.

II. PERTINENT RELATED PRIOR ART

Zhou et al. (US Pub.: 2024/0179071) discloses an agent distributing OVS flow rules to network elements using OVS daemons and an OVS bridge, which bridges communication between all pods.

III. CLOSING COMMENTS

CONCLUSION

STATUS OF CLAIMS IN THE APPLICATION

The following is a summary of the treatment and status of all claims in the application, as recommended by M.P.E.P. 707.07(i):

CLAIMS REJECTED IN THE APPLICATION

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

DIRECTION OF FUTURE CORRESPONDENCE

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHUN KUAN LEE, whose telephone number is (571) 272-0671. The examiner can normally be reached Monday-Friday.

IMPORTANT NOTE

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Idriss Alrobaye, can be reached at (571) 270-1023. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHUN KUAN LEE/
Primary Examiner, Art Unit 2181
March 11, 2026
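For orientation, the claim 20 method mapped above (pick a virtual Ethernet interface for internal, same-host pod-to-pod traffic, and a DPU-backed virtual function for external traffic, with a host bridge forwarding internal packets) can be sketched in a few lines. All names below (`select_interface`, `VETH`, `VF`, `is_exact_match`) are illustrative assumptions, not identifiers from the application or the cited references; the /32 check simply restates claim 19's exact-match flow entry in code.

```python
import ipaddress

# Hypothetical labels for the two transmission interfaces recited in
# claim 20 (illustrative names, not from the application's disclosure).
VETH = "virtual-ethernet"   # internal traffic: pod-to-pod on the same host
VF = "virtual-function"     # external traffic: offloaded to the DPU/NIC

def select_interface(dst_pod_host: str, local_host: str) -> str:
    """Select a transmission interface based on whether the packet is
    internal (destination pod on the same physical host) or external."""
    return VETH if dst_pod_host == local_host else VF

def is_exact_match(flow_prefix: str) -> bool:
    """Claim 19's 'exact match flow entry' with subnet mask
    255.255.255.255 is a /32 prefix: every address bit must match."""
    return ipaddress.ip_network(flow_prefix).prefixlen == 32

# An internal packet stays on the host bridge via the veth pair;
# an external packet goes out a virtual function processed on the DPU.
print(select_interface("host-a", "host-a"))  # virtual-ethernet
print(select_interface("host-b", "host-a"))  # virtual-function
print(is_exact_match("10.0.0.7/32"))         # True
```

This is only a reading aid for the claim language, not a reconstruction of Tang's or Sharma's actual implementations.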

Prosecution Timeline

Jul 22, 2024
Application Filed
Oct 15, 2025
Non-Final Rejection — §103
Jan 08, 2026
Interview Requested
Jan 15, 2026
Applicant Interview (Telephonic)
Jan 20, 2026
Response Filed
Jan 24, 2026
Examiner Interview Summary
Mar 11, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602270
KV-CACHE STREAMING FOR IMPROVED PERFORMANCE AND FAULT TOLERANCE IN GENERATIVE MODEL SERVING
2y 5m to grant • Granted Apr 14, 2026
Patent 12596659
METHODS, DEVICES AND SYSTEMS FOR HIGH SPEED TRANSACTIONS WITH NONVOLATILE MEMORY ON A DOUBLE DATA RATE MEMORY BUS
2y 5m to grant • Granted Apr 07, 2026
Patent 12579080
OUTPUT METHOD AND DEVICE
2y 5m to grant • Granted Mar 17, 2026
Patent 12579089
DATA PROCESSING METHOD, APPARATUS AND SYSTEM BASED ON PARA-VIRTUALIZATION DEVICE
2y 5m to grant • Granted Mar 17, 2026
Patent 12554540
EVENT PROCESSING BY HARDWARE ACCELERATOR
2y 5m to grant • Granted Feb 17, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
68%
Grant Probability
71%
With Interview (+3.1%)
3y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 669 resolved cases by this examiner. Grant probability derived from career allow rate.
