DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
1. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
2. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US Pub. No. US20240422107) in view of Gadi et al. (US Pub. No. US20240241728).
As per claims 1 and 20, Sharma discloses a method for processing packets (paragraph 114, exchange packets using links of an underlying physical network), comprising:
receiving a packet (paragraph 114, exchange packets using links of an underlying physical network) via a representor port (paragraph 114, Interfaces 232 include a port interface card having one or more network ports.) on a data processing unit (DPU) (fig.3, computing device 200) operatively connected to a physical host (paragraph 117, routing packets among virtual network endpoints of one or more virtual networks, where the virtual network endpoints are hosted by one or more of servers 12), wherein the physical host is connected using a Peripheral Component Interconnect Express (PCIe) connection (fig.3, PCIe bus 242), wherein the PCIe connection is associated with a plurality of virtual functions (paragraph 116, Virtual functions 27A and 27B may represent lightweight PCIe functions that share physical resources with a physical function used by physical driver 225 and with other virtual functions);
wherein the packet originated from a pod (fig.3, pods 202) executing on the physical host, wherein the pod is associated with one of the plurality of virtual functions (paragraph 65, creates virtual network interfaces to connect pods to virtual router 21A and enable containers of such pods to communicate, via the virtual network interfaces), wherein the one of the plurality of virtual functions is mapped to the representor port (paragraph 114, associated with virtual network interface 212, 213 and virtual function 27A in figure 3 being selected for communication);
in response to receiving the packet, processing the packet using a hardware switch pipeline in the DPU (paragraph 120, virtual router 220 may implement a packet processing pipeline), wherein the processing the packet comprises using a plurality of match/action tables to identify an exact match flow entry for the packet (paragraph 33, identifies a packet flow to which a received packet corresponds), wherein the plurality of match/action tables do not contain any match flow entries that are not exact match flow entries (paragraph 161, Virtual router agent 504 may apply slow-path packet processing for the first (initial) packet of each new flow traversing virtual router 502 and installs corresponding flow entries to flow tables for the new flows for fast path processing by virtual router 502 for subsequent packets of the flows); and
initiating transmission of the packet towards its intended destination using the exact match flow entry (paragraph 161, apply slow-path packet processing for the first (initial) packet of each new flow traversing virtual router 502 and installs corresponding flow entries to flow tables for the new flows for fast path processing by virtual router 502 for subsequent packets of the flows).
Sharma discloses all of the above limitations but does not explicitly disclose a method for processing packets further comprising a data processing unit (DPU) operatively connected to a physical host, wherein the physical host is connected to the DPU. However, Gadi discloses this limitation (fig.2, DPU device 109, operatively connected to a physical host (fig.2, host 106) that is connected to the DPU and operates with a module in the DPU, as cited in paragraph 30, the DPU devices 109 can execute workloads 130 assigned to execute on host devices 106).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Gadi with the teachings of Sharma to obtain the benefit of offloading workloads to the DPU, yielding the predictable result of more efficient control and thus enhanced system performance.
As per claim 2, Sharma discloses wherein the plurality of match/action tables are content addressable memory tables and wherein the exact match flow entry is stored on one of the content addressable memory tables (paragraph 11, DPDK applications are running, they will conflict with each other's PCI addresses as both may attempt to initialize the same interfaces).
As per claim 3, Sharma discloses wherein the exact match flow entry is a route associated with a destination that has a specific Internet Protocol (IP) address (paragraph 26, an Internet Protocol (IP) intranet operated by the service provider that operates service provider network 107).
As per claim 4, Sharma discloses wherein the exact match flow entry has a subnet mask of 255.255.255.255 (paragraph 26, an Internet Protocol (IP) intranet operated by the service provider that operates service provider network 107).
As per claim 5, Sharma discloses the method further comprising:
prior to receiving the packet, receiving a second packet by the DPU (paragraph 69, Pod 22A is configured with a data interface 28 that is used for high-throughput packet processing, more specifically, for sending and receiving packets with virtual router 21A for high-throughput applications);
in response to receiving the second packet, making a first determination that there is no exact match flow entry in the hardware switch pipeline (paragraph 120, virtual router 220 may implement a packet processing pipeline);
in response to the first determination, classifying, in the DPU, the second packet as a data packet (paragraph 172, the computing device executes the virtual router to process a packet (710). The computing device executes a container of the second virtual computing instance (712));
in response to the classifying of the second packet:
identifying, using a software data plane in the DPU, a forwarding information base (FIB) entry to be used to transmit the second packet towards its intended destination (paragraph 8, deploying and managing a virtual router having Data Plane Development Kit (DPDK) functionality to a computing device);
identifying, using the software data plane, a flow associated with the second packet (paragraph 13, data plane monitoring and corrective actions, where the virtual router exposes its own internal state tracking via endpoints (e.g., HTTP REST endpoints));
initiating, using the software data plane, programming of the exact match flow entry in the hardware switch pipeline (paragraph 120, virtual router 220 may implement a packet processing pipeline); and
initiating, using the software data plane, transmission of the second packet towards its intended destination using the FIB entry and the hardware switch pipeline, wherein the packet is associated with the flow (paragraph 120, The pipeline can be stitched by the virtual router agent 216 from the simplest to the most complicated manner depending on the operations to be applied to a packet).
As per claim 6, Sharma discloses the method further comprising:
prior to receiving the second packet, receiving a third packet by the DPU (paragraph 69, Pod 22A is configured with a data interface 28 that is used for high-throughput packet processing, more specifically, for sending and receiving packets with virtual router 21A for high-throughput applications);
in response to receiving the third packet, making a second determination that there is no exact match flow entry in the hardware switch pipeline in the DPU (paragraph 120, virtual router 220 may implement a packet processing pipeline);
in response to the second determination, classifying, in the DPU, the third packet as a control packet, wherein the control packet comprises a Border Gateway Protocol (BGP) message (paragraph 8, deploying and managing a virtual router having Data Plane Development Kit (DPDK) functionality to a computing device);
in response to the classifying of the third packet: processing, by a control plane in the DPU, the third packet to obtain a route entry, wherein the route entry is stored in a routing information base (RIB) in the control plane, and wherein information in the route entry is subsequently stored in the software data plane in the FIB entry (paragraph 8, deploying and managing a virtual router having Data Plane Development Kit (DPDK) functionality to a computing device).
As per claim 7, Sharma discloses wherein the exact match flow entry is not programmed into the hardware switch pipeline until after the packet is received by the DPU (paragraph 120, virtual router 220 may implement a packet processing pipeline).
As per claim 8, Sharma discloses wherein programming of the exact match flow entry in the hardware switch pipeline comprises storing the exact match flow entry in any available storage location in the hardware switch pipeline (paragraph 120, virtual router 220 may implement a packet processing pipeline).
As per claim 9, Sharma discloses wherein the programming of the exact match flow entry does not require any reordering of any previously stored exact match flow entries in the hardware switch pipeline (paragraph 120, virtual router 220 may implement a packet processing pipeline).
As per claim 10, Sharma discloses wherein the classifying is performed using vector packet processing (VPP) (paragraph 165, the initialization sequence for this communication is triggered once a virtual router agent to virtual Netlink channel 552 (e.g., over a socket) has been established).
As per claim 11, Sharma discloses wherein a host control plane plug-in executing on the physical host configures the software data plane on the DPU to perform the VPP (paragraph 165, The initialization sequence for this communication is triggered once a virtual router agent to virtual Netlink channel 552 (e.g., over a socket) has been established).
As per claim 12, Sharma discloses wherein the host control plane plug-in associates the pod with one of the plurality of virtual functions (paragraph 65, creates virtual network interfaces to connect pods to virtual router 21A and enable containers of such pods to communicate, via the virtual network interfaces).
As per claim 13, Sharma discloses wherein the pod comprises a plurality of containers, wherein the plurality of containers all use the one of the plurality of virtual functions to transmit packets (paragraph 65, creates virtual network interfaces to connect pods to virtual router 21A and enable containers of such pods to communicate, via the virtual network interfaces).
As per claim 14, Sharma discloses wherein the intended destination of the packet is external to the physical host and the DPU (paragraph 117, routing packets among virtual network endpoints of one or more virtual networks, where the virtual network endpoints are hosted by one or more of servers 12).
As per claim 15, Sharma discloses wherein the intended destination of the packet is a second pod executing on the physical host (paragraph 117, routing packets among virtual network endpoints of one or more virtual networks, where the virtual network endpoints are hosted by one or more of servers 12).
As per claim 16, Sharma discloses wherein the plurality of match/action tables are organized in a hierarchical table structure (paragraph 161, Virtual router agent 504 may apply slow-path packet processing for the first (initial) packet of each new flow traversing virtual router 502 and installs corresponding flow entries to flow tables for the new flows for fast path processing by virtual router 502 for subsequent packets of the flow).
As per claim 17, Sharma discloses wherein the hierarchical table structure comprises a layer 2 source table, a layer 2 destination table, and a layer 3 routing flows table (paragraph 161, Virtual router agent 504 may apply slow-path packet processing for the first (initial) packet of each new flow traversing virtual router 502 and installs corresponding flow entries to flow tables for the new flows for fast path processing by virtual router 502 for subsequent packets of the flow).
As per claim 18, Sharma discloses the method further comprising:
after transmitting the packet, making a determination that no packets associated with the exact match flow entry have been received by the hardware switch pipeline for a predetermined period of time (paragraph 66, What unit this corresponds to depends on a particular container runtime implementation: for example, in implementations of the application container specification such as rkt, each pod runs in a unique network namespace); and
initiating, in response to the determination, deletion of the exact match flow entry from the hardware switch pipeline (paragraph 120, virtual router 220 may implement a packet processing pipeline).
As per claim 19, Sharma discloses wherein the predetermined period of time is started when the exact match flow entry is programmed into the hardware switch pipeline (paragraph 161, apply slow-path packet processing for the first (initial) packet of each new flow traversing virtual router 502 and installs corresponding flow entries to flow tables for the new flows for fast path processing by virtual router 502 for subsequent packets of the flows).
3. The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Tang et al. [US Pub. No. US20230231827] discloses a method of sending data in a network that includes at least one worker node executing one or more sets of containers and a virtual switch, the virtual switch including a gateway interface.
Conclusion
4. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIM T HUYNH whose telephone number is (571)272-3635 or via e-mail addressed to [kim.huynh3@uspto.gov]. The examiner can normally be reached on M-F 7:00AM-4:00PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henry Tsai, can be reached at (571)272-4176 or via e-mail addressed to [Henry.Tsai@USPTO.GOV].
The fax phone numbers for the organization where this application or proceeding is assigned are (571)273-8300 for regular communications and After Final communications. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist whose telephone number is (571)272-2100.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K. T. H./
Examiner, Art Unit 2184
/HENRY TSAI/Supervisory Patent Examiner, Art Unit 2184