Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
DETAILED ACTION
This non-final office action is responsive to U.S. Patent Application No. 18/622,926, filed on March 30, 2024.
Claims 1-20 are pending.
Claims 1-20 are rejected.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on July 8, 2025, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.
Examiner’s Note
Claim 20 recites “a hardware-based network interface device,” which the Examiner interprets to comprise hardware components.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Cheng et al. (U.S. Patent Application Publication No. 2018/0359131, hereinafter “Cheng”).
Regarding claim 1, Cheng disclosed a method for managing data flows in a software defined network (SDN) comprising a plurality of computing nodes and a hardware-based network interface device (Cheng disclosed in Abstract “SDN”; Cheng further disclosed in Fig. 1 and [0027] “System 100 may include a core network 101. There may be provider edges (PEs), such as PE 110 or PE 107, connected with core 101 and customer edge (CE) devices, such as CE device 102 or CE device 103.” Said system 100 anticipates the “software defined network” in the claim), the plurality of computing nodes hosting a plurality of virtual machines and a network virtual appliance (Cheng disclosed in Fig. 1 and [0027] that “PE 107, for example, may include whitebox leaf (wLeaf) switches, such as wLeaf switch 106, which are communicatively connected with vPE servers, such as vPE server 108.” Cheng’s “PE 107” and “PE 110” anticipate the computing nodes in the claim, and Cheng’s “vPE server” anticipates the network virtual appliance in the claim), the method comprising:
receiving, by the hardware-based network interface device, a first data packet of a data flow addressed to an endpoint hosted on one of the plurality of virtual machines (Cheng, Fig. 3, step 121 and [0035], “At step 121, vPE 109 may receive data that may have originated from CE device 102 and been passed to vPE 109 via wLeaf switch.” Cheng also disclosed in Fig. 1 and [0027] that traffic from CE device 102 may be addressed to CE device 103 via vPE and wLeaf switch. This disclosure makes it clear that the wLeaf switch receives the first data packet that originated from CE device 102, and the “wLeaf switch” anticipates the hardware-based network interface device);
forwarding, by the hardware-based network interface device to the network virtual appliance, the first data packet (Cheng disclosed in Fig 3, step 121 and [0035] that “At step 121, vPE 109 may receive data that may have originated from CE device 102 and been passed to vPE 109 via wLeaf switch,” implying that the wLeaf switch forwarded the data it received to vPE 109. Here vPE 109 anticipates the “network virtual appliance” in the claim);
processing, by the network virtual appliance, the first data packet according to a match action associated with the data flow (Cheng disclosed in [0035] that “At step 122, the received data of step 121 may be sampled by sampler 135,” where sampler 135 is a sampling application in vPE 109; Cheng further disclosed in [0036] that “sampled traffic may be forwarded to a sketch program 137 of flow detection module 136. Sketch program 137 may use a hash function and map the packet fields (e.g., IP src, IP dst, TCP sport and dport, etc) to a number of buckets.” In other words, the sampling application in vPE performs a match action);
forwarding, by the network virtual appliance to the hardware-based network interface device, the processed data packet for routing to the endpoint (Cheng disclosed in Fig. 4 a normal data path, where a packet received on interface 131 is forwarded by wLeaf switch 106 to vPE server 108 for processing, then is forwarded back to wLeaf switch 106 for routing out of interface 140 to the destination CE device);
sending, by the network virtual appliance to the hardware-based network interface device, a request to offload processing of subsequent packets of the data flow in accordance with the match action associated with the data flow (Cheng disclosed in Fig. 4 that the vPE sends Offload(f1) and Offload(f2) instructions to the offloading module 134. Cheng further disclosed in Figs. 3, 4 and [0037] that “offloading module 134 may send instructions to wLeaf switch 106 to implement offloading for data traffic (e.g., flows) matching the bucket bj.”);
based on content of the request, generating, by hardware-based network interface device, session information associated with the data flow, wherein the session information enables offloading of processing of the subsequent data packets associated with the data flow from the network virtual appliance to the hardware-based network interface device (Cheng disclosed in [0052] that the add(wLeaf, flow, actions) call and del(wLeaf, flow) call may respectively add and delete certain flow entries for wLeaf switch 106.);
applying, by the hardware-based network device, the match action associated with the data flow to the subsequent data packets, wherein the application of the match action is disaggregated from physical dependencies on a computing node that is hosting the network virtual appliance (Cheng disclosed in [0007] that “based on the determining the criteria to indicate data to offload through the switch, providing instructions to bypass the virtual machine for subsequent data received at the switch and matching the criteria.” The “bypass” in Cheng is equivalent to “disaggregate from physical dependencies” in the claim); and
forwarding, by the hardware-based network device, the processed subsequent data packets to the endpoint, thereby enabling the subsequent data packets to be processed and forwarded by the hardware-based network device without being forwarded to or processed by the network virtual appliance (Cheng disclosed in Fig. 4 and [0040] that in the wLeaf switch, “matching flows received on interface 131 may be sent through interface 140 rather than being forward through interface 132 which connects with vPE server 108”).
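For illustration only, the flow-offload pattern mapped above (first packet processed by the virtual appliance, which then requests that the hardware switch handle subsequent packets of the flow) can be sketched as follows. All names (HardwareSwitch, VirtualAppliance, add_flow, etc.) are hypothetical and are not drawn from Cheng or the claims; the add_flow call loosely mirrors the add(wLeaf, flow, actions) call Cheng discloses in [0052].

```python
# Illustrative sketch (hypothetical names) of the offload pattern:
# the first packet of a flow is processed by the virtual appliance, which
# then installs a match-action entry in the hardware switch so that
# subsequent packets of the same flow bypass the appliance.

class VirtualAppliance:
    def process(self, packet, switch):
        # the match action applied to this flow's packets
        action = lambda p: {**p, "processed": True}
        # request offload of subsequent packets of this flow
        switch.add_flow((packet["src"], packet["dst"]), action)
        return action(packet)

class HardwareSwitch:
    def __init__(self, appliance):
        self.flow_table = {}          # flow key -> action (offloaded sessions)
        self.appliance = appliance

    def add_flow(self, key, action):  # loosely mirrors add(wLeaf, flow, actions)
        self.flow_table[key] = action

    def receive(self, packet):
        key = (packet["src"], packet["dst"])
        if key in self.flow_table:    # offloaded: apply match action in hardware
            return self.flow_table[key](packet)
        return self.appliance.process(packet, switch=self)

switch = HardwareSwitch(VirtualAppliance())
p1 = switch.receive({"src": "10.0.0.1", "dst": "10.0.0.2"})  # via appliance
p2 = switch.receive({"src": "10.0.0.1", "dst": "10.0.0.2"})  # offloaded path
```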
Claim 12 lists substantially the same elements as claim 1, but in system form rather than method form. Therefore, the rejection rationale for claim 1 applies equally to claim 12.
Regarding claims 2 and 13, Cheng disclosed the subject matter of claims 1 and 12, respectively.
Cheng further disclosed wherein the request comprises a flow offload (or FastPath++) packet that includes matches and actions (Cheng disclosed in [0052] the add(wLeaf, flow, actions) call and del(wLeaf, flow) call that anticipates the “request” in the claim).
Regarding claims 3 and 14, Cheng disclosed the subject matter of claims 2 and 13, respectively.
Cheng further disclosed wherein the hardware-based network device is configured to generate the data flow based on the matches and actions and process the data flow to be offloaded from the network virtual appliance to the hardware-based network device without forwarding packets associated with the data flow to the network virtual appliance (Cheng disclosed in Fig. 4 and [0040] that in the wLeaf switch, “matching flows received on interface 131 may be sent through interface 140 rather than being forward through interface 132 which connects with vPE server 108”).
Regarding claim 4, Cheng disclosed the method of claim 3.
Cheng further disclosed wherein the matches and actions include encapsulation with a SRC IP or DST IP (Cheng, Figs. 4, 6 and [0040], “Offloading module 134 may give specific characteristics (e.g., IP address ranges, port numbers, class of service, etc.) for the switch to determine whether initiate an offloaded data path or a normal data path.”).
Regarding claims 5 and 15, Cheng disclosed the subject matter of claims 1 and 12, respectively.
Cheng further disclosed sending an additional request to terminate processing of the data flow (Cheng disclosed in [0041], [0048], [0051] and [0052] that the offloading module may send a del(wLeaf, flow) call to delete certain flow entries in the switch to end the offloading of the flow).
Regarding claims 6 and 16, Cheng disclosed the subject matter of claims 1 and 12, respectively.
Cheng further disclosed wherein the hardware-based network device is configured to use an age of the data flow to determine when to stop or remove processing of the data flow (Cheng disclosed in [0048] and [0049] and Fig. 6 that the flow tables in the wLeaf switch may have a timeout setting that allows an entry to be recycled if no packet matches the entry).
Regarding claims 7 and 17, Cheng disclosed the subject matter of claims 1 and 12, respectively.
Cheng further disclosed wherein the hardware-based network device is configured to terminate processing of the data flow in response to expiration of a TTL (The “timeout” in the forwarding table entries disclosed by Cheng in Fig. 6 and paragraphs [0048] and [0049] anticipates the “TTL” in the claim).
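The timeout-based aging of offloaded flow entries described in Cheng ([0048]-[0049], Fig. 6) can be sketched as follows for illustration; the data layout and function name are hypothetical, not Cheng's implementation.

```python
# Hypothetical sketch of timeout-based aging: a flow entry whose idle time
# exceeds its timeout is removed from the table, returning that flow to the
# normal (appliance-processed) data path.

def expire_entries(flow_table, now):
    """flow_table maps key -> {"last_hit": t, "timeout": ttl}; drop stale entries."""
    return {key: entry for key, entry in flow_table.items()
            if now - entry["last_hit"] <= entry["timeout"]}

table = {
    "f1": {"last_hit": 100, "timeout": 30},
    "f2": {"last_hit": 125, "timeout": 30},
}
table = expire_entries(table, now=140)  # f1 idle for 40 > 30, so it is removed
```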
Regarding claim 8, Cheng disclosed the method of claim 3.
Cheng further disclosed wherein the data flow is offloaded when the data flow meets a bandwidth threshold (Cheng disclosed in [0037] that “when the incremental speed of a counter Δcj/Δt exceeds a threshold, the bucket bj may be determined to be a candidate for offloading and offloading module 134 may send instructions to wLeaf switch 106 to implement offloading for data traffic (e.g., flows) matching the bucket bj”).
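The offload criterion quoted above from Cheng [0037] (a bucket becomes an offload candidate when the counter's incremental speed Δcj/Δt exceeds a threshold) amounts to a simple rate test, sketched below with hypothetical names and units.

```python
# Hypothetical sketch of the rate-threshold test in Cheng [0037]: compare the
# counter's incremental speed (delta_c / delta_t) against a threshold.

def is_offload_candidate(c_prev, c_now, t_prev, t_now, threshold):
    rate = (c_now - c_prev) / (t_now - t_prev)  # e.g., packets per second
    return rate > threshold

fast = is_offload_candidate(1000, 6000, 0.0, 1.0, threshold=4000)  # rate 5000
slow = is_offload_candidate(1000, 2000, 0.0, 1.0, threshold=4000)  # rate 1000
```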
Regarding claims 9 and 18, Cheng disclosed the subject matter of claims 1 and 12, respectively.
Cheng further disclosed wherein the generating the session information comprises parsing a plurality of rules to identify rules that are applicable to a source or destination of the data flow (Cheng disclosed the use of matching criteria in [0038]. For instance, Cheng disclosed “detection module 136 maps packets with source or destination addresses falling in 128.112.1.0/24 and UDP ports 5060 and 5061 to bucket b0 using a hash function. Other traffic is mapped to bucket b1. VoIP traffic usually uses port 5060 and 5061 for service.”).
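The bucket mapping quoted above from Cheng [0038] can be sketched as follows for illustration. Cheng uses a hash function for the mapping; this simplified sketch instead matches the stated criteria (addresses in 128.112.1.0/24 and UDP ports 5060/5061) explicitly, and all names are hypothetical.

```python
# Hypothetical sketch of the bucket mapping in Cheng [0038]: packets with a
# source or destination address in 128.112.1.0/24 and a port of 5060 or 5061
# (typical VoIP signaling ports) map to bucket b0; other traffic maps to b1.

import ipaddress

VOIP_NET = ipaddress.ip_network("128.112.1.0/24")
VOIP_PORTS = {5060, 5061}

def bucket(pkt):
    in_net = (ipaddress.ip_address(pkt["src"]) in VOIP_NET or
              ipaddress.ip_address(pkt["dst"]) in VOIP_NET)
    on_port = pkt["sport"] in VOIP_PORTS or pkt["dport"] in VOIP_PORTS
    return "b0" if (in_net and on_port) else "b1"

b_voip = bucket({"src": "128.112.1.9", "dst": "8.8.8.8",
                 "sport": 5060, "dport": 40000})
b_other = bucket({"src": "10.0.0.1", "dst": "10.0.0.2",
                  "sport": 80, "dport": 12345})
```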
Regarding claims 10 and 19, Cheng disclosed the subject matter of claims 1 and 12, respectively.
Cheng further disclosed returning processing of the subsequent packets of the data flow from the hardware-based network device to the network virtual appliance (Cheng disclosed in [0048] that “offloading module 134 may call del function to explicitly delete an offloaded function”).
Regarding claim 11, Cheng disclosed the method of claim 10.
Cheng further disclosed wherein the returning is performed in response to determining that the data flow no longer meets a criterion for offloading processing of packets of the data flow to the hardware-based network device (Cheng, [0041], “f1 141 may later terminate or the traffic may decrease to a relatively small rate (e.g., offloading module 134 may learn such information from the pulled counters) and call del to delete the entries”).
Claim 20 lists substantially the same elements as claim 1, but in device form rather than method form. Therefore, the rejection rationale for claim 1 applies equally to claim 20.
Related Prior Art
Han et al. (US 2020/0213154) is directed to a method and system for storing a fast-path and a slow-path table in a memory associated with a programmable switch, such as a cache of the programmable switch. An offload controller may control the contents of the fast-path and/or slow-path table and may thereby control behavior of the programmable switch.
Gupta et al. (US 11,102,164) and Chiou et al. (US 11,593,138) are patents that have the same assignee (Microsoft) as the current application and disclose very similar subject matter.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIRLEY X ZHANG whose telephone number is (571) 270-5012. The examiner can normally be reached from 8:30am to 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joon H Hwang can be reached at 571-272-4036. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIRLEY X ZHANG/Primary Examiner, Art Unit 2447