Prosecution Insights
Last updated: April 19, 2026
Application No. 18/233,779

SCALABLE ROUTING AND FORWARDING OF PACKETS IN CLOUD INFRASTRUCTURE

Status: Final Rejection (§103)
Filed: Aug 14, 2023
Examiner: HACKENBERG, RACHEL J
Art Unit: 2454
Tech Center: 2400 — Computer Networks
Assignee: Oracle International Corporation
OA Round: 3 (Final)

Grant Probability: 79% (Favorable)
Estimated OA Rounds: 4-5
Estimated Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (236 granted / 300 resolved; +20.7% vs Tech Center average) — grants above average
Interview Lift: +26.4% across resolved cases with an interview — a strong lift
Typical Timeline: 2y 10m average prosecution
Currently Pending: 35
Total Applications: 335 across all art units

Statute-Specific Performance

§101: 4.9% (-35.1% vs TC avg)
§103: 53.2% (+13.2% vs TC avg)
§102: 14.2% (-25.8% vs TC avg)
§112: 17.8% (-22.2% vs TC avg)

Based on career data from 300 resolved cases; Tech Center averages are estimates.

Office Action (§103)
DETAILED ACTION

Notice of AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on 08/15/2025, 09/04/2025, and 12/02/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.

Response to Arguments

Applicant's remarks/arguments filed 09/04/2025 have been fully considered. Applicant argues that the prior art of record, Gao, does not teach "selecting, by the packet processing system, a particular host machine from the set of one or more host machines included in the packet processing system for processing the packet," as recited in the independent claims.

In response, the Examiner respectfully disagrees. The limitation is recited broadly, and Gao teaches selecting one host machine for processing the packet. In Fig. 2, Host-B has been selected to process the packet. See Gao, Fig. 1: packets are sent from Host-A VM-A to Host-C VM-C, and from Host-A VM-A to Host-B (VM-B and VM-Y). Each host has multiple VMs and multiple VNICs, and in Fig. 2, Host-B receives the packet.

Applicant argues that the Office Action equates the VTEP with the worker thread and the VNIC. In response, the Examiner respectfully disagrees. Gao teaches multiple host machines, each hosting virtual machines with multiple VNICs. The VTEP is responsible for directing/routing the packet to the particular/desired VNIC. Option 326 is set to "1," which indicates to VTEP 117B that only one VNIC (VNIC-B 154) in Host-B receives the packet for VM-B 134 (the next-hop target). See Gao, [0023]: In response to receiving the encapsulated packets (see 185 in FIG. 1), VTEP 117B is configured to decapsulate and retrieve information from the encapsulated packets. [0046]: At host-B 110B, VTEP 117B decapsulates and retrieves information from port mirroring packets 240, more specifically from port mirroring options B. Virtual switch 116B may access information retrieved from port mirroring options B. Based on the "1" specified in option 326 and the MAC address associated with VM-B 134, virtual switch 116B is configured to forward a single copy of mirrored packets 241 (i.e., 260) to VM-B 134.

Applicant argues that the amendment "wherein the plurality of VNICs is shared by the set of one or more host machines" to the independent claims overcomes the prior art rejection. In response, the Examiner respectfully disagrees. Per the interview discussion on 08/29/2025, it was agreed that a set of host machines (a plurality of host machines) sharing a plurality of VNICs would overcome the current art rejection. However, as the claim is amended to recite "wherein the plurality of VNICs is shared by the set of one or more host machines," the prior art of record, Gao, reads on this limitation.

Please see the rejection below in view of US 2018/0349163 A1 (Gao) in view of US 2019/0179668 A1 (Wang) and further in view of US 2021/0119921 A1 (Xie) regarding Claims 1, 4-7, 10, 13-16, and 19; further in view of US 2020/0314015 A1 (Mariappan) for Claims 2-3, 11-12, and 20; and further in view of US 2019/0265996 A1 (Shevade) for Claims 8-9 and 17-18.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-7, 10, 13-16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over US 2018/0349163 A1 (Gao) in view of US 2019/0179668 A1 (Wang) and further in view of US 2021/0119921 A1 (Xie).

Regarding Claim 1: Gao teaches A method comprising:

receiving, by a packet processing system (Fig. 1, System 100) comprising a set of one or more host machines and a plurality of virtual network interface cards (VNICs), a packet originating from a first compute instance (Fig. 1, Host-A VM-A) hosted on a first host machine (Fig. 1, Host-A); (Fig. 1, Host-A, Host-B, Host-C: packets are sent from Host-A VM-A to Host-C VM-C, and from Host-A VM-A to Host-B (VM-B and VM-Y). Each host has multiple VMs and multiple VNICs.)

selecting, by the packet processing system, a particular host machine from the set of one or more host machines included in the packet processing system for processing the packet; (Fig. 1, Host-A, Host-B, Host-C: packets are sent from Host-A VM-A to Host-C VM-C, and from Host-A VM-A to Host-B (VM-B and VM-Y). Each host has multiple VMs and multiple VNICs. Fig. 2, Host-B receives the packet.)

selecting ... (i.e., VTEP) for processing the packet; processing the packet by the particular worker thread (i.e., VTEP); ([0023] In response to receiving the encapsulated packets (see 185 in FIG. 1), VTEP 117B is configured to decapsulate and retrieve information from the encapsulated packets. [0046] At host-B 110B, VTEP 117B decapsulates and retrieves information from port mirroring packets 240.)

wherein the processing comprises determining, based on information included in the packet, a VNIC from the plurality of VNICs to be used for forwarding the packet, wherein the plurality of VNICs is shared by the set of one or more host machines (Fig. 1, each host machine comprises multiple VNICs), and determining a next-hop target to which the packet is to be forwarded using the VNIC and destination information included in the packet; (options 326: [0045] option 326 in the GBM header of port mirroring packets 240 and port mirroring packets 410 may both be set to "1," indicating a single MAC address and a single monitoring VM to forward mirrored traffic to. [0046] At host-B 110B, VTEP 117B decapsulates and retrieves information from port mirroring packets 240, more specifically from port mirroring options B. Virtual switch 116B may access information retrieved from port mirroring options B. Based on the "1" specified in option 326 and the MAC address associated with VM-B 134, virtual switch 116B is configured to forward a single copy of mirrored packets 241 (i.e., 260) to VM-B 134.) Option 326 is set to "1," which indicates to VTEP 117B that only one VNIC (VNIC-B 154) in Host-B receives the packet for VM-B 134 (the next-hop target).

and causing, by the packet processing system, the packet to be forwarded to the next-hop target. ([0046] At host-B 110B, VTEP 117B decapsulates and retrieves information from port mirroring packets 240, more specifically from port mirroring options B. Virtual switch 116B may access information retrieved from port mirroring options B. Based on the "1" specified in option 326 and the MAC address associated with VM-B 134, virtual switch 116B is configured to forward a single copy of mirrored packets 241 (i.e., 260) to VM-B 134. [0056] Fig. 6, forward generated copy to VNIC according to retrieved MAC address.)

Gao teaches selecting a particular worker thread ([0023]). However, Gao is silent on selecting, from a plurality of worker threads on the particular host machine, a particular worker thread for processing the packet.

Wang teaches, in the same field of endeavor, techniques for distributing processing of routes among multiple execution threads of a network device (Abstract). Wang also teaches selecting, from a plurality of worker threads on the particular host machine (Fig. 1, Network Device 12), a particular worker thread for processing the packet, and processing the packet by the particular worker thread. ([0010] identify, with a thread of the plurality of execution threads, a first route processing thread of the execution threads to process a first route of a routing protocol. [0052] Main thread 28 may manage reordering the routes received in order from the multiple route processing threads 26.)

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Gao per Wang to include a plurality of worker threads on the particular host machine. This would have been advantageous, as discussed above, because it would allow the modified system to provide load balancing and scalability for a virtual environment implemented by a host machine by offering multiple choices of worker threads for routing.

Gao teaches forwarding received packets ([0018]). However, Gao (as modified by Wang) is silent on receiving a packet from a host outside of the packet processing system. Xie teaches, in the same field of endeavor, a packet sending method including determining, selecting, and forwarding using a plurality of forwarding entries (Abstract). Xie also teaches receiving a packet from a host outside of the system.
([0054] In a possible implementation, in a multi-stage network, the first device may be an edge device, or may be referred to as a first-stage leaf node or an access tier device. The first device receives the first packet, for example, a multicast packet, outside a BIER domain, and the first device encapsulates the received multicast packet with a BIER header. The BIER header encapsulated by the first device for the multicast packet includes a bit string, and the bit string is used to identify an egress device to which the first packet is sent in the BIER domain. [0055] In a possible implementation, the first device receives a second packet from a fourth device, and generates the first packet by encapsulating the second packet with the BIER header. For example, the edge device receives the second packet from the fourth device outside the BIER domain, and the edge device generates the first packet by encapsulating the second packet with the BIER header.)

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Gao (as modified by Wang) per Xie to include a host outside of the system. This would have been advantageous, as discussed above, because it would allow the combined system to provide proper packet processing and load balancing across multiple network architecture types, which provides flexibility for implementation.

Regarding Claim 10: Gao teaches A packet processing system, comprising: one or more processors; and a non-transitory computer-readable storage medium containing instructions which, when executed on the one or more processors, cause the one or more processors to perform operations ([0057] The computer system may include a non-transitory computer-readable medium having stored thereon instructions or program code that, when executed by the processor, cause the processor to perform processes described herein with reference to FIG. 1) including:

receiving, by the packet processing system comprising a set of one or more host machines and a plurality of virtual network interface cards (VNICs), a packet originating from a first compute instance (Fig. 1, Host-A VM-A) hosted on a first host machine (Fig. 1, Host-A); (Fig. 1, Host-A, Host-B, Host-C: packets are sent from Host-A VM-A to Host-C VM-C, and from Host-A VM-A to Host-B (VM-B and VM-Y). Each host has multiple VMs and multiple VNICs.)

selecting a particular host machine from the set of one or more host machines included in the packet processing system for processing the packet; (Fig. 1, Host-A, Host-B, Host-C, as above. Fig. 2, Host-B receives the packet.)

selecting ... (i.e., VTEP) for processing the packet; ([0023] In response to receiving the encapsulated packets (see 185 in FIG. 1), VTEP 117B is configured to decapsulate and retrieve information from the encapsulated packets. [0046] At host-B 110B, VTEP 117B decapsulates and retrieves information from port mirroring packets 240.)

processing the packet by the particular worker thread (i.e., VTEP), wherein the processing comprises determining, based on information included in the packet, a VNIC from the plurality of VNICs to be used for forwarding the packet, wherein the plurality of VNICs is shared by the set of one or more host machines (Fig. 1, each host machine comprises multiple VNICs), and determining a next-hop target to which the packet is to be forwarded using the VNIC and destination information included in the packet; (options 326: [0045] option 326 in the GBM header of port mirroring packets 240 and port mirroring packets 410 may both be set to "1," indicating a single MAC address and a single monitoring VM to forward mirrored traffic to.
[0046] At host-B 110B, VTEP 117B decapsulates and retrieves information from port mirroring packets 240, more specifically from port mirroring options B. Virtual switch 116B may access information retrieved from port mirroring options B. Based on the "1" specified in option 326 and the MAC address associated with VM-B 134, virtual switch 116B is configured to forward a single copy of mirrored packets 241 (i.e., 260) to VM-B 134.) Option 326 is set to "1," which indicates to VTEP 117B that only one VNIC (VNIC-B 154) in Host-B receives the packet for VM-B 134 (the next-hop target).

and causing the packet to be forwarded to the next-hop target. ([0046] At host-B 110B, VTEP 117B decapsulates and retrieves information from port mirroring packets 240, more specifically from port mirroring options B. Virtual switch 116B may access information retrieved from port mirroring options B. Based on the "1" specified in option 326 and the MAC address associated with VM-B 134, virtual switch 116B is configured to forward a single copy of mirrored packets 241 (i.e., 260) to VM-B 134. [0056] Fig. 6, forward generated copy to VNIC according to retrieved MAC address.)

Gao teaches selecting a particular worker thread ([0023]). However, Gao is silent on selecting, from a plurality of worker threads on the particular host machine, a particular worker thread for processing the packet.

Wang teaches selecting, from a plurality of worker threads on the particular host machine (Fig. 1, Network Device 12), a particular worker thread for processing the packet, and processing the packet by the particular worker thread. ([0010] identify, with a thread of the plurality of execution threads, a first route processing thread of the execution threads to process a first route of a routing protocol. [0052] Main thread 28 may manage reordering the routes received in order from the multiple route processing threads 26.)
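As a purely illustrative aside (not part of the prosecution record), the worker-thread selection that the rejection attributes to the Gao/Wang combination — choosing one of several worker threads on the selected host to process a packet — is commonly implemented as a hash over the packet's flow identifiers, which also yields the load-balancing benefit cited in the rationale. The sketch below is a minimal Python illustration under that assumption; the names `Packet`, `flow_hash`, and `select_worker` are hypothetical and appear in neither reference.

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class Packet:
    src_ip: str
    dst_ip: str
    protocol: int


def flow_hash(pkt: Packet) -> int:
    """Stable hash over the packet's flow identifiers."""
    key = f"{pkt.src_ip}|{pkt.dst_ip}|{pkt.protocol}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")


def select_worker(pkt: Packet, num_workers: int) -> int:
    """Pick one worker thread index from the plurality on the selected host.

    Hashing on flow identifiers keeps all packets of one flow on the
    same worker while spreading distinct flows across workers.
    """
    return flow_hash(pkt) % num_workers


pkt = Packet("10.0.0.1", "10.0.0.2", 6)
worker = select_worker(pkt, num_workers=4)
assert 0 <= worker < 4
assert worker == select_worker(pkt, num_workers=4)  # deterministic per flow
```

The same flow always lands on the same worker, so per-flow packet ordering is preserved without cross-thread coordination.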
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Gao per Wang to include a plurality of worker threads on the particular host machine. This would have been advantageous, as discussed above, because it would allow the modified system to provide load balancing and scalability for a virtual environment implemented by a host machine by offering multiple choices of worker threads for routing.

Gao teaches forwarding received packets ([0018]). However, Gao (as modified by Wang) is silent on receiving a packet from a host outside of the packet processing system. Xie teaches receiving a packet from a host outside of the system. ([0054] In a possible implementation, in a multi-stage network, the first device may be an edge device, or may be referred to as a first-stage leaf node or an access tier device. The first device receives the first packet, for example, a multicast packet, outside a BIER domain, and the first device encapsulates the received multicast packet with a BIER header. The BIER header encapsulated by the first device for the multicast packet includes a bit string, and the bit string is used to identify an egress device to which the first packet is sent in the BIER domain. [0055] In a possible implementation, the first device receives a second packet from a fourth device, and generates the first packet by encapsulating the second packet with the BIER header. For example, the edge device receives the second packet from the fourth device outside the BIER domain, and the edge device generates the first packet by encapsulating the second packet with the BIER header.)

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Gao (as modified by Wang) per Xie to include a host outside of the system.
This would have been advantageous, as discussed above, because it would allow the combined system to provide proper packet processing and load balancing across multiple network architecture types, which provides flexibility for implementation.

Regarding Claim 19: Gao teaches One or more computer-readable non-transitory media storing computer-executable instructions that, when executed by one or more processors, ([0057] The computer system may include a non-transitory computer-readable medium having stored thereon instructions or program code that, when executed by the processor, cause the processor to perform processes described herein with reference to FIG. 1) cause:

receiving, by a packet processing system comprising a set of one or more host machines and a plurality of virtual network interface cards (VNICs), a packet originating from a first compute instance (Fig. 1, Host-A VM-A) hosted on a first host machine (Fig. 1, Host-A); (Fig. 1, Host-A, Host-B, Host-C: packets are sent from Host-A VM-A to Host-C VM-C, and from Host-A VM-A to Host-B (VM-B and VM-Y). Each host has multiple VMs and multiple VNICs.)

selecting, by the packet processing system, a particular host machine from the set of one or more host machines included in the packet processing system for processing the packet; (Fig. 1, Host-A, Host-B, Host-C, as above. Fig. 2, Host-B receives the packet.)

selecting ... (i.e., VTEP) for processing the packet; ([0023] In response to receiving the encapsulated packets (see 185 in FIG. 1), VTEP 117B is configured to decapsulate and retrieve information from the encapsulated packets. [0046] At host-B 110B, VTEP 117B decapsulates and retrieves information from port mirroring packets 240.)

processing the packet by the particular worker thread (i.e., VTEP), wherein the processing comprises determining, based on information included in the packet, a VNIC from the plurality of VNICs to be used for forwarding the packet, wherein the plurality of VNICs is shared by the set of one or more host machines (Fig. 1, each host machine comprises multiple VNICs), and determining a next-hop target to which the packet is to be forwarded using the VNIC and destination information included in the packet; (options 326: [0045] option 326 in the GBM header of port mirroring packets 240 and port mirroring packets 410 may both be set to "1," indicating a single MAC address and a single monitoring VM to forward mirrored traffic to. [0046] At host-B 110B, VTEP 117B decapsulates and retrieves information from port mirroring packets 240, more specifically from port mirroring options B. Virtual switch 116B may access information retrieved from port mirroring options B. Based on the "1" specified in option 326 and the MAC address associated with VM-B 134, virtual switch 116B is configured to forward a single copy of mirrored packets 241 (i.e., 260) to VM-B 134.) Option 326 is set to "1," which indicates to VTEP 117B that only one VNIC (VNIC-B 154) in Host-B receives the packet for VM-B 134 (the next-hop target).

and causing, by the packet processing system, the packet to be forwarded to the next-hop target. ([0046] At host-B 110B, VTEP 117B decapsulates and retrieves information from port mirroring packets 240, more specifically from port mirroring options B. Virtual switch 116B may access information retrieved from port mirroring options B. Based on the "1" specified in option 326 and the MAC address associated with VM-B 134, virtual switch 116B is configured to forward a single copy of mirrored packets 241 (i.e., 260) to VM-B 134. [0056] Fig. 6, forward generated copy to VNIC according to retrieved MAC address.)

Gao teaches selecting a particular worker thread ([0023]).
However, Gao is silent on selecting, from a plurality of worker threads on the particular host machine, a particular worker thread for processing the packet. Wang teaches selecting, from a plurality of worker threads on the particular host machine (Fig. 1, Network Device 12), a particular worker thread for processing the packet, and processing the packet by the particular worker thread. ([0010] identify, with a thread of the plurality of execution threads, a first route processing thread of the execution threads to process a first route of a routing protocol. [0052] Main thread 28 may manage reordering the routes received in order from the multiple route processing threads 26.)

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Gao per Wang to include a plurality of worker threads on the particular host machine. This would have been advantageous, as discussed above, because it would allow the modified system to provide load balancing and scalability for a virtual environment implemented by a host machine by offering multiple choices of worker threads for routing.

Gao teaches forwarding received packets ([0018]). However, Gao (as modified by Wang) is silent on receiving a packet from a host outside of the packet processing system. Xie teaches receiving a packet from a host outside of the system. ([0054] In a possible implementation, in a multi-stage network, the first device may be an edge device, or may be referred to as a first-stage leaf node or an access tier device. The first device receives the first packet, for example, a multicast packet, outside a BIER domain, and the first device encapsulates the received multicast packet with a BIER header. The BIER header encapsulated by the first device for the multicast packet includes a bit string, and the bit string is used to identify an egress device to which the first packet is sent in the BIER domain. [0055] In a possible implementation, the first device receives a second packet from a fourth device, and generates the first packet by encapsulating the second packet with the BIER header. For example, the edge device receives the second packet from the fourth device outside the BIER domain, and the edge device generates the first packet by encapsulating the second packet with the BIER header.)

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Gao (as modified by Wang) per Xie to include a host outside of the system. This would have been advantageous, as discussed above, because it would allow the combined system to provide proper packet processing and load balancing across multiple network architecture types, which provides flexibility for implementation.

Regarding Claims 4 and 13: Gao (as modified by Wang and Xie) teaches the inventions of Claims 1 and 10 as described. Gao teaches wherein the next-hop target is one of a virtual router, a service gateway, a dynamic routing gateway, an Internet gateway, and a network address translation gateway. ([0017] Physical network 102 may include any suitable number of interconnected network devices, such as layer-3 routers, layer-2 switches, gateway devices. Includes Internet Protocol layer.)

Regarding Claims 5 and 14: Gao (as modified by Wang and Xie) teaches the inventions of Claims 1 and 10 as described. Gao teaches wherein one or more virtual IP addresses (i.e., IP addresses of the VNICs) are allocated for the VNIC and each of the one or more virtual IP addresses is associated with a forwarding rule (i.e., Geneve tunnel 203). ([0039] To obtain the IP address of VTEP 117B associated with host-B 110B, where VNIC-Y 153 and VNIC-B 154 reside, virtual switch 116A may broadcast a message to all hosts in virtualized computing environment 100 to request the IP address of the VTEP associated with the host that supports VNIC-Y 153 and VNIC-B 154. In some embodiments, the Address Resolution Protocol (ARP) is utilized. When host-B 110B supporting VNIC-Y 153 and VNIC-B 154 receives the broadcasted request, host-B 110B may send an IP address associated with host-B 110B (e.g., the IP address of VTEP 117B) to virtual switch 116A at host-A 110A. With the IP address of VTEP 117B, VTEP 117A and VTEP 117B may establish the Geneve tunnel 203 between them.)

Regarding Claims 6 and 15: Gao (as modified by Wang and Xie) teaches the inventions of Claims 1 and 10 as described. Gao teaches wherein the VNIC from the plurality of VNICs to be used for forwarding the packet is determined using source information of the packet. ([0052] At 550 in FIG. 5, virtual switch 116A is configured to transmit the port mirroring packets from the source host (e.g., host-A 110A) to one or more destination hosts (e.g., host-B 110B and/or host-D 110D) in a port mirroring session. In some embodiments, the port mirroring packets may be transmitted along a tunnel between a source VTEP (e.g., VTEP 117A) on the source host and one or more destination VTEPs (e.g., VTEP 117B and/or VTEP 117D) on the destination hosts.)

Regarding Claims 7 and 16: Gao (as modified by Wang and Xie) teaches the inventions of Claims 1 and 10 as described. Gao teaches wherein the next-hop target to which the packet is to be forwarded is determined using metadata associated with the VNIC, wherein the metadata includes information identifying one or more forwarding rules of the VNIC or security and firewall rules of the VNIC. ([0018] Each virtual switch 116A/116B/116C may generally correspond to a logical collection of virtual ports that are each logically associated with a VNIC. For example, in FIG. 1, at host-A 110A, virtual switch 116A is a logical collection of virtual ports VP-A 161 and VP-X 162, which are associated with VNIC-A 151 and VNIC-X 152, respectively. Each virtual switch 116A/116B/116C maintains forwarding information to forward packets to and from the corresponding VNICs.)
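As an illustrative aside (not part of the record), the forwarding behavior Gao's [0018] and [0046] describe — a virtual switch that maintains MAC-to-VNIC forwarding information and, when a single-copy flag (cf. the "1" in option 326) is set, delivers exactly one copy to one VNIC — can be sketched as follows. The class and method names are hypothetical and are not drawn from Gao.

```python
from typing import Dict, List


class VirtualSwitch:
    """Minimal sketch of a MAC-to-VNIC forwarding table.

    Mirrors the idea in Gao [0018] that each virtual port is logically
    associated with a VNIC and the switch forwards by destination MAC.
    """

    def __init__(self) -> None:
        self.mac_to_vnic: Dict[str, str] = {}

    def attach(self, mac: str, vnic: str) -> None:
        """Associate a VM's MAC address with its VNIC."""
        self.mac_to_vnic[mac] = vnic

    def forward(self, dst_mac: str, single_copy: bool = True) -> List[str]:
        """Return the VNIC(s) that should receive the packet.

        With single_copy set (loosely analogous to option 326 == "1"),
        exactly one VNIC matching the destination MAC is chosen.
        """
        if single_copy:
            vnic = self.mac_to_vnic.get(dst_mac)
            return [vnic] if vnic else []
        # Otherwise flood to every attached VNIC.
        return list(self.mac_to_vnic.values())


vswitch = VirtualSwitch()
vswitch.attach("00:50:56:00:00:1b", "VNIC-B")
vswitch.attach("00:50:56:00:00:1c", "VNIC-Y")
assert vswitch.forward("00:50:56:00:00:1b") == ["VNIC-B"]
assert vswitch.forward("ff:ff:ff:ff:ff:ff") == []
```

The single-copy path delivers to exactly one VNIC keyed by destination MAC; the flood path is included only to contrast the two behaviors.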
Claims 2-3, 11-12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2018/0349163 A1 (Gao) in view of US 2019/0179668 A1 (Wang), further in view of US 2021/0119921 A1 (Xie), and further in view of US 2020/0314015 A1 (Mariappan).

Regarding Claims 2, 11, and 20: Gao (as modified by Wang and Xie) teaches the inventions of Claims 1, 10, and 19 as described. Gao teaches routing devices ([0017]). However, Gao (as modified by Wang and Xie) is silent on wherein the packet originating from the first compute instance is received by a top-of-rack (TOR) switch included in the packet processing system.

Mariappan teaches, in the same field of endeavor, techniques for specifying a backend virtual network for a service load balancer (Abstract). Mariappan also teaches wherein the packet originating from the first compute instance is received by a top-of-rack (TOR) switch included in the packet processing system. ([0036] Data center 10 includes storage and/or compute servers interconnected via switch fabric 14 provided by one or more tiers of physical network switches and routers, with servers 12A-12X (herein, "servers 12") depicted as coupled to top-of-rack switches 16A-16N.)

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Gao (as modified by Wang and Xie) per Mariappan to include wherein the packet originating from the first compute instance is received by a top-of-rack (TOR) switch included in the packet processing system. This would have been advantageous, as discussed above, because it would allow the combined system to provide proper packet processing and load balancing utilizing multiple network device types, which provides flexibility for implementation.

Regarding Claims 3 and 12: Gao (as modified by Wang and Xie) teaches the inventions of Claims 1 and 10 as described. Gao teaches multiple host machines and routing packets between them ([0017], [0018]).
However, Gao (as modified by Wang & Xie) is silent on wherein the particular host machine from the set of one or more host machines is selected by a top-of-rack (TOR) switch included in the packet processing system based on an equal cost multipath algorithm. Mariappan teaches wherein the particular host machine from the set of one or more host machines is selected by a top-of-rack (TOR) switch included in the packet processing system based on an equal cost multipath algorithm. ([0087] Network controller 24 configures the load balancer 31 to use virtual network interface 26N that corresponds to the specified backend virtual network, e.g., as a next hop for service traffic 32. (The next hop may be an equal-cost multipath next hop or other composite next hop.)) It would have been obvious to an ordinary person having skill in the art before the effective filing date of the invention, to modify Gao (as modified by Wang & Xie) by modifying Gao per Mariappan to include wherein the particular host machine from the set of one or more host machines is selected by a top-of-rack (TOR) switch included in the packet processing system based on an equal cost multipath algorithm. This would have been advantageous as discussed above, as it would allow the combined system to provide targeted load balancing by utilizing multiple algorithms to provide optimization of routing in a multiple device type and network structure environment. Claim(s) 8-9, 17-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2018/0349163 A1 (Gao) in view of US 2019/0179668 A1 (Wang) and further in view of US 2021/0119921 A1 (Xie) more in view of US 2019/0265996 A1 (Shevade). Regarding Claims 8, 17: Gao (as modified by Wang & Xie) teaches the inventions of Claims 1, 10 as described. Gao teaches wherein the packet processing system receives the packet from a network virtualization device (NVD) (ie. 
virtual switch 116A) associated with the first host machine ([0039]: "To obtain the IP address of VTEP 117B associated with host-B 110B, where VNIC-Y 153 and VNIC-B 154 reside, virtual switch 116A may broadcast a message to all hosts in virtualized computing environment 100 to request for the IP address of the VTEP associated with the host that supports VNIC-Y 153 and VNIC-B 154. In some embodiments, the Address Resolution Protocol (ARP) is utilized. When host-B 110B supporting VNIC-Y 153 and VNIC-B 154 receives the broadcasted request, host-B 110B may send an IP address associated with host-B 110B (e.g., IP address of VTEP 117B) to virtual switch 116A at host-A 110A. With the IP address of VTEP 117B, VTEP 117A and VTEP 117B may establish the Geneve tunnel 203 between them."). Gao teaches multiple VNICs and microprocessors ([0016], [0060]).

However, Gao (as modified by Wang & Xie) is silent on wherein the packet processing system receives the packet from a network virtualization device (NVD) associated with the first host machine, the NVD including a micro-VNIC that is configured to route the packet to the packet processing system.

Shevade teaches, in the same field of endeavor, various embodiments of methods and apparatus for enhancing the scalability and availability of a virtualized computing service (VCS) using a control plane (Abstract). Shevade also teaches wherein the packet processing system receives the packet from a network virtualization device (NVD) associated with the first host machine, the NVD including a micro-VNIC that is configured to route the packet to the packet processing system ([0035]: "The data plane 150 may comprise several types of virtualization hosts 155 in the depicted embodiment, individual ones of which may be used to host one or more VMs requested by VCS clients 180."; [0056]: "For some micro-VMs (such as 540A), respective virtual network interfaces may be set up at the VCS, e.g., with the help of the OVMC 570.").
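The VTEP-resolution step Gao describes (consult a mapping, fall back to an ARP-style broadcast, then tunnel the packet) can be sketched as a lookup followed by encapsulation. The class, field names, and addresses below are illustrative assumptions, not Gao's implementation:

```python
class VtepResolver:
    """Maps a destination VNIC to the VTEP (host) IP that serves it.

    Loosely mirrors the ARP-style resolution described above: consult a
    cache first, otherwise "broadcast" a query to all known hosts.
    """

    def __init__(self, host_directory):
        # host_directory: {host_ip: set of VNIC names hosted there}
        self.host_directory = host_directory
        self.cache = {}

    def resolve(self, vnic):
        if vnic in self.cache:
            return self.cache[vnic]
        # Stand-in for the broadcast/ARP exchange across hosts.
        for host_ip, vnics in self.host_directory.items():
            if vnic in vnics:
                self.cache[vnic] = host_ip
                return host_ip
        raise LookupError(f"no VTEP advertises {vnic}")

def encapsulate(inner_packet, src_vtep_ip, dst_vtep_ip, vni):
    """Wrap the inner packet in a simplified Geneve-like outer header."""
    return {"outer_src": src_vtep_ip, "outer_dst": dst_vtep_ip,
            "vni": vni, "payload": inner_packet}

resolver = VtepResolver({"192.0.2.11": {"vnic-y", "vnic-b"},
                         "192.0.2.10": {"vnic-a"}})
dst = resolver.resolve("vnic-b")  # host-B's VTEP IP
frame = encapsulate(b"inner", "192.0.2.10", dst, vni=5001)
```

Once the destination VTEP IP is cached, subsequent packets to the same VNIC skip the broadcast step, which is the point of the resolution exchange.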
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify Gao (as modified by Wang & Xie) per Shevade to include wherein the packet processing system receives the packet from a network virtualization device (NVD) associated with the first host machine, the NVD including a micro-VNIC that is configured to route the packet to the packet processing system. This would have been advantageous, as discussed above, because it would allow the combined system to provide a shorter response time for fulfilling certain types of VM configuration requests (see Shevade [0037]).

Regarding Claims 9, 18: Gao (as modified by Wang, Xie & Shevade) teaches the inventions of Claims 8 and 17 as described. Gao teaches the relevant forwarding behavior ([0018]: "Each virtual switch 116A/116B/116C maintains forwarding information to forward packets to and from the corresponding VNICs."; [0039]-[0040]: "With the IP address of VTEP 117B, VTEP 117A and VTEP 117B may establish the Geneve tunnel 203 between them. VTEP 117A then sends port mirroring packets 240 to host-B 110B via tunnel 203. At host-B 110B, VTEP 117B decapsulates and retrieves information from packets 240."). Gao teaches multiple VNICs and microprocessors ([0016], [0060]).

However, Gao (as modified by Wang & Xie) is silent on wherein the micro-VNIC directs the packet to a virtual internet protocol address of the VNIC.

Shevade teaches wherein the micro-VNIC directs the packet to a virtual internet protocol address of the VNIC ([0038]: "A particular cell may, for example, be selected based on a mapping from a networking-related property of the requested VM (e.g., a subnet within which the VM is to be included, and IP address of the VM."; [0056]: "For some micro-VMs (such as 540A), respective virtual network interfaces may be set up at the VCS, e.g., with the help of the OVMC 570.").
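The claimed micro-VNIC behavior (directing a packet to the virtual IP registered for the target VNIC) amounts to a table lookup plus a destination rewrite. A minimal sketch, with hypothetical field names and addresses not drawn from the references:

```python
def micro_vnic_forward(packet, vip_table):
    """Re-address a packet to the virtual IP of its target VNIC.

    vip_table maps a VNIC identifier to its virtual IP (VIP); the
    packet is rewritten so the packet processing system, which owns
    the VIP, receives it. Purely illustrative.
    """
    vip = vip_table[packet["dst_vnic"]]
    # Return a copy with the destination IP rewritten to the VIP.
    return {**packet, "dst_ip": vip}

vips = {"vnic-b": "203.0.113.7"}
out = micro_vnic_forward({"dst_vnic": "vnic-b", "payload": b"x"}, vips)
assert out["dst_ip"] == "203.0.113.7"
```

Keeping the mapping in a table (rather than on the sending host's data path) is what lets the packet processing system move VNICs between host machines without reconfiguring senders.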
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify Gao (as modified by Wang & Xie) per Shevade to include wherein the micro-VNIC directs the packet to a virtual internet protocol address of the VNIC. This would have been advantageous, as discussed above, because it would allow the combined system to provide a shorter response time for fulfilling certain types of VM configuration requests (see Shevade [0037]).

Conclusion & Contact Information

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL J HACKENBERG, whose telephone number is (571) 272-5417. The examiner can normally be reached 9am-5pm M-F.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Glenton B Burgess, can be reached at (571) 272-3949.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RACHEL J HACKENBERG/
Primary Examiner, Art Unit 2454

Prosecution Timeline

Aug 14, 2023
Application Filed
Sep 15, 2023
Response after Non-Final Action
Feb 07, 2025
Non-Final Rejection — §103
Feb 26, 2025
Response Filed
May 23, 2025
Non-Final Rejection — §103
Aug 29, 2025
Applicant Interview (Telephonic)
Aug 29, 2025
Examiner Interview Summary
Sep 04, 2025
Response Filed
Dec 06, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587464
FAULT INJECTION CONFIGURATION EQUIVALENCY TESTING
2y 5m to grant Granted Mar 24, 2026
Patent 12580819
DETERMINING SERVICE GROUP CAPACITY BASED ON AN AGGREGATE RISK METRIC
2y 5m to grant Granted Mar 17, 2026
Patent 12500823
SYSTEM AND METHOD FOR ENTERPRISE-WIDE DATA UTILIZATION TRACKING AND RISK REPORTING
2y 5m to grant Granted Dec 16, 2025
Patent 12495001
CAPACITY AWARE LOAD PACKING FOR LAYER-4 LOAD BALANCER
2y 5m to grant Granted Dec 09, 2025
Patent 12470508
RESTRICTING MESSAGE NOTIFICATIONS AND CONVERSATIONS BASED ON DEVICE TYPE, MESSAGE CATEGORY, AND TIME PERIOD
2y 5m to grant Granted Nov 11, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+26.4%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 300 resolved cases by this examiner. Grant probability derived from career allow rate.
