Prosecution Insights
Last updated: April 19, 2026
Application No. 18/227,334

ADAPTIVE TRAFFIC FORWARDING OVER MULTIPLE CONNECTIVITY SERVICES

Status: Final Rejection (§103)
Filed: Jul 28, 2023
Examiner: KIM, ANDREW CHANUL
Art Unit: 2471
Tech Center: 2400 — Computer Networks
Assignee: VMware, Inc.
OA Round: 2 (Final)

Grant Probability: 32% (At Risk)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 12%

Examiner Intelligence

Grants only 32% of cases.

Career Allow Rate: 32% (8 granted / 25 resolved; -26.0% vs TC avg)
Interview Lift: -20.2% on resolved cases with interview (minimal)
Avg Prosecution: 3y 1m (typical timeline)
Total Applications: 92 across all art units (67 currently pending)

Statute-Specific Performance

§101: 0.6% (-39.4% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§103: 64.9% (+24.9% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 25 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This is in response to an amendment/response filed 1/16/2026. No claims have been cancelled. No claims have been added. Claims 1-21 are now pending.

Applicant's amendments to the Drawings have overcome each and every objection previously set forth in the Non-Final Office Action mailed 9/16/2025. Applicant's amendments to the claims have overcome each and every 35 U.S.C. 112(b) rejection previously set forth in the Non-Final Office Action mailed 9/16/2025.

Response to Arguments

Applicant's arguments filed 1/16/2026 have been fully considered but they are not persuasive.

On pages 15-19 of the remarks, in regard to the independent claim, the Applicant disagrees with the rejection under 35 U.S.C. 103 as being unpatentable over Ead et al. US 20240129242 (hereinafter “Ead”) in view of Alicherry et al. US 20160048407 (hereinafter “Alicherry”) and in further view of Levy et al. US 20170244630 (hereinafter “Levy”). Specifically, the Applicant remarks:

Alicherry does not teach "determination that a condition for scaling up a bandwidth of the first connectivity service." A skilled artisan would understand that merely adding or removing VNAs or moving traffic from one VNA to another addresses a particular problem of load-balancing traffic through a particular VNA and is not concerned with connectivity services.

The Examiner respectfully disagrees.
Regarding (1), Alicherry mentions in [0034] that "Scaling up of the VNAs 106 may be understood as the process of increasing the number of VNAs 106 present in the cloud 104 in order to either handle an increase or a potential increase in the traffic, i.e., the number of flows or data packets handled by the cloud 104 or reduce load on an existing overloaded VNA 106"; in other words, the number of VNAs is increased to handle an increase in traffic, which also means the overall bandwidth of the system is increased.

The Applicant argues that this is not concerned with "connectivity services". A virtual network appliance provides network services, including routing, traffic forwarding, and/or secure communication between network entities or computer systems, which are all directly related to "connectivity services"; therefore, the Applicant's argument is not persuasive.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
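Returning to the disputed limitation addressed in the Response to Arguments above: the claimed trigger, detecting from monitored metrics that a condition for scaling up a bandwidth is satisfied and then selecting a subset of flows to move, can be pictured with a minimal sketch. This is one possible reading of the claim language, not code from the application or the cited references; the threshold values, field names, and helper functions are all hypothetical.

```python
# Hypothetical sketch of the claimed scale-up trigger and flow selection.
# Thresholds and data shapes are illustrative only.

def needs_scale_up(observed_mbps: float, capacity_mbps: float,
                   threshold: float = 0.9) -> bool:
    """Condition for scaling up: utilization above a threshold."""
    return observed_mbps / capacity_mbps > threshold

def select_flows_to_move(flows: list, required_mbps: float) -> list:
    """Select a subset of flows (largest first) whose combined rate
    covers the bandwidth to be offloaded to a second service."""
    moved, freed = [], 0.0
    for flow in sorted(flows, key=lambda f: f["mbps"], reverse=True):
        if freed >= required_mbps:
            break
        moved.append(flow)
        freed += flow["mbps"]
    return moved
```

Under this reading, Alicherry's load-threshold comparison ([0034]) would supply the `needs_scale_up` test, while the claim's "subset that includes at least a first flow" corresponds to the list returned by `select_flows_to_move`.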
Claim(s) 1, 8, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Ead et al. US 20240129242 (hereinafter “Ead”) in view of Alicherry et al. US 20160048407 (hereinafter “Alicherry”) and in further view of Levy et al. US 20170244630 (hereinafter “Levy”).

As to claims 1, 8, and 15 (claim 1 is the method claim for the non-transitory computer-readable medium and computer system in claims 8 and 15, respectively):

Ead discloses: A method for a first computer system to perform adaptive traffic forwarding, (“forwarding by the network load balancer, traffic associated with the service and received from the second virtual network to the packet processor; and processing by the packet processor, the traffic received from the network load balancer to generate processed traffic, the processed traffic being forwarded by the packet processor to the service endpoint corresponding to the service in the first cloud environment.”, Ead [0014]) (“Route tables, security rules, and DHCP options may be configured for a VCN. Route tables are virtual route tables for the VCN and include rules to route traffic from subnets within the VCN to destinations outside the VCN by way of gateways or specially configured instances. A VCN's route tables can be customized to control how packets are forwarded/routed to and from the VCN. DHCP options refers to configuration information that is automatically provided to the instances when they boot up.”, Ead [0071])

wherein the method comprises: monitoring metric information associated with at least a first connectivity service from multiple connectivity services that are connecting (“The monitoring module 724 enables monitoring of a complete stack (e.g., applications, infrastructure, network, and services) and use alarms, logs, and events data to perform automated actions. The monitoring module 724 may also utilize dashboards to visually depict (e.g., in GUI) one or more metrics data obtained from the first cloud environment.”, Ead [0169]) (“As shown in FIG. 7, the observability module (included in the pool of adaptors 712C) is configured to mirror or forward (e.g., publishes) logs, metrics, and other performance parameters related to the resources deployed in the customer tenancy in the first cloud environment to a dashboard e.g., included in the monitoring module 724 included in the second cloud environment, for further processing.”, Ead [0181]) (“For example, the observability adaptor/module (included in the pool of adaptors 712C) may transmit metrics associated with resources deployed in the first cloud environment to the monitoring module of the second cloud environment. As another example of the provisioning process, the network adaptor (included in the pool of adaptors 712C) may attach the transit gateway to the customer's VPC in the second cloud environment i.e., form the peering in the second cloud environment.”, Ead [0187])

(a) the first computer system located in a first cloud environment at a first geographical site and (b) a second computer system located in a second cloud environment at a second geographical site; (FIG. 7 shows first and second cloud environments provided by different cloud providers, Ead) (“The resources in CSPI may be spread across one or more data centers that may be geographically spread across one or more geographical regions.”, Ead [0046])

and updating routing information to associate the subset with a second connectivity service from the multiple connectivity services; (“The configuration information for a VCN may include, for example, information about the address range associated with the VCN, subnets within the VCN and associated information, one or more VRs associated with the VCN, compute instances in the VCN and associated VNICs, NVDs executing the various virtualization network functions (e.g., VNICs, VRs, gateways) associated with the VCN, state information for the VCN, and other VCN-related information. In certain embodiments, a VCN Distribution Service publishes the configuration information stored by the VCN Control Plane, or portions thereof, to the NVDs. The distributed information may be used to update information (e.g., forwarding tables, routing tables, etc.) stored and used by the NVDs to forward packets to and from the compute instances in the VCN.”, Ead [0073])

Ead as described above does not explicitly teach: in response to determination that a condition for scaling up a bandwidth of the first connectivity service is satisfied based on the metric information, selecting, from a set of multiple flows associated with the first connectivity service, a subset that includes at least a first flow between a first endpoint located in the first cloud environment and a second endpoint located in the second cloud environment; and in response to detecting egress packets associated with the first flow from the first endpoint, forwarding the egress packets towards the second computer system using the second connectivity service based on the updated routing information to cause the second computer system to forward the egress packets towards the second endpoint.

However, Alicherry further teaches selecting a flow based on the determination that a condition for scaling up is satisfied, which includes: in response to determination that a condition for scaling up a bandwidth of the first connectivity service is satisfied based on the metric information, selecting, from a set of multiple flows associated with the first connectivity service, a subset that includes at least a first flow between a first endpoint located in the first (“a method for managing virtual network appliances (VNAs) is described. The method for managing the VNAs comprises ascertaining total load handled by a plurality of VNAs operating in a cloud computing network. Further, the total load is compared with a minimum threshold level and a maximum threshold level. The method further comprises determining whether to perform at least one of a scaling up or scaling down of the plurality of VNAs based on the comparing. Further, at least one VNA is identified from among the plurality of VNAs for flow migration based on the determination.”, Alicherry [0006]) (“The method comprises receiving performance data for a VNA. The method further comprises analyzing the performance data to determine whether the VNA has a weak performance status, where the weak performance status corresponds to any one of an overloaded, an under-loaded, and a failed status. Further, a flow migration request is provided to a classifier for migrating one or more flows of data packets from the VNA based on the analyzing.”, Alicherry [0007]) (“Further, in one embodiment, the flow distribution system 110 may facilitate scaling up and scaling down of the VNAs 106 in the cloud 104. Scaling up of the VNAs 106 may be understood as the process of increasing the number of VNAs 106 present in the cloud 104 in order to either handle an increase or a potential increase in the traffic, i.e., the number of flows or data packets handled by the cloud 104 or reduce load on an existing overloaded VNA 106. Scaling down of the VNAs 106 may be understood as the process of the reducing the number of VNAs 106 present in the cloud 104 in order to reduce the resources utilized by the cloud in case the load currently handled by the cloud 104 can be still be handled by the VNAs 106 remaining after removal of one VNA 106. Thus, scaling up or scaling down the VNAs 106 may facilitate the network appliances managing architecture 108 in efficiently managing the resource utilization of the cloud 104.”, Alicherry [0034])

Ead and Alicherry are analogous because they pertain to managing cloud computing networks. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include selecting a flow based on the determination that a condition for scaling up is satisfied as described in Alicherry into Ead.
By modifying the method to include selecting a flow based on the determination that a condition for scaling up is satisfied as taught by Alicherry, the benefits of an improved traffic forwarding scheme (Alicherry [0007] and Ead [0071]) are achieved.

The combination of Alicherry and Ead as described above does not explicitly teach: and in response to detecting egress packets associated with the first flow from the first endpoint, forwarding the egress packets towards the second computer system using the second connectivity service based on the updated routing information to cause the second computer system to forward the egress packets towards the second endpoint.

However, Levy further teaches routing packets to another endpoint based on updated routing information after detecting packets from the first flow, which includes: and in response to detecting egress packets associated with the first flow from the first endpoint, forwarding the egress packets towards the second computer system using the second connectivity service based on the updated routing information to cause the second computer system to forward the egress packets towards the second endpoint. (“FIG. 1 is a block diagram that schematically illustrates a communication network 20, in accordance with an embodiment of the present invention. Network 20 comprises multiple network switches 24 that are interconnected by network links 28. Network 20 provides connectivity and communication services to multiple endpoints 32.”, Levy [0026]) (“The packet processing circuitry is configured to detect a compromised ability to forward via the ports a flow of packets originating from a source endpoint to a destination endpoint, to identify, in response to detecting the compromised ability, based on the topology, a second network switch that lies on a current route of the flow, and also lies on one or more alternative routes from the source endpoint to the destination endpoint that do not traverse the network switch, and to send via one of the ports a notification, which is addressed individually to the second network switch and requests the second network switch to reroute the flow.”, Levy [0009]) (“Thus, when a need arises to reroute a flow, the congested switch queries its database with the source address of the flow, and retrieves the identity (e.g., the address) of the rerouting switch. The congested switch then generates a notification packet, referred to as “adaptive routing notification (ARN),” “congestion notification” or simply “notification.” The ARN comprises a unicast packet that is addressed individually to the specific rerouting switch selected by the congested switch.”, Levy [0048]) (“The congested switch sends the ARN to the rerouting switch. The rerouting switch receives the ARN, and in response may reroute the flow to an alternative route that reaches the destination endpoint but does not traverse the congested switch. Note that, since the ARN is addressed explicitly to the rerouting switch, it can be forwarded to the rerouting switch over any desired route, not necessarily over the reverse direction of the route of the flow.”, Levy [0049])

Ead, Levy, and Alicherry are analogous because they pertain to packet management in communication networks. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include routing packets to another endpoint based on updated routing information after detecting packets from the first flow as described in Levy into Ead as modified by Alicherry. By modifying the method to include routing packets to another endpoint based on updated routing information after detecting packets from the first flow as taught by Levy, the benefits of an improved traffic forwarding scheme (Alicherry [0007], Levy [0009], and Ead [0071]) are achieved.

Claim(s) 2-4, 9-11, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Ead in view of Alicherry and Levy, as applied to claim 1, and further in view of Patil et al. US 20240205089 (hereinafter “Patil”).

As to claims 2, 9, and 16 (claim 2 is the method claim for the non-transitory computer-readable medium and computer system in claims 9 and 16, respectively):

The combination of Ead, Alicherry, and Levy, as described above, does not explicitly teach: The method of claim 1, wherein updating routing information comprises: installing an adaptive static route that associates (a) destination information associated with the second endpoint of the first flow with (b) a next hop associated with the second connectivity service at the first computer system.

However, Patil further teaches updating routing information, which includes: The method of claim 1, wherein updating routing information comprises: installing an adaptive static route that associates (a) destination information associated with the second endpoint of the first flow with (b) a next hop associated with the second connectivity service at the first computer system. (“FIGS. 3A-3B depict an example illustrating how network routing information 122 may be updated based on update 144. In this example, network routing information 122 includes a route table 300 comprising entries that each describe a network address of a corresponding computing device in distributed computing system 100, a virtual machine identifier of a virtual machine attached to the computing device, and a network address of a next hop associated a network route to the computing device and/or mobility client therein.”, Patil [0027]) (“FIG. 3B depicts route table 300 in an updated state in which the route table is updated based on update 144. Here, entry 302 is updated to reflect the lack of attachment of virtual machine 112B to computing device 102A—virtual machine identifier 126B is deleted from the entry to remove the association of the virtual machine with the computing device. Further, an entry 304 is updated to reflect the attachment of virtual machine 112B to computing device 102B—virtual machine identifier 126B is inserted into the entry along with its private VM network address and is associated with the network address corresponding to the computing device.”, Patil [0029])

Ead, Levy, Patil, and Alicherry are analogous because they pertain to packet management in communication networks. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include updating routing information after detecting packets from the first flow as described in Patil into Ead as modified by Alicherry and Levy. By modifying the method to include updating routing information as taught by Patil, the benefits of an improved traffic forwarding scheme (Alicherry [0007], Levy [0009], Patil [0027], and Ead [0071]) are achieved.
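Claim 2's "adaptive static route", a mapping from the second endpoint's destination information to a next hop on the second connectivity service, can be pictured with a small sketch. The class, prefixes, and next-hop names below are hypothetical illustrations of the claim language, not code from the application or Patil.

```python
# Illustrative route table for the "adaptive static route" of claim 2.
# Destination prefixes and next-hop names are hypothetical; the lookup
# is an exact-prefix match, not a real longest-prefix-match FIB.

class RouteTable:
    def __init__(self, default_next_hop: str):
        self.default_next_hop = default_next_hop  # first connectivity service
        self.adaptive_routes = {}                 # dest prefix -> next hop

    def install(self, dest_prefix: str, next_hop: str) -> None:
        """Install an adaptive static route toward the second service."""
        self.adaptive_routes[dest_prefix] = next_hop

    def uninstall(self, dest_prefix: str) -> None:
        """Remove the adaptive route; traffic reverts to the first service."""
        self.adaptive_routes.pop(dest_prefix, None)

    def next_hop_for(self, dest_prefix: str) -> str:
        return self.adaptive_routes.get(dest_prefix, self.default_next_hop)
```

After `install("10.2.0.0/16", "svc2-gw")`, egress packets for the first flow would resolve to the second service's next hop, while all other destinations keep the default next hop of the first service.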
As to claims 3, 10, and 17 (claim 3 is the method claim for the non-transitory computer-readable medium and computer system in claims 10 and 17, respectively):

The combination of Ead, Alicherry, and Levy, as described above, does not explicitly teach: The method of claim 2, wherein the method further comprises: sending, at multiple time intervals, a route advertisement towards the second computer system to cause the second computer system to update routing information to associate (a) destination information associated with the first endpoint with (b) a next hop associated with the second connectivity service at the second computer system.

However, Patil further teaches sending multiple messages to cause the system to update routing information, which includes: The method of claim 2, wherein the method further comprises: sending, at multiple time intervals, a route advertisement towards the second computer system to cause the second computer system to update routing information to associate (a) destination information associated with the first endpoint with (b) a next hop associated with the second connectivity service at the second computer system. (“To address these challenges, hypervisors 106 at computing devices 102 implement mobility clients 128 that send messages—to other mobility clients at other hypervisors, as well as to a mobility server 130 implementing a mobility service 132—indicating the migration of virtual machines 112. Such computing device-to-computing device messaging, or hypervisor-to-hypervisor messaging, may indicate the network location of a virtual machine 112 following its migration, enabling a computing device 102 that previously hosted the virtual machine, yet continues to receive network traffic for the virtual machine, to forward the network traffic to the computing device that now hosts the virtual machine post-migration. In this way, delays in updating network routing information 122 maintained by SDN 116 may be tolerated without delaying the delivery of network traffic to the computing device 102 where the migrated virtual machine 112 is located, and without dropping packets that become part of forwarded traffic. From a client perspective, desired virtual machine performance, availability, and service level may be maintained in the presence of virtual machine migration, with delays associated with updating network routing information 122 at SDN 116 being obscured from clients.”, Patil [0020]) (“Returning to FIG. 1, a packet 140 is further depicted that illustrates network routing following the migration of virtual machine 112B to computing device 102B, and after network routing information 122 has been updated at SDN 116 to reflect this migration.”, Patil [0025]) (“FIGS. 3A-3B depict an example illustrating how network routing information 122 may be updated based on update 144. In this example, network routing information 122 includes a route table 300 comprising entries that each describe a network address of a corresponding computing device in distributed computing system 100, a virtual machine identifier of a virtual machine attached to the computing device, and a network address of a next hop associated a network route to the computing device and/or mobility client therein.”, Patil [0027])

Ead, Levy, Patil, and Alicherry are analogous because they pertain to packet management in communication networks. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include sending multiple messages to cause the system to update routing information as described in Patil into Ead as modified by Alicherry and Levy. By modifying the method to include sending multiple messages to cause the system to update routing information as taught by Patil, the benefits of an improved traffic forwarding scheme (Alicherry [0007], Levy [0009], Patil [0027], and Ead [0071]) are achieved.

As to claims 4, 11, and 18 (claim 4 is the method claim for the non-transitory computer-readable medium and computer system in claims 11 and 18, respectively):

Ead as described above does not explicitly teach: The method of claim 2, wherein the method further comprises: in response to determination that a condition for scaling down the bandwidth of the first connectivity service is satisfied based on updated metric information, uninstalling the adaptive static route to cause forwarding of subsequent egress packets associated with the first flow using the first connectivity service.

However, Alicherry further teaches forwarding packets associated with a flow using a certain service after determining that a condition for scaling down is satisfied, which includes: The method of claim 2, wherein the method further comprises: in response to determination that a condition for scaling down the bandwidth of the first connectivity service is satisfied based on updated metric information (“In case at block 304 it is determined that the total load is less than the minimum threshold level, which is the ‘Yes’ path from the block 304, it is determined to perform scaling down of the plurality of VNA at block 310.”, Alicherry [0061]) (“Further, in one embodiment, the flow distribution system 110 may facilitate scaling up and scaling down of the VNAs 106 in the cloud 104. Scaling up of the VNAs 106 may be understood as the process of increasing the number of VNAs 106 present in the cloud 104 in order to either handle an increase or a potential increase in the traffic, i.e., the number of flows or data packets handled by the cloud 104 or reduce load on an existing overloaded VNA 106. Scaling down of the VNAs 106 may be understood as the process of the reducing the number of VNAs 106 present in the cloud 104 in order to reduce the resources utilized by the cloud in case the load currently handled by the cloud 104 can be still be handled by the VNAs 106 remaining after removal of one VNA 106. Thus, scaling up or scaling down the VNAs 106 may facilitate the network appliances managing architecture 108 in efficiently managing the resource utilization of the cloud 104.”, Alicherry [0034]),

uninstalling the adaptive static route to cause forwarding of subsequent egress packets associated with the first flow using the first connectivity service. (“On determining to perform either of scaling up or scaling down at the block 308 and 310, respectively, the method moves at block 312. At the block 312, at least one VNA is identified from among the plurality of VNA for flow migration based on the determination. In one implementation, upon determining to perform scaling up, at least one VNA may be ascertained that is handling load greater than an aggregate load of the plurality of VNA and identified as the at least one VNA for flow migration. In another implementation, upon determining to perform scaling down, at least one VNA may be ascertained that is having the weakest performance status among the plurality of VNA and thus identified as the at least one VNA for flow migration, a new VNA may be launched. Alternatively, as discussed in method 200, the controller may decide to migrate flows from one VNA to another VNA upon determining the VNA to have weak performance status.”, Alicherry [0061])

Ead and Alicherry are analogous because they pertain to managing cloud computing networks.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include forwarding packets associated with a flow using a certain service after determining that a condition for scaling down is satisfied as described in Alicherry into Ead. By modifying the method to include forwarding packets associated with a flow using a certain service after determining that a condition for scaling down is satisfied as taught by Alicherry, the benefits of an improved traffic forwarding scheme (Alicherry [0007] and Ead [0071]) are achieved.

Claim(s) 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ead in view of Alicherry, Levy, and Patil, as applied to claim 3, and further in view of Roberts et al. US 20230177941 (hereinafter “Roberts”).

As to claims 5, 12, and 19 (claim 5 is the method claim for the non-transitory computer-readable medium and computer system in claims 12 and 19, respectively):

The combination of Ead, Alicherry, Patil, and Levy, as described above, does not explicitly teach: The method of claim 3, wherein the method further comprises: in response to determination that a condition for scaling down the bandwidth is satisfied based on updated metric information, stop sending the route advertisement towards the second computer system.

However, Roberts further teaches not sending updates after the system scales down, which includes: The method of claim 3, wherein the method further comprises: in response to determination that a condition for scaling down is satisfied based on updated metric information, stop sending the route advertisement towards the second computer system. (“The update condition has now changed, which causes a scale down of the connection intervals between the activity tracking device and the computing device. This, as noted above, causes the second transfer rate to govern for data exchanged to the computing device for real-time data display. In one embodiment, arrow 1036 indicates a request from the computing device for real time updates. Arrows 1038 indicate data transfers of any data available for transfer, using the second data transfer rate. Arrow 1039 indicates a command that the client device has closed the application 1014, so that the device can stop sending updates.”, Roberts [0131]) (“Further, in one embodiment, the flow distribution system 110 may facilitate scaling up and scaling down of the VNAs 106 in the cloud 104. Scaling up of the VNAs 106 may be understood as the process of increasing the number of VNAs 106 present in the cloud 104 in order to either handle an increase or a potential increase in the traffic, i.e., the number of flows or data packets handled by the cloud 104 or reduce load on an existing overloaded VNA 106. Scaling down of the VNAs 106 may be understood as the process of the reducing the number of VNAs 106 present in the cloud 104 in order to reduce the resources utilized by the cloud in case the load currently handled by the cloud 104 can be still be handled by the VNAs 106 remaining after removal of one VNA 106. Thus, scaling up or scaling down the VNAs 106 may facilitate the network appliances managing architecture 108 in efficiently managing the resource utilization of the cloud 104.”, Alicherry [0034])

Ead, Levy, Patil, Roberts, and Alicherry are analogous because they pertain to packet management in communication networks. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include not sending updates after the system scales down as described in Roberts into Ead as modified by Alicherry, Patil, and Levy. By modifying the method to include not sending updates after the system scales down as taught by Roberts, the benefits of an improved traffic forwarding scheme (Alicherry [0007], Levy [0009], Patil [0027], and Ead [0071]) and an adaptive advertising scheme (Roberts [0131]) are achieved.
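The scale-down path of claims 4 and 5, uninstalling the adaptive static route and ceasing the periodic route advertisements once utilization drops back below a floor, might look like the following sketch. The floor value and state layout are assumptions for illustration only, not the claimed implementation.

```python
# Hypothetical scale-down handling for claims 4-5: when updated metrics
# show low utilization, remove the adaptive static route and stop the
# periodic route advertisements toward the second computer system, so
# subsequent egress packets revert to the first connectivity service.

def needs_scale_down(observed_mbps: float, capacity_mbps: float,
                     floor: float = 0.3) -> bool:
    """Condition for scaling down: utilization below a floor."""
    return observed_mbps / capacity_mbps < floor

def on_metrics_update(state: dict, observed_mbps: float,
                      capacity_mbps: float) -> dict:
    if needs_scale_down(observed_mbps, capacity_mbps):
        state["adaptive_routes"].pop(state["flow_dest"], None)  # uninstall
        state["advertising"] = False            # stop route advertisements
    return state
```

The asymmetry in the claims is worth noting: the route is uninstalled at the first computer system, while stopping the advertisements is what lets the second computer system's routing information age out.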
Claim(s) 6, 7, 13, 14, 20, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Ead in view of Alicherry and Levy, as applied to claim 1, and further in view of Shanks et al. US 20170180155 (hereinafter “Shanks”) As to claim 6, 13, and 20 (claim 6 is the method claim for the non-transitory computer-readable medium and computer system in claim 13 and 20 respectively): The combination of Ead, Alicherry, and Levy, as described above, does not explicitly teach: The method of claim 1, wherein selecting the subset comprises: selecting the first flow based on a policy specifying an application segment or traffic type associated with the first flow is moveable from the first connectivity service to the second connectivity service. However, Shanks further teaches selecting a flow based on a policy that specifies service compatibility which includes: The method of claim 1, wherein selecting the subset comprises: selecting the first flow based on a policy specifying an application segment or traffic type associated with the first flow is moveable from the first connectivity service to the second connectivity service. (“Policy-based rules are often defined for routing packets in a particular way, throughout a private network. For example, a network administrator determines that all web-surfing traffic will be sent from Device A 151 to Device B 154 through either Tunnel A 210 or Tunnel C 214. In some implementations, one or more policy-based rules are associated with a particular type of traffic. For example, a particular policy-based rule applies to all video conferencing traffic.”, Shanks [0048]) (“Briefly, the method 600 includes selecting a data tunnel from a plurality of data tunnels between a first network device and a second network device, for forwarding one or more packets from the first network device to the second network device. 
In some implementations, this selection process includes making determinations of which data tunnels of the plurality of data tunnels match or satisfy one or more performance-based rules and/or one or more policy-based rules. In some implementations, method 600 occurs after a particular network traffic type has been determined for the one or more packets being forwarded from the first network device to the second network device.”, Shanks [0068])

Ead, Levy, Shanks, and Alicherry are analogous because they pertain to packet management in communication networks. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include selecting a flow based on a policy that specifies service compatibility, as described in Shanks, into Ead as modified by Alicherry and Levy. By modifying the method to include selecting a flow based on a policy that specifies service compatibility, as taught by Shanks, the benefits of an improved traffic forwarding scheme (Alicherry [0007], Levy [0009], Shanks [0068], and Ead [0071]) are achieved.

As to claims 7, 14, and 21 (claim 7 is the method claim corresponding to the non-transitory computer-readable medium and the computer system of claims 14 and 21, respectively): The combination of Ead, Alicherry, and Levy, as described above, does not explicitly teach: The method of claim 1, wherein selecting the subset comprises: selecting the first flow based on an amount of available bandwidth associated with the second connectivity service and an amount of bandwidth required by the first flow.

However, Shanks further teaches selecting a flow based on the bandwidth requirement and availability of the service and flow, which includes: The method of claim 1, wherein selecting the subset comprises: selecting the first flow based on an amount of available bandwidth associated with the second connectivity service and an amount of bandwidth required by the first flow.
(“In some embodiments, multi-uplink network devices selectively route network traffic over data tunnels, to isolate sensitive data and ensure that there is adequate bandwidth for high priority activities such as transferring payment information.”, Shanks [0025])

(“FIG. 5A is an example of a table 500 of data tunnel performance parameter values in accordance with some implementations. Table 500 includes, in this example, information stored in columns such as tunnel 502, parameter 504 and value 506. As an example, performance parameter values table 500 corresponds to performance parameter values 180 of Device A 151, as described earlier with respect to FIG. 1, FIGS. 2A and 2B and FIG. 3.”, Shanks [0063])

(“In the example of FIG. 5A, tunnel column 502 includes a unique identifier for a respective data tunnel. In some implementations, every data tunnel of a respective private network has a unique identifier. In some implementations, every data tunnel of a respective pair of network devices has a unique identifier. In some implementations, additional information found in table 500 includes uplink identifiers associated with a respective data tunnel, throughput, bandwidth, a number of packet-forwarding rules associated with the respective data tunnel and/or a time stamp or duration of time since the last update of the performance parameter values for the respective data tunnel. In some implementations, table 500 also includes a device identifier corresponding to a respective data tunnel.”, Shanks [0064])

(“FIG. 5B is an example of a table 550 of packet-forwarding rules in accordance with some implementations. Packet-forwarding rule table 550 includes traffic type column 552, performance-based rule column 554 and policy-based rule column 556.
Table 550 illustrates examples of performance-based rules as well as policy-based rules for various types of identified network traffic types.”, Shanks [0066])

(“Briefly, the method 800 includes selecting a data tunnel from a plurality of data tunnels between a first network device and a second network device, for forwarding one or more packets from the first network device to the second network device. In some implementations, this selection process includes making determinations of which data tunnels of the plurality of data tunnels match or satisfy one or more performance-based rules and/or one or more policy-based rules. In some implementations, method 800 includes determining one or more performance-based characteristics of one or more data tunnels, used in the selection process.”, Shanks [0077])

Ead, Levy, Shanks, and Alicherry are analogous because they pertain to packet management in communication networks. Thus it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include selecting a flow based on the bandwidth requirement and availability of the service and flow, as described in Shanks, into Ead as modified by Alicherry and Levy. By modifying the method to include selecting a flow based on the bandwidth requirement and availability of the service and flow, as taught by Shanks, the benefits of an improved traffic forwarding scheme (Alicherry [0007], Levy [0009], Shanks [0068], and Ead [0071]) are achieved.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW C KIM whose telephone number is (703)756-5607. The examiner can normally be reached M-F 9AM - 5PM (PST).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sujoy K Kundu, can be reached at (571) 272-8586. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.C.K./
Examiner, Art Unit 2471

/SUJOY K KUNDU/
Supervisory Patent Examiner, Art Unit 2471
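The Shanks passages cited above describe one mechanism: a policy-based rule maps a traffic type to the tunnels it may use (Shanks [0048]), and per-tunnel performance parameters such as available bandwidth (Shanks [0064]) gate the final choice. A rough sketch of that two-stage selection follows; the tunnel names, rule table, and bandwidth figures are hypothetical illustrations, not taken from Shanks or the application's claims.

```python
# Hypothetical sketch of Shanks-style tunnel selection: a policy rule limits
# the candidate tunnels for a traffic type, then a performance check confirms
# the tunnel has enough available bandwidth. All data below is illustrative.

POLICY_RULES = {            # traffic type -> tunnels permitted by policy
    "web": ["tunnel_a", "tunnel_c"],
    "video_conf": ["tunnel_b"],
}

AVAILABLE_MBPS = {          # measured performance parameter per tunnel
    "tunnel_a": 20.0,
    "tunnel_b": 5.0,
    "tunnel_c": 80.0,
}

def select_tunnel(traffic_type, required_mbps):
    """Return the first policy-permitted tunnel with enough spare bandwidth."""
    for tunnel in POLICY_RULES.get(traffic_type, []):
        if AVAILABLE_MBPS.get(tunnel, 0.0) >= required_mbps:
            return tunnel
    return None  # no tunnel satisfies both the policy and performance rules
```

Under these illustrative tables, a 50 Mbps web flow skips the undersized tunnel_a and lands on tunnel_c, while a 10 Mbps video-conference flow finds no compliant tunnel at all, mirroring the combined policy-plus-performance determination in Shanks [0068].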

Prosecution Timeline

Jul 28, 2023
Application Filed
Sep 08, 2025
Non-Final Rejection — §103
Jan 16, 2026
Response Filed
Feb 09, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12490157
TIMING CHANGE AND NEW RADIO MOBILITY PROCEDURE
2y 5m to grant Granted Dec 02, 2025
Patent 12464341
DEVICE, PROCESS, AND APPLICATION FOR DETERMINING WIRELESS DEVICE CARRIER COMPATIBILITY
2y 5m to grant Granted Nov 04, 2025
Patent 12439313
INTER-DONOR TOPOLOGY ADAPTATION IN INTEGRATED ACCESS AND BACKHAUL NETWORKS
2y 5m to grant Granted Oct 07, 2025
Patent 12418821
AWARENESS LAYERS FOR MANAGING ACCESS POINTS IN CENTRALIZED WIRELESS NETWORKS
2y 5m to grant Granted Sep 16, 2025
Patent 12414023
METHOD AND NETWORK APPARATUS FOR PROVISIONING MOBILITY MANAGEMENT DURING CONGESTION IN A WIRELESS NETWORK
2y 5m to grant Granted Sep 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
32%
Grant Probability
12%
With Interview (-20.2%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
