Prosecution Insights
Last updated: April 19, 2026
Application No. 18/577,429

MULTI-CHASSIS LINK AGGREGATION ROUTING COMPUTATION METHOD, SWITCH, SYSTEM, AND STORAGE MEDIUM

Final Rejection §103
Filed: Jan 08, 2024
Examiner: GEBRE, MESSERET F
Art Unit: 2445
Tech Center: 2400 — Computer Networks
Assignee: ZTE CORPORATION
OA Round: 2 (Final)

Grant Probability: 55% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 6m
Grant Probability With Interview: 75%

Examiner Intelligence

Career Allow Rate: 55% of resolved cases granted (154 granted / 278 resolved; -2.6% vs TC avg)
Interview Lift: +19.8% for resolved cases with interview (a strong ~+20% lift)
Typical Timeline: 3y 6m average prosecution; 34 applications currently pending
Career History: 312 total applications across all art units

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§103: 64.4% (+24.4% vs TC avg)
§102: 1.8% (-38.2% vs TC avg)
§112: 19.9% (-20.1% vs TC avg)

Tech Center averages are estimates; based on career data from 278 resolved cases.
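The headline figures in the panels above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (the with/without-interview allow-rate split and the Tech Center average are hypothetical placeholders, since only the derived deltas are reported above):

```python
# Reproduce the examiner's headline statistics from the raw counts above.
granted, resolved = 154, 278          # career totals from the panel
allow_rate = granted / resolved       # ~0.554, shown rounded as 55%

tc_avg_allow_rate = 0.58              # hypothetical TC 2400 average implied by "-2.6% vs TC avg"
delta_vs_tc = allow_rate - tc_avg_allow_rate

# Interview lift = allow rate with interview minus allow rate without.
# The 0.70 / 0.502 split is hypothetical; only the +19.8% lift is reported.
rate_with, rate_without = 0.70, 0.502
lift = rate_with - rate_without

print(f"allow rate: {allow_rate:.1%}, vs TC avg: {delta_vs_tc:+.1%}, lift: {lift:+.1%}")
```

Note the panel's 55% is the rounded career rate; the per-statute percentages in the table below are computed the same way over the subset of cases involving each statute.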

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

General Remarks

1. Claims 1-3, 6-9, 11-16, and 18-21 are pending.
2. Claims 1 and 6 are independent.
3. The IDS filings of 10/14/2024 and 01/08/2024 have been considered.

Response to Arguments

Applicant's arguments filed 06/25/2025 have been fully considered but they are not persuasive.

Regarding claim 1, applicant argued that the combination does not explicitly disclose: "A/ … It is our position that Durrani does not explicitly describe the link establishment process nor the basis on which these links are established, be it virtual IP addresses, interface IP addresses, or MAC addresses. The reference to the use of individual IP addresses in Durrani is related to the exchange of routing adjacency information after adjacency relationships are established, not the routing link establishment process itself. Claim 1 of our application specifically recites the performance of routing link establishment 'according to the slave IP addresses'. This indicates a fundamental aspect of our invention that relies on the slave IP addresses for the initial establishment of links between network devices. Durrani, on the other hand, discusses the use of IP addresses in the context of exchanging routing information, which is a subsequent step in network communication after the links have been established."

Examiner respectfully disagrees. Fig. 1A discloses that switch 102 has its own IP address 1.1.1.1 and switch 104 has its own IP address 1.1.1.2 (slave IP addresses), along with devices 106 and 118. It was known to a person having ordinary skill in the art at the time the invention was filed that, before an MCLAG is established, these individual IP addresses are used to exchange routing adjacency information via routing protocols among the devices (118, 102, 104, and 106) to establish adjacency relationships, before LACP is used to negotiate attributes among the devices. For example, OSPF Type 1 messages (or the messages of any link-state routing protocol) exchanged among the devices to establish the network topology and adjacency before the devices start to communicate data correspond to route/link establishment. All nodes exchange adjacency information to build the adjacency network topology using a routing protocol, and that exchange is performed using the individual IP addresses of the devices, which correspond to the slave IP addresses. It is known in the art that routing protocols such as OSPF, IS-IS, or BGP are used to build the network topology before establishing the MCLAG; the IP addresses exchanged in that topology formation correspond to the slave IP addresses. Those addresses are used to establish the network topology and neighbor adjacency among the devices before the MCLAG is established.

Further, applicant argued that the combination does not explicitly disclose: "The key distinction lies in the fact that Durrani requires an additional step of replacing the interface IP addresses with the virtual IP address in the routing entries; this obviously considers the two switches as two separate devices, while Claim 1 directly utilizes the master IP address for routing announcements and computations."

Examiner respectfully disagrees. Durrani discloses an MCLAG system as indicated in fig. 1A.
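The adjacency-formation sequence discussed above can be sketched abstractly. This is an illustrative model only: the device names mirror Durrani's fig. 1A, but the two-phase hello/flood exchange and the data structures are simplified assumptions, not Durrani's implementation:

```python
# Illustrative sketch: link-state-style adjacency formation using each
# device's individual (slave) IP address, before any MCLAG virtual IP exists.
# Names mirror Durrani fig. 1A; the protocol model is deliberately simplified.

devices = {               # node -> individual (interface/slave) IP address
    "CE_118": "1.1.1.10",  # hypothetical CE address (not given in the quote)
    "PE_102": "1.1.1.1",   # MCLAG member, slave IP
    "PE_104": "1.1.1.2",   # MCLAG member, slave IP
    "GW_106": "1.1.1.20",  # hypothetical gateway address
}

links = [("CE_118", "PE_102"), ("CE_118", "PE_104"),
         ("PE_102", "PE_104"), ("PE_102", "GW_106"), ("PE_104", "GW_106")]

# Each node floods a hello/LSA carrying its individual IP; every neighbor
# records an adjacency keyed by that IP -- the "routing link establishment
# according to the slave IP addresses" step argued above.
adjacency = {name: {} for name in devices}
for a, b in links:
    adjacency[a][devices[b]] = b   # neighbor IP -> neighbor name
    adjacency[b][devices[a]] = a

# At this point every node knows the topology via individual IPs only;
# the MCLAG/VRRP virtual IP has not yet been negotiated or advertised.
```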
It is known to a person having ordinary skill in the art that, for communication purposes, the MCLAG switches 102 and 104 are abstracted as one device to upstream and downstream devices using a virtual IP address (master IP address). In an MCLAG system, the MCLAG switches communicate with the upstream and downstream devices using the virtual IP address; the upstream and downstream devices see one abstracted device represented by the virtual IP address. Durrani, in [0009], discloses that PE devices 102 and 104 are configured/configurable to act in concert as a virtual router via VRRP (or variants thereof, such as VRRPe), such that PE devices 102 and 104 share a common virtual IP address (1.1.1.254) and a common virtual MAC address (CCCC.CCCC.CCCC). Durrani in [0102-0103] discloses that a new DRP control packet can advertise the virtual IP address of cluster 122, the node's interface IP address, and optionally the virtual MAC address of cluster 122. [0103]: Upon receiving the control packet, CE device 118 can store the information included in the packet in a local routing database. Then, when CE device 118 is in the process of computing/generating its local L3 routing table, CE device 118 can modify each routing entry that points to the interface IP address included in the control packet (as the next hop address for a destination) such that the entry points instead to cluster 122's virtual IP address. As described in further detail below, this causes CE device 118 to forward data packets originating from, e.g., client 120 to the virtual MAC address of cluster 122, rather than the interface MAC address (slave MAC address) of a particular node in the cluster.
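The routing-table modification Durrani describes in [0103] amounts to a rewrite pass over next-hop entries. A minimal sketch, using the addresses quoted above (the second routing entry is a hypothetical filler to show that unrelated next hops are untouched):

```python
# Sketch of Durrani's DRP modification step ([0103]): after CE 118 learns
# from a DRP control packet that interface IPs 1.1.1.1 / 1.1.1.2 belong to
# cluster 122 (virtual IP 1.1.1.254), it rewrites matching next-hop entries.

VIRTUAL_IP = "1.1.1.254"                        # cluster 122's virtual (master) IP
cluster_interface_ips = {"1.1.1.1", "1.1.1.2"}  # PE 102 / PE 104 slave IPs

# CE 118's L3 routing table: destination -> next hop.
routing_table = {
    "2.2.2.2": "1.1.1.1",   # server 110, currently via PE 102's interface IP
    "3.3.3.3": "1.1.1.9",   # hypothetical unrelated route, must stay untouched
}

for dest, next_hop in routing_table.items():
    if next_hop in cluster_interface_ips:
        routing_table[dest] = VIRTUAL_IP   # point at the cluster, not a member

# Matches L3 routing table 130 of fig. 5: next hop for 2.2.2.2 is 1.1.1.254.
```

The effect is that the cluster members become interchangeable behind one next hop, which is the basis of the single-abstracted-device behavior discussed in the surrounding paragraphs.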
[0131] discloses that the benefit of the DRP modification approach in this scenario is that the next hop address in the routing table of CE device 118 is the virtual IP address of cluster 122. That means the downstream device communicates with the MCLAG switches using the virtual IP address (master IP address). Durrani in [0107] discloses that after CE device 118 has computed its L3 routing table as described above, network environment 100 can route data traffic between client 120 and network core 116 in the manner shown in flow 600 of FIG. 6. At step (1) (reference numeral 602) of flow 600, client 120 can transmit a data packet destined for server 110 (having IP address 2.2.2.2) to CE device 118. At step (2) (reference numeral 604), CE device 118 can select, as the next hop for the packet, the virtual IP address for cluster 122 (i.e., 1.1.1.254), resolve the corresponding virtual MAC address for 1.1.1.254 (i.e., CCCC.CCCC.CCCC) via Address Resolution Protocol (ARP), and set the destination MAC address of the packet to the virtual MAC address. In addition, because links 124 and 126 are part of a LAG, CE device 118 can perform a hash on the headers of the data packet and select either link 124 to PE device 102 or link 126 to PE device 104 for forwarding the packet. In this example, CE device 118 selects link 126 to PE 104, which corresponds to the downstream device seeing one abstracted device of the MCLAG.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6-9, 11-16, and 18-21 are rejected under 35 U.S.C. 103 as being unpatentable over Durrani (US Pub. No. 2014/0204761), further in view of Garcia (US Pub. No. 2019/0182202).

Regarding claim 1. Durrani discloses a multi-chassis link aggregation routing computation method, which is performed by a first switch in a multi-chassis link aggregation switch system (fig. 1B, switch 102 of MCLAG cluster 122), wherein the multi-chassis link aggregation switch system further comprises a second switch ([0105]: FIG. 5 depicts an exemplary DRP link state advertisement (route announcement) flow 500 with respect to network environment 100 according to an embodiment. As shown, as part of this flow, each PE device 102/104 (first and second switches of the MCLAG) of VRRP/MC-LAG cluster 122 can detect that it is part of a virtual router configuration and transmit a new type of DRP control packet (502/504) to CE device 118. Control packet 502/504 can include, e.g., the virtual IP address of cluster 122 (i.e., 1.1.1.254) and the interface IP address of the originating node (1.1.1.1 in the case of control packet 502, 1.1.1.2 in the case of control packet 504). In certain embodiments, control packet 502/504 can also include the virtual MAC address of cluster 122 (i.e., CCCC.CCCC.CCCC). [0106]: Once CE device 118 has received control packets 502 and 504 from PE devices 102 and 104 respectively, CE device 118 can store the information included in the control packets in a local routing database. CE device 118 can then run a Shortest Path First (SPF) algorithm on the routing database to generate routing entries for its Layer 3 routing table); the first switch and the second switch are configured with a master Internet Protocol (IP) address representing the multi-chassis link aggregation switch system (fig. 1A discloses switches 102 and 104 (the first and second switches, respectively, in an MCLAG relationship). The virtual IP address 1.1.1.254 (master IP address) is the same for both switches 102 and 104 because it represents the MCLAG switches abstracted as one device to device 118; after the MCLAG is established, device 118 sees one abstracted switch represented by the virtual IP address); the first switch and the second switch are further configured with their respective slave IP addresses (fig. 1A discloses switch 102 has its own IP 1.1.1.1 and switch 104 has its own IP 1.1.1.2 (slave IPs), which are used to exchange routing information via routing protocols among the devices before the MCLAG is established and LACP is used to negotiate attributes among the devices to establish the MCLAG); and the method comprises: respectively performing routing link establishment between the first switch and the second switch, between the first switch and an upstream gateway device, and between the first switch and a downstream server according to the slave IP addresses (fig. 1A discloses switch 102 has its own IP 1.1.1.1 and switch 104 has its own IP 1.1.1.2 (slave IPs), along with devices 106 and 118, which have their own IP addresses. It was known to a person having ordinary skill in the art at the time the invention was filed that, before the MCLAG is established, these individual IP addresses are used to exchange routing adjacency information via routing protocols among the devices (118, 102, 104, and 106) to establish adjacency relationships, before LACP is used to negotiate attributes among the devices. Those addresses are used to establish the network topology and neighbor adjacency among the devices before the MCLAG is established; fig. 5 and [0105-0106] disclose that before MCLAG information (DRP) is exchanged, device 118 knows and stores in its routing table the slave IPs of switches 102 and 104, respectively, received from the switches during the adjacency discovery information exchange of a well-known routing protocol such as BGP or OSPF. However, after full establishment of the MCLAG and exchange of the abstracted IP address of the MCLAG, device 118 changes the slave addresses to the master address); and performing, according to the master IP address, route announcement between the first switch and the downstream server ([0105]: FIG. 5 depicts an exemplary DRP link state advertisement (route announcement) flow 500 with respect to network environment 100 according to an embodiment. As shown, as part of this flow, each PE device 102/104 (first and second switches) of VRRP/MC-LAG cluster 122 can detect that it is part of a virtual router configuration and transmit a new type of DRP control packet (route announcement packet) to CE device 118 (downstream server). Control packet 502/504 can include, e.g., the virtual IP address of cluster 122 (i.e., 1.1.1.254, corresponding to the master IP address that abstracts the two switches of the MCLAG as one to device 118 or the upstream device) and the interface IP address of the originating node (1.1.1.1 in the case of control packet 502, 1.1.1.2 in the case of control packet 504). In certain embodiments, control packet 502/504 can also include the virtual MAC address of cluster 122 (i.e., CCCC.CCCC.CCCC). [0106]: Once CE device 118 has received control packets 502 and 504 from PE devices 102 and 104 respectively, CE device 118 can store the information included in the control packets in a local routing database. CE device 118 can then run a Shortest Path First (SPF) algorithm on the routing database to generate routing entries for its Layer 3 routing table (route computation).
As part of this processing, CE device 118 can identify routing entries that point to the interface IP address of either PE device 102 or 104 (the slave IP addresses of the switches, which were used to establish links before completing the MCLAG) as the next hop address (the topology after the MCLAG is fully established), and can replace those interface IP addresses with the virtual IP address of cluster 122. This results in a routing table that contains routing entries pointing to the VRRP virtual IP address (the master IP address, the same for both switches, abstracting them as one virtual component to downstream and upstream devices), rather than the interface IP addresses (slave IP addresses) of PE devices 102 and 104. This is illustrated in L3 routing table 130 of FIG. 5, which shows the next hop address for destination 2.2.2.2 as being virtual IP address 1.1.1.254; [0107]: After CE device 118 has computed its L3 routing table as described above, network environment 100 can route data traffic between client 120 and network core 116 in the manner shown in flow 600 of FIG. 6. At step (1) (reference numeral 602) of flow 600, client 120 can transmit a data packet destined for server 110 (having IP address 2.2.2.2) to CE device 118. At step (2) (reference numeral 604), CE device 118 can select, as the next hop for the packet, the virtual IP address for cluster 122 (i.e., 1.1.1.254), resolve the corresponding virtual MAC address for 1.1.1.254 (i.e., CCCC.CCCC.CCCC) via Address Resolution Protocol (ARP), and set the destination MAC address of the packet to the virtual MAC address. In addition, because links 124 and 126 are part of a LAG, CE device 118 can perform a hash on the headers of the data packet and select either link 124 to PE device 102 or link 126 to PE device 104 for forwarding the packet. In this example, CE device 118 selects link 126 to PE 104; fig. 5 discloses DRP (route announcement) being exchanged between device 102 (first switch), device 106 (upstream gateway/northbound device), and device 118 (southbound device/server).

Durrani further inherently discloses performing, according to the master IP address, route announcement between the first switch and the upstream gateway device (fig. 5 discloses DRP (route announcement) being exchanged between switch 102 and device 106; the DRP comprises the virtual IP address (master IP address) (see [0105]). [0105]: FIG. 5 depicts an exemplary DRP link state advertisement (route announcement) flow 500 with respect to network environment 100 according to an embodiment. As shown, as part of this flow, each PE device 102/104 (first and second switches) of VRRP/MC-LAG cluster 122 can detect that it is part of a virtual router configuration and transmit a new type of DRP control packet (route announcement packet) to CE device 118. Control packet 502/504 can include, e.g., the virtual IP address of cluster 122 (i.e., 1.1.1.254, corresponding to the master IP address that abstracts the two switches of the MCLAG as one to device 118 or the upstream device) and the interface IP address of the originating node (1.1.1.1 in the case of control packet 502, 1.1.1.2 in the case of control packet 504). In certain embodiments, control packet 502/504 can also include the virtual MAC address of cluster 122 (i.e., CCCC.CCCC.CCCC). [0106]: Once CE device 118 has received control packets 502 and 504 from PE devices 102 and 104 respectively, CE device 118 can store the information included in the control packets in a local routing database. CE device 118 can then run a Shortest Path First (SPF) algorithm on the routing database to generate routing entries for its Layer 3 routing table (route computation). As part of this processing, CE device 118 can identify routing entries that point to the interface IP address of either PE device 102 or 104 (slave IP addresses of the switches, used to establish links before completing the MCLAG) as the next hop address, and can replace those interface IP addresses with the virtual IP address of cluster 122).

Durrani further discloses wherein after performing, according to the master IP address, route announcement between the first switch and the downstream server ([0102] discloses each node (i.e., PE device 102/104 (first and second switches)) of VRRP/MC-LAG cluster 122 can transmit (as part of the normal DRP link state advertisement process) a new type of DRP control packet. This new DRP control packet can advertise the virtual IP address (master IP address) of cluster 122, the node's interface IP address, and optionally the virtual MAC address of cluster 122; fig. 5 discloses switch 102 sends a DRP message to upstream device 106, which, in light of the instant disclosure, corresponds to route advertisement) and between the first switch and the upstream gateway device ([0102] discloses each node (i.e., PE device 102/104) of VRRP/MC-LAG cluster 122 can transmit (as part of the normal DRP link state advertisement process) a new type of DRP control packet. This new DRP control packet can advertise the virtual IP address (master IP address) of cluster 122, the node's interface IP address, and optionally the virtual MAC address of cluster 122; fig. 5 discloses switch 102 sends a DRP message to downstream device 118, which, in light of the instant disclosure, corresponds to route advertisement), the method further comprises: receiving, by routing nodes in the upstream gateway device and the downstream server, route announcements containing the master IP address and sent by the first switch (fig. 5, 102) and the second switch (fig. 5, 104 of the MCLAG) of the multi-chassis link aggregation switch system ([0102] discloses each node (i.e., PE device 102/104 (first and second switches)) of VRRP/MC-LAG cluster 122 can transmit (as part of the normal DRP link state advertisement process) a new type of DRP control packet. This new DRP control packet can advertise the virtual IP address (master IP address) of cluster 122, the node's interface IP address, and optionally the virtual MAC address of cluster 122; fig. 5 discloses switch 102/104 inherently sends a DRP message to upstream device 106, which, in light of the instant disclosure, corresponds to route advertisement; fig. 5 likewise discloses switch 102/104 sends a DRP message to downstream device 118, which corresponds to route advertisement), and performing, by the routing nodes in the upstream gateway device and the downstream server, computation according to a routing protocol and based on the master IP address in the route announcements ([0132] discloses CE device 118 re-computes the best path for destination 110 given that the next hop is the virtual IP address of cluster 122; the routing table is maintained intact and a new active link is added to the LAG group; [0103] discloses when CE device 118 is in the process of computing/generating its local L3 routing table, CE device 118 can modify each routing entry that points to the interface IP address included in the control packet (as the next hop address for a destination) such that the entry points instead to cluster 122's virtual IP address. The upstream device is considered to perform the same computation the downstream device does, using the virtual IP address).

But Durrani does not explicitly disclose: route announcement between the first switch and the upstream gateway device. However, in the same field of endeavor, Garcia discloses performing, according to the master IP address, route announcement between the first switch and the upstream gateway device ([0039] and fig. 1 disclose the pair of switches 106a, 106b (first and second switches) forms a single logical node or virtual endpoint. For example, the pair of switches 106a, 106b (also known as an MC pair) is assigned a common virtual IP address (master IP address representing the MC pair) for advertising to other network nodes 116 (upstream gateway device). The network nodes 116 then learn the virtual IP address as the destination IP address for any of the edge nodes 104a (server) connected to the pair of switches 106a, 106b. The network nodes 116 route packets destined to any of edge nodes 104a-d to the switches 106a, 106b using the virtual IP address, regardless of whether the edge nodes 104 are single homed or dual homed).

Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teaching of Durrani with Garcia. The modification would allow exchanging routing and link state information among devices to effectively and rapidly learn the information needed to establish the network topology for link aggregation, yielding an efficient packet forwarding network that prevents packet loss.

Regarding claim 2. The combination discloses the multi-chassis link aggregation routing computation method according to claim 1. Garcia further discloses, further comprising: in a case where there is a single-homing server only connected to the first switch ([0038] and fig. 1 disclose the MCLAG system 100 may host dual homed nodes 104a and single homed nodes 104c), performing route announcement with the single-homing server according to a first IP address ([0039-0042] disclose the MC pair of switches 106a, 106b is each assigned a dedicated IP address (a first IP address, configured for single homed devices after the master IP address (virtual IP address) is configured), such as a system IP address. The system IP address of a switch 106 is specific to the switch 106 and identifies the switch 106 uniquely or separately from the other switch in the MC pair. This system IP address (or any other IP address that is specific or unique to the node) is used as the associated IP address for any locally connected, single homed node 104c, 104d. For example, in address resolution protocol (ARP) requests or responses, a switch 106 advertises its dedicated IP address for locally connected, single homed nodes 104); wherein the first IP address comprises an IP address which is configured after the multi-chassis link aggregation switch system configures the master IP address and the slave IP addresses ([0039-0042] disclose the MC pair of switches 106a, 106b is each assigned a dedicated IP address (a first IP address, configured for single homed devices after the master IP address (virtual IP address) is configured), such as a system IP address. The system IP address of a switch 106 is specific to the switch 106 and identifies the switch 106 uniquely or separately from the other switch in the MC pair. This system IP address (or any other IP address that is specific or unique to the node) is used as the associated IP address for any locally connected, single homed node 104c, 104d. For example, in address resolution protocol (ARP) requests or responses, a switch 106 advertises its dedicated IP address for locally connected, single homed nodes 104. The pair of switches 106a, 106b continues to advertise the virtual IP address for dual homed nodes 104a, 104b).

Regarding claim 3. The combination discloses the multi-chassis link aggregation routing computation method according to claim 1. Garcia further discloses, further comprising: in a case where there is a single-homing server only connected to the first switch (fig. 1 discloses edge node 104c (single homed device) only connected to switch 106a (first switch)), performing route announcement with the single-homing server according to the master IP address ([0039-0042] disclose the pair of switches 106a, 106b (first and second switches) forms a single logical node or virtual endpoint. For example, the pair of switches 106a, 106b (also known as an MC pair) is assigned a common virtual IP address (master IP address) for advertising (route announcement to the upstream device). The network nodes 116 then learn the virtual IP address as the destination IP address for any of the edge nodes 104a, 104b, 104c, and 104d (including single homed devices) connected to the pair of switches 106a, 106b. The network nodes 116 route packets destined to any of edge nodes 104a-d to the switches 106a, 106b using the virtual IP address, regardless of whether the edge nodes 104 are single homed or dual homed. The packets forwarded to the virtual IP address may arrive at either of the switches 106a, 106b. If the system is using the virtual IP address for routing packets to the single homed devices, it is known to a person having ordinary skill in the art that the nodes establish neighbor adjacency relationships by exchanging routing protocols such as OSPF, BGP, etc., compute routes to reach the different nodes in the network, and store them in a database before they exchange packets. If the system is using the virtual IP address, the adjacency relationship is established with the 104 devices (including single homed devices) using the virtual IP address (master IP address); however, advertising and using the virtual IP address as a neighboring node IP address for single homed devices is sub-optimal (see [0040-0042] below).
Therefore, the system is capable of performing route announcement to single homed devices using the master IP address. [0040]: For example, the network nodes 116 may forward a packet destined to an edge node 104 using the virtual IP address assigned to the MC pair of switches 106a, 106b. The packet may arrive at either of the MC pair of switches 106a, 106b. For a dual homed edge node 104a, 104b, either of the MC pair of switches 106a, 106b may forward the packet to the dual homed edge node 104a, 104b, because each of the MC pair of switches 106a, 106b is locally connected to the dual homed edge node 104a, 104b. [0041]: However, in the case of a single homed edge node 104c, 104d, the receiving switch 106 in the MC pair may not be locally connected to the edge node 104c, 104d. The receiving switch 106 must then transmit the packet to the other switch in the MC pair over the VFL 124 to reach the single homed edge node 104c, 104d. For example, a packet destined to the single homed edge node 104c connected to the first switch 106a of the MC pair may arrive at the second switch 106b of the MC pair. The second switch 106b of the MC pair must then forward the packet over the VFL 124 to the first switch 106a of the MC pair. The first switch 106a then forwards the packet received over the VFL 124 to the single homed edge node 104c. This routing for a single homed edge node is sub-optimal and consumes the bandwidth of the VFL 124. The VFL 124 may have limited or low provisioned bandwidth that is not ideal for traffic flow, especially if one or more of the single homed edge nodes are in a heavy traffic state. [0042]: In an embodiment, the MCLAG system 100 is modified such that traffic destined to a single homed node 104c, 104d is forwarded directly to the switch 106a, 106b of the MC pair that is locally connected to the single homed node 104c, 104d. The MC pair of switches 106a, 106b is each assigned a dedicated IP address, such as a system IP address. The system IP address of a switch 106 is specific to the switch 106 and identifies the switch 106 uniquely or separately from the other switch in the MC pair. This system IP address (or any other IP address that is specific or unique to the node) is used as the associated IP address for any locally connected, single homed node 104c, 104d. For example, in address resolution protocol (ARP) requests or responses, a switch 106 advertises its dedicated IP address for locally connected, single homed nodes 104. The pair of switches 106a, 106b continues to advertise the virtual IP address for dual homed nodes 104a, 104b).

Regarding claim 6. Durrani discloses a switch, wherein the switch is a first switch in a multi-chassis link aggregation switch system (fig. 1A discloses switch 102 of the MCLAG), the multi-chassis link aggregation switch system further comprises a second switch (fig. 1A discloses switch 104 of the MCLAG), the first switch and the second switch are configured with a master Internet Protocol (IP) address representing the multi-chassis link aggregation switch system (fig. 1A discloses switches 102 and 104 are configured with a virtual IP address that abstracts the two switches to the upstream and downstream devices of the MCLAG), the first switch and the second switch are further configured with their respective slave IP addresses (fig. 1A discloses switches 102 and 104 have interface IPs 1.1.1.1 and 1.1.1.2, respectively, which are used to exchange routing information before the MCLAG is established using LACP, or are used to exchange control information between the peer switches), and the switch comprises: at least one processor (fig. 1A, switches 102 and 104 comprise a processor); and a memory in communication connection with the at least one processor, wherein the memory stores instructions that are able to be executed by the at least one processor (fig. 1A, switches 102 and 104 comprise a processor and a memory to execute instructions), and the instructions, when being executed by the at least one processor, cause the at least one processor to execute the following operations: respectively performing routing link establishment between the first switch and the second switch, between the first switch and an upstream gateway device, and between the first switch and a downstream server according to the slave IP addresses (fig. 1A discloses switch 102 has its own IP 1.1.1.1 and switch 104 has its own IP 1.1.1.2 (slave IPs), along with devices 106 and 118, which have their own IP addresses. Before the MCLAG is established, to establish adjacency relationships, the above IP addresses are used to exchange routing adjacency information via routing protocols among the devices (118, 102, 104, and 106), before LACP is used to negotiate attributes among the devices; those addresses are used to establish the network topology and neighbor adjacency among the devices before the MCLAG is established; fig. 5 and [0105-0106] disclose that before MCLAG information (DRP) is exchanged, device 118 knows and stores in its routing table the slave IPs of switches 102 and 104, respectively, received from the switches during the adjacency discovery information exchange of a well-known routing protocol such as BGP or OSPF. However, after full establishment of the MCLAG and exchange of the abstracted IP address of the MCLAG, device 118 changes the slave addresses to the master address); and performing, according to the master IP address, route announcement between the first switch and the downstream server ([0105]: FIG. 5 depicts an exemplary DRP link state advertisement (route announcement) flow 500 with respect to network environment 100 according to an embodiment. As shown, as part of this flow, each PE device 102/104 of VRRP/MC-LAG cluster 122 can detect that it is part of a virtual router configuration and transmit a new type of DRP control packet (502/504) to CE device 118. Control packet 502/504 can include, e.g., the virtual IP address of cluster 122 (i.e., 1.1.1.254) and the interface IP address of the originating node (1.1.1.1 in the case of control packet 502, 1.1.1.2 in the case of control packet 504). In certain embodiments, control packet 502/504 can also include the virtual MAC address of cluster 122 (i.e., CCCC.CCCC.CCCC). [0106]: Once CE device 118 has received control packets 502 and 504 from PE devices 102 and 104 respectively, CE device 118 can store the information included in the control packets in a local routing database. CE device 118 can then run a Shortest Path First (SPF) algorithm on the routing database to generate routing entries for its Layer 3 routing table. As part of this processing, CE device 118 can identify routing entries that point to the interface IP address of either PE device 102 or 104 as the next hop address, and can replace those interface IP addresses with the virtual IP address of cluster 122. This results in a routing table that contains routing entries pointing to the VRRP virtual IP address, rather than the interface IP addresses of PE devices 102 and 104. This is illustrated in L3 routing table 130 of FIG. 5, which shows the next hop address for destination 2.2.2.2 as being virtual IP address 1.1.1.254; [0107]: After CE device 118 has computed its L3 routing table as described above, network environment 100 can route data traffic between client 120 and network core 116 in the manner shown in flow 600 of FIG. 6. At step (1) (reference numeral 602) of flow 600, client 120 can transmit a data packet destined for server 110 (having IP address 2.2.2.2) to CE device 118.
At step (2) (reference numeral 604), CE device 118 can select, as the next hop for the packet, the virtual IP address for cluster 122 (i.e., 1.1.1.254), resolve the corresponding virtual MAC address for 1.1.1.254 (i.e., CCCC.CCCC.CCCC) via Address Resolution Protocol (ARP), and set the destination MAC address of the packet to the virtual MAC address. In addition, because links 124 and 126 are part of a LAG, CE device 118 can perform a hash on the headers of the data packet and select either link 124 to PE device 102 or link 126 to PE device 104 for forwarding the packet. In this example, CE device 118 selects link 126 to PE 104; fig. 5 discloses DRP (route announcement) being exchanged between device 102 (first switch), device 106 (upstream gateway/northbound device), and device 118 (southbound device/server)). Durrani further inherently discloses performing, according to the master IP address, route announcement between the first switch and the upstream gateway device (fig. 5 discloses DRP (route announcement) being exchanged between device 102 (first switch), device 106 (upstream gateway/northbound device), and device 118 (southbound device/server); [0105] FIG. 5 depicts an exemplary DRP link state advertisement (route announcement) flow 500 with respect to network environment 100 according to an embodiment. As shown, as part of this flow, each PE device 102/104 of VRRP/MC-LAG cluster 122 can detect that it is part of a virtual router configuration and transmit a new type of DRP control packet (502/504) to CE device 118. Control packet 502/504 can include, e.g., the virtual IP address of cluster 122 (i.e., 1.1.1.254)), wherein the route announcement triggers routing nodes in the upstream gateway device (fig. 5, 106) and the downstream server (fig. 5, 118) to receive route announcements containing the master IP address and sent by the first switch and the second switch of the multi-chassis link aggregation switch system ([0102] discloses each node (i.e., PE device 102/104 (first and second switch)) of VRRP/MC-LAG cluster 122 can transmit (as part of the normal DRP link state advertisement process) a new type of DRP control packet. This new DRP control packet can advertise the virtual IP address (master IP address) of cluster 122, the node's interface IP address, and optionally the virtual MAC address of cluster 122; fig. 5 inherently discloses that switches 102/104 send a DRP message to upstream device 106, which, in light of the instant disclosure, corresponds to route advertisement; fig. 5 discloses switches 102/104 send a DRP message to downstream device 118, which corresponds to route advertisement), and perform computation according to a routing protocol and based on the master IP address in the route announcement ([0132] discloses CE device 118 re-computes the best path for destination 110 given that the next hop is the virtual IP address of cluster 122; the routing table is maintained intact and a new active link is added to the LAG group; [0103] discloses when CE device 118 is in the process of computing/generating its local L3 routing table, CE device 118 can modify each routing entry that points to the interface IP address included in the control packet (as the next hop address for a destination) such that the entry points instead to cluster 122's virtual IP address. The upstream device is considered to do the same computation the downstream device does using the virtual IP address). But Durrani does not explicitly disclose: route announcement between the first switch and the upstream gateway device. However, in the same field of endeavor, Garcia discloses performing, according to the master IP address, route announcement between the first switch and the upstream gateway device ([0039] and fig. 1 disclose the pair of switches 106a, 106b (first and second switches) forms a single logical node or virtual endpoint. For example, the pair of switches 106a, 106b (also known as MC Pairs) is assigned a common virtual IP address (master IP address representing MC pairs) for advertising to other network nodes 116 (upstream gateway device). The network nodes 116 then learn the virtual IP address as the destination IP address for any of the edge nodes 104a (server) connected to the pair of switches 106a, 106b. The network nodes 116 route packets destined to any of edge nodes 104a-d to the switches 106a, 106b using the virtual IP address, regardless of whether the edge nodes 104 are single homed or dual homed). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Durrani with Garcia. The modification would allow devices to exchange routing and link state information so as to effectively and rapidly learn the network topology for the link aggregation, yielding an efficient packet forwarding network that prevents packet loss.

Regarding claim 7. In the combination, Garcia discloses a non-transitory computer-readable storage medium, storing a computer program, wherein the computer program, when being executed by a processor (fig. 1 discloses switches 106a and 106b; the processor and memory comprised in the switches correspond to the processor and non-transitory computer-readable storage medium), causes the processor to execute the multi-chassis link aggregation routing computation method according to claim 1. All other limitations of claim 7 are similar to the limitations of claim 1. Claim 7 is rejected based on the analysis of claim 1 above.

Regarding claim 8. The combination discloses the multi-chassis link aggregation routing computation method according to claim 2.
Garcia discloses, wherein the original IP address of the multi-chassis link aggregation switch system serves as the first IP address, and the master IP address and the slave IP addresses are additionally configured ([0042] The MC pair of switches 106a, 106b is each assigned a dedicated IP address (original IP address), such as a system IP address (first IP address). The system IP address of a switch 106 is specific to the switch 106 and identifies the switch 106 uniquely or separately from the other switch in the MC pair. This system IP address (or any other IP address that is specific or unique to the node) is used as the associated IP address for any locally connected, single homed node 104c, 104d. For example, in address resolution protocol (ARP) requests or responses, a switch 106 advertises its dedicated IP address for locally connected, single homed nodes 104. The pair of switches 106a, 106b continues to advertise the virtual IP address (master IP address) for dual homed nodes 104a, 104b. It is known in the art that in MCLAG, the master IP address (virtual IP address) is created to abstract the two switches as one switch for both dual-homed, downstream devices such as 104a and 104b and the upstream device 116. It is known and standard in MCLAG configuration that before the MCLAG is established, the switches use their respective IP addresses (slave IP addresses) to exchange routing information and establish adjacency with neighbor nodes; later, after the MCLAG is established, these addresses are abstracted by the virtual IP address, which is the same for both switches 106a and 106b, to make the peer switches 106a, 106b look like one switch in the MCLAG. For example, switch 104a thinks that it is communicating with one switch 106 with the virtual IP address instead of two switches with differing IP addresses (slave IP addresses).
The system is capable of configuring different IP addresses in different manners to represent the first IP address, the master IP address, or the slave IP addresses, as long as the single IP address is used for the single homed device, the master IP address (virtual IP address) is used to abstract the peer switches 106a and 106b as one switch in the MCLAG for the MCLAG components, and the slave IP addresses (device IP addresses) are used for exchanging original adjacency information and control information between the two switches before the MCLAG is established).

Regarding claim 9. The combination discloses the multi-chassis link aggregation routing computation method according to claim 2. Garcia discloses, wherein the original IP address of the multi-chassis link aggregation switch system serves as the master IP address, and the slave IP addresses and the first IP address are additionally configured ([0042] The MC pair of switches 106a, 106b is each assigned a dedicated IP address, such as a system IP address (first IP address). The system IP address of a switch 106 is specific to the switch 106 and identifies the switch 106 uniquely or separately from the other switch in the MC pair. This system IP address (or any other IP address that is specific or unique to the node) is used as the associated IP address for any locally connected, single homed node 104c, 104d. For example, in address resolution protocol (ARP) requests or responses, a switch 106 advertises its dedicated IP address for locally connected, single homed nodes 104. The pair of switches 106a, 106b continues to advertise the virtual IP address (master IP address) for dual homed nodes 104a, 104b. It is known in the art that in MCLAG, the master IP address (virtual IP address) is created to abstract the two switches as one switch for both dual-homed, downstream devices such as 104a and 104b and the upstream device 116.
It is known and standard in MCLAG configuration that before the MCLAG is established, the switches use their respective IP addresses (slave IP addresses) to exchange routing information and establish adjacency with neighbor nodes; later, after the MCLAG is established, these addresses are abstracted by the virtual IP address, which is the same for both switches 106a and 106b, to make the peer switches 106a, 106b look like one switch in the MCLAG. For example, switch 104a thinks that it is communicating with one switch 106 with the virtual IP address instead of two switches with differing IP addresses (slave IP addresses). The system is capable of configuring different IP addresses in different manners to represent the first IP address, the master IP address, or the slave IP addresses, as long as the single IP address is used for the single homed device, the master IP address (virtual IP address) is used to abstract the peer switches 106a and 106b as one switch in the MCLAG for the MCLAG components, and the slave IP addresses (device IP addresses) are used for exchanging original adjacency information and control information between the two switches before the MCLAG is established. Configuring different IP addresses in different ways to perform the same function as the originally configured IP addresses of the prior method is an obvious variation; these IP address assignments correspond to printed matter, differing only as labels of an identification that can be configured differently, without any structural difference in the system and without a non-obvious functional relationship. Printed matter is not given patentable weight. See MPEP 2111.05, which states that the first step of the printed matter analysis is the determination that the limitation in question is in fact directed toward printed matter.
Once it is determined that the limitation is directed to printed matter, the examiner must then determine if the matter is functionally or structurally related to the associated physical substrate. See In re DiStefano, 808 F.3d 845, 117 USPQ2d 1267-1268 (Fed. Cir. 2015). If a new and unobvious functional relationship between the printed matter and the substrate does not exist, USPTO personnel need not give patentable weight to printed matter. See In re Lowry, 32 F.3d 1579, 1583-84, 32 USPQ2d 1031, 1035 (Fed. Cir. 1994); In re Ngai, 367 F.3d 1336, 70 USPQ2d 1862 (Fed. Cir. 2004). The printed matter or display format does not create a structural difference in the system).

Regarding claim 11. The combination discloses the multi-chassis link aggregation routing computation method according to claim 1. Durrani discloses, wherein a next hop IP address of a route of a dual-active node connected to both the first switch and the second switch points to the master IP address of a Layer-3 interface of the multi-chassis link aggregation switch system ([0105-0106] discloses each PE device 102/104 of VRRP/MC-LAG cluster 122 can detect that it is part of a virtual router configuration and transmit a new type of DRP control packet (502/504) to CE device 118. Control packet 502/504 can include, e.g., the virtual IP address of cluster 122 (i.e., 1.1.1.254) and the interface IP address of the originating node (1.1.1.1 in the case of control packet 502, 1.1.1.2 in the case of control packet 504). In certain embodiments, control packet 502/504 can also include the virtual MAC address of cluster 122 (i.e., CCCC.CCCC.CCCC). [0106] Once CE device 118 has received control packets 502 and 504 from PE devices 102 and 104 respectively, CE device 118 can store the information included in the control packets in a local routing database. CE device 118 can then run a Shortest Path First (SPF) algorithm on the routing database to generate routing entries for its Layer 3 routing table.
As part of this processing, CE device 118 can identify routing entries that point to the interface IP address of either PE device 102 or 104 as the next hop address, and can replace those interface IP addresses with the virtual IP address of cluster 122. This results in a routing table that contains routing entries pointing to the VRRP virtual IP address, rather than the interface IP addresses of PE devices 102 and 104. This is illustrated in L3 routing table 130 of FIG. 5, which shows the next hop address for destination 2.2.2.2 as being virtual IP address 1.1.1.254), and an egress is an aggregation link group of a communication link between the upstream device and the downstream device (fig. 1C discloses the MC-LAG cluster/VRRP group where an egress is an aggregation link group of a communication link between the upstream device and the downstream device).

Regarding claim 12. The combination discloses the multi-chassis link aggregation routing computation method according to claim 1. Garcia discloses, wherein a next hop of a route of a single-homing node only connected to the first switch points to a first IP address of a Layer-3 interface of the multi-chassis link aggregation switch system ([0039-0042] discloses the MC pair of switches 106a, 106b is each assigned a dedicated IP address (first IP address, configured for the single homed device after the master IP address (virtual IP address) is configured), such as a system IP address. The system IP address of a switch 106 is specific to the switch 106 and identifies the switch 106 uniquely or separately from the other switch in the MC pair. This system IP address (or any other IP address that is specific or unique to the node) is used as the associated IP address for any locally connected, single homed node 104c, 104d. For example, in address resolution protocol (ARP) requests or responses, a switch 106 advertises its dedicated IP address for locally connected, single homed nodes 104. The pair of switches 106a, 106b continues to advertise the virtual IP address for dual homed nodes 104a, 104b), and an egress is an aggregation link group of a communication link between the upstream device and the downstream device (fig. 1 discloses MCLAG system 100 where an egress is an aggregation link group of a communication link between the upstream device and the downstream device), wherein the first IP address is an IP address which is configured after the multi-chassis link aggregation switch system configures the master IP address and the slave IP addresses ([0039-0042] discloses the MC pair of switches 106a, 106b is each assigned a dedicated IP address (first IP address, configured for the single homed device after the master IP address (virtual IP address) is configured), such as a system IP address. The system IP address of a switch 106 is specific to the switch 106 and identifies the switch 106 uniquely or separately from the other switch in the MC pair. This system IP address (or any other IP address that is specific or unique to the node) is used as the associated IP address for any locally connected, single homed node 104c, 104d. For example, in address resolution protocol (ARP) requests or responses, a switch 106 advertises its dedicated IP address for loc
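The mechanism the rejection repeatedly attributes to Durrani's [0106] and [0107], rewriting SPF-derived next hops from a peer's interface IP to the cluster virtual IP and then hashing packet headers to pick a LAG member link, can be sketched as follows. This is an illustrative sketch only; the function names, table layout, and hash choice are assumptions, not anything drawn from Durrani, Garcia, or the claims.

```python
# Illustrative sketch of the next-hop substitution described in Durrani [0106]
# and the LAG member-link hashing in [0107]. All names and structures are
# hypothetical; they are not from the references or the claims.
import zlib

CLUSTER_VIRTUAL_IP = "1.1.1.254"              # virtual IP of cluster 122
PEER_INTERFACE_IPS = {"1.1.1.1", "1.1.1.2"}   # interface IPs of PE 102/104

def rewrite_next_hops(l3_table):
    """Replace any next hop that is a peer interface IP with the virtual IP."""
    return {
        dest: CLUSTER_VIRTUAL_IP if nh in PEER_INTERFACE_IPS else nh
        for dest, nh in l3_table.items()
    }

def select_lag_member(header_bytes, member_links):
    """Hash the packet headers and pick one member link of the LAG."""
    return member_links[zlib.crc32(header_bytes) % len(member_links)]

# CE device 118's entry for destination 2.2.2.2 initially points to a peer
# interface IP; after the rewrite it points to virtual IP 1.1.1.254, matching
# L3 routing table 130 of FIG. 5.
table = rewrite_next_hops({"2.2.2.2": "1.1.1.2"})
assert table == {"2.2.2.2": "1.1.1.254"}
link = select_lag_member(b"2.2.2.2|10.0.0.9", ["link-124", "link-126"])
assert link in ("link-124", "link-126")
```

Either member link is a valid choice for a given flow; the hash only needs to be deterministic per flow so that packets of one flow stay on one link.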


Prosecution Timeline

Jan 08, 2024
Application Filed
Mar 22, 2025
Non-Final Rejection — §103
Jun 25, 2025
Response Filed
Oct 03, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603844
DEADLOCK FREE ALL-TO-ALL COLLECTIVE COMMUNICATION SCHEDULES
2y 5m to grant Granted Apr 14, 2026
Patent 12598143
COORDINATING CONGESTION CONTROL AND ADAPTIVE LOAD BALANCING
2y 5m to grant Granted Apr 07, 2026
Patent 12598144
METHOD AND DEVICE FOR PROCESSING PACKET
2y 5m to grant Granted Apr 07, 2026
Patent 12592891
CONGESTION CONTROL APPLYING ADAPTIVE PATH SELECTION
2y 5m to grant Granted Mar 31, 2026
Patent 12580864
INTER-CLUSTER HIERARCHICAL ROUTING WITH MULTIPLE PATHS FOR LOAD BALANCING
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
55%
Grant Probability
75%
With Interview (+19.8%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 278 resolved cases by this examiner. Grant probability derived from career allow rate.
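The displayed figures can be roughly reproduced from the stated counts (154 granted of 278 resolved, +19.8 point interview lift). The additive-lift formula below is an assumption about how the tool combines the numbers, not its documented methodology.

```python
# Hypothetical reconstruction of the projections from the stated career data.
# The additive model is an assumption, not the tool's documented computation.
granted, resolved = 154, 278
career_allow_rate = granted / resolved            # 0.5539..., shown as 55%
interview_lift = 0.198                            # +19.8 percentage points
with_interview = career_allow_rate + interview_lift
print(f"{career_allow_rate:.0%} base, {with_interview:.0%} with interview")
# prints: 55% base, 75% with interview
```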
