Prosecution Insights
Last updated: April 19, 2026
Application No. 18/652,622

MALLEABLE ROUTING FOR DATA PACKETS

Final Rejection (§103, Double Patenting)

Filed: May 1, 2024
Examiner: OHRI, ROMANI
Art Unit: 2413
Tech Center: 2400 (Computer Networks)
Assignee: Cisco Technology Inc.
OA Round: 2 (Final)
Grant Probability: 85% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (above average; 378 granted / 445 resolved; +26.9% vs TC avg)
Interview Lift: +17.0% (based on resolved cases with vs. without an interview)
Avg Prosecution: 2y 11m (typical timeline; 32 applications currently pending)
Total Applications: 477 (across all art units)
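As a sanity check, the career allow rate above follows directly from the granted/resolved counts, and the resolved and pending counts account for the application total. A minimal Python sketch (the round-to-nearest-percent convention is an assumption):

```python
# Career statistics quoted in the panel above.
granted = 378
resolved = 445
pending = 32
total_applications = 477

# Allow rate = granted / resolved, rounded to the nearest percent.
allow_rate = round(100 * granted / resolved)
print(allow_rate)  # 85

# Resolved plus currently pending equals the career application total.
print(resolved + pending == total_applications)  # True
```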

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 55.9% (+15.9% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 16.8% (-23.2% vs TC avg)

Deltas are relative to an estimated Tech Center average. Based on career data from 445 resolved cases.
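The per-statute deltas let the Tech Center baseline be recovered: TC average = examiner rate minus delta. A quick arithmetic sketch (the dictionary layout and names are illustrative, not from the report):

```python
# Examiner rate and delta vs Tech Center average, per statute,
# as quoted in the table above (percentages).
stats = {
    "101": (5.1, -34.9),
    "103": (55.9, +15.9),
    "102": (11.9, -28.1),
    "112": (16.8, -23.2),
}

# TC average = examiner rate - delta vs TC average.
for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(f"§{statute}: TC avg ≈ {tc_avg}%")
# Every statute implies the same ~40.0% Tech Center baseline,
# consistent with a single TC-average estimate behind the table.
```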

Office Action

Rejections at issue: §103 and nonstatutory double patenting (§DP)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Amendment

Applicant's amendment, filed November 10, 2025, has been entered and carefully considered. Claims 1, 6-8, 15-16, 18, and 20 are amended; claim 17 is canceled. Claims 1-16 and 18-20 are pending.

Response to Arguments

Applicant's arguments filed on November 10, 2025 with respect to the rejection of claims 1-16 and 18-20 are moot because the amended claim limitations are addressed using new citations from the cited prior art. The combination of Brahim and Chawla reads on the amended limitations.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

Claims 1-2, 6-10, 15-16 and 20 are provisionally rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 2-4, 9, 10-12, 18-20 and 25 of copending Application No. 17/360,283. Although the conflicting claims are not identical, they are not patentably distinct from each other. The claim chart below compares claims of the instant Application 18/652,622 with claims of copending Application 17/360,283.

Instant claim 1: A method comprising: at each of a plurality of network nodes, the plurality of network nodes forming a subset of the nodes in a larger network, wherein each network node within the larger network supports a first method for determining a next hop of a received packet according to a first routing criterion, the first method being identical across all network nodes within the larger network; receiving, by a first network node of the plurality of network nodes, router capability advertisements from a subset of the plurality of network nodes, the router capability advertisements indicating: support for an alternate routing capability supported by each advertising node; one or more flexible routing criteria associated with the alternate routing capability; and a segment identifier (SID) associated with the flexible routing criteria; wherein the flexible routing criteria include a
target metric to be minimized when determining the next hop of a received packet and a set of constraints applicable to determining the next hop of a received packet; forming, based at least in part on the received advertisements, a pruned network topology by selecting one or more network nodes from the subset of network nodes that support the flexible routing criteria associated with the alternate routing capability and that are not excluded by the set of constraints; determining a first route for a first set of data packets through the pruned network topology, wherein determining the first route is performed by selecting each hop in the first route to minimize the target metric associated with the flexible routing criteria; and propagating the first set of data packets along the determined route; wherein the target metric to be minimized is latency.

Copending claim 2: A method comprising: at each of a plurality of network nodes of a network, wherein each network node supports routing packets according to a uniform routing criterion, indicating over the network one or more routing capabilities supported by the network node, each routing capability being associated with one or more alternative routing criteria to the uniform routing criterion; receiving, by a first network node, the indications regarding the routing capabilities supported by each of the plurality of network nodes; determining, at the first network node, one or more alternative routing [[criterion]] criteria to use to transmit a first set of data packets across [[a]] the network; identifying, based at least in part on the received indications, one or more network nodes from the plurality of network nodes that support the routing capability associated with the determined one or more alternative routing criteria; determining a route for the first set of data packets through the one or more network nodes and [[the]] communication links that support the routing capability; and propagating the first set of data packets along the
determined route.

Instant claim 2: The method of claim 1, wherein the first route taken by the first set of data packets traversing the network is different than a second route taken by a second set of data packets traversing the network, the second route determined using the first routing criterion.

Copending claim 3: The method of claim 2, wherein the [[path]] route taken by the first set of packets traversing the [[system]] network is different than a second route taken by a second set of packets traversing the [[system using a]] network, the second route calculated using the uniform routing criterion.

Instant claim 3: The method of claim 1, wherein determining the first route that controls the latency comprises minimizing the latency of the first set of data packets traversing the network.

Instant claim 4: The method of claim 1, wherein determining the first route that controls the latency comprises reducing the latency of the first set of data packets traversing the network relative to the first routing criterion.

Instant claim 5: The method of claim 3, wherein determining the first route for the first set of data packets comprises determining the first route according to Dijkstra's algorithm.

Instant claim 6: The method of claim 1, wherein two or more network nodes from the plurality of nodes each receive the indications from other network nodes regarding the alternative routing capability supported by the other network nodes, and wherein each network node from the two or more network nodes determines a forwarding entry for each alternative routing capability based in part upon the indications received from other network nodes.

Copending claims 9, 25: The method of claim 2, wherein two or more network nodes from the plurality of network nodes each receive the indications from other nodes regarding the routing capabilities supported by other network nodes; and wherein each network node from the two or more network nodes determines a forwarding entry for each alternative routing capability based in part upon the indications received from other network nodes.

Instant claim 7: 7.
(New) The system of claim 1, wherein the first routing criterion is a shortest route criterion.

Copending claim 4: The method of claim 2, wherein the uniform routing criterion is a shortest route criterion.

Instant claim 8: A system for configurable traffic routing comprising: a network comprising a plurality of network nodes, the plurality of network nodes forming a subset of the nodes in a larger network, each network node within the larger network supporting a first method for determining a next hop of a received packet according to a first routing criterion, the first method being identical across all network nodes within the larger network, each network node indicating over the network an alternate routing capability supported by the plurality of network nodes, the alternate routing capability being associated with a second method for determining the next hop of a received packet, the second method using one or more operator-defined alternative routing criteria not used in the first method for determining the next hop of a received packet; wherein a first network node of the plurality of network nodes receives router capability advertisements from a subset of the plurality of network nodes, the router capability advertisements indicating support for an operator-defined alternate routing capability by each advertising node; one or more flexible routing criteria associated with the alternate routing capability; and a segment identifier (SID) associated with the flexible routing criteria; wherein the flexible routing criteria include a target metric to be minimized when determining the next hop of a received packet and a set of constraints applicable to determining the next hop of a received packet; a node/link determination module, and a route determination module, wherein: the node/link determination module includes logic configured to form, based at least in part on the received advertisements, a pruned network topology by selecting one or more network nodes from the subset of network nodes
that support the flexible routing criteria associated with the operator-defined alternate routing capability and that are not excluded by the set of constraints; and the route determination module includes logic configured to determine a first route for a first set of data packets through the pruned network topology; wherein determining the first route is performed by selecting each hop in the first route to minimize the target metric associated with the flexible routing criteria; and logic configured to propagate the first set of data packets along the determined route; wherein the target metric to be minimized is latency.

Copending claim 10: A system for configurable traffic routing comprising: a plurality of network nodes, each network node including a network interface, a non-transitory memory and one or more processors coupled with the non-transitory memory, wherein each network node supports routing packets according to a uniform routing criterion, and is further configured to support routing capabilities, each routing capability being associated with one or more operator-defined routing criteria, and wherein each network node transmits an indication of the supported routing capabilities on the network interface; a communications network with links interconnecting the plurality of network nodes in a physical topology; the system further including a criterion determination module, a node/link determination module, and a route determination module; wherein the criterion determination module is configured to determine one or more operator-defined routing criteria to transmit a set of data packets across [[a]] the communication network, wherein the one or more operator-defined routing criteria are associated with a routing capability; wherein the node/link determination module is configured to identify network nodes and communication links in the communication network that satisfy the routing capability; and wherein the route determination module is configured to use the one or more
operator-defined routing criteria to identify a route for data packets through the network nodes and the communication links that support the routing capability; wherein the system is operable to configure the set of network nodes and the communication links corresponding to the route identified by the route determination module to exchange a first set of packets traversing the [[system]] communication network using the identified route.

Instant claim 9: The system of claim 8, wherein the first route taken by the first set of data packets in the network is different than a second route taken by a second set of data packets traversing the network, the second route determined using the first method for determining the next hop of a received packet.

Copending claim 11: wherein the [[path]] route taken by the first set of packets traversing the [[system]] communication network is different than a second route taken by a second set of data packets traversing the communication network, the second [[system using a]] route calculated using the uniform routing criterion.

Instant claim 10: The system of claim 8, wherein the first routing criterion is a shortest route criterion.

Copending claims 12, 20: 12. (Currently Amended) The system of claim 10, wherein the uniform routing criterion is a shortest route criterion.

Instant claim 11: The system of claim 8, wherein the network performance metric is a latency measurement.

Instant claim 12: The system of claim 11, wherein the first route taken by the first set of data packets minimizes a latency associated with traversing the network.

Instant claim 13: The system of claim 11, wherein the first route taken by the first set of data packets reduces a latency associated with traversing the network relative to the first routing criterion.

Instant claim 14: The system of claim 7, wherein more than one network node includes the node/link determination module and the route determination module.
Instant claim 15: A non-transitory computer-readable medium that stores a set of instructions which, when executed by one or more processors located in a plurality of network nodes, each network node supporting a first method for routing packets, cause selected network nodes to perform actions to support a second method for determining the next hop of a received packet, the actions including: receiving, by a first network node of the plurality of network nodes, router capability advertisements from a subset of the plurality of network nodes, the router capability advertisements indicating support for the second method of routing packets by each advertising node; one or more flexible routing criteria associated with the second method; and a segment identifier (SID) associated with the flexible routing criteria; wherein the flexible routing criteria include a target metric to be minimized when determining the next hop of a received packet and a set of constraints applicable to determining the next hop of a received packet; forming, based at least in part on the received advertisements, a pruned network topology by selecting one or more network nodes from the subset of network nodes that support the flexible routing criteria associated with the second method of routing packets and that are not excluded by the set of constraints; determining a first route for a first set of data packets through the pruned network topology, wherein determining the first route is performed by selecting each hop in the first route to minimize the target metric associated with the flexible routing criteria; and propagating the first set of data packets along the determined route; wherein the target metric to be minimized is latency.
Copending claim 18: A non-transitory computer-readable medium containing [[logic]] instructions which, when executed by one or more processors, perform a method comprising: at each of a plurality of network nodes, each network node supporting routing packets according to a uniform routing criterion, indicating one or more routing capabilities supported by the network node, each routing capability being associated with [[a]] one or more operator-defined alternative routing criteria; receiving, by a first network node, the indications regarding the alternative routing capabilities supported by each of the plurality of nodes; determining, at the first network node, one or more operator-defined alternative routing criteria to use to route a set of data packets across a network; identifying, based at least in part on the received indications, one or more network nodes from the plurality of network nodes that support the routing capability associated with the determined one or more operator-defined alternative routing criteria; determine a route satisfying the determined one or more operator-defined alternative routing criteria for the set of data packets through the one or more network nodes and [[the]] communication links that support the routing capability; and propagate the set of data packets along the determined route.

Instant claim 16: The non-transitory computer-readable medium of claim 15, wherein the first route taken by the first set of data packets is different than the route taken by a second set of data packets traversing the network using a second route, the second route determined using the first method of routing packets.

Copending claim 19: The [[logic]] non-transitory computer-readable medium of claim 18, wherein the [[path]] route taken by the set of data packets traversing the system is different than a second route taken by a second set of data packets traversing the [[system using the]] network, the second route calculated by using the uniform routing criterion.
Instant claim 20: The non-transitory computer-readable medium of claim 16, wherein two or more network nodes from the plurality of network nodes each include instructions which, when executed by one or more processors, further perform actions including: receiving the advertisements from other network nodes regarding the routing capabilities supported by the other network nodes; and determining a forwarding entry for each routing capability supported based in part upon the advertisements received from other network nodes.

Copending claim 25: wherein two or more network nodes from the plurality of network nodes each receive the indications from other nodes regarding the routing capabilities supported by other network nodes; and wherein each network node from the two or more network nodes determines a forwarding entry for each alternative routing capability based in part upon the indications received from other network nodes.

One of ordinary skill in the art would conclude that the claims at issue are obvious variants of one another because the method claims in copending Application 17/360,283 comprise substantially the same limitations as the identified method, system, and non-transitory computer-readable medium claims in the instant application. With regard to the system claims, it would have been obvious to one of ordinary skill in the art to implement the method steps detailed in claim 1 in a system for configurable traffic routing comprising a plurality of network nodes, each network node including a network interface, a non-transitory memory, and one or more processors coupled with the non-transitory memory, to perform the steps as described in the independent claims. This is a provisional obviousness-type double patenting rejection because the conflicting claims have not in fact been patented.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6, 8-9, 12-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ould Brahim et al. (US 2018/0167458 A1, hereinafter referred to as Brahim) in view of Chawla et al. (US 2015/0026313 A1, hereinafter referred to as Chawla).

Regarding claim 1, Brahim discloses a method comprising (Fig. 2 discloses the mechanism of a network comprising a controller configured to discover a set of ingress PE devices providing traffic to an egress PE device): at each of a plurality of network nodes, the plurality of network nodes forming a subset of the nodes in a larger network, wherein each network node within the larger network supports a first method for determining a next hop of a received packet according to a first routing criterion (Paragraphs 0034-0040 disclose that the configuration is determined by the SDN controller 140 based on information received by the SDN controller 140 from the egress peer node (e.g., based on processing of flow statistics received from the egress peer node), the configuration of the egress peer node by the SDN controller 140 to collect flow statistics (routing capabilities) and provide the flow information, and that the providing of the flow information to the SDN controller 140 may be based on a control protocol (e.g., OpenFlow, BGP FlowSpec, Netflow, or the like)); the first method being identical across all network nodes within the larger network (Paragraphs 0034-0040);
receiving, by a first network node of the plurality of network nodes, router capability advertisements from a subset of the plurality of network nodes, the router capability advertisements indicating support for an alternate routing capability supported by each advertising node (Paragraphs 0023, 0034, 0039 disclose that in the network 110, the egress BR 119-2 is available as an alternate egress peer node for each of the ingress PE devices 111-1, 111-2, and 111-3; in FIG. 2, as indicated by the flow information of flow information table 201, the SDN controller 140 identifies the top three traffic flows, in terms of bandwidth consumption, on egress peer link 122 of egress BR 119-1. The top three traffic flows include a first traffic flow (which is indicated by a source/destination IP address pair of 1.1.1.0/24) for which egress peer link 122 of egress BR 119-1 supports 2.5 M, a second traffic flow (which is indicated by a source/destination IP address pair of 2.2.1.0/24) for which egress peer link 122 of BR 119-1 supports 2 M, and a third traffic flow (which is indicated by a source/destination IP address pair of 3.3.1.0/24) for which egress peer link 122 of BR 119-1 supports 1 M); one or more flexible routing criteria associated with the alternate routing capability (Paragraphs 0023, 0034, 0039 disclose that in the network 110, the egress BR 119-2 is available as an alternate egress peer node for each of the ingress PE devices 111-1, 111-2, and 111-3; in FIG. 2, as indicated by the flow information of flow information table 201, the SDN controller 140 identifies the top three traffic flows, in terms of bandwidth consumption, on egress peer link 122 of egress BR 119-1.
The top three traffic flows include a first traffic flow (which is indicated by a source/destination IP address pair of 1.1.1.0/24) for which egress peer link 122 of egress BR 119-1 supports 2.5 M, a second traffic flow (which is indicated by a source/destination IP address pair of 2.2.1.0/24) for which egress peer link 122 of BR 119-1 supports 2 M, and a third traffic flow (which is indicated by a source/destination IP address pair of 3.3.1.0/24) for which egress peer link 122 of BR 119-1 supports 1 M); a segment identifier (SID) associated with the flexible routing criteria (paragraphs 0023, 0039); and wherein the flexible routing criteria include a target metric to be minimized when determining the next hop of a received packet (Paragraph 0039 discloses that the SDN controller 140 may initiate a management action to attempt to alleviate the congestion on the egress peer link due to the traffic flow. The SDN controller 140 may select, from the set of ingress PE devices that are contributing to congestion of the given traffic flow on the egress peer link, a selected ingress PE device sending traffic to the egress peer link for the given traffic flow and may initiate redirection of the traffic of the traffic flow from being sent from the selected ingress PE device to the egress peer link to being sent from the selected ingress PE device to an alternate egress peer link (e.g., of the same egress peer node or a different egress peer node)); and a set of constraints applicable to determining the next hop of a received packet (Fig. 2, paragraphs 0039-0040 disclose that the SDN controller 140 may initiate a management action to attempt to alleviate the congestion on the egress peer link due to the traffic flow, selecting from the set of ingress PE devices that are contributing to congestion of the given traffic flow on the egress peer link.
The SDN controller 140 may select, from the set of ingress PE devices that are contributing to congestion on the egress peer link, a selected ingress PE device sending traffic to the egress peer link (e.g., the ingress PE device sending the most traffic to the egress peer link, the ingress PE device sending the next-to-most traffic to the egress peer link, or the like) and may initiate redirection of traffic from being sent from the selected ingress PE device to the egress peer link to being sent from the selected ingress PE device to an alternate egress peer link (e.g., of the same egress peer node or a different egress peer node)); forming, based at least in part on the received advertisements, a pruned network topology by selecting one or more network nodes from the subset of network nodes that support the flexible routing criteria associated with the alternate routing capability and that are not excluded by the set of constraints (Paragraphs 0034-0040 disclose that the SDN controller 140 identifies a set of traffic flows on the egress peer node and the egress peer link for which ingress PE device discovery is to be performed (which also may be referred to as a set of traffic flows of interest or, more simply, traffic flows of interest). The traffic flows on the egress peer link may include all of the traffic flows on the egress peer link or a subset of the traffic flows on the egress peer link. The traffic flows may be the top N (N≥1) traffic flows on the egress peer link, which may be measured in terms of bandwidth. The SDN controller 140 may identify the set of traffic flows on the egress peer link based on flow information associated with the egress peer link.
The flow information associated with the egress peer link may include per-flow bandwidth information that includes, for each of the traffic flows, an indication of the amount of bandwidth of the respective traffic flow that is supported by the egress peer link); determining a first route for a first set of data packets through the pruned network topology, wherein determining the first route is performed by selecting each hop in the first route to minimize the target metric associated with the flexible routing criteria (Paragraph 0034 discloses that the traffic flows may be the top N (N≥1) traffic flows on the egress peer link, which may be measured in terms of bandwidth. The SDN controller 140 may identify the set of traffic flows on the egress peer link based on flow information associated with the egress peer link. The flow information associated with the egress peer link may include per-flow bandwidth information that includes, for each of the traffic flows, an indication of the amount of bandwidth of the respective traffic flow that is supported by the egress peer link. The per-flow bandwidth information may be based on flow statistics which may be collected for the traffic flows on the egress peer node and the egress peer link); wherein the target metric to be minimized is latency (paragraphs 0039-0040).

Brahim does not explicitly disclose the mechanism of propagating the first set of data packets along the determined route. In an analogous art, Chawla discloses propagating the first set of data packets along the determined route (Fig. 4, paragraphs 0021-0028, 0033-0040 disclose the management IHS 202 that allows a network administrator or other user of the IHS network to provide a plurality of DCB configuration details that may include the identity of data traffic flows, priorities for data traffic flows, bandwidth allocations for data traffic flows, lossless behavior requests for data traffic flows, and congestion notification requests for data traffic flows.
At step 404, a data traffic flow is identified (paragraphs 0033-0040)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of Chawla to the system of Brahim to provide systems and methods for configuring and managing a Data Center Bridged (DCB) Information Handling System (IHS) network that include a plurality of switch IHSs that are connected together and which also utilize flow-based networking to configure and manage the IHS network (Abstract, Chawla).

Regarding claim 8, claim 8 comprises substantially similar limitations as claimed in claim 1, claimed as a system to perform the steps of claim 1. Brahim discloses a system for configurable traffic routing comprising a network comprising a plurality of network nodes (Fig. 2 discloses the mechanism of a network comprising a controller configured to discover a set of ingress PE devices providing traffic to an egress PE device): at each of a plurality of network nodes, the plurality of network nodes forming a subset of the nodes in a larger network, wherein each network node within the larger network supports a first method for determining a next hop of a received packet according to a first routing criterion (Paragraphs 0034-0040 disclose that the configuration is determined by the SDN controller 140 based on information received by the SDN controller 140 from the egress peer node (e.g., based on processing of flow statistics received from the egress peer node), the configuration of the egress peer node by the SDN controller 140 to collect flow statistics (routing capabilities) and provide the flow information, and that the providing of the flow information to the SDN controller 140 may be based on a control protocol (e.g., OpenFlow, BGP FlowSpec, Netflow, or the like)); the first method being identical across all network nodes within the larger network (Paragraphs 0034-0040); indicating over the network an alternate capability supported by
the plurality of network nodes, the alternate routing capability being associated with a second method for determining the next hop of a received packet, the second method using one or more operator-defined alternative routing criteria not used in the first method for determining the next hop of a received packet (Paragraphs 0023, 0034, 0039 disclose that in the network 110, the egress BR 119-2 is available as an alternate egress peer node for each of the ingress PE devices 111-1, 111-2, and 111-3; in FIG. 2, as indicated by the flow information of flow information table 201, the SDN controller 140 identifies the top three traffic flows, in terms of bandwidth consumption, on egress peer link 122 of egress BR 119-1. The top three traffic flows include a first traffic flow (which is indicated by a source/destination IP address pair of 1.1.1.0/24) for which egress peer link 122 of egress BR 119-1 supports 2.5 M, a second traffic flow (which is indicated by a source/destination IP address pair of 2.2.1.0/24) for which egress peer link 122 of BR 119-1 supports 2 M, and a third traffic flow (which is indicated by a source/destination IP address pair of 3.3.1.0/24) for which egress peer link 122 of BR 119-1 supports 1 M); wherein a first network node of the plurality of network nodes receives router capability advertisements from a subset of the plurality of network nodes, the router capability advertisements indicating support for an operator-defined alternate routing capability by each advertising node (Paragraphs 0023, 0034, 0039 disclose that in the network 110, the egress BR 119-2 is available as an alternate egress peer node for each of the ingress PE devices 111-1, 111-2, and 111-3; in FIG. 2, as indicated by the flow information of flow information table 201, the SDN controller 140 identifies the top three traffic flows, in terms of bandwidth consumption, on egress peer link 122 of egress BR 119-1.
The top three traffic flows include a first traffic flow (which is indicated by a source/destination IP address pair of 1.1.1.0/24) for which egress peer link 122 of egress BR 119-1 supports 2.5 M, a second traffic flow (which is indicated by a source/destination IP address pair of 2.2.1.0/24) for which egress peer link 122 of BR 119-1 supports 2 M, and a third traffic flow (which is indicated by a source/destination IP address pair of 3.3.1.0/24) for which egress peer link 122 of BR 119-1 supports 1 M); one or more flexible routing criteria associated with the alternate routing capability (Paragraphs 0023, 0034, 0039 disclose that in the network 110, the egress BR 119-2 is available as an alternate egress peer node for each of the ingress PE devices 111-1, 111-2, and 111-3; in FIG. 2, as indicated by the flow information of flow information table 201, the SDN controller 140 identifies the top three traffic flows, in terms of bandwidth consumption, on egress peer link 122 of egress BR 119-1. The top three traffic flows include a first traffic flow (which is indicated by a source/destination IP address pair of 1.1.1.0/24) for which egress peer link 122 of egress BR 119-1 supports 2.5 M, a second traffic flow (which is indicated by a source/destination IP address pair of 2.2.1.0/24) for which egress peer link 122 of BR 119-1 supports 2 M, and a third traffic flow (which is indicated by a source/destination IP address pair of 3.3.1.0/24) for which egress peer link 122 of BR 119-1 supports 1 M); a segment identifier (SID) associated with the flexible routing criteria (Paragraphs 0023, 0039); and wherein the flexible routing criteria include a target metric to be minimized when determining the next hop of a received packet (Paragraph 0039 discloses that the SDN controller 140 may initiate a management action to attempt to alleviate the congestion on the egress peer link due to the traffic flow.
The SDN controller 140 may select, from the set of ingress PE devices that are contributing to congestion of the given traffic flow on the egress peer link, a selected ingress PE device sending traffic to the egress peer link for the given traffic flow and may initiate redirection of the traffic of the traffic flow from being sent from the selected ingress PE device to the egress peer link to being sent from the selected ingress PE device to an alternate egress peer link (e.g., of the same egress peer node or a different egress peer node)); and a set of constraints applicable to determining the next hop of a received packet (Fig. 2, Paragraphs 0039-0040 disclose that the SDN controller 140 may initiate a management action to attempt to alleviate the congestion on the egress peer link due to the traffic flow. The SDN controller 140 may select, from the set of ingress PE devices that are contributing to congestion on the egress peer link, a selected ingress PE device sending traffic to the egress peer link (e.g., the ingress PE device sending the most traffic to the egress peer link, the ingress PE device sending the next-to-most traffic to the egress peer link, or the like) and may initiate redirection of traffic from being sent from the selected ingress PE device to the egress peer link to being sent from the selected ingress PE device to an alternate egress peer link (e.g., of the same egress peer node or a different egress peer node)); the node/link determination module includes logic configured to form, based at least in part on the received advertisements, a pruned network topology by selecting one or more network nodes from the subset of network nodes that support the flexible routing criteria associated with the alternate routing capability and that are not excluded by the set of constraints (Paragraphs 0034-0040
disclose that the SDN controller 140 identifies a set of traffic flows on the egress peer node and the egress peer link for which ingress PE device discovery is to be performed (which also may be referred to as a set of traffic flows of interest or, more simply, traffic flows of interest). The traffic flows on the egress peer link may include all of the traffic flows on the egress peer link or a subset of the traffic flows on the egress peer link. The traffic flows may be the top N (N≥1) traffic flows on the egress peer link, which may be measured in terms of bandwidth. The SDN controller 140 may identify the set of traffic flows on the egress peer link based on flow information associated with the egress peer link. The flow information associated with the egress peer link may include per-flow bandwidth information that includes, for each of the traffic flows, an indication of the amount of bandwidth of the respective traffic flow that is supported by the egress peer link); wherein determining a first route for a first set of data packets through the pruned network topology is performed by selecting each hop in the first route to minimize the target metric associated with the flexible routing criteria, wherein the flexible routing criteria are associated with the operator-defined alternate routing capability and are not excluded by the set of constraints (Paragraph 0034 discloses that the traffic flows may be the top N (N≥1) traffic flows on the egress peer link, which may be measured in terms of bandwidth. The SDN controller 140 may identify the set of traffic flows on the egress peer link based on flow information associated with the egress peer link. The flow information associated with the egress peer link may include per-flow bandwidth information that includes, for each of the traffic flows, an indication of the amount of bandwidth of the respective traffic flow that is supported by the egress peer link.
The per-flow bandwidth information may be based on flow statistics which may be collected for the traffic flows on the egress peer node and the egress peer link); the route determination module includes logic configured to determine a first route for the first set of data packets through the pruned network topology wherein determining the first route is performed by selecting each hop in the first route to minimize the target metric associated with the flexible routing criteria (Paragraph 0034 discloses the traffic flows may be the top N (N≥1) traffic flows on the egress peer link, which may be measured in terms of bandwidth. The SDN controller 140 may identify the set of traffic flows on the egress peer link based on flow information associated with the egress peer link. The flow information associated with the egress peer link may include per-flow bandwidth information that includes, for each of the traffic flows, an indication of the amount of bandwidth of the respective traffic flow that is supported by the egress peer link. The per-flow bandwidth information may be based on flow statistics which may be collected for the traffic flows on the egress peer node and the egress peer link); wherein the target metric to be minimized is latency (paragraphs 0039-0040). Brahim does not explicitly disclose the mechanism of propagating the first set of data packets along the determined route. In an analogous art, Chawla discloses propagating the first set of data packets along the determined route (Fig. 4, paragraphs 0021-0028, 0033-0040 disclose the management IHS 202 that allows a network administrator or other user of the IHS network to provide a plurality of DCB configuration details that may include the identity of data traffic flows, priorities for data traffic flows, bandwidth allocations for data traffic flows, lossless behavior requests for data traffic flows, congestion notification requests for data traffic flows. 
At step 404, a data traffic flow is identified; see Paragraphs 0033-0040). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Chawla to the system of Brahim to provide systems and methods for configuring and managing a Data Center Bridged (DCB) Information Handling System (IHS) network including a plurality of switch IHSs that are connected together and which also utilize flow-based networking to configure and manage the IHS network (Abstract, Chawla). Regarding claim 15, claim 15 comprises substantially similar limitations as claimed in claim 1, claimed as a non-transitory computer-readable storage medium containing logic (Fig. 2 discloses a controller apparatus which includes a processor and a memory configured to identify a set of traffic flows on an egress peer link of an egress peer node) which, when executed by one or more processors, performs the steps of claim 1. Regarding claims 2, 9 and 16, Brahim does not disclose wherein the first route taken by the first set of data packets traversing the network is different than a second route taken by a second set of data packets traversing the network, the second route determined using the first routing criterion. In an analogous art, Chawla discloses wherein the first route taken by the first set of data packets traversing the network is different than a second route taken by a second set of data packets (Fig. 5a discloses three flows, each with routing criteria different from the first) traversing the network, the second route determined using the first routing criterion (Paragraphs 0033-0040 disclose that each flow identifier has its own identity, as also disclosed in Fig. 5a at 506, 508 and 510).
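For orientation, the two mechanisms the rejection repeatedly cites — Brahim's ranking of the top-N traffic flows on an egress peer link by bandwidth (Paragraphs 0034-0040, flow information table 201) and the claimed formation of a pruned topology limited to capability-advertising nodes — can be sketched in a few lines of Python. The function names, data shapes, and the constraint model here are illustrative assumptions for exposition, not drawn from Brahim, Chawla, or the claim language:

```python
from heapq import nlargest

def top_n_flows(flow_bandwidth, n=3):
    """Return the top-N traffic flows on an egress peer link,
    ranked by bandwidth (cf. Brahim's flow information table 201)."""
    return nlargest(n, flow_bandwidth.items(), key=lambda kv: kv[1])

def prune_topology(topology, advertisements, constraints):
    """Keep only nodes that advertised support for the alternate
    routing capability and are not excluded by the constraints.
    topology: {node: {neighbor: metric}} (hypothetical shape)."""
    return {
        node: {nbr: m for nbr, m in links.items()
               if nbr in advertisements and nbr not in constraints}
        for node, links in topology.items()
        if node in advertisements and node not in constraints
    }

# Bandwidth figures loosely mirror Brahim's Fig. 2 example (2.5 M, 2 M, 1 M).
flows = {"1.1.1.0/24": 2.5e6, "2.2.1.0/24": 2.0e6,
         "3.3.1.0/24": 1.0e6, "4.4.1.0/24": 0.2e6}
print(top_n_flows(flows))  # the three largest flows by bandwidth
```

Selecting the top N with a heap and filtering the topology by advertised capability are standard techniques; the sketch only illustrates the shape of the steps the Office Action maps onto the claims.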
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Chawla to the system of Brahim to provide systems and methods for configuring and managing a Data Center Bridged (DCB) Information Handling System (IHS) network including a plurality of switch IHSs that are connected together and which also utilize flow-based networking to configure and manage the IHS network (Abstract, Chawla). Regarding claims 3, 12 and 17, Brahim discloses wherein determining the first route that controls the latency of the first set of data packets as they traverse the network comprises minimizing the latency of the first set of data packets traversing the network (Paragraph 0039 discloses that the SDN controller 140 may initiate a management action to attempt to alleviate the congestion on the egress peer link due to the traffic flow. The SDN controller 140 may select, from the set of ingress PE devices that are contributing to congestion of the given traffic flow on the egress peer link, a selected ingress PE device sending traffic to the egress peer link for the given traffic flow and may initiate redirection of the traffic of the traffic flow from being sent from the selected ingress PE device to the egress peer link to being sent from the selected ingress PE device to an alternate egress peer link (e.g., of the same egress peer node or a different egress peer node)). Regarding claims 4 and 13, Brahim discloses wherein determining the first route that controls the latency comprises reducing the latency of the first set of data packets traversing the network relative to the first routing criterion (Paragraph 0039 discloses that the SDN controller 140 may initiate a management action to attempt to alleviate the congestion on the egress peer link due to the traffic flow.
The SDN controller 140 may select, from the set of ingress PE devices that are contributing to congestion of the given traffic flow on the egress peer link, a selected ingress PE device sending traffic to the egress peer link for the given traffic flow and may initiate redirection of the traffic of the traffic flow from being sent from the selected ingress PE device to the egress peer link to being sent from the selected ingress PE device to an alternate egress peer link (e.g., of the same egress peer node or a different egress peer node)). Regarding claim 18, Brahim discloses wherein determining the first route that controls the latency of the first set of data packets as they traverse the network comprises reducing the latency of the first set of data packets traversing the network (Paragraph 0039 discloses that the SDN controller 140 may initiate a management action to attempt to alleviate the congestion on the egress peer link due to the traffic flow. The SDN controller 140 may select, from the set of ingress PE devices that are contributing to congestion of the given traffic flow on the egress peer link, a selected ingress PE device sending traffic to the egress peer link for the given traffic flow and may initiate redirection of the traffic of the traffic flow from being sent from the selected ingress PE device to the egress peer link to being sent from the selected ingress PE device to an alternate egress peer link (e.g., of the same egress peer node or a different egress peer node)) relative to the latency of a second set of data packets traversing the network via a second route computed using the first routing criterion (Fig. 5; Paragraph 0039 discloses that the SDN controller 140, where it is determined that the traffic flow 2.2.1.0/24 is contributing to congestion on the egress peer link 122 of the egress BR 119-1, may use the per-PE/per-flow bandwidth usage information in flow information table 203 to determine that ingress PE device 111-1 and ingress PE device
111-2 are both sourcing traffic of the traffic flow 2.2.1.0/24 to the egress peer link 122 of the egress BR 119-1 and that ingress PE device 111-1 is sourcing more traffic of traffic flow 2.2.1.0/24 to the egress peer link 122 of the egress BR 119-1 (namely, 1.5 M) than the ingress PE device 111-2 (namely, 0.5 M). Thus, based on the traffic flow statistics, the ingress device sends the first set of data packets toward the first egress peer link and the second set of data packets toward the second egress peer link). Regarding claims 6 and 20, Brahim discloses wherein two or more network nodes from the plurality of nodes each receive the advertisements from other network nodes regarding the alternative routing capability supported by the other network nodes (Fig. 2, Paragraphs 0034-0040) and wherein each network node from the two or more network nodes determines a forwarding entry for each alternative routing capability based in part upon the advertisements received from other network nodes (Fig. 2, Paragraphs 0034-0040 disclose that, as indicated by the flow information of flow information table 202-1 associated with ingress PE device 111-1, the SDN controller 140 determines, for each of the traffic flows traversing the egress peer link 122 of the egress BR 119-1 that is sourced from the ingress PE device 111-1, an indication of an amount of bandwidth of that traffic flow that is sent from ingress PE device 111-1 to the egress peer link 122 of the egress BR 119-1 (e.g., for the traffic flow denoted using 1.1.1.0/24 the ingress PE device 111-1 sends 1 M of data to the egress peer link 122 of the egress BR 119-1 and for the traffic flow denoted using 2.2.1.0/24 the ingress PE device 111-1 sends 1.5 M of data to the egress peer link 122 of the egress BR 119-1). Similarly, in the example of FIG.
2, as indicated by the flow information of flow information table 202-2 associated with ingress PE device 111-2, the SDN controller 140 determines, for each of the traffic flows traversing the egress peer link 122 of the egress BR 119-1 that is sourced from the ingress PE device 111-2, an indication of an amount of bandwidth of that traffic flow that is sent from the ingress PE device 111-2 to the egress peer link 122 of the egress BR 119-1 (e.g., for the traffic flow denoted using 1.1.1.0/24 the ingress PE device 111-2 sends 1 M of data to the egress peer link 122 of the egress BR 119-1 and for the traffic flow denoted using 2.2.1.0/24 the ingress PE device 111-2 sends 0.5 M of data to the egress peer link 122 of the egress BR 119-1). Regarding claim 14, Brahim discloses wherein more than one node includes a node/link determination module and a route determination module (Fig. 6 discloses a computer which has a processor/memory and a cooperating element to perform the determination steps, and an I/O device to perform the transmitting/receiving functions, identify paths/links, etc.). Claims 5, 7, 10-11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ould Brahim et al. (US 2018/0167458 A1, hereinafter referred to as Brahim) in view of Chawla et al. (US 2015/0026313 A1) and further in view of Ellis et al. (US 2018/0006931 A1). Regarding claim 5, the combination of Brahim and Chawla does not disclose wherein determining the first route for the first set of data packets comprises determining the first route according to Dijkstra's algorithm. In an analogous art, Ellis discloses wherein determining the first route for the first set of data packets comprises determining the first route according to Dijkstra's algorithm (Paragraph 0185 discloses selecting the lowest-latency path using Dijkstra's algorithm. The OSPF routing policies for constructing a route table are governed by link metrics associated with each routing interface.
Cost factors may be the distance of a router (round-trip time), data throughput of a link, or link availability and reliability, which may be expressed as simple unitless numbers. This provides a dynamic process of traffic load balancing between routes of equal cost). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Ellis to the modified system of Brahim and Chawla to provide capabilities for supporting one or more network zones and associated provider services based on latency information (Paragraph 0005, Ellis). Regarding claim 19, the combination of Brahim and Chawla does not disclose wherein determining the first route for the first set of data packets through the network comprises determining the first route according to Dijkstra's algorithm using the latency as an input. In an analogous art, Ellis discloses wherein determining the first route for the first set of data packets through the network comprises determining the first route according to Dijkstra's algorithm using the latency as an input (Paragraph 0185 discloses selecting the lowest-latency path using Dijkstra's algorithm. The OSPF routing policies for constructing a route table are governed by link metrics associated with each routing interface. Cost factors may be the distance of a router (round-trip time), data throughput of a link, or link availability and reliability, which may be expressed as simple unitless numbers. This provides a dynamic process of traffic load balancing between routes of equal cost). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Ellis to the modified system of Brahim and Chawla to provide capabilities for supporting one or more network zones and associated provider services based on latency information (Paragraph 0005, Ellis).
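The Ellis citation relies on a conventional application of Dijkstra's algorithm with latency as the link metric (Paragraph 0185). A minimal sketch of that computation, with per-link latencies as edge weights; the toy topology, node names, and latency values below are invented for illustration and do not come from any of the cited references:

```python
import heapq

def dijkstra_lowest_latency(graph, src, dst):
    """Return (total latency, path) for the lowest-latency route.
    graph: {node: {neighbor: latency_ms}} (hypothetical shape)."""
    pq = [(0.0, src, [src])]  # priority queue ordered by cumulative latency
    seen = set()
    while pq:
        latency, node, path = heapq.heappop(pq)
        if node == dst:
            return latency, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, ms in graph[node].items():
            if nbr not in seen:
                heapq.heappush(pq, (latency + ms, nbr, path + [nbr]))
    return float("inf"), []

# Toy topology: the direct PE1->BR1 hop (10 ms) loses to the
# two-hop route through transit node P1 (2 ms + 3 ms = 5 ms).
net = {
    "PE1": {"BR1": 10.0, "P1": 2.0},
    "P1":  {"PE1": 2.0, "BR1": 3.0},
    "BR1": {"PE1": 10.0, "P1": 3.0},
}
latency, route = dijkstra_lowest_latency(net, "PE1", "BR1")
print(latency, route)  # 5.0 ['PE1', 'P1', 'BR1']
```

The point of the sketch is that "using the latency as an input" to Dijkstra's algorithm, as Ellis is cited for, is just the standard shortest-path computation with latency substituted for hop count as the edge weight.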
Regarding claims 7 and 10, the combination of Brahim and Chawla does not disclose wherein the first routing criterion is a shortest route criterion. In an analogous art, Ellis discloses wherein the first routing criterion is a shortest route criterion (Paragraph 0168 and Paragraphs 0184-0185). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Ellis to the modified system of Brahim and Chawla to provide capabilities for supporting one or more network zones and associated provider services based on latency information (Paragraph 0005, Ellis). Regarding claim 11, the combination of Brahim and Chawla does not disclose wherein the network performance metric is a latency measurement. In an analogous art, Ellis discloses wherein the network performance metric is a latency measurement (Paragraphs 0049, 0051, 0168, 0184-0185). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Ellis to the modified system of Brahim and Chawla to provide capabilities for supporting one or more network zones and associated provider services based on latency information (Paragraph 0005, Ellis). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Zhang et al. (US 2014/0185519 A1) discloses receiving, by a telecommunication device, a network packet from an application of the telecommunication device; selecting, by the telecommunication device, a network connectivity from a plurality of network connectivities based at least in part on user routing criteria and connectivity metrics, the plurality of network connectivities being respectively associated with a plurality of network operators; and transmitting, by the telecommunication device, the network packet using the selected network connectivity (Abstract).
Sivasankaran (US 2016/0191341 A1) discloses a network device located in a cloud routing services center that is separate from a customer network. The network device provides a user interface to solicit, from a customer device outside the cloud routing services center, structured routing criteria for virtual private network (VPN) routes over a Multiprotocol Label Switching (MPLS) network and receives, from the customer device, customer routing criteria selected from the structured routing criteria (Abstract). Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROMANI OHRI, whose telephone number is (571) 272-5420. The examiner can normally be reached 8:00am-5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, UN C CHO, can be reached on 571-272-7919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ROMANI OHRI/ Primary Examiner, Art Unit 2413

Prosecution Timeline

May 01, 2024
Application Filed
May 03, 2025
Non-Final Rejection — §103, §DP
Nov 10, 2025
Response Filed
Feb 18, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604331
METHOD AND APPARATUS FOR RESOURCE RESTRICTION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12574802
RECONFIGURABLE INTELLIGENT SURFACE (RIS) SCHEDULING
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12568505
METHODS AND SYSTEMS FOR DETERMINING DOWNLINK CONTROL INFORMATION IN WIRELESS NETWORKS
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12563424
METHOD AND DEVICE FOR DISCONTINUOUS WIRELESS COMMUNICATION
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12557133
SIDELINK RESOURCE INDICATIONS AND USAGE
Granted Feb 17, 2026 (2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
85%
Grant Probability
99%
With Interview (+17.0%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 445 resolved cases by this examiner. Grant probability derived from career allow rate.
