Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-20 are pending in the instant application.
Priority
The Examiner acknowledges Applicant’s claim to the benefit of priority of provisional application No. 63/281,262, filed 11/19/2021.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/31/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Double Patenting
A rejection based on double patenting of the "same invention" type finds its support in the language of 35 U.S.C. 101 which states that "whoever invents or discovers any new and useful process ... may obtain a patent therefor ..." (Emphasis added). Thus, the term "same invention," in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957); and In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970).
A statutory type (35 U.S.C. 101) double patenting rejection can be overcome by canceling or amending the conflicting claims so they are no longer coextensive in scope. The filing of a terminal disclaimer cannot overcome a double patenting rejection based upon 35 U.S.C. 101.
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 1, 2, 4, 6, 9, 10, 12, and 17-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 4, 6, 9, 10, 12, and 17-19 of U.S. Patent No. 12,206,573. Although the claims at issue are not identical, they are not patentably distinct from each other because the claim limitations of the instant application are broader than the corresponding claim limitations of the patent.
U.S. Patent No. 12,206,573
Instant Application
1. A computer-implemented method comprising:
provisioning, by a controller, multiple nodes of a network to conduct a path tracing session using probe packets, the multiple nodes including a source node, a midpoint node, and a sink node,
the provisioning including:
causing the source node to generate an individual probe packet to traverse an equal-cost multi-path (ECMP) path through the network, the individual probe packet having a timestamp, encapsulate, and forward (TEF) label and a header that includes an entropy value corresponding to the ECMP path, the ECMP path including the midpoint node,
causing the midpoint node to record path tracing information in the individual probe packet, and
causing the sink node to forward the individual probe packet to the controller in response to the TEF label after the individual probe packet has traversed the ECMP path;
analyzing the path tracing information in the individual probe packet to produce a mapping of the entropy value to the ECMP path; and
using the mapping to cause the source node to generate a subsequent probe packet to traverse the ECMP path through the network.
1. A computer-implemented method comprising:
provisioning, by a controller, multiple nodes of a network to conduct a path tracing session using probe packets, the multiple nodes including a source node, a midpoint node, and a sink node,
the provisioning including:
causing the source node to generate an individual probe packet to traverse an equal-cost multi-path (ECMP) path through the network, the individual probe packet having a path tracing indicator (PTI) and an entropy value, the PTI corresponding to the ECMP path, the ECMP path including the midpoint node,
causing the midpoint node to record path tracing information in the individual probe packet, and
causing the sink node to forward the individual probe packet to the controller in response to the PTI after the individual probe packet has traversed the ECMP path;
analyzing the path tracing information in the individual probe packet to produce a mapping of the entropy value to the ECMP path; and
using the mapping to cause the source node to generate a subsequent probe packet to traverse the ECMP path through the network.
Same scope
2. The computer-implemented method of claim 1, wherein the entropy value is included in an entropy label located in a multi-protocol label switching (MPLS) label stack in the header of the individual probe packet.
2. The computer-implemented method of claim 1, wherein the PTI is included in a structured entropy label (SEL) located in a header of the individual probe packet.
Same scope
4. The computer-implemented method of claim 1, further comprising: reducing a number of additional probe packets sent via the ECMP path by selecting the mapping of the entropy value to the ECMP path from a set of additional mappings that include additional entropy values mapped to the ECMP path; and sending the entropy value of the selected mapping to the source node for generation of the subsequent probe packet.
6. The computer-implemented method of claim 1, further comprising: reducing a number of additional probe packets sent via the ECMP path by selecting the mapping of the entropy value to the ECMP path from a set of additional mappings that include additional entropy values mapped to the ECMP path; and sending the entropy value of the selected mapping to the source node for generation of the subsequent probe packet.
Same scope
6. The computer-implemented method of claim 1, further comprising: causing the source node to generate a second individual probe packet to traverse the ECMP path, the second individual probe packet having a second TEF label, wherein the second individual probe packet is dropped at the midpoint node, becoming a dropped probe packet.
7. The computer-implemented method of claim 1, further comprising: causing the source node to generate a second individual probe packet to traverse the ECMP path, the second individual probe packet having a second PTI, wherein the second individual probe packet is dropped at the midpoint node, becoming a dropped probe packet.
Same scope
9. A computing device comprising: one or more processors; and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to: provision multiple nodes of a network to conduct a path tracing session using probe packets, the multiple nodes including a source node, a midpoint node, and a sink node, provisioning the multiple nodes including: causing the source node to generate an individual probe packet to traverse an equal-cost multi-path (ECMP) path through the network, the individual probe packet having a timestamp, encapsulate, and forward (TEF) label and a header that includes an entropy value corresponding to the ECMP path, causing the midpoint node to record path tracing information in the individual probe packet, and causing the sink node to forward the individual probe packet to the computing device in response to the TEF label after the individual probe packet has traversed the ECMP path; analyze the path tracing information in the individual probe packet to produce a mapping of the entropy value to the ECMP path; and use the mapping to cause the source node to generate a subsequent probe packet to traverse the ECMP path through the network.
9. A computing device comprising: one or more processors; and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to: provision multiple nodes of a network to conduct a path tracing session using probe packets, the multiple nodes including a source node, a midpoint node, and a sink node, provisioning the multiple nodes including: causing the source node to generate an individual probe packet to traverse an equal-cost multi-path (ECMP) path through the network, the individual probe packet having a path tracing indicator (PTI) and an entropy value, the PTI corresponding to the ECMP path, the ECMP path including the midpoint node, causing the midpoint node to record path tracing information in the individual probe packet, and causing the sink node to forward the individual probe packet to a controller in response to the PTI after the individual probe packet has traversed the ECMP path; analyzing the path tracing information in the individual probe packet to produce a mapping of the entropy value to the ECMP path; and using the mapping to cause the source node to generate a subsequent probe packet to traverse the ECMP path through the network.
Same scope
10. The computing device of claim 9, wherein the entropy value is included in an entropy label located after a multi-protocol label switching (MPLS) label stack in the header of the individual probe packet.
10. The computing device of claim 9, wherein the PTI is included in a structured entropy label (SEL) located in a header of the individual probe packet.
Same scope
12. The computing device of claim 9, wherein the computer-executable instructions further cause the one or more processors to: reduce a number of additional probe packets sent via the ECMP path by selecting the mapping of the entropy value to the ECMP path from a set of additional mappings that include additional entropy values mapped to the ECMP path; and send the entropy value of the selected mapping to the source node for generation of the subsequent probe packet.
14. The computing device of claim 9, wherein the computer-executable instructions further cause the one or more processors to: reduce a number of additional probe packets sent via the ECMP path by selecting the mapping of the entropy value to the ECMP path from a set of additional mappings that include additional entropy values mapped to the ECMP path; and send the entropy value of the selected mapping to the source node for generation of the subsequent probe packet.
Same scope
17. A method comprising:
causing a source node to generate a first probe packet to traverse a multi-protocol label switching (MPLS) network, the first probe packet including a first entropy value;
causing one or more midpoint nodes of the MPLS network to record path tracing information in the first probe packet responsive to a timestamp, encapsulate, and forward (TEF) label of the first probe packet;
receiving the first probe packet from a sink node after the first probe packet has traversed the MPLS network via at least one of the midpoint nodes;
analyzing the path tracing information to discover an equal-cost multi-path (ECMP) path that the first probe packet traversed across the MPLS network;
producing a first entropy-to-path mapping of the first entropy value to the ECMP path; and
using the first entropy-to-path mapping to monitor the ECMP path by causing the source node to produce a subsequent probe packet that includes the first entropy value.
17. A method comprising:
causing a source node to generate a first probe packet to traverse a multi-protocol label switching (MPLS) network, the first probe packet including a first entropy value;
causing one or more midpoint nodes of the MPLS network to record path tracing information in the first probe packet responsive to a path tracing indicator (PTI) of the first probe packet;
receiving the first probe packet from a sink node after the first probe packet has traversed the MPLS network via at least one of the midpoint nodes;
analyzing the path tracing information to discover an equal-cost multi-path (ECMP) path that the first probe packet traversed across the MPLS network;
producing a first entropy-to-path mapping of the first entropy value to the ECMP path; and
using the first entropy-to-path mapping to monitor the ECMP path by causing the source node to produce a subsequent probe packet that includes the first entropy value.
Same scope
18. The method of claim 17, further comprising: causing the source node to generate a second probe packet to traverse the MPLS network, the second probe packet including a second entropy value; receiving the second probe packet from the sink node after the second probe packet has traversed the MPLS network; and analyzing second path tracing information of the second probe packet to produce a second entropy-to-path mapping that includes the second entropy value.
18. The method of claim 17, further comprising: causing the source node to generate a second probe packet to traverse the MPLS network, the second probe packet including a second entropy value; receiving the second probe packet from the sink node after the second probe packet has traversed the MPLS network; and analyzing second path tracing information of the second probe packet to produce a second entropy-to-path mapping that includes the second entropy value.
Same scope
19. The method of claim 18, further comprising: determining that the first probe packet and the second probe packet traversed a same ECMP path across the MPLS network; and selecting one of the first entropy value from the first probe packet or the second entropy value from the second probe packet to provide to the source node for the subsequent probe packet.
19. The method of claim 18, further comprising: determining that the first probe packet and the second probe packet traversed a same ECMP path across the MPLS network; and selecting one of the first entropy value from the first probe packet or the second entropy value from the second probe packet to provide to the source node for the subsequent probe packet.
Same scope
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Pignataro et al. (U.S. Patent Application Publication No. 2018/0176134, hereinafter “Pignataro”) in view of Kumar et al. (U.S. Patent Application Publication No. 2018/0062990, hereinafter “Kumar”).
As per Claim 1, Pignataro discloses a computer-implemented method comprising:
provisioning, by a controller, multiple nodes of a network to conduct a path tracing session using probe packets, the multiple nodes including a source node, a midpoint node, and a sink node (Pignataro, Para.34, Routing process 244 may further use Equal-Cost Multi-Path (ECMP) routing to select which path a given traffic flow should take in the network. For example, service providers offering VPN services are expected to have multiple paths (e.g., ECMP paths) between ingress PE (iPE) routers and egress PE (ePE) routers that are commonly provisioned with VPN services. In such scenarios, any intermediate/transit node with multiple (e.g., ECMP) paths to an egress PE can use some selected information as input for hashing in order to decide the egress interface for packet forwarding. For example, this information can be either L3/L4 details from the packet, entropy labels, or 3/5/7-tuple entities, Para.69, MPLS/SR/SRv6 domain 400 that comprises a set of devices/nodes R1-R9, Para.68, model validator 314 may assess different probe packets sent in the network with different entropy values),
the provisioning including:
causing the source node to generate an individual probe packet to traverse an equal-cost multi-path (ECMP) path through the network (Pignataro, Para.34, Routing process 244 may further use Equal-Cost Multi-Path (ECMP) routing to select which path a given traffic flow should take in the network. For example, service providers offering VPN services are expected to have multiple paths (e.g., ECMP paths) between ingress PE (iPE) routers and egress PE (ePE) routers that are commonly provisioned with VPN services. In such scenarios, any intermediate/transit node with multiple (e.g., ECMP) paths to an egress PE can use some selected information as input for hashing in order to decide the egress interface for packet forwarding, Para.68, entropy path analysis process 248 may include a model validator 314 that receives probe data 318 from any number of different probing mechanisms (e.g., S-BFD, LSP Ping, etc.), to validate model 312. For example, model validator 314 may assess different probe packets sent in the network with different entropy values (and potentially adjusting the TTLs), to see if the probe packets flow over the paths predicted by model 312.), the individual probe packet having a path tracing indicator (PTI) and an entropy value, the PTI corresponding to the ECMP path, the ECMP path including the midpoint node (Pignataro, Para.48, information that can be captured in iOAM data 302 may include, but is not limited to, path tracing information (e.g., for ECMP networks, etc.), service/path verification, traffic matrix information, path metrics (e.g., delay, loss, jitter, etc.), entropy details, custom information (e.g., geo-locations, etc.), and the like. For example, a data packet may be appended to include a Node-ID field, an ingress interface field, an egress interface field, a proof of transit field, a sequence number field, a timestamp field, a custom data field, an entropy label, etc., that can be updated as the data packet is communicated through the network.),
causing the midpoint node to record path tracing information in the individual probe packet (Pignataro, Para.42, iOAM allows for the recording of the complete path traversed within the packet header itself. This is in contrast to other out-of-band approaches (e.g., LSP ping, etc.) that can be used to query the entropy details along the path, Para.34, routing process 244 may further use Equal-Cost Multi-Path (ECMP) routing to select which path a given traffic flow should take in the network. For example, service providers offering VPN services are expected to have multiple paths (e.g., ECMP paths) between ingress PE (iPE) routers and egress PE (ePE) routers that are commonly provisioned with VPN services. In such scenarios, any intermediate/transit node with multiple (e.g., ECMP) paths to an egress PE can use some selected information as input for hashing in order to decide the egress interface for packet forwarding.), and
analyzing the path tracing information in the individual probe packet to produce a mapping of the entropy value to the ECMP path (Pignataro, Para.66, entropy path analysis process 248 may include a path selector 316 that receives flow data 320 regarding a particular path in the network and use model 312 to predict the core link utilization for the flow (e.g., based on ECMP prediction from the derived topology graph), Para.82, the entropy topology model may map path selection predictions for the network paths with entropy values. In other words, based on the topology of the network itself and the received iOAM data (e.g., the entropy values and paths of the traffic flows), the device may train a model that maps path predictions and entropy values. Thus, for example, the model may predict the most likely path that a flow will take using a certain range of entropy values and/or determine the appropriate range of entropy values to cause the flow to likely flow over a specified path.); and
using the mapping to cause the source node to generate a subsequent probe packet to traverse the ECMP path through the network (Pignataro, Para.82, the entropy topology model may map path selection predictions for the network paths with entropy values. In other words, based on the topology of the network itself and the received iOAM data (e.g., the entropy values and paths of the traffic flows), the device may train a model that maps path predictions and entropy values, Para.83, the device may send an instruction that causes a computed entropy value to be inserted into the header of the particular traffic flow. In other words, to cause the flow to take the particular path, the device may use the entropy topology model to determine the entropy label that is most likely to cause the network to route the flow along the desired path. In turn, the device may send an instruction to a router in the network to adjust the entropy label of the flow (e.g., to relieve congestion in the network, to satisfy an SLA of the flow, etc.).);
However, Pignataro does not explicitly disclose causing the sink node to forward the individual probe packet to the controller in response to the PTI after the individual probe packet has traversed the ECMP path.
Kumar discloses causing the sink node to forward the individual probe packet to the controller in response to the PTI after the individual probe packet has traversed the ECMP path (Kumar, Para.17, as each switch along the forwarding path receives a probe packet, it not only forwards the packet as normal, but also sends the packet to the SDN controller 70, embedded with original and additional metadata (such as ingress interface, egress interface, etc.), through the SDN protocol (e.g. OpenFlow). At 86, as the SDN controller 70 receives probe packets from the devices along the forwarding path, the SDN controller 70 is able to detect and monitor the health of the paths.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pignataro with the teachings of Kumar. The motivation for doing so would have been to efficiently and quickly monitor all the paths in a data center network, particularly in very large scale data center networks. The number of paths can be very large, but network administrators still want to proactively monitor all of them. The efficient algorithm presented therein reduces the number of packets that need to be sent in the network while still covering all the paths, and the SDN controller learns about the ECMP hash and packet distribution and can readjust to efficiently cover all the ECMP paths throughout the network (Kumar, Para.47).
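The ECMP mechanism quoted above from Pignataro (Para.34) — a transit node hashing flow entropy to deterministically select one of several equal-cost egress interfaces — can be sketched as follows. This is an illustrative simplification only; the choice of SHA-256 and the interface names are assumptions for illustration, not details drawn from the reference:

```python
import hashlib

def select_egress(entropy_value: int, egress_interfaces: list) -> str:
    """Pick an egress interface by hashing the packet's entropy value."""
    digest = hashlib.sha256(entropy_value.to_bytes(4, "big")).digest()
    index = int.from_bytes(digest[:4], "big") % len(egress_interfaces)
    return egress_interfaces[index]

interfaces = ["eth0", "eth1", "eth2"]  # hypothetical ECMP next hops
# The same entropy value always hashes to the same egress interface,
# which is the property that lets a controller reuse a known entropy
# value to steer a subsequent probe packet down the same ECMP path.
assert select_egress(0x1234, interfaces) == select_egress(0x1234, interfaces)
```

The stability of the hash is what makes an entropy-to-path mapping reusable: once a given entropy value is observed to traverse a given path, that value can be placed in later probe packets to revisit the same path.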
Claim 9 is substantially similar to Claim 1 and is rejected in the same manner, the same art and reasoning applying.
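The "analyzing" and probe-reduction limitations discussed above (producing a mapping of entropy value to ECMP path, then selecting one mapping from a set of mappings to the same path) can be sketched as follows. This is an illustrative simplification; the probe results and node names are hypothetical, borrowed loosely from the path labels quoted from Pignataro, and are not claim language:

```python
def build_mapping(probe_results):
    """Build an entropy-to-path mapping from (entropy, path) probe results."""
    mapping = {}
    for entropy, path in probe_results:
        mapping[entropy] = path
    return mapping

def one_entropy_per_path(mapping):
    """Keep a single representative entropy value per distinct ECMP path,
    reducing the number of subsequent probe packets needed."""
    selected = {}
    for entropy, path in sorted(mapping.items()):
        if path not in selected.values():
            selected[entropy] = path
    return selected

results = [
    (0x01, ("R2", "R3", "R5", "R7", "R9")),
    (0x02, ("R2", "R3", "R5", "R7", "R9")),  # same path: redundant probe
    (0x03, ("R2", "R4", "R5", "R6", "R9")),
]
reduced = one_entropy_per_path(build_mapping(results))
assert len(reduced) == 2  # one entropy value retained per distinct path
```

In this sketch, two entropy values that traversed the same path collapse to one, so only one subsequent probe per path need be generated.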
As per Claim 2, Pignataro in view of Kumar discloses the computer-implemented method of claim 1, wherein the PTI is included in a structured entropy label (SEL) located in a header of the individual probe packet (Pignataro, Para.35, Entropy labels, for example, are “random” label values included in a header field (e.g., an IP header or a MPLS label stack) of a packet to aid ECMP based load-balancing (“flow entropy”).).
Claims 10 and 20 are substantially similar to Claim 2 and are rejected in the same manner, the same art and reasoning applying.
As per Claim 3, Pignataro in view of Kumar discloses the computer-implemented method of claim 2, wherein the PTI is in entropy label control bits of the SEL (Pignataro, Paras.71-74, The link R7-R9 has a capacity of 1 Gbps. There is a 300 Gbps flow 404 with header 406 that traverses the path R2-R3-R5-R6-R7-R9. There is a 500 Gbps flow 408 that traverses the path R1-R7-R9.).
Claim 11 is substantially similar to Claim 3 and is rejected in the same manner, the same art and reasoning applying.
As per Claim 4, Pignataro in view of Kumar discloses the computer-implemented method of claim 2, wherein the individual probe packet includes an entropy label indicator to indicate a presence of the SEL in the individual probe packet (Pignataro, Para.10, The iOAM data comprises entropy values for the plurality of traffic flows. The device receives network topology information indicative of network paths available in the network. The device generates a machine learning-based entropy topology model for the network based on the received iOAM data and the received network topology information. The entropy topology model maps path selection predictions for the network paths with entropy values. The device uses the entropy topology model to cause a particular traffic flow to use a particular network path, Para.36, iOAM allows for the collection of various flow characteristics (e.g., the complete path taken, etc.) by piggy-backing the data collection in the packet headers themselves of actual user traffic. This is in contrast to out-of-band approaches that may gather characteristics by introducing new packets into the network, such as probe packets, and is a complementary approach. In various embodiments, process 248 may use iOAM data with topology information regarding the network to form an entropy topology model that maps path selection predictabilities to entropy values. In other words, the generated model may be able to predict which path will be selected for a given flow in view of the characteristics of the flow.).
Claim 12 is substantially similar to Claim 4 and is rejected in the same manner, the same art and reasoning applying.
As per Claim 5, Pignataro in view of Kumar discloses the computer-implemented method of claim 4, wherein the SEL is positioned after the entropy label indicator in the header of the individual probe packet (Pignataro, Para.53, entropy topology model 312 may be configured to take as input flow information for a particular traffic flow (e.g., from the L2/L3/L4/L5 flow header, including entropy), and output a path predictability for each path in the network. For example, in view of the flow information for a particular flow, entropy topology model 312 may output percentages or numbers per path in the network that represent the likelihood of the traffic flow following that path, Para.70, Table 1 associates a flow (Src/Dst/VRF) and its entropy fields (Flow Label, Entropy Label, Source port, Destination port, Extension Header, etc.) with ingress R2, egress R9, and path details P1 = {R2-R3-R5-R7-R9}; P2 = {R2-R3-R5-R6-R8-R9}; P3 = {R2-R4-R5-R6-R7-R9}; etc., Para.48, a data packet may be appended to include a Node-ID field, an ingress interface field, an egress interface field, a proof of transit field, a sequence number field, a timestamp field, a custom data field, an entropy label, etc., that can be updated as the data packet is communicated through the network.).
Claim 13 is substantially similar to Claim 5 and is rejected in the same manner, the same art and reasoning applying.
As per Claim 7, Pignataro in view of Kumar discloses the computer-implemented method of claim 1, further comprising: causing the source node to generate a second individual probe packet to traverse the ECMP path, the second individual probe packet having a second PTI, wherein the second individual probe packet is dropped at the midpoint node, becoming a dropped probe packet (Pignataro, Paras.61-63, where the path availability is impacted or the path experiences packet drops, etc., machine learning process 310 may also decrement the score by a certain number or percentage. For example, the score may be decremented as follows: Score (−) = Path Packet Drops (50%) + Path Availability per 24 hrs (50%); Path availability = 0 in case of 100% available, 10 in case of 99% available, etc.; Path Packet drops = 0 in case of 0 drops, 10% in case of 100 packets dropped, etc., Para.68, entropy path analysis process 248 may include a model validator 314 that receives probe data 318 from any number of different probing mechanisms (e.g., S-BFD, LSP Ping, etc.), to validate model 312. For example, model validator 314 may assess different probe packets sent in the network with different entropy values (and potentially adjusting the TTLs), to see if the probe packets flow over the paths predicted by model 312. In some embodiments, this determination may be used as additional input to machine learning process 310, to further refine entropy topology model 312.).
Claim 15 is substantially similar to Claim 7 and is rejected in the same manner, the same art and reasoning applying.
As per Claim 8, Pignataro in view of Kumar discloses the computer-implemented method of claim 1, wherein the PTI is configured to trigger path tracing behavior at the midpoint node (Pignataro, Para.54, The path graph library 306 may be constructed to have numerous fields from iOAM data 302 and more (L2/L3/L4/L5 header info, ingress/egress nodes, etc.), to allow for increased granularity for machine learning process 310, Para.69, FIGS. 4A-4D illustrate the use of an entropy topology model to affect traffic flows, in accordance with various embodiments. As shown, consider an example MPLS/SR/SRv6 domain 400 that comprises a set of devices/nodes R1-R9. For simplicity, also assume that a machine learning (ML) agent (e.g., another device 200) is present in the network and implements the techniques described previously.).
Claim 16 is substantially similar to Claim 8 and is rejected in the same manner, the same art and reasoning applying.
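The midpoint behavior recited in claims 1, 8, and 17 — a node appending path tracing information to a probe packet only when the packet carries a path tracing indicator — can be sketched as follows. This is an illustrative simplification; the packet representation and field names are assumptions for illustration and do not appear in either cited reference:

```python
import time

def midpoint_process(packet: dict, node_id: str) -> dict:
    """Record node ID and a timestamp in the probe only if it carries a PTI."""
    if packet.get("pti"):
        packet.setdefault("trace", []).append(
            {"node": node_id, "ts_ns": time.time_ns()}
        )
    return packet

probe = {"pti": True, "entropy": 0x1234, "trace": []}
for node in ("R3", "R5", "R7"):  # hypothetical midpoint nodes on the path
    midpoint_process(probe, node)
assert [hop["node"] for hop in probe["trace"]] == ["R3", "R5", "R7"]
```

Ordinary packets without the indicator pass through unchanged, while each midpoint on the ECMP path adds one hop record, so the sink node receives the full traversed path inside the probe itself.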
As per Claim 17, Pignataro discloses a method comprising:
causing a source node to generate a first probe packet to traverse a multi-protocol label switching (MPLS) network, the first probe packet including a first entropy value (Pignataro, Para.13, routers 110, 120 may be interconnected by the public Internet, a multiprotocol label switching (MPLS) virtual private network (VPN), Para.35, Entropy labels, for example, are “random” label values included in a header field (e.g., an IP header or a MPLS label stack) of a packet to aid ECMP based load-balancing (“flow entropy”). …Entropy labels solve this problem by giving the source router the ability to “tag” different flows with different entropy label values, resulting in different headers/label stacks for different flows and better ECMP load-balancing, Para.45, The device generates a machine learning-based entropy topology model for the network based on the received iOAM data and the received network topology information. The entropy topology model maps path selection predictions for the network paths with entropy values. The device uses the entropy topology model to cause a particular traffic flow to use a particular network path.);
causing one or more midpoint nodes of the MPLS network to record path tracing information in the first probe packet responsive to a path tracing indicator (PTI) of the first probe packet (Pignataro, Para.42, iOAM allows for the recording of the complete path traversed within the packet header itself. This is in contrast to other out-of-band approaches (e.g., LSP ping, etc.) that can be used to query the entropy details along the path, Para.34, routing process 244 may further use Equal-Cost Multi-Path (ECMP) routing to select which path a given traffic flow should take in the network. For example, service providers offering VPN services are expected to have multiple paths (e.g., ECMP paths) between ingress PE (iPE) routers and egress PE (ePE) routers that are commonly provisioned with VPN services. In such scenarios, any intermediate/transit node with multiple (e.g., ECMP) paths to an egress PE can use some selected information as input for hashing in order to decide the egress interface for packet forwarding.);
analyzing the path tracing information to discover an equal-cost multi-path (ECMP) path that the first probe packet traversed across the MPLS network (Pignataro, Para.35, Entropy labels, for example, are “random” label values included in a header field (e.g., an IP header or a MPLS label stack) of a packet to aid ECMP based load-balancing (“flow entropy”)., Para.66, entropy path analysis process 248 may include a path selector 316 that receives flow data 320 regarding a particular path in the network and use model 312 to predict the core link utilization for the flow (e.g., based on ECMP prediction from the derived topology graph), Para.82, the entropy topology model may map path selection predictions for the network paths with entropy values. In other words, based on the topology of the network itself and the received iOAM data (e.g., the entropy values and paths of the traffic flows), the device may train a model that maps path predictions and entropy values. Thus, for example, the model may predict the most likely path that a flow will take using a certain range of entropy values and/or determine the appropriate range of entropy values to cause the flow to likely flow over a specified path.);
producing a first entropy-to-path mapping of the first entropy value to the ECMP path (Pignataro, Para.35, Entropy labels solve this problem by giving the source router the ability to “tag” different flows with different entropy label values, resulting in different headers/label stacks for different flows and better ECMP load-balancing.); and
using the first entropy-to-path mapping to monitor the ECMP path by causing the source node to produce a subsequent probe packet that includes the first entropy value (Pignataro, Para.82, the entropy topology model may map path selection predictions for the network paths with entropy values. In other words, based on the topology of the network itself and the received iOAM data (e.g., the entropy values and paths of the traffic flows), the device may train a model that maps path predictions and entropy values, Para.83, the device may send an instruction that causes a computed entropy value to be inserted into the header of the particular traffic flow. In other words, to cause the flow to take the particular path, the device may use the entropy topology model to determine the entropy label that is most likely to cause the network to route the flow along the desired path. In turn, the device may send an instruction to a router in the network to adjust the entropy label of the flow (e.g., to relieve congestion in the network, to satisfy an SLA of the flow, etc.).);
However, Pignataro does not explicitly disclose receiving the first probe packet from a sink node after the first probe packet has traversed the MPLS network via at least one of the midpoint nodes.
Kumar discloses receiving the first probe packet from a sink node after the first probe packet has traversed the MPLS network via at least one of the midpoint nodes (Kumar, Para.17, as each switch along the forwarding path receives a probe packet, it not only forwards the packet as normal, but also sends the packet to the SDN controller 70, embedded with original and additional metadata (such as ingress interface, egress interface, etc.), through the SDN protocol (e.g. OpenFlow). At 86, as the SDN controller 70 receives probe packets from the devices along the forwarding path, the SDN controller 70 is able to detect and monitor the health of the paths.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pignataro with the teachings of Kumar. The motivation for doing so would have been to efficiently and quickly monitor all of the paths in a data center network, particularly in very large-scale data center networks. The number of paths can be very large, but network administrators still want to proactively monitor all of the paths. The efficient algorithm presented herein reduces the number of packets that need to be sent in the network while still covering all of the paths. The SDN controller learns about the ECMP hash and packet distribution and can readjust to efficiently cover all of the ECMP paths throughout the network. (Kumar, Para.47).
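The claimed flow of Claim 17 (sweep entropy values, let midpoints record the traversed path, then build an entropy-to-path mapping) can be illustrated with a minimal sketch. The topology, node names, and hash function below are purely hypothetical and are not drawn from the record; real ECMP hashing is implementation-specific.

```python
import hashlib

# Hypothetical domain with one 2-way ECMP split at the source.
# (Node names R1-R4 and the topology are illustrative only.)
TOPOLOGY = {
    "R1": ["R2", "R3"],   # source node's ECMP next hops
    "R2": ["R4"],
    "R3": ["R4"],
    "R4": [],             # sink node
}

def next_hop(node, entropy):
    """Pick one of a node's ECMP next hops by hashing the entropy value."""
    hops = TOPOLOGY[node]
    h = int(hashlib.sha256(f"{node}:{entropy}".encode()).hexdigest(), 16)
    return hops[h % len(hops)]

def trace(entropy, source="R1", sink="R4"):
    """Forward a probe and record each hop, as midpoints would via a PTI."""
    path, node = [source], source
    while node != sink:
        node = next_hop(node, entropy)
        path.append(node)
    return tuple(path)

# Sweep entropy values in probe packets and build the entropy-to-path mapping.
mapping = {}
for entropy in range(16):
    mapping.setdefault(trace(entropy), []).append(entropy)

for path, entropies in mapping.items():
    print(" -> ".join(path), entropies)
```

Any entropy value recorded for a path can then be reused in a subsequent probe packet to deterministically monitor that same ECMP path, which is the monitoring step recited at the end of Claim 17.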
As per Claim 18, Pignataro in view of Kumar discloses the method of claim 17, further comprising: causing the source node to generate a second probe packet to traverse the MPLS network, the second probe packet including a second entropy value (Pignataro, Para.13, routers 110, 120 may be interconnected by the public Internet, a multiprotocol label switching (MPLS) virtual private network (VPN), Para.35, Entropy labels, for example, are “random” label values included in a header field (e.g., an IP header or a MPLS label stack) of a packet to aid ECMP based load-balancing (“flow entropy”). …Entropy labels solve this problem by giving the source router the ability to “tag” different flows with different entropy label values, resulting in different headers/label stacks for different flows and better ECMP load-balancing, Para.45, The device generates a machine learning-based entropy topology model for the network based on the received iOAM data and the received network topology information. The entropy topology model maps path selection predictions for the network paths with entropy values. The device uses the entropy topology model to cause a particular traffic flow to use a particular network path.);
and analyzing second path tracing information of the second probe packet to produce a second entropy-to-path mapping that includes the second entropy value (Pignataro, Para.35, Entropy labels, for example, are “random” label values included in a header field (e.g., an IP header or a MPLS label stack) of a packet to aid ECMP based load-balancing (“flow entropy”)., Para.66, entropy path analysis process 248 may include a path selector 316 that receives flow data 320 regarding a particular path in the network and use model 312 to predict the core link utilization for the flow (e.g., based on ECMP prediction from the derived topology graph), Para.82, the entropy topology model may map path selection predictions for the network paths with entropy values. In other words, based on the topology of the network itself and the received iOAM data (e.g., the entropy values and paths of the traffic flows), the device may train a model that maps path predictions and entropy values. Thus, for example, the model may predict the most likely path that a flow will take using a certain range of entropy values and/or determine the appropriate range of entropy values to cause the flow to likely flow over a specified path.);
However, Pignataro does not explicitly disclose receiving the second probe packet from the sink node after the second probe packet has traversed the MPLS network.
Kumar discloses receiving the second probe packet from the sink node after the second probe packet has traversed the MPLS network (Kumar, Para.17, as each switch along the forwarding path receives a probe packet, it not only forwards the packet as normal, but also sends the packet to the SDN controller 70, embedded with original and additional metadata (such as ingress interface, egress interface, etc.), through the SDN protocol (e.g. OpenFlow). At 86, as the SDN controller 70 receives probe packets from the devices along the forwarding path, the SDN controller 70 is able to detect and monitor the health of the paths.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pignataro with the teachings of Kumar. The motivation for doing so would have been to efficiently and quickly monitor all of the paths in a data center network, particularly in very large-scale data center networks. The number of paths can be very large, but network administrators still want to proactively monitor all of the paths. The efficient algorithm presented herein reduces the number of packets that need to be sent in the network while still covering all of the paths. The SDN controller learns about the ECMP hash and packet distribution and can readjust to efficiently cover all of the ECMP paths throughout the network. (Kumar, Para.47).
As per Claim 19, Pignataro in view of Kumar discloses the method of claim 18, further comprising: determining that the first probe packet and the second probe packet traversed a same ECMP path across the MPLS network; and selecting one of the first entropy value from the first probe packet or the second entropy value from the second probe packet to provide to the source node for the subsequent probe packet (Pignataro, Para.35, When multiple flows have the same forwarding information this means they cannot be effectively load-balanced. Entropy labels solve this problem by giving the source router the ability to “tag” different flows with different entropy label values, resulting in different headers/label stacks for different flows and better ECMP load-balancing.).
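The selection step recited in Claim 19 (and the probe-reduction step of Claim 6) amounts to keeping a single entropy value per distinct ECMP path. A minimal sketch follows; the paths and entropy values shown are made up for illustration and do not come from the cited references.

```python
# Several entropy values may map to the same discovered ECMP path;
# ongoing monitoring needs only one probe per distinct path.
# (Entropy values 101/202/303 and paths are hypothetical.)
entropy_to_path = {
    101: ("R1", "R2", "R4"),
    202: ("R1", "R3", "R4"),
    303: ("R1", "R2", "R4"),   # same path as discovered with entropy 101
}

def select_monitoring_entropies(entropy_to_path):
    """Keep the first entropy value seen for each distinct ECMP path."""
    selected = {}
    for entropy, path in sorted(entropy_to_path.items()):
        selected.setdefault(path, entropy)
    return selected

selected = select_monitoring_entropies(entropy_to_path)
# Two distinct paths, so two subsequent probes suffice instead of three.
print(selected)
```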
Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Pignataro et al., “hereinafter Pignataro” (U.S. Patent Application: 20180176134), in view of Kumar et al., “hereinafter Kumar” (U.S. Patent Application: 20180062990), and further in view of Frost (U.S. Patent Application: 20150003255).
As per Claim 6, Pignataro in view of Kumar discloses the computer-implemented method of claim 1.
However, Pignataro in view of Kumar does not explicitly disclose reducing a number of additional probe packets sent via the ECMP path by selecting the mapping of the entropy value to the ECMP path from a set of additional mappings that include additional entropy values mapped to the ECMP path; and sending the entropy value of the selected mapping to the source node for generation of the subsequent probe packet.
Frost discloses reducing a number of additional probe packets sent via the ECMP path by selecting the mapping of the entropy value to the ECMP path from a set of additional mappings that include additional entropy values mapped to the ECMP path; and sending the entropy value of the selected mapping to the source node for generation of the subsequent probe packet (Frost, Para.39, Until an entropy label value has been determined for each ECMP path between the particular source network node and the destination network node, process blocks 408-416 are repeated. In process block 408, an ECMP path-taken probe packet is constructed that includes the entropy label chosen in process block 406 (for the first iteration through process blocks 408-416) or 416 (for subsequent iterations). In one embodiment, the Time-to-Live (TTL) value (e.g., in a label) of the ECMP probe packet is set to one (1) to cause the next receiving network node (e.g., packet switching device) to look at the received packet, inspecting the packet to identify that it is in fact a probe packet, and then to correspondingly process the probe packet (e.g., add a path identifier to a list of path identifiers).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pignataro in view of Kumar with the teachings of Frost. The motivation for doing so would have been to force packets to follow specific ECMP paths. (Frost, Para.40).
With respect to Claim 14, it is substantially similar to Claim 6 and is rejected in the same manner, the same art and reasoning applying.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NORMIN ABEDIN whose telephone number is (571)270-5970. The examiner can normally be reached Monday to Friday from 10 am to 6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivek Srivastava, can be reached at 571-272-7304. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NORMIN ABEDIN/Primary Examiner, Art Unit 2449