DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to remarks filed 08/19/2025.
Claims 1-20 are pending and presented for examination. Claims 1, 9, and 17 are amended.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2025 has been entered.
Response to Amendment
Examiner acknowledges receipt of applicant’s response under 37 CFR 1.121, filed 12/17/2025, to the Notice of Non-Compliant Amendment dated 10/31/2025. Examiner has considered the remarks and accepted the response. The amendment and remarks, filed 08/19/2025, are entered.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/20/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1-4, 6, 7, 9-12, 14, 15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lo et al. (US 20210092092 A1, hereinafter “Lo”), in view of Jokela (P. Jokela, H. Mahkonen, C. E. Rothenberg and J. Ott, "(Deployable) reduction of multicast state with in-packet bloom filters," 2013 IFIP Networking Conference, Brooklyn, NY, USA, 2013, pp. 1-9, hereinafter “Jokela”), and further in view of Levy-Abegnoli et al. (US 9608863 B2, hereinafter “Levy-Abegnoli”).
RE Claim 1: Lo discloses:
A network switch (¶0022, ¶0030, Fig. 2) comprising:
a processor (¶0057, Fig. 7);
a non-transitory computer-readable storage medium storing instructions, which when executed by the processor causes the processor (¶0057, Fig. 7) to:
receive a join request, for a multicast group indicated by an overlay multicast address, from a remote network switch (Network device may receive a join request to join the overlay multicast group A which includes other network devices, network switches. ¶0037, Fig. 4, Fig. 6), wherein the network switch is coupled to a source host device (multicast source, 305, is connected to multicast network device, the network switch. ¶0026, Fig. 3), and the remote network switch is coupled to a receiver host device of the multicast group (multicast receiver, 316, is connected to network device, 335, the remote network switch. ¶0026, Fig. 3), and wherein the network switch and the remote network switch are configured as virtual endpoints in an overlay network deployed over an underlay network (The overlay network may be a virtual network built upon the underlying network. ¶0029. Network devices, network switches, may be an edge switch, endpoints, connected to multicast sources and receivers to the underlay network. ¶0035);
map the overlay multicast address to an underlay multicast address (mapping of overlay multicast groups, addresses, to underlay multicast groups. ¶0016. Fig. 6), wherein the remote network switch joins the multicast group represented by the underlay multicast address (network device, remote switch, uses the underlay multicast group, addresses, at a join request. ¶0057. Fig. 6), wherein to map the overlay multicast address, the processor is to:
wherein the IP address field comprises most significant bits (MSBs) which correspond to a prefix of the IP address field based on a network layer protocol used in the underlay network (A portion of the underlay multicast group address is based on a VRF prefix associated with an overlay multicast group. One or more octets (e.g., portions, fields, etc.) of the VRF prefix may be used in a portion of the underlay multicast group address. The network device may use and/or apply a function to portions (e.g., one or more bits, one or more octets, etc.) of the VRF prefix. ¶¶0042, 0047, Fig. 5; An example of multicast addresses for overlay, underlay, prefix and suffix according to some embodiments. ¶0048, Fig. 5; Overlay multicast group address of 224.2.1.1, a VRF prefix of 225.0.0.0/8, and a VRF suffix of 0.0.0.0/0. The VRF prefix 225.0.0.0/8 may indicate that the first 8 bits of the VRF prefix 225.0.0.0/8 should be used in the underlay multicast group address. ¶¶0052, 0047, Fig. 5: 520),
wherein a first portion of the MSBs which correspond to the prefix comprise fixed-value bits set to a non-zero value based on the network layer protocol used in the underlay network (An example of multicast addresses for overlay, underlay, prefix and suffix according to some embodiments. ¶0048, Fig. 5; Overlay multicast group address of 224.2.1.1, a VRF prefix of 225.0.0.0/8, and a VRF suffix of 0.0.0.0/0. The VRF prefix 225.0.0.0/8 may indicate that the first 8 bits of the VRF prefix 225.0.0.0/8 should be used in the underlay multicast group address. ¶¶0052, 0047, Fig. 5: 520; The underlay prefix, a first portion of MSBs of the address, is 225 decimal, which is equal to 1110 0001 binary, and therefore a portion of the bits is set to non-zero values.), and
wherein a second portion of the MSBs which correspond to the prefix comprise variable-value bits initially set to zero (A portion of the underlay multicast group address is based on a VRF prefix associated with an overlay multicast group. One or more octets (e.g., portions, fields, etc.) of the VRF prefix may be used in a portion of the underlay multicast group address. The network device may use and/or apply a function to portions (e.g., one or more bits, one or more octets, etc.) of the VRF prefix. ¶¶0042, 0047, Fig. 5; An example of multicast addresses for overlay, underlay, prefix and suffix according to some embodiments. ¶0048, Fig. 5; The VRF prefix 225.0.0.0/8 may indicate that the first 8 bits of the VRF prefix 225.0.0.0/8 should be used in the underlay multicast group address. ¶¶0052, 0047, Fig. 5: 520; The underlay prefix, a first portion of MSBs of the address, is 225 decimal, which is equal to 1110 0001 binary, and therefore a portion of the bits is set to zero values.);
receive multicast traffic for the multicast group from the source host device (Multicast source, 305, sends packets to network device, 325, connected to the underlay network. ¶0026, Fig. 4);
encapsulate the multicast traffic with a destination address identical to the underlay multicast address (network device encapsulates multicast packets, encapsulated packet includes the underlay multicast group address which is generated based on associated overlay multicast group address. ¶0044); and
forward the multicast traffic to the multicast group via the underlay network based on the destination address, such that the receiver host device receives the multicast traffic via the remote network switch (multicast source transmits data to multicast receiver using underlay multicast group associated with an overlay multicast group. Network devices 325 and 335 connect underlay to source/receiver, remote network switch. ¶0036).
However, Lo does not explicitly disclose:
wherein to map the overlay multicast address, the processor is to:
obtain a set of hash values by performing a set of hash functions on a representation of the overlay multicast address,
the representation comprising an address associated with the source host device and the overlay multicast address,
wherein a respective hash value corresponds to a variable-value bit position in an IP address field of the underlay multicast address; and
set, for a respective bit position corresponding to the respective hash value, a value of a bit at the respective bit position to one;
However, Jokela discloses:
wherein to map the overlay multicast address, the processor is to:
obtain a set of hash values by performing a set of hash functions on a representation of the overlay multicast address (The source edge node is responsible for mapping between each IP multicast group (S,G), a representation of the overlay multicast address, and a corresponding in-packet Bloom filter. Pg. 4, Col. 1, Para. 2; A Bloom filter is an m-bit long bit string where all bits are initially set to zero. Inserting data, overlay multicast address, into the filter starts by hashing the data with k different hash functions. The result is k index values in the range [0..m-1]. Each of the indexed k bits is set to one in the Bloom filter. Pg. 2, Section B, 1) Bloom Filter; When source edge node, SE, receives a join message from a destination edge node, DE, it creates a mapping from the DE identifier to the path iBF, in-packet Bloom filter, leading from SE to DE. Pg. 5, Col. 1, Para. 2),
the representation comprising an address associated with the source host device and the overlay multicast address (Source Specific Multicast, SSM, is created from the IP address of the source node, S, and a Group ID, G, where G identifies the group at the source node, a representation of the overlay multicast address. The multicast group is denoted (S,G) as a globally unique identifier of the group, a representation of the source host and the overlay multicast group. Pg. 2, 3rd para.; The source edge node is responsible for mapping between each IP multicast group (S,G) and a corresponding in-packet Bloom filter. Pg. 4, Col. 1, Para. 2;), and
set, for a respective bit position corresponding to the respective hash value, a value of a bit at the respective bit position to one (Data is input to a Bloom filter which starts by hashing the data with k different hash functions. The result is k index values in the range [0..m-1]. Each of the indexed k bits is set to one in the Bloom filter. Pg. 2, Section B, 1) Bloom Filter;);
Lo and Jokela do not explicitly disclose:
wherein to map the overlay multicast address, the processor is to:
wherein a respective hash value corresponds to a variable-value bit position in an IP address field of the underlay multicast address;
However, Levy-Abegnoli discloses:
wherein to map the overlay multicast address, the processor is to:
wherein a respective hash value corresponds to a variable-value bit position in an IP address field of the underlay multicast address (Bloom filter used to configure a candidate device address as a unique address value. ¶0016; Each network device autoconfigures an IPv6 address that is unique at least within a link layer domain, such as the underlay multicast network, of the allocating network device. ¶0018; Allocation of Bloom filter bit positions for address computation. ¶0026, Fig. 4; One or more hash functions used for mapping a candidate address to corresponding one or more bits in the N-bit Bloom filter bit vector. ¶0033, Fig. 5);
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lo, mapping with encapsulation and forwarding of multicast traffic by an underlay address based on a method to set the prefixes and suffixes of an address, with the teachings of Jokela, use of multiple hash functions on multicast addresses by Bloom filter techniques, and the teachings of Levy-Abegnoli, mapping the output of a Bloom filter technique to create a unique address value for routing multicast traffic.
The motivation in doing so would be to provide a method and apparatus to update address mappings of a multicast network with overlay and underlay networks. Updating addresses by use of a Bloom filter technique, which reduces hash collisions and provides a unique address, allows the network to update mapping routes as different traffic arises without manual intervention to remap routing nodes, thereby improving efficiency of the multicast network.
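For illustration, the mapping described across the combination of Lo (fixed VRF prefix with variable-value bits initially zero), Jokela (a set of k hash functions setting indexed bits to one), and Levy-Abegnoli (a hash-derived unique address) can be sketched as follows. This is a minimal approximation, not the method of any single reference: the 225.0.0.0/8 fixed prefix, the 24 variable-value bit positions, the use of three salted SHA-256 digests as the set of hash functions, and the function and variable names are all assumptions for demonstration.

```python
import hashlib
import ipaddress

def map_overlay_to_underlay(source_ip: str, overlay_group: str,
                            num_hashes: int = 3) -> str:
    # Fixed-value MSBs: first octet 225 (binary 1110 0001) per the
    # assumed 225.0.0.0/8 prefix; the remaining 24 variable-value
    # bit positions are initially set to zero.
    addr = 225 << 24
    variable_bits = 24
    # Representation of the overlay group: the source host address
    # together with the overlay group address, i.e. (S, G).
    representation = f"{source_ip},{overlay_group}".encode()
    for k in range(num_hashes):
        # A distinct salt byte per iteration stands in for k
        # independent hash functions.
        digest = hashlib.sha256(bytes([k]) + representation).digest()
        position = int.from_bytes(digest[:4], "big") % variable_bits
        addr |= 1 << position  # set the bit at the hashed position to one
    return str(ipaddress.IPv4Address(addr))
```

Because every virtual endpoint applies the same deterministic hash functions to the same (S, G) representation, each endpoint independently derives the identical underlay multicast group address without exchanging mapping state.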
RE Claim 2: Lo discloses:
The network switch, wherein the overlay multicast address includes at least one of an Internet Protocol (IP) address of the source host device and an IP address of the multicast group in an overlay address space (example of an overlay multicast group IPv4 address of 224.1.1.1. ¶0049, Fig. 5).
RE Claim 3: Lo discloses:
The network switch, wherein the underlay multicast address includes an IP address of the multicast group in an underlay address space (example of an underlay multicast group address, 225.1.1.1 as a function of overlay multicast group address, 224.1.1.1. ¶0049, Fig. 5).
RE Claim 4: Lo discloses:
The network switch, wherein the set of hash functions is implemented in each virtual endpoint in the overlay network (each network device generates the same underlay multicast group address. ¶0016. Network device applies function to a portion of overlay multicast group address to determine the portion of the underlay multicast address. The function may be a hash function on one or more bits, octets. ¶0041. The overlay network may be a virtual network built upon the underlying network. ¶0029. Network devices, network switches, may be an edge switch, endpoints, connected to multicast sources and receivers to the underlay network. ¶0035).
RE Claim 6: Lo discloses:
The network switch, wherein the underlay network comprises switches storing a multicast forwarding table comprising the map of the overlay multicast address to the underlay multicast address (Network device may be provided a table and rules with portions of overlay multicast address. The device determines corresponding underlay multicast address. Table and rules are shared to each network device to determine the same corresponding underlay multicast group address. ¶0047).
RE Claim 7: Lo discloses:
The network switch, wherein the join request originates from the remote network switch in response to the remote network switch receiving a group membership request for the multicast group from the receiver host device (Network device, switch, receives a join request from a multicast receiver, a receiver host device, to join an overlay multicast group, group membership. ¶0037).
RE Claim 9: Lo discloses:
A method comprising:
receiving a join request, for a multicast group indicated by an overlay multicast address, from a remote network switch (Network device may receive a join request to join the overlay multicast group A which includes other network devices, network switches. ¶0037, Fig. 4, Fig. 6), wherein the network switch is coupled to a source host device (multicast source, 305, is connected to multicast network device, the network switch. ¶0026, Fig. 3) and the remote network switch is coupled to a receiver host device of the multicast group (multicast receiver, 316, is connected to network device, 335, the remote network switch. ¶0026, Fig. 3), and wherein the network switch and the remote network switch are configured as virtual endpoints in an overlay network deployed over an underlay network (The overlay network may be a virtual network built upon the underlying network. ¶0029. Network devices, network switches, may be an edge switch, endpoints, connected to multicast sources and receivers to the underlay network. ¶0035);
mapping the overlay multicast address to an underlay multicast address (mapping of overlay multicast groups, addresses, to underlay multicast groups. ¶0016. Fig. 6), wherein the remote network switch joins the multicast group represented by the underlay multicast address (network device, remote switch, uses the underlay multicast group, addresses, at a join request. ¶0057. Fig. 6), wherein the mapping comprises:
wherein the IP address field comprises most significant bits (MSBs) which correspond to a prefix of the IP address field based on a network layer protocol used in the underlay network (A portion of the underlay multicast group address is based on a VRF prefix associated with an overlay multicast group. One or more octets (e.g., portions, fields, etc.) of the VRF prefix may be used in a portion of the underlay multicast group address. The network device may use and/or apply a function to portions (e.g., one or more bits, one or more octets, etc.) of the VRF prefix. ¶¶0042, 0047, Fig. 5; An example of multicast addresses for overlay, underlay, prefix and suffix according to some embodiments. ¶0048, Fig. 5; Overlay multicast group address of 224.2.1.1, a VRF prefix of 225.0.0.0/8, and a VRF suffix of 0.0.0.0/0. The VRF prefix 225.0.0.0/8 may indicate that the first 8 bits of the VRF prefix 225.0.0.0/8 should be used in the underlay multicast group address. ¶¶0052, 0047, Fig. 5: 520),
wherein a first portion of the MSBs which correspond to the prefix comprise fixed-value bits set to a non-zero value based on the network layer protocol used in the underlay network (An example of multicast addresses for overlay, underlay, prefix and suffix according to some embodiments. ¶0048, Fig. 5; Overlay multicast group address of 224.2.1.1, a VRF prefix of 225.0.0.0/8, and a VRF suffix of 0.0.0.0/0. The VRF prefix 225.0.0.0/8 may indicate that the first 8 bits of the VRF prefix 225.0.0.0/8 should be used in the underlay multicast group address. ¶¶0052, 0047, Fig. 5: 520; The underlay prefix, a first portion of MSBs of the address, is 225 decimal, which is equal to 1110 0001 binary, and therefore a portion of the bits is set to non-zero values.), and
wherein a second portion of the MSBs which correspond to the prefix comprise variable-value bits initially set to zero (A portion of the underlay multicast group address is based on a VRF prefix associated with an overlay multicast group. One or more octets (e.g., portions, fields, etc.) of the VRF prefix may be used in a portion of the underlay multicast group address. The network device may use and/or apply a function to portions (e.g., one or more bits, one or more octets, etc.) of the VRF prefix. ¶¶0042, 0047, Fig. 5; An example of multicast addresses for overlay, underlay, prefix and suffix according to some embodiments. ¶0048, Fig. 5; The VRF prefix 225.0.0.0/8 may indicate that the first 8 bits of the VRF prefix 225.0.0.0/8 should be used in the underlay multicast group address. ¶¶0052, 0047, Fig. 5: 520; The underlay prefix, a first portion of MSBs of the address, is 225 decimal, which is equal to 1110 0001 binary, and therefore a portion of the bits is set to zero values.);
receiving multicast traffic for the multicast group from the source host device (Multicast source, 305, sends packets to network device, 325, connected to the underlay network. ¶0026, Fig. 4); encapsulating the multicast traffic with a destination address identical to the underlay multicast address (network device encapsulates multicast packets, encapsulated packet includes the underlay multicast group address which is generated based on associated overlay multicast group address. ¶0044) and
forwarding the multicast traffic to the multicast group via the underlay network based on the destination address, such that the receiver host device receives the multicast traffic via the remote network switch (multicast source transmits data to multicast receiver using underlay multicast group associated with an overlay multicast group. Network devices 325 and 335 connect underlay to source/receiver, remote network switch. ¶0036).
Lo does not explicitly disclose:
wherein the mapping comprises:
obtaining a set of hash values by performing a set of hash functions on a representation of the overlay multicast address,
the representation comprising an address associated with the source host device and the overlay multicast address,
wherein a respective hash value corresponds to a variable-value bit position in an IP address field of the underlay multicast address; and
setting, for a respective bit position corresponding to the respective hash value, a value of a bit at the respective bit position to one;
However, Jokela discloses:
wherein the mapping comprises:
obtaining a set of hash values by performing a set of hash functions on a representation of the overlay multicast address (The source edge node is responsible for mapping between each IP multicast group (S,G), a representation of the overlay multicast address, and a corresponding in-packet Bloom filter. Pg. 4, Col. 1, Para. 2; A Bloom filter is an m-bit long bit string where all bits are initially set to zero. Inserting data, overlay multicast address, into the filter starts by hashing the data with k different hash functions. The result is k index values in the range [0..m-1]. Each of the indexed k bits is set to one in the Bloom filter. Pg. 2, Section B, 1) Bloom Filter; When source edge node, SE, receives a join message from a destination edge node, DE, it creates a mapping from the DE identifier to the path iBF, in-packet Bloom filter, leading from SE to DE. Pg. 5, Col. 1, Para. 2),
the representation comprising an address associated with the source host device and the overlay multicast address (Source Specific Multicast, SSM, is created from the IP address of the source node, S, and a Group ID, G, where G identifies the group at the source node, a representation of the overlay multicast address. The multicast group is denoted (S,G) as a globally unique identifier of the group, a representation of the source host and the overlay multicast group. Pg. 2, 3rd para.; The source edge node is responsible for mapping between each IP multicast group (S,G) and a corresponding in-packet Bloom filter. Pg. 4, Col. 1, Para. 2;), and
setting, for a respective bit position corresponding to the respective hash value, a value of a bit at the respective bit position to one (Data is input to a Bloom filter which starts by hashing the data with k different hash functions. The result is k index values in the range [0..m-1]. Each of the indexed k bits is set to one in the Bloom filter. Pg. 2, Section B, 1) Bloom Filter;);
Lo and Jokela do not explicitly disclose:
wherein the mapping comprises:
wherein a respective hash value corresponds to a variable-value bit position in an IP address field of the underlay multicast address;
However, Levy-Abegnoli discloses:
wherein the mapping comprises:
wherein a respective hash value corresponds to a variable-value bit position in an IP address field of the underlay multicast address (Bloom filter used to configure a candidate device address as a unique address value. ¶0016; Each network device autoconfigures an IPv6 address that is unique at least within a link layer domain, such as the underlay multicast network, of the allocating network device. ¶0018; Allocation of Bloom filter bit positions for address computation. ¶0026, Fig. 4; One or more hash functions used for mapping a candidate address to corresponding one or more bits in the N-bit Bloom filter bit vector. ¶0033, Fig. 5);
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lo, mapping with encapsulation and forwarding of multicast traffic by an underlay address based on a method to set the prefixes and suffixes of an address, with the teachings of Jokela, use of multiple hash functions on multicast addresses by Bloom filter techniques, and the teachings of Levy-Abegnoli, mapping the output of a Bloom filter technique to create a unique address value for routing multicast traffic.
The motivation in doing so would be to provide a method and apparatus to update address mappings of a multicast network with overlay and underlay networks. Updating addresses by use of a Bloom filter technique, which reduces hash collisions and provides a unique address, allows the network to update mapping routes as different traffic arises without manual intervention to remap routing nodes, thereby improving efficiency of the multicast network.
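For illustration, Jokela's in-packet Bloom filter (iBF) forwarding, relied upon above for the hashing limitations, can be sketched as follows. This is a minimal approximation under stated assumptions: the filter length M, the hash count K, the link naming, and the use of salted SHA-256 are illustrative choices, not Jokela's exact parameters.

```python
import hashlib

M = 256  # iBF length in bits (assumed)
K = 3    # hash functions per insertion (assumed)

def link_id(name: str) -> int:
    """Bloom-filter insertion of a link name: K indexed bits set to one."""
    bits = 0
    for k in range(K):
        # A distinct salt byte per iteration stands in for K independent hashes.
        digest = hashlib.sha256(bytes([k]) + name.encode()).digest()
        bits |= 1 << (int.from_bytes(digest[:4], "big") % M)
    return bits

def build_ibf(path_links) -> int:
    """The iBF is the bitwise OR of the link IDs along the delivery path."""
    ibf = 0
    for link in path_links:
        ibf |= link_id(link)
    return ibf

def forwards_on(ibf: int, link: str) -> bool:
    """A node forwards on a link iff every bit of that link's ID is set
    in the iBF; as with any Bloom filter, false positives are possible."""
    lid = link_id(link)
    return ibf & lid == lid
```

A source edge node that receives a join builds the iBF for the path to the destination edge node; transit nodes then make stateless forwarding decisions by testing each outgoing link against the filter carried in the packet.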
RE Claim 10: Lo discloses:
The method, wherein the overlay multicast address includes at least one of an Internet Protocol (IP) address of the source host device and an IP address of the multicast group in an overlay address space (example of an overlay multicast group IPv4 address of 224.1.1.1. ¶0049, Fig. 5).
RE Claim 11: Lo discloses:
The method, wherein the underlay multicast address includes an IP address of the multicast group in an underlay address space (example of an underlay multicast group address, 225.1.1.1 as a function of overlay multicast group address, 224.1.1.1. ¶0049, Fig. 5).
RE Claim 12: Lo discloses:
The method, wherein the set of hash functions is implemented in each virtual endpoint in the overlay network (each network device generates the same underlay multicast group address. ¶0016. Network device applies function to a portion of overlay multicast group address to determine the portion of the underlay multicast address. The function may be a hash function on one or more bits, octets. ¶0041. The overlay network may be a virtual network built upon the underlying network. ¶0029. Network devices, network switches, may be an edge switch, endpoints, connected to multicast sources and receivers to the underlay network. ¶0035).
RE Claim 14: Lo discloses:
The method, wherein the underlay network comprises switches storing a multicast forwarding table comprising the map of the overlay multicast address to the underlay multicast address (Network device may be provided a table and rules with portions of overlay multicast address. The device determines corresponding underlay multicast address. Table and rules are shared to each network device to determine the same corresponding underlay multicast group address. ¶0047).
RE Claim 15: Lo discloses:
The method, wherein the join request originates from the remote network switch in response to the remote network switch receiving a group membership request for the multicast group from the receiver host device (Network device, switch, receives a join request from a multicast receiver, a receiver host device, to join an overlay multicast group, group membership. ¶0037).
RE Claim 17: Lo discloses:
A non-transitory, computer readable medium including instructions that, when executed by a processor (¶0057, Fig. 7), cause a network switch (¶0022, ¶0030, Fig. 2) to:
receive a join request, for a multicast group indicated by an overlay multicast address, from a remote network switch (Network device may receive a join request to join the overlay multicast group A which includes other network devices, network switches. ¶0037, Fig. 4, Fig. 6), wherein the network switch is coupled to a source host device (multicast source, 305, is connected to multicast network device, the network switch. ¶0026, Fig. 3), and the remote network switch is coupled to a receiver host device of the multicast group (multicast receiver, 316, is connected to network device, 335, the remote network switch. ¶0026, Fig. 3), and wherein the network switch and the remote network switch are configured as virtual endpoints in an overlay network deployed over an underlay network (The overlay network may be a virtual network built upon the underlying network. ¶0029. Network devices, network switches, may be an edge switch, endpoints, connected to multicast sources and receivers to the underlay network. ¶0035);
map the overlay multicast address to an underlay multicast address (mapping of overlay multicast groups, addresses, to underlay multicast groups. ¶0016. Fig. 6), wherein the remote network switch joins the multicast group represented by the underlay multicast address (network device, remote switch, uses the underlay multicast group, addresses, at a join request. ¶0057. Fig. 6), wherein to map the overlay multicast address, the processor is to:
wherein the IP address field comprises most significant bits (MSBs) which correspond to a prefix of the IP address field based on a network layer protocol used in the underlay network (A portion of the underlay multicast group address is based on a VRF prefix associated with an overlay multicast group. One or more octets (e.g., portions, fields, etc.) of the VRF prefix may be used in a portion of the underlay multicast group address. The network device may use and/or apply a function to portions (e.g., one or more bits, one or more octets, etc.) of the VRF prefix. ¶¶0042, 0047, Fig. 5; An example of multicast addresses for overlay, underlay, prefix and suffix according to some embodiments. ¶0048, Fig. 5; Overlay multicast group address of 224.2.1.1, a VRF prefix of 225.0.0.0/8, and a VRF suffix of 0.0.0.0/0. The VRF prefix 225.0.0.0/8 may indicate that the first 8 bits of the VRF prefix 225.0.0.0/8 should be used in the underlay multicast group address. ¶¶0052, 0047, Fig. 5: 520),
wherein a first portion of the MSBs which correspond to the prefix comprise fixed-value bits set to a non-zero value based on the network layer protocol used in the underlay network (An example of multicast addresses for overlay, underlay, prefix and suffix according to some embodiments. ¶0048, Fig. 5; Overlay multicast group address of 224.2.1.1, a VRF prefix of 225.0.0.0/8, and a VRF suffix of 0.0.0.0/0. The VRF prefix 225.0.0.0/8 may indicate that the first 8 bits of the VRF prefix 225.0.0.0/8 should be used in the underlay multicast group address. ¶¶0052, 0047, Fig. 5: 520; The underlay prefix, a first portion of MSBs of the address, is 225 decimal, which is equal to 1110 0001 binary, and therefore a portion of the bits is set to non-zero values.), and
wherein a second portion of the MSBs which correspond to the prefix comprise variable-value bits initially set to zero (A portion of the underlay multicast group address is based on a VRF prefix associated with an overlay multicast group. One or more octets (e.g., portions, fields, etc.) of the VRF prefix may be used in a portion of the underlay multicast group address. The network device may use and/or apply a function to portions (e.g., one or more bits, one or more octets, etc.) of the VRF prefix. ¶¶0042, 0047, Fig. 5; An example of multicast addresses for overlay, underlay, prefix and suffix according to some embodiments. ¶0048, Fig. 5; The VRF prefix 225.0.0.0/8 may indicate that the first 8 bits of the VRF prefix 225.0.0.0/8 should be used in the underlay multicast group address. ¶¶0052, 0047, Fig. 5: 520; The underlay prefix, a first portion of MSBs of the address, is 225 decimal, which is equal to 1110 0001 binary, and therefore a portion of the bits is set to zero values.);
receive multicast traffic for the multicast group from the source host device (Multicast source, 305, sends packets to network device, 325, connected to the underlay network. ¶0026, Fig. 4);
encapsulate the multicast traffic with a destination address identical to the underlay multicast address (network device encapsulates multicast packets, encapsulated packet includes the underlay multicast group address which is generated based on associated overlay multicast group address. ¶0044); and
forward the multicast traffic to the multicast group via the underlay network based on the destination address, such that the receiver host device receives the multicast traffic via the remote network switch (multicast source transmits data to multicast receiver using underlay multicast group associated with an overlay multicast group. Network devices 325 and 335 connect underlay to source/receiver, remote network switch. ¶0036).
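For illustration only, and not as part of the claim mapping, the prefix-based composition Lo describes in ¶¶0047-0052 can be sketched as follows. The function name and the assumption that bits outside the VRF prefix length are carried over from the overlay group address are the examiner's illustrative choices, not Lo's express disclosure:

```python
import ipaddress

def underlay_from_overlay(overlay_addr: str, vrf_prefix: str) -> str:
    """Compose an underlay multicast group address from the VRF prefix's
    leading bits and the overlay address's remaining bits (illustrative)."""
    net = ipaddress.ip_network(vrf_prefix)
    plen = net.prefixlen  # number of MSBs taken from the VRF prefix (e.g., /8)
    host_bits = 32 - plen
    prefix_bits = (int(net.network_address) >> host_bits) << host_bits
    overlay_bits = int(ipaddress.ip_address(overlay_addr)) & ((1 << host_bits) - 1)
    return str(ipaddress.ip_address(prefix_bits | overlay_bits))

# Lo's Fig. 5 example: overlay 224.2.1.1 with VRF prefix 225.0.0.0/8
print(underlay_from_overlay("224.2.1.1", "225.0.0.0/8"))  # 225.2.1.1
```

Consistent with Lo's ¶0049 example, the same sketch maps overlay 224.1.1.1 to underlay 225.1.1.1.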
However, Lo does not explicitly disclose:
wherein to map the overlay multicast address, the processor is to:
obtain a set of hash values by performing a set of hash functions on a representation of the overlay multicast address,
the representation comprising an address associated with the source host device and the overlay multicast address,
wherein a respective hash value corresponds to a variable-value bit position in an IP address field of the underlay multicast address; and
set, for a respective bit position corresponding to the respective hash value, a value of a bit at the respective bit position to one;
However, Jokela discloses:
wherein to map the overlay multicast address, the processor is to:
obtain a set of hash values by performing a set of hash functions on a representation of the overlay multicast address (The source edge node is responsible for mapping between each IP multicast group (S,G), a representation of the overlay multicast address, and a corresponding in-packet Bloom filter. Pg. 4, Col. 1, Para. 2; A Bloom filter is an m-bit long bit string where all bits are initially set to zero. Inserting data, overlay multicast address, into the filter starts by hashing the data with k different hash functions. The result is k index values in the range [0..m-1]. Each of the k indexed bits is set to one in the Bloom filter. Pg. 2, Section B, 1) Bloom Filter; When source edge node, SE, receives a join message from a destination edge node, DE, it creates a mapping from the DE identifier to the path iBF, in-packet Bloom filter, leading from SE to DE. Pg. 5, Col. 1, Para. 2),
the representation comprising an address associated with the source host device and the overlay multicast address (Source Specific Multicast, SSM, is created from the IP address of the source node, S, and a Group ID, G, where G identifies the group at the source node, a representation of the overlay multicast address. The multicast group is denoted (S,G) as a globally unique identifier of the group, a representation of the source host and the overlay multicast group. Pg. 2, 3rd para.; The source edge node is responsible for mapping between each IP multicast group (S,G) and a corresponding in-packet Bloom filter. Pg. 4, Col. 1, Para. 2), and
set, for a respective bit position corresponding to the respective hash value, a value of a bit at the respective bit position to one (Data is input to a Bloom filter, which starts by hashing the data with k different hash functions. The result is k index values in the range [0..m-1]. Each of the k indexed bits is set to one in the Bloom filter. Pg. 2, Section B, 1) Bloom Filter);
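For illustration, the Bloom filter insertion Jokela describes (an m-bit string, k hash functions, indexed bits set to one) can be sketched as follows. Deriving the k hash functions by salting a single SHA-256 digest is an assumption of this sketch, not Jokela's construction:

```python
import hashlib

def bloom_insert(bf: int, data: bytes, m: int = 32, k: int = 3) -> int:
    """Insert `data` into an m-bit Bloom filter (held as an int, all bits
    initially zero) by hashing with k hash functions and setting the
    k indexed bits to one."""
    for i in range(k):
        # Salt a single digest to obtain k distinct hash functions (assumption)
        digest = hashlib.sha256(bytes([i]) + data).digest()
        idx = int.from_bytes(digest[:4], "big") % m  # index in [0..m-1]
        bf |= 1 << idx  # set the indexed bit to one
    return bf

# Insert an (S, G) representation, as in Jokela's source-edge mapping
bf = bloom_insert(0, b"10.0.0.1,232.1.1.1")
print(bin(bf))
```

Re-inserting the same (S, G) data leaves the filter unchanged, which is the idempotence property Bloom filters provide.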
Lo and Jokela do not explicitly disclose:
wherein to map the overlay multicast address, the processor is to:
wherein a respective hash value corresponds to a variable-value bit position in an IP address field of the underlay multicast address;
However, Levy-Abegnoli discloses:
wherein to map the overlay multicast address, the processor is to:
wherein a respective hash value corresponds to a variable-value bit position in an IP address field of the underlay multicast address (Bloom filter used to configure a candidate device address with a unique address value. ¶0016; Each network device autoconfigures an IPv6 address that is unique at least within a link layer domain, such as underlay multicast network, of the allocating network device. ¶0018; Allocation of Bloom filter bit positions for address computation. ¶0026, Fig. 4; One or more hash functions used for mapping candidate address to corresponding one or more bits in the N-bit Bloom filter bit vector. ¶0033, Fig. 5);
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lo, mapping with encapsulation and forwarding of multicast traffic by an underlay address based on a method to set the prefixes and suffixes of an address, with the teachings of Jokela, use of multiple hash functions on multicast addresses by Bloom filter techniques, and the teachings of Levy-Abegnoli, mapping the output of a Bloom filter technique to create a unique address value for routing multicast traffic.
The motivation in doing so would be to provide a method and apparatus to update address mappings of a multicast network with overlay and underlay networks. Updating addresses by use of a Bloom filter technique, which reduces hash collisions and provides a unique address, allows the network to update mapping routes as different traffic arises without manual intervention to remap routing nodes, thereby improving efficiency of the multicast network.
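For illustration, the combined teaching can be sketched as a fixed non-zero prefix (Lo) with variable-value bits set to one at hash-selected positions over the (S, G) representation (Jokela, Levy-Abegnoli). The parameter choices here (k = 3 hash functions, 24 variable bits, salted SHA-256) are assumptions of the sketch, not disclosures of the references:

```python
import hashlib

def map_overlay_to_underlay(source: str, group: str,
                            fixed_prefix: int = 0b1110_0001,
                            var_bits: int = 24) -> int:
    """Illustrative combined mapping: fixed non-zero MSBs plus variable
    bits set to one at positions chosen by k hash functions."""
    addr = fixed_prefix << var_bits      # variable-value bits start at zero
    rep = f"{source},{group}".encode()   # representation: source address + overlay group
    for i in range(3):                   # k = 3 hash functions (assumption)
        digest = hashlib.sha256(bytes([i]) + rep).digest()
        pos = int.from_bytes(digest[:4], "big") % var_bits
        addr |= 1 << pos                 # set the bit at each hashed position to one
    return addr

addr = map_overlay_to_underlay("10.0.0.1", "224.2.1.1")
print(f"{addr:032b}")
```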
RE Claim 18: Lo discloses:
The non-transitory computer-readable medium, wherein the overlay multicast address includes at least one of an Internet Protocol (IP) address of the source host device and an IP address of the multicast group in an overlay address space. (example of an overlay multicast group IPv4 address of 224.1.1.1. ¶0049, Fig. 5)
RE Claim 19: Lo discloses:
The non-transitory computer-readable medium, wherein the underlay multicast address includes an IP address of the multicast group in an underlay address space. (example of an underlay multicast group address, 225.1.1.1 as a function of overlay multicast group address, 224.1.1.1. ¶0049, Fig. 5)
RE Claim 20: Lo discloses:
The non-transitory computer-readable medium, wherein the set of hash functions is implemented in each virtual endpoint in the overlay network. (each network device generates the same underlay multicast group address. ¶0016. Network device applies function to a portion of overlay multicast group address to determine the portion of the underlay multicast address. The function may be a hash function on one or more bits, octets. ¶0041. The overlay network may be a virtual network built upon the underlying network. ¶0029. Network devices, network switches, may be an edge switch, endpoints, connected to multicast sources and receivers to the underlay network. ¶0035)
Claims 5, 13 are rejected under 35 U.S.C. 103 as being unpatentable over Lo, Jokela, and Levy-Abegnoli, as applied to claims 1 and 9 above, and further in view of Ou et al. (US 20140294003 A1, hereinafter “Ou”).
RE Claim 5: Lo, Jokela, and Levy-Abegnoli do not explicitly disclose:
The network switch, wherein the join request is a Protocol Independent Multicast (PIM) join request.
However, Ou discloses:
The network switch, wherein the join request is a Protocol Independent Multicast (PIM) join request (Edge devices, a network switch, use PIM Join messages to join a multicast group. ¶¶0043-0044).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the join requests of Lo with the PIM protocol join requests as described by Ou.
The motivation in doing so would be to allow utilizing standardized protocols to implement multicast thereby improving compatibility.
RE Claim 13: Lo, Jokela, and Levy-Abegnoli do not explicitly disclose:
The method, wherein the join request is a Protocol Independent Multicast (PIM) join request.
However, Ou discloses:
The method, wherein the join request is a Protocol Independent Multicast (PIM) join request (Edge devices, a network switch, use PIM Join messages to join a multicast group. ¶¶0043-0044).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the join requests of Lo with the PIM protocol join requests as described by Ou.
The motivation in doing so would be to allow utilizing standardized protocols to implement multicast thereby improving compatibility.
Claims 8, 16 are rejected under 35 U.S.C. 103 as being unpatentable over Lo, Jokela, and Levy-Abegnoli, as applied to claims 1 and 9 above, and further in view of Govindan et al. (US 20210234714 A1, hereinafter “Govindan”).
RE Claim 8: Lo, Jokela, and Levy-Abegnoli do not explicitly disclose:
The network switch, wherein the multicast group is one of a source specific multicast (SSM) group or an any-source multicast (ASM) group.
However, Govindan discloses:
The network switch, wherein the multicast group is one of a source specific multicast (SSM) group or an any-source multicast (ASM) group (Ingress/egress router, a network switch, can receive SSM or any-source multicast (ASM) traffic. The traffic is sent to a set of underlay multicast groups. ¶0044).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Lo, support of multicast traffic, with the teachings of Govindan, support of SSM and ASM multicast groups.
The motivation in doing so would be to support additional multicast groups to obtain multicast information for improving multicast network configurations.
RE Claim 16: Lo, Jokela, and Levy-Abegnoli do not explicitly disclose:
The method, wherein the multicast group is one of a source specific multicast (SSM) group or an any-source multicast (ASM) group.
However, Govindan discloses:
The method, wherein the multicast group is one of a source specific multicast (SSM) group or an any-source multicast (ASM) group (Ingress/egress router, a network switch, can receive SSM or any-source multicast (ASM) traffic. The traffic is sent to a set of underlay multicast groups. ¶0044).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of Lo, support of multicast traffic, with the teachings of Govindan, support of SSM and ASM multicast groups.
The motivation in doing so would be to support additional multicast groups to obtain multicast information for improving multicast network configurations.
Response to Arguments
Applicant argues that “Lo, Jokela, and Levy-Abegnoli, either individually or in combination, do not disclose the amended limitations, as written, of claims 1, 9, and 17,” in regards to the underlay multicast address prefix configuration.
Examiner respectfully disagrees. Upon review of Lo, the claimed subject matter is found. A portion of the underlay multicast group address is based on a VRF prefix associated with an overlay multicast group. One or more octets (e.g., portions, fields, etc.) of the VRF prefix may be used in a portion of the underlay multicast group address. The network device may use and/or apply a function to portions (e.g., one or more bits, one or more octets, etc.) of the VRF prefix. ¶¶0042, 0047, Fig. 5; An example of multicast addresses for overlay, underlay, prefix and suffix according to some embodiments. ¶0048, Fig. 5; Overlay multicast group address of 224.2.1.1, a VRF prefix of 225.0.0.0/8, and a VRF suffix of 0.0.0.0/0. The VRF prefix 225.0.0.0/8 may indicate that the first 8 bits of the VRF prefix 225.0.0.0/8 should be used in the underlay multicast group address. ¶¶0052, 0047, Fig. 5: 520; The underlay prefix, a first portion of MSBs of the address, is 225 decimal which is equal to 1110 0001 binary, and therefore a portion of the bits are set to non-zero values.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
US 20110083035 A1 Liu et al.
US 8175107 B1 Yalagandula et al.
US 20150124805 A1 Yadav et al.
US 20090225752 A1 Mitsumori
The above references disclose various aspects of multicast address and hash functions.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL A. LANGER whose telephone number is (703)756-1780. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm, Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nishant B. Divecha can be reached at 1 (571) 270-3125. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PAUL A. LANGER/Examiner, Art Unit 2419
/Nishant Divecha/Supervisory Patent Examiner, Art Unit 2419