DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. This communication is in response to claims 1-20 filed on 06/28/2024.
Claim Objections
Claims 5, 12, and 19 are objected to because of the following informalities:
2. Claims 5, 12, and 19 recite, “the corresponding depth that is one level higher that the corresponding depth of the first spine device” and “the corresponding depth that is one level lower that the corresponding depth of the second spine device” at the end of the first limitation of each claim. It is believed that the word “that” is intended to be “than”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6, 8-14, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Specifically, for the following reasons:
3. Claims 6, 13, and 20 recite, “the upstream replication of the network traffic” at the beginning of the claim; however, claims 3, 10, and 17, from which claims 6, 13, and 20 respectively depend, disclose upstream replication of the network traffic to a first spine device, and claims 5, 12, and 19, from which claims 6, 13, and 20 also depend, disclose upstream replication of the network traffic by the first spine device to a second spine device. It is therefore unclear which of these recitations of upstream replication “the upstream replication” in claims 6, 13, and 20 is intended to refer to.
For purposes of examination, claims 6, 13 and 20 are interpreted as reciting, “wherein upstream replication of the network traffic is repeated until the network traffic reaches at least one spine device with the corresponding depth having a highest value”.
4. Claims 6, 13 and 20 recite “each receiving spine device performing a corresponding downstream replication to different zones of the one or more zones than a zone assigned to the given leaf device”. It is unclear whether “each receiving spine device” is intended to refer to just “at least one spine device with the corresponding depth having a highest value”, disclosed previously in the same claim, or to also include the first spine device, the second spine device, and the third spine device, which also receive network traffic as disclosed in claims 3, 5, 10, 12, 17, and 19, from which claims 6, 13 and 20 respectively depend.
Given that claims 3, 10 and 17 disclose that the first spine device replicates downstream to leaf devices in the same zone, for purposes of examination “each receiving spine device” in claims 6, 13, and 20 is interpreted as referring to “each receiving spine device of the at least one spine device with a corresponding depth having a highest value”.
5. Claims 8-14 are directed to a network device comprising one or more memories having computer-readable instructions stored therein, and one or more processors configured to execute the computer-readable instructions to perform the claimed functionality; however, the subsequently claimed features include functionalities performed by multiple different devices, such as a given leaf device and one or more spine devices. It is therefore submitted that it is improper for these claims, which include limitations performed by multiple devices, to be directed to a single network device.
For purposes of examination, claims 8-14 are interpreted as being directed to a system comprising the claimed leaf devices and spine devices.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
6. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ruan et al. (US 2021/0320820) in view of Li (US 2020/0374155).
Regarding claim 1, Ruan teaches a method comprising:
defining a corresponding depth for each leaf device and each spine device in a leaf-spine network fabric having a hierarchical structure (tier three switches are called Top-of-Rack switches (TORs) or Leaf switches, tier two switches are called Spine switches, and tier 1 switches are called Super Spine switches, [0262]);
defining one or more zones in the leaf-spine network fabric (data center network is organized as a plurality of “plan-of-deployment” units also referred to herein as “PoDs.” Each PoD is a modular unit of physical infrastructure that houses a set of network, compute, storage, and application components that work together to deliver networking services, [0263]; see FIG. 23);
generating a routing policy for each leaf device and one or more spine devices in the leaf-spine network fabric (with respect to full routing tables for the data center, only core switches 22 may need to perform full lookup operations, [0084]; install one or more policies within the devices of switch fabric 210, where the policies specify packet forwarding rules based on deterministic forwarding labels or other packet forwarding information carried within the headers of the packets injected into switch fabric 210 by DPUs 200, [0278]) based at least in part on the corresponding depth and the one or more zones defined (the FCP Path Index value assigned a given FCP leg is generated based on a combination of the peak point for FCP leg and an “FCP color” assigned to the network interface of the DPU on the FCP leg, [0006]; The “WithinPoD” sub-pool represents FCP Path Index values for FCP legs where a spine switch is the peak point (e.g., Spine 1-1 for traffic between DPU A and DPU B). The “AcrossPoD” sub-pool represents FCP Path Index values for FCP legs where a super spine switch is the peak point (e.g., Super Spine 4 for traffic between DPU A and DPU F), [0275]); and
performing ingress replication of network traffic received at a given leaf device (ToR 202, 204 of FIG. 23) using the corresponding routing policy of the given leaf device and the corresponding routing policy of at least one of the one or more spine devices (Switching devices within switch fabric 210 apply the installed routing/switching policies to direct the FCP packets along the particular paths based on the deterministic forwarding labels carried within the FCP packets, [0278]).
However, Ruan does not explicitly disclose the routing policy for each leaf device and one or more spine devices is a replication list.
Li teaches generating a corresponding replication list for each leaf device and one or more spine devices in a leaf-spine network fabric (Table 2 shows ingress replication lists on three devices, [0085]; see Table 2 on page 9).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize ingress replication lists for leaf and spine devices in the system/method of Ruan as suggested by Li to efficiently replicate BUM packets throughout the fabric. One would be motivated to combine these teachings because maintaining a list with each destination only appearing once at each switch device prevents duplicate BUM traffic from being sent a plurality of times to the same device and helps to maximize bandwidth utilization.
Regarding claim 2, Ruan teaches the method of claim 1, wherein the leaf-spine network fabric is a CLOS network (the switch fabric itself may be implemented using multiple layers of interconnected switches as in a CLOS network, [0076]; a data center network in which a set of DPUs 200 are connected to a typical CLOS switch fabric 210 used in a large-scale data center in which the intermediate switching/routing devices are arranged in a multi-stage switching arrangement, [0262]).
Regarding claim 3, Ruan teaches the method of claim 1, wherein performing the ingress replication includes upstream replication of the network traffic to a first spine device having the corresponding depth that is one level higher than the corresponding depth of the given leaf device (forwards the FCP packet of the sending (upstream) FCP Leg of the same color. In addition, DPU 200A constructs the output header of the FCP packet to specify a destination address of the peak point for the given FCP path. Finally, DPU 200A outputs the FCP packet on the network interface having the FCP color of the selected FCP Path Index, [0288], see FIG. 29), the first spine device being one of the one or more spine devices (One example FCP Path is, for example, the end-to-end path from DPU A to DPU B made up of a first FCP leg from DPU A to Spine 1 assigned the FCP Path Index “Path Index 2”, [0274]; Switching devices within switch fabric 210 forward the FCP packet toward the peak point switching device along the FCP leg according to the outer IP header using standard IP-based switching/routing mechanism, [0279]).
Regarding claim 4, Ruan teaches the method of claim 3, wherein the ingress replication includes downstream replication of the network traffic, by the first spine device, to one or more additional leaf devices that are in a same zone of the one or more zones as the given leaf device (a second FCP leg from Spine 1 to DPU B having the same FCP Path Index “Path Index 2.”, [0274]; The “WithinPoD” sub-pool represents FCP Path Index values for FCP legs where a spine switch is the peak point (e.g., Spine 1-1 for traffic between DPU A and DPU B, [0275]; forwarding paths used between spine switches and the destination DPUs within a PoD (i.e., the downstream FCP Leg from the peak point of the FCP path), [0290]).
Regarding claim 5, Ruan teaches the method of claim 3, wherein,
the ingress replication includes upstream replication of the network traffic, by the first spine device, to a second spine device having the corresponding depth that is one level higher that the corresponding depth of the first spine device (Switching devices within switch fabric 210 forward the FCP packet toward the peak point switching device along the FCP leg according to the outer IP header using standard IP-based switching/routing mechanism, [0279]; DPU 200A constructs the FCP packet to include an outer IP header having a destination IP address for super spine switch 230 such that the FCP packet is tunneled to the peak point (super spine switch 230) for the selected FCP Path 232, [0293]; see FIG. 30), and
the second spine device performs downstream replication of the network traffic to at least one third spine device with the corresponding depth one level lower that the corresponding depth of the second spine device, each of the at least one third spine device being in a different one of the one or more zones than the given leaf device (each spine switch of the PoDs in the data center is connected to at least one spine switch in each of the other PoDs in the data center by one or more super spine switches, which provide a third switching stage of switch fabric 210 referred to as the “super spine.”, [0264]; The “AcrossPoD” sub-pool represents FCP Path Index values for FCP legs where a super spine switch is the peak point (e.g., Super Spine 4 for traffic between DPU A and DPU F), [0275]; When super spine switch 230 removes the outer header of the packet, the super spine switch will be forced to forward the packet toward spine switch “Spine 2-2” in order to reach the YELLOW network interface of DPU 220F via ToR 234, [0293]).
Regarding claim 6, Ruan teaches the method of claim 5, wherein the upstream replication of the network traffic is repeated until the network traffic reaches at least one spine device with the corresponding depth having a highest value (the techniques define the concept of a “peak point,” which is the middle point (highest-level switching device) between a pair of DPUs within the data center switch fabric, [0006]; deterministic forwarding labels are used within the header of the FCP packets to direct each FCP packet toward the correct peak point for an FCP Path selected for the FCP packet, [0008]; the techniques define the concept of a “peak point,” which is the middle point (highest-level switching device) between two DPUs within the data center switch fabric, which is typically symmetric in arrangement. Depending on the physical connectivity between two DPUs, the peak point between the pair of DPUs could be a TOR switch, a spine switch or a super spine switch, [0269]; outer IP header having a destination IP address for super spine switch 230 such that the FCP packet is tunneled to the peak point (super spine switch 230), [0293]), with each receiving spine device performing a corresponding downstream replication to different zones of the one or more zones than a zone assigned to the given leaf device (When super spine switch 230 removes the outer header of the packet, the super spine switch will be forced to forward the packet toward spine switch “Spine 2-2” in order to reach the YELLOW network interface of DPU 220F via ToR 234, [0293]; see FIG. 30).
Regarding claim 7, Ruan teaches the method of claim 1, but does not explicitly disclose wherein the network traffic is Broadcast, Unknown Unicast, and Multicast (BUM) traffic.
Li teaches wherein network traffic is Broadcast, Unknown Unicast, and Multicast (BUM) traffic (the dual-homing port may be used to transmit a BUM packet including a broadcast packet, a multicast packet, and an unknown unicast packet, [0061]; the BUM packet including the broadcast packet, the multicast packet, and the unknown unicast packet may be replicated to the VXLAN tunnel in the broadcast domain, [0085]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to recognize BUM traffic in a leaf and spine fabric in the system/method of Ruan as suggested by Li to provide additional communication functionalities. One would be motivated to combine these teachings because it would enable services such as broadcast discovery requests, allowing traffic to endpoints with addresses not yet learned, and supporting efficient delivery of data to multiple receivers.
Claims 8 and 15 recite limitations equivalent to those in claim 1, and are therefore rejected in view of the same rationale.
Claims 9 and 16 recite limitations equivalent to those in claim 2, and are therefore rejected in view of the same rationale.
Claims 10 and 17 recite limitations equivalent to those in claim 3, and are therefore rejected in view of the same rationale.
Claims 11 and 18 recite limitations equivalent to those in claim 4, and are therefore rejected in view of the same rationale.
Claims 12 and 19 recite limitations equivalent to those in claim 5, and are therefore rejected in view of the same rationale.
Claims 13 and 20 recite limitations equivalent to those in claim 6, and are therefore rejected in view of the same rationale.
Claim 14 recites limitations equivalent to those in claim 7, and is therefore rejected in view of the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Dayama et al. US 2015/0131655 – optimizing replication of multicast packets among leaf and spine nodes using a segment identifier to define a distribution group.
Ghanwani et al. US 2015/0188808 – improving network routing efficiencies in spine-leaf networking systems.
Li US 2019/0394115 – extension to routing protocols to include an area identifier for routing in a spine-leaf topology.
Malhotra et al. US 2020/0067823 – assigning group identities to identify an active leaf node within an assigned group for forwarding a data packet.
Chhibber et al. US 2021/0377153 – establishing flow paths using routing tables between leaf nodes and spine nodes in multicast groups.
Chhibber et al. US 2022/0337441 – spine and leaf switches storing routing tables to service multicast traffic.
Xu et al. US 2022/0360519 – packet forwarding in a spine-leaf architecture using routing tables carrying routing direction identifiers.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MADHU WOOLCOCK whose telephone number is (571)270-3629. The examiner can normally be reached Tuesday, Thursday 9-6 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Parry can be reached at 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MADHU WOOLCOCK
Examiner
Art Unit 2451
/MADHU WOOLCOCK/Primary Examiner, Art Unit 2451