Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 12, and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Since independent claims 1 and 12 remain rejected, the rejections of the dependent claims persist.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-2, 10, 12-13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over POWER, et al. (US 10797990 B2, hereinafter, "POWER") in view of VAHDAT, et al. (US 20100020806 A1, hereinafter, "VAHDAT") and SURYANARAYANA, et al. (US 11870677 B2, hereinafter, "SURYANARAYANA").
Regarding claim 20, POWER teaches a computing device (column 15, lines 10-18; figure
8, computing device: 800) comprising:
a processor (column 15, lines 10-18; figure 8, processor(s): 810);
and a memory including instructions that, when executed with the processor (column 15, lines
31-41; figure 8, system memory: 820), cause the computing device to, at least:
provide a compute fabric block communicatively coupled to the plurality of blocks of switches,
the compute fabric block including:
POWER writes, “Referring to FIG. 7, operation 701 illustrates communicatively coupling a plurality of
servers to at least two top of rack switches” (column 10, lines 38-40).
(i) a set of one or more racks, each rack in the set of one or more racks comprising one or
more servers configured to execute one or more workloads of a customer,
POWER writes, “Illustrated is a rack of 24 servers 510 that are each connected to two Tier-0 ToR
switches 520, 530” (column 9, lines 30-31; figure 5, rack of servers: 510). POWER continues, “FIG. 1
illustrates an example computing environment in which the embodiments described herein may be
implemented. FIG. 1 illustrates a data center 100 that configured to provide computing resources to
users 100a, 100b, or 100c (which may be referred herein singularly as ‘a user 100’ or in the plural as ‘the
users 100’) via user computers 102a, 102b, and 102c (which may be referred herein singularly as “a
computer 102” or in the plural as “the computers 102”) via a communications network 130” (column 3,
lines 54-62). POWER adds, “Data center 100 may include servers 116a, 116b, and 116c (which may be
referred to herein singularly as ‘a server 116’ or in the plural as ‘the servers 116’) that provide
computing resources available as virtual machines 118a and 118b (which may be referred to herein
singularly as ‘a virtual machine 118’ or in the plural as ‘the virtual machines 118’)” (column 4, lines 19-
24, figure 1). POWER draws attention to figure 5, which depicts a rack of servers connected to two ToR switches. POWER displays in figure 1 a data center that provides computing resources to users, via user computers, over a communications network. The data center, as POWER explains, may include servers that provide computing resources available as virtual machines.
and provide a network fabric block communicatively coupled to the plurality of blocks of
switches, the network fabric block including:
POWER writes, “Referring to FIG. 7, operation 701 illustrates communicatively coupling a plurality of
servers to at least two top of rack switches” (column 10, lines 38-40).
POWER fails to explicitly disclose information regarding, “provide a plurality of blocks of switches;”, “and (ii) a first plurality of switches organized into a first plurality of levels, the first plurality of switches communicatively coupling the set of one or more racks to the plurality of blocks of switches;”, “(i) a plurality of edge devices including a first edge device providing connectivity to a first external resource, the first edge device enabling access to the first external resource by a workload executed by a server included in a rack in the set of one or more racks,”, and “and (ii) a second plurality of switches organized into a second plurality of levels, wherein the first edge device is communicatively coupled at one end to the second plurality of switches and coupled at another end to the first external resource.”
However, in analogous art, VAHDAT teaches provide a plurality of blocks of switches;
VAHDAT writes, “The system may include a plurality of switches configured as a network…” (paragraph
0008).
and (ii) a first plurality of switches organized into a first plurality of levels, the first plurality of
switches communicatively coupling the set of one or more racks to the plurality of blocks of switches;
VAHDAT writes, “FIG. 7 depicts an example of a packaging and wiring scheme for use with a fat-tree network 700” (paragraph 0063; figure 7). VAHDAT continues, “Each individual pod is a replication unit for the larger cluster, as depicted at FIG. 7. Each pod thus includes 576 hosts and 48 individual 48-port Gigabit Ethernet switches” (paragraph 0064). VAHDAT adds, “In a topology with 27,648 total hosts, there are 48 total pods, each housing 12 of the required core switches” (paragraph 0066). VAHDAT illustrates in figure 7 a fat-tree network in which each pod contains a plurality of communicatively coupled switches.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and invention of POWER to include aspects described by VAHDAT that “relates to networking.” VAHDAT provides the motivation for modification stating, “In some implementations, the switches were not collected into a central rack. In this approach, two 48 port switches are distributed to each rack. Hosts would interconnect to the switches in sets of 24. This approach has the advantage, in some implementations, of requiring much shorter cables to connect hosts to their first hop switch and for eliminating these cables all together if the racks were appropriately internally packaged” (paragraph 0068).
POWER and VAHDAT fail to explicitly disclose information regarding, “(i) a plurality of edge devices including a first edge device providing connectivity to a first external resource, the first edge device enabling access to the first external resource by a workload executed by a server included in a rack in the set of one or more racks,” and “and (ii) a second plurality of switches organized into a second plurality of levels, wherein the first edge device is communicatively coupled at one end to the second plurality of switches and coupled at another end to the first external resource.”
However, in analogous art, SURYANARAYANA teaches (i) a plurality of edge devices including a first edge device providing connectivity to a first external resource, the first edge device enabling access to the first external resource by a workload executed by a server included in a rack in the set of one or more racks,
SURYANARAYANA writes, “In general, data center 10 provides an operating environment for applications and services for customers 4 coupled to the data center 10 by service provider network 6. Customers 4 are coupled to service provider network 6 by provider edge (PE) device 12” (column 5, lines 47-51; figure 1). SURYANARAYANA continues, “In this example, data center 10 includes a set of storage systems and application servers interconnected via an IP fabric 20 provided by one or more tiers of physical network switches and routers” (column 6, lines 3-6).
and (ii) a second plurality of switches organized into a second plurality of levels, wherein the first edge device is communicatively coupled at one end to the second plurality of switches and coupled at another end to the first external resource.
SURYANARAYANA writes, “FIG. 1 is a block diagram illustrating an example network system 5...In network system 5...SDN gateways 8A-8B (‘SDN gateways 8’), and nodes of Internet Protocol (IP) fabric 20 operate in accordance with the techniques described herein to ensuring customer traffic flow and customer applications executing within the cloud data center continue without interruption” (column 5, lines 37-46; figure 1). SURYANARAYANA adds, “...IP fabric 20 is provided by a set of interconnected leaf switches 24A-24N (collectively, ‘leaf switches 24’) coupled to a distribution layer of spine switches 22A-22M (collectively, ‘spine switches 22’). Leaf switches 24 may also be referred to as top-of-rack (TOR) switches...” (column 6, lines 12-17). SURYANARAYANA continues, “...SDN gateways 8, also referred to as gateway routers, are routing devices that perform layer 3 routing to route network traffic between data center 10 and customers 4 by service provider network 6. SDN gateways 8 provide redundant gateways to forward and receive packets between IP fabric 20 and service provider network 6” (column 6, lines 36-41).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and invention of POWER and VAHDAT to include aspects described by SURYANARAYANA that “relates to computer networks and, more particularly, to forwarding packets within virtual networks.” SURYANARAYANA provides the motivation for modification stating, “The techniques of this disclosure may provide one or more advantages. For example, the techniques of this disclosure may provide better integration of the compute nodes with the IP fabric. This approach considers various elements of a virtualized infrastructure: the network virtualization solution (including the SDN controller and virtual routers/virtual agents on compute nodes), the underlay switching layer (e.g., switch-based IP Fabric), as well as the SDN Gateway (e.g. routers)” (column 3, lines 14-23).
Claims 1 and 12 are method and memory claims corresponding to apparatus claim 20, which has already been rejected above. The applicant’s attention is directed to the rejection of claim 20. Claims 1 and 12 are rejected under the same rationale as claim 20.
Regarding claim 2, POWER, VAHDAT, and SURYANARAYANA teach the method of claim 1.
Additionally, POWER teaches wherein the first external resource is a public communication
network, and wherein the first edge device is a gateway providing connectivity to the public
communication network.
POWER writes, “Referring to FIG. 1, communications network 130 may, for example, be a publicly
accessible network of linked networks and may be operated by various entities, such as the Internet”
(column 4, lines 35-38; figure 1, communication network: 130). POWER adds, “In the example data
center 100 shown in FIG. 1, a router 111 may be utilized to interconnect the servers 116a and 116b.
Router 111 may also be connected to gateway 140, which is connected to communications network
130” (column 5, lines 12-15; figure 1, gateway: 140). POWER indicates the communication network may
be a publicly accessible network. POWER illustrates in figure 1 the communications network connected
to a gateway.
Regarding claim 10, POWER, VAHDAT, and SURYANARAYANA teach the method of claim 1.
Additionally, POWER teaches wherein the one or more edge devices include a gateway, a
backbone edge device, a metro edge device, and a route reflector.
POWER writes, “Referring to FIG. 1, communications network 130 may, for example, be a publicly
accessible network of linked networks and may be operated by various entities, such as the Internet”
(column 4, lines 35-38; figure 1, communication network: 130). POWER adds, “In the example data
center 100 shown in FIG. 1, a router 111 may be utilized to interconnect the servers 116a and 116b.
Router 111 may also be connected to gateway 140, which is connected to communications network
130” (column 5, lines 12-15; figure 1, gateway: 140). POWER indicates the communication network may
be a publicly accessible network. POWER illustrates in figure 1 the communications network connected
to a gateway.
Claim 13 is a memory claim corresponding to method claim 2, which has already been rejected above. The applicant’s attention is directed to the rejection of claim 2. Claim 13 is rejected under the same rationale as claim 2.
Claims 3-5, 7-8, 11, 14-16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over POWER, VAHDAT, and SURYANARAYANA as applied to claims 1 and 12 above, and further in view of K N, et al. (US 11336570 B1, hereinafter, "K N").
Regarding claim 3, POWER, VAHDAT, and SURYANARAYANA teach the method of claim 1.
Additionally, VAHDAT teaches wherein the first plurality of levels associated with the first
plurality of switches in the compute fabric block further includes:
VAHDAT writes, “FIG. 7 depicts an example of a packaging and wiring scheme for use with a fat-tree network 700” (paragraph 0063; figure 7). VAHDAT continues, “Each individual pod is a replication unit for the larger cluster, as depicted at FIG. 7. Each pod thus includes 576 hosts and 48 individual 48-port Gigabit Ethernet switches” (paragraph 0064). VAHDAT adds, “In a topology with 27,648 total hosts, there are 48 total pods, each housing 12 of the required core switches” (paragraph 0066). VAHDAT illustrates in figure 7 a fat-tree network in which each pod contains a plurality of communicatively coupled switches.
POWER, VAHDAT, and SURYANARAYANA fail to explicitly disclose information regarding, “(i) a first tier-one level of switches,”, “and (ii) a first tier-two level of switches,”, “wherein the first tier-one level of switches are communicatively coupled at a first end to the set of one or more racks and are communicatively coupled at a second end to the first tier-two level of switches,”, and “and wherein the first tier-two level of switches connect the first tier-one level of switches to the plurality of blocks of switches.”
However, in analogous art, K N teaches (i) a first tier-one level of switches,
K N writes, “Consider a 2-tier spine/leaf topology for switch fabric 14 in which virtual network
controller 22 configures a spine (e.g., using one or more chassis switches 18) with the routing and
bridging (RB) role as an edge routed bridging (ERB) unicast gateway and a route-reflector and virtual
network controller 22 configures the leafs (e.g., one or more of TOR switches 16) with ERB access”
(column 13, lines 23-29; figure 2).
and (ii) a first tier-two level of switches,
K N writes, “Consider a 2-tier spine/leaf topology for switch fabric 14 in which virtual network
controller 22 configures a spine (e.g., using one or more chassis switches 18) with the routing and
bridging (RB) role as an edge routed bridging (ERB) unicast gateway and a route-reflector and virtual
network controller 22 configures the leafs (e.g., one or more of TOR switches 16) with ERB access”
(column 13, lines 23-29; figure 2).
wherein the first tier-one level of switches are communicatively coupled at a first end to the set of one or more racks and are communicatively coupled at a second end to the first tier-two level of switches,
K N writes, “Switch fabric 14 is provided by a set of interconnected top-of-rack (TOR) switches 16A-16N
(collectively, “TOR switches 16”) coupled to a distribution layer of chassis switches 18A-18M
(collectively, “chassis switches 18”)” (column 4, lines 49-53; figure 1).
and wherein the first tier-two level of switches connect the first tier-one level of switches to
the plurality of blocks of switches.
K N writes, “Switch fabric 14 is provided by a set of interconnected top-of-rack (TOR) switches 16A-16N
(collectively, “TOR switches 16”) coupled to a distribution layer of chassis switches 18A-18M
(collectively, “chassis switches 18”)” (column 4, lines 49-53; figure 1).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and invention of POWER, VAHDAT, and SURYANARAYANA to include aspects described by K N that “relate generally to computer networks and more particularly to virtual networks.” K N provides the motivation for modification stating, “In this respect, various aspects of the techniques may facilitate more efficient utilization of the underlying virtual network while also reducing consumption of resources of the virtual router (or the servers supporting execution of the virtual router)” (column 2, lines 42-46). K N notes, “...improving load balancing without requiring proprietary LAG protocols, the techniques may enable better operation of virtual networks that enable inter-manufacturer compatibility while also promoting load balancing across multiple links so as to provide high availability, redundancy, higher bandwidth throughput, and the like. The techniques may also reduce operational expenditures associated with configuring MC-LAG on switches and the servers” (column 2, lines 51-59).
Regarding claim 4, POWER, VAHDAT, SURYANARAYANA, and K N teach the method of claim 3.
Additionally, K N teaches wherein the first tier-one level of switches in the compute fabric
block include eight switches,
K N writes, “Consider a 2-tier spine/leaf topology for switch fabric 14 in which virtual network
controller 22 configures a spine (e.g., using one or more chassis switches 18) with the routing and
bridging (RB) role as an edge routed bridging (ERB) unicast gateway and a route-reflector and virtual
network controller 22 configures the leafs (e.g., one or more of TOR switches 16) with ERB access”
(column 13, lines 23-29; figure 2). K N points out that the leaf and spine layers may each use one or more switches, thereby indicating that eight switches may be used.
and the first tier-two level of switches in the compute fabric block include four switches.
K N writes, “Consider a 2-tier spine/leaf topology for switch fabric 14 in which virtual network
controller 22 configures a spine (e.g., using one or more chassis switches 18) with the routing and
bridging (RB) role as an edge routed bridging (ERB) unicast gateway and a route-reflector and virtual
network controller 22 configures the leafs (e.g., one or more of TOR switches 16) with ERB access”
(column 13, lines 23-29; figure 2). K N points out that the leaf and spine layers may each use one or more switches, thereby indicating that four switches may be used.
Regarding claim 5, POWER, VAHDAT, SURYANARAYANA, and K N teach the method of claim 3.
Additionally, K N teaches wherein each switch in the first tier-one level of switches in the
compute fabric block is connected to each switch in the first tier-two level of switches in the compute
fabric block.
K N writes, “Switch fabric 14 is provided by a set of interconnected top-of-rack (TOR) switches 16A-16N
(collectively, “TOR switches 16”) coupled to a distribution layer of chassis switches 18A-18M
(collectively, “chassis switches 18”)” (column 4, lines 49-53; figure 1).
Regarding claim 7, POWER, VAHDAT, and SURYANARAYANA teach the method of claim 1.
POWER, VAHDAT, and SURYANARAYANA fail to explicitly disclose information regarding, “wherein the second plurality of levels associated with the second plurality of switches in the network fabric block further includes:”, “(i) a second tier-one level of switches,”, “and (ii) a second tier-two level of switches,”, “wherein a first subset of switches included in the second tier-one level of switches are communicatively coupled, at a first end, to the one or more edge devices,”, “and a second subset of switches included in the second tier-one level of switches are communicatively coupled, at the first end, to the plurality of blocks of switches,”, and “and wherein the first subset and the second subset of switches included in the second tier-one level of switches are coupled, at a second end, to the second tier-two level of switches included in the network fabric block.”
However, in analogous art, K N teaches wherein the second plurality of levels associated with
the second plurality of switches in the network fabric block further includes:
K N writes, “In a typical data center, clusters of storage systems and application servers are
interconnected via a high-speed switch fabric provided by one or more tiers of physical network
switches and routers” (column 1, lines 19-22). K N indicates that, in the furnished example, the data center’s switch fabric is provided by one or more tiers of physical network switches and routers.
(i) a second tier-one level of switches,
K N writes, “Consider a 2-tier spine/leaf topology for switch fabric 14 in which virtual network
controller 22 configures a spine (e.g., using one or more chassis switches 18) with the routing and
bridging (RB) role as an edge routed bridging (ERB) unicast gateway and a route-reflector and virtual
network controller 22 configures the leafs (e.g., one or more of TOR switches 16) with ERB access”
(column 13, lines 23-29; figure 2). K N points out that the leaf and spine layers may each use one or more switches, thereby indicating that four switches may be used.
and (ii) a second tier-two level of switches,
K N writes, “Consider a 2-tier spine/leaf topology for switch fabric 14 in which virtual network
controller 22 configures a spine (e.g., using one or more chassis switches 18) with the routing and
bridging (RB) role as an edge routed bridging (ERB) unicast gateway and a route-reflector and virtual
network controller 22 configures the leafs (e.g., one or more of TOR switches 16) with ERB access”
(column 13, lines 23-29; figure 2). K N points out that the leaf and spine layers may each use one or more switches, thereby indicating that four switches may be used.
wherein a first subset of switches included in the second tier-one level of switches are
communicatively coupled, at a first end, to the one or more edge devices,
K N writes, “Switch fabric 14 is provided by a set of interconnected top-of-rack (TOR) switches 16A-16N
(collectively, “TOR switches 16”) coupled to a distribution layer of chassis switches 18A-18M
(collectively, “chassis switches 18”)” (column 4, lines 49-53; figure 1).
and a second subset of switches included in the second tier-one level of switches are
communicatively coupled, at the first end, to the plurality of blocks of switches,
K N writes, “Switch fabric 14 is provided by a set of interconnected top-of-rack (TOR) switches 16A-16N
(collectively, “TOR switches 16”) coupled to a distribution layer of chassis switches 18A-18M
(collectively, “chassis switches 18”)” (column 4, lines 49-53; figure 1).
and wherein the first subset and the second subset of switches included in the second tier-one level of switches are coupled, at a second end, to the second tier-two level of switches included in the network fabric block.
K N writes, “Switch fabric 14 is provided by a set of interconnected top-of-rack (TOR) switches 16A-16N
(collectively, “TOR switches 16”) coupled to a distribution layer of chassis switches 18A-18M
(collectively, “chassis switches 18”)” (column 4, lines 49-53; figure 1).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and invention of POWER, VAHDAT, and SURYANARAYANA to include aspects described by K N that “relate generally to computer networks and more particularly to virtual networks.” K N provides the motivation for modification stating, “In this respect, various aspects of the techniques may facilitate more efficient utilization of the underlying virtual network while also reducing consumption of resources of the virtual router (or the servers supporting execution of the virtual router)” (column 2, lines 42-46). K N notes, “...improving load balancing without requiring proprietary LAG protocols, the techniques may enable better operation of virtual networks that enable inter-manufacturer compatibility while also promoting load balancing across multiple links so as to provide high availability, redundancy, higher bandwidth throughput, and the like. The techniques may also reduce operational expenditures associated with configuring MC-LAG on switches and the servers” (column 2, lines 51-59).
Regarding claim 8, POWER, VAHDAT, SURYANARAYANA, and K N teach the method of claim 7.
Additionally, K N teaches wherein the second tier-one level of switches in the network fabric
block include eight switches, and the second tier-two level of switches in the network fabric block
include four switches.
K N writes, “Consider a 2-tier spine/leaf topology for switch fabric 14 in which virtual network
controller 22 configures a spine (e.g., using one or more chassis switches 18) with the routing and
bridging (RB) role as an edge routed bridging (ERB) unicast gateway and a route-reflector and virtual
network controller 22 configures the leafs (e.g., one or more of TOR switches 16) with ERB access”
(column 13, lines 23-29; figure 2). K N points out that the leaf and spine layers may each use one or more switches, thereby indicating that eight switches may be used for the leaf and four for the spine, or vice versa.
Regarding claim 11, POWER, VAHDAT, SURYANARAYANA, and K N teach the method of claim 7.
Additionally, VAHDAT teaches wherein a workload executed by a server included in a rack of
the compute fabric block accesses the first external resource by establishing a connection to a first
switch in the plurality of blocks of switches, the connection being routed:
VAHDAT writes, “FIG. 1 depicts a network 100 including a plurality of hosts 105A-P coupled to switches
110A-T via communication links, such as communication links 107A-P, 150A-O, and 160A-P. Each of
the hosts 105A-P may be implemented as one or more of the following: a processor, a computer, a
blade, a server, and the like. The switches 110A-T enable communications among any of the hosts 105A-
P. The network 100 is implemented using a fat-tree topology. Moreover, the fat-tree network 100 of
FIG. 1 may be used in various applications, one of which is as a mechanism to couple a cluster of
computers (e.g., the computers of a data center)” (paragraph 0019, figure 1).
(i) from the first switch to a second switch included in the second subset of switches in the
second tier-one level of switches,
VAHDAT writes, “Returning to the 4-ary fat-tree network 100 at pod zero 112A, the first layer (which is
also referred to as an edge layer) includes two (2) switches 110A-B, and the second layer (which is also
referred to as an aggregation layer) includes switches 110C-D. The other pods 112B-D are similarly
configured with switches in an edge layer and an aggregation layer” (paragraph 0023).
(ii) from the second switch to a third switch included in the second tier-two level of switches,
VAHDAT writes, “The fat-tree network 100 also includes (k/2).sup.2 k-port core switches, each of which
is connected to every pod. Referring again to fat-tree network 100, it has four core switches 110Q-T,
each of which has four ports. Each of the four ports of a core switch is connected to one of the pods”
(paragraph 0025).
(iii) from the third switch to a fourth switch included in the first subset of switches in the second tier-one level of switches,
VAHDAT writes, “Although FIG. 1 depicts a specific implementation of a fat-tree topology, variations to
the fat-tree topology 100 (e.g., additional links, additional pods, partial pods, partial core layers, and the
like) may be implemented as well” (paragraph 0026). Although not depicted in VAHDAT’s figures, VAHDAT indicates that the fat-tree topology discussed may have other variations, including additional devices such as a fourth switch in the first subset of switches in the second tier-one level. The fourth switch, like the other switches illustrated, may include a connection routed as indicated.
Additionally, K N teaches and (iv) from the fourth switch to a gateway.
K N writes, “Although not shown, data center 10 may also include, for example, one or more non-edge
switches, routers, hubs, gateways, security devices such as firewalls, intrusion detection, and/or
intrusion prevention devices, servers, computer terminals, laptops, printers, databases, wireless mobile
devices such as cellular phones or personal digital assistants, wireless access points, bridges, cable
modems, application accelerators, or other network devices” (column 4, lines 53-60). K N indicates that other network devices, including non-edge switches and gateways, though not displayed, may also be included.
Claims 14-16 and 18-19 are memory claims corresponding to method claims 3-5, 7, and 11, which have already been rejected above. The applicant’s attention is directed to the rejection of claims 3-5, 7, and 11. Claims 14-16 and 18-19 are rejected under the same rationale as claims 3-5, 7, and 11.
Claims 6, 9, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over POWER, VAHDAT, SURYANARAYANA, and K N as applied to claims 4 and 16 above, and further in view of NAYAK, et al. (US 20190014049 A1, hereinafter, "NAYAK").
Regarding claim 6, POWER, VAHDAT, SURYANARAYANA, and K N teach the method of claim 4.
Additionally, K N teaches and wherein each switch in the first tier-two level of switches in the
compute fabric block is connected to at least one switch in each block of the plurality of blocks of
switches.
K N writes, “Switch fabric 14 is provided by a set of interconnected top-of-rack (TOR) switches 16A-16N
(collectively, “TOR switches 16”) coupled to a distribution layer of chassis switches 18A-18M
(collectively, “chassis switches 18”)” (column 4, lines 49-53; figure 1).
POWER, VAHDAT, SURYANARAYANA, and K N fail to explicitly disclose information regarding, “wherein each of the plurality of blocks of switches includes a predetermined number of switches,”
However, in analogous art, NAYAK teaches wherein each of the plurality of blocks of switches
includes a predetermined number of switches,
NAYAK writes, “The rack 103 can have a preconfigured number of switches, or a preconfigured number
of slots for switches or other network devices” (paragraph 0015).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and invention of POWER, VAHDAT, SURYANARAYANA, and K N to include aspects described by NAYAK for “network management in hyper-converged infrastructures.” NAYAK provides the motivation for modification stating, “The user interface elements 806, when activated, can set or configure the workload domain to have a particular performance” (paragraph 0080). NAYAK explains, “Each performance setting can have predefined expected bandwidth, dedicated bandwidth, number of switches, switch buffer, and other network bandwidth resource settings associated with each selectable bandwidth setting” (paragraph 0082).
Regarding claim 9, POWER, VAHDAT, SURYANARAYANA, K N, and NAYAK teach the method of claim 6.
Additionally, NAYAK teaches wherein the predetermined number of switches included in each
of the plurality of blocks of switches is four.
NAYAK writes, “The rack 103 can have a preconfigured number of switches, or a preconfigured number
of slots for switches or other network devices” (paragraph 0015). NAYAK adds, “While four switches
115A, 115B, 121A, and 121B are shown in FIG. 1, the site 107 can include additional switches, including
spine switches and other switches of the site 107” (paragraph 0074). NAYAK indicates that the rack can
have a preconfigured number of switches, and provides in figure 1 an example with four switches.
However, NAYAK informs the reader that additional switches, including spine switches and other
switches, may be employed.
Claim 17 is a memory claim corresponding to method claim 6, which has already been rejected above. The applicant’s attention is directed to the rejection of claim 6. Claim 17 is rejected under the same rationale as claim 6.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER A REYES whose telephone number is (703)756-4558. The examiner can normally be reached Monday - Friday 8:30 - 5:00 EDT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KHALED KASSIM can be reached at (571) 270-3770. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Christopher A. Reyes/Examiner, Art Unit 2475 2/6/2026
/KHALED M KASSIM/supervisory patent examiner, Art Unit 2475