DETAILED ACTION
This application has been examined. Claims 1, 3-13, and 15-24 are pending. Claims 2, 14, and 25 are cancelled.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/16/2025 has been entered.
Response to Arguments
Applicant's arguments filed 10/16/2025 have been fully considered but are moot in view of the new grounds of rejection.
Priority
This application claims the benefit of priority from Foreign Application GR20230101060 (Greece), filed December 20, 2023.
The effective filing date of the claims described in this application is December 20, 2023.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-7, 11-19, and 23-24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Qu (USPGPUB 20240314197) in view of Nakashima (USPGPUB 20180227169), further in view of Zhao (USPGPUB 20190312772), and further in view of Sindhu (US Patent 10686729).
Regarding Claim 1
Qu Paragraph 262 disclosed wherein the cross-device heterogeneous resources in the IDEC system can be fully utilized for distributed (or decentralized) execution of computing-intensive deep learning tasks in a multi-device collaboration manner, which helps distributed edge computing systems improve the deployment and execution efficiency of edge-side intelligent applications.
Qu Paragraph 200, Paragraph 217, and Paragraph 242 disclosed wherein the intelligent computing task allocation (ICTA) module realizes the cross-device distribution of the underlying deep learning operators. The ICTA module uses deep learning algorithms such as the graph convolutional network (GCN) and the deep neural network (DNN) to realize the task allocation strategy corresponding to the best system performance by learning the inherent statistical laws of complex and changeable task scheduling problems. The third functional component (that is, the ICTA module) can predict the performance of the corresponding task allocation strategy through the prediction network.
Qu disclosed (re. Claim 1) a method for allocation of network resources for executing a deep learning task, (Qu-Paragraph 262,the cross-device heterogeneous resources in the IDEC system can be fully utilized for Distributed (or called decentralized) execution of computing-intensive deep learning tasks in a multi-device collaboration manner helps distributed edge computing systems improve the deployment and execution efficiency of edge-side intelligent applications)
the method comprising: receiving a task (Qu-Paragraph 221, Step 201: acquiring a task to be processed; and producing a computation graph corresponding to the task to be processed) and an input specifying information associated with execution of the task. (Qu-Paragraph 242, Based on the generated resource graph and computation graph, ICTA realizes the cross-device distribution of the underlying deep learning operators.)
While Qu substantially disclosed the claimed invention, Qu does not disclose (re. Claim 1)
wherein the input comprises a plurality of hosts; determining a plurality of leaf switches based on the plurality of hosts; operatively coupling each leaf switch to a subset of the plurality of hosts to configure a network structure; and triggering the execution of the task using the network structure.
Nakashima disclosed (re. Claim 1)
wherein the input comprises a plurality of hosts; (Nakashima-Paragraph 64, computer 10 exerts the function as the job scheduler 10 that allocates a job that uses a predetermined number of nodes 100,Paragraph 109, The job scheduler 10 first allocates the job A, which uses eight nodes 100, )
determining a plurality of leaf switches based on the plurality of hosts; (Nakashima-Paragraph 76, the allocator 22 allocates the job to the predetermined number of unoccupied nodes 100 being connected to each of two or more leaf switches 200 and not exceeding the number of valid links among the links L connected to the leaf switch 200, Paragraph 93, In cases where the job is not able to be allocated to nodes 100 subordinate to a single leaf switch 200, which means that the job is to be extendedly allocated to nodes 100 subordinate to multiple leaf switches )
operatively coupling each leaf switch to a subset of the plurality of hosts to configure a network structure; (Nakashima-Paragraph 46, allocating a job to nodes 100 belonging to different leaf switches 200, the job allocation is controlled such that job A is allocated to unoccupied nodes of each leaf switches 200 not exceeding the number of valid upper links of the leaf switch 200.)
and triggering the execution of the task using the network structure.(Nakashima-Paragraph 69, job specification information to specify a job and the number of nodes 100 that is to be used for the job.)
Qu and Nakashima are analogous art because they present concepts and practices regarding configuration optimization. Before the effective filing date of the claimed invention, it would have been obvious to combine Nakashima into Qu. The motivation for the combination would have been to enable the job scheduler 10 to allocate the job such that the number of allocated jobs (i.e., the number of nodes in a single leaf switch 200 to be allocated the job) of each of the leaf switches 200 that are to be allocated the job does not exceed the number of effective upper links. (Nakashima-Paragraph 93)
While Qu-Nakashima substantially disclosed the claimed invention, Qu-Nakashima does not disclose (re. Claim 1) wherein the input comprises a communication pattern; determining a network structure and a corresponding subset of the plurality of hosts that serve the communication pattern; determining a plurality of leaf switches based on the subset of the plurality of hosts; and operatively coupling the plurality of leaf switches.
Zhao Paragraph 20 disclosed wherein a service request can include various user-specified conditions and demands for executing a given job (e.g., DL training) associated with the service request. For example, a service request may specify (i) a desired number (N) of accelerator devices (e.g., GPU devices) to provision for the requested job.
Zhao Paragraph 24 disclosed a dynamic “topology aware” and “bandwidth usage aware” computing resource provisioning method to efficiently provision a group of computing resources (e.g., GPU devices) in a specific configuration (e.g., Ring-AllReduce communication configuration) to execute a HPC computing job (e.g., DL training) in an optimal manner.
Zhao disclosed (re. Claim 1) wherein the input comprises a plurality of hosts (Zhao-Paragraph 20,service request can include various user-specified conditions and demands for executing a given job (e.g., DL training) associated with the service request. For example, a service request may specify (i) a desired number (N) of accelerator devices (e.g., GPU devices) to provision for the requested job) and a communication pattern; (Zhao-Paragraph 24,dynamic “topology aware” and “bandwidth usage aware” computing resource provisioning method to efficiently provision a group of computing resources (e.g., GPU devices) in a specific configuration (e.g., Ring-AllReduce communication configuration) ) determining a network structure and a corresponding subset of the plurality of hosts that serve the communication pattern; (Zhao-Paragraph 24,dynamic “topology aware” and “bandwidth usage aware” computing resource provisioning method to efficiently provision a group of computing resources (e.g., GPU devices) in a specific configuration (e.g., Ring-AllReduce communication configuration) )
Qu, Nakashima, and Zhao are analogous art because they present concepts and practices regarding configuration optimization. Before the effective filing date of the claimed invention, it would have been obvious to combine Zhao into Qu-Nakashima. The motivation for the combination would have been to consider the current status of bus/networking connection usage (bandwidth) to fully utilize the bidirectional bus/networking between provisioned devices. (Zhao-Paragraph 31)
Qu-Nakashima-Zhao disclose (re. Claim 1) determining a plurality of leaf switches (Nakashima-Paragraph 76, the allocator 22 allocates the job to the predetermined number of unoccupied nodes 100 being connected to each of two or more leaf switches 200 and not exceeding the number of valid links among the links L connected to the leaf switch 200, Paragraph 93, In cases where the job is not able to be allocated to nodes 100 subordinate to a single leaf switch 200, which means that the job is to be extendedly allocated to nodes 100 subordinate to multiple leaf switches ) based on the subset of the plurality of hosts; (Zhao-Paragraph 24,dynamic “topology aware” and “bandwidth usage aware” computing resource provisioning method to efficiently provision a group of computing resources (e.g., GPU devices) in a specific configuration (e.g., Ring-AllReduce communication configuration) )
operatively coupling the plurality of leaf switches (Nakashima-Paragraph 46, allocating a job to nodes 100 belonging to different leaf switches 200, the job allocation is controlled such that job A is allocated to unoccupied nodes of each leaf switches 200 not exceeding the number of valid upper links of the leaf switch 200.)
Qu-Nakashima disclosed (re. Claim 1) wherein the input comprises a communication pattern, (Qu-Paragraph 18-19, producing the computation graph corresponding to the task to be processed… wherein a node of the computation graph represents one operator of the task to be processed; an edge of the computation graph represents a relationship between two adjacent nodes )
While Qu-Nakashima substantially disclosed the claimed invention, Qu-Nakashima does not disclose (re. Claim 1) determining, based on the communication pattern, a number of optical circuit connections required to operatively interconnect each pair of leaf switches from the plurality of leaf switches; and operatively interconnecting, using the optical circuit connections, the plurality of leaf switches.
Sindhu Figure 10, Column 3 Line 65, and Column 11 Lines 15-25 disclosed wherein optical permutors are utilized to interconnect endpoints in a full mesh network in which each access node is logically connected to each of M groups of core switches. Servers 12 are arranged into multiple different server groups, each including any number of servers up to, for example, n servers 12.sub.1-12.sub.n. Servers 12 provide computation and storage facilities for applications and data associated with customers. Multiple access nodes 17 (e.g., 4 access nodes) may be positioned within a common access node group 19 for servicing a group of servers (e.g., 16 servers).
Sindhu disclosed (re. Claim 1) determining, based on the communication pattern, a number of optical circuit connections required to operatively interconnect each pair of leaf switches from the plurality of leaf switches; (Sindhu-Column 5 Lines 5-10, SDN controller 21 may operate in response to configuration input received from a network administrator. In some examples, SDN controller 21 operates to configure access nodes 17 to logically establish one or more virtual fabrics as overlay networks dynamically configured on top of the physical underlay network provided by switch fabric 14) and operatively interconnecting, using the optical circuit connections, the plurality of leaf switches. (Sindhu-Column 11 Lines 60-65, optical Ethernet connections may connect to one or more optical devices within the switch fabric, e.g., optical permutation devices, Column 27 Lines 30-35, each of groups 211 of access nodes for servers 215 has at least one optical coupling to each of optical permutors 204 and, by extension due to operation of optical permutors 204, has at least one optical coupling to each of switches 202.)
Qu, Nakashima, and Sindhu are analogous art because they present concepts and practices regarding configuration optimization. Before the effective filing date of the claimed invention, it would have been obvious to combine Sindhu into Qu-Nakashima. The motivation for the combination would have been to ensure that each access node in group 211A includes at least one point-to-point connection to source switching components and destination switching components in every other access node in group 211A, thereby allowing communications to or from switching tier 210 to fan-out/fan-in through the access nodes so as to originate from or be delivered to any of the servers 215 via a set of parallel data paths. Connections of full mesh 220A may represent Ethernet connections, optical connections, or the like. (Sindhu-Column 27 Lines 45-55)
Regarding Claim 13
Claim 13 (re. system) recites substantially similar limitations as Claim 1. Claim 13 is rejected on the same basis as Claim 1.
Regarding Claim 24
Claim 24 (re. computer program product) recites substantially similar limitations as Claim 1. Claim 24 is rejected on the same basis as Claim 1.
Regarding Claims 3, 15
Qu-Nakashima-Zhao-Sindhu disclosed (re. Claims 3, 15) wherein operatively interconnecting, using the optical circuit connections, the plurality of leaf switches forms a complete graph. (Nakashima-Paragraph 46, allocating a job to nodes 100 belonging to different leaf switches 200, the job allocation is controlled such that job A is allocated to unoccupied nodes of each leaf switches 200 not exceeding the number of valid upper links of the leaf switch 200.)
Regarding Claims 4, 16
Qu-Nakashima-Zhao-Sindhu disclosed (re. Claims 4, 16) wherein the optical circuit connections are bidirectional links. (Sindhu-Column 33 Lines 50-55, each port comprises a bidirectional 400 Gigabit optical interface)
Regarding Claims 5, 17
Qu-Nakashima-Zhao-Sindhu disclosed (re. Claims 5, 17) wherein the communication pattern comprises at least one of an all-to-all, a reduction operation, or scatter-gather. (Sindhu-Column 25 Line 45, full bi-directional, full-mesh point-to-point connectivity for transporting communications for servers 12 to/from core switches 22.)
Regarding Claims 6, 18
Qu-Nakashima-Zhao-Sindhu disclosed (re. Claims 6, 18) wherein the number of optical circuit connections is determined to satisfy a bandwidth requirement associated with the network structure. (Nakashima-Paragraph 35, This configuration can secure a bandwidth for communication between nodes 100 via links L between leaf switches 200 and spine switches 300)
Regarding Claims 7, 19
Qu-Nakashima-Zhao-Sindhu disclosed (re. Claims 7, 19) wherein each pair of leaf switches comprises a first leaf switch and a second leaf switch, wherein the number of optical circuit connections for each pair of leaf switches is determined based on at least the subset of the plurality of hosts operatively coupled to the first leaf switch (hi), the subset of the plurality of hosts operatively coupled to the second leaf switch (hj), and the plurality of hosts (N). (Nakashima-Paragraph 76, the allocator 22 allocates the job to the predetermined number of unoccupied nodes 100 being connected to each of two or more leaf switches 200 and not exceeding the number of valid links among the links L connected to the leaf switch 200, Paragraph 93, In cases where the job is not able to be allocated to nodes 100 subordinate to a single leaf switch 200, which means that the job is to be extendedly allocated to nodes 100 subordinate to multiple leaf switches)
Regarding Claim 11
Qu-Nakashima-Zhao-Sindhu disclosed (re. Claim 11) wherein each leaf switch comprises a plurality of uplink ports configured to operatively couple said leaf switch to a plurality of optical switches and a plurality of downlink ports configured to operatively couple said leaf switch to the plurality of hosts. (Sindhu-Figure 15,Column 16 Lines 55-65, TOR device 72 comprises an optical permutor that transports optical signals between access nodes 17 and core switches 22 and that is configured such that optical communications are “permuted” based on wavelength so as to provide full-mesh connectivity between the upstream and downstream ports without any optical interference.)
Regarding Claims 12, 23
Qu-Nakashima-Zhao-Sindhu disclosed (re. Claims 12, 23) wherein the task is a deep learning recommendation model (DLRM) task. (Qu-Paragraph 262, the cross-device heterogeneous resources in the IDEC system can be fully utilized for Distributed (or called decentralized) execution of computing-intensive deep learning tasks in a multi-device collaboration manner helps distributed edge computing systems improve the deployment and execution efficiency of edge-side intelligent applications)
Claim(s) 8-9 and 20-21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Qu (USPGPUB 20240314197) in view of Nakashima (USPGPUB 20180227169), further in view of Zhao (USPGPUB 20190312772), further in view of Sindhu (US Patent 10686729), and further in view of Bataineh (USPGPUB 20220210058).
Regarding Claims 8, 20
While Qu-Nakashima-Zhao-Sindhu substantially disclosed the claimed invention, the combination does not disclose (re. Claims 8, 20) wherein the number of optical circuit connections is determined using an integer approximation function to round the number of optical circuit connections up to a whole number, wherein the integer approximation function comprises at least a ceiling function or a rounding function.
Bataineh Paragraph 190 disclosed wherein Geometry is the Links down per switch in each stage: E or S×E or S×M×E, where S, M, E are the number of down-links of each spine, middle, edge switch. Switches in each stage is E or S:E or S:M:E, where S, M, E are the number of switches in the spine, middle, edge stages.
Bataineh disclosed (re. Claims 8, 20) wherein the number of optical circuit connections is determined using an integer approximation function to round the number of optical circuit connections up to a whole number, wherein the integer approximation function comprises at least a ceiling function or a rounding function. (Bataineh-Paragraph 190, Geometry is the Links down per switch in each stage: E or S×E or S×M×E, where S, M, E are the number of down-links of each spine, middle, edge switch. Switches in each stage is E or S:E or S:M:E, where S, M, E are the number of switches in the spine, middle, edge stages.)
Qu, Nakashima, and Bataineh are analogous art because they present concepts and practices regarding configuration optimization. Before the effective filing date of the claimed invention, it would have been obvious to combine Bataineh into Qu-Nakashima-Zhao-Sindhu. The motivation for the combination would have been to perform flow-specific traffic management to ensure the health of the entire network and fair treatment of the flows. (Bataineh-Paragraph 26)
Regarding Claims 9, 21
Qu-Nakashima-Zhao-Sindhu-Bataineh disclosed (re. Claims 9, 21) wherein the number of optical circuit connections for each pair of leaf switches is determined based on: (q × hi × hj) / N. (Bataineh-Paragraph 190, Geometry is the Links down per switch in each stage: E or S×E or S×M×E, where S, M, E are the number of down-links of each spine, middle, edge switch. Switches in each stage is E or S:E or S:M:E, where S, M, E are the number of switches in the spine, middle, edge stages.)
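Examiner's Note: for clarity of record, the per-pair connection count recited in Claims 7-10 (q × hi × hj / N, rounded up to a whole number with a ceiling function) may be illustrated by the following sketch. The function name and example values are hypothetical and are not drawn from the claims or the cited references.

```python
import math

def optical_circuit_connections(h_i: int, h_j: int, n_hosts: int, q: float = 1.0) -> int:
    """Connections between one pair of leaf switches: q * h_i * h_j / N,
    rounded up to a whole number with a ceiling function.

    h_i, h_j -- hosts coupled to the first and second leaf switch
    n_hosts  -- total number of hosts (N)
    q        -- bandwidth factor: 1 for full bisection bandwidth,
                less than 1 for a reduced bandwidth requirement
    """
    return math.ceil(q * h_i * h_j / n_hosts)

# Example: 16 hosts total, 4 hosts on each of two leaf switches,
# full bisection bandwidth (q = 1): ceil(16/16) = 1 connection.
print(optical_circuit_connections(4, 4, 16))
# Reduced bandwidth requirement (q = 0.5), 8 hosts per leaf switch:
# ceil(0.5 * 64 / 16) = 2 connections.
print(optical_circuit_connections(8, 8, 16, q=0.5))
```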
Claim(s) 10 and 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Qu (USPGPUB 20240314197) in view of Nakashima (USPGPUB 20180227169), further in view of Zhao (USPGPUB 20190312772), further in view of Sindhu (US Patent 10686729), further in view of Bataineh (USPGPUB 20220210058), and further in view of Schlansker (USPGPUB 20110270987).
Regarding Claims 10, 22
While Qu-Nakashima-Zhao-Sindhu-Bataineh substantially disclosed the claimed invention, the combination does not disclose (re. Claims 10, 22) wherein if the bandwidth requirement is a full bisection bandwidth requirement, q is set to 1, and wherein if the bandwidth requirement is less than the full bisection bandwidth requirement, q is set to a value less than 1.
Schlansker Paragraph 67 disclosed wherein the predetermined amount (K) by which the allocated bandwidth is increased or decreased may be, for example, a fixed amount, such as 2 Gb/s, or a percentage, such as 20%.
Schlansker disclosed (re. Claims 10, 22) wherein if the bandwidth requirement is a full bisection bandwidth requirement, q is set to 1, and wherein if the bandwidth requirement is less than the full bisection bandwidth requirement, q is set to a value less than 1. (Schlansker-Paragraph 67, the predetermined amount (K) by which the allocated bandwidth is increased or decreased may be, for example, a fixed amount, such as 2 Gb/s, or a percentage, such as 20%.)
Qu, Nakashima, and Schlansker are analogous art because they present concepts and practices regarding configuration optimization. Before the effective filing date of the claimed invention, it would have been obvious to combine Schlansker into Qu-Nakashima-Zhao-Sindhu-Bataineh. The motivation for the combination would have been to enable a dynamic spatial bandwidth allocation mechanism that enables any traffic class to use all of its allocated bandwidth should it be required to do so. Furthermore, a mechanism is provided which enables unused bandwidth within the network to be temporarily allocated to some or all of the traffic classes. (Schlansker-Paragraph 29)
Conclusion
Examiner’s Note: In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GREG C BENGZON whose telephone number is (571)272-3944. The examiner can normally be reached on Monday - Friday 8 AM - 4:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Follansbee, can be reached at (571) 272-3964. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GREG C BENGZON/ Primary Examiner, Art Unit 2444