Prosecution Insights
Last updated: April 19, 2026
Application No. 18/500,480

ROUTING IN A GPU SUPER-CLUSTER

Non-Final OA: §101, §103
Filed
Nov 02, 2023
Examiner
SUN, CHARLIE
Art Unit
2198
Tech Center
2100 — Computer Architecture & Software
Assignee
Oracle International Corporation
OA Round
1 (Non-Final)
Grant Probability: 91% (Favorable)
OA Rounds: 1-2
To Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 91% (above average; 440 granted / 484 resolved; +35.9% vs TC avg)
Interview Lift: +12.4% (moderate; resolved cases with interview)
Avg Prosecution: 2y 6m (23 currently pending)
Total Applications: 507 across all art units

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 24.7% (-15.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 484 resolved cases

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 4, 8-9, 12, 16-17, and 20 are objected to because of the following informalities:

As per claim 1, “the network device”, ll. 8-9, should be “the each network device”; “a first network device”, ll. 10-11, should be “a first network device from the plurality of network devices”.

As per claim 4, “unsatisfied”, l. 2, should be “unsatisfied,”.

As per claim 8, “a first GPU cluster”, l. 5, should be “a first GPU cluster from the plurality of GPU clusters”; “a second GPU cluster”, ll. 5-6, should be “a second GPU cluster from the plurality of GPU clusters”.

As per claim 9, see objection on claim 1. As per claim 12, see objection on claim 4. As per claim 16, see objection on claim 8. As per claim 17, see objection on claim 1. As per claim 20, see objection on claim 4.

Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter.

As per claim 1, the claim recites a series of steps and is therefore a process. The claim recites the limitation of “determining an incoming port-link of the first network device on which the packet was received . . . identifying based on the configuring, an outgoing port-link corresponding to the incoming port-link”.
These limitations, as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. Thus, the claim recites a mental process.

The limitation of “forwarding the packet on the outgoing port-link of the network device” amounts to data gathering, which is considered to be insignificant extra-solution activity (MPEP 2106.05(g)); this limitation is also a mere generic transmission and presentation of collected and analyzed data, which is likewise insignificant extra-solution activity (MPEP 2106.05(g)). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea.

As discussed above, “forwarding the packet on the outgoing port-link of the network device” amounts to data gathering, which is considered to be insignificant extra-solution activity (MPEP 2106.05(g)). “providing a plurality of graphical processing unit (GPU) clusters, the plurality of GPU clusters being communicatively coupled with one another via a plurality of network devices that are arranged in a hierarchical structure, wherein the plurality of GPU clusters includes at least a first GPU cluster operating at a first speed and a second GPU cluster operating at a second speed that is different than the first speed . . . a routing policy for each network device of the plurality of network devices, wherein the configuring includes establishing a mapping of each incoming port-link of the network device to a unique outgoing port-link of the network device . . .” is simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(d) and the Berkheimer Memo). See Ma/Dhamal. The claim is ineligible.
As per claim 2, see rejection on claim 1. “verifying . . . forwarding . . .” is simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(d) and the Berkheimer Memo). See Ma/Dhamal. The claim is ineligible.

As per claim 3, see rejection on claim 2. “the condition . . . determining . . .” is simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(d) and the Berkheimer Memo). See Ma/Dhamal/Ayoub. The claim is ineligible.

As per claim 4, see rejection on claim 2. “responsive . . . obtaining . . . executing . . . forwarding . . .” is simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(d) and the Berkheimer Memo). See Ma/Dhamal/Billor. The claim is ineligible.

As per claim 5, see rejection on claim 1. “repeating . . .” is simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(d) and the Berkheimer Memo). See Ma/Dhamal/Billor. The claim is ineligible.

As per claim 6, see rejection on claim 1. “the packet . . . GPU workload” is simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(d) and the Berkheimer Memo). See Ma/Dhamal. The claim is ineligible.

As per claim 7, see rejection on claim 1. “the plurality of network devices . . . switches” is simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(d) and the Berkheimer Memo). See Ma/Dhamal/Bergeron. The claim is ineligible.

As per claim 8, see rejection on claim 7. “the first tier of switches . . . cluster” is simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (see MPEP 2106.05(d) and the Berkheimer Memo). See Ma/Dhamal/Bergeron/De Grace. The claim is ineligible.

As per claim 9, see rejection on claim 1. As per claim 10, see rejection on claim 2. As per claim 11, see rejection on claim 3. As per claim 12, see rejection on claim 4. As per claim 13, see rejection on claim 5. As per claim 14, see rejection on claim 6. As per claim 15, see rejection on claim 7. As per claim 16, see rejection on claim 8. As per claim 17, see rejection on claim 1. As per claim 18, see rejection on claim 2. As per claim 19, see rejection on claim 3. As per claim 20, see rejection on claim 4.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 6, 9-10, 14, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Ma (US 2010/0278180) (hereinafter Ma) in view of Dhamal (US 12526225) (hereinafter Dhamal).

As per claim 1, Ma teaches: A method comprising: configuring a routing policy for each network device of the plurality of network devices (Ma, [0040]—under BRI, a routing policy can be a policy in a forwarding policy table), wherein the configuring includes establishing a mapping of each incoming port-link of the network device to a unique outgoing port-link of the network device (Ma, [0040]—under BRI, a unique outgoing port-link can be the port that the MAC address indicates); and for a packet transmitted by a device and received by a first network device (Ma, Fig 1, 100, 110, [0007]—under BRI, a first network device can be the destination that port 3 forwards the packet to), determining an incoming port-link of the first network device on which the packet was received (Ma, Fig 2, 210); identifying, based on the configuring, an outgoing port-link corresponding to the incoming port-link (Ma, Fig 2, 220); and forwarding the packet on the outgoing port-link of the network device (Ma, Fig 2, 250).
Ma does not expressly teach: providing a plurality of graphical processing unit (GPU) clusters, the plurality of GPU clusters being communicatively coupled with one another via a plurality of network devices that are arranged in a hierarchical structure, wherein the plurality of GPU clusters includes at least a first GPU cluster operating at a first speed and a second GPU cluster operating at a second speed that is different than the first speed; wherein the device is a GPU of a host machine.

However, Dhamal discloses: providing a plurality of graphical processing unit (GPU) clusters (Dhamal, Fig 35B, col 64, ll. 10-13—under BRI, a plurality of graphical processing unit (GPU) clusters can be linked GPGPUs 3530), the plurality of GPU clusters being communicatively coupled with one another via a plurality of network devices that are arranged in a hierarchical structure (Dhamal, Fig 35B, col 64, ll. 52-55—under BRI, a plurality of network devices that are arranged in a hierarchical structure can be host I/O hub 3539 and GPU link 3540), wherein the plurality of GPU clusters includes at least a first GPU cluster operating at a first speed and a second GPU cluster operating at a second speed that is different than the first speed (Dhamal, col 64, ll. 15-20—under BRI, a first GPU cluster operating at a first speed can be GPGPU 3530 running at PCIe speed; a second GPU cluster operating at a second speed can be GPGPU 3530 running at a vendor-specific communications interface or communications fabric speed); wherein the device is a GPU of a host machine (Dhamal, Fig 35B, GPGPU 3530).

Both Dhamal and Ma pertain to the art of networked devices. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use Dhamal’s method to create GPU clusters because it is well-known in the art that GPU clusters provide massive, parallel computing power essential for AI training, inferencing, and high-performance computing (HPC).
By interconnecting multiple GPU-enabled servers, these systems offer significantly faster processing speeds, high scalability, and better energy efficiency compared to traditional CPU-only systems.

As per claim 2, Ma/Dhamal teaches: The method of claim 1 (see rejection on claim 1), wherein the step of forwarding further comprises: verifying a condition associated with the outgoing port-link of the first network device (Ma, [0040]—under BRI, a condition can be a matched MAC address); and responsive to the condition being satisfied, forwarding the packet on the outgoing port-link of the first network device (Ma, Fig 2, 250).

As per claim 6, Ma/Dhamal teaches: The method of claim 1 (see rejection on claim 1), wherein the packet belongs to a GPU workload (Dhamal, col 64, l. 11—under BRI, a GPU workload can be workloads in CUDA).

As per claim 9, see rejection on claim 1. As per claim 10, see rejection on claim 2. As per claim 14, see rejection on claim 6. As per claim 17, see rejection on claim 1. As per claim 18, see rejection on claim 2.

Claims 3, 11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ma/Dhamal as applied above, and further in view of Ayoub et al (US 12301460) (hereinafter Ayoub).

As per claim 3, Ma/Dhamal teaches: The method of claim 2 (see rejection on claim 2). Ma/Dhamal does not expressly teach: wherein the condition corresponds to determining whether the outgoing port-link of the first network device is active. However, Ayoub discloses: wherein the condition corresponds to determining whether the outgoing port-link of the first network device is active (Ayoub, col 10, ll. 38-40). Both Ayoub and Ma/Dhamal pertain to the art of networked devices. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use Ayoub’s method to check whether a link is active because it is well-known in the art that, to successfully deliver packets to destinations, a link needs to be active.
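For orientation, the routing scheme recited in claims 1-3 (a configured, static mapping from each incoming port-link to a unique outgoing port-link, plus a link-active check before forwarding) can be sketched as below. This is an illustrative reading only; the names `RoutingPolicy`, `port_map`, and `forward` are invented for the sketch and do not appear in the application or the cited references.

```python
# Illustrative sketch of the claimed routing policy (claims 1-3).
# All identifiers are hypothetical; this is not the applicant's code.

from dataclasses import dataclass, field

@dataclass
class RoutingPolicy:
    # Mapping of each incoming port-link to a unique outgoing port-link
    # (the "configuring" step of claim 1).
    port_map: dict[int, int]
    # Link state per port-link; the "condition" verified in claims 2-3.
    active: dict[int, bool] = field(default_factory=dict)

    def forward(self, incoming_port: int, packet: bytes):
        """Return the outgoing port-link the packet is sent on, or None."""
        outgoing = self.port_map[incoming_port]   # identify via the configured mapping
        if self.active.get(outgoing, False):      # verify the outgoing link is active
            return outgoing                       # forward on the mapped link
        return None                               # condition unsatisfied (claim-4 fallback)

policy = RoutingPolicy(port_map={1: 4, 2: 5}, active={4: True, 5: False})
assert policy.forward(1, b"pkt") == 4       # mapped link is up: forward on it
assert policy.forward(2, b"pkt") is None    # mapped link is down: no forward
```

Note that because each incoming port-link maps to a *unique* outgoing port-link, the forwarding decision is deterministic and requires no per-packet computation beyond the lookup, which is the deterministic-routing property the claims appear to rely on.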
As per claim 11, see rejection on claim 3. As per claim 19, see rejection on claim 3.

Claims 4, 12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ma/Dhamal as applied above, and further in view of Billor et al (US 2023/0362083) (hereinafter Billor).

As per claim 4, Ma/Dhamal teaches: The method of claim 2 (see rejection on claim 2). Ma/Dhamal does not expressly teach: further comprising: responsive to the condition being unsatisfied obtaining, by the first network device, flow information associated with the packet; executing, by the first network device, an equal cost multi-path algorithm to obtain a new outgoing port-link of the first network device based on the flow information; and forwarding, by the first network device, the packet on the new outgoing port-link of the first network device.

However, Billor discloses: further comprising: responsive to the condition being unsatisfied obtaining, by the first network device, flow information associated with the packet (Billor, [0030]—under BRI, flow information can be address info); executing, by the first network device, an equal cost multi-path algorithm to obtain a new outgoing port-link of the first network device based on the flow information (Billor, [0030]); and forwarding, by the first network device, the packet on the new outgoing port-link of the first network device (Billor, [0031]).

Both Billor and Ma/Dhamal pertain to the art of networked devices. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use Billor’s equal cost multi-path algorithm to obtain a new outgoing port-link because it is well-known in the art that an equal cost multi-path algorithm offers benefits including enhanced network resource utilization, increased bandwidth capacity, and improved network reliability through load balancing.

As per claim 12, see rejection on claim 4. As per claim 20, see rejection on claim 4.
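The claim-4 fallback the examiner maps to Billor (selecting a new outgoing port-link by hashing the packet's flow information across equal-cost candidate links) can be sketched as below. The 5-tuple fields and the function name `ecmp_pick` are illustrative assumptions, not drawn from Billor or the application.

```python
# Illustrative ECMP fallback sketch (claim 4): when the mapped link is
# unavailable, hash the packet's flow information and pick among
# equal-cost candidate links, keeping each flow pinned to one path.

import hashlib

def ecmp_pick(flow: tuple, candidates: list) -> int:
    """Deterministically map a flow to one of the equal-cost out-links."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return candidates[int.from_bytes(digest[:4], "big") % len(candidates)]

flow = ("10.0.0.1", "10.0.1.9", 6, 49152, 4791)  # src, dst, proto, sport, dport
links = [7, 8, 9]
chosen = ecmp_pick(flow, links)
assert chosen in links
# The same flow always hashes to the same link, avoiding packet reordering:
assert ecmp_pick(flow, links) == chosen
```

The flow-keyed hash is the standard reason ECMP improves utilization without reordering: different flows spread across the candidate links, while packets within one flow stay on a single path.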
Claims 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Ma/Dhamal as applied above, and further in view of Liu et al (US 2017/0005921) (hereinafter Liu).

As per claim 5, Ma/Dhamal teaches: The method of claim 1 (see rejection on claim 1). Ma/Dhamal does not expressly teach: further comprising: repeating, the determining, the identifying, and the forwarding, until the packet is delivered to a destination host machine. However, Liu discloses: further comprising: repeating, the determining, the identifying, and the forwarding, until the packet is delivered to a destination host machine (Liu, [0006]). Both Liu and Ma/Dhamal pertain to the art of networked devices. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use Liu’s method to resend packets because it is well-known in the art that communication links are not always reliable; a system would need to resend packets when links are down.

As per claim 13, see rejection on claim 5.

Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Ma/Dhamal as applied above, and further in view of Bergeron et al (US 2023/0179506) (hereinafter Bergeron).

As per claim 7, Ma/Dhamal teaches: The method of claim 1 (see rejection on claim 1). Ma/Dhamal does not expressly teach: wherein the plurality of network devices correspond to a plurality of switches arranged in the hierarchical structure, the hierarchical structure including a first tier of switches, a second tier of switches, and a third tier of switches. However, Bergeron discloses: wherein the plurality of network devices correspond to a plurality of switches arranged in the hierarchical structure (Bergeron, [0099]), the hierarchical structure including a first tier of switches, a second tier of switches, and a third tier of switches (Bergeron, [0099]). Both Bergeron and Ma/Dhamal pertain to the art of networked devices.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use Bergeron’s method to use a 3-tier switch hierarchy because it is well-known in the art that a three-tier switch architecture provides superior scalability, high availability, and improved traffic management for large enterprise networks.

As per claim 15, see rejection on claim 7.

Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ma/Dhamal/Bergeron as applied above, and further in view of De Grace et al (US 12395430) (hereinafter De Grace).

As per claim 8, Ma/Dhamal/Bergeron teaches: The method of claim 7 (see rejection on claim 7), wherein the first tier of switches communicatively couples the host machine to the second tier of switches (Bergeron, [0095]—under BRI, a first-tier switch can be TORSW-4), and the second tier of switches communicatively couples the first tier of switches to the third tier of switches (Bergeron, Fig 5, SPSW1 444, PODSW1 440), wherein a first cluster and a second cluster are GPU clusters (Dhamal, col 64, ll. 15-20).

Ma/Dhamal/Bergeron does not expressly teach: wherein the third tier of switches communicatively couples a first block including a first one or more racks hosting the first cluster to a second block including a second one or more racks hosting the second cluster. However, De Grace discloses: wherein the third tier of switches communicatively couples a first block including a first one or more racks hosting the first cluster to a second block including a second one or more racks hosting the second cluster (De Grace, Fig 2A, Server1-24). Both De Grace and Ma/Dhamal/Bergeron pertain to the art of networked devices.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to use De Grace’s method of a tier of switches that communicatively couples a first block to a second block because it is well-known in the art that switch architectures—such as leaf-spine, three-tier, and modular designs—provide critical benefits including enhanced, predictable network performance, high scalability, improved redundancy, and lower, more efficient latency.

As per claim 16, see rejection on claim 8.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 2017/0289064 teaches a method of sending packets from input ports through switches.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLIE SUN, whose telephone number is (571) 270-5100. The examiner can normally be reached 9AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Vital, can be reached at (571) 272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLIE SUN/
Primary Examiner, Art Unit 2198

Prosecution Timeline

Nov 02, 2023
Application Filed
Feb 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596586
MANAGING STATE OF DISTRIBUTED CLOUD ENVIRONMENT IN PEER-TO-PEER NETWORK
2y 5m to grant Granted Apr 07, 2026
Patent 12596577
RESOURCE PROVISIONING
2y 5m to grant Granted Apr 07, 2026
Patent 12596587
CLOUD DISTRIBUTED DATABASE CAPACITY PLANNING AND ADJUSTMENT USING TIME-SERIES DATA ANALYSIS
2y 5m to grant Granted Apr 07, 2026
Patent 12596588
Edge Computing Method and System, Edge Device and Control Server
2y 5m to grant Granted Apr 07, 2026
Patent 12596589
SYSTEM AND METHOD FOR WORKLOAD MANAGEMENT BETWEEN HARDWARE COMPONENTS
2y 5m to grant Granted Apr 07, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 91%
With Interview (+12.4%): 99%
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 484 resolved cases by this examiner. Grant probability derived from career allow rate.
