Prosecution Insights
Last updated: April 19, 2026
Application No. 18/765,672

METHODS, DEVICES, AND COMPUTER-READABLE MEDIA FOR LOAD BALANCING IN PORT CHANNELS

Non-Final OA — §103, §112
Filed: Jul 08, 2024
Examiner: WOOLCOCK, MADHU
Art Unit: 2451
Tech Center: 2400 — Computer Networks
Assignee: Cisco Technology Inc.
OA Round: 1 (Non-Final)
Grant Probability: 55% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 55% — grants 55% of resolved cases (159 granted / 287 resolved; -2.6% vs TC avg)
Interview Lift: +72.0% — strong lift in resolved cases with interview
Typical Timeline: 4y 3m avg prosecution; 12 currently pending
Career History: 299 total applications across all art units

Statute-Specific Performance

§101: 15.1% (-24.9% vs TC avg)
§103: 43.2% (+3.2% vs TC avg)
§102: 5.7% (-34.3% vs TC avg)
§112: 32.6% (-7.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 287 resolved cases

Office Action

Grounds: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

1. This communication is in response to claims 1-20 filed on 07/08/2024.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Specifically, for the following reason:

2. Claims 1, 8, and 14 recite the limitation "the load balancing of the network traffic" in the second limitation of the claims and "the network traffic" in the third limitation of the claims; however, the claims previously recite "load balancing of network traffic transmitted by a first port channel" and "load balancing of network traffic transmitted by a second port channel" in the first and second limitations. It is therefore unclear which of these previous recitations the subsequent recitations of "the load balancing of the network traffic" and "the network traffic" are intended to refer to, or if each reference to load balancing network traffic is intended to refer to the same load balancing of the same network traffic.
For purposes of examination, each recitation of network traffic is interpreted as referring to any traffic on the network; "the load balancing of the network traffic" at the end of the first limitation is interpreted as referring to the load balancing of network traffic transmitted by a first port channel; and "the load balancing of the network traffic" in the second limitation is interpreted as referring to the load balancing of network traffic transmitted by the second port channel.

Claims 2-7, 9-13 and 15-20 are rejected in view of their respective dependencies from claims 1, 8, and 14.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

3. Claims 1, 8 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Mizrahi et al. (US 2020/0007448) in view of Kotha et al. (US 2011/0085570) and in further view of LAN Switching Configuration Guide, Cisco IOS XE Release 2 (referred to as Cisco hereafter).

Regarding claim 1, Mizrahi teaches a method comprising: causing a first change for performing load balancing of network traffic transmitted by a first port channel within a network to a first configuration for the load balancing of the network traffic (the load balancer 126 determines group-specific load-balancing values to independently control load balancing on network interfaces that belong to particular network interface groups, [0022]; the load balancing configuration engine 142 is configured to overwrite configuration information in the entry of the per group load balancing memory 134 with new configuration information that will cause the group-specific load balancing value generator 128 to begin generating a different group-specific load balancing value 129 for packets that are forwarded to the particular group, [0027]; Reconfiguring group-specific load balancing configuration corresponding to the network interface group includes updating the configuration information stored in the memory, [0051]); causing a second change for performing load balancing of network traffic transmitted by a second port channel within the network to a second configuration for performing the load balancing of the network traffic (reconfiguring the group-specific load balancing configuration includes updating configuration information, [0043]; Reconfiguring group-specific load balancing configuration corresponding to the network interface group includes updating the configuration information stored in the memory, [0051]), the second configuration is different from the first configuration (the sub-hash configuration fields 304-1 in different entries 302, corresponding to different groups, specify different subsets of bits (bits at different bit indices) and/or different offsets to be used for generating group-specific sub-hash values for packets directed to the corresponding groups, [0030]; reconfiguring, at the network device, the group-specific load balancing configuration corresponding to the second network interface group to redistribute selection of network interfaces, among the set of network interfaces belonging to the second network interface group, for transmission of packets subsequently directed to the network interface group without modifying selection of network interfaces among the set of network interfaces belonging to the first network interface group, [0057]); and load balancing the network traffic using the first configuration for the network traffic over the first port channel (determining, at the network device based on group-specific load balancing configuration corresponding to the network interface group, a group-specific load balancing value for the packet; selecting, at the network device based on the group-specific load balancing value, a network interface, from among the set of network interfaces belonging to the network interface group, for transmission of the packet; transmitting the packet towards the destination of the packet via the network interface selected for transmission of the packet, [0044]) and the second configuration for the network traffic over the second port channel (a second group-specific load balancing value for a second packet received by the network device, the second packet directed to a second group of network interfaces different from the first group of network interfaces, including determining the second group-specific load balancing value based on group-specific load balancing configuration corresponding to the second network interface group different from the group-specific load balancing configuration corresponding to the first network interface group, [0054]).

However, Mizrahi does not explicitly disclose the first configuration is a first hash algorithm and the second configuration is a second hash algorithm.

Kotha teaches a first port channel within a network (first-level LAG 116a of FIG. 1) having a first hash algorithm for load balancing of network traffic (when a message is communicated to a particular first-level LAG 116, a hashing algorithm (which may be different, identical, or similar hashing algorithm than that used at second-level LAG 118) may be used to determine the member physical port 112 to which the message is communicated, [0032]); a second port channel within the network (first-level LAG 116b of FIG. 1) having a second hash algorithm for performing the load balancing of the network traffic (when a message is communicated to a particular first-level LAG 116, a hashing algorithm (which may be different, identical, or similar hashing algorithm than that used at second-level LAG 118) may be used to determine the member physical port 112 to which the message is communicated, [0032]), the second hash algorithm is different from the first hash algorithm (the hashing algorithm used by first-level LAG 116a may be different, identical, or similar hashing algorithm than that used at first-level LAG 116b, [0033]); and load balancing the network traffic using the first hash algorithm for the network traffic over the first port channel and the second hash algorithm for the network traffic over the second port channel (in response to a determination that the outgoing logical port is a LAG, the information handling system may determine hashing keys and a hashing algorithm for the LAG.
In some embodiments, the hashing keys and/or hashing algorithm may be predetermined and/or preset, [0039]; based on the hashing algorithm and context information, the information handling system may determine the outgoing logical port within the link aggregation group, [0041]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to offer different hash functions for LAGs in the system/method of Mizrahi as suggested by Kotha to provide more independence between the load balancing of links in different port groups. One would be motivated to combine these teachings as an alternative offering improved protection against congestion or faults on one particular group of ports when managing distribution of multiple high-traffic flows.

However, Mizrahi-Kotha do not explicitly disclose a default algorithm.

Cisco teaches causing a first change from a default algorithm for performing load balancing by a first port channel (Flow-based load balancing is enabled by default at the global level, Load Balancing on Port Channels page 63) to a first hash algorithm (The port-channel configuration overrides the global configuration, Load Balancing on Port Channels page 63; Applies a load-balancing method to the specific port channel, Configuring Load Balancing on a Port Channel page 65); and causing a second change from the default algorithm for performing load balancing by a second port channel to a second hash algorithm (To configure load balancing on a port channel, perform the following steps. Repeat these steps for each GEC interface, Configuring Load Balancing on a Port Channel page 64), the second hash algorithm is different from the first hash algorithm (The following example shows a configuration where flow-based load balancing is configured on port channel 2, Flow-Based Load Balancing Example page 67).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize a global default setting in the system/method of Mizrahi-Kotha as suggested by Cisco in order to offer a standard and consistent traffic distribution configuration. One would be motivated to combine these teachings to initially provide a stable baseline for load balancing and then allow flexibility for adjusting the configuration for each channel based on particular traffic patterns.

Claims 8 and 14 recite limitations equivalent to those of claim 1, and are therefore rejected in view of the same rationale.

4. Claims 2, 4, 9, 11, 15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Mizrahi-Kotha-Cisco in view of Biswas et al. (US 2014/0050091).

Regarding claim 2, Mizrahi does not explicitly disclose the method of claim 1, wherein the first port channel is within a wide area network (WAN).

Kotha teaches wherein the first port channel is within a wide area network (WAN) (Network 110 may be implemented as, or may be a part of, a storage area network (SAN), personal area network (PAN), local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), [0027]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to recognize a WAN in the system/method of Mizrahi as suggested by Kotha as a network that allows connections over large geographic distances. One would be motivated to combine these teachings for efficient bandwidth use when sending packets over links to communicate with remote devices and applications.

However, Mizrahi-Kotha-Cisco do not explicitly disclose Layer 2 connectivity over Layer 3 networks, or wherein the first hash algorithm calculates a hash based on an inner source media access control address (MAC address) and destination MAC address.
Biswas teaches wherein the first port channel is within a wide area network (WAN) (the networks 104, 106 may each take any form including, but not limited to a LAN, a VLAN, a WAN such as the Internet, [0036]) that supports Layer 2 connectivity over Layer 3 networks (The overlay network 300 has the capability of tunneling Layer-2 (L2) packets over the Layer-3 (L3) network, [0048]), and wherein the first hash algorithm calculates a hash based on an inner source media access control address (MAC address) and destination MAC address (The hash may be based on one or more parameters, including: a virtual port corresponding to the VM (such as the virtual port assigned to the VM), an inner packet header SMAC (inner_smac) address, an inner packet header Destination MAC (inner_dmac) address, [0070]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to hash inner packet MAC addresses in the system/method of Mizrahi-Kotha-Cisco as suggested by Biswas in a WAN network capable of tunneling L2 packets over a L3 network. One would be motivated to combine these teachings for faster decisions based on MAC addresses without IP routing lookups and for consistent connections between a source and destination.

Regarding claim 4, Mizrahi does not explicitly disclose the method of claim 1, wherein the first port channel is a transport-side port channel.

Kotha teaches wherein the first port channel is a transport-side port channel (see FIG. 3).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize hash algorithms for distributing traffic on transport LAGs in the system/method of Mizrahi as suggested by Kotha in order to efficiently utilize link bandwidth when transporting packets across a network. One would be motivated to combine these teachings to maintain consistent paths for flows between switches.

However, Mizrahi-Kotha-Cisco do not explicitly disclose wherein the first hash algorithm calculates a hash based on attributes of inner IP packets.

Biswas teaches wherein a first hash algorithm calculates a hash based on attributes of inner IP packets (The hash may be based on one or more parameters, including: a virtual port corresponding to the VM (such as the virtual port assigned to the VM), an inner packet header SMAC (inner_smac) address, an inner packet header Destination MAC (inner_dmac) address, [0070]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to hash inner MAC address information in the system/method of Mizrahi-Kotha-Cisco as suggested by Biswas for load balancing traffic among network interfaces. One would be motivated to combine these teachings for faster decisions based on MAC addresses without IP routing lookups and for consistent connections between a source and destination.

Claims 9 and 15 recite limitations equivalent to those of claim 2, and are therefore rejected in view of the same rationale. Claims 11 and 17 recite limitations equivalent to those of claim 4, and are therefore rejected in view of the same rationale.

5. Claims 3, 10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mizrahi-Kotha-Cisco in view of Biswas and in further view of Shen et al. (US 2023/0188492, referred to herein as Shen1).

Regarding claim 3, Mizrahi-Kotha-Cisco do not explicitly disclose the method of claim 1, wherein the first hash algorithm calculates a hash based on a source IP address of a packet.

Biswas teaches wherein the first hash algorithm calculates a hash based on a source IP address of a packet (a hash may be performed on the Source IP (SIP) address to choose a team member, [0094]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a source IP address for performing hashing in the system/method of Mizrahi-Kotha-Cisco as suggested by Biswas to ensure keeping traffic from a same source on a consistent path. One would be motivated to combine these teachings as a simple way for packets from the source to be communicated over a network in a maintained order.

However, Mizrahi-Kotha-Cisco-Biswas do not explicitly disclose wherein the network utilizes Network Address Translation - Direct Internet Access (NAT-DIA) capabilities.

Shen1 teaches wherein a network utilizes Network Address Translation - Direct Internet Access (NAT-DIA) capabilities (For DIA, NAT translation for packets exiting from SD-WAN edge router 150 into Internet 170 may be enabled on SD-WAN edge router 150, [0026]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize NAT DIA in the system/method of Mizrahi-Kotha-Cisco-Biswas as suggested by Shen1 to enable devices on a private network to share a public IP address when communicating with external destinations. One would be motivated to combine these teachings to provide secure communications with external networks, such as the Internet, in a way that conserves public IP addresses and leaves internal address schemes unaffected.

Claims 10 and 16 recite limitations equivalent to those of claim 3, and are therefore rejected in view of the same rationale.

6. Claims 5, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Mizrahi-Kotha-Cisco in view of Shen et al. (US 2022/0413893, referred to herein as Shen2).
Regarding claim 5, Mizrahi-Kotha-Cisco do not explicitly disclose the method of claim 1, wherein the first port channel is a service side port channel, and wherein the first hash algorithm calculates a hash based on a combination of source IP addresses, destination IP addresses, and a transport layer port number of the network.

Shen2 teaches wherein a first port channel is a service side port channel (identifies that the packet requires transmission via an overlay network to the destination MFE (e.g., based on one or more destination addresses of the packet, such as MAC and/or IP addresses), [0003]; if the destination MFE has multiple tunnel endpoints that may be used, the source MFE uses a similar mechanism (e.g., calculating a hash value of certain packet characteristics) to select a destination tunnel endpoint. With both the source and destination tunnel endpoints selected, the MFE can encapsulate the packet and transmit the packet onto the physical network between the two endpoints, [0047]), and wherein a first hash algorithm calculates a hash based on a combination of source IP addresses, destination IP addresses, and a transport layer port number of a network (a hash of the source MAC and/or IP address, a hash of the standard connection 5-tuple (source and destination IP addresses, source and destination transport layer ports, and transport protocol), [0030]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to perform hash load balancing when tunneling packets to a destination device in the system/method of Mizrahi-Kotha-Cisco as suggested by Shen2 to efficiently deliver traffic to the destination over a consistent tunnel. One would be motivated to combine these teachings to ensure that packets of a particular flow are both sent and received using the same tunnel endpoints.
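The "standard connection 5-tuple" hash quoted from Shen2 above is a common flow-hashing technique. As a minimal sketch of the idea only (all names here are hypothetical and illustrative, not any cited reference's actual implementation):

```python
import hashlib

def select_link(src_ip, dst_ip, src_port, dst_port, proto, links):
    """Pick an egress link for a flow by hashing its 5-tuple.

    Packets sharing the same 5-tuple always hash to the same link,
    which keeps each flow on a consistent path (and, per Shen2's
    rationale, on consistent tunnel endpoints).
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(links)
    return links[index]

links = ["eth0", "eth1", "eth2", "eth3"]
a = select_link("10.0.0.1", "192.0.2.7", 49152, 443, "tcp", links)
b = select_link("10.0.0.1", "192.0.2.7", 49152, 443, "tcp", links)
assert a == b  # same flow hashes to the same link every time
```

Because the selection is deterministic per flow, reordering within a flow is avoided while distinct flows spread across the member links.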
Claims 12 and 18 recite limitations equivalent to those of claim 5, and are therefore rejected in view of the same rationale.

7. Claims 6, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mizrahi-Kotha-Cisco in view of Tatsumi (US 2015/0049765).

Regarding claim 6, Mizrahi-Kotha-Cisco do not explicitly disclose the method of claim 1, wherein the first port channel is a box to box port channel, and wherein the first hash algorithm calculates a hash based on Layer 3 and Layer 4 elements of an inner packet of the network traffic.

Tatsumi teaches wherein the first port channel is a box to box port channel (a plurality of box-type switching devices (here, referred to as port switch) and a plurality of box-type switching devices (here, referred to as fabric switch) which function to relay a frame between the port switches are provided. Each port switch has a link to each of the plurality of fabric switches, and sets a link aggregation group (hereinafter, abbreviated to as LAG) to the plurality of links, [0006]), and wherein the first hash algorithm calculates a hash based on Layer 3 (it is possible to select a first mode in which the hashing operation is performed by using the source IP address and the destination IP address in addition to the port number of UDP/TCP, [0072]) and Layer 4 elements (the port number used for the hashing operation of the LAG is not limited to the UDP (L4) port number, and the L4 TCP (Transmission Control Protocol) port number can also be used, [0054]) of an inner packet of the network traffic (a value calculated by hashing operation of the inner frame, [0047]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply hash load balancing in a box fabric in the system/method of Mizrahi-Kotha-Cisco as suggested by Tatsumi to efficiently spread traffic across multiple physical links and prevent bottlenecks.
One would be motivated to combine these teachings and recognize that using uniquely identifying fields, such as IP addresses and ports, for hashing would help to keep flow sessions intact across network switches, such as in a box fabric.

Claims 13 and 19 recite limitations equivalent to those of claim 6, and are therefore rejected in view of the same rationale.

8. Claims 7 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mizrahi-Kotha-Cisco in view of Matthews et al. (US 10,574,577).

Regarding claim 7, Mizrahi-Kotha-Cisco do not explicitly disclose the method of claim 1, wherein the first hash algorithm and the second hash algorithm are configurable from a controller.

Matthews teaches wherein a first hash algorithm and a second hash algorithm are configurable from a controller (Hash function selector 560 may use the collected statistics to identify an optimal hash function to select at a given time for a given group of paths. Since the performance of a hash function is not known initially, and may change over time, various learning techniques may be utilized in selecting an active hash function, column 27 lines 16-21).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize a hash function selector in the system/method of Mizrahi-Kotha-Cisco as suggested by Matthews to dynamically identify an optimal hash function at a given time for a given group. One would be motivated to combine these teachings to monitor and select a most efficient hash function to be used for particular traffic based on changing real-time network conditions.

Claim 20 recites limitations equivalent to those of claim 7, and is therefore rejected in view of the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Manur et al., US 7,190,696 – first hash value used to drive LAG distribution.
Subramanian et al., US 8,014,278 – a load balancing process selecting different techniques for different physical interface groupings.
S P et al., US 8,259,585 – bins mapped to multiple different hash functions mapped to output links.
Matthews et al., US 9,246,810 – performing different hash-based load balancing when selecting a destination link.
Redgate, US 2006/0031506 – supporting different load balancing algorithms within a service group.
Salt, US 2009/0187663 – each EIS Logical Group having its own Load Balancer instance and individual Logical Groups configured to use different load balancing algorithms.
Chowdhury et al., US 2014/0198647 – changing LAG hashing algorithms used by switches.
Natarajan et al., US 2014/0219081 – selecting from a plurality of hashing algorithms for sending traffic via ports on a sub-LAG.
Hendel, US 2015/0078375 – different hash functions causing a load balancing module to generate different hashes that correspond to different outgoing ports for packets in different data flows.
Jain et al., US 2016/0094643 – adjusting load balancing among a plurality of service nodes where each port has its own load balancer.
Baradaran et al., US 2018/0309822 – invoking an instance of a load balancer and switching to a different load balancing function.
Wang et al., US 2020/0151649 – evaluating workflow rules on a per channel basis.
LI et al., US 2020/0259748 – traffic management policies performed on a per-group basis.
Mittal et al., US 2021/0135993 – hash-based traffic load-balancing across a plurality of devices in different domains.
Marrotte, US 2021/0306254 – a first hash to determine which LAG to use and a second hash to determine which path within the selected LAG.
Fettes et al., US 2025/0310114 – using a first and second hash on packets to select an outgoing port channel.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MADHU WOOLCOCK whose telephone number is (571)270-3629.
The examiner can normally be reached Tuesday, Thursday 9-6 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chris Parry, can be reached at 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

MADHU WOOLCOCK
Examiner, Art Unit 2451

/MADHU WOOLCOCK/
Primary Examiner, Art Unit 2451
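Stripped of the claim language, the arrangement the §103 rejection pieces together (Cisco's global default with per-channel overrides; Kotha's independently chosen hash per LAG) amounts to something like the following sketch. All names and algorithms here are hypothetical and illustrative only, not any party's actual implementation:

```python
import zlib

def hash_src_dst_ip(pkt):
    # Example algorithm: hash on source + destination IP addresses.
    return zlib.crc32(f"{pkt['src_ip']}|{pkt['dst_ip']}".encode())

def hash_src_dst_mac(pkt):
    # Example algorithm: hash on inner source + destination MAC
    # addresses (cf. the Biswas inner_smac/inner_dmac teaching).
    return zlib.crc32(f"{pkt['src_mac']}|{pkt['dst_mac']}".encode())

DEFAULT_ALGO = hash_src_dst_ip  # global default (cf. Cisco's flow-based default)

class PortChannel:
    def __init__(self, members, algo=None):
        self.members = members
        # A per-channel setting overrides the global default, so two
        # port channels can load-balance with different hash algorithms.
        self.algo = algo or DEFAULT_ALGO

    def select_member(self, pkt):
        # Deterministically map the packet's hash to a member link.
        return self.members[self.algo(pkt) % len(self.members)]

pc1 = PortChannel(["Gi0/1", "Gi0/2"])                    # uses the default
pc2 = PortChannel(["Gi0/3", "Gi0/4"], hash_src_dst_mac)  # per-channel override
pkt = {"src_ip": "10.0.0.1", "dst_ip": "192.0.2.7",
       "src_mac": "aa:bb:cc:00:00:01", "dst_mac": "aa:bb:cc:00:00:02"}
print(pc1.select_member(pkt), pc2.select_member(pkt))
```

The design point the rejection leans on is exactly this separation: a stable global baseline, with each channel free to hash on different packet fields.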

Prosecution Timeline

Jul 08, 2024 – Application Filed
Mar 17, 2026 – Non-Final Rejection (§103, §112)
Apr 06, 2026 – Interview Requested
Apr 14, 2026 – Applicant Interview (Telephonic)
Apr 14, 2026 – Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598149 – APPARATUS AND METHOD FOR POOLING NETWORK INTERFACE CARDS IN A CLOUD NETWORK (granted Apr 07, 2026; 2y 5m to grant)
Patent 12580876 – DYNAMIC SKILL HANDLING MECHANISM FOR BOT PARTICIPATION IN SECURE MULTI-USER COLLABORATION WORKSPACES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12562962 – PARALLEL EXECUTION OF NETWORK SERVICES WITH OVERLAPPING DEVICE CONFIGURATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12549637 – ELECTRONIC DEVICE ESTABLISHING DATA SESSION WITH NETWORK SLICE, AND METHOD FOR OPERATING SAME (granted Feb 10, 2026; 2y 5m to grant)
Patent 12537735 – Mobile network synchronization domain anomaly identification and correlation (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 55%
With Interview: 99% (+72.0%)
Median Time to Grant: 4y 3m
PTA Risk: Low
Based on 287 resolved cases by this examiner. Grant probability derived from career allow rate.
