Prosecution Insights
Last updated: April 19, 2026
Application No. 18/430,952

RUNTIME CONFIGURABLE MODULAR PROCESSING TILE

Non-Final OA §103
Filed
Feb 02, 2024
Examiner
DOMAN, SHAWN
Art Unit
2183
Tech Center
2100 — Computer Architecture & Software
Assignee
Arm Limited
OA Round
3 (Non-Final)
66%
Grant Probability
Favorable
3-4
OA Rounds
2y 9m
To Grant
90%
With Interview

Examiner Intelligence

Grants 66% — above average
66%
Career Allow Rate
183 granted / 275 resolved
+11.5% vs TC avg
Strong +23% interview lift
+23.4%
Interview Lift
resolved cases with interview
Typical timeline
2y 9m
Avg Prosecution
47 currently pending
Career history
322
Total Applications
across all art units

Statute-Specific Performance

§101
2.8%
-37.2% vs TC avg
§103
47.2%
+7.2% vs TC avg
§102
18.0%
-22.0% vs TC avg
§112
26.3%
-13.7% vs TC avg
Tech Center averages are estimates • Based on career data from 275 resolved cases

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1 and 19 have been amended. Claims 1, 3, 4, 8, 9, 11-16, and 19 have been examined. The claim objections in the previous Office Action have been addressed and are withdrawn.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 10, 2025 has been entered.

Claim Objections

Claims 1, 3, 4, 8, 9, and 11-16 are objected to because of the following informalities. Claim 1 recites, “perform said processing task such that to said configuration instruction.” This appears to be a typographical error. Applicant may have intended “perform said processing task such that [[to]] said configuration instruction.” Claims 3, 4, 8, 9, and 11-16 are objected to as depending from objected-to base claims and failing to remedy the deficiencies of those claims. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 4, 8, 9, 11-16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over US Publication No. 2010/0049961 by Bell et al. (hereinafter referred to as “Bell”) in view of US Publication No. 2021/0126877 by Jalal et al. (hereinafter referred to as “Jalal”) in view of US Patent No. 7,346,759 by Ansari et al. (previously cited and hereinafter referred to as “Ansari”).

Regarding claims 1 and 19, taking claim 1 as representative, Bell discloses: a data processing system comprising: a plurality of modular processing tiles, each of the plurality of modular processing tiles comprising runtime-configurable processing circuitry (Bell discloses, at Figure 1 and related description, processing units, which discloses modular processing tiles, that each include, as disclosed at Figure 2 and related description, programmable circuitry.); and a control module to control the plurality of modular processing tiles, wherein the plurality of modular processing tiles are arranged to communicate with said control module over a communication network, said control module comprising (Bell discloses, at Figure 2 and related description, a function manager that reprograms the cores and that is coupled to cores, which discloses a control module to control the plurality of modular processing tiles, wherein the plurality of modular processing tiles are arranged to communicate with said control module over a communication network.): instruction decoding circuitry to decode instructions (Bell discloses, at Figure 1 and related description, the single thread optimization module provides instructions, which, given that the cores can be implemented as PowerPC cores, discloses decoding circuitry to decode the instructions.); information collecting circuitry to collect information relating to said plurality of modular processing tiles …wherein said information relating to the plurality of modular
processing tiles comprises tile telemetry of said plurality of modular processing tiles… (Bell discloses, at Figure 1 and related description, the single thread optimization module collects performance monitor information. Bell also discloses, at Figure 2 and related description, collecting performance data, which discloses tile telemetry.); and instruction processing circuitry to process instructions decoded by the instruction decoding circuitry (Bell discloses, at Figure 2 and related description, core units, which discloses instruction processing circuitry to process instructions decoded by the instruction decoding circuitry.), wherein, in response to an instruction to perform a processing task decoded by said instruction decoding circuitry, said instruction processing circuitry of said control module is configured to generate …[configuration information] for at least a portion of said plurality of modular processing tiles based on the collected information relating to said portion of said plurality of modular processing tiles…(Bell discloses, at Figure 2 and related description, dynamically reprogramming core units based on measured workload activity, which discloses generating configuration information based on collected information. 
The workload activity is performed in response to an instruction to perform a processing task based on an instruction decoded by the decoding circuitry.), and wherein, responsive to said …[configuration information], the control module is configured to, at runtime, select, arrange and functionally combine said portion of said plurality of modular processing tiles into an arrangement and, at runtime, to configure a flow of data …to adapt the arrangement into an interconnected network to perform said processing task such that to said configuration instruction causes said runtime-configurable processing circuitry of said portion of said plurality of modular processing tiles to be configured at runtime to perform said processing task as an interconnected network (Bell discloses, at Figure 2 and related description, dynamically reprogramming core units, which are coupled together, based on measured performance data, which discloses responsive to said …[configuration information], the control module is configured to, at runtime, select, arrange and functionally combine said portion of said plurality of modular processing tiles into an arrangement and, at runtime, to configure a flow of data …to adapt the arrangement into an interconnected network to perform said processing task such that to said configuration instruction causes said runtime-configurable processing circuitry of said portion of said plurality of modular processing tiles to be configured at runtime to perform said processing task as an interconnected network. See also Figure 1, which shows the core units are coupled together, which discloses an interconnected network.). 
Bell does not explicitly disclose the aforementioned collected information includes information relating to said communication network, said information relating to said communication network comprises network telemetry, the aforementioned configuration information is a configuration instruction, and the aforementioned configuration information is further based on the collected information relating to said communication network, and the aforementioned flow of data is between each of said plurality of modular processing tiles in the arrangement. However, Bell discloses reprogramming core units based on performance monitoring information analyzed by a thread optimization module. See ¶ [0020] et seq. Bell also discloses implementing the thread optimization module as executable instructions. Given this disclosure, it would have been obvious to a person having ordinary skill in the art to implement the configuration information used to reconfigure the core units as a configuration instruction. Doing so would improve performance by implementing a simple and direct approach to convey reconfiguration operations.

Also in the same field of endeavor (e.g., processing using interconnected elements), Jalal discloses: detecting information related to an interconnect fabric and modifying operation in response to the information (Jalal discloses, at ¶ [0042], determining whether an interconnect fabric is congested, which discloses collecting information that includes information relating to said communication network and comprises network telemetry. Jalal also discloses, at ¶ [0042], performing different actions based on the collected information, which discloses configuration based on the collected information relating to said communication network.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Bell to include collecting network information and configuring based on the network information, as disclosed by Jalal, in order to ensure efficient transmission of data between interconnected elements of a computing system.

Also in the same field of endeavor (e.g., processing using interconnected elements), Ansari discloses: data between tiles (Ansari discloses, at Figures 28 and 29 and related description, transmitting data between adjacent tiles.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Bell to include transmitting data between processing elements, as disclosed by Ansari, in order to improve performance by efficiently using multiple processing elements.

Regarding claim 3, Bell discloses the elements of claim 2, as discussed above. Bell also discloses: said tile telemetry comprises performance statistics of said plurality of modular processing tiles, said performance statistics being statistics relating to performance or power status of said modular processing tile, and, optionally, said performance statistics comprises power availability, memory availability, processing capacity, processing speed, processing latency, or any combination thereof (Bell discloses, at Figure 2 and related description, the performance data includes load data, which discloses processing capacity.).

Regarding claim 4, Bell discloses the elements of claim 1, as discussed above.
Bell also discloses: said plurality of modular processing tiles comprises local storage to store one or more past configurations of said runtime-configurable processing circuitry, and said information relating to plurality of modular processing tiles comprises said one or more past configurations (Bell discloses, at Figure 2 and related description, the processors store performance history, which discloses past configurations.).

Regarding claim 8, Bell discloses the elements of claim 1, as discussed above. Bell also discloses: said at least a portion of said plurality of modular processing tiles forming said interconnected network is configured …in response to said configuration … [information] (Bell discloses, at Figure 2 and related description, configuring the processors, which discloses in response to a configuration instruction.). Bell does not explicitly disclose the configuration comprises one or more cortical columns and the aforementioned configuration information is a configuration instruction. However, Bell discloses reprogramming core units based on performance monitoring information analyzed by a thread optimization module. See ¶ [0020] et seq. Bell also discloses implementing the thread optimization module as executable instructions. Given this disclosure, it would have been obvious to a person having ordinary skill in the art to implement the configuration information used to reconfigure the core units as a configuration instruction. Doing so would improve performance by implementing a simple and direct approach to convey reconfiguration operations.

Also in the same field of endeavor (e.g., processing using interconnected elements), Ansari discloses: columns of processing elements (Ansari discloses, at Figures 28 and 29 and related description, columns of processing elements.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Bell to include connecting processing elements in columns, as disclosed by Ansari, in order to improve performance by efficiently using multiple processing elements.

Regarding claim 9, Bell discloses the elements of claim 8, as discussed above. Bell also discloses: said at least a portion of said plurality of modular processing tiles forming said interconnected network is configured to aggregate …in response to said configuration … [information] (Bell discloses, at Figure 1 and related description, configuring the processors to work together, which discloses aggregating in response to a configuration instruction.). Bell does not explicitly disclose aggregating the processors into a cascade such that one or more modular processing tiles of said interconnected network output directly to one or more other modular processing tiles of said interconnected network and the aforementioned configuration information is a configuration instruction. However, Bell discloses reprogramming core units based on performance monitoring information analyzed by a thread optimization module. See ¶ [0020] et seq. Bell also discloses implementing the thread optimization module as executable instructions. Given this disclosure, it would have been obvious to a person having ordinary skill in the art to implement the configuration information used to reconfigure the core units as a configuration instruction. Doing so would improve performance by implementing a simple and direct approach to convey reconfiguration operations.

Also in the same field of endeavor (e.g., processing using interconnected elements), Ansari discloses: transmitting between adjacent elements (Ansari discloses, at Figures 28 and 29 and related description, transmitting data between adjacent processing elements.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Bell to include transmitting data between processing elements, as disclosed by Ansari, in order to improve performance by efficiently using multiple processing elements.

Regarding claim 11, Bell discloses the elements of claim 1, as discussed above. Bell does not explicitly disclose: said network telemetry comprises bandwidth availability, network latency, network congestion, queue utilization, planned tasks, or any combination thereof. However, in the same field of endeavor (e.g., processing using interconnected elements), Jalal discloses: detecting congestion of an interconnect fabric (Jalal discloses, at ¶ [0042], determining whether an interconnect fabric is congested, which discloses network congestion.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Bell to include collecting network information and configuring based on the network information, as disclosed by Jalal, in order to ensure efficient transmission of data between interconnected elements of a computing system.

Regarding claim 12, Bell discloses the elements of claim 1, as discussed above. Bell also discloses: said instruction processing circuitry is configured to generate said configuration …[information] further based on statistics on past task executions, one or more workload sub-classes, one or more hardware sub-classes, one or more cost models, one or more service priorities, one or more task priorities, a content resolution or a range of content resolution, or any combination thereof (Bell discloses, at Figure 2 and related description, the processors store performance history, which discloses statistics on past performance and using the history to generate configuration information.).
Bell does not explicitly disclose the aforementioned configuration information is a configuration instruction. However, Bell discloses reprogramming core units based on performance monitoring information analyzed by a thread optimization module. See ¶ [0020] et seq. Bell also discloses implementing the thread optimization module as executable instructions. Given this disclosure, it would have been obvious to a person having ordinary skill in the art to implement the configuration information used to reconfigure the core units as a configuration instruction. Doing so would improve performance by implementing a simple and direct approach to convey reconfiguration operations.

Regarding claim 13, Bell discloses the elements of claim 1, as discussed above. Bell also discloses: said instruction processing circuitry of said control module is configured to generate said configuration …[information] to select said at least a portion of said plurality of modular processing tiles from said plurality of modular processing tile to arrange said interconnected network based on a current workload of each of said at least a portion of said plurality of modular processing tiles (Bell discloses, at Figure 2 and related description, configuring the processors based on utilization, i.e., workload.). Bell does not explicitly disclose the aforementioned configuration information is a configuration instruction. However, Bell discloses reprogramming core units based on performance monitoring information analyzed by a thread optimization module. See ¶ [0020] et seq. Bell also discloses implementing the thread optimization module as executable instructions. Given this disclosure, it would have been obvious to a person having ordinary skill in the art to implement the configuration information used to reconfigure the core units as a configuration instruction. Doing so would improve performance by implementing a simple and direct approach to convey reconfiguration operations.
Regarding claim 14, Bell discloses the elements of claim 1, as discussed above. Bell also discloses: said control module and said plurality of modular processing tiles are local to a single processing resource (Bell discloses, at Figure 1 and related description, the system includes the single thread optimization module and the processors, which discloses being local to a single processing resource.).

Regarding claim 15, Bell discloses the elements of claim 14, as discussed above. Bell also discloses: a plurality of said processing resources and a global control module communicating with said plurality of processing resources over a communication network, said global control module being configured to centrally control said plurality of modular processing tiles of each processing resource through said control module of each processing resource (Bell discloses, at Figure 1 and related description, multiple processors, and a hypervisor that controls the system.).

Regarding claim 16, Bell discloses the elements of claim 1, as discussed above. Bell also discloses: said plurality of modular processing tiles are distributed across more than one processing resource, and said control module is configured to centrally control said plurality of modular processing tiles over said communication network (Bell discloses, at Figure 1 and related description, the processors can be implemented on removable cards, which discloses more than one processing resource being centrally controlled over the communication network.).

Response to Arguments

On pages 8-9 of the response filed November 10, 2025 (“response”), the Applicant argues, “it appears that the Examiner considers the instruction + priority value to be a configuration instruction.
Applicant submits that Bell does not disclose (emphasis added) "responsive to said configuration instruction, the control module is configured to, at runtime, select, arrange and functionally combine said portion of said plurality of modular processing tiles into an arrangement and, at runtime, to configure a flow of data between each of said plurality of modular processing tiles in the arrangement to adapt the arrangement into an interconnected network to perform said processing task such that said configuration instruction causes said runtime-configurable processing circuitry of said portion of said plurality of modular processing tiles to be configured at runtime to perform said processing task as an interconnected network." Specifically, Applicant notes that Bell does not disclose selecting, arranging and functionally combining tiles into an arrangement and configuring a flow of data between tiles in the arrangement to adapt the arrangement into an interconnected network to perform a processing task in response to the instruction + priority value, which the Examiner considers to be a configuration instruction, as discussed above. Applicant submits that paragraph [0020] of Bell disclosing that priority values are assigned to codes identified from performance monitor information has no bearing on the performance monitor dynamically configuring additional function core units in paragraph [0021]. Indeed, Applicant submits that there is no causal link there; Bell does not disclose any configuring of any sort of a plurality of modular processing tiles in response to anything that could be considered the configuration instruction, and particularly not in response to the instruction + priority value of paragraph [0020]. 
For completeness, Applicant notes that while paragraphs [0021]-[0024] may disclose configuring core units of the processor of Bell, those paragraphs do not disclose or even hint towards configuring the core units in response to the instruction + priority value. Instead, configuration based on, for example, a volume of instructions at a given core unit, see paragraph [0022], rather than based on the substance of the instruction in combination with any priority, is disclosed. Thus, Applicant submits that Bell cannot be relied upon as disclosing "responsive to said configuration instruction, the control module is configured to, at runtime, select, arrange and functionally combine said portion of said plurality of modular processing tiles into an arrangement and, at runtime, to configure a flow of data between each of said plurality of modular processing tiles in the arrangement to adapt the arrangement into an interconnected network to perform said processing task such that said configuration instruction causes said runtime-configurable processing circuitry of said portion of said plurality of modular processing tiles to be configured at runtime to perform said processing task as an interconnected network" as recited in claim 1.” (emphasis added) Though fully considered, the Examiner respectfully disagrees. The Applicant argues that Bell does not disclose the claimed limitation of performing certain configuration operations “in response to a configuration instruction.” As noted in the emphasized portion of the Applicant’s arguments, Bell discloses configuring core units. 
This is explicitly recited at, for example, Bell’s ¶ [0021], which discloses, “A function manager … dynamically configures additional function core units 210 having field programmable arrays to perform the function of the most active core 210 so that the reprogrammed core 210 performs the function in an efficient manner.” Bell does not explicitly recite that the dynamic configuration is done in response to a configuration instruction. However, the claimed configuration instruction is subject to a broad interpretation. The Examiner submits that configuring a core, as disclosed by Bell, implicitly discloses doing so in response to a configuration instruction. The fact that the core is configured indicates the presence of a configuration instruction. Otherwise, what is the configuring performed in response to? There must be some stimulus. And whatever the stimulus is, it is reasonable to characterize the stimulus as a configuration instruction since the stimulus initiates configuration of the core, or instructs the core to configure. However, in order to expedite prosecution, the Examiner has changed the grounds of rejection regarding the configuration instruction from anticipation to obviousness. Rather than rely on the proposition that the configuration instruction is implicitly disclosed by virtue of the occurrence of a configuration, the Examiner submits that even if not disclosed (explicitly or implicitly), it would have been obvious to a person having ordinary skill in the art to use a configuration instruction to configure the cores. Doing so is notoriously well-known. Furthermore, Bell discloses implementing an optimization module, which performs operations associated with configuration, as executable instructions. See ¶ [0020]. Accordingly, the Applicant’s arguments are moot.

On pages 9-10 of the response the Applicant argues, “The related description seems to be paragraphs [0021]-[0024].
These paragraphs disclose reprogramming core units of a field programmable array. Applicant submits that those paragraphs do not disclose core units being configured into an interconnected network. Indeed, Applicant submits that the mere presence of an interconnect coupling processors in Bell (see, for example, paragraphs [0018] and [0019]) does not disclose any tiles being configured into an interconnected network. Claim 1 is amended to clarify this distinction between Bell and the claimed invention. Thus, Applicant submits that Bell does not disclose (emphasis added) "responsive to said configuration instruction, the control module is configured to, at runtime, select, arrange and functionally combine said portion of said plurality of modular processing tiles into an arrangement and, at runtime, to configure a flow of data between each of said plurality of modular processing tiles in the arrangement to adapt the arrangement into an interconnected network to perform said processing task such that said configuration instruction causes said runtime-configurable processing circuitry of said portion of said plurality of modular processing tiles to be configured at runtime to perform said processing task as an interconnected network" as recited in amended claim 1.

Though fully considered, the Examiner respectfully disagrees. Configuring core units “into an interconnected network” is subject to broad interpretation. The Examiner submits that all that is needed is that the core units be interconnected. Bell discloses, at Figures 1 and 2, that the core units are interconnected. The claims include additional limitations related to the flow of data between the core units. The Examiner submits that the presence of a connection between the core units implicitly discloses data flow between the units. What else could the connection possibly be used for? Why would anyone connect two units but not allow the flow of data between them?
However, rather than rely on the implicit disclosure of data flow between connected units, the Examiner has included the reference Ansari, which explicitly discloses data flow between adjacent units. See, e.g., Figures 28 and 29. Accordingly, the Applicant’s arguments are moot.

On page 10 of the response the Applicant argues that Figure 1 and related description at paragraphs [0016]-[0020] does not disclose configuring the processors and requests clarification. The Applicant also argues, “Moreover, even if Fig. 1 and related description paragraphs [0016]-[0020] did disclose configuring the processors to work together, that would not disclose aggregating in response to a configuration instruction.” The Examiner has clarified that Bell teaches configuration at Figure 2 and related description. Please see ¶ [0021] et seq. The Examiner has also responded to the Applicant’s arguments regarding in response to a configuration instruction above. That is, the Examiner has updated the basis of rejection to indicate that configuring in response to a configuration instruction is obvious over Bell’s disclosure of configuring and instructions related to performance-based reconfiguring.

On page 11 of the response the Applicant argues, “Applicant submits that Bell's disclosure at [0016] of "The processor cores may … employ both pipelining and out-of-order execution of instructions" does not render obvious the cortical column of claim 8 and the cascade of claim 9.” Though fully considered, the Examiner respectfully disagrees. The limitations in question, which recite arranging tiles into a cortical column and into a cascade, are subject to broad interpretation. The Examiner submits that all that is required is configuring the tiles to sequentially transmit data to adjacent tiles. The Examiner maintains that pipelined processors perform in this manner. That is, the output from one stage is passed to the next and so on.
However, in order to expedite prosecution, the Examiner has updated the ground of rejection to rely on Ansari, which explicitly discloses tiles arranged in columns and rows, and transmitting data to adjacent tiles. See, e.g., Figures 28 and 29 and related description. Accordingly, the Applicant’s arguments are moot.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAWN DOMAN whose telephone number is (571)270-5677. The examiner can normally be reached on Monday through Friday 8:30am-6pm Eastern Time. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jyoti Mehta, can be reached on 571-270-3995. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAWN DOMAN/
Primary Examiner, Art Unit 2183

Prosecution Timeline

Feb 02, 2024
Application Filed
Apr 07, 2025
Non-Final Rejection — §103
Jul 10, 2025
Response Filed
Aug 09, 2025
Final Rejection — §103
Oct 14, 2025
Response after Non-Final Action
Nov 10, 2025
Request for Continued Examination
Nov 16, 2025
Response after Non-Final Action
Feb 22, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585469
Trace Cache Access Prediction and Read Enable
2y 5m to grant • Granted Mar 24, 2026
Patent 12572358
System, Apparatus And Methods For Minimum Serialization In Response To Non-Serializing Register Write Instruction
2y 5m to grant • Granted Mar 10, 2026
Patent 12561142
METHOD AND SYSTEM FOR PREVENTING PREFETCHING A NEXT INSTRUCTION LINE BASED ON A COMPARISON OF INSTRUCTIONS OF A CURRENT INSTRUCTION LINE WITH A BRANCH INSTRUCTION
2y 5m to grant • Granted Feb 24, 2026
Patent 12554498
QUANTUM COMPUTER WITH A PRACTICAL-SCALE INSTRUCTION HIERARCHY
2y 5m to grant • Granted Feb 17, 2026
Patent 12541368
LOOP EXECUTION IN A RECONFIGURABLE COMPUTE FABRIC USING FLOW CONTROLLERS FOR RESPECTIVE SYNCHRONOUS FLOWS
2y 5m to grant • Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
66%
Grant Probability
90%
With Interview (+23.4%)
2y 9m
Median Time to Grant
High
PTA Risk
Based on 275 resolved cases by this examiner. Grant probability derived from career allow rate.
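The projection figures above are simple arithmetic over the examiner's career data (183 granted of 275 resolved, with a +23.4 percentage-point interview lift). A minimal sketch of that derivation, assuming the dashboard truncates the baseline allow rate and rounds the interview-adjusted figure; the variable names and rounding rules are illustrative, not the tool's actual implementation:

```python
import math

# Career data shown in the Examiner Intelligence section above.
granted = 183          # applications granted by this examiner
resolved = 275         # total resolved cases
interview_lift = 23.4  # percentage-point lift observed with an interview

# Career allow rate, used as the baseline grant probability.
allow_rate = granted / resolved * 100   # 66.54...%
grant_probability = math.floor(allow_rate)

# Estimated grant probability when an examiner interview is conducted.
with_interview = round(allow_rate + interview_lift)

print(f"Grant probability: {grant_probability}%")   # 66%
print(f"With interview:    {with_interview}%")      # 90%
```

Note that 183/275 is 66.5%, so whether the baseline displays as 66% or 67% depends on the rounding convention; the sketch truncates to match the page.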
