Prosecution Insights
Last updated: April 19, 2026
Application No. 18/905,936

COHERENT MEMORY SYSTEM

Final Rejection — §103, §DP
Filed
Oct 03, 2024
Examiner
BATAILLE, PIERRE MICHEL
Art Unit
2138
Tech Center
2100 — Computer Architecture & Software
Assignee
Samsung Electronics Co., Ltd.
OA Round
2 (Final)
Grant Probability: 93% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 93% — above average (1100 granted / 1186 resolved; +37.7% vs TC avg)
Interview Lift: +6.2% (moderate lift, based on resolved cases with an interview)
Typical Timeline: 2y 7m avg prosecution; 26 applications currently pending
Career History: 1212 total applications across all art units
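The headline figures above follow directly from the examiner's career counts. A minimal sketch of the arithmetic (Python; the variable names are ours, and the +6.2% interview lift is taken from the page rather than recomputed):

```python
# Career counts shown on this page (examiner BATAILLE, AU 2138).
granted = 1100
resolved = 1186

# Career allow rate: granted / resolved, displayed as a rounded percentage.
allow_rate = 100 * granted / resolved
print(round(allow_rate))  # 93

# "With Interview" probability: allow rate plus the reported +6.2% lift.
interview_lift = 6.2
print(round(allow_rate + interview_lift))  # 99
```

The unrounded rate is about 92.75%, so both the 93% headline and the 99% with-interview figure are consistent with the raw counts.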

Statute-Specific Performance

§101: 5.4% (-34.6% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§102: 31.1% (-8.9% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 1186 resolved cases
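The four deltas are mutually consistent: adding each delta back to its statute rate implies the same Tech Center average of roughly 40% in every case, which suggests the black-line estimate sits near 40%. A quick check, using only the figures from the table above (the 40% figure is our inference, not stated on the page):

```python
# (examiner rate %, delta vs TC avg %) per statute, as shown in the table.
stats = {
    "101": (5.4, -34.6),
    "103": (38.3, -1.7),
    "102": (31.1, -8.9),
    "112": (7.5, -32.5),
}

# Implied TC average = examiner rate - delta; all four imply ~40%.
for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta
    assert abs(implied_tc_avg - 40.0) < 1e-9
```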

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 remain pending in the application under prosecution and have been re-examined.

In the response to this Office action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line numbers in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.

The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Response to Arguments

Applicant's arguments with respect to the double patenting rejection, filed 12/08/2025, have been fully considered. Applicant's remarks regarding the double patenting rejection have been noted. The nonstatutory double patenting rejection is maintained until submission of a terminal disclaimer that would overcome the rejection. Applicant's arguments with respect to amended claims 1-20 have been considered but are moot in view of the new ground of rejection.
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-15, and 17-20 of co-pending U.S. Patent Application 17/372,309. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 3-15, and 17-20 of co-pending U.S. Patent Application 17/372,309 anticipate claims 1-20 of the instant application. This is a provisional nonstatutory double patenting rejection.
Claims 1 & 2 (Application 18/905,936):

A system, comprising: a first memory device, the first memory device comprising: a first controller connected to a processing circuit over a cache coherent interface; a second controller coupled to the first controller; and a first memory coupled to the second controller, wherein the first controller is configured to communicate with the second controller to transport a first data packet between the processing circuit and the first memory via the cache coherent interface; wherein the first controller is configured to translate the first data packet that is based on an interface of the first memory to a second data packet that is based on the cache coherent interface.

2. The system of claim 1, wherein the first memory includes a memory management unit cache configured to perform a physical address lookup, in the memory management unit cache, of a physical address, based on a virtual address.

Claim 1 (Application 17/372,309):

A system, comprising: a first memory device, the first memory device comprising: a cache coherent controller connected to a processing circuit over a cache coherent interface; a volatile memory controller; a volatile memory; a nonvolatile memory controller; and a nonvolatile memory, wherein the first memory device is configured: to receive a quality of service requirement; and to selectively enable a first feature in response to the quality of service requirement; wherein the cache coherent interface is configured to transport a first data packet between the processing circuit and the volatile memory via the cache coherent controller, and is further configured to transport a second data packet between the processing circuit and the nonvolatile memory via the cache coherent controller, wherein the cache coherent controller is configured to translate at least the first data packet that is based on an interface of the volatile memory to a data packet that is based on the cache coherent interface, wherein the first feature comprises a memory management unit cache in the first memory device; and the first memory device is configured to: allocate a portion of the volatile memory as the memory management unit cache, and perform a physical address lookup, in the memory management unit cache, of a physical address, based on a virtual address.

Claim 1 of co-pending application 17/372,309 contains the elements of claims 1 and 2, combined, and therefore anticipates claims 1 and 2. Independent claim 15 of co-pending application 17/372,309 contains the features recited in claims 15 and 16, combined, and therefore anticipates claims 15 and 16 of the instant application. Claims 3-14 and 17-20 of the instant application correspond to claims 3-14 and 17-20 of co-pending application 17/372,309, respectively; therefore, the features of claims 3-14 and 17-20 of co-pending application 17/372,309 anticipate the features of claims 3-14 and 17-20 of the instant application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200019515 A1 (KOUFATI et al) in view of US 20220035742 A1 (SCHUMACHER et al) and further in view of US 20220050780 (PASSINT).

With respect to claims 1 and 15, KOUFATI teaches a system, comprising a first memory device, the first memory device comprising: a first controller connected to a processing circuit over a cache coherent interface; a second controller coupled to the first controller; and a first memory coupled to the second controller (cache coherent interconnect, Par. 0056; memory controller 1022 coupled to memory and nonvolatile controller 1082; storage controller 1084; Input Output Memory Management Unit (IOMMU) to receive a direct memory access (DMA) request containing a virtual address and determine whether a translation cache entry is available, and the IOMMU to collect factors and determine allocation and restriction (existing entry, priority, capacity, maximum number of TLB entries) based on the factors) [Fig. 7-9; Par. 0025-0026; Par. 0094-0097]. KOUFATI's system provides address translation for devices, avoids a need to convert a virtual address of a process to a physical address, and limits the number of entries permitted to be stored in the first table and a second table for a source of the received virtual address [Par.
0038-0039]; but fails to specifically teach: the first controller is configured to communicate with the second controller to transport a first data packet between the processing circuit and the first memory via the cache coherent interface; wherein the first controller is configured to translate the first data packet that is based on an interface of the first memory to a second data packet that is based on the cache coherent interface.

However, SCHUMACHER (US 20220035742 A1) teaches a system (a hybrid non-uniform memory access (NUMA) system including a plurality of nodes or hubs) comprising: a first memory device, the first memory device comprising: a first controller connected to a processing circuit over a cache coherent interface (each node including a separate memory module and a local node controller attached to a processor; the node controllers having processors attached and configured to manage cache coherency for memories attached to the processors (each of the node controllers in FIG. 1 shown to be coupled to a single CPU or memory module) [Fig. 1; Par. 0012-0013], with each node controller handling remote memory-access requests and including logic that implements a cache-coherence protocol for remote-access commands within the ccNUMA system [Par. 0021-0024]); a second controller coupled to the first controller (node controllers interconnected to each other, via node-controller interfaces, to allow memory access from one node controller to any other node controller such that memory-access requests can be sent from one node controller to another to access memories attached to a different node controller) [Fig. 1; Par. 0013-0014; Par. 0021; Par.
0024]; and a first memory coupled to the second controller, wherein the first controller is configured to communicate with the second controller to transport a first data packet between the processing circuit and the first memory via the cache coherent interface (node controllers interconnected to each other, via node-controller interfaces, to allow memory access from one node controller to any other node controller such that memory-access requests can be sent from one node controller to another to access memories attached to a different node controller) [Fig. 1; Par. 0013-0014; Par. 0021-0024]; wherein the first controller is configured to translate the first data packet that is based on an interface of the first memory to a second data packet that is based on the cache coherent interface (interactions between a local node controller and a remote node controller during a remote-memory-access operation where the remote home node performs various operations, including operations needed to maintain cache coherency, i.e., reformatting the request message from the processor-interconnect format to a node-controller-interconnect format) [Par. 0030-0031; Par. 0035-0037].

Neither SCHUMACHER nor KOUFATI teaches that the first memory is associated with a first memory technology having a first access granularity, with translation of a data packet based on the first access granularity of the first memory technology to a second data packet having a second access granularity that is based on the cache coherent interface.
However, PASSINT teaches a system providing hardware-managed cache coherency for processor-attached memory and software-managed cache coherency for non-processor-attached memory, including a node controller with a number of interfaces that can facilitate communication between a number of components within a hybrid cache-coherent system; the node controller includes an interface that can provide NUMA links to facilitate communication with other node controllers in the hybrid cache-coherent NUMA system, and further includes cache coherence management logic that can operate two different hardware modules, one for managing hardware cache coherency of processor-attached memory and the other for facilitating software cache coherency of non-processor-attached memory [Par. 0037-0038]; the node controller further includes an interface to connect with other node controllers in the cache-coherent fabric, to allow a node controller in one node in the cache-coherent fabric to have access to memory in another node, wherein each interface controller can provide an interface including a plurality of ports for connecting to a local memory fabric [Fig. 2; Par. 0028-0031]; the node controller includes a first interface to interface with one or more processors, and a second interface to interface with other node controllers in the cache-coherent interconnect network; the cache coherence management logic operates a first circuitry in response to determining that a memory access request is destined to a hardware-managed cache coherent space in the cache-coherent interconnect network, and operates the second circuitry in response to determining that the memory access request is destined to a software-managed cache coherent space in the cache-coherent interconnect network [Par. 0053-0057].

Therefore, it would have been obvious to one having ordinary skill in the art to combine KOUFATI's system with the hardware-coherent memory node and cache coherent interface of SCHUMACHER, in order to produce an interface system that facilitates hardware-based coherence tracking, as taught by SCHUMACHER [Par. 0050-0052]. It would have been obvious to further use, within the combined disclosure, a first memory associated with a first memory technology having a first access granularity different from a second memory technology of a second access granularity, as taught by PASSINT, in order to facilitate a system that provides flexibility to independently scale a plurality of attached memory devices, with interfaces that facilitate a plurality of Non-Uniform Memory Access links for coupling with other node controllers in the cache-coherent interconnect network, as taught by PASSINT [Par. 0045; Par. 0035].
The combination is proper because: PASSINT teaches cache coherence management logic to implement a cache-coherence protocol including a set of procedures or rules that dictate how a node controller is to interact with associated memory depending upon the current coherence status for a particular memory block; and SCHUMACHER teaches a multiprocessor system including a first node controller that is directly coupled to a processor and a second, identical node controller that is not directly coupled to any processor and is coupled to a fabric-attached memory, wherein the first node controller is to manage cache coherence for a local memory of the processor, and the second node controller is to operate in a second mode to manage cache coherence for the fabric-attached memory.

With respect to claims 2 and 16, KOUFATI, SCHUMACHER, and PASSINT, combined, teach the system, wherein the first memory includes a memory management unit cache configured to perform a physical address lookup, in the memory management unit cache, of a physical address, based on a virtual address (perform memory access translation to translate virtual to physical addresses, the operation including a lookup operation and managing allocation and restriction) [KOUFATI's Par. 0032-0034; Par. 0040-0043; Par. 0027-0029].
With respect to claims 3 and 17, KOUFATI, SCHUMACHER, and PASSINT, combined, teach the system, wherein performing the physical address lookup comprises: looking up a base address, in a first page table of the memory management unit cache, based on a first portion of the virtual address; determining a starting address of a second page table of the memory management unit cache based on the base address; and looking up the physical address in the second page table, based on a second portion of the virtual address (IOMMU to receive a DMA request with a need to perform a memory access operation, the translation requiring a page table walk operation identifying a page table entry and indicating the virtual-to-physical address translation for the received virtual address; the operation including a lookup operation and managing allocation and restriction, implementing replacement of an entry by invalidating an existing entry) [KOUFATI's Par. 0032-0034; Par. 0040-0043].

With respect to claim 4, KOUFATI, SCHUMACHER, and PASSINT, combined, teach the system, further comprising a host, wherein: the host comprises: a memory management unit, and a translation lookaside buffer; and the host is configured to request, from the first memory device, the physical address lookup (IOMMU to perform an operation including a lookup operation and translation to manage allocation and restriction) [KOUFATI's Par. 0027-0029; Par. 0040-0043].
With respect to claims 5 and 18, KOUFATI, SCHUMACHER, and PASSINT, combined, teach the system, wherein the first memory device is further configured: to receive: a first input/output (IO) request, the first IO request comprising a first tag indicating a first time of issuance of the first IO request, and a second IO request, the second IO request comprising a second tag indicating a second time of issuance of the second IO request; and to execute: the second IO request, and the first IO request, wherein the first time is different from the second time (address translation table to include an I/O TLB field identifying entry level and entry allocation permission) [KOUFATI's Par. 0034-0036; Par. 0077-0078].

With respect to claims 6 and 19, KOUFATI, SCHUMACHER, and PASSINT, combined, teach the system, wherein the first IO request further comprises a third tag indicating a third time of processing of the first IO request (entries stored based on permission level and replaced based on permission within an address translation table that includes an I/O TLB field identifying level) [KOUFATI's Par. 0032-0034; Par. 0043-0045; Par. 0077-0078].

With respect to claim 7, KOUFATI, SCHUMACHER, and PASSINT, combined, teach the system, wherein the first memory device is further configured to report, through the first controller, a load value for the first memory (IOMMU to receive a DMA request and determine a memory access operation, the operation including a lookup operation and managing allocation and restriction) [KOUFATI's Par. 0032-0035; Par. 0040-0043].
With respect to claim 8, KOUFATI, SCHUMACHER, and PASSINT, combined, teach the system, wherein the first memory device further includes a second memory, wherein the first memory is volatile memory and the second memory is nonvolatile memory, and the first memory device is further configured to report, through the first controller, a load value for the nonvolatile memory (translation table field used for processing entries based on priority, such that higher priority allows load values to indicate the number of entries allocated based on priority) [KOUFATI's Par. 0034-0036].

With respect to claim 9, KOUFATI, SCHUMACHER, and PASSINT, combined, teach the system, further comprising a second memory device, wherein: the first memory device is configured to store: first data having a first physical address, and second data having a second physical address; and the second memory device is configured to store: third data having a third physical address, the third physical address being greater than the first physical address and less than the second physical address (translation table field for processing entries based on priority, with a higher priority allowing more entries than a lower priority) [KOUFATI's Fig. 2; Par. 0034-0036].

With respect to claim 10, KOUFATI, SCHUMACHER, and PASSINT, combined, teach the system, wherein the first memory device is further configured to: receive a read request; read data from a first page; decode the data based on an error correcting code; determine that a number of errors corrected exceeds a threshold; and send, through the first controller, a report requesting action related to the first page (using a translation table field and processing entries based on priority, such that a higher priority receives an allocation of space in performing replacement) [KOUFATI's Par. 0032-0035].
With respect to claim 11, KOUFATI, SCHUMACHER, and PASSINT, combined, teach the system, wherein the first memory device is further configured to: receive a write request comprising unencrypted data; encrypt the unencrypted data to form encrypted data; and store the encrypted data in the first memory (receiving a request and using cryptography services such as public key encryption (PKE), cipher, hash/authentication capabilities, decryption, or other capabilities or services) [KOUFATI's Par. 0023-0027].

With respect to claim 12, KOUFATI, SCHUMACHER, and PASSINT, combined, teach the system, wherein the first memory device is further configured to: receive a read request; read encrypted data from the first memory; decrypt the encrypted data to form unencrypted data; and transmit the unencrypted data through the first controller (IOMMU to receive a DMA request and determine a need to perform a memory access operation and permission, the operation including a lookup operation and managing allocation and restriction, implementing replacement of an entry by invalidating an existing entry) [KOUFATI's Par. 0019-0021; Par. 0027-0029].

With respect to claim 13, KOUFATI, SCHUMACHER, and PASSINT, combined, teach the system, wherein the first controller is configured to adhere to a cache coherent protocol (Cache Coherent Interconnect for Accelerators implemented in various types of computing and networking switches and routers) [KOUFATI's Fig. 1; Par. 0025-0027].

With respect to claims 14 and 20, KOUFATI, SCHUMACHER, and PASSINT, combined, teach the system, wherein the first memory device further includes a second memory, wherein the first memory is volatile memory and the second memory is nonvolatile memory (Cache Coherent Interconnect for Accelerators implemented in various types of computing and networking switches, routers, racks, and blade servers) [KOUFATI's Par. 0024-0026].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 7752281 (Rowlands), teaching a system for managing data in multiple data processing devices using common data paths, comprising a first data processing system comprising a cacheable coherent memory space, and a second data processing system communicatively coupled to the first data processing system, the second data processing system comprising at least one bridge, wherein the bridge is operable to perform an uncacheable remote access to the cacheable coherent memory space of the first data processing system.

SANO (US 20030105828 A1), teaching an apparatus comprising: a first system comprising a first plurality of interface circuits, each of the first plurality of interface circuits configured to couple to a separate interface; and a second system comprising a second plurality of interface circuits, each of the second plurality of interface circuits configured to couple to a separate interface; wherein a first interface circuit of the first plurality of interface circuits and a second interface circuit of the second plurality of interface circuits are coupled to a first interface, and wherein the first interface circuit and the second interface circuit are configured to communicate packets, coherency commands, and noncoherent commands on the first interface.
US 20200117609 (FINKBEINER et al), teaching a first cache controller coupled to the first processing resource and to the first cache line, configured to provide coherent access to data stored in the second cache line and corresponding to a memory address; a second cache controller coupled through an interface to the apparatus and coupled to the second cache line, configured to provide coherent access to the data stored in the first cache line and corresponding to the memory address; the controller to control the movement of data between the processing resource and the cache and the movement of data between the first cache and the second cache; data packets and/or blocks of different granularities can be transferred via the cache coherency bus utilizing protocols.

S. Min, M. Alian, W.-M. Hwu and N. S. Kim, "Semi-Coherent DMA: An Alternative I/O Coherency Management for Embedded Systems," IEEE Computer Architecture Letters, vol. 17, no. 2, pp. 221-224, July-Dec. 2018.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PIERRE MICHEL BATAILLE, whose telephone number is (571) 272-4178. The examiner can normally be reached Monday - Thursday, 7-6 ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TIM VO, can be reached at (571) 272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PIERRE MICHEL BATAILLE/Primary Examiner, Art Unit 2136

Prosecution Timeline

Oct 03, 2024
Application Filed
Sep 06, 2025
Non-Final Rejection — §103, §DP
Nov 17, 2025
Examiner Interview Summary
Nov 17, 2025
Applicant Interview (Telephonic)
Dec 08, 2025
Response Filed
Feb 18, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602175
Charge Domain Compute-in-DRAM for Binary Neural Network
2y 5m to grant Granted Apr 14, 2026
Patent 12596655
SYSTEMS AND METHODS FOR TRANSFORMING LARGE DATA INTO A SMALLER REPRESENTATION AND FOR RE-TRANSFORMING THE SMALLER REPRESENTATION BACK TO THE ORIGINAL LARGE DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12596649
MEMORY ACCESS DEVICE AND OPERATING METHOD THEREOF
2y 5m to grant Granted Apr 07, 2026
Patent 12591523
PRIORITY-BASED CACHE EVICTION POLICY GOVERNED BY LATENCY CRITICAL CENTRAL PROCESSING UNIT (CPU) CORES
2y 5m to grant Granted Mar 31, 2026
Patent 12579082
Automated Participation of Solid State Drives in Activities Involving Proof of Space
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 93%
With Interview: 99% (+6.2%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 1186 resolved cases by this examiner. Grant probability derived from career allow rate.
