Prosecution Insights
Last updated: April 19, 2026
Application No. 19/030,691

METHOD AND APPARATUS FOR KNOWLEDGE REPRESENTATION AND REASONING IN ACCOUNTING

Status: Non-Final OA (§103)
Filed: Jan 17, 2025
Examiner: LU, KUEN S
Art Unit: 2165
Tech Center: 2100 — Computer Architecture & Software
Assignee: PwC Product Sales LLC
OA Round: 2 (Non-Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 2-3
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career allow rate: 85%, above average (781 granted / 914 resolved; +30.4% vs TC avg)
Interview lift: strong, +15.2% among resolved cases with interview
Typical timeline: 3y 3m average prosecution; 16 currently pending
Career history: 930 total applications across all art units
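The headline figures above are simple arithmetic on the examiner's career counts shown on this page; a quick sanity check (the 99% cap on the with-interview figure is an assumption about how the dashboard clamps its estimate, not a documented formula):

```python
# Career counts shown on this page for examiner Lu.
granted, resolved = 781, 914

allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")   # 85.4%, displayed as 85%

# The page reports a +15.2 point lift with an interview and shows 99%;
# clamping at 99% here is an assumption about the dashboard's display.
with_interview = min(allow_rate + 0.152, 0.99)
print(f"with interview: {with_interview:.0%}")  # 99%
```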

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 18.5% (-21.5% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 914 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is responsive to the summary of the interview conducted on 01/30/2026. The previous non-final rejection of 10/22/2025 is hereby voided and replaced by the instant action. Claims 1-20 stand rejected, objected to, and are pending in this Office Action. Claims 1, 4 and 5 are independent claims.

Priority

Applicant's claim for the benefit of a prior-filed application (a continuation of Application No. 18/181,280, filed 03/09/2023, now U.S. Patent No. 12,229,195, issued 02/18/2025, to which the instant application is a divisional) under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, or 365(c) is acknowledged.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. § 103, which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 4-5, 1-3, 6 and 9-16 are rejected under 35 U.S.C. § 103 as being unpatentable over Sivakumar et al., "CACHING OF TEXT ANALYTICS BASED ON TOPIC DEMAND AND MEMORY CONSTRAINTS" (U.S. Patent Application Publication 2024/0095270 A1, published 2024-03-21, filed 2022-09-21; hereafter "Sivakumar"), in view of Bailey et al., "QUERY REVISION USING KNOWN HIGHLY-RANKED QUERIES" (U.S. Patent Application Publication 2006/0224554 A1, published 2006-10-05, filed 2005-11-22; hereafter "Bailey"), and further in view of Crabtree et al., "AUTOMATED SCALABLE CONTEXTUAL DATA COLLECTION AND EXTRACTION SYSTEM" (U.S. Patent Application Publication 2021/0056117 A1, published 2021-02-25, filed 2020-07-02; hereafter "Crabtree").

As per claim 4, Sivakumar teaches a method for automatically generating a response to an input query, the method comprising: receiving, by a computer, an input query (see [0020]: the system receives a user query); automatically identifying a topic cluster associated with the input query (see Abstract: a user query is analyzed to identify, via natural language processing (NLP), a query topic). As cited, the identifying of a topic cluster is based on a natural language processing (NLP) model.
Sivakumar does not explicitly teach that the identifying of the topic cluster is based on a first topic prediction model and a second topic prediction model. However, Bailey teaches identifying a topic cluster based on one or both of a first topic prediction model and a second topic prediction model (see [0088]: the reviser confidence estimator 112 applies the original query and revised queries to the predictive model to obtain the prediction measures serving as the previously mentioned confidence measures, and the revision server 107 uses the confidence measures, as described above, to select and order which revised queries will be shown to the user). It would have been obvious to one having ordinary skill in the art at the time the Applicant's application was filed to combine Bailey's teaching with Sivakumar because Sivakumar is dedicated to caching of text analytics based on topic demand and memory constraints and Bailey is dedicated to revising user queries; the combined teaching of the Sivakumar and Bailey references would have enabled Sivakumar to improve the accuracy of topic-cluster identification by using prediction measures to obtain a more accurate topic-cluster query.

Sivakumar in view of Bailey further teaches: wherein the topic cluster comprises a plurality of topic entities (see Sivakumar [0025]: determines sets of topics and sentiments that form clusters; here the topics and sentiments are entities of a topic cluster); directing the input query to a data structure associated with the identified topic cluster (see Sivakumar [0003] and [0025]: mapping the first query topic to a first topic cluster at a first node of a hierarchical model of a text database; here, mapping a query topic to a topic cluster associated with topics and sentiments teaches directing the query to the cluster of topics and sentiments, i.e., the data structure); wherein the data structure comprises a plurality of nodes, each node representing one of the topic entities in the topic cluster (see Sivakumar [0003] and [0025]: mapping the first query topic to a first topic cluster at a first node of a hierarchical model of a text database); and wherein at least one of the nodes in the topic cluster is associated with one or more of the other nodes in the topic cluster (see Sivakumar, page 13, claim 10: associating nodes of the hierarchical dendrogram with respective topic clusters extracted from the text database).

Sivakumar in view of Bailey does not explicitly teach that this association is based on one or more linguistic modalities, the linguistic modalities defining a relationship linking the respective nodes. However, Crabtree teaches this limitation (see [0011] and [0033]: extracting information from linguistic modalities within a richly formatted dataset relating to the context provided by the user, using the extraction engine, and transforming the extracted data into a graph and time-series-based dataset; the directed computational graph module 155 represents all data as directed graphs where the transformations are nodes and the result messages between transformations are edges of the graph; here, an edge in a directed graph teaches a relationship linking the respective nodes).

It would have been obvious to one having ordinary skill in the art at the time the Applicant's application was filed to combine Crabtree's teaching with Sivakumar in view of Bailey because Sivakumar is dedicated to caching of text analytics based on topic demand and memory constraints, Bailey is dedicated to revising user queries, and Crabtree is dedicated to collecting and extracting contextual data; the combined teaching of the Sivakumar, Bailey and Crabtree references would have enabled the predictive model of Sivakumar in view of Bailey to analyze data from richly formatted texts, including unstructured text.

Sivakumar in view of Bailey and further in view of Crabtree further teaches: generating a response to the input query (see Sivakumar [0002]: generating a response to a user's query).

As per claim 5, the claim recites a non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by one or more processors of an electronic device, cause the electronic device (see Sivakumar [0005]: a computer system includes a processor, a computer-readable memory, and a computer-readable storage medium, and program instructions stored on the storage medium for execution by the processor via the memory) to perform the steps recited in claim 4, as rejected under 35 U.S.C. § 103 as being unpatentable over Sivakumar in view of Bailey and further in view of Crabtree. Accordingly, claim 5 is rejected under the same rationale as claim 4.
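For orientation, the claim 4 data flow the examiner maps onto Sivakumar (receive a query, identify a topic cluster, direct the query to a node-based structure whose edges carry relationship labels, generate a response) can be sketched roughly as below. This is an illustrative reconstruction from the claim language only; every name (`TopicNode`, `identify_cluster`, the toy accounting entities) is hypothetical, and the word-overlap matcher merely stands in for the claimed first/second topic prediction models.

```python
from dataclasses import dataclass, field

@dataclass
class TopicNode:
    """One topic entity; edges map a neighbor entity to a relationship label
    (the claim calls these labels 'linguistic modalities')."""
    entity: str
    edges: dict = field(default_factory=dict)

@dataclass
class TopicCluster:
    name: str
    nodes: dict = field(default_factory=dict)  # entity -> TopicNode

def identify_cluster(query: str, clusters: dict) -> TopicCluster:
    # Stand-in for the claimed topic prediction models: pick the cluster
    # whose entity names overlap the query words the most.
    words = set(query.lower().split())
    return max(clusters.values(), key=lambda c: len(words & set(c.nodes)))

def answer(query: str, clusters: dict) -> str:
    cluster = identify_cluster(query, clusters)          # identify topic cluster
    entities = sorted(set(query.lower().split()) & set(cluster.nodes))
    return f"[{cluster.name}] response based on: {', '.join(entities) or 'n/a'}"

# Toy data: one cluster with two linked topic entities.
revenue = TopicCluster("revenue-recognition", {
    "revenue": TopicNode("revenue", {"contract": "deontic"}),
    "contract": TopicNode("contract", {"revenue": "epistemic"}),
})
print(answer("When is contract revenue recognized?", {"rev": revenue}))
# → [revenue-recognition] response based on: contract, revenue
```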
As per claim 1, the claim recites a system for automatically generating a response to an input query, the system comprising one or more processors configured to cause the system (see Sivakumar [0005]: a computer system includes a processor, a computer-readable memory, and a computer-readable storage medium, and program instructions stored on the storage medium for execution by the processor via the memory) to perform the steps recited in claim 4, as rejected under 35 U.S.C. § 103 as being unpatentable over Sivakumar in view of Bailey and further in view of Crabtree. Accordingly, claim 1 is rejected under the same rationale as claim 4.

As per claim 2, Sivakumar in view of Bailey and further in view of Crabtree teaches the system of claim 1, wherein generating a response to the input query comprises selecting, based on the data structure comprising the identified topic cluster, a response from a predefined group of responses (see Sivakumar [0025] and [0086]: the process applies a clustering algorithm to the lines of text based on the identified topics and sentiments; in some embodiments, the process uses a known technique that determines sets of topics and sentiments that form clusters, and the process constructs a hierarchical topic model based on the topic and sentiment clusters; in some embodiments, the hierarchical topic model has nodes associated with respective topics and sentiments arranged in a hierarchical manner; here, the topics being identified and arranged hierarchically suggests they are pre-defined).

As per claim 3, Sivakumar in view of Bailey and further in view of Crabtree teaches the system of claim 1, wherein generating a response to the input query comprises generating, using the associated nodes of the data structure, a response to the input query (see Sivakumar [0027]: the demand data includes counts of how many times each node has been used to generate a response to a user query).

As per claim 6, Sivakumar in view of Bailey and further in view of Crabtree teaches the system of claim 1, wherein the first prediction model is a trained semantic classification model (see Bailey [0048] and [0086]: the probability of revision (PR) and revision probability (RP) of a nearby query can be determined based on semantic similarity, syntactic similarity, behavioral similarity, or any combination thereof, and the reviser confidence estimator 112 can train the predictive model to predict the likelihood of a long click given the various features of the revised query and the original query).

As per claim 9, Sivakumar in view of Bailey and further in view of Crabtree teaches the system of claim 1, wherein the one or more processors are configured to cause the system to identify the topic cluster based on a prediction by both the first and second topic prediction models (see Sivakumar [0003]: identifying the first topic cluster as a first topic-cache candidate based on the query demand data; the embodiment also includes comparing a first required amount of memory required for storing text associated with the first topic cluster to available cache memory in a database cache, and storing, responsive to identifying the first topic cluster as the first topic-cache candidate and determining that the available cache memory is greater than the first required amount of memory, the text associated with the first topic cluster in the database cache).

As per claim 10, Sivakumar in view of Bailey and further in view of Crabtree teaches the system of claim 1, wherein the input query is a natural language input (see Sivakumar [0003]: analyzing text content of a first user query to identify via natural language processing a first query topic defined by words of the text content).

As per claim 11, Sivakumar in view of Bailey and further in view of Crabtree teaches the system of claim 1, wherein the input query is extracted from structured or unstructured textual data (see Sivakumar [0003]: analyzing text content of a first user query to identify via natural language processing a first query topic defined by words of the text content).

As per claim 12, Sivakumar in view of Bailey and further in view of Crabtree teaches the system of claim 11, wherein the input query is extracted from a predefined set of questions and answers (see Sivakumar [0018]: a typical information retrieval system performs several NLP tasks, including NLP tasks on ingested documents and NLP tasks on user queries; a request for information presented in any correct or incorrect, complete or incomplete, colloquial or formal, grammatical form of a natural language, during a conversation occurring with an illustrative embodiment described herein, is interchangeably referred to as a "question" or "query" unless expressly disambiguated where used).

As per claim 13, Sivakumar in view of Bailey and further in view of Crabtree teaches the system of claim 1, wherein generating the response to the input query comprises generating, using the interconnected nodes of the data structure, a response to the input query (see Sivakumar [0027]: the process generates demand data indicative of the demand for topics and sentiments in the hierarchical topic model based on the user queries; in some embodiments, the demand data includes counts of how many times each node has been used to generate a response to a user query).

As per claim 14, Sivakumar in view of Bailey and further in view of Crabtree teaches the method of claim 1, wherein generating the response to the input query comprises: traversing between the at least one node in the topic cluster and the one or more other associated nodes using one or more edges connecting the nodes; and generating a response to the input query based on the traversed nodes and edges (see Sivakumar [0028], for both limitations: the process traverses the hierarchical topic model for topic matching, matching queries to nodes, and maintains statistical data that includes a count for each node indicating the number of times that node has been used to answer a query).

As per claim 15, Sivakumar in view of Bailey and further in view of Crabtree teaches the system of claim 1, wherein generating the response to the input query comprises selecting, based on the data structure comprising the identified topic cluster, a response from a predefined group of responses (see Sivakumar [0025] and [0086], as applied to claim 2 above).
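The traversal step in the claim 14 rejection (walk from a matched node to its associated nodes over connecting edges, then generate the response from the traversed nodes and edges) amounts to an ordinary graph walk. A minimal breadth-first sketch, with all names and edge labels invented for illustration:

```python
from collections import deque

def traverse(graph: dict, start: str) -> list:
    """Breadth-first walk over a node -> [(neighbor, edge_label), ...] map,
    returning (node, neighbor, label) triples in visit order."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        for neighbor, label in graph.get(node, []):
            order.append((node, neighbor, label))
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

# Toy topic cluster: edges carry a relationship label.
graph = {
    "lease": [("payment", "obligation"), ("term", "definition")],
    "payment": [("discount_rate", "depends_on")],
}
steps = traverse(graph, "lease")
print("; ".join(f"{a} -[{m}]-> {b}" for a, b, m in steps))
# → lease -[obligation]-> payment; lease -[definition]-> term; payment -[depends_on]-> discount_rate
```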
As per claim 16, Sivakumar in view of Bailey and further in view of Crabtree teaches the system of claim 1, wherein the generated response comprises at least one of: a natural language description of an accounting topic, a natural language description of a business entity, a natural language description of an audit method, a natural language description of a mathematical relationship, and a natural language explanation of the generated response to the input query (see Sivakumar [0003]: analyzing text content of a first user query to identify via natural language processing a first query topic defined by words of the text content).

Claims 7-8 and 17 are rejected under 35 U.S.C. § 103 as being unpatentable over Sivakumar in view of Bailey and further in view of Crabtree, as applied to claims 4-5, 1-3, 6 and 9-16 above, and further in view of Tappin, Isabella: "STATEFUL, REAL-TIME, INTERACTIVE, AND PREDICTIVE KNOWLEDGE PATTERN MACHINE" (U.S. Patent Application Publication 2022/0188661 A1, published 2022-06-16, filed 2022-03-02; hereafter "Tappin").

As per claim 7, Sivakumar in view of Bailey and further in view of Crabtree does not explicitly teach the system of claim 6, wherein the second prediction model is a semantic embedding model. However, Tappin teaches this limitation (see [0057]: the embedding of a processed input query may be generated by an embedding model (either a syntactic embedding model, a semantic embedding model, or a hybrid model); such a model may be trained using a natural language model and is capable of providing embedding vectors for any input query). It would have been obvious to one having ordinary skill in the art at the time the Applicant's application was filed to combine Tappin's teaching with Sivakumar in view of Bailey and further in view of Crabtree because Sivakumar is dedicated to caching of text analytics based on topic demand and memory constraints, Bailey is dedicated to revising user queries, Crabtree is dedicated to collecting and extracting contextual data, and Tappin is dedicated to intelligent interactive data analytics in a real-time predictive knowledge pattern machine; the combined teaching of the Sivakumar, Bailey, Crabtree and Tappin references would have enabled the predictive model of Sivakumar in view of Bailey and further in view of Crabtree to utilize syntactic embedding, semantic embedding, or a combination (hybrid) thereof to better match the input query and the query result.

As per claim 8, Sivakumar in view of Bailey and further in view of Crabtree and Tappin further teaches the system of claim 7, wherein the semantic embedding model is configured to: extract a plurality of query entities from the input query (see Sivakumar [0025]: determines sets of topics and sentiments that form clusters; here the topics and sentiments are entities of a topic cluster); apply a clustering process to generate one or more clusters of query nodes (see Sivakumar [0028]: the process traverses the hierarchical topic model for topic matching, matching queries to nodes, and maintains statistical data that includes a count for each node indicating the number of times that node has been used to answer a query; the matched queries teach the cluster of query nodes); compute an average semantic embedding for one or more of the generated clusters of query nodes, wherein each average semantic embedding represents a generated cluster of query nodes (see Tappin [0057]: semantic embedding techniques may be used to focus more on semantic information, i.e., the meaning of the words, and to embed that information, e.g., using pre-trained language models to find the query embedding; the matching of the input query and the pre-computed signal may be based on the syntactic embedding, the semantic embedding, or a combination (hybrid) thereof); compute an average semantic embedding for one or more topic clusters of a plurality of topic clusters (see Tappin [0057]: the matching of the input query and the pre-computed signal may be based on the syntactic embedding, the semantic embedding, or a combination (hybrid) thereof), wherein each average semantic embedding represents a topic cluster (see Tappin [0057]: both syntactic and semantic embedding could be used simultaneously, and the better result may be adopted or combined; alternatively, both embedding techniques could be run sequentially; the embedding of a processed input query may be generated by an embedding model (either a syntactic embedding model, a semantic embedding model, or a hybrid model), and such a model may be trained using a natural language model and is capable of providing embedding vectors for any input query); and select a topic cluster for the input query based on a comparison of at least one average semantic embedding representing a generated cluster and at least one average semantic embedding representing a topic cluster (see Sivakumar [0003]: comparing a first required amount of memory required for storing text associated with the first topic cluster to available cache memory in a database cache; and Tappin [0057], as cited above).

As per claim 17, Sivakumar in view of Bailey and further in view of Crabtree and Tappin teaches the system of claim 1, wherein the data structure is a knowledge graph (see Tappin [0020]: the pattern machine may be further configured to automatically identify relevant information entities in the formalized query and expand to additional relevant information entities and correlations, or a lack thereof, between all the different relevant information entities based on one or more natural language understanding modules, query databases, knowledge graphs, signals databases, and/or events databases; the knowledge graphs, signals databases, and events databases may be precomputed but automatically and continuously updated).

Claim 18 is rejected under 35 U.S.C. § 103 as being unpatentable over Sivakumar in view of Bailey and further in view of Crabtree, as applied to claims 4-5, 1-3, 6 and 9-16 above, and further in view of LI et al.: "AI-AUGMENTED AUDITING PLATFORM INCLUDING TECHNIQUES FOR AUTOMATED ADJUDICATION OF COMMERCIAL SUBSTANCE, RELATED PARTIES, AND COLLECTABILITY" (U.S. Patent Application Publication 2023/0004590 A1, published 2023-01-05, filed 2022-06-30; hereafter "LI").

As per claim 18, Sivakumar in view of Bailey and further in view of Crabtree does not explicitly teach the system of claim 1, wherein the one or more linguistic modalities comprise at least one of a deontic linguistic modality and an epistemic linguistic modality. However, LI teaches this limitation (see [0123]: in some embodiments, performing semantic analysis comprises leveraging topic modeling in natural language processing (NLP) so that the intention of one or more sections, subsections, and/or paragraphs of a document is correctly identified; linguistic analysis may classify sentences based on modality into either epistemic or deontic, and obligations in contracts are usually expressed in deontic modality). It would have been obvious to one having ordinary skill in the art at the time the Applicant's application was filed to combine LI's teaching with Sivakumar in view of Bailey and further in view of Crabtree because Sivakumar is dedicated to caching of text analytics based on topic demand and memory constraints, Bailey is dedicated to revising user queries, Crabtree is dedicated to collecting and extracting contextual data, and LI is dedicated to AI-augmented automated analysis of documents for use in auditing platforms for automated adjudication of commercial substance, related parties, and collectability; the combined teaching of the Sivakumar, Bailey, Crabtree and LI references would have enabled the predictive model of Sivakumar in view of Bailey and further in view of Crabtree to utilize automated adjudication of commercial substance, related parties, and collectability to better analyze the query and the query result.

Related Prior Art

The prior art made of record and not relied upon that is considered pertinent to applicant's disclosure can be found in the PTO-892 Notice of References Cited.

Conclusion

The examiner has cited particular columns and line numbers in the references applied to the claims above for the convenience of the applicant.
Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. In preparing responses, the applicant is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-5] VI., PRIOR ART MUST BE CONSIDERED IN ITS ENTIRETY, INCLUDING DISCLOSURES THAT TEACH AWAY FROM THE CLAIMS: a prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984); In re Fulton, 391 F.3d 1195, 1201, 73 USPQ2d 1141, 1146 (Fed. Cir. 2004). See also MPEP § 2123.

In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUEN S LU, whose telephone number is (571) 272-4114. The examiner can normally be reached M-F, 8-19, Mid-Flex 2 hours. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mr. Aleksandr Kerzhner, can be reached at 571-270-1760.

The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Kuen S Lu/
KUEN S LU
Primary Patent Examiner, Art Unit 2156
March 13, 2026
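The claim 8 rejection above turns on average semantic embeddings: average the embeddings of a cluster of query entities, average the embeddings of each topic cluster, and select the topic cluster whose average is most similar. A minimal sketch with toy 2-D vectors (no real embedding model; all values and names are illustrative, not from any cited reference):

```python
import math

def average(vectors):
    """Element-wise mean of equal-length embedding vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy embeddings for entities extracted from the input query,
# and precomputed average embeddings for two topic clusters.
query_entities = [[1.0, 0.1], [0.9, 0.2]]   # generated cluster of query nodes
query_avg = average(query_entities)          # its average semantic embedding

topic_clusters = {
    "revenue": [0.95, 0.15],   # average embedding representing each cluster
    "leases":  [0.10, 0.90],
}
best = max(topic_clusters, key=lambda t: cosine(query_avg, topic_clusters[t]))
print(best)  # → revenue
```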

Prosecution Timeline

Jan 17, 2025 — Application Filed
Oct 18, 2025 — Non-Final Rejection (§103)
Jan 23, 2026 — Interview Requested
Jan 30, 2026 — Applicant Interview (Telephonic)
Jan 30, 2026 — Response Filed
Jan 30, 2026 — Examiner Interview Summary
Mar 13, 2026 — Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566775 — MANAGING CONTENT ACROSS DISCRETE SYSTEMS (2y 5m to grant; granted Mar 03, 2026)
Patent 12561282 — DYNAMIC CLUSTERING BASED ON ATTRIBUTE RELATIONSHIPS (2y 5m to grant; granted Feb 24, 2026)
Patent 12561343 — SYSTEM AND METHOD FOR STRUCTURING AND ACCESSING TENANT DATA IN A HIERARCHICAL MULTI-TENANT ENVIRONMENT (2y 5m to grant; granted Feb 24, 2026)
Patent 12561292 — METHODS AND APPARATUS TO ESTIMATE CARDINALITY THROUGH ORDERED STATISTICS (2y 5m to grant; granted Feb 24, 2026)
Patent 12554687 — GATEWAY SYSTEM THAT MAPS POINTS INTO A GRAPH SCHEMA (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 85%
With Interview: 99% (+15.2%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 914 resolved cases by this examiner. Grant probability derived from career allow rate.
