Prosecution Insights
Last updated: April 19, 2026
Application No. 18/714,677

LANGUAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM

Non-Final OA: §101, §103

Filed: May 30, 2024
Examiner: MASTERS, KRISTEN MICHELLE
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Nippon Telegraph and Telephone Corporation
OA Round: 1 (Non-Final)

Grant Probability: 62% (Moderate)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 62% (25 granted / 40 resolved; +0.5% vs TC avg)
Interview Lift: +24.7% (strong), comparing resolved cases with vs. without an interview
Avg Prosecution: 3y 2m typical timeline; 36 applications currently pending
Total Applications: 76 across all art units (career history)

Statute-Specific Performance

§101: 35.2% (-4.8% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates; based on career data from 40 resolved cases.

Office Action

§101 §103
Detailed Action

This communication is in response to the Application filed on 5/30/2024. Claims 1-8 are pending and have been examined. Independent Claims 1 and 7 are device and method claims, respectively. Apparent priority: 12/01/2021.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The independent claims are directed to statutory categories: Claim 1 is a device claim and directed to the machine or manufacture category of patentable subject matter. Claim 7 is a method claim and is directed to the machine or manufacture category of patentable subject matter.

Independent claim 1 recites: "1. A language processing device configured to perform language processing, the language processing device comprising: (This relates to a human using speech or pen and paper to process language using natural language processing capabilities in the human mind.) circuitry configured to generate an error sentence corresponding to an original sentence based on pronunciation corresponding to text data indicating the original sentence; (This relates to a human using speech or pen and paper to generate an error sentence.) use a language model generate a prediction sentence from the error sentence based on a language model parameter of the language model; (This relates to a human using speech or pen and paper to generate a prediction sentence.) and update the language model parameter based on a difference between the original sentence and the prediction sentence." (This relates to a human updating parameters using speech or pen and paper.)

The dependent claims do not include additional limitations that could incorporate the abstract idea into a practical application or cause the claim as a whole to amount to significantly more than the underlying abstract idea. Regarding independent Claim 7, Claim 7 is a method claim with limitations similar to those of Claim 1 and is rejected under the same rationale.

This judicial exception is not integrated into a practical application. In particular, claims 1 and 8 recite the additional element of a "device." For example, paragraphs [0022]-[0023] of the as-filed specification describe: "The processor 301 serves as a control unit that controls the entire language processing device 3, and includes various arithmetic devices such as a central processing unit (CPU). The processor 301 reads and executes various programs on the memory 302. Note that the processor 301 may include a general-purpose computing on graphics processing units (GPGPU). [0023] The memory 302 includes a main storage device such as a read only memory (ROM) and a random access memory (RAM). The processor 301 and the memory 302 form a so-called computer, and the processor 301 executes various programs read on the memory 302, so that the computer realizes various functions." Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
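Editorial note: to make the processing recited in claim 1 above concrete, here is a minimal, self-contained sketch of the kind of training loop the claim describes (corrupt a sentence based on its pronunciation, let a language model predict a correction, update a parameter from the mismatch). The homophone table, the toy confidence "model", and the update rule are invented for illustration; they are not taken from the application or the cited art.

```python
import random

# Hypothetical homophone table standing in for "pronunciation corresponding to text data".
HOMOPHONES = {"write": "right", "their": "there", "to": "too"}
REVERSE = {v: k for k, v in HOMOPHONES.items()}

def generate_error_sentence(original: str) -> str:
    """Corrupt the original sentence by swapping words for same-sounding variants."""
    return " ".join(
        HOMOPHONES.get(w, w) if random.random() < 0.5 else w for w in original.split()
    )

def predict(error_sentence: str, params: dict) -> str:
    """Toy 'language model': undo a swap only once its learned confidence exceeds 0.5."""
    return " ".join(
        REVERSE[w] if w in REVERSE and params.get(w, 0.0) > 0.5 else w
        for w in error_sentence.split()
    )

def update(params: dict, original: str, prediction: str, lr: float = 0.3) -> None:
    """Nudge the 'language model parameter' based on the original/prediction difference."""
    for orig_w, pred_w in zip(original.split(), prediction.split()):
        if orig_w != pred_w:
            params[pred_w] = params.get(pred_w, 0.0) + lr  # learn to correct pred_w next time

original = "remember to write their names"
params: dict = {}
for _ in range(10):
    error = generate_error_sentence(original)
    prediction = predict(error, params)
    update(params, original, prediction)
print(params)  # e.g. {'too': 0.6, 'right': 0.6, 'there': 0.6} once corrections are learned
```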
The claims are directed to an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a device is noted as a generic computer. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the additional limitations in the claims noted above are directed towards insignificant extra-solution activity. The claims are not patent eligible.

Dependent claim 2 recites: "2. The language processing device according to claim 1, wherein the circuitry is configured to generate the error sentence by converting, into a second morpheme, a first morpheme included in the text data indicating the original sentence based on pronunciation of the first morpheme; (This relates to a human using speech or using pen and paper to generate an error sentence.) convert the second morpheme into a predetermined standard notation; and generate the error sentence based on the predetermined standard notation." (This relates to a human using speech, or using pen and paper, to convert a morpheme.) No additional limitations present.

Dependent claim 3 recites: "3. The language processing device according to claim 2, wherein the circuitry is configured to set, as the second morpheme, a morpheme that is selected randomly from a first morpheme string, the first morpheme string being obtained by performing morphological analysis on the text data indicating the original sentence." (This relates to a human using pen and paper to set a second morpheme.) No additional limitations present.

Dependent claim 4 recites: "4. The language processing device according to claim 2, wherein the circuitry is configured to convert a third morpheme having a standard notation into the predetermined standard notation, the third morpheme being selected from among third morphemes that are obtained by connecting adjacent second morphemes and performing morphological analysis." (This relates to a human using speech, or using pen and paper, to convert a morpheme.) No additional limitations present.

Dependent claim 5 recites: "5. The language processing device according to claim 2, wherein the second morpheme is a hiragana in a case where the original sentence is in Japanese." (This relates to a human observing, using natural language understanding in the human mind, that the second morpheme is a hiragana.) No additional limitations present.

Dependent claim 6 recites: "6. The language processing device according to claim 1, wherein the circuitry is configured to create a correct token string based on comparison information for correcting an error sentence token string to an original sentence token string, the error sentence token string and the original sentence token string being each obtained by dividing a corresponding sentence among the error sentence and the original sentence, in a process unit; (This relates to a human using speech or pen and paper to create a correct token string.) generate, based on the language model parameter, a prediction token string included in the prediction sentence, from a token string of the error sentence; and update the language model parameter based on the correct token string and the prediction token string." (This relates to a human generating a prediction sentence using pen and paper, or using speech, or in the human mind.) No additional limitations present.
Dependent claim 8 recites: "8. A non-transitory computer medium storage medium storing a program for causing a computer to execute the language processing method of claim 7." (This relates to a human naturally processing language in the human mind.) Device storage is an additional limitation.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Masakazu (NPL, "Pseudo error data creation and evaluation for Japanese speech recognition error correction"), and further in view of ABUAMMAR (U.S. Patent Application Publication US 20200372217 A1).

Regarding independent Claim 1, Masakazu teaches 1. A language processing device configured to perform language processing, the language processing device comprising: circuitry configured to generate an error sentence corresponding to an original sentence based on pronunciation corresponding to text data indicating the original sentence (see Masakazu, Section 2, paragraph 1: "…we used a text-to-speech model to convert text to speech and a speech recognition model to convert text to create a parallel corpus containing pseudo-errors…"); use a language model based on a neural network model and generate a prediction sentence from the error sentence based on a language model parameter of the language model (see Masakazu, Section 2, paragraph 1: "… and correct it using a Bidirectional LSTM. In this study, we create pseudo-data by applying pseudo-errors directly to the raw corpus") (see Masakazu, Section 1 (Introduction), paragraph 2: "Therefore, we consider correcting speech recognition results including speech recognition errors in the same way as grammatical error correction. Although the performance of grammatical error correction has improved due to the rise of deep learning-based methods using pre-learned language models, there is no large-scale Japanese corpus including speech recognition errors that can be applied to this method. Therefore, we analyzed the recognition error tendency from a small speech recognition corpus, formulated an error assignment rule, and applied the rule to a large Japanese corpus to automatically create a pseudo-speech recognition error corpus. In the pre-learning we selected data so that the sampling from a large Japanese corpus was closer to the domain to be evaluated and compared the accuracy with random sampling. In this study, we conducted an experiment of error under multiple conditions for pre-learning.
We evaluated the effect of the setting of corpus creation on the accuracy of speech recognition error correction. As a result, although prior learning using pseudo-error data contributed to the improvement of accuracy, there was almost no difference in error correction accuracy between sampling from a large corpus and sampling using data close to the domain.") (see Masakazu, Section 3 (Creation of pseudo voice error data), paragraphs 1-4: "We performed speech recognition using Google Speech-to-Text on the Japanese speech-to-text corpora, … from the results we randomly sampled 100 items each, separated them using MeCab, assigned them correct and incorrect labels… and manually annotated the error patterns for the speech recognition error locations… proper noun errors were the most common, followed by word partial changes and homophone errors. Since proper nouns are assumed to be included in less of the speech recognizer's learning data than general terms, we believe that this result is valid. We also assume that homophones are difficult to recognize because they cannot be correctly recognized without deep consideration of the context before and after them…. Based on the results of the analysis in Section 3.1, we apply three types of pseudo-error generation rules: homonyms, partial changes of words, and random changes. Proper nouns are treated as part of partial changes because it is difficult to impart pseudo-similar nouns due to their nature… Homonyms: in the operation of homonyms, we use ksasao's synonym dictionary as the homonym dictionary, and randomly replace the target word with a homonym. Partial changes in words: the procedure for making errors in partial changes in words is shown in Figure 1. First, all words in the corpus are read from ipad. For the reading of the word to be erroneously assigned, the similar character search library simstring is used to search for similar words. The threshold value is set to 0.8 or more for the cosine similarity of the character unigram, and the word is randomly selected from the top 10 words. If the target word is Chinese characters, Google Translate converts the word to Chinese characters and then replaces the word. Random change: finally, the random change operation replaces the target word with a randomly selected word from the words in the corpus. Pseudo-data creation was performed in the same way… by directly applying an error to each word in each sentence with a certain probability. In this setup, an error was generated for each word in each sentence with the following probability….") (see Masakazu, Figure 1 and Figure 2).
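Editorial note: the pseudo-error rules quoted above (homonym swap, partial word change via character-similarity search, and random replacement, each applied per word with some probability) can be sketched roughly as follows. The vocabulary, homonym table, and in-line cosine similarity are toy stand-ins for the paper's MeCab segmentation, homonym dictionary, and simstring lookup, not the actual pipeline.

```python
import math
import random
from collections import Counter

VOCAB = ["recognize", "recognise", "reorganize", "speech", "speak", "correct", "collect"]
HOMONYMS = {"right": ["write", "rite"], "sea": ["see"]}

def unigram_cosine(a: str, b: str) -> float:
    """Cosine similarity of character-unigram count vectors."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[ch] * cb[ch] for ch in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm

def partial_change(word: str) -> str:
    """Replace the word with a similar word (cosine >= 0.8), picked at random from the top 10."""
    candidates = sorted(
        (w for w in VOCAB if w != word and unigram_cosine(word, w) >= 0.8),
        key=lambda w: -unigram_cosine(word, w),
    )[:10]
    return random.choice(candidates) if candidates else word

def add_pseudo_errors(words: list, p: float = 0.1) -> list:
    """Corrupt each word with probability p using one of the three operations."""
    out = []
    for w in words:
        if random.random() < p:
            op = random.choice(["homonym", "partial", "random"])
            if op == "homonym" and w in HOMONYMS:
                w = random.choice(HOMONYMS[w])
            elif op == "partial":
                w = partial_change(w)
            else:
                w = random.choice(VOCAB)
        out.append(w)
    return out

print(add_pseudo_errors("we recognize speech and correct errors".split(), p=0.5))
```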
Masakazu does not specifically teach update the language model parameter based on a difference between the original sentence and the prediction sentence. However, ABUAMMAR does teach this limitation (see ABUAMMAR [0184]: "The model updater 1650 may control a data recognition model to be renewed based on an evaluation of a recognition result provided by the recognition result provider 1640. For example, the model updater 1650 may control the model trainer 1540 to renew a data recognition model by providing the model trainer 1540 with a recognition result provided by the recognition result provider 1640.").

Masakazu and ABUAMMAR are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device of Masakazu to incorporate updating the language model parameter based on a difference between the original sentence and the prediction sentence, as taught by ABUAMMAR. This allows for improved efficiency of analysis as recognized by ABUAMMAR [0087].

Regarding independent Claim 7, Claim 7 is a method claim with limitations similar to those of Claim 1 and is rejected under the same rationale.

As to Claim 6, Masakazu in view of ABUAMMAR teaches 6. The language processing device according to claim 1. Furthermore, ABUAMMAR teaches wherein the circuitry is configured to create a correct token string based on comparison information for correcting an error sentence token string to an original sentence token string, the error sentence token string and the original sentence token string being each obtained by dividing a corresponding sentence among the error sentence and the original sentence, in a process unit (see ABUAMMAR [0016]: "The method may further include performing at least one of a tokenizing process and a normalizing process with respect to the plurality of words constituting the source sentence, wherein context vectors representing words obtained as a result of performing the above processes may be obtained."); generate, based on the language model parameter, a prediction token string included in the prediction sentence, from a token string of the error sentence; and update the language model parameter based on the correct token string and the prediction token string (see ABUAMMAR [0088]: "In operation S530, a server 100 may obtain a plurality of paraphrased sentences for the source sentence by inputting words, which are tokenized and normalized through the pre-processing process of operation S530, to the trained network model. A method of determining a plurality of paraphrased sentences from a source sentence may correspond to the method described above with reference to FIG. 1.").

Masakazu and ABUAMMAR are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device of Masakazu to incorporate creating a correct token string based on comparison information for correcting an error sentence token string to an original sentence token string, generating a prediction token string included in the prediction sentence from a token string of the error sentence, and updating the language model parameter based on the correct token string and the prediction token string, as taught by ABUAMMAR. This allows for improved efficiency of analysis as recognized by ABUAMMAR [0087].
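Editorial note: the "comparison information" and "correct token string" language of claim 6 is essentially a token-level alignment between the corrupted sentence and the original sentence. A minimal sketch of one way to build such an alignment, using Python's difflib; the example tokens are invented, and the actual application may align tokens differently.

```python
import difflib

original_tokens = ["turn", "the", "light", "on"]
error_tokens = ["turn", "the", "lite", "on"]   # pronunciation-driven corruption

def correct_token_string(error_toks: list, original_toks: list) -> list:
    """For each error-sentence token, record the original token it should be corrected to."""
    sm = difflib.SequenceMatcher(a=error_toks, b=original_toks)
    corrected = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():   # the opcodes act as "comparison information"
        if op in ("equal", "replace"):
            span = original_toks[j1:j2]
            for k in range(i2 - i1):              # map each error token to its counterpart
                corrected.append(span[min(k, len(span) - 1)] if span else "")
        elif op == "delete":
            corrected.extend([""] * (i2 - i1))    # error token with no original counterpart
        # "insert": original tokens missing from the error sentence; nothing to emit per error token
    return corrected

print(correct_token_string(error_tokens, original_tokens))  # ['turn', 'the', 'light', 'on']
```

A training step would then compare this correct token string against the model's predicted token string and adjust the language model parameter accordingly.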
Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Masakazu (NPL, "Pseudo error data creation and evaluation for Japanese speech recognition error correction"), in view of ABUAMMAR (U.S. Patent Application Publication US 20200372217 A1), and further in view of Oshima (U.S. Patent No. US 10839155 B2).

As to Claim 2, Masakazu in view of ABUAMMAR teaches 2. The language processing device according to claim 1. Furthermore, Masakazu teaches and generate the error sentence based on the predetermined standard notation (see Masakazu, Section 2, paragraph 1: "… and correct it using a Bidirectional LSTM. In this study, we create pseudo-data by applying pseudo-errors directly to the raw corpus"). Masakazu in view of ABUAMMAR does not specifically teach wherein the circuitry is configured to generate the error sentence by converting, into a second morpheme, a first morpheme included in the text data indicating the original sentence based on pronunciation of the first morpheme, and convert the second morpheme into a predetermined standard notation. However, Oshima teaches these limitations (see Oshima, Figure 8, "modified clause"; Figure 9, "morpheme analysis result and tree"; Figure 10, "morpheme and combine").

Masakazu in view of ABUAMMAR and Oshima are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device of Masakazu to incorporate generating the error sentence by converting, into a second morpheme, a first morpheme included in the text data indicating the original sentence based on pronunciation of the first morpheme, and converting the second morpheme into a predetermined standard notation, as taught by Oshima. This allows for improved operability and searching as recognized by Oshima (11:60-61).

As to Claim 3, Masakazu in view of ABUAMMAR and further in view of Oshima teaches 3. The language processing device according to claim 2. Furthermore, ABUAMMAR teaches wherein the circuitry is configured to set, as the second morpheme, a morpheme that is selected randomly from a first morpheme string, the first morpheme string being obtained by performing morphological analysis on the text data indicating the original sentence (see ABUAMMAR [0070]: "When an arbitrary paraphrased sentence (examiner interprets 'selected randomly' as 'arbitrarily paraphrased') is similar to a source sentence for a sentence length, a probability value, parts of speech of words constituting sentences, and honorific titles, a server according to an embodiment of the disclosure may determine a similarity level between the source sentence and the arbitrary paraphrased sentence to be higher than a similarity level between the source sentence and a paraphrased sentence without such similarities."). Masakazu in view of ABUAMMAR and further in view of Oshima are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device of Masakazu, ABUAMMAR, and Oshima to incorporate setting, as the second morpheme, a morpheme that is selected randomly from a first morpheme string obtained by performing morphological analysis on the text data indicating the original sentence, as taught by ABUAMMAR. This allows for improved efficiency of analysis as recognized by ABUAMMAR [0087].

As to Claim 4, Masakazu in view of ABUAMMAR teaches 4.
The language processing device according to claim 2. Masakazu in view of ABUAMMAR does not specifically teach wherein the circuitry is configured to convert a third morpheme having a standard notation into the predetermined standard notation, the third morpheme being selected from among third morphemes that are obtained by connecting adjacent second morphemes and performing morphological analysis. However, Oshima does teach this limitation (see Oshima, Figure 4; (4:30-57): "(16) A text analysis program according to a further aspect of the invention is a program under which a computer functions as: a unit configured to decompose input text into morphemes; a pre-tag setting unit configured to set a pre-tag for a concerned morpheme with reference to attribute dictionaries each of which specifies a correspondence relationship between specific morphemes and types of attribute; a syntax analysis unit configured to identify a dependency relationship between respective morphemes or a dependency relationship between respective clauses each of which is an aggregation of respective morphemes; an index generation unit configured to generate an index where a combination of identification information that identifies a morpheme or a clause including a pre-tag and a type of the pre-tag are recorded; a determination rule storage unit configured to store a plurality of determination rules each including a combination of an application condition that designates at least a morpheme or a clause including a specific type of pre-tag, and an application effect that specifies a morpheme or a clause for which an attribute tag is set and a type of the attribute tag to be set; and a determination unit configured to set, with reference to the index and the determination rule storage unit, an attribute tag of a type designated in a morpheme or a clause designated in an application effect when a determination rule whose application condition matches with a concerned text is present.") (see Oshima (14:38-15:5), claim 1: "set pre-tags in the memory for the morphemes with reference to the attribute dictionaries, each pre-tag specifying a correspondence relationship between specific morphemes and an attribute type; identify a dependency relationship between respective morphemes or respective clauses, each clause being an aggregation of at least two of the morphemes; generate an index, in the memory, combining identification information that identifies a first one of the morphemes or the clauses, the pre-tag set for the first one of the morphemes or the clauses, and a tag type of the pre-tag; store a plurality of determination rules, in the memory, each including an application condition that designates at least a second one of the morphemes or the clauses, including a specific type of the pre-tags, and an application effect that specifies a third one of the morphemes or the clauses for which an attribute tag is set and the attribute type of the attribute tag;").

Masakazu in view of ABUAMMAR and Oshima are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device of Masakazu and ABUAMMAR to incorporate converting a third morpheme having a standard notation into the predetermined standard notation, the third morpheme being selected from among third morphemes that are obtained by connecting adjacent second morphemes and performing morphological analysis, as taught by Oshima. This allows for improved operability and searching as recognized by Oshima (11:60-61).
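Editorial note: the claim 4 operation at issue (connect adjacent second morphemes, re-run morphological analysis, and substitute a resulting morpheme that has a standard notation) can be pictured with a toy example. The hiragana-to-kanji table and the greedy longest-match segmenter below are invented stand-ins for a real morphological analyzer and are not the applicant's or Oshima's method.

```python
# Toy dictionary mapping hiragana readings to a "standard notation" (kanji surface form).
DICTIONARY = {"にんしき": "認識", "おんせい": "音声", "あやまり": "誤り"}

def longest_match_segment(text: str) -> list:
    """Greedy longest-match segmentation against the toy dictionary (1-char fallback)."""
    out, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in DICTIONARY or j == i + 1:
                out.append(text[i:j])
                i = j
                break
    return out

def merge_and_normalize(second_morphemes: list) -> list:
    """Join adjacent second morphemes, re-segment, and map segments with a standard notation."""
    joined = "".join(second_morphemes)
    return [DICTIONARY.get(seg, seg) for seg in longest_match_segment(joined)]

print(merge_and_normalize(["おん", "せい", "にん", "しき"]))  # -> ['音声', '認識']
```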
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Masakazu (NPL, "Pseudo error data creation and evaluation for Japanese speech recognition error correction"), in view of ABUAMMAR (U.S. Patent Application Publication US 20200372217 A1), in view of Oshima (U.S. Patent No. US 10839155 B2), and further in view of Itsui (U.S. Patent Application Publication US 20190080688 A1).

As to Claim 5, Masakazu in view of ABUAMMAR and further in view of Oshima teaches 5. The language processing device according to claim 2. Masakazu in view of ABUAMMAR and further in view of Oshima does not specifically teach wherein the second morpheme is a hiragana in a case where the original sentence is in Japanese. However, Itsui does teach this limitation (see Itsui [0122]: "FIG. 20 is a diagram illustrating a list 601 of paraphrases obtained by unifying paraphrases expressed in different terms and the average likelihoods thereof according to the second embodiment. In FIG. 20, the Japanese phrases "ii oto de kiku (listen to good quality sound)", "yoi oto de kiku (listen to good quality sound, "yoi" here is written in a combination of kanji and hiragana)", and "yoi oto de kiku (listen to good quality sound, "yoi" here is written in hiragana)" illustrated in FIG. 12 in the first embodiment are unified into the phrase "ii oto de kiku (listen to good quality sound)". In addition, the Japanese phrases "kawaii koe de kiku (hear in a cute voice, "kawaii" here is written in hiragana)" and "kawaii koe de kiku (hear in a cute voice, "kawaii" here is written in a combination of kanji and hiragana)" are unified into the phrase "kawaii koe de kiku (hear in a cute voice, "kawaii" here is written in hiragana)".").

Masakazu in view of ABUAMMAR and further in view of Oshima and Itsui are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the device of Masakazu, ABUAMMAR, and Oshima to incorporate the second morpheme being a hiragana in a case where the original sentence is in Japanese, as taught by Itsui. This allows for an improved rate of recognition of the sequences of words as recognized by Itsui [0049].

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Masakazu (NPL, "Pseudo error data creation and evaluation for Japanese speech recognition error correction"), in view of ABUAMMAR (U.S. Patent Application Publication US 20200372217 A1), and further in view of Itsui (U.S. Patent Application Publication US 20190080688 A1).

As to Claim 8, Masakazu in view of ABUAMMAR teaches a computer to execute the language processing method of claim 7. Masakazu in view of ABUAMMAR does not specifically teach 8. A non-transitory computer medium storage medium storing a program for causing. However, Itsui does teach this limitation (see Itsui [0148], claim 9: "…A non-transitory computer-readable recording medium that stores therein a program to cause a computer to execute:"). Masakazu in view of ABUAMMAR and Itsui are in the same field of endeavor of signal processing; therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of the combination of Masakazu and ABUAMMAR to incorporate the storage medium of Itsui. This allows for an improved rate of recognition of the sequences of words as recognized by Itsui [0049].
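Editorial note: the claim 5 limitation addressed above simply fixes the "second morpheme" to hiragana when the input is Japanese. Two small helpers illustrate what that normalization typically involves; the reading table is a hypothetical stand-in (a real system would take readings from a morphological analyzer), while the katakana-to-hiragana conversion uses the fixed 0x60 code-point offset between the two Unicode blocks.

```python
# Toy illustration only: READINGS is a made-up lookup of katakana readings.
READINGS = {"音声": "オンセイ", "認識": "ニンシキ", "誤り": "アヤマリ"}

def kata_to_hira(text: str) -> str:
    """Map katakana characters to hiragana by code-point offset; leave others unchanged."""
    return "".join(chr(ord(ch) - 0x60) if "ァ" <= ch <= "ヶ" else ch for ch in text)

def second_morpheme(morpheme: str) -> str:
    """Render a morpheme in hiragana: look up its (katakana) reading, then convert."""
    return kata_to_hira(READINGS.get(morpheme, morpheme))

print([second_morpheme(m) for m in ["音声", "認識", "の", "誤り"]])
# -> ['おんせい', 'にんしき', 'の', 'あやまり']
```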
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KRISTEN MICHELLE MASTERS, whose telephone number is (703) 756-1274. The examiner can normally be reached M-F, 8:30 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Louis Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KRISTEN MICHELLE MASTERS/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659

Prosecution Timeline

May 30, 2024
Application Filed
Jan 08, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12592219: Hearing Device User Communicating With a Wireless Communication Device
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12548569: METHOD AND SYSTEM OF DETECTING AND IMPROVING REAL-TIME MISPRONUNCIATION OF WORDS
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12548564: SYSTEM AND METHOD FOR CONTROLLING A PLURALITY OF DEVICES
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12547894: ENTROPY-BASED ANTI-MODELING FOR MACHINE LEARNING APPLICATIONS
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12547840: MULTI-STAGE PROCESSING FOR LARGE LANGUAGE MODEL TO ANSWER MATH QUESTIONS MORE ACCURATELY
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get these cases past this examiner. Based on the examiner's 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62%
With Interview: 87% (+24.7%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 40 resolved cases by this examiner. Grant probability derived from career allow rate.
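Editorial note: the headline figures appear to follow directly from the examiner counts shown in the Examiner Intelligence section. A quick back-of-the-envelope check; the rounding and the additive interview lift are our assumptions, not a documented formula.

```python
# Rough sanity check of the dashboard figures; assumes grant probability ~ career allow rate
# and that the interview lift is added in percentage points. Both assumptions are ours.
granted, resolved = 25, 40
allow_rate = granted / resolved                  # 0.625 -> shown as 62%
interview_lift = 0.247                           # +24.7 percentage points
with_interview = allow_rate + interview_lift     # 0.872 -> shown as 87%
print(f"allow rate {allow_rate:.1%}, with interview {with_interview:.1%}")
```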
