Prosecution Insights
Last updated: April 19, 2026
Application No. 18/638,155

PSEUDOTELEPATHY HEADSET

Status: Non-Final OA (§102, §103)
Filed: Apr 17, 2024
Examiner: WOZNIAK, JAMES S
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: UNIVERSITY OF UTAH RESEARCH FOUNDATION
OA Round: 1 (Non-Final)

Grant Probability: 59% (Moderate)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 59% (grants 227 of 385 resolved cases; -3.0% vs Tech Center average)
Interview Lift: +40.1 percentage points for resolved cases with an interview
Average Prosecution: 3y 7m typical timeline (42 applications currently pending)
Career History: 427 total applications across all art units
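
The headline figures above can be reproduced from the raw counts. Here is a minimal Python sketch, assuming the interview lift is an additive percentage-point difference between allow rates with and without an interview (the variable names are illustrative, not from any real API):

```python
# Recompute the examiner stats shown above from the raw counts.
granted, resolved = 227, 385              # "227 granted / 385 resolved"
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")       # 59.0%

# Assumption: the "+40.1% interview lift" is an additive percentage-point
# difference between the allow rates with and without an interview.
interview_lift = 0.401
with_interview = career_allow_rate + interview_lift
print(f"Allow rate with interview: {with_interview:.1%}")  # 99.1%, shown as 99%
```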

Statute-Specific Performance

§101: 18.1% (-21.9% vs TC avg)
§103: 40.1% (+0.1% vs TC avg)
§102: 18.4% (-21.6% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 385 resolved cases.
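
Each "vs TC avg" delta in the table implies a Tech Center baseline. A short sketch back-solving that baseline from the values above (assuming delta = examiner rate minus TC average):

```python
# Back out the implied Tech Center average from each row of the table.
rates = {
    "§101": (0.181, -0.219),
    "§103": (0.401, +0.001),
    "§102": (0.184, -0.216),
    "§112": (0.161, -0.239),
}
for statute, (examiner_rate, delta) in rates.items():
    tc_avg = examiner_rate - delta        # delta = examiner_rate - tc_avg
    print(f"{statute}: examiner {examiner_rate:.1%}, implied TC avg {tc_avg:.1%}")
# Every statute backs out to a TC average of ~40.0%, so the four deltas are
# internally consistent with a single baseline.
```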

Office Action

Grounds of rejection: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive and is misleading. A new title is required that is clearly indicative of the invention to which the claims are directed. Specifically, the claimed invention has nothing to do with mind reading, or even a relation to sending messages via the mind. Instead, the invention is more akin to lip/facial reading for speech understanding and synthesis. The following title is suggested: --Headset using an Array of Distance Measurement Devices to Correlate Speech Pantomimes with Phonemes--.

Examiner Comment on Subject Matter Eligibility under 35 U.S.C. 101

The pending claims were given consideration under the 2019 Patent Subject Matter Eligibility Guidelines (2019 PEG). Specifically, the independent claims all regard varying forms of a headset having a frame with distributed distance measurement devices, and thus would be directed towards a practical application with a specific structure under Step 2A, Prong 2, even though a human could understand and transcribe phonemes for pantomimed speech via lip reading. Thus, claims 1-28 are found to be directed towards patent eligible subject matter under the 2019 PEG.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: the "output module" in claim 16, where the corresponding structure in the specification is a speaker or equivalents thereof (see Specification, page 15).

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 7-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Igarashi, et al. ("Silent Speech Eyewear Interface: Silent Speech Recognition Method using Eyewear with Infrared Distance Sensors," 2022).

With respect to Claim 1, Igarashi discloses: A headset (device worn on the head of a user, Abstract; Section 3, Page 34; and Fig. 1), comprising: a headset frame adapted to be worn on a head of a user, the headset frame having a front frame portion (the headset has a frame that is worn on the head of a user, including a frontal portion to go along with side portions, Fig. 1 (particularly A, C, and D)); and an array of distance measurement devices distributed across the front frame portion of the headset frame ("sensor arrays consisting of...infrared distance sensors," Section 3.1, Page 34; see also Fig. 1 (particularly B and D)); wherein the array of distance measurement devices are oriented adjacent facial regions of the user associated with speech when the headset frame is worn by the user (the infrared distance sensors are adjacent to different facial regions when the device is worn, Fig. 1 (particularly A, C, and D), to detect movement of those regions such as the mouth and jaw, Section 3.1, Page 34).

With respect to Claim 7, Igarashi further discloses: The headset of claim 1, wherein the headset frame comprises a first ear piece and a second earpiece, wherein the front frame portion extends between the first ear piece and the second earpiece (ear pieces that hold the head frame in position on either side of the frontal portion; see the sides of the user's head in Fig. 1C and the side supports in Fig. 1A).

With respect to Claim 8, Igarashi further discloses: The headset of claim 7, wherein the front frame portion comprises a first support member having a nose bridge (see the nose bridge of the frontal frame shown above the user's nose in Fig. 1C).

With respect to Claim 9, Igarashi further discloses: The headset of claim 8, wherein the front frame portion further comprises a second support member configured to reside adjacent a region between a mouth and a chin of the user when the headset is worn by the user (frontal support member containing a jaw joint sensor support that is adjacent to a mouth and chin of a user when the device is worn by a user, see Figs. 1A and 1C).

With respect to Claim 10, Igarashi further discloses: The headset of claim 9, wherein the front frame portion further comprises a third support bar configured to reside under the chin of the user when the headset is worn by the user (end portion of the support for the jaw sensor that can be positioned beneath the chin of the user when the headset device is worn by the user; see the positioning of the Fig. 1D jaw sensor when the device is worn in Fig. 1C).

With respect to Claim 11, Igarashi further discloses: The headset of claim 10, wherein the front frame portion further comprises a first cantilevered member and a second cantilevered member (see the arms of the headset device attached to the front frame portion but unsupported on the other end, i.e., cantilevered members, Figs. 1A and 1C).
With respect to Claim 12, Igarashi further discloses: The headset of claim 1, further comprising a microphone for capturing vocalizations from the user (microphone, Section 3.1, Page 34; Fig. 1E).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-6 are rejected under 35 U.S.C. 103 as being unpatentable over Igarashi, et al. in view of Alameh, et al. (U.S. PG Publication: 2020/0404424 A1).

With respect to Claim 2, Igarashi discloses the head worn device for measuring silent speech using an infrared (IR) distance sensor array as applied to Claim 1. Igarashi appears to take the approach that the subcomponents of an infrared distance sensor (i.e., for light emission and reception to measure distance) are well known, and fails to explicitly state that these devices include a light emitter and an associated light sensor as set forth in claim 2. Alameh, however, provides the details lacking in Igarashi in the form of an "infrared proximity sensor" that includes an LED signal emitter and a signal receiver (Paragraph 0102). Igarashi and Alameh are analogous art because they are from a similar field of endeavor in speech interfaces using infrared proximity/distance sensors. Thus, it would have been obvious to one of ordinary skill before the effective filing date to utilize the subcomponents of an IR distance sensor taught by Alameh in the IR distance sensors taught by Igarashi in order to predictably generate and measure the signals used to effectively detect facial/mouth/jaw movement in Igarashi.

With respect to Claim 3, Alameh further discloses: The headset of claim 2, wherein each light emitter emits infrared light and its associated light sensor detects infrared light (an "infrared proximity sensor" that includes an LED signal emitter and a signal receiver, Paragraph 0102).

With respect to Claim 4, Alameh further discloses: The headset of claim 3, wherein each light sensor outputs a signal responsive to detected infrared light (signals and associated characteristics output from the receiver and used to identify distance to an object (e.g., the facial structures of Igarashi), Paragraph 0102).

With respect to Claim 5, Alameh further discloses: The headset of claim 2, wherein each light emitter is a light emitting diode (LED) (LED, Paragraph 0102).

With respect to Claim 6, Alameh further discloses: The headset of claim 2, wherein each light emitter emits incoherent light (an LED is inherently a type of incoherent light source, Paragraph 0102).

Claims 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Igarashi, et al. in view of Maizels, et al. (U.S. PG Publication: 2023/0215437 A1).

With respect to Claim 13, Igarashi teaches the head worn device for measuring silent speech using an infrared (IR) distance sensor array and also including a microphone as applied to Claim 12.
Igarashi uses the light-based signals to recognize speech (see Section 3.2, Pages 34-35), but does not teach the ability to use data from the distance measurement devices and the microphone to train an artificial intelligence network via an in-communication computing device. Maizels, however, discloses a connected computer server (Fig. 1, Element 38) that uses light sensor data and corresponding "spoken words" as ground truth to train a neural network (Paragraphs 0057-0059). Igarashi and Maizels are analogous art because they are from a similar field of endeavor in silent speech interfaces. Thus, it would have been obvious to one of ordinary skill before the effective filing date to use the microphones and IR sensors of Igarashi as inputs of the ground truth spoken words and light-based inputs to train a neural network as taught by Maizels to provide a predictable result of allowing the head worn device of Igarashi to be used as a communication device to communicate with others (Maizels, Paragraph 0038).

With respect to Claim 14, Igarashi discloses: A system for enabling conversion of speech pantomimes of a user into synthesized speech, the system comprising: a headset having a headset frame adapted to secure to a head of the user, the headset frame having a front frame portion (the headset has a frame that is worn on the head of a user, including a frontal portion to go along with side portions, Fig. 1 (particularly A, C, and D)) and one or more distance measurement devices distributed across the front frame ("sensor arrays consisting of...infrared distance sensors," Section 3.1, Page 34; see also Fig. 1 (particularly B and D, showing the sensors across the front frame)); a microphone for capturing vocalizations from the user (microphone, Section 3.1, Page 34; Fig. 1E).

Although Igarashi teaches the headset component of the claimed system, Igarashi does not teach the computing device that uses data from the distance measurement devices and the microphone to train an artificial intelligence network to correlate speech pantomimes of the user with phonemes. Maizels, however, discloses: a computing device in communication with the distance measurement devices and the microphone (server in communication with user equipment, Fig. 1, Element 38), the computing device programmed to use sensor data from the one or more distance measurement devices and audio data from the microphone to train an artificial intelligence network to correlate speech pantomimes of the user with phonemes (using light sensor data and corresponding "spoken words" as ground truth to train a neural network to transform "silent speech" (i.e., pantomimed speech as claimed) into phonemes and words, Paragraphs 0049, 0057-0059, and 0081).

Igarashi and Maizels are analogous art because they are from a similar field of endeavor in silent speech interfaces. Thus, it would have been obvious to one of ordinary skill before the effective filing date to use the microphones and IR sensors of Igarashi as inputs of the ground truth spoken words and light-based inputs to train a neural network as taught by Maizels to provide a predictable result of allowing the head worn device of Igarashi to be used as a communication device to communicate with others (Maizels, Paragraph 0038).
With respect to Claim 15, Igarashi further discloses: The system of claim 14, further comprising an array of distance measuring devices distributed across the front frame portion ("sensor arrays consisting of...infrared distance sensors" that are distributed across the front frame, Section 3.1, Page 34; see also Fig. 1 (particularly B and D)).

With respect to Claim 16, Maizels further discloses: The system of claim 15, further comprising an output module for synthesizing speech from the phonemes generated by the artificial intelligence network (voice synthesizer for generating an audio output signal based upon silent speech using the phonemes generated by the neural network, Paragraphs 0049, 0059, and 0081; speaker, see Paragraphs 0049, 0056, and 0063).

Claims 17-22 are rejected under 35 U.S.C. 103 as being unpatentable over Igarashi, et al. in view of Maizels, et al., and further in view of Alameh, et al. (U.S. PG Publication: 2020/0404424 A1).

With respect to Claim 17, Igarashi in view of Maizels discloses the head worn device for measuring silent speech using an infrared (IR) distance sensor array and computing device for generating phonemes from the silent speech as applied to Claim 14. Igarashi appears to take the approach that the subcomponents of an infrared distance sensor (i.e., for light emission and reception to measure distance) are well known, and fails to explicitly state that these devices include a light emitter and an associated light sensor as set forth in claim 17. Alameh, however, provides the details lacking in Igarashi in the form of an "infrared proximity sensor" that includes an LED signal emitter and a signal receiver (Paragraph 0102). Igarashi, Maizels, and Alameh are analogous art because they are from a similar field of endeavor in speech interfaces using light-based sensors. Thus, it would have been obvious to one of ordinary skill before the effective filing date to utilize the subcomponents of an IR distance sensor taught by Alameh in the IR distance sensors taught by Igarashi, in the combination of Igarashi in view of Maizels, in order to predictably generate and measure the signals used to effectively detect facial/mouth/jaw movement in Igarashi.

Claims 18-21 contain subject matter similar to Claims 3-6, respectively, and thus are rejected under similar rationale.

With respect to Claim 22, Igarashi further discloses: The system of claim 14, further comprising a display operable to display training sets to the user (a screen that displays commands for recognition, Section 4, Page 35; note that Maizels teaches that training sets can be comprised of light imaging and vocalized speech for speaking certain words, Paragraphs 0057-0058).

Claims 23-24 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Maizels, et al. in view of Igarashi, et al.
With respect to Claim 23, Maizels discloses: A method for enabling conversion of speech pantomimes of a user into synthesized speech, the method comprising: capturing a first set of sensor data while the user vocalizes a plurality of training sounds (Paragraph 0058: "the training data may comprise signals collected from sensing devices 20 while users articulate certain sounds and words;" wherein this sensor 20 is light-based, Paragraph 0048); capturing audio data using a microphone while the user vocalizes the plurality of training sounds (training data also includes vocalized ground truth utterances gathered by a microphone as a part of the sensor device, Paragraphs 0052 and 0057-0058); training an artificial intelligence network using the sensor data and the audio data to correlate speech pantomimes of the user with phonemes (using light sensor data and corresponding "spoken words" as ground truth to train a neural network to transform user "silent speech" (i.e., pantomimed speech as claimed) into phonemes and words, Paragraphs 0049, 0057-0059, and 0081); capturing a second set of sensor data (silent speech data features are first collected from the sensors and then supplied to the trained artificial neural network (ANN) to generate phonemes and words corresponding to the silent speech, Paragraphs 0041, 0043, 0057, and 0059).

Maizels does not specifically teach the silent/pantomimed speech sensors featured in the claim in the form of an array of distance measurement devices distributed across a front frame portion of a headset frame of a headset. Igarashi, however, discloses an array of distance measurement devices distributed across a front frame portion of a headset frame of a headset ("sensor arrays consisting of...infrared distance sensors," Section 3.1, Page 34; see also Fig. 1 (particularly B and D, showing the sensors across the front frame)). Maizels and Igarashi are analogous art because they are from a similar field of endeavor in silent speech interfaces. Thus, it would have been obvious to one of ordinary skill before the effective filing date to substitute the IR distance sensor structures of Igarashi as the sensors for silent speech taught by Maizels to provide a predictable result of silent speech sensors that are lightweight and use little data for low power consumption (Igarashi, Section 2.2, Page 34).

With respect to Claim 24, Maizels further discloses: The method of claim 23, further comprising synthesizing speech from the phonemes (converted signals into phonemes used in speech generation via synthesis, Paragraphs 0023, 0043, 0049, and 0059).

With respect to Claim 28, Igarashi further discloses: The method of claim 23, wherein the headset frame comprises a first ear piece and a second earpiece, wherein the front frame portion extends between the first ear piece and the second earpiece (ear pieces that hold the head frame in position on either side of the frontal portion; see the sides of the user's head in Fig. 1C and the side supports in Fig. 1A).

Claims 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over Maizels, et al. in view of Igarashi, et al., and further in view of Alameh, et al. (U.S. PG Publication: 2020/0404424 A1).

With respect to Claim 25, Maizels in view of Igarashi discloses the process for measuring silent speech using an infrared (IR) distance sensor array and the computing process for generating phonemes from the silent speech as applied to Claim 23.
Maizels in view of Igarashi appears to take the approach that the subcomponents of an infrared distance sensor (i.e., for light emission and reception to measure distance) are well known (specifically Igarashi in the combination), and fails to explicitly state that these devices include a light emitter and an associated light sensor as set forth in claim 25. Alameh, however, provides the details lacking in Igarashi in the form of an "infrared proximity sensor" that includes an LED signal emitter and a signal receiver (Paragraph 0102). Maizels, Igarashi, and Alameh are analogous art because they are from a similar field of endeavor in speech interfaces using light-based sensors. Thus, it would have been obvious to one of ordinary skill before the effective filing date to utilize the subcomponents of an IR distance sensor taught by Alameh in the IR distance sensors taught by Igarashi, in the combination of Maizels in view of Igarashi, in order to predictably generate and measure the signals used to effectively detect facial/mouth/jaw movement in Igarashi.

Claim 26 contains subject matter similar to Claim 3, and thus is rejected under similar rationale. Claim 27 contains subject matter similar to Claim 6, and thus is rejected under similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Zhang, et al. (U.S. PG Publication: 2023/0077010 A1) teaches a wearable device including an IR camera with an IR LED and IR receiver to capture the emitted light reflected by the skin to reconstruct facial images for silent speech detection (Paragraphs 0049 and 0128).

Fazeldehkordi (U.S. Patent: 10,832,660) teaches a whisper speech decoding/understanding device that trains a deep neural network (DNN) using both whispered and normal speech (see Fig. 2).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES S WOZNIAK, whose telephone number is (571) 272-7632. The examiner can normally be reached 7-3, off alternate Fridays.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant may use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached at (571) 272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAMES S WOZNIAK/
Primary Examiner, Art Unit 2655
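
Editor's note on the technical substance of the rejection: the training step recited in claims 13, 14, and 23 (distance-sensor data captured while the user vocalizes known sounds, audio-derived phonemes serving as ground truth, then inference on sensor data alone) can be sketched in a few lines. The following is a hypothetical illustration with assumed shapes, labels, and toy data; it is not code from the application or from the Igarashi or Maizels references.

```python
import torch
import torch.nn as nn

N_SENSORS, N_PHONEMES, T = 12, 40, 200    # assumed array size, label set, frame count

# Stand-ins for the two captured streams: per-frame readings from the distance
# sensor array, and phoneme labels derived from the concurrent microphone audio.
sensor_frames = torch.randn(T, N_SENSORS)
phoneme_labels = torch.randint(0, N_PHONEMES, (T,))

# Small network correlating pantomime sensor frames with phonemes.
model = nn.Sequential(nn.Linear(N_SENSORS, 64), nn.ReLU(),
                      nn.Linear(64, N_PHONEMES))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):                       # train on the "first set of sensor data"
    optimizer.zero_grad()
    loss = loss_fn(model(sensor_frames), phoneme_labels)
    loss.backward()
    optimizer.step()

# Inference on a "second set of sensor data": silent articulation, no audio.
with torch.no_grad():
    predicted_phonemes = model(sensor_frames).argmax(dim=1)
```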

Prosecution Timeline

Apr 17, 2024: Application Filed
Nov 05, 2025: Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications with similar technology granted by this same examiner.

Patent 12597422: SPEAKING PRACTICE SYSTEM WITH RELIABLE PRONUNCIATION EVALUATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586569: Knowledge Distillation with Domain Mismatch For Speech Recognition (granted Mar 24, 2026; 2y 5m to grant)
Patent 12511476: CONCEPT-CONDITIONED AND PRETRAINED LANGUAGE MODELS BASED ON TIME SERIES TO FREE-FORM TEXT DESCRIPTION GENERATION (granted Dec 30, 2025; 2y 5m to grant)
Patent 12512100: AUTOMATED SEGMENTATION AND TRANSCRIPTION OF UNLABELED AUDIO SPEECH CORPUS (granted Dec 30, 2025; 2y 5m to grant)
Patent 12475882: METHOD AND SYSTEM FOR AUTOMATIC SPEECH RECOGNITION (ASR) USING MULTI-TASK LEARNED (MTL) EMBEDDINGS (granted Nov 18, 2025; 2y 5m to grant)

Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 59%
With Interview: 99% (+40.1%)
Median Time to Grant: 3y 7m
PTA Risk: Low

Based on 385 resolved cases by this examiner. Grant probability is derived from the career allow rate.
