Prosecution Insights
Last updated: April 19, 2026
Application No. 18/338,827

WEARABLE SILENT SPEECH DEVICE, SYSTEMS, AND METHODS

Final Rejection §103
Filed
Jun 21, 2023
Examiner
SHIN, SEONG-AH A
Art Unit
2659
Tech Center
2600 — Communications
Assignee
Wispr AI Inc.
OA Round
2 (Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (321 granted / 409 resolved); +16.5% vs TC avg (above average)
Interview Lift: +20.5% for resolved cases with interview (strong)
Typical Timeline: 2y 9m avg prosecution; 25 applications currently pending
Career History: 434 total applications across all art units

Statute-Specific Performance

§101: 20.8% (-19.2% vs TC avg)
§103: 45.2% (+5.2% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 409 resolved cases
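The per-statute deltas can be checked against a single baseline: assuming each delta is simply the examiner's rate minus the Tech Center average, all four statutes imply the same ~40% TC baseline. A minimal sketch of that arithmetic (the `STATS` table below just restates the figures shown above):

```python
# Examiner allowance rate after each rejection type, with delta vs. the
# Tech Center average, as shown above: statute -> (examiner %, delta %)
STATS = {
    "101": (20.8, -19.2),
    "103": (45.2, +5.2),
    "102": (16.7, -23.3),
    "112": (7.1, -32.9),
}

# Recover the implied TC baseline for each statute: examiner_rate - delta
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in STATS.items()}
print(baselines)  # every statute implies the same 40.0% TC average
```

That every entry lands on the same baseline suggests the tool compares each statute against one Tech Center-wide average rather than per-statute averages.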

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-12 and 14-21 are pending in this application. Claim 13 is canceled.

Response to Arguments

Regarding the rejection under 35 U.S.C. 102: Applicant’s amendment and arguments with respect to the rejections have been fully considered but are moot because the arguments do not apply to any of the references being used in the current rejection. The amended limitations, “one or more sensors configured to determine a signal indicative of a position of a tongue of the user, at least one of the one or more sensors being disposed adjacent a cheek of the user; a processing module configured to receive the electrical signals from the plurality of electrodes and the signal indicative of the position of the tongue of the user”, as recited in claim 1, raise new grounds for rejection, and the Examiner is therefore applying a new reference.

CLAIM INTERPRETATION

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “module” in claims 1, 11-12, 14, and 17. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C.
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5, 10, 15-17, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Kapur et al. (US Pub. 2019/0074012) in view of Chen et al. (US Pub. 2012/0259554).

Regarding claim 1, Kapur discloses a wearable device, comprising: a plurality of electrodes, wherein a subset of the plurality of electrodes are configured to measure electrical signals at a face, head, or neck of a user, the electrical signals being indicative of the user's speech activation patterns while the user is silently speaking or whispering (Fig. 9 and [0008][0079][0080]: a silent speech interface (SSI) system detects silent, internal articulation of words by a human user; electrodes may measure voltage at positions on the user's skin, head, and neck); a processing module configured to receive the electrical signals from the plurality of electrodes and [the signal indicative of the position of the tongue of the user] and perform one or more processing operations of the electrical signals (Fig.
9 and [0079][0080]: a computer may receive and analyze data that encodes electrode measurements); and a communication module communicatively coupled to an external device ([0008][0035][0079][0080]: the SSI system facilitates private communication by a user wearing the SSI).

Kapur does not explicitly teach the bracketed limitation; however, Chen does explicitly teach the bracketed limitation: one or more sensors configured to determine a signal indicative of a position of a tongue of the user, at least one of the one or more sensors being disposed adjacent a cheek of the user; and a processing module configured to receive the electrical signals from the plurality of electrodes and [the signal indicative of the position of the tongue of the user] (Figs. 4D and 4E, [0033][0034]: “One or more electrodes can be used to track tongue movements … the person can wear a headset … with one sensor 412 on the left cheek touching the skin and one electrode 414 on the right cheek touching the skin … determining the user's one or more tongue orientation characteristics”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method and system of silent speech interface as taught by Kapur with the tongue tracking interface apparatus as taught by Chen to improve performance in analyzing the sound.

Regarding claim 2, Kapur in view of Chen discloses the wearable device of claim 1, and Kapur further discloses: wherein the plurality of electrodes is supported to contact the user's face by a sensor arm, the sensor arm being supported at a side of the user's head (Fig. 1, [0047]: the SSI device has a clip-on extension 120 that houses electrodes 134 and 135, which are worn on the user's skin at a side of the user’s head).
Regarding claim 3, Kapur in view of Chen discloses the wearable device of claim 2, and Kapur further discloses: wherein the sensor arm is coupled to an ear hook, the ear hook configured to support the wearable device at an ear of the user (Fig. 1, [0047]: SSI device 100 may house a bone conduction transducer 108 that is positioned behind the user's ear).

Regarding claim 4, Kapur in view of Chen discloses the wearable device of claim 2, and Kapur further discloses: wherein the sensor arm is coupled to a headset, the headset configured to support the sensor arm at a side of the head of the user (Fig. 1, [0047]: a portion of the main body of SSI device 100 extends below the jawline and houses electrodes 131, 132, and 133 that are worn on the user's skin at a side of the user’s head).

Regarding claim 5, Kapur in view of Chen discloses the wearable device of claim 2, and Kapur further discloses: wherein the sensor arm is coupled to a temple of glasses or goggles (Kapur, [0139]: “smart-glasses may directly communicate with the SSI device and provide contextual information to, or obtain contextual information from, the SSI device”).

Regarding claim 10, Kapur in view of Chen discloses the wearable device of claim 1, and Kapur further discloses: wherein the plurality of electrodes is a first plurality of electrodes and further comprising a second plurality of electrodes, and wherein the first plurality of electrodes is supported to contact a user's face by a first sensor arm and the second plurality of electrodes is supported to contact a user's face by a second sensor arm (Kapur, [0047]: “houses electrodes 131, 132, and 133 that are worn on the user's skin in the submaxillary region. Likewise, in FIG. 1, clip-on extension 120 houses electrodes 134 and 135 that are worn on the user's skin in the oral (lip) region and mental (chin) region, respectively”).
Regarding claim 15, Kapur in view of Chen discloses the wearable device of claim 1, and Kapur further discloses: a camera (Kapur, [0139]: cameras may directly communicate with the SSI device and provide contextual information to the SSI device).

Regarding claim 16, Kapur in view of Chen discloses the wearable device of claim 1, and Kapur further discloses: an accelerometer configured to record movement of a jaw, the cheek, facial muscles, the head or the neck of the user ([0079]: “Electrodes 905 may measure voltage at positions on the user's skin (e.g., positions on the user's head and neck)”; [0085]: “a user's inner speech (e.g., mental speech) or mental verbal imagery 1003 may produce efferent nerve signaling 1005, which in turn may cause internal articulation 1000 (e.g., neural activation at neuromuscular junctions in Articulator Muscles).”)

Regarding claim 17, Kapur in view of Chen discloses the wearable device of claim 1, and Kapur further discloses: wherein the communication module is communicatively coupled to an external device; and the external device, wherein the external device is configured to receive one or more silent or whispered speech signals from the communication module and the external device is configured to determine one or more phrases silently spoken by the user from the silent or whispered speech signals by executing a neural network on the silent or whispered speech signals (Kapur, [0028][0029][0089][0113]-[0116]: speech recognition is performed by a trained neural network; Fig. 9 and [0008][0035][0079][0080]: a silent speech interface (SSI) system detects silent, internal articulation of words by a human user; the SSI system facilitates private communication by a user wearing the SSI).
Regarding claim 21, Kapur in view of Chen discloses the wearable device of claim 1, and Kapur further discloses: wherein the external device is configured to execute the neural network on the silent or whispered speech signals by: providing silent or whispered speech signals comprising a first data type as input to a first portion of the neural network; and providing silent or whispered speech signals comprising a second data type as input to a second portion of the neural network ([0065][0066][0071]-[0073]: training two different ANN types, a classification neural network and a sequence-to-sequence neural network).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kapur et al. (US Pub. 2019/0074012) in view of Chen et al. (US Pub. 2012/0259554) and further in view of McClung, III (US Pub. 2012/0095768, hereinafter McClung).

Regarding claim 6, Kapur in view of Chen discloses the wearable device of claim 2. Kapur in view of Chen does not explicitly teach, however McClung does explicitly teach: wherein the sensor arm is configured to be rotatably positioned about an anchor point, and wherein the sensor arm is configured to be linearly positioned closer and farther from the user's mouth (McClung, Figs. 4A-4C: an arm of a headset is anchored and a blocker may be rotated to block the lips of a user). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method and system of silent speech interface as taught by Kapur in view of Chen with a headset which has an anchored arm as taught by McClung to provide flexibility with added material around or adjacent a person's mouth in order to block an addition to a person's breath when breath is expelled (McClung, [0124]).

Claims 7-9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kapur et al. (US Pub. 2019/0074012) in view of Chen et al. (US Pub. 2012/0259554) and further in view of Wang et al. (US Pub. 2023/0190196).
Regarding claim 7, Kapur in view of Chen discloses the wearable device of claim 2. Kapur in view of Chen does not explicitly teach, however Wang does explicitly teach: a spring configured to maintain contact between the sensor arm and cheek of the user (Wang, [0036]: “headset 300 may also include a stabilizing band 314 connected to the housing unit 302 and configured to extend across the head 318 of the user … maintain the first electrode (312) in contact with the first region of the head of the user with a consistent amount of force and the second electrode comprises a spring-loaded electrode holder”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method and system of silent speech interface as taught by Kapur in view of Chen with the force-controlled electroencephalogram (EEG) monitoring device of Wang, which maintains a constant pressure between electrodes and the scalp of a user, to provide an amount of force for electrodes that is adapted to and comfortable for specific regions on the head of the user (Wang, [0069]).

Regarding claim 8, Kapur in view of Chen discloses the wearable device of claim 1. Kapur in view of Chen does not explicitly teach, however Wang does explicitly teach: a reference electrode supported to contact the body of the user above or behind an ear of the user and provide a bias voltage to the body of the user (Wang, [0036][0043]: “An operation amplifier is a DC-coupled high-gain electronic voltage amplifier”).

Regarding claim 9, Kapur in view of Chen and further in view of Wang discloses the wearable device of claim 8.
Kapur in view of Chen does not explicitly teach, however Wang does explicitly teach: wherein the plurality of electrodes is configured as a differential amplifier (Wang, [0043]: “Electronics that may be included in the housing unit 302 can include one or more amplifiers … may be any of an instrument amplifier, an operation amplifier, and/or a bio-signal amplifier”), wherein the electrical signals represent a difference between a first voltage measured by a first subset of electrodes of the plurality of electrodes and a second voltage measured by a second subset of electrodes of the plurality of electrodes (Wang, Fig. 4: first and second arms have a continuous smooth curve from the housing unit to the first and second electrode; [0043]: “An operation amplifier is a DC-coupled high-gain electronic voltage amplifier with a differential input”).

Regarding claim 18, Kapur in view of Chen discloses the system of claim 17, and Kapur further discloses: wherein the external device is configured to perform natural language processing to determine one or more commands from the one or more words or phrases and control the user interface based on the one or more commands (Kapur, [0080][0137][0139]: performing natural language processing (NLP) to detect the content of internal articulation by the user and using it with virtual reality applications; communicate with the SSI device and provide contextual information to, or obtain contextual information from, the SSI device). Kapur in view of Chen does not explicitly teach, however Wang does explicitly teach: wherein the external device comprises a display configured to display a user interface (Wang, [0037]: integrated or combined with other wearable devices such as a head-mounted display (HMD) that provides a virtual reality, an augmented reality, or a mixed reality interface; [0056]: the input/output controller 916 may provide output to a display screen).

Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kapur et al. (US Pub. 2019/0074012) in view of Chen et al. (US Pub. 2012/0259554) and further in view of Maisels et al. (US Pub. 2023/0230574).

Regarding claim 11, Kapur in view of Chen discloses the wearable device of claim 1. Kapur in view of Chen does not explicitly teach, however Maisels does explicitly teach: a control module configured to change a mode of the wearable device, in response to an activation signal recognized by the processing module from electrical signals recorded by the plurality of electrodes (Maisels, [0052]: automatically switch from idle mode to high power consumption mode based on differing trigger types, such as a sensed input). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method and system of silent speech interface as taught by Kapur in view of Chen with the method of determining speech from facial skin movement as taught by Maisels to improve audio quality of conversations made by mobile telephones in loud public spaces, by cleaning and removing background signals from audio (Maisels, [0045]).

Regarding claim 12, Kapur in view of Chen and further in view of Maisels discloses the wearable device of claim 11, and Kapur further discloses: one or more input sensors configured to provide signals to the control module (Kapur, [0033]: the SSI system may be used to control other devices).

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Kapur et al. (US Pub. 2019/0074012) in view of Chen et al. (US Pub. 2012/0259554) and further in view of O’Neill et al. (US Pat. 9,432,768).

Regarding claim 14, Kapur in view of Chen discloses the wearable device of claim 1.
Kapur in view of Chen does not explicitly teach, however O’Neill does explicitly teach: a plurality of microphones, wherein the processing module is configured to receive signals from the plurality of microphones and perform beamforming on the signals received from the plurality of microphones (O’Neill, Col. 1, line 66 – Col. 2, line 14: “beamforming techniques may be used with a microphone array of a wearable computer”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method and system of silent speech interface as taught by Kapur in view of Chen with the beamforming technology for a wearable computer as taught by O’Neill to obtain a higher quality signal by improving the signal-to-noise ratio, which leads to improved interpretation of audio within the environment, and to isolate a user's speech from extraneous audio signals occurring within a physical environment (O’Neill, Col. 2, lines 1-3).

Claims 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kapur et al. (US Pub. 2019/0074012) in view of Chen et al. (US Pub. 2012/0259554), further in view of Wang et al. (US Pub. 2023/0190196), and further in view of Maisels et al. (US Pub. 2023/0230574).

Regarding claim 19, Kapur in view of Chen in view of Wang discloses the wearable device of claim 18. Kapur in view of Chen and further in view of Wang does not explicitly teach, however Maisels does explicitly teach: wherein the external device is configured to provide a virtual assistant platform and wherein the external device is configured to provide the one or more words or phrases as inputs to the virtual assistant platform (Maisels, [0096]: extracting from the inner/silent speech and can be used in human-machine communication, e.g., personal assistant/“Alexa” type devices).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method and system of silent speech interface as taught by Kapur in view of Chen and further in view of Wang with the method of determining speech from facial skin movement and conveying it to a device as taught by Maisels to improve human-machine communication (Maisels, [0096]).

Regarding claim 20, Kapur in view of Chen, further in view of Wang, and further in view of Maisels discloses the wearable device of claim 19. Kapur in view of Chen and further in view of Wang does not explicitly teach, however Maisels does explicitly teach: a speaker, wherein the external device is configured to transmit a response to the one or more words or phrases from the virtual assistant platform to the wearable device, and the wearable device is configured to play the response on the speaker, in response to receiving the response (Maisels, [0050]: giving user feedback with respect to the speech output).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see attached form PTO-892.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEONG-AH A. SHIN, whose telephone number is (571) 272-5933. The examiner can normally be reached 9 AM - 3 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Seong-ah A. Shin
Primary Examiner
Art Unit 2659

/SEONG-AH A SHIN/
Primary Examiner, Art Unit 2659

Prosecution Timeline

Jun 21, 2023
Application Filed
Jun 24, 2025
Non-Final Rejection — §103
Sep 19, 2025
Response Filed
Oct 10, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598095
DISPLAY DEVICE
2y 5m to grant; granted Apr 07, 2026
Patent 12591452
INVOKING AN AUTOMATED ASSISTANT TO PERFORM MULTIPLE TASKS THROUGH AN INDIVIDUAL COMMAND
2y 5m to grant; granted Mar 31, 2026
Patent 12585696
REDUCING METADATA TRANSMITTED WITH AUTOMATED ASSISTANT REQUESTS
2y 5m to grant; granted Mar 24, 2026
Patent 12555568
DEVICE CONTROL METHOD AND APPARATUS, READABLE STORAGE MEDIUM AND CHIP
2y 5m to grant; granted Feb 17, 2026
Patent 12554935
COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA
2y 5m to grant; granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 99% (+20.5%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 409 resolved cases by this examiner. Grant probability derived from career allow rate.
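The headline figures are consistent with a straightforward derivation from the raw counts shown above: 321/409 ≈ 78.5% career allow rate (displayed as 78%), and adding the +20.5 percentage-point interview lift to the unrounded base gives ≈ 99%. A minimal sketch of that arithmetic (assuming, as the footnote states, that grant probability is taken directly from the career allow rate, and that the with-interview figure is base rate plus lift, capped at 100%):

```python
GRANTED, RESOLVED = 321, 409   # examiner's career counts shown above
INTERVIEW_LIFT = 20.5          # percentage-point lift with an interview

base = GRANTED / RESOLVED * 100                 # ~78.5% career allow rate
with_interview = min(base + INTERVIEW_LIFT, 100.0)

print(round(base))            # 78 (displayed grant probability)
print(round(with_interview))  # 99 (displayed with-interview probability)
```

Note the lift is applied to the unrounded base (78.48 + 20.5 ≈ 98.98), which is why the displayed values 78% and 99% differ by more than 20.5 points.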
