Prosecution Insights
Last updated: April 19, 2026
Application No. 18/235,241

USE OF NON-AUDIBLE SILENT SPEECH COMMANDS FOR AUTOMATED ASSISTANTS

Status: Final Rejection (§101)
Filed: Aug 17, 2023
Examiner: LE, THUYKHANH
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 2 (Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78%, above average (307 granted / 393 resolved; +16.1% vs TC avg)
Interview Lift: +37.1% allowance lift among resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 19 applications currently pending
Career History: 412 total applications across all art units

Statute-Specific Performance

§101: 18.6% (-21.4% vs TC avg)
§103: 41.8% (+1.8% vs TC avg)
§102: 20.1% (-19.9% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 393 resolved cases.
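
The figures above are simple arithmetic over the examiner's career counts. A minimal sketch in Python that reproduces them, assuming only the counts shown on this page; the Tech Center baselines are back-solved from the displayed deltas and are not official USPTO figures:

```python
# Reproducing the dashboard's examiner statistics from its displayed counts.
granted, resolved = 307, 393

career_allow_rate = granted / resolved               # 0.781 -> shown as "78%"
implied_tc_average = career_allow_rate - 0.161       # back-solved from "+16.1% vs TC avg"
print(f"Career allow rate: {career_allow_rate:.1%}")    # 78.1%
print(f"Implied TC average: {implied_tc_average:.1%}")  # ~62.0%

# Statute-specific rates as displayed (the page does not state the exact
# metric behind them) and their deltas versus the Tech Center average.
statute_rates  = {"101": 0.186, "103": 0.418, "102": 0.201, "112": 0.101}
statute_deltas = {"101": -0.214, "103": 0.018, "102": -0.199, "112": -0.299}

for statute, rate in statute_rates.items():
    baseline = rate - statute_deltas[statute]        # implied TC baseline
    print(f"§{statute}: {rate:.1%} (implied TC baseline {baseline:.1%})")
```

Notably, every statute's implied baseline works out to exactly 40.0%, which suggests the dashboard compares each statute against a single Tech Center estimate rather than per-statute averages.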

Office Action (§101)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendments/Arguments

2. With respect to the 102/103 rejections, Applicant has amended independent claim 17 by incorporating claim 20. Claim 20 was previously indicated as allowable in view of the prior art of record. Thus, claims 17-19 are allowable in view of the prior art of record.

With respect to the 35 U.S.C. 101 rejection, Applicant argues on pages 2-3 of the Remarks that “The 101 Memorandum “provides guidance on the ... topics that arise when examiners assess Step 2A of the USPTO's subject matter eligibility analysis” including “(a) reliance on the mental process grouping of abstract ideas; (b) distinguishing claims that recite a judicial exception from claims that merely involve a judicial exception; (c) analysis of the claim as a whole; and (d) consideration of whether a claim is directed to an improvement in the functioning of a computer or ‘any other technology or technical field’”. 101 Memorandum, p. 1.

With respect to “(a) reliance on the mental process grouping of abstract ideas”, the 101 Memorandum specifies that “a claim does not recite a mental process when it contains limitation(s) that cannot practically be performed in the human mind, for instance when the human mind is not equipped to perform the claim limitation(s)”.

With respect to “(c) analysis of the claim as a whole”, the 101 Memorandum clarifies that the “analysis in Step 2A Prong Two considers the claim as a whole” and cautions that “the additional limitations should not be evaluated in a vacuum”. 101 Memorandum, p. 3. Instead, “the analysis should take into consideration all the claim limitations and how these limitations interact and impact each other when evaluating whether the exception is integrated into a practical application.” 101 Memorandum, pp. 3-4.

With respect to “(d) consideration of whether a claim is directed to an improvement in the functioning of a computer or ‘any other technology or technical field’”, the 101 Memorandum specifies that an “important consideration in determining whether a claim improves technology or a technical field is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome”. 101 Memorandum, p. 4.

The 101 Memorandum also specifies that another “consideration when determining whether a claim integrates a judicial exception into a practical application in Step 2A Prong Two is whether the additional elements amount to more than a recitation of the words ‘apply it’”, that “Examiners are cautioned not to oversimplify claim limitations”, and that consideration should be given to whether the “claim covers a particular solution to a problem or a particular way to achieve a desired outcome” and to the “particularity or generality of the application of the judicial exception”. 101 Memorandum, pp. 4-5.

Applicant's attorney requests that, when the 101 rejection is reconsidered in view of the amendments, the Office perform a “(c) analysis of the claim as a whole”. 101 Memorandum, p. 1. More particularly, Applicant's attorney requests consideration of “all the claim limitations and how those limitations interact and impact each other” as required by MPEP 2106.04(d)(III).
Applicant's attorney requests reconsideration of how implementations of the combination of features of the amended independent claims can, as described in paragraphs [0003] and [0008] of the instant Application, “suppress[] full processing of audible speech in various situations” and/or “shorten[] the duration of the interaction with the automated assistant”, thereby preventing the utilization of “additional processing”. In performing such reconsideration, Applicant's attorney requests consideration of how the 101 Memorandum specifies that an “important consideration in determining whether a claim improves technology or a technical field is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome” and that consideration should be given to whether the “claim covers a particular solution to a problem or a particular way to achieve a desired outcome” and to the “particularity or generality of the application of the judicial exception”. 101 Memorandum, pp. 4-5. For at least these reasons, Applicant's attorney respectfully requests that the Office Action's 101 rejections be reconsidered and withdrawn.”

In response, Examiner respectfully notes that “suppressing full processing of the audible data in response to determining that there is a lack of correspondence between the detected non-audible silent speech data and the detected audible data” is a mental process. More specifically, a human could listen to the user (i.e., detect audible data from the user) and look at the user's lips (i.e., detect non-audible silent speech from the user). If the human could not hear anything from the user (i.e., no audible data is detected, only non-audible silent speech is detected), the human could stop listening to the user and start interpreting the user's lip movements.

Applicant argues that the claims “suppress[] full processing of audible speech in various situations” and/or “shorten[] the duration of the interaction with the automated assistant”, thereby preventing the utilization of “additional processing”. Examiner respectfully indicates that, since the asserted improvement is part of the abstract mental process (i.e., “suppressing full processing of the audible data in response to determining that there is a lack of correspondence between the detected non-audible silent speech data and the detected audible data”), it is not available to qualify as an improvement to technology under Step 2A Prong Two. Applicant's arguments are not persuasive, and for these reasons Examiner respectfully disagrees.

Claim Rejections - 35 USC § 101

3. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

4. Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claim 1 recites:

“1. (Currently Amended) A method implemented using one or more processors, the method comprising: detecting, at the client device, audible data based on one or more audible sensors and temporally corresponding non-audible silent speech data based on one or more non-audible sensors; determining, at the client device, whether there is correspondence between the detected non-audible silent speech data and the detected audible data; and in response to determining that there is a lack of correspondence between the detected non-audible silent speech data and the detected audible data: suppressing full processing of the audible data; and determining to activate one or more aspects of non-audible silent speech processing, the one or more aspects of non-audible silent speech processing including: generating recognized text based on processing the non-audible silent speech data, and/or performing one or more actions or initiating one or more fulfillments based on the recognized text generated based on the non-audible silent speech data.”

The limitations recited in claim 1 as drafted cover a mental process. More specifically, the underlying abstract idea revolves around what happens once a human is interacting with a user. The human could listen to the user and look at the user's lips; if the human does not hear anything from the user, the human could understand a command or a question by observing the user's lip movements or gestures and write the command or question down on paper. Put another way, a human could listen to the user (i.e., detect audible data from the user) and look at the user's lips (i.e., detect non-audible silent speech from the user), and if the human could not hear anything from the user (i.e., no audible data is detected, only non-audible silent speech is detected), the human could stop listening to the user and start interpreting the user's lip movements.
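
For orientation, the control flow that claim 1 recites can be sketched as follows. This is a hedged illustration only: every name here (SensorFrame, correspondence, recognize_silent_speech, perform_actions) is a hypothetical stand-in, not the applicant's implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorFrame:
    audible: Optional[bytes]        # from one or more audible sensors
    silent_speech: Optional[bytes]  # from one or more non-audible sensors
    start_ms: int                   # capture window, for temporal comparison
    end_ms: int

def correspondence(audible: Optional[bytes], silent: Optional[bytes]) -> bool:
    """Assumed stand-in for the claimed correspondence check.

    Claim 5 narrows this to *temporal* correspondence (whether the two
    detected streams overlap in time); a real system would compare richer
    features than mere presence."""
    return audible is not None and silent is not None

def recognize_silent_speech(data: Optional[bytes]) -> str:
    # Placeholder for recognized-text generation, e.g. via a trained
    # silent-speech model as in claim 14.
    return ""

def perform_actions(text: str) -> None:
    # Placeholder for performing actions / initiating fulfillments
    # based on the recognized text.
    pass

def handle_frame(frame: SensorFrame) -> Optional[str]:
    if not correspondence(frame.audible, frame.silent_speech):
        # Lack of correspondence: suppress full processing of the audible
        # data and activate non-audible silent speech processing instead.
        text = recognize_silent_speech(frame.silent_speech)
        perform_actions(text)
        return text
    return None  # correspondence present: ordinary audible path (not recited above)
```

The recited substance is the branch: on a lack of correspondence, the audible path is suppressed and the silent-speech path is activated.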
Claim 17 recites:

“17. (Currently Amended) A method implemented using one or more processors, the method comprising: authenticating, at a client device, a user that is actively utilizing the client device; in response to authenticating the user, without any explicit invocation, and for a duration that the authentication of the user is active: activating non-audible silent speech recognition, which includes: receiving, at the client device, non-audible silent speech data based on one or more non-audible sensors; generating recognized text based on processing the non-audible silent speech data; and performing one or more actions or initiating one or more fulfillments based on the non-audible silent speech data, wherein performing the one or more actions or initiating the one or more fulfillments based on the non-audible silent speech data includes: determining, based on the recognized text for the non-audible silent speech data, whether to activate on-device natural language understanding of the recognized text and/or to on-device fulfillment that is based on the on-device natural language understanding; and when it is determined to activate the on-device natural language understanding and/or to activate the on-device fulfillment: performing the on-device natural language understanding and/or initiating, on-device, the fulfillment.”

The limitations recited in claim 17 as drafted likewise cover a mental process. More specifically, the underlying abstract idea revolves around what happens once a human is interacting with a user. At the beginning, the human could look at the user to verify the user's identity. Next, the human could look at the user's lips to understand speech from observing the lip movements and could write down text from those movements, and finally the human could determine whether to start interpreting the text from the user's lip movements.

The judicial exception is not integrated into a practical application. In particular, the claims recite the additional limitations of the client device, one or more audible sensors, and one or more non-audible sensors. These additional elements, alone or in combination, amount to no more than (i) mere instructions to implement the idea on a computer and/or (ii) recitation of generic computer structure that serves to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the pertinent industry. Viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a patent-eligible application such that the claims amount to significantly more than the abstract idea itself. Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

There is further no improvement to the computing device other than interpreting the user's intent based on lip, gesture, tongue, and/or larynx movement. The mere recitation of a memory, a processor, and/or the like is akin to adding the words “apply it” and/or “use it” with a computer in conjunction with the abstract idea. Paragraphs [0053]-[0054] of the specification disclose: “[0053] Storage subsystem 324 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 324 may include the logic to perform selected aspects of the method of FIG. 2, as well as to implement various components depicted in FIGS. 1A and 1B. [0054] These software modules are generally executed by processor 314 alone or in combination with other processors. Memory 325 used in the storage subsystem 324 can include a number of memories including a main random-access memory (RAM) 330 for storage of instructions and data during program execution and a read only memory (ROM) 332 in which fixed instructions are stored. A file storage subsystem 326 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem 326 in the storage subsystem 324, or in other machines accessible by the processor(s) 314.”

As filed, the specification thus describes a general-purpose computer used merely as a tool to apply the abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
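
Claim 17's recited flow can be sketched the same way; `device` and all of its methods below are assumed, hypothetical names, not the applicant's API. Silent-speech recognition runs without explicit invocation for as long as the user's authentication stays active, and the recognized text itself gates on-device NLU and fulfillment.

```python
def run_silent_assistant(device) -> None:
    user = device.authenticate_active_user()     # authenticate the active user
    while device.authentication_active(user):    # claim 19: may lapse after a set period
        silent_data = device.read_non_audible_sensors()
        text = device.recognize_silent_speech(silent_data)
        # Decide, based on the recognized text, whether to spend on-device
        # compute on natural language understanding and fulfillment.
        if device.should_activate_nlu(text):
            intent = device.on_device_nlu(text)
            device.fulfill_on_device(intent)
```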
As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computer is no more than a generic computer, and mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible. The dependent claims do not remedy the issues noted above, as discussed claim by claim below.

Claim 2 recites activating natural language understanding of the recognized text. Understanding and interpreting text is a human process. There is no additional limitation presented.

Claim 3 recites generating a portion of text based on processing the non-audible silent speech data, generating audible recognized text based on processing the audible data, and determining whether there is correspondence between the two generated texts. This reads on the human interpreting the lip reading and the speech and comparing them to determine whether they match. There is no additional limitation presented.

Claim 4 recites performing one or more actions or initiating one or more fulfillments based on the recognized text generated from the non-audible silent speech data. This reads on the human performing one or more actions (e.g., turning on the television) based on a result of the lip reading. There is no additional limitation presented.

Claim 5 recites determining the correspondence between the detected non-audible silent speech data and the detected audible data based on temporal correspondence. This reads on the human determining the correspondence between the lip reading and the speech by determining whether they overlap in time. There is no additional limitation presented.

Claims 6-7 recite a microphone, a camera, an accelerometer, a magnetometer, and/or a gyroscope. The claimed invention uses these components as tools to implement an otherwise abstract mental process practically performed by a human (e.g., receiving a user's speech and/or observing the user's gestures). There is no additional limitation presented.

Claim 8 recites determining whether a silent mode is activated and generating the recognized text when the silent mode is activated. This reads on the human starting to lip-read and writing down a result from observing the user's lip movements. There is no additional limitation presented.

Claim 9 recites activating the silent mode in a noisy environment. This reads on the human starting to look at the user's mouth and guessing the user's mouthed words in a noisy environment. There is no additional limitation presented.

Claim 10 recites activating the silent mode in a predetermined location. This reads on the human starting to look at the user's mouth and guessing the user's mouthed words in, e.g., a supermarket. There is no additional limitation presented.

Claim 11 recites activating the silent mode in response to detecting a predetermined user input. This reads on the human starting to look at the user's mouth and guessing the user's mouthed words after hearing the user say, e.g., “This is private information; I do not want anybody to hear my voice, just look at my mouth.” There is no additional limitation presented.

Claim 12 recites providing the user with a feedback output to indicate that the silent mode has been activated. This reads on the human telling the user not to speak loudly but to murmur. There is no additional limitation presented.
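
The silent-mode triggers walked through for claims 8-12 (noisy environment, predetermined location, predetermined user input, plus feedback on activation) reduce to a simple disjunction. A hedged sketch, with all thresholds and names assumed purely for illustration:

```python
NOISE_THRESHOLD_DB = 70                         # assumed value for illustration
QUIET_LOCATIONS = {"library", "meeting_room"}   # assumed "predetermined locations"

def should_activate_silent_mode(ambient_db: float,
                                location: str,
                                user_requested: bool) -> bool:
    return (ambient_db >= NOISE_THRESHOLD_DB    # claim 9: noisy environment
            or location in QUIET_LOCATIONS      # claim 10: predetermined location
            or user_requested)                  # claim 11: predetermined user input

def activate_silent_mode(device) -> None:
    device.silent_mode = True                   # claim 8: gate recognition on this flag
    device.emit_feedback("Silent mode on")      # claim 12: feedback output
```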
Claim 13 recites authenticating a user and generating text based on processing non-audible silent speech. This reads on giving the user permission to murmur and interpreting the murmur. There is no additional limitation presented.

Claim 14 recites generating text by using a trained silent model. Generating the text by interpreting the non-audible silent speech is a mental process. The step of generating the text using the trained silent model involves a high-level process that is practically implemented by a human; the trained silent model is generic in nature and merely stands in for the human mind in an otherwise mental process.

Claim 15 further recites determining whether there is correspondence between the detected non-audible silent speech data and the detected audible data. This reads on the human determining that there is correspondence if the human does not hear anything from the user. There is no additional limitation presented.

Claim 16 recites fully generating the recognized text in response to determining that there is the lack of correspondence. This reads on the human writing the text of the non-audible silent speech when there is the lack of correspondence. There is no additional limitation presented.

Claim 18 recites giving the permission to the user when the user is in physical contact. Giving permission to the user is a human process. There is no additional limitation presented.

Claim 19 recites that the authentication of the user is active for a predetermined period of time. This reads on the human giving the permission to the user for a predetermined period of time (e.g., 5 seconds). There is no additional limitation presented.

For at least the reasons provided above, claims 1-19 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Allowable Subject Matter

5. Claims 1-19 are allowable in view of the prior art of record. However, claims 1-19 stand rejected under 101 (abstract idea), and for the application to pass to allowance this rejection needs to be overcome. Any amendment to overcome the 101 rejection that results in a change in claim scope will require further search and/or consideration in order to determine allowability. The following is a statement of reasons for the indication of allowable subject matter: the prior art, taken alone or in combination, fails to teach the following elements in combination with the other recited elements in the claims:

“in response to determining that there is a lack of correspondence between the detected non-audible silent speech data and the detected audible data: suppressing full processing of the audible data; and determining to activate one or more aspects of non-audible silent speech processing, the one or more aspects of non-audible silent speech processing including: generating recognized text based on processing the non-audible silent speech data, and/or performing one or more actions or initiating one or more fulfillments based on the recognized text generated based on the non-audible silent speech data.” as recited in Claim 1.
“determining, based on the recognized text for the non-audible silent speech data, whether to activate on-device natural language understanding of the recognized text and/or to on-device fulfillment that is based on the on-device natural language understanding; and when it is determined to activate the on-device natural language understanding and/or to activate the on-device fulfillment: performing the on-device natural language understanding and/or initiating, on-device, the fulfillment.” as recited in Claim 17.

The closest prior art found is as follows.

a. Whitmire et al. (US 10,665,243 B1). Whitmire et al. disclose a method for detecting mouthed and subvocalized commands from the user (Whitmire et al., col. 1, lines 38-56: “A system for subvocalized speech recognition includes a plurality of sensors that detect mouthed and subvocalized (e.g., murmur, whisper, etc.) commands provided by a user. The plurality of sensors are coupled to an eyeglass-type platform representing a near-eye-display (NED). The NED may be part of an artificial reality system. Here, the plurality of sensors are configured to capture non-audible and subvocalized commands by a user. The plurality of sensors include sensors that can detect commands (e.g., a camera, in-ear proximity sensor) when no audible sound is detected (i.e., user mouths a command) and other sensors (e.g., a non-audible murmur (NAM) microphone and an air microphone) that can detect subvocalizations of a user (e.g., user murmurs/whispers the command). Data from the sensors is collected and processed using machine learning techniques, to extract features and identify one or more commands provided by the user. The system allows for users to interact with systems in a private manner and/or in noisy environments.”). Whitmire et al. utilize the plurality of sensors to capture non-audible and subvocalized commands provided by the user, collect the data from the sensors, extract features, and identify one or more commands provided by the user when no audible sound is detected. However, Whitmire et al. do not detect audible sound and temporally corresponding non-audible silent speech, determine whether there is correspondence between the audible sound and the non-audible silent speech, and activate one or more aspects of non-audible silent speech processing in response to determining that there is a lack of correspondence between the detected audible sound and the non-audible silent speech, as recited in Claim 1. Whitmire et al. also do not teach and/or suggest determining, based on the recognized text for the silent speech data, whether to activate on-device natural language understanding of the recognized text and/or to activate on-device fulfillment that is based on the on-device natural language understanding, as recited in Claim 17. Thus, Whitmire et al. fail to teach and/or suggest the allowable subject matter.

b. Mahadeva et al. (US 2021/0280186 A1). Mahadeva et al. disclose switching from a voice input mode to a non-voice input mode in response to a privacy triggering event; in the non-voice input mode, the voice assistant device captures the lip shape with the camera to obtain the private data.
Mahadeva et al. at [0009] state: “In accordance with an aspect of the disclosure, a method of managing private data in a voice assistant device, may include: detecting a privacy triggering event while obtaining at least one voice input from a first user in a voice input mode; switching from the voice input mode to a non-voice input mode in response to the privacy triggering event; obtaining a non-voice input from the first user in the non-voice input mode; and executing an operation of the voice assistant device corresponding to the non-voice input.” And at [0072]: “The processor 302 may then determine whether at least one of the voice assistant device 102 and the at least one smart device 106 is capable of receiving a non-voice input based on at least one of the first plurality of parameters, the second plurality of parameters, and the determined or detected presence and proximity of the second user 120. In an embodiment, the voice assistant device 102 is a smartphone having a touch-sensitive display and the smart device 106 is a voice-controlled intelligent assistant device with speakers. As such, the processor 302 may determine the touch-sensitive display of the smartphone is available and capable of receiving text inputs to provide the private data. In another example, the voice assistant device 102 is a voice-controlled intelligent assistant device with speakers and the smart device 106 is equipped with a smart camera. The processor 302 may determine that the smart camera is available and capable of receiving a lip shape input in a lip-reading mode to obtain the private data. In an embodiment, the voice assistant device 102 may capture the lip shape with the camera 316 to obtain the private data. In another example, the voice assistant device 102 is a smartphone with a touch-sensitive display and a camera and the smart device 106 is equipped with a smart camera. The smart device 106 is located in proximity to the first user 104. Then, the processor 302 may determine that the smart device 106 with the smart camera is available and capable of receiving lip-shape inputs in the lip-reading mode to obtain the private data.”

Mahadeva et al. switch to the non-voice mode in response to detecting the privacy triggering event, which is detected while obtaining at least one voice input from a user in a voice input mode. However, Mahadeva et al. do not teach detecting both voice input and non-voice input, determining whether there is correspondence between the voice input and the non-voice input, and activating one or more aspects of non-voice processing in response to determining that there is a lack of correspondence between the voice input and the non-voice input, as recited in Claim 1. Mahadeva et al. also do not teach and/or suggest determining, based on the recognized text for the silent speech data, whether to activate on-device natural language understanding of the recognized text and/or to activate on-device fulfillment that is based on the on-device natural language understanding, as recited in Claim 17. Thus, Mahadeva et al. fail to teach and/or suggest the allowable subject matter noted above.

c. Hengerer et al. (US 2017/0123030 A1). Hengerer et al. disclose activating gesture detection in response to detecting the user's verification (Hengerer et al., [0019]: “The gesture control unit includes one or more verification switches that can be provided within reach of a user during operation of the imaging system.
In certain embodiments, the verification switch can be an electrical switch, a pneumatic switch, or a mechanical switch. In one embodiment, the verification switch can be a foot-operated switch. In another embodiment, the verification switches 220 can be an inductive switch that can be activated based on proximity of a part of the user's body. In a further embodiment, the verification switch can be mounted to a tabletop or work surface, such that the user can activate it by pressing or contacting it with an elbow, a forearm, or the like”; [0023]: “the verification switch can be configured to activate or enable gesture detection, gesture recognition, and/or gesture control of certain features of the imaging system only when the verification switch is activated”).

In Hengerer et al., the gesture processing is activated in response to the user's verification, not in response to determining that there is a lack of correspondence between the detected gesture and the detected audio as recited in Claim 1. Hengerer et al. also do not teach and/or suggest determining, based on the recognized text for the silent speech data, whether to activate on-device natural language understanding of the recognized text and/or to activate on-device fulfillment that is based on the on-device natural language understanding, as recited in Claim 17. Thus, Hengerer et al. fail to teach and/or suggest the allowable subject matter noted above.

Conclusion

6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.

a. Sivakumar et al. (US 2021/0350072 A1) disclose initiating the natural language processing engine to process the received text.

b. Kim et al. (US 2021/0156961 A1) disclose activating gesture recognition.

c. Ham (US 2023/0382349 A1) discloses activating a microphone to receive the voice command when user authentication through facial recognition is completed.

7. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to THUYKHANH LE, whose telephone number is (571) 272-6429. The examiner can normally be reached Mon-Fri, 9am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew C. Flanders, can be reached at (571) 272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THUYKHANH LE/
Primary Examiner, Art Unit 2655

Prosecution Timeline

Aug 17, 2023: Application Filed
Sep 18, 2025: Non-Final Rejection (§101)
Dec 18, 2025: Applicant Interview (Telephonic)
Dec 19, 2025: Examiner Interview Summary
Dec 19, 2025: Response Filed
Jan 08, 2026: Final Rejection (§101) (current)

Precedent Cases

Applications with similar technology granted by the same examiner:

Patent 12597413: ELECTRONIC DEVICE AND CONTROL METHOD THEREOF (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592218: COMMUNICATION DEVICE, COMMUNICATION METHOD, AND NON-TRANSITORY STORAGE MEDIUM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12592239: ACTIVE VOICE LIVENESS DETECTION SYSTEM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586577: AUTOMATIC SPEECH RECOGNITION USING MULTIPLE LANGUAGE MODELS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579365: INFORMATION ACQUISITION METHOD AND APPARATUS, DEVICE, AND MEDIUM (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 99% (+37.1%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 393 resolved cases by this examiner. Grant probability is derived from the career allow rate.
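
One plausible reading of how these figures combine, as a short sketch; the 99% cap is an inference from the displayed values (78% + 37.1% would exceed 100%, yet the page shows 99%), not documented methodology:

```python
# Combining the projection figures shown above.
base_grant_probability = 0.78   # career allow rate, per this page
interview_lift = 0.371          # displayed interview lift

with_interview = min(base_grant_probability + interview_lift, 0.99)  # assumed cap
print(f"Projected grant probability with interview: {with_interview:.0%}")  # 99%
```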
