Prosecution Insights
Last updated: April 19, 2026
Application No. 18/633,897

ELECTRONIC DEVICE AND VOICE RECOGNITION METHOD THEREOF

Final Rejection — §101, §103, §112
Filed
Apr 12, 2024
Examiner
KY, KEVIN
Art Unit
2671
Tech Center
2600 — Communications
Assignee
Samsung Electronics Co., Ltd.
OA Round
4 (Final)
76%
Grant Probability
Favorable
5-6
OA Rounds
2y 6m
To Grant
99%
With Interview

Examiner Intelligence

Grants 76% — above average
76%
Career Allow Rate
420 granted / 549 resolved
+14.5% vs TC avg
Strong +25.3% interview lift
Resolved cases that included an examiner interview were allowed 25.3 points more often than those without one.
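The interview figures above can be sanity-checked with simple arithmetic. The sketch below assumes the "interview lift" is the difference in allowance rates between resolved cases with and without an interview; the ~73.7% without-interview rate is back-solved from the page's 99% and +25.3% figures rather than reported anywhere on this page.

```python
# Sanity check on the examiner's interview statistics shown above.
# Assumption: interview lift = allow rate (with interview) - allow rate (without).
# The without-interview rate below is back-solved, not reported on this page.
granted, resolved = 420, 549
career_allow_rate = granted / resolved      # 0.765 -> displayed as 76%
rate_with_interview = 0.99                  # page's "99% With Interview"
interview_lift = 0.253                      # page's "+25.3% Interview Lift"
rate_without_interview = rate_with_interview - interview_lift  # implied 0.737
print(f"career allow rate:         {career_allow_rate:.1%}")   # 76.5%
print(f"implied without interview: {rate_without_interview:.1%}")  # 73.7%
```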
Typical timeline
2y 6m
Avg Prosecution
33 currently pending
Career history
582
Total Applications
across all art units

Statute-Specific Performance

§101
17.6%
-22.4% vs TC avg
§103
46.5%
+6.5% vs TC avg
§102
20.8%
-19.2% vs TC avg
§112
9.9%
-30.1% vs TC avg
Tech Center average is an estimate • Based on career data from 549 resolved cases
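A quick consistency check on these deltas: if each "vs TC avg" figure is simply the examiner's statute-specific rate minus the Tech Center average, every row back-solves to the same implied baseline of 40.0%. The sketch below is illustrative arithmetic only; the 40% baseline is inferred, not stated on the page.

```python
# Back-solve the implied Tech Center average from each statute row above.
# Assumption: delta = examiner's statute-specific rate - TC average.
rows = {"§101": (17.6, -22.4), "§103": (46.5, 6.5),
        "§102": (20.8, -19.2), "§112": (9.9, -30.1)}
for statute, (rate, delta) in rows.items():
    print(f"{statute}: implied TC average = {rate - delta:.1f}%")
# Every row yields 40.0%, suggesting a single TC-wide baseline estimate is used.
```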

Office Action

§101 §103 §112
DETAILED ACTION

Claim Objections

Claim 23 is objected to because of the following informalities: “anther” is claimed. Appropriate correction is required to correct spelling errors.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 23 recites the limitation "the anther user voice input" in claim 23. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 21-40 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception without significantly more. The claims recite limitations that fall under the grouping of abstract ideas, including “Certain Methods of Organizing Human Activity”, such as concepts relating to managing human behavior, and “Mental Processes”, i.e., concepts performed in the human mind such as observation, evaluation, judgment, and opinion (Step 2A, Prong One). Specifically, claim 21 is directed to receiving voice input, comparing it with previously stored voice data, and triggering a device operation based on a match. These steps can be conceptualized as fundamental human cognitive functions. The claim limitations merely recite steps involving the reception and evaluation of voice inputs and performing corresponding functions, which amount to mental acts of recognizing and interpreting speech, and responding accordingly.

This judicial exception is not integrated into a practical application because the claim as a whole does not impose any meaningful limits on the abstract idea. Instead, the claim merely applies the abstract idea using generic computer components, such as a voice input receiver, communicator, storage, and a controller. These elements are recited at a high level of generality and perform their conventional, expected functions, which do not add any technological improvement or specialized implementation that could render the claim significantly more than the abstract idea (Step 2A, Prong Two).

Furthermore, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception (Step 2B). The use of known components to carry out routine functions, such as receiving voice input, identifying a match with stored information, and executing a command, amounts to no more than implementing the abstract idea on a computer. This is not enough to meet the threshold of inventive concept, as these elements and functions are well-understood, routine, and conventional in the field, consistent with the court decisions and guidance outlined in MPEP § 2106.05(d). Additionally, the dependent claims further recite human activity and limitations that do not amount to significantly more.
For example, the dependent claims further recite various human or mental processes, such as determining if the voice input is a trigger word, and further recite various generic computer elements, such as a display, a remote control, and an image receiver.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-40 are rejected under 35 U.S.C. 103 as being unpatentable over Chadha (US 20060074658) in view of Koganei (US 20140181865).

Regarding claim 21, Chadha discloses an electronic device (Fig. 1, one or more user devices 110a-d) comprising: a voice input receiver (¶23, voice input may, according to some embodiments, be received via a microphone and/or may otherwise include the receipt of a signal); a communicator (¶49, antenna 504 may be any type and/or configuration of device for transmitting and/or receiving communications signals that is or becomes known); a storage configured to store voice information corresponding to a user voice input (¶27, any portions within the voice input may be compared to a stored list of pre-defined commands); and a controller (¶48, the system 500 may include, for example, one or more control circuits 502, which may be any type or configuration of processor, microprocessor, micro-engine, and/or any other type of control circuit that is or becomes known or available) configured to: based on a user voice input being received through the voice input receiver, identify whether the user voice input corresponds to voice information stored in the storage (¶53, the memory 514 may store a database, tables, lists, and/or other data that allow the system 500 to identify and/or otherwise determine executable commands. The memory 514 may, for example, store a list of recognizable commands that may be compared to received voice input to determine actions that the system 500 is desired to perform; ¶58, the activation module 612 may, for example, cause the system 600 to enter an activation state in the case that voice sounds and/or voice commands are received from a recognized user and/or that include a particular activation identifier (e.g., a name associated with the system 600); ¶59, the language module 614 may identify and/or interpret the voice input that has been received (e.g., via the input device 606 and/or the communication interface 604).
The language module 614 may, for example, determine that received voice input is associated with a recognized user and/or determine one or more commands that may be associated with the voice input); based on the user voice input corresponding to the voice information stored in the storage, change the mode of the electronic device to the voice recognition mode and perform a function corresponding to the user voice input received from the external device without identifying whether the user voice input received from the external device corresponds to the trigger word (¶38, the laptop 410a may recognize the voice of the first user 402 and may, for example, accept and/or process the first voice command 442); and based on the user voice input not corresponding to the voice information stored in the storage, not perform the function related to the voice information corresponding to the user voice input (¶24, at 204, the method 200 may continue by determining if the voice input is associated with a recognized user; ¶39, the laptop 410a may ignore such commands because they do not originate from the first user 402).

Chadha fails to teach the following limitations, which Koganei teaches: based on the user voice input corresponding to the voice information stored in the storage and corresponding to a trigger word, change a mode of the electronic device to a voice recognition mode for performing a voice recognition function (¶42, the command recognition unit 102 analyzes the speech acquired by the speech acquisition unit 101 and identifies a preset command. To be more specific, the command recognition unit 102 references the speech-command information previously stored in the storage unit 170, to identify the command included in the speech acquired by the speech acquisition unit 101. In the speech-command information, speech is associated with a command representing command information to be given to the TV 10; ¶43, the recognition result acquisition unit 103 acquires a recognition result that is obtained when the speech acquired by the speech acquisition unit 101 is recognized by the command recognition unit 102 or the keyword recognition unit 50; ¶44, here, the keyword recognition unit 50 acquires the part other than the command included in the speech acquired by the speech acquisition unit 101. The keyword recognition unit 50 recognizes, as a keyword, the part of the speech other than the command, and converts this part of the speech into a corresponding character string (this conversion is referred to as "dictation" hereafter)); and based on the user voice input being received from an external device external to the electronic device through the communicator after a button of the external device is pressed (¶41, acquire the speech of the user that is acquired by the microphone 21 built in the remote control 20 or by the microphone 31 built in the mobile terminal 30; ¶56, a first method is to press a microphone button (not illustrated) that is included in the input unit 22 of the remote control 20. More specifically, when the user presses the microphone button of the remote control 20, the operation receiving unit 110 of the TV 10 receives this operation where the microphone button of the remote control 20 is pressed. Moreover, the TV 10 sets the current volume level of sound outputted from a speaker (not illustrated) of the TV 10 to a preset volume level that is low enough to allow the speech to be easily collected by the microphone 21.
Then, when the current volume level of the sound outputted from the speaker of the TV 10 is set to the preset volume level, the speech recognition apparatus 100 starts the speech recognition processing), identify whether the user voice input corresponds to the voice information stored in the storage (¶62, the command recognition unit 102 compares the speech uttered to the TV 10 by the user with the speech-command information previously stored in the storage unit 170, to identify the command).

Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of based on the user voice input corresponding to the voice information stored in the storage and corresponding to a trigger word, change a mode of the electronic device to a voice recognition mode for performing a voice recognition function and based on the user voice input being received from an external device external to the electronic device through the communicator after a button of the external device is pressed, identify whether the user voice input corresponds to the voice information stored in the storage from Koganei into the electronic device as disclosed by Chadha. The motivation for doing this is to improve speech recognition to enhance user interaction.

Regarding claim 22, Chadha discloses the electronic device of claim 21, wherein the controller is configured to obtain voice information from a user voice input received through the voice input receiver, and control to store the obtained voice information in the storage (¶24, participate in a process to learn how to determine if voice input is associated with a recognized user. The user of a user device such as a cell phone may, for example, teach the cell phone how to recognize the user's voice. In some embodiments, the user may speak various words and/or phrases to the device and/or may otherwise take actions that may facilitate recognition of the user's voice by the device. In some embodiments, the learning process may be conducted for any number of potential users of the device (e.g., various family members that may use a single cell phone); ¶53, the memory 514 may, for example, store a list of recognizable commands that may be compared to received voice input to determine actions that the system 500 is desired to perform. In some embodiments, the memory 514 may store other instructions such as operation and/or command execution rules, security features (e.g., passwords), and/or user profiles).

Regarding claim 23, Chadha discloses the electronic device of claim 21, wherein the controller is configured to: based on the user voice input received through the voice input receiver corresponding to the trigger word, execute the voice recognition mode for performing a voice recognition function (¶33, at 306, the method 300 may continue, for example, by initiating an activation state in the case that the voice input is associated with the recognized activation identifier.
Upon receiving and identifying a specific activation identifier (such as "Alpha"), for example, a user device may become active and/or initiate voice-activation features), and based on the anther user voice input received through the voice input receiver while the voice recognition mode is executed, identify whether voice information obtained from the received another user voice input corresponds to the voice information stored in the storage (¶34, once the device is activated it may "listen" for commands; ¶53, the memory 514 may, for example, store a list of recognizable commands that may be compared to received voice input to determine actions that the system 500 is desired to perform).

Regarding claim 24, Chadha discloses the electronic device of claim 23, but fails to teach, where Koganei teaches, wherein the controller is configured to, based on the voice recognition mode being executed, control to display a user interface indicating that the voice recognition mode is being executed (Koganei ¶60, i.e., display an indicator 202 indicating the volume level of collected speech, in a lower part of an image 200 as shown in FIG. 1).

Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of wherein the controller is configured to, based on the voice recognition mode being executed, control to display a user interface indicating that the voice recognition mode is being executed from Koganei into the device as disclosed by Chadha. The motivation for doing this is to improve speech recognition to enhance user interaction.

Regarding claim 25, Chadha discloses the electronic device of claim 21, wherein the controller is configured to: based on the user voice input being received from the external device not the voice input receiver, identify whether voice information obtained from the user voice input received from the external device corresponds to the voice information stored in the storage without identifying whether the user voice input received from the external device corresponds to the trigger word (¶43, the second user 404 may, for example, provide the second voice command 444, directed to the second user device 410b (e.g., one of the cellular telephones). According to some embodiments, the cell phone 410b may be configured to enter an activation state in response to an activation identifier. The cell phone 410b may, for example, be associated with, labeled, and/or named "Alpha". The second user 404 may, in some embodiments (such as shown in FIG. 4), speak an initial portion of a second voice command 444a that includes the phrase "Alpha, activate.").

Regarding claim 26, Chadha discloses the electronic device of claim 25, but fails to teach, where Koganei teaches, wherein the external device comprises a remote control, and wherein the controller is configured to receive through the communicator the user voice input received through a microphone of the remote control (Koganei ¶56, i.e., press a microphone button (not illustrated) that is included in the input unit 22 of the remote control 20…allow the speech to be easily collected by the microphone 21 (of the remote control 20, see Fig. 2)).
Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of a communicator, wherein the external device comprises a remote control, and wherein the controller is configured to receive through the communicator the user voice input received through a microphone of the remote control from Koganei into the device as disclosed by Chadha. The motivation for doing this is to improve speech recognition to enhance user interaction.

Regarding claim 27, the combination of Chadha and Koganei discloses the electronic device of claim 26, wherein the controller is configured to, based on the user voice input being received from the remote control through the communicator not the voice input receiver after an input button of the remote control is pressed, execute the voice recognition mode for performing the voice recognition function even if the user voice input received from the remote control does not correspond to the trigger word (Koganei ¶56, i.e., when the user presses the microphone button of the remote control 20, the operation receiving unit 110 of the TV 10 receives this operation where the microphone button of the remote control 20 is pressed. Moreover, the TV 10 sets the current volume level of sound outputted from a speaker (not illustrated) of the TV 10 to a preset volume level that is low enough to allow the speech to be easily collected by the microphone 21. Then, when the current volume level of the sound outputted from the speaker of the TV 10 is set to the preset volume level, the speech recognition apparatus 100 starts the speech recognition processing; ¶65, i.e., when the acquired speech includes a command, such as "Search", the speech recognition apparatus 100 causes the TV 10 to perform the processing according to this command).

Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of wherein the controller is configured to, based on the user voice input being received from the remote control through the communicator not the voice input receiver after an input button of the remote control is pressed, execute the voice recognition mode for performing the voice recognition function even if the user voice input received from the remote control does not correspond to the trigger word from Koganei into the device as disclosed by Chadha. The motivation for doing this is to improve speech recognition to enhance user interaction.

Regarding claim 28, Chadha discloses the electronic device of claim 27, wherein the trigger word is a predetermined set by a user (Chadha ¶32, according to some embodiments, a user device may be assigned and/or otherwise associated with a particular activation identifier. The device may, for example, be given a name such as "Bob" or "Sue" and/or other assigned word identifiers such as "Alpha" or "Green").

Regarding claim 29, Chadha discloses the electronic device of claim 25, but fails to teach, where Koganei teaches, an image receiver (Koganei Fig. 1, TV 10); and wherein the user voice input is received through the voice input receiver while broadcast content received through the image receiver is being displayed (Koganei ¶57, i.e., "Hi, TV" is an example of the start command and the start command may be different words; the speech recognition apparatus 100 starts the speech recognition processing; it can be seen in Fig. 1 that “Hi, TV” is uttered while broadcast content received through the image receiver is being displayed).

Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of an image receiver; and wherein the user voice input is received through the voice input receiver while broadcast content received through the image receiver is being displayed from Koganei into the device as disclosed by Chadha. The motivation for doing this is to improve speech recognition to enhance user interaction.

Regarding claim 30, the combination of Chadha and Koganei discloses the electronic device of claim 29, further comprising a display, and wherein the controller is configured to control the display to display a user interface corresponding to the user voice input received through the voice input receiver on a portion of the display while the broadcast content is continuously displayed on the display (Koganei ¶60, i.e., when the speech recognition apparatus 100 starts the speech recognition processing as described above, the display control unit 107 causes the display unit 140 to display a speech recognition icon 201 indicating that the speech recognition has been started; this is displayed while broadcast content is continuously displayed, see Fig. 1).

Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of a display, and wherein the controller is configured to control the display to display a user interface corresponding to the received user voice input on a portion of the display while the broadcast content is continuously displayed on the display from Koganei into the device as disclosed by Chadha. The motivation for doing this is to improve speech recognition to enhance user interaction.

Regarding claims 31-39 (drawn to a method): The rejection/proposed combination of Chadha, explained in the rejection of device claims 21-23 and 25-30, anticipates/renders obvious the steps of the method of claims 31-39 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claims 21-23 and 25-30 are equally applicable to claims 31-39.

Regarding claim 40 (drawn to a CRM): The rejection/proposed combination of Chadha, explained in the rejection of claim 21, anticipates/renders obvious the steps of the computer readable medium of claim 40 because these steps occur in the operation of the proposed combination as discussed above. Thus, arguments similar to those presented above for claim 21 are equally applicable to claim 40. See Chadha ¶22-23.

Response to Arguments

Applicant's arguments filed 7/8/2025 have been fully considered but they are not persuasive.

The applicant first argues that the claims are eligible under 35 U.S.C. § 101. Regarding this argument, the examiner respectfully disagrees. The rejection under 35 U.S.C. § 101 of amended claim 21 is properly maintained because the claim remains directed to an abstract idea without reciting significantly more to transform the nature of the claim into patent-eligible subject matter. Under Step 2A, Prong One of the Alice/Mayo test, the claim is still directed to certain methods of organizing human activity, namely receiving, identifying, and responding to voice commands.
These all fall within the USPTO’s categories of mental processes and methods of organizing human behavior. Although the applicant argues that the operations “cannot be performed by a human,” the core functionality of recognizing voice input and triggering a response based on stored information is fundamentally a cognitive process that can be performed mentally, and as such falls within the abstract idea grouping.

Under Step 2A, Prong Two, the applicant argues that the claim integrates the judicial exception into a practical application. However, the alleged improvement of changing the mode of an electronic device based on voice input, including input received from an external device, merely amounts to a generic implementation of voice recognition technology using conventional hardware components such as a voice input receiver, a communicator, a storage, and a controller. These elements perform their basic, expected functions and are not claimed in a manner that improves the functioning of the electronic device itself or transforms it into something more than a tool for executing the abstract idea. Citing to the specification does not demonstrate that the claimed invention improves the functioning of the computer or another technology, as required under USPTO guidance.

Furthermore, under Step 2B, the claim does not include an inventive concept sufficient to transform the abstract idea into patent-eligible subject matter. The claim merely recites the use of generic components configured to carry out routine operations, such as voice input reception, matching to stored information, and execution of a command, without any technical innovation or non-conventional arrangement. The ordered combination of elements does not reflect any unconventional or non-routine activity beyond what is standard in voice recognition systems. See the above rejection under 35 U.S.C. § 101 for further details.

The applicant further argues that Chadha does not teach “based on the user voice input being received from an external device external to the electronic device through the communicator after a button of the external device is pressed identify whether the user voice input corresponds to the voice information stored in the storage” and “based on the user voice input corresponding to the voice information stored in the storage, change the mode of the electronic device to the voice recognition mode and perform a function corresponding to the user voice input received from the external device without identifying whether the user voice input received from the external device corresponds to the trigger word”. The applicant also argues that Koganei does not make up for the elements of the claims that are missing from Chadha. Regarding the above argument, the examiner disagrees.

Chadha teaches “based on the user voice input corresponding to the voice information stored in the storage, change the mode of the electronic device to the voice recognition mode and perform a function corresponding to the user voice input received from the external device without identifying whether the user voice input received from the external device corresponds to the trigger word” in ¶38: the laptop 410a may recognize the voice of the first user 402 and may, for example, accept and/or process the first voice command 442. In this paragraph, the function corresponding to the user voice input is performed without identifying whether the user voice input received from the external device corresponds to a trigger word.
For example, "Save Sue's e-mail address” is performed without needing to determine a trigger word. Koganei teaches “based on the user voice input being received from an external device external to the electronic device through the communicator after a button of the external device is pressed identify whether the user voice input corresponds to the voice information stored in the storage” in 42 The command recognition unit 102 analyzes the speech acquired by the speech acquisition unit 101 and identifies a preset command. To be more specific, the command recognition unit 102 references the speech-command information previously stored in the storage unit 170, to identify the command included in the speech acquired by the speech acquisition unit 101. In the speech-command information, speech is associated with a command representing command information to be given to the TV 10; ¶43 The recognition result acquisition unit 103 acquires a recognition result that is obtained when the speech acquired by the speech acquisition unit 101 is recognized by the command recognition unit 102 or the keyword recognition unit 50; ¶44 Here, the keyword recognition unit 50 acquires the part other than the command included in the speech acquired by the speech acquisition unit 101. The keyword recognition unit 50 recognizes, as a keyword, the part of the speech other than the command, and converts this part of the speech into a corresponding character string (this conversion is referred to as "dictation" hereafter. That is, a preset command is included in the speech, and once this is determined to be the case, voice recognition mode starts. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEVIN KY whose telephone number is (571)272-7648. The examiner can normally be reached Monday-Friday 9-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEVIN KY/
Primary Examiner, Art Unit 2671
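To make the disputed claim language easier to follow, here is a minimal control-flow sketch of claim 21 as the rejection characterizes it: a trigger-word check gates the built-in microphone path, while input arriving from an external device after a button press skips that check. All names (VoiceInput, matches_stored_voice_info, handle) are hypothetical reading aids, not the applicant's implementation or either reference's code.

```python
# Illustrative sketch of claim 21's control flow as characterized in the
# rejection above. All identifiers are hypothetical; this is a reading aid,
# not the applicant's implementation.
from dataclasses import dataclass

@dataclass
class VoiceInput:
    text: str
    from_external_device: bool   # e.g., a remote control with a microphone
    button_was_pressed: bool     # mic button pressed before speaking

def matches_stored_voice_info(v: VoiceInput, stored: set[str]) -> bool:
    # Stand-in for comparing the input against voice information in storage.
    return v.text in stored

def handle(v: VoiceInput, stored: set[str], trigger_word: str) -> str:
    if v.from_external_device and v.button_was_pressed:
        # Button-press path: check stored voice information but skip the
        # trigger-word check entirely (the limitation argued over above).
        if matches_stored_voice_info(v, stored):
            return "enter voice recognition mode; perform function"
        return "ignore input"
    # Built-in receiver path: require a stored-info match and the trigger word.
    if matches_stored_voice_info(v, stored) and v.text == trigger_word:
        return "enter voice recognition mode"
    return "ignore input"

# Example: remote-control input after a button press bypasses the trigger word.
print(handle(VoiceInput("search", True, True), {"search", "hi tv"}, "hi tv"))
```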

Prosecution Timeline

Apr 12, 2024
Application Filed
Apr 29, 2024
Response after Non-Final Action
Nov 15, 2024
Non-Final Rejection — §101, §103, §112
Jan 22, 2025
Examiner Interview Summary
Jan 22, 2025
Applicant Interview (Telephonic)
Feb 20, 2025
Response Filed
Apr 04, 2025
Final Rejection — §101, §103, §112
Jul 08, 2025
Request for Continued Examination
Jul 09, 2025
Response after Non-Final Action
Jul 12, 2025
Non-Final Rejection — §101, §103, §112
Oct 15, 2025
Response Filed
Feb 13, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597158
POSE ESTIMATION
2y 5m to grant • Granted Apr 07, 2026
Patent 12597291
IMAGE ANALYSIS FOR PERSONAL INTERACTION
2y 5m to grant • Granted Apr 07, 2026
Patent 12586393
KNOWLEDGE-DRIVEN SCENE PRIORS FOR SEMANTIC AUDIO-VISUAL EMBODIED NAVIGATION
2y 5m to grant • Granted Mar 24, 2026
Patent 12586559
METHOD AND APPARATUS FOR GENERATING SPEECH OUTPUTS IN A VEHICLE
2y 5m to grant • Granted Mar 24, 2026
Patent 12579382
NATURAL LANGUAGE GENERATION USING KNOWLEDGE GRAPH INCORPORATING TEXTUAL SUMMARIES
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+25.3%)
2y 6m
Median Time to Grant
High
PTA Risk
Based on 549 resolved cases by this examiner. Grant probability derived from career allow rate.
