Prosecution Insights
Last updated: April 19, 2026
Application No. 18/477,271

METHOD AND TERMINAL FOR AUDIBLY BROADCASTING AND INPUTTING CONTENT

Status: Non-Final OA (§103)
Filed: Sep 28, 2023
Examiner: FIBBI, CHRISTOPHER J
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Technologies Co., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 53% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 3m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 53% (199 granted / 376 resolved; -2.1% vs TC avg)
Interview Lift: +37.6% (strong lift among resolved cases with interview)
Typical Timeline: 4y 3m average prosecution
Currently Pending: 40 applications
Career History: 416 total applications across all art units

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 62.9% (+22.9% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 376 resolved cases
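The four per-statute deltas above are all measured against the same Tech Center average estimate. A quick sketch (values copied from this section) shows they are mutually consistent, each implying an average near 40%:

```python
# Sanity-check the per-statute figures above: each delta is the
# examiner's rate minus the Tech Center average estimate, so
# rate - delta should recover the same TC average every time.
rates  = {"101": 9.8, "103": 62.9, "102": 10.7, "112": 10.2}
deltas = {"101": -30.2, "103": 22.9, "102": -29.3, "112": -29.8}

implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
# Every statute implies the same ~40.0% Tech Center average estimate.
```

This is only a consistency check on the displayed numbers, not a recomputation of the underlying dataset.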

Office Action

§103
DETAILED ACTION

This action is in response to the RCE dated 26 February 2026, which incorporates the Amendment dated 15 January 2026. Claims 1, 5, 9, 14 and 19 are amended. No claims have been added or cancelled. Claims 1-20 remain pending and have been considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-12 are rejected under 35 U.S.C. 103 as being unpatentable over Life After Sight Loss, “VoiceOver 101: 3 Different Typing Modes”, dated 19 April 2018, <URL: https://www.youtube.com/watch?v=WRs4SfBX_9U>, in view of Mckiel (US 2014/0282002 A1).

As for independent claim 1, Life After Sight Loss teaches a method comprising: in response to a first operation on a first character, broadcasting, by a terminal, the first character, but skipping inputting the first character [(e.g. see Life After Sight Loss @02:15-02:23) “the user drags their finger over the keys of the soft keyboard and the VoiceOver announces which letter is currently selected underneath of their finger. In this scenario, the user navigates and applies touch input to the ‘B’ key and the VoiceOver announces capital B / Bravo. The keyboard does not enter the character ‘B’ into the text entry field of the Notes application during the announcement of the selected letter”];

displaying, by the terminal, input content comprising text information comprising the first character in a display area on a display using a virtual input program [(e.g. see Life After Sight Loss @02:02-03:16) “The user applying input to the virtual/soft keyboard displayed on the device to select certain character keys … the virtual/soft keyboard displaying every available keyboard character as text”];

broadcasting, by the terminal, the first voice using the [screen reading] software, but skipping inputting the first character [(e.g. see Life After Sight Loss @02:15-02:23) “the user drags their finger over the keys of the soft keyboard and the VoiceOver announces which letter is currently selected underneath of their finger. In this scenario, the user navigates and applies touch input to the ‘B’ key and the VoiceOver technology announces capital B / Bravo. The keyboard does not enter the character ‘B’ into the text entry field of the Notes application during the announcement of the selected letter”]. Examiner notes that, while Life After Sight Loss teaches the announcement of the selected letter using VoiceOver technology, it does not explicitly state “screen reader.” However, see secondary reference Mckiel below for discussion of “screen reading” software;

and in response to a second operation of the first character, inputting, by the terminal, the first character, but skipping broadcasting the first character [by the screen reading software], wherein the second operation is a preset operation corresponding to inputting and skipping broadcasting [(e.g. see Life After Sight Loss @02:23-02:28) “While the current letter is selected, as shown by the white highlighting box around the letter ‘B’ for example, the user may perform a double tap on the ‘B’ key to confirm that is the correct letter and have it typed into the text field. The VoiceOver does not repeat the announcement of the letter when the character is inserted into the text field of the Notes application”]. Examiner notes that, while Life After Sight Loss teaches VoiceOver accessibility technology, it does not explicitly state “screen reader.” However, see secondary reference Mckiel below for discussion of “screen reading” software.

Life After Sight Loss does not specifically teach extracting, by the terminal, the first character of the text information from the display area using screen reading software; converting, by the terminal, the first character into a first voice using the screen reading software; or by the screen reading software. However, in the same field of invention, Mckiel teaches:

extracting, by the terminal, the first character of the text information from the display area using screen reading software [(e.g. see Mckiel paragraphs 0056, 0076, 0077) “One application of particular note is the audible accessibility function 514, an example of which is the well-known accessibility feature called `VoiceOver` used in the Apple iPhone. As will be described in detail later herein, this component is aimed at providing blind or low-vision users with audible readout describing elements that are on the display screen of the host device and allows users to locate and interact with some of these display elements … `page_reading_in_progress` indicator 823, which serves to indicate whether or not the VoiceOver process is currently in a mode of continually extracting textual information from content that is currently displayed on the screen and streaming this information through a speech synthesis process 830 to read the contents of the screen to the user … VoiceOver process 514 is shown to comprise or work in conjunction with a data retrieval service 828 which serves to extract and assemble items of descriptive text 812 and user element type 814 from one or more user interface elements 810 that may be on the screen at any moment. Data retrieval service 828 may also check data as indicated in accessibility indicator 811 to control whether any such descriptive text 812 is subsequently retrieved and assembled for output to user”];

converting, by the terminal, the first character into a first voice using the screen reading software [(e.g. see Mckiel paragraph 0078) “VoiceOver process 514 is also shown to comprise or work in conjunction with a text and sound coordinator service 829. This service determines which items of descriptive text 812, as retrieved from using data retrieval function 821, are to be output to a user through the audio output. This service also controls the sequencing of textual output in coordination with any applicable sound effects. Text and sound coordinator service 829 can buffer textual output and pass this text string to a speech synthesis subsystem 830 so that the text can be rendered as playable audio data. Text and sound coordinator service 829 also controls the sequencing and timing between textual readout and sound effects”];

by the screen reading software [(e.g. see Mckiel paragraphs 0056, 0076) “One application of particular note is the audible accessibility function 514, an example of which is the well-known accessibility feature called `VoiceOver` used in the Apple iPhone. As will be described in detail later herein, this component is aimed at providing blind or low-vision users with audible readout describing elements that are on the display screen of the host device and allows users to locate and interact with some of these display elements … a speech synthesis process 830 to read the contents of the screen to the user”].

Therefore, considering the teachings of Life After Sight Loss and Mckiel, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add extracting, by the terminal, the first character of the text information from the display area using screen reading software; converting, by the terminal, the first character into a first voice using the screen reading software; and by the screen reading software, as taught by Mckiel, to the teachings of Life After Sight Loss, because adding descriptive textual labels to displayed interface control elements that can be announced by speech synthesis allows a user, without seeing the display, to probe the display and elicit audible responses until finding a desired function or control or content (e.g. see Mckiel paragraph 0003).

As for dependent claim 2, Life After Sight Loss and Mckiel teach the method as described in claim 1, and Life After Sight Loss further teaches: wherein the virtual input program is a soft keyboard [(e.g. see Life After Sight Loss @02:02-03:16) “The user applying input to the virtual/soft keyboard displayed on the device to select certain character keys”]; wherein the broadcasting the first character, but skipping inputting the first character comprises receiving, by the terminal, the first character by using the soft keyboard [(e.g. see Life After Sight Loss @02:15-02:23) “the user drags their finger over the keys of the soft keyboard and the VoiceOver announces which letter is currently selected underneath of their finger. In this scenario, the user navigates and applies touch input to the ‘B’ key and the VoiceOver technology announces capital B / Bravo. The keyboard does not enter the character ‘B’ into the text entry field of the Notes application during the announcement of the selected letter”].

As for dependent claim 3, Life After Sight Loss and Mckiel teach the method as described in claim 1, and Life After Sight Loss further teaches: wherein the first operation comprises a linear touch [(e.g. see Life After Sight Loss @02:18-02:23, 03:10-03:12) “The user navigating the keyboard to find a particular key they want by linearly dragging their finger between the keys of the keyboard”].

As for dependent claim 4, Life After Sight Loss and Mckiel teach the method as described in claim 1, and Life After Sight Loss further teaches: wherein the second operation comprises lifting a hand to cancel a touch [(e.g. see Life After Sight Loss @03:10-03:16) “While in the VoiceOver touch typing mode, the user performs a lift-off to confirm the selected key, which ends the typing gesture and places the character into the text entry field of the Notes application”].

As for independent claim 5, Life After Sight Loss and Mckiel teach a terminal. Claim 5 discloses substantially the same limitations as claim 1. Therefore, it is rejected with the same rationale as claim 1.

As for dependent claim 6, Life After Sight Loss and Mckiel teach the terminal as described in claim 5; further, claim 6 discloses substantially the same limitations as claim 2. Therefore, it is rejected with the same rationale as claim 2.

As for dependent claim 7, Life After Sight Loss and Mckiel teach the terminal as described in claim 5; further, claim 7 discloses substantially the same limitations as claim 3. Therefore, it is rejected with the same rationale as claim 3.

As for dependent claim 8, Life After Sight Loss and Mckiel teach the terminal as described in claim 5; further, claim 8 discloses substantially the same limitations as claim 4. Therefore, it is rejected with the same rationale as claim 4.

As for independent claim 9, Life After Sight Loss and Mckiel teach a non-transitory computer readable medium. Claim 9 discloses substantially the same limitations as claim 1. Therefore, it is rejected with the same rationale as claim 1.

As for dependent claim 10, Life After Sight Loss and Mckiel teach the medium as described in claim 9; further, claim 10 discloses substantially the same limitations as claim 2. Therefore, it is rejected with the same rationale as claim 2.

As for dependent claim 11, Life After Sight Loss and Mckiel teach the medium as described in claim 9; further, claim 11 discloses substantially the same limitations as claim 3. Therefore, it is rejected with the same rationale as claim 3.

As for dependent claim 12, Life After Sight Loss and Mckiel teach the medium as described in claim 9; further, claim 12 discloses substantially the same limitations as claim 4. Therefore, it is rejected with the same rationale as claim 4.

Claims 13-15 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Life After Sight Loss, “VoiceOver 101: 3 Different Typing Modes”, dated 19 April 2018, <URL: https://www.youtube.com/watch?v=WRs4SfBX_9U>, in view of Mckiel (US 2014/0282002 A1), as applied to claim 1 above, and further in view of Tech Review Mania, “iPhone 6S / Plus: How to Get Out of ‘Voice Over Mode’ Step by Step”, dated 03 November 2015, <URL: https://www.youtube.com/watch?v=SMDtDlAFenU>.

As for dependent claim 13, Life After Sight Loss and Mckiel teach the method as described in claim 1, but do not specifically teach wherein the first character is a character of a lock screen password, the broadcasting the first character, but skipping inputting the first character comprising broadcasting the character of the lock screen password, but skipping inputting the character of the lock screen password. However, in the same field of invention, Tech Review Mania teaches: wherein the first character is a character of a lock screen password, the broadcasting the first character, but skipping inputting the first character comprising broadcasting the character of the lock screen password, but skipping inputting the character of the lock screen password [(e.g. see Tech Review Mania @00:00-01:30) “showing that while in the VoiceOver mode and accessing the lock screen requiring entry of the user’s passcode, when the user performs a single tap on a particular number of the keypad, the VoiceOver announces the single tapped number (e.g. 8 / ‘eight’). The announced number is not entered into the passcode unless the user proceeds to double tap to confirm the announced single tapped number”].

Therefore, considering the teachings of Life After Sight Loss, Mckiel and Tech Review Mania, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add wherein the first character is a character of a lock screen password, the broadcasting the first character, but skipping inputting the first character comprising broadcasting the character of the lock screen password, but skipping inputting the character of the lock screen password, as taught by Tech Review Mania, to the teachings of Life After Sight Loss and Mckiel, because it assists users with vision impairment to gain access to their device (e.g. see Tech Review Mania @00:00-01:30).

As for dependent claim 14, Life After Sight Loss, Mckiel and Tech Review Mania teach the method as described in claim 13, but Life After Sight Loss does not specifically teach the following limitations. However, Mckiel teaches:

extracting, by the terminal, the character of the lock screen password from the display area using the screen reading software [(e.g. see Mckiel paragraphs 0056, 0076, 0077) “One application of particular note is the audible accessibility function 514, an example of which is the well-known accessibility feature called `VoiceOver` used in the Apple iPhone. As will be described in detail later herein, this component is aimed at providing blind or low-vision users with audible readout describing elements that are on the display screen of the host device and allows users to locate and interact with some of these display elements … `page_reading_in_progress` indicator 823, which serves to indicate whether or not the VoiceOver process is currently in a mode of continually extracting textual information from content that is currently displayed on the screen and streaming this information through a speech synthesis process 830 to read the contents of the screen to the user … VoiceOver process 514 is shown to comprise or work in conjunction with a data retrieval service 828 which serves to extract and assemble items of descriptive text 812 and user element type 814 from one or more user interface elements 810 that may be on the screen at any moment. Data retrieval service 828 may also check data as indicated in accessibility indicator 811 to control whether any such descriptive text 812 is subsequently retrieved and assembled for output to user”]. Please see Tech Review Mania below for discussion of the lock screen password.

converting, by the terminal, the character of the lock screen password into the first voice using the screen reading software [(e.g. see Mckiel paragraph 0078) “VoiceOver process 514 is also shown to comprise or work in conjunction with a text and sound coordinator service 829. This service determines which items of descriptive text 812, as retrieved from using data retrieval function 821, are to be output to a user through the audio output. This service also controls the sequencing of textual output in coordination with any applicable sound effects. Text and sound coordinator service 829 can buffer textual output and pass this text string to a speech synthesis subsystem 830 so that the text can be rendered as playable audio data. Text and sound coordinator service 829 also controls the sequencing and timing between textual readout and sound effects”]. Please see Tech Review Mania below for discussion of the lock screen password.

Life After Sight Loss and Mckiel do not specifically teach the following limitations. However, Tech Review Mania teaches: wherein the broadcasting the character of the lock screen password, but skipping inputting the character of the lock screen password comprises: [(e.g. see Tech Review Mania @00:00-01:30) “showing that while in the VoiceOver mode and accessing the lock screen requiring entry of the user’s passcode, when the user performs a single tap on a particular number of the keypad, the VoiceOver announces the single tapped number (e.g. 8 / ‘eight’). The announced number is not entered into the passcode unless the user proceeds to double tap to confirm the announced single tapped number”]; while the display is locked by the lock screen [(e.g. see Tech Review Mania @00:50-01:30) “showing VoiceOver announcing the single tapped numbers while on the lock screen on the display”]; broadcasting, by the terminal, the first voice by using the [screen reading] software, but skipping inputting the first character [(e.g. see Tech Review Mania @00:00-01:30) “showing that while in the VoiceOver mode and accessing the lock screen requiring entry of the user’s passcode, when the user performs a single tap on a particular number of the keypad, the VoiceOver announces the single tapped number (e.g. 8 / ‘eight’). The announced number is not entered into the passcode unless the user proceeds to double tap to confirm the announced single tapped number”]. Examiner notes that, while Tech Review Mania teaches the announcement of the selected number using VoiceOver technology, it does not explicitly state “screen reader.” However, see secondary reference Mckiel above for discussion of “screen reading” software. The motivation to combine is the same as that used for claim 13.

As for dependent claim 15, Life After Sight Loss, Mckiel and Tech Review Mania teach the method as described in claim 14, and Life After Sight Loss further teaches: wherein the first operation comprises a linear touch [(e.g. see Life After Sight Loss @02:18-02:23, 03:10-03:12) “The user navigating the keyboard to find a particular key they want by linearly dragging their finger between the keys of the keyboard”]; and wherein the second operation comprises lifting a hand to cancel the linear touch [(e.g. see Life After Sight Loss @03:10-03:16) “While in the VoiceOver touch typing mode, the user performs a lift-off to confirm the selected key, which ends the typing gesture and places the character into the text entry field of the Notes application”].

As for dependent claim 18, Life After Sight Loss and Mckiel teach the terminal as described in claim 5; further, claim 18 discloses substantially the same limitations as claim 13. Therefore, it is rejected with the same rationale as claim 13.

As for dependent claim 19, Life After Sight Loss, Mckiel and Tech Review Mania teach the terminal as described in claim 18; further, claim 19 discloses substantially the same limitations as claim 14. Therefore, it is rejected with the same rationale as claim 14.

As for dependent claim 20, Life After Sight Loss and Mckiel teach the medium as described in claim 9; further, claim 20 discloses substantially the same limitations as claim 13. Therefore, it is rejected with the same rationale as claim 13.

Claims 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Life After Sight Loss, “VoiceOver 101: 3 Different Typing Modes”, dated 19 April 2018, <URL: https://www.youtube.com/watch?v=WRs4SfBX_9U>, in view of Mckiel (US 2014/0282002 A1), as applied to claim 1 above, and further in view of Voorhees et al. (US 2014/0058733 A1).

As for dependent claim 16, Life After Sight Loss and Mckiel teach the method as described in claim 1, but do not specifically teach wherein the first operation comprises long pressing a mouse or wherein the second operation comprises releasing the mouse. However, in the same field of invention, Voorhees teaches:

wherein the first operation comprises long pressing a mouse [(e.g. see Voorhees paragraphs 0030, 0034, 0036, 0037) “In FIG. 2, mouse indicia 200 hovers over copy text button 115 putting it in focus 210 as visually apparent from the change in the border of copy text button 115. A low vision user hovering over copy text button 210 with mouse indicia 200 will likely already know which control they indicated to activate … For the situation in FIG. 2, the verbosity may be automatically set low by the screen reader software so that only "copy" is output to speech … It should be noted that alternative mouse and keyboard events are anticipated and included within this invention such as scrolling a mouse wheel and other input events enacted by the end user that set focus on a control … a Win32 API function called SetWindowsHookEx 705 allows installing a system-wide keyboard and/or mouse hook. Using these hooks it's possible to detect whether a keyboard or mouse event was the most recent input and act accordingly … left mouse button down”].

Voorhees teaches that a particular mouse event can trigger active focus and initiate screen reading. Given that there are a finite number of mouse events, it would have been obvious to one of ordinary skill in the art, namely a software/hardware developer, to try any available mouse event that would achieve a desired user input, with reasonable success (i.e. different mouse events cause different actions), as hooking into different mouse callback functions is shown in Voorhees. A person of ordinary skill has good reason to pursue the known options within his or her technical grasp. If this leads to the anticipated success, it is likely the product not of invention but of ordinary skill and common sense. See MPEP 2143(I)(E) – KSR: “Obvious to Try”.

wherein the second operation comprises releasing the mouse [(e.g. see Voorhees paragraph 0053) “Focus is the state of a control in a graphic user interface indicating it is targeted for end user interaction. This may be achieved by keystrokes and/or mouse navigation over the location of the control. In most environments, setting focus visually changes the appearance of the control by creating a slight variation in background of the control and/or its border. Non-visual feedback may be tactile and audio-based. Once a control is in focus, further user interaction by mouse or keyboard manipulates the control. For example, a button in focus may be activated by depressing the enter key or by a left mouse down click”].

Therefore, considering the teachings of Life After Sight Loss, Mckiel and Voorhees, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add wherein the first operation comprises long pressing a mouse and wherein the second operation comprises releasing the mouse, as taught by Voorhees, to the teachings of Life After Sight Loss and Mckiel, because it allows a screen reader to automatically adjust the verbosity of the speech output to better convey an optimal amount of information to the end user (e.g. see Voorhees paragraph 0009).

As for dependent claim 17, Life After Sight Loss, Mckiel and Voorhees teach the method as described in claim 16, but Life After Sight Loss and Mckiel do not specifically teach the following limitations. However, Voorhees teaches:

wherein the first operation comprises a first plurality of manners for locking the first character comprising long pressing the mouse and a linear touch [(e.g. see Voorhees paragraphs 0030, 0034, 0036, 0037) “In FIG. 2, mouse indicia 200 hovers over copy text button 115 putting it in focus 210 as visually apparent from the change in the border of copy text button 115. A low vision user hovering over copy text button 210 with mouse indicia 200 will likely already know which control they indicated to activate … For the situation in FIG. 2, the verbosity may be automatically set low by the screen reader software so that only "copy" is output to speech … It should be noted that alternative mouse and keyboard events are anticipated and included within this invention such as scrolling a mouse wheel and other input events enacted by the end user that set focus on a control … a Win32 API function called SetWindowsHookEx 705 allows installing a system-wide keyboard and/or mouse hook. Using these hooks it's possible to detect whether a keyboard or mouse event was the most recent input and act accordingly … left mouse button down … The origin of focus 610 is determined as either … a mouse over 620 or a keystroke combination 625”]. Examiner notes that Life After Sight Loss also teaches linear touch (see claim 3 above).

wherein the second operation comprises a second plurality of manners for unlocking the first character comprising releasing the mouse and lifting a hand to cancel the linear touch [(e.g. see Voorhees paragraph 0053) “Focus is the state of a control in a graphic user interface indicating it is targeted for end user interaction. This may be achieved by keystrokes and/or mouse navigation over the location of the control. In most environments, setting focus visually changes the appearance of the control by creating a slight variation in background of the control and/or its border. Non-visual feedback may be tactile and audio-based. Once a control is in focus, further user interaction by mouse or keyboard manipulates the control. For example, a button in focus may be activated by depressing the enter key or by a left mouse down click”]. Examiner notes that Life After Sight Loss also teaches cancelling linear touch (see claim 4 above). The motivation to combine is the same as that used for claim 16.

Response to Arguments

Applicant's arguments, filed 26 February 2026, have been fully considered but they are not persuasive. Applicant argues that “Mark does not teach screen reading software that extracts a character in text information from input content displayed in a display area” (page 9). The argument described above, in paragraph number 7, with respect to the newly added limitations to the independent claims has been considered, but is moot in view of the new grounds of rejection.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: U.S. PGPub 2004/0145607 A1 to Alderson, published 29 July 2004. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g. a screen reader extracting text from a user interface control).

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER J FIBBI, whose telephone number is (571) 270-3358. The examiner can normally be reached Monday - Thursday, 8am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER J FIBBI/
Primary Examiner, Art Unit 2174
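The interaction at the center of the §103 rejection (announce a character on a first operation without entering it, then enter it on a distinct second operation without re-announcing it) can be sketched as a tiny state machine. This is an illustrative model only, not Apple's VoiceOver or the applicant's implementation; the `speak` and `commit` callbacks and the event names are hypothetical.

```python
class AnnounceThenCommitKeyboard:
    """Sketch of the two-operation entry model discussed above:
    a first operation (touching/dragging onto a key) announces the
    character but does not input it; a second operation (double-tap
    or lift-off, depending on typing mode) inputs the character
    without announcing it again."""

    def __init__(self, speak, commit):
        self.speak = speak      # e.g. hand text to a screen reader
        self.commit = commit    # e.g. insert text into the text field
        self.selected = None    # character currently under the finger

    def on_touch(self, char):
        # First operation: announce only, skip inputting.
        self.selected = char
        self.speak(char)

    def on_confirm(self):
        # Second operation: input only, skip announcing.
        if self.selected is not None:
            self.commit(self.selected)
            self.selected = None

spoken, typed = [], []
kb = AnnounceThenCommitKeyboard(spoken.append, typed.append)
kb.on_touch("B")    # announced once, nothing typed yet
kb.on_confirm()     # double-tap/lift-off: 'B' entered, no re-announcement
```

The split between `on_touch` and `on_confirm` mirrors the examiner's mapping: the announcement-only path to Life After Sight Loss @02:15-02:23 and the silent-commit path to @02:23-02:28.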

Prosecution Timeline

Sep 28, 2023: Application Filed
Nov 08, 2023: Response after Non-Final Action
Apr 08, 2025: Non-Final Rejection (§103)
Jul 09, 2025: Response Filed
Oct 24, 2025: Final Rejection (§103)
Jan 15, 2026: Response after Non-Final Action
Feb 26, 2026: Request for Continued Examination
Mar 09, 2026: Response after Non-Final Action
Mar 10, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585866: AUTOMATED ENTRY OF EXTRACTED DATA AND VERIFICATION OF ACCURACY OF ENTERED DATA THROUGH A GRAPHICAL USER INTERFACE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12561152: METHODS AND SYSTEMS FOR ADAPTIVE CONFIGURATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12535930: INTEROPERABILITY FOR TRANSLATING AND TRAVERSING 3D EXPERIENCES IN AN ACCESSIBILITY ENVIRONMENT (granted Jan 27, 2026; 2y 5m to grant)
Patent 12535941: USER INTERFACE FOR MANAGING INPUT TECHNIQUES (granted Jan 27, 2026; 2y 5m to grant)
Patent 12519999: Location Based Playback System Control (granted Jan 06, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 53%
With Interview: 90% (+37.6%)
Median Time to Grant: 4y 3m
PTA Risk: High
Based on 376 resolved cases by this examiner. Grant probability derived from career allow rate.
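The "With Interview" projection follows from adding the interview lift to the base allow rate. A quick check, assuming the displayed 90% is simply the rounded sum (all figures taken from this page):

```python
# Figures from this page: 199 grants out of 376 resolved cases,
# plus a +37.6 percentage-point lift when an interview is held.
granted, resolved = 199, 376
interview_lift = 37.6

base = 100 * granted / resolved         # career allow rate, about 52.9%
with_interview = base + interview_lift  # about 90.5%, shown as 90% above
```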
