Prosecution Insights
Last updated: April 19, 2026
Application No. 18/660,516

REAL-TIME TELEPROMPTER FOR A VIRTUAL MEETING

Non-Final OA (§102, §103)
Filed: May 10, 2024
Examiner: HAILU, TADESSE
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability with Interview: 82%

Examiner Intelligence

Career Allow Rate: 78% — above average (747 granted / 960 resolved; +22.8% vs TC avg)
Interview Lift: +4.5% in resolved cases with interview (minimal lift)
Avg Prosecution: 3y 4m typical timeline; 29 currently pending
Total Applications: 989 across all art units

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§103: 38.1% (-1.9% vs TC avg)
§102: 41.1% (+1.1% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 960 resolved cases
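Each "vs TC avg" delta appears to be measured against a single Tech Center baseline rather than per-statute figures; a quick sanity check (a sketch, assuming delta = allowance rate minus baseline) recovers the same 40.0% estimate from every row:

```python
# Statute-specific allowance rates and their "vs TC avg" deltas, as listed above.
rates = {"101": 5.8, "103": 38.1, "102": 41.1, "112": 9.0}
deltas = {"101": -34.2, "103": -1.9, "102": 1.1, "112": -31.0}

# If delta = rate - baseline, then the implied baseline is rate - delta.
baselines = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(baselines)  # every statute implies the same 40.0% Tech Center baseline
```

This is only a consistency check on the page's own numbers, not the tool's documented methodology.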

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

1. Claims 1-20 are pending. All the pending claims are examined herein.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

2. Claims 1-6, 8-14, and 16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by CHANDRAN et al. (US 20230127120 A1). CHANDRAN et al. (“CHANDRAN”), as in the current invention, is directed to a MACHINE LEARNING DRIVEN TELEPROMPTER.

As per claim 9, CHANDRAN discloses a system (see Fig. 9), comprising: a memory; and a processing device, coupled to the memory, configured to perform operations (see Fig. 9, a data processing system comprising: a processor; and a computer-readable medium storing executable instructions that, when executed, cause the processor to perform operations; also see claim 1), comprising: causing, during a virtual meeting between a plurality of participants, a first virtual meeting user interface (UI) to be displayed to a first participant of the plurality of participants ([0046] Examples of the presentation interface are shown in FIGS. 3A-3I, which are described in detail in the examples which follow.
The presentation interface may present the prediction to the presenter in response to the user accessing and/or editing the teleprompter transcript) wherein: the first participant is a current presenter ([0049]The presentation content pane 310 may also include an inset pane (shown in the bottom right corner of the presentation content pane 310) that provides a view of the presenter. The inset pane may show video of the presenter captured by the client device 105 of the presenter as they present the presentation, also see the presenter image displayed in Fig. 3A-3I), a first audio stream produced by a client device of the first participant pertains to a presentation of the first participant ([0058] Audio and/or video content of the presenter may be captured by the client device 105 of the presenter and streamed to the presentation and communications platform 110, and the presentation and communications platform 110 may then transmit one or more media streams including the presentation content and the audiovisual content of the presenter to the respective client devices 105 of the remote participants), and the first virtual meeting UI comprises a first region displaying teleprompter notes for the presentation of the first participant ([0050] The presentation application may also present the teleprompter transcript 315 on the user interface. In the example layout shown in FIG. 3A, the teleprompter transcript 315 is shown in a first position located at the bottom of the user interface. The presentation application user interface may provide tools that permit the contents of the teleprompter script 315 and/or the presentation content shown in the presentation content pane 310 to be edited). 
identifying, using an artificial intelligence (AI) model and using the first audio stream as input to the AI model, a first portion of the teleprompter notes that corresponds to a first presentation segment currently covered by the first participant (techniques performed by a data processing system for a machine learning driven teleprompter include displaying a teleprompter transcript associated with a presentation on a display of a computing device associated with a presenter; receiving audio content of the presentation including speech of the presenter in which the presenter is reading the teleprompter transcript; analyzing the audio content of the presentation using a first machine learning model to obtain a real-time textual translation of the audio content, the first machine learning model being a natural language processing model trained to receive audio content including speech and to translate the audio content into a textual representation of the speech (see Abstract); [0027] The presentation coaching unit 220 may analyze audio, video, and presentation content with machine learning models trained to identify aspects of the presenter's presentation skills and the presentation content that are good and those that may benefit from improvement. [0033] The teleprompter unit 225 and/or the presentation coaching unit 220 may be configured to annotate the audio content, video content, and/or the teleprompter script with annotation indications that identify specific improvements that the presenter may make to the presentation content, the presenter's presentation skills, and/or the teleprompter transcript); and causing the first region displaying the teleprompter notes to include a first visual indication that is associated with the first portion of the teleprompter notes ([0056] FIG. 3C shows an example implementation of the presentation user interface that shows a text editor popup 320 being used to highlight a portion of the teleprompter transcript.
In some implementations, the text editor popup 320 may be displayed in response to the user selecting a portion of the text of the teleprompter script 315 using a mouse, touchscreen, or other user interface features of the client device 105 of the presenter).

As per claim 10, CHANDRAN further discloses the system of claim 9, wherein the AI model comprises a speech-to-text AI model ([0018] the presenter's speech may be analyzed by a voice-to-text natural language processing (NLP) model).

As per claim 11, CHANDRAN further discloses the system of claim 9, wherein the first visual indication associated with the first portion of the teleprompter notes comprises at least one of: highlighting the first portion of the teleprompter notes; or appearance of the first portion of the teleprompter notes in larger font ([0056] the presentation user interface of the presentation and communications platform 110 may be configured to provide tools that enable the presenter to highlight text of the teleprompter transcript, change the font and/or font size of sections of the teleprompter transcript, select bold text and/or italicized text, and/or otherwise highlight words within the transcript. The presenter may wish to highlight certain words to indicate that these words should be emphasized in some way as the presenter is reading the teleprompter transcript. Also see Figs. 3C-3F).
As per claim 12, CHANDRAN further discloses the system of claim 9, further comprising: identifying, using the AI model and using the first audio stream as input to the AI model, a second portion of the teleprompter notes that corresponds to a second presentation segment currently covered by the first participant ([0005] analyzing the real-time textual representation and the teleprompter transcript with a second machine learning model to obtain transcript position information, the second machine learning model being configured to receive a first textual input and a second textual input and determine a position of the first textual input in the second textual input; and automatically scrolling the teleprompter transcript on the display of the computing device based on the transcript position information on a display of a computing device associated with the presenter. [0024] The content processing models 230 may also be configured to analyze presentation content to provide other types of service, such as but not limited to automated text scrolling of a presentation script for a teleprompter interface and eye direction correction for video content); causing the first region displaying the teleprompter notes to remove the first visual indication associated with the first portion of the teleprompter notes (as shown in Fig. 3E, the visual indication (bolded text 370 of Fig. 3D) associated with the first portion of the teleprompter notes is no longer highlighted; it is removed); and causing the first region displaying the teleprompter notes to include a second visual indication associated with the second portion of the teleprompter notes ([0078] FIGS. 3E and 3F show an example in which the teleprompter text is automatically scrolled as the user moves from one section to the next section of the teleprompter text. In the example shown in FIGS. 3E and 3F, the current line of text 325 which the presenter is predicted to be currently reading is highlighted.
However, in other implementations, other segments of the text may be highlighted. For example, the current sentence, bullet point, list item, or other segment of the text may be highlighted so that the presenter may keep track of their current position in the teleprompter text. Examiner’s Note: also see the second (or another) visual indicator of highlighting in the second paragraph of text in Fig. 3F).

As per claim 13, CHANDRAN further discloses the system of claim 12, wherein the first portion of the teleprompter notes and the second portion of the teleprompter notes are separated by a plurality of other portions of the teleprompter notes ([0056] in some implementations, the teleprompter transcript for a presentation may also be broken up into segments and each segment may be associated with a particular slide. The presentation user interface may include user interface elements that enable the user to move from slide to slide of the presentation and to annotate each segment of the teleprompter text associated with the presentation. [0066] Other configurations are also possible. For example, the annotation indicators may be added as tic marks or other indicators on the slider control that permits the presenter to select a particular portion of the audio content or video content for playback and provides an indication of a current portion of the audio content or video content being played).

As per claim 14, CHANDRAN further discloses the system of claim 9, further comprising causing the first region displaying the teleprompter notes to include a second visual indication that is associated with a second portion of the teleprompter notes, wherein the second visual indication comprises at least one of: an indication to emphasize the second portion of the teleprompter notes; or an indication to pause after the second portion of the teleprompter notes ([0078] FIGS.
3E and 3F show an example in which the teleprompter text is automatically scrolled as the user moves from one section to the next section of the teleprompter text. In the example shown in FIGS. 3E and 3F, the current line of text 325 which the presenter is predicted to be currently reading is highlighted).

As per claim 16, CHANDRAN further discloses the system of claim 9, further comprising causing, during the virtual meeting between the plurality of participants, a second virtual meeting UI to be displayed to a second participant of the plurality of participants ([0026] The presentation hosting unit 215 may be configured to facilitate hosting of an online presentation by a presenter. The presentation hosting unit 215 may be configured to permit the presenter to share a presentation content with a plurality of participants. The presentation hosting unit 215 may also be configured to facilitate presenting a presentation to a live audience as discussed above. The presentation content, teleprompter script, and/or other content may be visible to the presenter on the client device of the presenter. The presentation content may be displayed to the live audience via a display screen, a projector, or other means for displaying the presentation content. The teleprompter script may be presented on a display of the client device so that the presenter may refer to the teleprompter script while only the presentation content is shown to the live audience and/or sent to the client devices of the remote participants), wherein: the second participant is a current audience member ([0048] The presenter may share a link to the video of the presentation maintained by the presentation and communications platform 110 that permits the recipients to access the streaming video content of the presentation.
The link may be communicated to the respective client devices 105 of the audience members as an email, text message, or other type of message that indicates to the recipient that they are invited to view the recorded presentation); the second virtual meeting UI comprises a second region displaying a video stream produced by the client device of the first participant ([0021]The presentations and communications platform 110 may also be configured to support a hybrid approach in which some participants are present at the same location as the presenter while other participants are located remotely and receive the presentation content at their respective client devices, such as the client devices 105a, 105b, 105c, and 105d. The presentation and communications platform 110 may also be configured to support recording of a presentation by the presenter into a video without a live audience. The presenter may then share the video with the applicable audience. The presentation and communications platform 110 may be configured to provide the presenter with means for exporting the video of the presentation into various video formats, including but not limited to the MP4 digital multimedia format. The presentation and communications platform 110 may also support streaming of the video to an audience selected by the presenter. The presenter may share a link to the video of the presentation maintained by the presentation and communications platform 110 that permits the recipients to access the streaming video content of the presentation); and the second virtual meeting UI is free of the first region displaying the teleprompter notes ([0022] A presenter may utilize such a communications platform to conduct a meeting, a lecture, conference, or other such event online in which participants may be able to communicate with the presenter as well as other participants via chat and audio and/or video conferencing. 
In such an online communications platform, a participant may serve as a presenter for part of an online communications session, while another participant may serve as a presenter for another part of the online communications session).

As per method claims 1-14 and 16, since the method claims recite similar limitations to those of system claims 9-14 and 16, the method claims are also rejected under similar citations given to the system claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

3. Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over CHANDRAN in view of Zheng et al. (US 20240119862 A1). Zheng et al. (“Zheng”) is directed to a computer-implemented, syllable-based method for pronunciation assistance that helps a speaker obtain text in a target language that closely matches the pronunciation of input text in a first language.

As per claim 15, CHANDRAN at [0056] describes that the process 400 may include an operation 410 of receiving an input indicative of mark-up of content of the teleprompter transcript of the presentation.
The presentation user interface of the presentation and communications platform 110 may be configured to provide tools that enable the presenter to highlight text of the teleprompter transcript, change the font and/or font size of sections of the teleprompter transcript, select bold text and/or italicized text, and/or otherwise highlight words within the transcript. But CHANDRAN falls short of mentioning text indicating a pronunciation as required in the claim. Zheng, on the other hand, discloses text indicating a pronunciation as required in the above claim. Zheng at [0060]: FIG. 2B illustrates the pronunciation help environment 200 that was shown in FIG. 2A but in a subsequent instance in time when the user 202 has received the syllable-based text conversion for pronunciation help according to at least one embodiment. Specifically, FIG. 2B shows that the program for syllable-based text conversion 116 has generated pronunciation help converted text 212 which shows syllable-based characters in the target language which indicate how to pronounce the input text from the selected text, e.g., from the first text box 210. In this example, the pronunciation help converted text 212 includes the Chinese characters custom-character which show a Mandarin-Chinese approximation for pronouncing the English word Marylebone. The computer implementing this program for syllable-based text conversion 116 may be deemed an enhanced teleprompter via its display of a combination of (1) original source language text to be read by a user as well as (2) pronunciation help converted text 212 in a target language. This enhanced teleprompter not only helps a reader know which words to read but also provides pronunciation help for words in the text that are difficult to pronounce and/or read.
Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to combine the teaching of Zheng with CHANDRAN because accurate pronunciation is crucial for effective, confident, and professional communication, ensuring messages are understood without ambiguity. Therefore, it would have been obvious to combine Zheng with CHANDRAN to obtain the invention as specified in claim 15.

As per method claim 7, since method claim 7 recites similar limitations to those of claim 15, claim 7 is also rejected under similar citations given to system claim 15.

Allowable Subject Matter

4. Claims 17-20 are allowed. The following is an examiner’s statement of reasons for allowance: the examiner did not find prior art that teaches a method as recited in claims 17-20. That is, no relevant art was found that teaches in part a first AI model and a second AI model to generate and output teleprompter notes as recited in the claims. Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”

Conclusion

5. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 12204810 B2 discloses a natural language markup for meetings that facilitates planning and facilitation of online meetings. Shared content is obtained during an online meeting. The shared content is shared by a first participant in the online meeting for display on devices of one or more second participants in the online meeting. A visual object is detected in the shared content and additional content is obtained based on detecting the visual object. The additional content is transmitted with the shared content for display on the devices of the one or more second participants.

6.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TADESSE HAILU, whose telephone number is (571) 272-4051 and whose email address is Tadesse.hailu@USPTO.GOV. The examiner can normally be reached Monday-Friday, 9:30-5:30 (Eastern time).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bashore, William L., can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TADESSE HAILU/
Primary Examiner, Art Unit 2174

Prosecution Timeline

May 10, 2024
Application Filed
Jan 31, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596435
CONTACT OR CONTACTLESS INTERFACE WITH TEMPERATURE HAPTIC FEEDBACK
2y 5m to grant • Granted Apr 07, 2026
Patent 12578976
SYSTEMS AND METHODS FOR AFFINITY-DRIVEN INTERFACE GENERATION
2y 5m to grant • Granted Mar 17, 2026
Patent 12578849
METHOD, APPARATUS, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM FOR PAGE PROCESSING
2y 5m to grant • Granted Mar 17, 2026
Patent 12572198
USER INTERFACES FOR GAZE TRACKING ENROLLMENT
2y 5m to grant • Granted Mar 10, 2026
Patent 12566621
CUSTOMIZATION AND ENRICHMENT OF USER INTERFACES USING LARGE LANGUAGE MODELS
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 82% (+4.5%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 960 resolved cases by this examiner. Grant probability derived from career allow rate.
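The projection figures are consistent with simple arithmetic on the career statistics above; a minimal sketch (assuming whole-percent rounding and an additive interview lift, neither of which the page states explicitly):

```python
# Career allow rate from 747 grants out of 960 resolved cases.
grants, resolved = 747, 960
allow_rate = 100 * grants / resolved  # 77.8125%

# Headline grant probability, rounded to the nearest whole percent.
grant_probability = round(allow_rate)  # 78

# Interview-adjusted probability, assuming the +4.5% lift adds directly
# to the unrounded allow rate (an assumption, not stated by the page).
with_interview = round(allow_rate + 4.5)  # 82

print(grant_probability, with_interview)
```

Under those assumptions the sketch reproduces both the 78% headline figure and the 82% with-interview figure.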
