Prosecution Insights
Last updated: April 19, 2026
Application No. 18/369,279

GENERATING A VIRTUAL PRESENTATION STAGE FOR PRESENTATION IN A USER INTERFACE OF A VIDEO CONFERENCE

Status: Non-Final OA (§103)
Filed: Sep 18, 2023
Examiner: ANWAH, OLISA
Art Unit: 2692
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 3 (Non-Final)
Grant Probability: 89% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 1m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 89% (above average; 1036 granted / 1162 resolved; +27.2% vs TC avg)
Interview Lift: +4.2% (minimal; measured on resolved cases with interview)
Avg Prosecution: 2y 1m (fast prosecutor; 38 applications currently pending)
Total Applications: 1200 across all art units (career history)
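The headline figures are internally consistent and can be reproduced from the raw counts shown above. A minimal sketch (counts taken from this page; reading the "+27.2% vs TC avg" delta as percentage points is our assumption):

```python
# Career allow rate from this examiner's resolved cases (counts from this page).
granted = 1036
resolved = 1162
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # 89%

# If the "+27.2% vs TC avg" delta is in percentage points, the implied
# Tech Center 2600 average allow rate would be:
tc_avg = allow_rate - 0.272
print(f"Implied TC average: {tc_avg:.1%}")  # 62.0%
```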

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 29.1% (-10.9% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 1162 resolved cases
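One sanity check worth noting: subtracting each signed delta from its per-statute rate recovers the same implied Tech Center baseline for every row. A short sketch (figures from the table above; treating the deltas as percentage points is our assumption):

```python
# Per-statute figures from this page: (examiner rate %, delta vs TC avg).
stats = {
    "§101": (4.5, -35.5),
    "§103": (42.0, +2.0),
    "§102": (29.1, -10.9),
    "§112": (5.0, -35.0),
}
for statute, (rate, delta) in stats.items():
    # Implied TC baseline = examiner rate minus the signed delta.
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")
```

All four rows print the same 40.0% baseline, which suggests the dashboard compares every statute against a single common Tech Center estimate rather than per-statute averages.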

Office Action (§103)
DETAILED ACTION

Claim Rejections - 35 U.S.C. § 103

1. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

2. Claims 1, 3-9, 11-17, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mayfield, U.S. Patent No. 11,412,180 (hereinafter Mayfield) combined with Chandran et al., U.S. Patent Application Publication No. 2023/0127120 (hereinafter Chandran) in further view of Afrasiabi, U.S. Patent Application Publication No. 2022/0286625 (hereinafter Afrasiabi).
Regarding claim 1, Mayfield discloses a method comprising: receiving, from a camera (from Figure 11, see 1152) of a first client device of a first participant of a plurality of participants (from Figure 4, see 430a, 430b, 430c and 430d) of a video conference, a first participant video stream representing the first participant; creating a combined video stream (from Figure 4, see 440) comprising a background (from column 16, see background) image, one or more images (from Figure 4, see 442) representing one or more content items presentable by the first participant during the video conference, the first participant video stream (from Figure 4, see 444), and one or more other content items (from column 19, see other content items, such as pages, video clips, etc.); and providing, for display on the first client device (from Figure 4, see 400) of the first participant, a user interface (UI) (from Figure 4, see 410) comprising a visual item corresponding to the combined video stream while the first participant is presenting at least one of the one or more content items to one or more other participants of the video conference.

Further regarding claim 1, Mayfield does not teach one or more teleprompter notes associated with at least one of the one or more content items, wherein a location of the one or more teleprompter notes in the combined video stream is configured to enable the first participant to read the one or more teleprompter notes while appearing to look at the camera.
All the same, Chandran discloses one or more teleprompter notes associated with at least one of the one or more content items (from abstract, see teleprompter transcript associated with a presentation on a display) wherein a location of the one or more teleprompter notes is configured to enable the first participant to read the one or more teleprompter notes while appearing to look at the camera (from paragraph 0053, see The teleprompter unit 225 may automatically reconfigure the layout of the presentation interface to position the teleprompter script 315 as close as possible to the camera associated with the client device 105 of the presenter so that the eye gaze of the presenter appears to be centered on the camera if possible).

Therefore, it would have been obvious to one of ordinary skill in the art to modify Mayfield with one or more teleprompter notes associated with at least one of the one or more content items, wherein a location of the one or more teleprompter notes is configured to enable the first participant to read the one or more teleprompter notes while appearing to look at the camera as taught by Chandran. This modification would have improved the system’s reliability by allowing the presenter to read the text word for word to ensure the presentation is consistent and accurate as suggested by Chandran.

Still on the issue of claim 1, the combination of Mayfield and Chandran does not teach the one or more teleprompter notes being set to automatically scroll at a user-selectable scrolling speed.

All the same, Afrasiabi discloses the one or more teleprompter notes being set to automatically scroll at a user-selectable scrolling speed (from paragraph 0049, see Additionally, the speech and text parameters may be pre-set at a desired pace (like a teleprompter) so as to initiate the scrolling function for a field 220 comprising a script for a speech once the user begins a speech or live presentation).
Therefore, it would have been obvious to one of ordinary skill in the art to further modify the combination of Mayfield and Chandran wherein the one or more teleprompter notes being set to automatically scroll at a user-selectable scrolling speed as taught by Afrasiabi. This modification would have improved the system’s flexibility by providing different ways of setting the scrolling speed as suggested by Afrasiabi.

Regarding claim 3, the combination of Mayfield and Chandran discloses that combining the background image, the one or more images representing the one or more content items, the first participant video stream, and the one or more teleprompter (from abstract of Chandran, see teleprompter transcript) notes is performed by a driver (from Figure 11 of Mayfield, see 1160) associated with the camera (from Figure 11, see 1152) of the first client device (from Figure 11, see 1100).

Regarding claim 4, the combination of Mayfield and Chandran discloses: receiving, via the UI, input of the first participant to identify the one or more content items; and obtaining, via an application programming interface (API) to a content editing application (from Figure 2 of Chandran, see 205), the one or more content items and the one or more teleprompter notes associated with the at least one of the one or more content items.

Regarding claim 5, Mayfield discloses the content item is one of a document, a spreadsheet, a set of slides, or a multimedia content item (from Figure 4, see 462).

Regarding claim 6, Mayfield discloses receiving, from one or more client devices associated with other participants of the video conference, one or more other participant video streams and adding, to the UI displayed on the first client device of the first participant, one or more visual items (from Figure 4, see Participant Window) corresponding to the one or more other participant video streams.
Regarding claim 7, Mayfield as modified by Chandran discloses wherein a second UI displayed on each of the one or more client devices associated with the other participants of the video conference comprises a visual item corresponding to a second combined video stream comprising the background image, an image representing one of the one or more content items being presented by the first participant and the first participant video stream, wherein the one or more teleprompter notes associated with the at least one of the one or more content items are not visible on the second UI displayed on each of the one or more client devices associated with the other participants of the video conference (from paragraph 0026 of Chandran, see while the presentation content only is shown to the live audience and/or sent to the client devices of the remote participants).

Regarding claim 8, Mayfield as modified by Chandran discloses upon a selection of a control UI element in the UI by the first participant, modifying a visual representation of the at least one content item of the one or more content items; and modifying at least one teleprompter note of the one or more teleprompter notes associated with the at least one content item (from paragraph 0029 of Chandran, see The content creation and editor unit 205 may provide the presenter with another option for creating and/or editing the presentation content and/or the teleprompter script via a web-based application or via a native application installed on the client device 105a of the presenter. The content creation and editor unit 205 may provide a user interface that may be accessed via the browser application 255b of the client device 105a of the presenter that allows the presenter to create and/or edit the content of the presentation online).
Regarding claim 9, Mayfield discloses a system comprising: a memory device (from Figure 11, see 1120); and a processing device (from Figure 11, see 1110) coupled to the memory device, the processing device to perform operations comprising: receiving, from a camera (from Figure 11, see 1152) of a first client device of a first participant of a plurality of participants (from Figure 4, see 430a, 430b, 430c and 430d) of a video conference, a first participant video stream representing the first participant; creating a combined video stream (from Figure 4, see 440) comprising a background (from column 16, see background) image, one or more images (from Figure 4, see 442) representing one or more content items presentable by the first participant during the video conference, the first participant video stream (from Figure 4, see 444), and one or more other content items (from column 19, see other content items, such as pages, video clips, etc.); and providing, for display on the first client device (from Figure 4, see 400) of the first participant, a user interface (UI) (from Figure 4, see 410) comprising a visual item corresponding to the combined video stream while the first participant is presenting at least one of the one or more content items to one or more other participants of the video conference.

Further regarding claim 9, Mayfield does not teach one or more teleprompter notes associated with at least one of the one or more content items, wherein a location of the one or more teleprompter notes in the combined video stream is configured to enable the first participant to read the one or more teleprompter notes while appearing to look at the camera.
All the same, Chandran discloses one or more teleprompter notes associated with at least one of the one or more content items (from abstract, see teleprompter transcript associated with a presentation on a display) wherein a location of the one or more teleprompter notes is configured to enable the first participant to read the one or more teleprompter notes while appearing to look at the camera (from paragraph 0053, see The teleprompter unit 225 may automatically reconfigure the layout of the presentation interface to position the teleprompter script 315 as close as possible to the camera associated with the client device 105 of the presenter so that the eye gaze of the presenter appears to be centered on the camera if possible).

Therefore, it would have been obvious to one of ordinary skill in the art to modify Mayfield with one or more teleprompter notes associated with at least one of the one or more content items, wherein a location of the one or more teleprompter notes is configured to enable the first participant to read the one or more teleprompter notes while appearing to look at the camera as taught by Chandran. This modification would have improved the system’s reliability by allowing the presenter to read the text word for word to ensure the presentation is consistent and accurate as suggested by Chandran.

Still on the issue of claim 9, the combination of Mayfield and Chandran does not teach the one or more teleprompter notes being set to automatically scroll at a user-selectable scrolling speed.

All the same, Afrasiabi discloses the one or more teleprompter notes being set to automatically scroll at a user-selectable scrolling speed (from paragraph 0049, see Additionally, the speech and text parameters may be pre-set at a desired pace (like a teleprompter) so as to initiate the scrolling function for a field 220 comprising a script for a speech once the user begins a speech or live presentation).
Therefore, it would have been obvious to one of ordinary skill in the art to further modify the combination of Mayfield and Chandran wherein the one or more teleprompter notes being set to automatically scroll at a user-selectable scrolling speed as taught by Afrasiabi. This modification would have improved the system’s flexibility by providing different ways of setting the scrolling speed as suggested by Afrasiabi.

Claim 11 is rejected for the same reasons as claim 3. Claim 12 is rejected for the same reasons as claim 4. Claim 13 is rejected for the same reasons as claim 5. Claim 14 is rejected for the same reasons as claim 6. Claim 16 is rejected for the same reasons as claim 8.

Regarding claim 17, Mayfield discloses a non-transitory computer readable storage medium (from column 1, see non-transitory computer-readable medium) comprising instructions for a server that, when executed by a processing device, cause the processing device to perform operations comprising: receiving, from a camera (from Figure 11, see 1152) of a first client device of a first participant of a plurality of participants (from Figure 4, see 430a, 430b, 430c and 430d) of a video conference, a first participant video stream representing the first participant; creating a combined video stream (from Figure 4, see 440) comprising a background (from column 16, see background) image, one or more images (from Figure 4, see 442) representing one or more content items presentable by the first participant during the video conference, the first participant video stream (from Figure 4, see 444), and one or more other content items (from column 19, see other content items, such as pages, video clips, etc.); and providing, for display on the first client device (from Figure 4, see 400) of the first participant, a user interface (UI) (from Figure 4, see 410) comprising a visual item corresponding to the combined video stream while the first participant is presenting at least one of the one or more content items to one or more other participants of the video conference.

Further regarding claim 17, Mayfield does not teach one or more teleprompter notes associated with at least one of the one or more content items, wherein a location of the one or more teleprompter notes in the combined video stream is configured to enable the first participant to read the one or more teleprompter notes while appearing to look at the camera.

All the same, Chandran discloses one or more teleprompter notes associated with at least one of the one or more content items (from abstract, see teleprompter transcript associated with a presentation on a display) wherein a location of the one or more teleprompter notes is configured to enable the first participant to read the one or more teleprompter notes while appearing to look at the camera (from paragraph 0053, see The teleprompter unit 225 may automatically reconfigure the layout of the presentation interface to position the teleprompter script 315 as close as possible to the camera associated with the client device 105 of the presenter so that the eye gaze of the presenter appears to be centered on the camera if possible).

Therefore, it would have been obvious to one of ordinary skill in the art to modify Mayfield with one or more teleprompter notes associated with at least one of the one or more content items, wherein a location of the one or more teleprompter notes is configured to enable the first participant to read the one or more teleprompter notes while appearing to look at the camera as taught by Chandran. This modification would have improved the system’s reliability by allowing the presenter to read the text word for word to ensure the presentation is consistent and accurate as suggested by Chandran.

Still on the issue of claim 17, the combination of Mayfield and Chandran does not teach the one or more teleprompter notes being set to automatically scroll at a user-selectable scrolling speed.
All the same, Afrasiabi discloses the one or more teleprompter notes being set to automatically scroll at a user-selectable scrolling speed (from paragraph 0049, see Additionally, the speech and text parameters may be pre-set at a desired pace (like a teleprompter) so as to initiate the scrolling function for a field 220 comprising a script for a speech once the user begins a speech or live presentation).

Therefore, it would have been obvious to one of ordinary skill in the art to further modify the combination of Mayfield and Chandran wherein the one or more teleprompter notes being set to automatically scroll at a user-selectable scrolling speed as taught by Afrasiabi. This modification would have improved the system’s flexibility by providing different ways of setting the scrolling speed as suggested by Afrasiabi.

Claim 19 is rejected for the same reasons as claim 3. Claim 20 is rejected for the same reasons as claim 4.

3. Claims 2, 10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Mayfield combined with Chandran and Afrasiabi in further view of VanBlon et al., U.S. Patent No. 9,875,224 (hereinafter VanBlon).

Regarding claim 2, although Mayfield discloses the one or more images representing the one or more content items are overlaid over the background image in the combined video stream, the first participant video stream is overlaid over at least a part of the background image (from Figure 4, see 440), the combination of references does not teach that the teleprompter notes are overlaid over at least a part of a content item of the one or more content items in the combined video stream.

All the same, VanBlon discloses the notes are overlaid over at least a part of a content item of the one or more content items (from Figure 3B, see Note).
Therefore, it would have been obvious to one of ordinary skill in the art to further modify the combination of references wherein the notes are overlaid over at least a part of a content item of the one or more content items as taught by VanBlon. This modification would have improved the system’s convenience by allowing the presenter to mark different content elements of the slide as suggested by VanBlon.

Claim 10 is rejected for the same reasons as claim 2. Claim 18 is rejected for the same reasons as claim 2.

Response to Arguments

4. Applicant’s arguments have been considered but are deemed to be moot in view of the new grounds of rejection.

Conclusion

5. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLISA ANWAH whose telephone number is 571-272-7533. The examiner can normally be reached Monday to Friday from 8:30 AM to 6 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn Edwards, can be reached at 571-270-7136. The fax phone numbers for the organization where this application or proceeding is assigned are 571-273-8300 for regular communications and 571-273-8300 for After Final communications. Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist whose telephone number is 571-272-2600.

Olisa Anwah
Patent Examiner
January 14, 2026
/OLISA ANWAH/
Primary Examiner, Art Unit 2692

Prosecution Timeline

Sep 18, 2023: Application Filed
May 29, 2025: Non-Final Rejection — §103
Aug 26, 2025: Applicant Interview (Telephonic)
Aug 26, 2025: Examiner Interview Summary
Sep 02, 2025: Response Filed
Sep 07, 2025: Final Rejection — §103
Oct 22, 2025: Interview Requested
Oct 29, 2025: Examiner Interview Summary
Oct 29, 2025: Applicant Interview (Telephonic)
Dec 11, 2025: Request for Continued Examination
Jan 14, 2026: Response after Non-Final Action
Jan 14, 2026: Non-Final Rejection — §103
Apr 02, 2026: Applicant Interview (Telephonic)
Apr 05, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604130: HEARING DEVICE WITH A BLEEDING CIRCUIT FOR DELIVERING MESSAGES TO A CHARGING DEVICE (2y 5m to grant; granted Apr 14, 2026)
Patent 12598710: Terminal Device (2y 5m to grant; granted Apr 07, 2026)
Patent 12597251: VIDEO FRAMING BASED ON TRACKED CHARACTERISTICS OF MEETING PARTICIPANTS (2y 5m to grant; granted Apr 07, 2026)
Patent 12596515: FIRST DEVICE, COMMUNICATION SERVER, SECOND DEVICE AND METHODS IN A COMMUNICATIONS NETWORK (2y 5m to grant; granted Apr 07, 2026)
Patent 12598437: EARPHONES AND EARPHONE SYSTEM (2y 5m to grant; granted Apr 07, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 89%
With Interview: 93% (+4.2%)
Median Time to Grant: 2y 1m
PTA Risk: High
Based on 1162 resolved cases by this examiner. Grant probability derived from career allow rate.
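The with-interview figure is consistent with a simple additive model over the base rate. A sketch under that assumption (figures from this page; the additive model itself is our guess at how the dashboard combines them):

```python
# Simple additive model for the interview-adjusted grant probability.
base = 1036 / 1162      # career allow rate, about 89.2% (counts from this page)
interview_lift = 0.042  # reported +4.2 percentage-point lift with interviews
with_interview = base + interview_lift
print(f"Grant probability with interview: {with_interview:.0%}")  # 93%
```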
