Prosecution Insights
Last updated: April 19, 2026
Application No. 18/159,617

System and Methods for Enhancing Videoconferences

Status: Non-Final Office Action (§103), Round 3
Filed: Jan 25, 2023
Examiner: PROVIDENCE, VINCENT ALEXANDER
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Zenapptic AI Inc.

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83%, above average (15 granted / 18 resolved; +21.3% vs TC avg)
Interview Lift: +25.0%, strong (allow rate on resolved cases with an interview vs. without)
Typical Timeline: 2y 5m average prosecution; 38 applications currently pending
Career History: 56 total applications across all art units

Statute-Specific Performance

§101: 0.9% (-39.1% vs TC avg)
§103: 82.4% (+42.4% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 0.9% (-39.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 18 resolved cases.
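For readers who want to sanity-check these figures, here is a minimal sketch of the arithmetic, assuming the displayed deltas are percentage-point differences. The variable names and the uniform 40% Tech Center baseline are inferences from the numbers shown, not the tool's documented model:

```python
# Reproduce the examiner metrics above from the raw counts on the page.
granted, resolved = 15, 18
allow_rate = granted / resolved              # 0.8333 -> displayed as 83%

# "+21.3% vs TC avg" is a percentage-point delta, implying a TC
# average allow rate near 62% (an inference, not a documented value).
tc_avg_allow = allow_rate - 0.213            # ~0.620

# Statute-specific shares; each "+/-x% vs TC avg" delta equals
# share - baseline. The displayed deltas are all consistent with a
# uniform 40.0% baseline estimate per statute.
shares = {"§101": 0.009, "§103": 0.824, "§102": 0.148, "§112": 0.009}
baseline = 0.400
for statute, share in shares.items():
    delta = (share - baseline) * 100
    print(f"{statute}: {share:.1%} ({delta:+.1f}% vs TC avg)")
```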

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment filed January 2nd, 2026 has been entered. Claims 1-11 are pending in the application.

Response to Arguments

Applicant's arguments, filed January 2nd, 2026, have been fully considered and are addressed below.

Applicant's argument regarding the "real-time rendering engine and camera driver" of claim 1 was considered persuasive. However, upon further search and consideration, newly found reference Zotto (US 20220414943 A1), as best understood by the Examiner, cures the deficiencies highlighted in Applicant's argument.

The Applicant argues: "In addition, unlike Applicant's claimed method, the cited references do not disclose an AI module that analyzes real-time progress based on presentation data, agenda, and additional inputs, compares the real-time progress to the agenda, and provides real-time visual prompts to attendees via attendee computing devices. Instead, Nelson describes generic timers or reminders that are different from Applicant's claimed agenda-aware analysis and attendee-directed distribution."

Claim 1 recites: "… by an AI module analyzing the real time progress of a videoconference, based on the real time presentation data, the meeting agenda and the one or more additional data inputs, and comparing the real time progress to the agenda; and based on results providing real time visual prompts to attendees of the meeting, via an attendee computing device, to notify the attendees of one or more of elapsed time, agenda items, and progress of a presenting attendee in relation to a meeting agenda."

The Examiner respectfully disagrees that Nelson merely describes generic timers or reminders as opposed to agenda-aware analysis and attendee-directed distribution, because Nelson teaches: "According to one embodiment, artificial intelligence is used to provide agenda management functionality during electronic meetings. […] Example functionality includes, without limitation, enforcing time constraints for agenda items, changing designated amounts of time for agenda items, changing, deleting and adding agenda items, including providing missing or supplemental information for agenda items, and agenda navigation." [0154], emphasis added. Functionality such as "enforcing time constraints for agenda items" inherently requires analysis of the meeting relative to the agenda and real-time progress, as discussed in Note 1F in the previous action. Nelson also teaches in the Abstract that "The artificial intelligence may analyze a wide variety of data such as data pertaining to other electronic meetings, data pertaining to organizations and users, and other general information pertaining to any topic." Therefore, one of ordinary skill in the art would find it obvious to utilize the artificial intelligence to analyze not just the remaining time for an agenda item, but also other presentation data or data inputs, in order to prompt the user with the reminders detailed in paragraph [0156] of Nelson.
For at least the reasons described above, the Examiner is not convinced that "the cited references do not disclose an AI module that analyzes real-time progress based on presentation data, agenda, and additional inputs, compares the real-time progress to the agenda, and provides real-time visual prompts to attendees via attendee computing devices."

The Applicant argues that: "Nelson does not disclose a real-time interactive graphics generator producing visual prompts per attendee nor does it teach injecting graphics into a camera feed or virtual camera driver behavior. In Applicant's independent claim 6, graphics are delivered to attendees during a live video session via a camera feed or virtual camera driver. Applicant's claim 6 also discloses overlaying generated visuals […] in a video teleconferencing camera feed or stream." However, Claim 6 does not appear to contain all the same limitations as Claim 1. Claim 6 recites "a real-time interactive graphics generator operable to generate graphics for visual content for each attendee" but does not describe "overlaying generated visuals".

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "graphics generator operable to generate graphics" in claim 6.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The corresponding structure for the "graphics generator operable to generate graphics" in claim 6 is the real-time rendering engine 18 from the specification and drawings.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Nelson (US 20180101281 A1) in view of Bujnowski (US 20190394057 A) and Zotto (US 20220414943 A1).

Nelson teaches: A method for enhancing a videoconference presentation comprising:

by a multimedia module: receiving presentation data and one or more additional data inputs from one or more servers (Nelson: Meeting intelligence apparatus 102 may access meeting content data as if it were a node associated with a participant in an electronic meeting. Thus, meeting intelligence apparatus 102 may access any meeting content data that is transmitted from any of the one or more nodes 104A-N involved in an electronic meeting [0076]; Nelson: multiple nodes may be communicatively coupled with each other via a centralized server [0083]);

creating an agenda based on the presentation data and one or more additional data inputs (Nelson: Referring to the prior example, meeting intelligence apparatus 102 may detect, in first meeting content data 302, the command, "new agenda item Y," along with attribute data for new agenda item Y, such as a description of the new agenda item. This command may have been spoken, written or selected by a meeting participant, as indicated by the first meeting content data 302 [0160]; see Note 1A);

by a real-time rendering engine (Nelson: render computer system 1000 [0240]) and camera (Nelson: The node may also include any of a number of input/output mechanisms, such as a camera [0084]), overlaying the presentation output in a video teleconferencing feed (Nelson: In some situations, a large amount of additional information may be available for suggested meeting participants. […] The additional information may be displayed, for example, in a pop-up box or overlaid window and may include, for example, any of the information described above, such as key quotes from prior meetings, etc. [0147]; see Note 1D and Note 1E);

by an AI module, analyzing the real time progress of a videoconference (Nelson: monitor the progress of an electronic meeting and enforce time constraints with respect to individual agenda items [0156]), based on the real time presentation data, the meeting agenda and the one or more additional data inputs (see Note 1B), and comparing the real time progress to the agenda (Nelson: A determined correspondence between a current point in an electronic meeting and a meeting agenda may be used to monitor the progress of an electronic meeting and enforce time constraints with respect to individual agenda items, groups of agenda items, and/or an entire electronic meeting [0156]);

and based on results providing real time (see Note 1F) visual prompts to attendees of the meeting, via an attendee computing device, to notify the attendees of one or more of elapsed time (Nelson: a visual and/or audible indication may be provided when an amount of time designated for an agenda item, group of agenda items, or an entire electronic meeting, is nearing expiration or has expired [0156]), agenda items (Nelson: in addition to the timer provided in agenda window 218 (FIG. 2D), a visual and/or audible indication may be provided when an amount of time designated for an agenda item, group of agenda items, or an entire electronic meeting, is nearing expiration or has expired [0156]), and progress of a presenting attendee in relation to a meeting agenda (see Note 1C).

Note 1A: As described in paragraph [0019] of the specification, "presentation data and one or more data inputs" includes "static graphics".
Nelson teaches that the meeting content data is data that is "transmitted from any of the one or more nodes 104A-N involved in an electronic meeting," [0076], and that the meeting content data may comprise images: "meeting content data, such as documents, images, and/or any other data shared during an electronic meeting," [0210]. Therefore, the meeting content data is analogous to the presentation data and one or more data inputs.

Note 1B: Nelson teaches that: "meeting intelligence apparatus 102 may intervene during electronic meetings to provide any of a variety of intervention data, such as visual indications, messages in message window 224 [etc.]," [0148]. That is, an intervention of the meeting with a notification or similar is caused by intervention data. Nelson teaches that the intervention data is generated based on "Audio/video data 300 may be one or more data packets, a data stream, and/or any other form of data that includes audio and/or video information related to an electronic meeting," [0149], and a "cue 304 [which may] include, without limitation, one or more keywords, tones, sentiments, facial recognitions, etc., that can be discerned from audio/video data 300. Other examples of cue 304 include whiteboard sketches and/or gestures that may not be part of audio/video data 300," [0149]. I.e., the AI module or "meeting intelligence apparatus" of Nelson may analyze various real-time data of the meeting in order to determine when to send a notification.

Note 1C: Nelson teaches that a visible or audible indication may be provided when the amount of time for an agenda item "is nearing expiration," [0156]. At that time the time allotted to the agenda item has not yet elapsed, and the indication merely shows that progress toward the end of the time exceeds a certain amount. Therefore the indication taught by Nelson is analogous to a prompt of the progress in relation to the meeting agenda. Nelson further teaches: "Speech or text recognition logic 400 may process first meeting content data 302 by parsing to detect keywords that are mapped to a meeting agenda," i.e., speech or text from participants is associated with the meeting agenda in real time. Therefore, the indication taught by Nelson is also analogous to a prompt of the progress of a presenting attendee in relation to the meeting agenda.

Note 1D: With respect to the "any of the information described above" cited by Nelson in [0147] above, Nelson teaches "Examples of additional information include, without limitation, information about a suggested meeting participant, such as […] published books, papers, studies, articles," [0145]. According to the present specification, the presentation data and one or more additional data inputs includes "static graphics," [0019]. Papers and articles can be considered a form of static graphics, and therefore, the additional data is analogous to the presentation data and one or more additional data inputs.

Note 1E: Nelson teaches "Electronic meeting screen 212 includes a content window 213 that includes content 214 for a current electronic meeting, which may represent a videoconferencing session," [0095]. A videoconferencing session inherently comprises receiving and displaying a video teleconferencing camera feed. Nelson teaches that the session takes place within a window on the computer. Nelson teaches that additional information may be displayed in an "overlaid window": "The additional information may be displayed, for example, in a pop-up box or overlaid window," [0147]. Therefore, Nelson teaches the presentation output may be overlaid over the video teleconferencing camera feed.

Note 1F: Nelson teaches that: "A determined correspondence between a current point in an electronic meeting and a meeting agenda may be used to monitor the progress of an electronic meeting and enforce time constraints," [0156]. Thus, based on the current time (or, based on "real-time results") a visual notification will be displayed to the attendees of the meeting.

Nelson fails to teach: by a real-time rendering engine and virtual camera driver, overlaying the presentation output in a video teleconferencing camera feed; and based on results providing real time visual prompts to attendees of the meeting, via an attendee computing device, to notify the attendees of one or more of elapsed time, agenda items, and progress of a presenting attendee in relation to a meeting agenda.

Bujnowski teaches: by an AI module (Bujnowski: The algorithm may include a learning algorithm such as a convolutional neural network (CNN), a recurrent neural network (RNN), a deep neural network (DNN), and the like. [0046]) provide visual prompts (Bujnowski: notification message [0120]) to attendees (Bujnowski: first user 411, second user 412, third user 413, Fig. 4 [0154]) of the meeting, via an attendee computing device (Bujnowski: electronic device 421 [0070]), to notify the attendees of one or more of (Bujnowski: The first electronic device 421 may display, through the second area 1220, information about the meeting that is in progress [0202]) elapsed time (Bujnowski: total meeting time [0202]), agenda items (Bujnowski: information about a participant who is going to speak next, [0202]; Bujnowski: meeting pattern [0068]), and progress of a presenting attendee (Bujnowski: information on the remaining time of the current speaker [0202]) in relation to a meeting agenda.

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Bujnowski with Nelson. Introducing an AI module to track overall meeting time, and based on an agenda, to allot the attendees a pre-defined amount of time, and to provide visual prompts by the graphics generator, as in Bujnowski, would benefit the Nelson teachings by automating the tedious and/or time-consuming task of setting up an agenda and planning which attendee should speak when, and for how long.

Nelson in view of Bujnowski still fails to teach: by a real-time rendering engine and virtual camera driver, overlaying the presentation output in a video teleconferencing camera feed.

Zotto teaches: by a real-time rendering engine and virtual camera driver, overlaying the presentation output in a video teleconferencing camera feed (Zotto: In other examples, a proxy camera or virtual camera can be utilized to intercept the image data and alter the image data to include the additional elements. In some examples, the image data bar can be overlaid on the image data at a selected location and the drive device transform can transmit the image data with the image data bar to a teleconference application or teleconference portal. [0013]).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Zotto with Bujnowski and Nelson. Overlaying the presentation output in a video teleconferencing camera feed by a real-time rendering engine and virtual camera driver, as in Zotto, would benefit the Nelson teachings by showcasing information useful to viewers without having to manually overlay specific information: "it can be time consuming to exchange information and/or make introductions when a plurality of users are utilizing the teleconference application." (Zotto, [0008]).

Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Nelson (US 20180101281 A1) in view of Bujnowski (US 20190394057 A), Zotto (US 20220414943 A1) and Makker (US 20210390953 A1).

Regarding claim 2: Nelson in view of Bujnowski and Zotto teaches: The method of claim 1 (as shown above). Nelson in view of Bujnowski and Zotto fails to teach: further comprising providing a user input mechanism to control the one or more additional data inputs.

Makker teaches: further comprising providing a user input mechanism (Makker: icons 1454, Fig. 14 [0131]) to control the one or more additional data inputs (Makker: auxiliary content 1453, Fig. 14 [0131]).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Makker with Nelson in view of Bujnowski and Zotto. Providing a user input mechanism to control the one or more additional data inputs, as in Makker, would benefit the Nelson in view of Bujnowski and Zotto teachings by allowing the user to influence how the additional data inputs are handled by the system.

Regarding claim 3: Nelson in view of Bujnowski and Zotto teaches: The method of claim 1 (as shown above). Nelson in view of Bujnowski and Zotto fails to teach: wherein the one or more additional data inputs comprise one or more of presentation templates, real-time data, static graphics, virtual whiteboards and video streams.

Makker teaches: wherein the one or more additional data inputs (Makker: auxiliary content 1453 (Fig. 14) [0108]) comprise one or more of presentation templates (Makker: framing setup 600 (Fig. 6) [0106]; Makker: any other document or exhibit [0103]), real-time data (Makker: shared auxiliary content [0108]), static graphics (Makker: picture [0103]), virtual whiteboards (Makker: whiteboard capability [0108]) and video streams (Makker: video [0103]).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Makker with Nelson in view of Bujnowski and Zotto. Providing a user input mechanism to control the one or more additional data inputs, as in Makker, would benefit the Nelson in view of Bujnowski and Zotto teachings by allowing the user to influence how the additional data inputs are handled by the system.

Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Nelson (US 20180101281 A1) in view of Bujnowski (US 20190394057 A), Zotto (US 20220414943 A1), Makker (US 20210390953 A1) and Kim (KR 20130142458 A).
Regarding claim 4: Nelson in view of Bujnowski, Zotto, and Makker teaches: The method of claim 3 (as shown above). Nelson in view of Bujnowski, Zotto, and Makker fails to teach: wherein the presentation templates comprise 2D or 3D environments simulating a 3D space, containing the presentation data and one or more data inputs; at least one of said one or more data inputs selected from the following list: live camera feeds, slide decks, real time data, static graphics, online documents, photos, videos and collaborative virtual whiteboards.

Kim teaches: wherein the presentation templates (Kim: lecture screen configuration, Pg. 19, par. 9, Fig. 37) comprise 2D (Kim: Studio design with 2D multi-lay method, Pg. 13, par. 8) or 3D environments (Kim: 3D studio design, Pg. 13, par. 9) simulating a 3D space containing the presentation data (Kim: presentation material, Pg. 5, par. 1) and one or more data inputs; at least one of said one or more data inputs selected from the following list: live camera feeds (Kim: real-time recording of the lecture or remote presentation, Pg. 5, par. 1), slide decks, real time data, static graphics (Kim: pictures of the notebook, Pg. 5, par. 1), online documents (Kim: document file, Pg. 5, par. 1), photos, videos (Kim: video file, Pg. 5, par. 1) and collaborative virtual whiteboards (Kim: electronic blackboard, Pg. 5, par. 1).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Kim with Nelson in view of Bujnowski, Zotto, and Makker. Having the presentation templates comprise 2D or 3D environments simulating a 3D space, containing the presentation data and one or more data inputs, as in Kim, would benefit the Nelson in view of Bujnowski, Zotto, and Makker teachings by allowing the user to beautify their presentation quickly, without having to create environments or art specifically for each presentation.

Regarding claim 5: Nelson in view of Bujnowski, Zotto, Makker and Kim teaches: The method of claim 4 (as shown above), wherein the visual prompts (Kim: presentation material, Pg. 5, par. 1) are placed within the environments (Kim: Fig. 13; see Note 5A) at pre-determined locations (Kim: Various output modes are illustrated in the screen configuration converter 500 of FIG. 17. For example, a mode in which the lecturer is largely positioned in the center of the screen, a mode in which the lecturer is shown on the left screen, a mode in which the presentation screen is located on the upper right of the lecturer, and the like are illustrated. Pg. 7, par. 9; see Note 5B) and/or placed between a viewpoint and the environment (Kim: Fig. 13; see Note 5A).

Note 5A: In Fig. 13, Kim showcases that the presentation material is placed within an environment near a speaker. Because the presentation material is visible and within an environment, it must be "placed between a viewpoint and the environment".

Note 5B: The configurations as showcased by Kim describe "pre-determined locations" to place the presentation content.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Nelson (US 20180101281 A1) in view of Bujnowski (US 20190394057 A).

Nelson teaches: A system (Nelson: FIG. 10 depicts an example computer system upon which embodiments may be implemented. [0035]) for facilitating a video conference coupling multiple meeting participants via a network, the system comprising:

a communication network configured to provide data transmission from a source to one or more destinations (Nelson: Arrangement 100 includes a meeting intelligence apparatus 102 and one or more nodes 104A-N, communicatively coupled via network infrastructure 106 [0071]);

a plurality of user computers, coupled to the communication network (Nelson: In an embodiment, arrangement 100 involves a network of computers. A "computer" may be one or more physical computers, virtual computers, and/or computing devices [0073]), configured to be utilized by meeting attendees for a video conference (Nelson: Nodes 104A-N are associated with a plurality of electronic meeting participants 108A-N, also referred to herein as "participants," [0071]);

and a server coupled to the plurality of client user computers via the communication network (Nelson: For example, multiple nodes may be communicatively coupled with each other via a centralized server or via a peer-to-peer network. [0083]) and configured to manage the video conference between the attendees (Nelson: Electronic meeting application 112 is configured to interact with one or more other electronic meeting applications on other computing devices and/or one or more electronic meeting managers or servers, [0084]; see Note 6A), the server configured to include:

a real-time interactive graphics generator operable to generate graphics for visual content for each attendee (Nelson: FIG. 2E depicts an electronic meeting management screen 230 displayed by an electronic meeting application in response to a user selecting a control from meeting controls 222 [0096]; see Note 6B);

an AI module configured to receive presentation data and one or more additional data inputs and use the presentation data and one or more additional data inputs to allot a pre-defined amount of time in a meeting agenda (see Note 6C);

and the AI module further configured to analyze the progress of a videoconference in real time and based on the real time presentation data, the meeting agenda and the one or more additional data inputs (Nelson: Meeting intelligence apparatus 102 may analyze meeting content data using any of a number of tools, such as speech or text recognition, voice or face identification, sentiment analysis, object detection, gestural analysis, thermal imaging, etc. [0076]), determine the real time progress of the meeting compared to the meeting agenda (Nelson: monitor the progress of an electronic meeting and enforce time constraints with respect to individual agenda items [0156]; see Note 1B), and based on results provide real time visual prompts by the graphics generator to the attendees to notify a presenting attendee of their status in relation to the pre-defined amount of time (Nelson: in addition to the timer provided in agenda window 218 (FIG. 2D), a visual and/or audible indication may be provided when an amount of time designated for an agenda item, group of agenda items, or an entire electronic meeting, is nearing expiration or has expired [0156]; see Note 1C) and/or to notify at least one of the attendees of the time until the beginning of their pre-defined amount of time.
Note 6A: Nelson teaches that "Electronic meeting application 112 is configured to interact with one or more other electronic meeting applications on other computing devices and/or one or more electronic meeting managers or servers," [0084] and that "multiple nodes may be communicatively coupled with each other via a centralized server or via a peer-to-peer network," [0083]. The server is taught to be analogous to an "electronic meeting manager": "electronic meeting managers or servers" [0084]. Therefore, when the server is a "centralized server" as taught in [0083] above, the server may function as an electronic meeting manager. Nelson further teaches: "Examples of electronic meetings include, […] videoconferencing sessions" [0072], and therefore the server taught by Nelson is "configured to be utilized by meeting attendees for a video conference".

Note 6B: Nelson teaches the "electronic meeting application" (real-time interactive graphics generator) may generate an "electronic meeting management screen 230" (graphics for visual content) based on user input for each attendee ([0096] as cited above). In light of the 112(f) claim interpretation, the term "graphics generator" was previously understood to include only the "real-time rendering engine" described in [0055] of the present specification. In the Remarks, the applicant argued that "In Applicant's independent claim 6, graphics are delivered to attendees during a live video session via a camera feed or virtual camera driver" (see Pg. 7 of Applicant's Remarks). Nelson teaches a camera feed: "The node may also include any of a number of input/output mechanisms, such as a camera," [0084]. Furthermore, the specification of the present application teaches: "In some example embodiments, the multimedia client 104 may be installed in the cloud, and in such embodiments, the virtual camera driver 24 may not be required". Because the driver may not be required in all embodiments and the limitation in question would not lead one of ordinary skill in the art to include a "virtual camera driver", the Examiner submits that under broadest reasonable interpretation, Nelson teaches a graphics generator as discussed above.

Note 6C: Nelson teaches that each agenda item may be assigned a time limit: "a maximum amount of time of 15 minutes may be spent on each agenda item," [0091]. Nelson further teaches: "According to one embodiment, meeting rules may be created with the assistance of meeting intelligence apparatus 102," [0093], and that the apparatus may analyze the presentation data or "meeting content data" (analogousness was shown in Note 1A) to perform tasks: "Based on analyzing the meeting content data and/or in response to requests, […] meeting intelligence apparatus 102, either alone or in combination with one or more electronic meeting applications, performs any of a number of automated tasks," [0076]. As shown in Note 1A above, the meeting content data is analogous to the presentation data and one or more data inputs. Therefore, Nelson teaches that when the meeting intelligence apparatus analyzes in order to create meeting rules, it analyzes the meeting content data, or the "presentation data and one or more data inputs": "Meeting intelligence apparatus 102 may analyze meeting content data using any of a number of tools, such as speech or text recognition, voice or face identification, sentiment analysis, object detection, gestural analysis, thermal imaging, etc.," [0076]. Therefore, Nelson teaches receiving presentation data and one or more additional data inputs and using the presentation data and one or more additional data inputs to allot a pre-defined amount of time in a meeting agenda.

Nelson fails to teach: receive presentation data and one or more additional data inputs and use the presentation data and one or more additional data inputs to allot to each of the attendees a pre-defined amount of time in a meeting agenda.

Bujnowski teaches: an AI module (Bujnowski: The algorithm may include a learning algorithm such as a convolutional neural network (CNN), a recurrent neural network (RNN), a deep neural network (DNN), and the like. [0046]) configured to track overall meeting time and, based on an agenda (Bujnowski: In some embodiments, the meeting pattern may include, as a method of using time resources, at least one of an utterance period allocated to each of the users attending the meeting [0069]), to allot to each of the attendees (Bujnowski: first user 411, second user 412, third user 413, Fig. 4 [0154]) a pre-defined amount of time (Bujnowski: As another example, the electronic device 401 may determine the speaking time for each participant. The electronic device 401 may determine the speaking time for each participant on the basis of the speaking speed of each participant. [0110]).

Bujnowski further teaches: "The meeting pattern denotes a method of using resources for conducting a meeting. The resources may include time resources allocated to at least one of the participants," [0068] and "In some embodiments, the meeting pattern may include, as a method of using time resources, at least one of an utterance period allocated to each of the users attending the meeting," [0069]. Therefore, when the teachings of Bujnowski are combined with the teachings of Nelson, it would be obvious to one of ordinary skill in the art to allot time to each of the attendees based on the presentation data and one or more additional data inputs.

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Bujnowski with Nelson. Allotting time to each of the attendees, as in Bujnowski, would benefit the Nelson teachings by automating the tedious and/or time-consuming task of setting up an agenda and planning which attendee should speak when, and for how long. Additionally, this is applying a known technique, automatically dividing meeting time between multiple speakers, to a known device ready for improvement, the Nelson device, to yield predictable results.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Nelson (US 20180101281 A1) in view of Bujnowski (US 20190394057 A) and Burton (US 20160027442 A1).

Nelson in view of Bujnowski teaches: The system of claim 6 (as shown above). Nelson in view of Bujnowski fails to teach: further configured to prompt the presenting attendee to summarize their presentation and/or automatically create a form to send to the attendees or an assigned meeting notetaker.
Burton teaches: further configured to prompt the presenting attendee to summarize their presentation (Burton: In an embodiment, program 200 sends a summary to all speakers so that the speakers can proof the summary before forwarding the summary to other participants. [0025]) and/or automatically create a form to send to the attendees (Burton: In another embodiment, program 200 automatically sends a summary to all participants [0025]) or an assigned meeting notetaker.

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Burton with Nelson in view of Bujnowski. Prompting the presenting attendee to summarize their presentation and/or automatically creating a form to send to the attendees or an assigned meeting notetaker, as in Burton, would benefit the Nelson in view of Bujnowski teachings by automating the notetaking process for the meeting and ensuring that participants that did not attend have access to the meeting summary.

Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Nelson (US 20180101281 A1) in view of Bujnowski (US 20190394057 A) and Makker (US 20210390953 A1).

Regarding claim 8: Claim 8 is substantially similar to claim 2, and is therefore rejected for similar reasons. Claim 8 contains the following notable differences: Claim 8 is based on claim 7, which claims a system instead of a method. In the rejection of claim 6, the independent claim that claim 8 depends on, it was shown that Nelson teaches a system.

Regarding claim 9: Claim 9 is substantially similar to claim 3, and is therefore rejected for similar reasons. Claim 9 contains the following notable differences: Claim 9 is based on claim 8, which claims a system instead of a method. In the rejection of claim 6, the independent claim that claim 9 depends on, it was shown that Nelson teaches a system.

Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Nelson (US 20180101281 A1) in view of Bujnowski (US 20190394057 A), Makker (US 20210390953 A1) and Kim (KR 20130142458 A).

Regarding claim 10: Claim 10 is substantially similar to claim 4, and is therefore rejected for similar reasons. Claim 10 contains the following notable differences: Claim 10 is based on claim 9, which claims a system instead of a method. In the rejection of claim 6, the independent claim that claim 10 depends on, it was shown that Nelson teaches a system.

Regarding claim 11: Claim 11 is substantially similar to claim 5, and is therefore rejected for similar reasons. Claim 11 contains the following notable differences: Claim 11 is based on claim 10, which claims a system instead of a method. In the rejection of claim 6, the independent claim that claim 11 depends on, it was shown that Nelson teaches a system.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT ALEXANDER PROVIDENCE whose telephone number is (571)270-5765. The examiner can normally be reached Monday-Thursday 8:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached on (571)270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VINCENT ALEXANDER PROVIDENCE/
Examiner, Art Unit 4138

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617
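To make the disputed limitation concrete, here is a hypothetical sketch of the behavior claim 1 recites: an AI module compares real-time meeting progress against the agenda and pushes visual prompts to attendee devices. Every name, the warn-at-80% policy, and the print-based prompt delivery are invented for illustration; nothing below comes from the application, Nelson, or the other cited references.

```python
import time
from dataclasses import dataclass

@dataclass
class AgendaItem:
    title: str
    allotted_seconds: int

def send_prompt(device: str, message: str) -> None:
    # Stand-in for the rendering step: in the claimed system a real-time
    # rendering engine and virtual camera driver would overlay this
    # prompt in the attendee's video teleconferencing camera feed.
    print(f"[{device}] {message}")

def monitor_meeting(agenda: list[AgendaItem], attendee_devices: list[str],
                    warn_ratio: float = 0.8) -> None:
    """Compare elapsed time per agenda item to its allotment and prompt."""
    for item in agenda:
        start = time.monotonic()
        warned = False
        while True:
            elapsed = time.monotonic() - start
            # Agenda-aware check: progress vs. this item's time budget.
            if not warned and elapsed >= warn_ratio * item.allotted_seconds:
                for device in attendee_devices:
                    send_prompt(device, f"'{item.title}' is nearing its time "
                                        f"limit ({int(elapsed)}s elapsed)")
                warned = True
            if elapsed >= item.allotted_seconds:
                for device in attendee_devices:
                    send_prompt(device, f"Time for '{item.title}' has expired")
                break
            time.sleep(1)
```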

Prosecution Timeline

Jan 25, 2023: Application Filed
Nov 19, 2024: Non-Final Rejection — §103
May 26, 2025: Response Filed
Jun 23, 2025: Final Rejection — §103
Jan 02, 2026: Request for Continued Examination
Jan 21, 2026: Response after Non-Final Action
Feb 19, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586303: GEOMETRY-AWARE THREE-DIMENSIONAL SYNTHESIS IN ALL ANGLES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12530847: IMAGE GENERATION FROM TEXT AND 3D OBJECT (granted Jan 20, 2026; 2y 5m to grant)
Patent 12530808: Predictive Encoding/Decoding Method and Apparatus for Azimuth Information of Point Cloud (granted Jan 20, 2026; 2y 5m to grant)
Patent 12524946: METHOD FOR GENERATING FIREWORK VISUAL EFFECT, ELECTRONIC DEVICE, AND STORAGE MEDIUM (granted Jan 13, 2026; 2y 5m to grant)
Patent 12380621: COMPUTER-IMPLEMENTED SYSTEMS AND METHODS FOR GENERATING ENHANCED MOTION DATA AND RENDERING OBJECTS (granted Aug 05, 2025; 2y 5m to grant)
Based on the examiner's 5 most recent grants; study what changed in each case to get past this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+25.0%)
Median Time to Grant: 2y 5m
PTA Risk: High

Based on 18 resolved cases by this examiner. Grant probability is derived from the career allow rate.
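One plausible reading of how these projections combine, assuming the interview lift is an additive percentage-point adjustment capped at 99%. This formula is an inference from the displayed values, not documented behavior:

```python
# Baseline comes from the career allow rate (15 granted / 18 resolved).
baseline = 15 / 18            # ~0.83 -> displayed as 83%
interview_lift = 0.25         # the "+25.0%" interview lift shown above

# 0.83 + 0.25 exceeds 1.0, so the displayed 99% implies a cap (assumed).
with_interview = min(baseline + interview_lift, 0.99)
print(f"Grant probability with interview: {with_interview:.0%}")  # 99%
```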
