Prosecution Insights
Last updated: April 19, 2026
Application No. 18/792,801

INTELLIGENT METHOD AND APPARATUS TO AUGMENT MODERATOR IN LIVE SESSION LEVERAGING GENERATIVE ARTIFICIAL INTELLIGENCE

Status: Final Rejection (§101, §103)
Filed: Aug 02, 2024
Examiner: PUJOLS-CRUZ, MARJORIE
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: BANK OF AMERICA CORPORATION
OA Round: 2 (Final)

Grant Probability: 18% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 2m
Grant Probability With Interview: 46%

Examiner Intelligence

Career Allow Rate: 18% (25 granted of 136 resolved cases; -33.6% vs TC avg)
Interview Lift: +27.9% higher allowance among resolved cases with an interview (a strong lift)
Typical Timeline: 3y 2m average prosecution; 50 applications currently pending
Career History: 186 total applications across all art units
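These figures are internally consistent: 25 grants out of 136 resolved cases gives the 18% career allow rate, and adding the +27.9-point interview lift to that base matches, at least numerically, the 46% "With Interview" figure in the summary above. A minimal Python sketch of that arithmetic (the variable names are ours, and reading "lift" as an additive percentage-point difference is an assumption, not the vendor's stated methodology):

```python
# Hedged sketch: reproduce the dashboard's examiner statistics from raw counts.
# Assumption: "interview lift" is an additive percentage-point difference in
# allowance rate between resolved cases with and without an examiner interview.

granted = 25
resolved = 136

career_allow_rate = granted / resolved        # ~0.184, shown as 18%
interview_lift = 0.279                        # +27.9 points, from the dashboard

with_interview_rate = career_allow_rate + interview_lift  # ~0.463, shown as 46%

print(f"Career allow rate:    {career_allow_rate:.1%}")
print(f"With interview (est): {with_interview_rate:.1%}")
```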

Statute-Specific Performance

§101: 38.7% (-1.3% vs TC avg)
§103: 43.3% (+3.3% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)

Tech Center average is an estimate. Based on career data from 136 resolved cases.
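Each "vs TC avg" delta is simply the examiner's rate minus the Tech Center baseline, so the baseline can be recovered from the table; every row implies the same 40.0% average, consistent with a single shared baseline estimate. A short sketch (names are ours):

```python
# Hedged sketch: recover the implied Tech Center baseline from each row of the
# statute table above (examiner rate minus the reported delta vs TC avg).

stats = {  # statute: (examiner rate in %, delta vs TC avg in points)
    "§101": (38.7, -1.3),
    "§103": (43.3, +3.3),
    "§102": (9.4, -30.6),
    "§112": (6.6, -33.4),
}

for statute, (rate, delta) in stats.items():
    implied_baseline = rate - delta
    print(f"{statute}: examiner {rate:.1f}%, implied TC avg {implied_baseline:.1f}%")

# Every row implies a 40.0% Tech Center average, i.e. the four deltas were
# computed against one shared baseline estimate.
```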

Office Action

Rejections: §101, §103
DETAILED ACTION

This communication is a Final Office Action rejection on the merits. Claims 1-20 are currently pending and have been addressed below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed on 01/09/2026 (related to the 103 Rejection) have been fully considered but they are not persuasive.

Applicant states, on pages 14-15, that Nelson's approach is limited to semantic meeting content and organizational context, not signal-level audio analysis or spectrogram-based evaluation, as claimed in the amended claims; thus, Nelson cannot, and does not, show or suggest the claimed architecture. Examiner respectfully disagrees with Applicant. Examiner notes that although Applicant states “as claimed in the amended claims,” independent claims 1, 9, and 17 submitted on 01/09/26 are exactly the same as independent claims 1, 9, and 17 submitted on 02/08/24. Nelson discloses speech and text recognition to ensure that all agenda items and action items are addressed during an electronic meeting (see Paragraphs 0154-0156). As stated in Applicant’s specification, the speech spectrogram is used to record the conversation and generate a speech evaluation record (see Paragraph 0029). Based on the broadest reasonable interpretation in light of the specification, Examiner interprets “speech recognition used to track agenda items during an electronic meeting” as the “spectrogram-based evaluation,” since the “speech recognition” disclosed by Nelson can perform the same functions specified in the claim, such as evaluating a speech record (see MPEP 2183). Therefore, the speech recognition disclosed by Nelson is equivalent to the speech spectrogram. Examiner recommends further reciting how the speech spectrogram evaluates the speech, if supported by the specification.

Applicant further states, on pages 15-18, that Nelson lacks any teaching of dual-engine processing (one engine for text context extraction and another for voice analysis) from a unified evaluation record, and that Nelson's disclosure of agenda optimization does not encompass the claimed multi-modal extraction pipeline. Examiner respectfully disagrees with Applicant. Nelson discloses speech and text recognition to ensure that all agenda items and action items are addressed during an electronic meeting (see Paragraphs 0154-0156). Nelson further discloses a voice analysis from the speech evaluation record to detect a voice (Paragraph 0189, voice recognition). Examiner notes that the speech and text recognition of Nelson can evaluate both speech-voice and text. Therefore, based on the broadest reasonable interpretation in light of the specification, Nelson discloses a dual-engine since it can utilize multiple tools/engines to analyze the meeting.

Applicant's arguments filed on 01/09/2026 (related to the 101 Rejection) have been fully considered but they are not persuasive.

Applicant states, on pages 11-14, that in the case of the claimed invention, the claims do not merely recite AI and the use thereof; rather, the claims define the claimed processing the conversation to include passing the conversation through a speech spectrogram to generate a speech evaluation record and to extract a text context from the speech evaluation record via a speech-text context extraction engine and a voice analysis from the speech evaluation record via a speech-voice analysis engine.
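(For concreteness, the pipeline Applicant describes can be sketched in a few lines: audio passes through a spectrogram stage that produces a shared speech evaluation record, and two separate engines then consume that record, one extracting text context and one extracting voice analysis. The sketch below is purely illustrative; the library choice (librosa) and every function name are our assumptions, not anything disclosed in the application or the cited art.)

```python
# Hedged sketch of the pipeline the claims recite: conversation audio is passed
# through a spectrogram to form a "speech evaluation record," and two separate
# engines then extract (1) text context and (2) voice analysis from that record.
# All names and library choices here are illustrative assumptions only.

import numpy as np
import librosa

def build_speech_evaluation_record(audio: np.ndarray, sr: int) -> np.ndarray:
    """Spectrogram stage: one shared record that both engines consume."""
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
    return librosa.power_to_db(mel, ref=np.max)

def speech_text_context_engine(record: np.ndarray) -> dict:
    """Stand-in for the speech-text context extraction engine. A real system
    would run ASR and keyword mapping here (cf. Nelson's speech/text
    recognition); this stub only reports coarse speech activity."""
    active_frames = int((record.max(axis=0) > -40.0).sum())
    return {"active_frames": active_frames}

def speech_voice_analysis_engine(record: np.ndarray) -> dict:
    """Stand-in for the speech-voice analysis engine (tone/energy cues)."""
    energy = record.mean(axis=0)  # per-frame mean energy in dB
    return {"mean_energy_db": float(energy.mean()),
            "energy_variability": float(energy.std())}

# Demo on a synthetic 2-second tone, so no audio file is needed.
sr = 16_000
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
audio = 0.1 * np.sin(2 * np.pi * 220 * t)

record = build_speech_evaluation_record(audio, sr)
print(speech_text_context_engine(record))
print(speech_voice_analysis_engine(record))
```

A production system would run a full speech-recognition model in the text-context branch; the point of the sketch is only the shared-record, dual-engine data flow at issue in the arguments.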
More particularly, the claims recite processing a conversation using a speech spectrogram to generate a speech evaluation record and to extract a text context from the speech evaluation record. Applicant respectfully submits that the aforementioned claim elements are performed using hardware (especially since voice processing can only use hardware as an input prior to the processing). See, e.g., paras. 107, 114 and 119 of the specification as originally filed. In view of the foregoing, the claims recite processing steps that can only be implemented using voice processing hardware. For at least the foregoing reasons, Applicant respectfully requests that the rejection of claims 1-20 under 35 U.S.C. § 101 be withdrawn.

Examiner respectfully disagrees with Applicant.

Step 2A, Prong One: These claim elements are considered to be abstract ideas because they are directed to “certain methods of organizing human activity,” which include “managing interactions between people.” In this case, managing resources and time for each selected topic to present is a social activity (e.g., recommending resources and an amount of time for each selected topic to present). If a claim limitation, under its broadest reasonable interpretation, covers managing interactions between people, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong Two: Claim 1 includes additional elements: a generative artificial intelligence (“AI”) engine; a plurality of user devices; a speech spectrogram; a speech-text context extraction engine; and a speech-voice analysis engine. The generative AI engine is merely used to generate an agenda for a live session based on the received information (Paragraph 0018). The user device is merely used to receive the generated talking points (Paragraph 0027). The speech spectrogram is merely used to dynamically generate a speech evaluation record (Paragraph 0029). The speech-text context extraction engine is merely used to extract a text context from the speech evaluation record (Paragraph 0031). The speech-voice analysis engine is merely used to extract a voice analysis from the speech evaluation record (Paragraph 0030). These elements of “generative AI engine,” “user device,” “speech spectrogram,” “speech-text context extraction engine,” and “speech-voice analysis engine” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer element (MPEP 2106.05(f)). In this case, the engines are considered “field of use” since they are just used to receive and provide information for an analysis, but the technology is not improved (MPEP 2106.05(h)). Also, the generative AI engine includes inputs (e.g., a plurality of topics for discussion during the live session, an aggregate amount of time for conducting the live session, and one or more users to present the plurality of topics) and outputs (e.g., generate an agenda for the live session and generate talking points for each topic of the selected topics).
Although the generative AI receives feedback over time to improve recommendations based on deviations (e.g., corrective action in response to a deviation of the amount of time for each selected topic to present), the claim and specification do not include any specific details about how the generative AI operates (e.g., how the talking points are generated), which is merely claiming the idea of a solution or outcome (see MPEP 2106.05(a) and Example 47, claim 2, of the July 2024 AI Subject Matter Eligibility Guidance).

Step 2B: As discussed above with respect to integration of the abstract idea into a practical application, the claim describes how to generally “apply” the concept of evaluating a deviation from the agenda to deploy a corrective action (e.g., at least one or more of additional talking points, adjustment of the one or more topics to present, adjustment of the respective amount of time for each selected topic to present, adjustment of the respective amount of time for one or more users to present the respective topic and adjustment of the one or more users to present each topic). Also, the steps of “monitoring a live session” and “deploying a corrective action” are considered a well-understood, routine, and conventional function of “receiving or transmitting data over a network” and “performing repetitive calculations” (MPEP 2106.05(d)). Further, a speech text, a speech spectrogram, and a speech-voice analysis are considered well-known speech evaluation engines in the art of speech analysis. Lastly, the claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claim amounts to significantly more than the abstract idea itself. Thus, the claim is ineligible.

Independent claims 9 and 17 recite similar features and therefore are rejected for the same reasons as independent claim 1. Claims 2-8, 10-16, and 18-20 are rejected for having the same deficiencies as those set forth with respect to the claims from which they depend, independent claims 1, 9, and 17.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without reciting significantly more.

Independent Claim 1

Step One - First, pursuant to step 1 in the January 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”), 84 Fed. Reg. 53, claim 1 is directed to a method, which is a statutory category.
Step 2A, Prong One - Claim 1 recites:

A method for dynamically augmenting a live session leveraging a generative artificial intelligence (“AI”), the method comprising:
receiving information relating to: a plurality of topics for discussion during the live session; an aggregate amount of time for conducting the live session; and one or more users to present the plurality of topics;
generating an agenda for the live session, based on the received information, the generating comprising: selecting from the plurality of topics one or more topics to present; assigning respective users from the one or more users to present the one or more topics to present; assigning a respective amount of time for each selected topic to present; and assigning a respective amount of time for each of the one or more users to present the respective topic to present;
selecting a sensitivity level for the live session;
selecting one or more of a plurality of sources to link to the agenda;
executing the live session by the generative AI, the executing comprising: generating talking points for each topic of the selected topics; deploying each talking point to a plurality of users, respective talking points being deployed to the user associated with the one or more users presenting the corresponding selected one or more topics; upon deployment of the respective talking points, prompting the respective one or more users to accept, decline or revise the talking points; and
monitoring the live session, the monitoring comprising: actively processing conversation of participants of the live session; actively keeping track of the respective amount of time utilized during the live session for each of the selected one or more topics; and actively keeping track of the amount of time each of the one or more users is presenting; and
in response to a deviation from the agenda, deploying a corrective action to each device, the deviation including one or more of a deviation from: the respective amount of time for each selected topic to present; and the respective amount of time for each of the one or more users to present the respective topic to present;
wherein: the sources include websites, private servers and databases; the corrective action includes one or more of additional talking points, adjustment of the one or more topics to present, adjustment of the respective amount of time for each selected topic to present, adjustment of the respective amount of time for one or more users to present the respective topic and adjustment of the one or more users to present each topic; processing the conversation to generate a speech evaluation record; the generative AI extracts a text context from the speech evaluation record; and the generative AI extracts a voice analysis from the speech evaluation record.

These claim elements are considered to be abstract ideas because they are directed to “certain methods of organizing human activity,” which include “managing interactions between people.” In this case, managing time and providing recommendations of a modified agenda to a user is a social activity. If a claim limitation, under its broadest reasonable interpretation, covers managing interactions between people, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong Two - The judicial exception is not integrated into a practical application.
Claim 1 includes additional elements: a generative artificial intelligence (“AI”) engine; a plurality of user devices; a speech spectrogram; a speech-text context extraction engine; and a speech-voice analysis engine. The generative AI engine is merely used to generate an agenda for a live session based on the received information (Paragraph 0018). The user device is merely used to receive the generated talking points (Paragraph 0027). The speech spectrogram is merely used to dynamically generate a speech evaluation record (Paragraph 0029). The speech-text context extraction engine is merely used to extract a text context from the speech evaluation record (Paragraph 0031). The speech-voice analysis engine is merely used to extract a voice analysis from the speech evaluation record (Paragraph 0030). Merely stating that the step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)). These elements of “generative AI engine,” “user device,” “speech spectrogram,” “speech-text context extraction engine,” and “speech-voice analysis engine” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer element. The engines are considered “field of use” since they are just used to receive and provide information for an analysis, but the technology is not improved (MPEP 2106.05(h)). Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally “apply” the concept of managing a live session based on time constraints and conversation of participants. The specification shows that the generative AI engine is merely used to generate an agenda for a live session based on the received information (Paragraph 0018). The user device is merely used to receive the generated talking points (Paragraph 0027). The speech spectrogram is merely used to dynamically generate a speech evaluation record (Paragraph 0029). The speech-text context extraction engine is merely used to extract a text context from the speech evaluation record (Paragraph 0031). The speech-voice analysis engine is merely used to extract a voice analysis from the speech evaluation record (Paragraph 0030). Merely stating that the step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)). In this case, the claim does not provide any specific details about how the generative AI engine operates (e.g., how the talking points are generated). See 2024 AI Guidance, Example 47, claim 2. Further, the step of “adjusting talking points” or “adjusting amount of time” is considered a well-understood, routine, and conventional function since it's just “performing repetitive calculations” and “receiving or transmitting data over a network” (MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Independent claim 9 is directed to a system at step 1, which is a statutory category.
Claim 9 recites similar limitations as claim 1 and is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Claim 9 further recites: a processor; a memory; and a computer readable medium, which are treated as just an explicit “processor/computer” for executing the operations and are treated under MPEP 2106.05(f) in the same manner as claim 1. Accordingly, these elements are viewed as “apply it on a computer” at Step 2A, Prong Two and Step 2B. Thus, the claim is ineligible.

Independent claim 17 is directed to a method at step 1, which is a statutory category. Claim 17 recites similar limitations as claim 1 and claim 13 and is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Thus, the claim is ineligible.

Dependent claims 2-4, 8, 10-12, and 16 are not directed to any additional claim elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above, such as wherein the generative AI: generates an answer to the question; deploys the answer to the user; generates the answer from data from the plurality of sources; deploys the corrective action based on the text context and the voice analysis; generates the additional talking points based on the text context and the voice analysis; and dynamically generates the additional talking points using information from the selected sources, wherein the deployed talking points are adjusted based on the selected sensitivity level. Merely stating that the step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)), applicable at both Step 2A, Prong Two and Step 2B. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. In this case, the claims do not provide any specific details about how the generative AI engine operates (e.g., how the answer and the additional talking points are generated). See 2024 AI Guidance, Example 47, claim 2. Further, the step of “adjusting talking points” is considered a well-understood, routine, and conventional function since it's just “performing repetitive calculations” and “receiving or transmitting data over a network” (MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Dependent claims 5-7, 13-14, and 18-20 are not directed to any additional claim elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above, such as wherein the plurality of user devices is used to: alert the moderator when the deviation is detected, before deploying the corrective action; receive instructions in real-time from the moderator, whether to deploy the corrective action upon receipt of the alert; prompt the user to accept or decline the corrective action; and prompt the moderator to accept or decline blocking the user from further participation in the live session. At Step 2A, Prong Two, this is still considered “field of use” since it's just used to receive an acceptance or rejection of a corrective action, but the device is not improved (MPEP 2106.05(h)). At Step 2B, this is considered a conventional computer function of “receiving and transmitting over a network” (MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nelson et al. (US 2018/0101281 A1), in view of Dotan-Cohen et al. (US 2024/0223726 A1).

Regarding claim 1 (Currently Amended), Nelson et al. discloses a method for dynamically augmenting a live session leveraging a generative artificial intelligence (“AI”) engine, the method comprising (Paragraph 0007, The approach may also be implemented by one or more computer-implemented methods; Paragraph 0070, Artificial intelligence is introduced into an electronic meeting context to perform various tasks before, during, and/or after electronic meetings. The tasks may include a wide variety of tasks, such as agenda creation, participant selection, real-time meeting management, meeting content supplementation, and post-meeting processing): receiving information relating to: a plurality of topics for discussion during the live session (Paragraph 0116, Suggested agenda items 256 are topics for discussion that are determined to be relevant and appropriate for a particular new electronic meeting. The topics may be topics that have been scheduled for discussion, or actually discussed, in other electronic meetings, or they may be new topics); an aggregate amount of time for conducting the live session (Paragraph 0086, Meeting rules may be specified by an organization, e.g., via bylaws, or by entities external to organizations, such as governmental, judicial or law enforcement entities. One example is a time constraint (minimum or maximum) for discussion of a particular agenda item); and one or more users to present the plurality of topics (Paragraph 0102, Meeting intelligence apparatus 102 may determine, based upon an analysis of prior meetings for the Pluto project, such as a first code review meeting, that Bob. H is a good candidate to be the meeting owner of the second code review meeting, and the meeting owner field may be automatically populated with Bob H); generating an agenda for the live session, based on the received information, the generating comprising: selecting from the plurality of topics one or more topics to present (Paragraph 0112, Electronic meeting agendas may be created manually by users and may be created with the assistance of artificial intelligence provided by meeting intelligence apparatus 102.
According to one embodiment, meeting intelligence apparatus 102 participates in the creation of electronic meeting agendas by providing suggested items to be included on an electronic meeting agenda. The electronic meeting application may request that meeting intelligence apparatus 102 provide suggested agenda items for an electronic meeting; Paragraph 0115, Agenda information 252 also includes suggested agenda items 256 that are generated with the assistance of meeting intelligence apparatus 102; Paragraph 0116, Suggested agenda items 256 are topics for discussion that are determined to be relevant and appropriate for a particular new electronic meeting); assigning respective users from the one or more users to present the one or more topics to present (Paragraph 0102, Meeting intelligence apparatus 102 may determine, based upon an analysis of prior meetings for the Pluto project, such as a first code review meeting, that Bob. H is a good candidate to be the meeting owner of the second code review meeting, and the meeting owner field may be automatically populated with Bob H); assigning a respective amount of time for each selected topic to present; and assigning a respective amount of time for each of the one or more users to present the respective topic to present (Paragraph 0086, Meeting rules may be specified by an organization, e.g., via bylaws, or by entities external to organizations, such as governmental, judicial or law enforcement entities. One example is a time constraint (minimum or maximum) for discussion of a particular agenda item); … a sensitivity level for the live session (Paragraph 0177, FIG. 4B is a block diagram that depicts an arrangement for performing sentiment analysis with respect to an ongoing discussion 402. Referring to FIG. 4B, meeting intelligence apparatus 102 includes sentiment analysis logic 404 that performs sentiment analysis on first meeting content data 302 related to ongoing discussion 402. For example, meeting intelligence apparatus 102 may detect an angry tone or sentiment that is a cue 304 for meeting intelligence apparatus 102 to generate intervention data 310 indicating that another electronic meeting has been automatically scheduled for continuing ongoing discussion 402); selecting one or more of a plurality of sources to link to the agenda (Paragraph 0079, In an embodiment, meeting intelligence apparatus 102 is communicatively coupled to any of a number of external data sources (not shown), such as websites, other data available via the World Wide Web, databases managed by Salesforce, Oracle, SAP, Workday, or any entity other than the entity managing meeting intelligence apparatus 102. Meeting intelligence apparatus 102 may be communicatively coupled to the external data sources via network infrastructure 106. The external data sources may provide meeting intelligence apparatus 102 with access to any of a variety of data, meeting-related or otherwise; Paragraph 0122, According to one embodiment, meeting intelligence apparatus 102 is configured to analyze a plurality of data items to identify typical agenda items for the meeting type of the new electronic meeting. In the present example, this includes determining typical agenda items for code review meetings. This may include determining the typical agenda items for code review meetings within the same organization, or searching beyond the current organization to other organizations. 
The search may be conducted within the same context, industry, etc., or may extend to other contexts, industries, etc. According to one embodiment, meeting intelligence apparatus 102 identifies electronic documents related to one or more topics or subjects of the new electronic meeting and then analyzes the identified electronic documents to determine one or more suggested agenda items for the Other category. In the present example, meeting intelligence apparatus 102 determines that the “Software Testing Schedule” agenda item is typical for code review meetings and is therefore included as a suggested agenda item. Other criteria besides meeting type may be used to identify suggested agenda items. For example, the meeting subject may be used as a criterion to identify suggested agenda items. In the present example, meeting intelligence apparatus may search the plurality of data items to identify data items related to the Pluto Project, and determine suggested agenda items based upon the data items related to the Pluto Project; Examiner interprets “search for data items related to a specific project” as the “plurality of sources to link to the agenda”); executing the live session by the generative AI engine, the executing comprising: generating talking points for each topic of the selected topics (Paragraph 0112, Electronic meeting agendas may be created manually by users and may be created with the assistance of artificial intelligence provided by meeting intelligence apparatus 102. According to one embodiment, meeting intelligence apparatus 102 participates in the creation of electronic meeting agendas by providing suggested items to be included on an electronic meeting agenda. The electronic meeting application may request that meeting intelligence apparatus 102 provide suggested agenda items for an electronic meeting; Paragraph 0115, Agenda information 252 also includes suggested agenda items 256 that are generated with the assistance of meeting intelligence apparatus 102; Paragraph 0116, Suggested agenda items 256 are topics for discussion that are determined to be relevant and appropriate for a particular new electronic meeting); deploying each talking point to a plurality of user devices, each user device associated with at least one of the one or more users, respective talking points being deployed to the user device associated with the one or more users presenting the corresponding selected one or more topics (Paragraph 0083, Each node of the one or more nodes 104A-N is associated with one or more participants 108A-N. Each participant is a person who participates in an electronic meeting. Each node processes data transmission between network infrastructure 106 and at least one participant. Multiple nodes 104A-N may be communicatively coupled with each other using any of a number of different configurations; Paragraph 0084, In an embodiment, a node includes a computing device that executes an electronic meeting application 112; Paragraph 0102, Meeting intelligence apparatus 102 may determine, based upon an analysis of prior meetings for the Pluto project, such as a first code review meeting, that Bob. 
H is a good candidate to be the meeting owner of the second code review meeting, and the meeting owner field may be automatically populated with Bob H); upon deployment of the respective talking points, prompting the respective one or more users to accept, decline or revise the talking points (Paragraph 0103, Missing information may be presented in a manner to visually indicate that the information was automatically provided, for example, via highlighting, coloring, special effects, etc., and a user may be given an opportunity to accept, reject, or edit the missing information that was automatically provided; Paragraph 0117, Suggested agenda items 256 may be organized and presented to a user in any manner that may vary depending upon a particular implementation. For a large number of suggested agenda items 256, visually organizing the suggested agenda items on a user interface may provide a more favorable user experience than merely listing all available suggested agenda items 256; Paragraph 0118, FIG. 2I depicts suggested agenda items for each category of suggested agenda items depicted in FIG. 2H. Organizing suggested agenda items by category may be more useful to some users than listing suggesting agenda items in random order, although embodiments are not limited to organizing suggested agenda items 256 by category, and other approaches may be used such as alphabetical order, etc.); and monitoring the live session, the monitoring comprising: actively processing conversation of participants of the live session (Paragraph 0155, FIG. 4A is a block diagram that depicts an arrangement in which meeting intelligence apparatus 102 includes speech or text recognition logic 400 that processes first meeting content data 302 to determine one or more corresponding agenda topics. In the example depicted in FIG. 4A, first meeting content data 302 includes the speech or text statement “Gross sales are expected to be $10.8 million next quarter.” A participant associated with node 104A may have caused first meeting content data 302 to be generated by speaking, writing, typing, or displaying the statement. Speech or text recognition logic 400 may process first meeting content data 302 by parsing to detect keywords that are mapped to a meeting agenda. In the present example, speech or text recognition logic 400 detects the keywords “next quarter.” These keywords are a cue 304 for meeting intelligence apparatus 102 to generate intervention data 310 that indicates a corresponding agenda topic. The intervention data 310 may be used by the electronic meeting application to determine a correspondence between a current point in an electronic meeting and a meeting agenda. This correspondence is used to provide agenda management functionality, including tracking the current agenda topic); actively keeping track of the respective amount of time utilized during the live session for each of the selected one or more topics; and actively keeping track of the amount of time each of the one or more users is presenting (Paragraph 0156, A determined correspondence between a current point in an electronic meeting and a meeting agenda may be used to monitor the progress of an electronic meeting and enforce time constraints with respect to individual agenda items, groups of agenda items, and/or an entire electronic meeting. 
This may include tracking the amount of time spent on agenda items and providing one or more indications to meeting participants; Examiner notes that each agenda item (e.g., topic) is associated with one or more users); and in response to a deviation from the agenda, deploying a corrective action to each device, the deviation including one or more of a deviation from: the respective amount of time for each selected topic to present; and the respective amount of time for each of the one or more users to present the respective topic to present (Paragraph 0154, According to one embodiment, artificial intelligence is used to provide agenda management functionality during electronic meetings. Agenda management functionality may include a wide variety of functionality that may vary depending upon a particular implementation. Example functionality includes, without limitation, enforcing time constraints for agenda items, changing designated amounts of time for agenda items, changing, deleting and adding agenda items, including providing missing or supplemental information for agenda items, and agenda navigation); wherein: the sources include websites, private servers and databases (Paragraph 0079, In an embodiment, meeting intelligence apparatus 102 is communicatively coupled to any of a number of external data sources (not shown), such as websites, other data available via the World Wide Web, databases managed by Salesforce, Oracle, SAP, Workday, or any entity other than the entity managing meeting intelligence apparatus 102. Meeting intelligence apparatus 102 may be communicatively coupled to the external data sources via network infrastructure 106. The external data sources may provide meeting intelligence apparatus 102 with access to any of a variety of data, meeting-related or otherwise); the corrective action includes one or more of additional talking points, adjustment of the one or more topics to present, adjustment of the respective amount of time for each selected topic to present, adjustment of the respective amount of time for one or more users to present the respective topic and adjustment of the one or more users to present each topic (Paragraph 0154, According to one embodiment, artificial intelligence is used to provide agenda management functionality during electronic meetings. Agenda management functionality may include a wide variety of functionality that may vary depending upon a particular implementation. Example functionality includes, without limitation, enforcing time constraints for agenda items, changing designated amounts of time for agenda items, changing, deleting and adding agenda items, including providing missing or supplemental information for agenda items, and agenda navigation; It can be noted that the claim language is written in alternative form. The limitation taught by Nelson et al. 
is based on “adjustment of the respective amount of time for each selected topic to present" and “one or more additional talking points”); processing the conversation includes passing the conversation through a speech spectrogram to generate a speech evaluation record (Paragraph 0156, Speech and text recognition may also be used to ensure that all agenda items and action items are addressed during an electronic meeting, which may include discussion, deferral, etc.); the generative AI engine extracts a text context from the speech evaluation record via a speech-text context extraction engine (Paragraph 0155, Speech or text recognition logic 400 may process first meeting content data 302 by parsing to detect keywords that are mapped to a meeting agenda. In the present example, speech or text recognition logic 400 detects the keywords “next quarter.” These keywords are a cue 304 for meeting intelligence apparatus 102 to generate intervention data 310 that indicates a corresponding agenda topic. The intervention data 310 may be used by the electronic meeting application to determine a correspondence between a current point in an electronic meeting and a meeting agenda. This correspondence is used to provide agenda management functionality, including tracking the current agenda topic); and the generative AI engine extracts a voice analysis from the speech evaluation record via a speech-voice analysis engine (Paragraph 0109, Sentiment analysis may use various cues that occur in speech during an electronic meeting, such as tone of voice, volume of voice, velocity of speech, lack of pauses in speech, profanity, sounds such as grunts, exhalation of air, etc.; Paragraph 0177, FIG. 4B is a block diagram that depicts an arrangement for performing sentiment analysis with respect to an ongoing discussion 402; Paragraph 0189, FIG. 4D is a block diagram that depicts an arrangement for supplementing meeting content with participant identification data. Referring to FIG. 4D, meeting intelligence apparatus 102 includes voice or face recognition logic 412, which performs voice or face recognition on first meeting content data 302 to detect a voice or a face). Although Nelson et al. discloses detecting a sensitivity level for the live session by performing a sentiment analysis with respect to an ongoing discussion (Paragraph 0177, angry tone), Nelson et al. does not specifically disclose how the method is selecting a sensitivity level for the live session. However, Dotan-Cohen et al. discloses selecting a sensitivity level for the live session (Paragraph 0003, Embodiments described in the present disclosure are directed toward technologies for improving the functionality of multimedia content generated or presented by computing applications accessible on user computing devices. In particular, this disclosure provides certain technologies to programmatically provide a modified meeting presentation that sanitizes an occurrence of unwanted content in the meeting. In one example, the modified meeting presentation is a version of the meeting presentation that has been altered, based on a sensitivity mitigation action being applied to at least partially remove the unwanted or otherwise sensitive content associated with the meeting. In one example, the sensitivity mitigation action is a modification applied to a meeting presentation, based on a comparison of aspects of the meeting. 
In this example, the comparison of aspects of the meeting indicates that the segment of the meeting associated with the aspects contains sensitive content. In another example, the sensitivity mitigation action is a modification applied to a segment of a meeting presentation, based on a comparison of aspects of the segment, based on aspects of different segments, and the like; Paragraph 0005, Embodiments described in the present disclosure include applying the sensitivity mitigation action to cause sensitive content to at least be partially removed either through altering, editing, obscuring, hiding, or removing visual or audio aspects of the meeting; Paragraph 0104, The sensitivity analyzer 280 may employ any suitable ranking or classification scheme to rank or classify the aspects or segments. For example, the ranking or classification scheme may include a three-tier system for classifying or ranking sensitive content. In this example, the sensitive content may include high sensitivity content, medium sensitivity content, and low sensitivity content). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for generating an agenda for the live session by selecting one or more topics to present based on a plurality of factors (e.g., time constraints, questions, action items, etc.) of the invention of Nelson et al. to further incorporate modifying the agenda based on a selected sensitivity level of the live session of the invention of Dotan-Cohen et al. because doing so would allow the method to modify a meeting presentation, based on a sensitivity mitigation action being applied to at least partially remove the unwanted or otherwise sensitive content associated with the meeting (see Dotan-Cohen et al., Paragraph 0003). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 9 (Currently Amended), Nelson et al. discloses a system for dynamically augmenting a live session leveraging a generative artificial intelligence (“AI”) engine, the system comprising (Paragraph 0007, The approach may also be implemented by one or more computer-implemented methods; Paragraph 0070, Artificial intelligence is introduced into an electronic meeting context to perform various tasks before, during, and/or after electronic meetings. The tasks may include a wide variety of tasks, such as agenda creation, participant selection, real-time meeting management, meeting content supplementation, and post-meeting processing): a processor; a memory; and a non-transitory computer readable medium storing instructions that when executed by the processor (Paragraph 0240, Computer system 1000 also includes a main memory 1006, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 1002 for storing information and instructions to be executed by processor 1004.
Such instructions, when stored in non-transitory storage media accessible to processor 1004, render computer system 1000 into a special-purpose machine that is customized to perform the operations specified in the instructions): receives information relating to: a plurality of topics for discussion during the live session (Paragraph 0116, Suggested agenda items 256 are topics for discussion that are determined to be relevant and appropriate for a particular new electronic meeting. The topics may be topics that have been scheduled for discussion, or actually discussed, in other electronic meetings, or they may be new topics); an aggregate amount of time for conducting the live session (Paragraph 0086, Meeting rules may be specified by an organization, e.g., via bylaws, or by entities external to organizations, such as governmental, judicial or law enforcement entities. One example is a time constraint (minimum or maximum) for discussion of a particular agenda item); and one or more users to present the plurality of topics (Paragraph 0102, Meeting intelligence apparatus 102 may determine, based upon an analysis of prior meetings for the Pluto project, such as a first code review meeting, that Bob. H is a good candidate to be the meeting owner of the second code review meeting, and the meeting owner field may be automatically populated with Bob H); generates an agenda for the live session, based on the received information, comprising: selecting from the plurality of topics one or more topics to present (Paragraph 0112, Electronic meeting agendas may be created manually by users and may be created with the assistance of artificial intelligence provided by meeting intelligence apparatus 102. According to one embodiment, meeting intelligence apparatus 102 participates in the creation of electronic meeting agendas by providing suggested items to be included on an electronic meeting agenda. The electronic meeting application may request that meeting intelligence apparatus 102 provide suggested agenda items for an electronic meeting; Paragraph 0115, Agenda information 252 also includes suggested agenda items 256 that are generated with the assistance of meeting intelligence apparatus 102; Paragraph 0116, Suggested agenda items 256 are topics for discussion that are determined to be relevant and appropriate for a particular new electronic meeting); assigning respective users from the one or more users to present the one or more topics to present (Paragraph 0102, Meeting intelligence apparatus 102 may determine, based upon an analysis of prior meetings for the Pluto project, such as a first code review meeting, that Bob. H is a good candidate to be the meeting owner of the second code review meeting, and the meeting owner field may be automatically populated with Bob H); assigning a respective amount of time for each selected topic to present; and assigning a respective amount of time for each of the one or more users to present the respective topic to present (Paragraph 0086, Meeting rules may be specified by an organization, e.g., via bylaws, or by entities external to organizations, such as governmental, judicial or law enforcement entities. One example is a time constraint (minimum or maximum) for discussion of a particular agenda item); … a sensitivity level for the live session (Paragraph 0177, FIG. 4B is a block diagram that depicts an arrangement for performing sentiment analysis with respect to an ongoing discussion 402. Referring to FIG. 
4B, meeting intelligence apparatus 102 includes sentiment analysis logic 404 that performs sentiment analysis on first meeting content data 302 related to ongoing discussion 402. For example, meeting intelligence apparatus 102 may detect an angry tone or sentiment that is a cue 304 for meeting intelligence apparatus 102 to generate intervention data 310 indicating that another electronic meeting has been automatically scheduled for continuing ongoing discussion 402); selects one or more of a plurality of sources to link to the agenda (Paragraph 0079, In an embodiment, meeting intelligence apparatus 102 is communicatively coupled to any of a number of external data sources (not shown), such as websites, other data available via the World Wide Web, databases managed by Salesforce, Oracle, SAP, Workday, or any entity other than the entity managing meeting intelligence apparatus 102. Meeting intelligence apparatus 102 may be communicatively coupled to the external data sources via network infrastructure 106. The external data sources may provide meeting intelligence apparatus 102 with access to any of a variety of data, meeting-related or otherwise; Paragraph 0122, According to one embodiment, meeting intelligence apparatus 102 is configured to analyze a plurality of data items to identify typical agenda items for the meeting type of the new electronic meeting. In the present example, this includes determining typical agenda items for code review meetings. This may include determining the typical agenda items for code review meetings within the same organization, or searching beyond the current organization to other organizations. The search may be conducted within the same context, industry, etc., or may extend to other contexts, industries, etc. According to one embodiment, meeting intelligence apparatus 102 identifies electronic documents related to one or more topics or subjects of the new electronic meeting and then analyzes the identified electronic documents to determine one or more suggested agenda items for the Other category. In the present example, meeting intelligence apparatus 102 determines that the “Software Testing Schedule” agenda item is typical for code review meetings and is therefore included as a suggested agenda item. Other criteria besides meeting type may be used to identify suggested agenda items. For example, the meeting subject may be used as a criterion to identify suggested agenda items. In the present example, meeting intelligence apparatus may search the plurality of data items to identify data items related to the Pluto Project, and determine suggested agenda items based upon the data items related to the Pluto Project; Examiner interprets “search for data items related to a specific project” as the “plurality of sources to link to the agenda”); executes the live session by the generative AI engine, comprising: generating talking points for each topic of the selected topics (Paragraph 0112, Electronic meeting agendas may be created manually by users and may be created with the assistance of artificial intelligence provided by meeting intelligence apparatus 102. According to one embodiment, meeting intelligence apparatus 102 participates in the creation of electronic meeting agendas by providing suggested items to be included on an electronic meeting agenda. 
The electronic meeting application may request that meeting intelligence apparatus 102 provide suggested agenda items for an electronic meeting; Paragraph 0115, Agenda information 252 also includes suggested agenda items 256 that are generated with the assistance of meeting intelligence apparatus 102; Paragraph 0116, Suggested agenda items 256 are topics for discussion that are determined to be relevant and appropriate for a particular new electronic meeting); deploying each talking point to a plurality of user devices, each user device associated with at least one of the one or more users, respective talking points being deployed to the user device associated with the one or more users presenting the corresponding selected one or more topics (Paragraph 0083, Each node of the one or more nodes 104A-N is associated with one or more participants 108A-N. Each participant is a person who participates in an electronic meeting. Each node processes data transmission between network infrastructure 106 and at least one participant. Multiple nodes 104A-N may be communicatively coupled with each other using any of a number of different configurations; Paragraph 0084, In an embodiment, a node includes a computing device that executes an electronic meeting application 112; Paragraph 0102, Meeting intelligence apparatus 102 may determine, based upon an analysis of prior meetings for the Pluto project, such as a first code review meeting, that Bob. H is a good candidate to be the meeting owner of the second code review meeting, and the meeting owner field may be automatically populated with Bob H); upon deployment of the respective talking points, prompting the respective one or more users to accept, decline or revise the talking points (Paragraph 0103, Missing information may be presented in a manner to visually indicate that the information was automatically provided, for example, via highlighting, coloring, special effects, etc., and a user may be given an opportunity to accept, reject, or edit the missing information that was automatically provided; Paragraph 0117, Suggested agenda items 256 may be organized and presented to a user in any manner that may vary depending upon a particular implementation. For a large number of suggested agenda items 256, visually organizing the suggested agenda items on a user interface may provide a more favorable user experience than merely listing all available suggested agenda items 256; Paragraph 0118, FIG. 2I depicts suggested agenda items for each category of suggested agenda items depicted in FIG. 2H. Organizing suggested agenda items by category may be more useful to some users than listing suggesting agenda items in random order, although embodiments are not limited to organizing suggested agenda items 256 by category, and other approaches may be used such as alphabetical order, etc.); and monitoring the live session, the monitoring comprising: actively processing conversation of participants of the live session (Paragraph 0155, FIG. 4A is a block diagram that depicts an arrangement in which meeting intelligence apparatus 102 includes speech or text recognition logic 400 that processes first meeting content data 302 to determine one or more corresponding agenda topics. In the example depicted in FIG. 
4A, first meeting content data 302 includes the speech or text statement “Gross sales are expected to be $10.8 million next quarter.” A participant associated with node 104A may have caused first meeting content data 302 to be generated by speaking, writing, typing, or displaying the statement. Speech or text recognition logic 400 may process first meeting content data 302 by parsing to detect keywords that are mapped to a meeting agenda. In the present example, speech or text recognition logic 400 detects the keywords “next quarter.” These keywords are a cue 304 for meeting intelligence apparatus 102 to generate intervention data 310 that indicates a corresponding agenda topic. The intervention data 310 may be used by the electronic meeting application to determine a correspondence between a current point in an electronic meeting and a meeting agenda. This correspondence is used to provide agenda management functionality, including tracking the current agenda topic); actively keeping track of the respective amount of time utilized during the live session for each of the selected one or more topics; and actively keeping track of the amount of time each of the one or more users is presenting (Paragraph 0156, A determined correspondence between a current point in an electronic meeting and a meeting agenda may be used to monitor the progress of an electronic meeting and enforce time constraints with respect to individual agenda items, groups of agenda items, and/or an entire electronic meeting. This may include tracking the amount of time spent on agenda items and providing one or more indications to meeting participants; Examiner notes that each agenda item (e.g., topic) is associated with one or more users); and in response to a deviation from the agenda, deploys a corrective action to each device, the deviation including one or more of a deviation from: the respective amount of time for each selected topic to present; and the respective amount of time for each of the one or more users to present the respective topic to present (Paragraph 0154, According to one embodiment, artificial intelligence is used to provide agenda management functionality during electronic meetings. Agenda management functionality may include a wide variety of functionality that may vary depending upon a particular implementation. Example functionality includes, without limitation, enforcing time constraints for agenda items, changing designated amounts of time for agenda items, changing, deleting and adding agenda items, including providing missing or supplemental information for agenda items, and agenda navigation); wherein: the sources include websites, private servers and databases (Paragraph 0079, In an embodiment, meeting intelligence apparatus 102 is communicatively coupled to any of a number of external data sources (not shown), such as websites, other data available via the World Wide Web, databases managed by Salesforce, Oracle, SAP, Workday, or any entity other than the entity managing meeting intelligence apparatus 102. Meeting intelligence apparatus 102 may be communicatively coupled to the external data sources via network infrastructure 106. 
The external data sources may provide meeting intelligence apparatus 102 with access to any of a variety of data, meeting-related or otherwise); the corrective action includes one or more of additional talking points, adjustment of the one or more topics to present, adjustment of the respective amount of time for each selected topic to present, adjustment of the respective amount of time for one or more users to present the respective topic and adjustment of the one or more users to present each topic (Paragraph 0154, According to one embodiment, artificial intelligence is used to provide agenda management functionality during electronic meetings. Agenda management functionality may include a wide variety of functionality that may vary depending upon a particular implementation. Example functionality includes, without limitation, enforcing time constraints for agenda items, changing designated amounts of time for agenda items, changing, deleting and adding agenda items, including providing missing or supplemental information for agenda items, and agenda navigation; It can be noted that the claim language is written in alternative form. The limitation taught by Nelson et al. is based on “adjustment of the respective amount of time for each selected topic to present" and “one or more additional talking points”); processing the conversation includes passing the conversation through a speech spectrogram to generate a speech evaluation record (Paragraph 0156, Speech and text recognition may also be used to ensure that all agenda items and action items are addressed during an electronic meeting, which may include discussion, deferral, etc.); the generative AI engine extracts a text context from the speech evaluation record via a speech-text context extraction engine (Paragraph 0155, Speech or text recognition logic 400 may process first meeting content data 302 by parsing to detect keywords that are mapped to a meeting agenda. In the present example, speech or text recognition logic 400 detects the keywords “next quarter.” These keywords are a cue 304 for meeting intelligence apparatus 102 to generate intervention data 310 that indicates a corresponding agenda topic. The intervention data 310 may be used by the electronic meeting application to determine a correspondence between a current point in an electronic meeting and a meeting agenda. This correspondence is used to provide agenda management functionality, including tracking the current agenda topic); and the generative AI engine extracts a voice analysis from the speech evaluation record via a speech-voice analysis engine (Paragraph 0109, Sentiment analysis may use various cues that occur in speech during an electronic meeting, such as tone of voice, volume of voice, velocity of speech, lack of pauses in speech, profanity, sounds such as grunts, exhalation of air, etc.; Paragraph 0177, FIG. 4B is a block diagram that depicts an arrangement for performing sentiment analysis with respect to an ongoing discussion 402; Paragraph 0189, FIG. 4D is a block diagram that depicts an arrangement for supplementing meeting content with participant identification data. Referring to FIG. 4D, meeting intelligence apparatus 102 includes voice or face recognition logic 412, which performs voice or face recognition on first meeting content data 302 to detect a voice or a face). Although Nelson et al. 
discloses a method that detects a sensitivity level for the live session by performing a sentiment analysis with respect to an ongoing discussion (Paragraph 0177, angry tone), Nelson et al. does not specifically disclose how the method selects a sensitivity level for the live session. However, Dotan-Cohen et al. discloses selecting a sensitivity level for the live session (Paragraph 0003, Embodiments described in the present disclosure are directed toward technologies for improving the functionality of multimedia content generated or presented by computing applications accessible on user computing devices. In particular, this disclosure provides certain technologies to programmatically provide a modified meeting presentation that sanitizes an occurrence of unwanted content in the meeting. In one example, the modified meeting presentation is a version of the meeting presentation that has been altered, based on a sensitivity mitigation action being applied to at least partially remove the unwanted or otherwise sensitive content associated with the meeting. In one example, the sensitivity mitigation action is a modification applied to a meeting presentation, based on a comparison of aspects of the meeting. In this example, the comparison of aspects of the meeting indicates that the segment of the meeting associated with the aspects contains sensitive content. In another example, the sensitivity mitigation action is a modification applied to a segment of a meeting presentation, based on a comparison of aspects of the segment, based on aspects of different segments, and the like; Paragraph 0005, Embodiments described in the present disclosure include applying the sensitivity mitigation action to cause sensitive content to at least be partially removed either through altering, editing, obscuring, hiding, or removing visual or audio aspects of the meeting; Paragraph 0104, The sensitivity analyzer 280 may employ any suitable ranking or classification scheme to rank or classify the aspects or segments. For example, the ranking or classification scheme may include a three-tier system for classifying or ranking sensitive content. In this example, the sensitive content may include high sensitivity content, medium sensitivity content, and low sensitivity content). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for generating an agenda for the live session by selecting one or more topics to present based on a plurality of factors (e.g., time constraints, questions, action items, etc.) of the invention of Nelson et al. to further incorporate modifying the agenda based on a selected sensitivity level of the live session of the invention of Dotan-Cohen et al. because doing so would allow the method to modify a meeting presentation, based on a sensitivity mitigation action being applied to at least partially remove the unwanted or otherwise sensitive content associated with the meeting (see Dotan-Cohen et al., Paragraph 0003). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claim 17 (Currently Amended), Nelson et al.
discloses a method for dynamically augmenting a live session leveraging a generative artificial intelligence (“AI”) engine, the method comprising (Paragraph 0007, The approach may also be implemented by one or more computer-implemented methods; Paragraph 0070, Artificial intelligence is introduced into an electronic meeting context to perform various tasks before, during, and/or after electronic meetings. The tasks may include a wide variety of tasks, such as agenda creation, participant selection, real-time meeting management, meeting content supplementation, and post-meeting processing): receiving information relating to: a plurality of topics for discussion during the live session (Paragraph 0116, Suggested agenda items 256 are topics for discussion that are determined to be relevant and appropriate for a particular new electronic meeting. The topics may be topics that have been scheduled for discussion, or actually discussed, in other electronic meetings, or they may be new topics); an aggregate amount of time for conducting the live session (Paragraph 0086, Meeting rules may be specified by an organization, e.g., via bylaws, or by entities external to organizations, such as governmental, judicial or law enforcement entities. One example is a time constraint (minimum or maximum) for discussion of a particular agenda item); and one or more users to present the plurality of topics (Paragraph 0102, Meeting intelligence apparatus 102 may determine, based upon an analysis of prior meetings for the Pluto project, such as a first code review meeting, that Bob. H is a good candidate to be the meeting owner of the second code review meeting, and the meeting owner field may be automatically populated with Bob H); generating an agenda for the live session, based on the received information, the generating comprising: selecting from the plurality of topics one or more topics to present (Paragraph 0112, Electronic meeting agendas may be created manually by users and may be created with the assistance of artificial intelligence provided by meeting intelligence apparatus 102. According to one embodiment, meeting intelligence apparatus 102 participates in the creation of electronic meeting agendas by providing suggested items to be included on an electronic meeting agenda. The electronic meeting application may request that meeting intelligence apparatus 102 provide suggested agenda items for an electronic meeting; Paragraph 0115, Agenda information 252 also includes suggested agenda items 256 that are generated with the assistance of meeting intelligence apparatus 102; Paragraph 0116, Suggested agenda items 256 are topics for discussion that are determined to be relevant and appropriate for a particular new electronic meeting); assigning respective users from the one or more users to present the one or more topics to present (Paragraph 0102, Meeting intelligence apparatus 102 may determine, based upon an analysis of prior meetings for the Pluto project, such as a first code review meeting, that Bob. 
H is a good candidate to be the meeting owner of the second code review meeting, and the meeting owner field may be automatically populated with Bob H); assigning a respective amount of time for each selected topic to present; and assigning a respective amount of time for each of the one or more users to present the respective topic to present (Paragraph 0086, Meeting rules may be specified by an organization, e.g., via bylaws, or by entities external to organizations, such as governmental, judicial or law enforcement entities. One example is a time constraint (minimum or maximum) for discussion of a particular agenda item); … a sensitivity level for the live session (Paragraph 0177, FIG. 4B is a block diagram that depicts an arrangement for performing sentiment analysis with respect to an ongoing discussion 402. Referring to FIG. 4B, meeting intelligence apparatus 102 includes sentiment analysis logic 404 that performs sentiment analysis on first meeting content data 302 related to ongoing discussion 402. For example, meeting intelligence apparatus 102 may detect an angry tone or sentiment that is a cue 304 for meeting intelligence apparatus 102 to generate intervention data 310 indicating that another electronic meeting has been automatically scheduled for continuing ongoing discussion 402); selecting one or more of a plurality of sources to link to the agenda (Paragraph 0079, In an embodiment, meeting intelligence apparatus 102 is communicatively coupled to any of a number of external data sources (not shown), such as websites, other data available via the World Wide Web, databases managed by Salesforce, Oracle, SAP, Workday, or any entity other than the entity managing meeting intelligence apparatus 102. Meeting intelligence apparatus 102 may be communicatively coupled to the external data sources via network infrastructure 106. The external data sources may provide meeting intelligence apparatus 102 with access to any of a variety of data, meeting-related or otherwise; Paragraph 0122, According to one embodiment, meeting intelligence apparatus 102 is configured to analyze a plurality of data items to identify typical agenda items for the meeting type of the new electronic meeting. In the present example, this includes determining typical agenda items for code review meetings. This may include determining the typical agenda items for code review meetings within the same organization, or searching beyond the current organization to other organizations. The search may be conducted within the same context, industry, etc., or may extend to other contexts, industries, etc. According to one embodiment, meeting intelligence apparatus 102 identifies electronic documents related to one or more topics or subjects of the new electronic meeting and then analyzes the identified electronic documents to determine one or more suggested agenda items for the Other category. In the present example, meeting intelligence apparatus 102 determines that the “Software Testing Schedule” agenda item is typical for code review meetings and is therefore included as a suggested agenda item. Other criteria besides meeting type may be used to identify suggested agenda items. For example, the meeting subject may be used as a criterion to identify suggested agenda items. 
In the present example, meeting intelligence apparatus may search the plurality of data items to identify data items related to the Pluto Project, and determine suggested agenda items based upon the data items related to the Pluto Project; Examiner interprets “search for data items related to a specific project” as the “plurality of sources to link to the agenda”); executing the live session by the generative AI engine, the executing comprising: generating talking points for each topic of the selected topics (Paragraph 0112, Electronic meeting agendas may be created manually by users and may be created with the assistance of artificial intelligence provided by meeting intelligence apparatus 102. According to one embodiment, meeting intelligence apparatus 102 participates in the creation of electronic meeting agendas by providing suggested items to be included on an electronic meeting agenda. The electronic meeting application may request that meeting intelligence apparatus 102 provide suggested agenda items for an electronic meeting; Paragraph 0115, Agenda information 252 also includes suggested agenda items 256 that are generated with the assistance of meeting intelligence apparatus 102; Paragraph 0116, Suggested agenda items 256 are topics for discussion that are determined to be relevant and appropriate for a particular new electronic meeting); deploying each talking point to a plurality of user devices, each user device associated with at least one of the one or more users, respective talking points being deployed to the user device associated with the one or more users presenting the corresponding selected one or more topics (Paragraph 0083, Each node of the one or more nodes 104A-N is associated with one or more participants 108A-N. Each participant is a person who participates in an electronic meeting. Each node processes data transmission between network infrastructure 106 and at least one participant. Multiple nodes 104A-N may be communicatively coupled with each other using any of a number of different configurations; Paragraph 0084, In an embodiment, a node includes a computing device that executes an electronic meeting application 112; Paragraph 0102, Meeting intelligence apparatus 102 may determine, based upon an analysis of prior meetings for the Pluto project, such as a first code review meeting, that Bob. H is a good candidate to be the meeting owner of the second code review meeting, and the meeting owner field may be automatically populated with Bob H); upon deployment of the respective talking points, prompting the respective one or more users to accept, decline or revise the talking points (Paragraph 0103, Missing information may be presented in a manner to visually indicate that the information was automatically provided, for example, via highlighting, coloring, special effects, etc., and a user may be given an opportunity to accept, reject, or edit the missing information that was automatically provided; Paragraph 0117, Suggested agenda items 256 may be organized and presented to a user in any manner that may vary depending upon a particular implementation. For a large number of suggested agenda items 256, visually organizing the suggested agenda items on a user interface may provide a more favorable user experience than merely listing all available suggested agenda items 256; Paragraph 0118, FIG. 2I depicts suggested agenda items for each category of suggested agenda items depicted in FIG. 2H. 
Organizing suggested agenda items by category may be more useful to some users than listing suggesting agenda items in random order, although embodiments are not limited to organizing suggested agenda items 256 by category, and other approaches may be used such as alphabetical order, etc.); and monitoring the live session, the monitoring comprising: actively processing conversation of participants of the live session (Paragraph 0155, FIG. 4A is a block diagram that depicts an arrangement in which meeting intelligence apparatus 102 includes speech or text recognition logic 400 that processes first meeting content data 302 to determine one or more corresponding agenda topics. In the example depicted in FIG. 4A, first meeting content data 302 includes the speech or text statement “Gross sales are expected to be $10.8 million next quarter.” A participant associated with node 104A may have caused first meeting content data 302 to be generated by speaking, writing, typing, or displaying the statement. Speech or text recognition logic 400 may process first meeting content data 302 by parsing to detect keywords that are mapped to a meeting agenda. In the present example, speech or text recognition logic 400 detects the keywords “next quarter.” These keywords are a cue 304 for meeting intelligence apparatus 102 to generate intervention data 310 that indicates a corresponding agenda topic. The intervention data 310 may be used by the electronic meeting application to determine a correspondence between a current point in an electronic meeting and a meeting agenda. This correspondence is used to provide agenda management functionality, including tracking the current agenda topic); actively keeping track of the respective amount of time utilized during the live session for each of the selected one or more topics; and actively keeping track of the amount of time each of the one or more users is presenting (Paragraph 0156, A determined correspondence between a current point in an electronic meeting and a meeting agenda may be used to monitor the progress of an electronic meeting and enforce time constraints with respect to individual agenda items, groups of agenda items, and/or an entire electronic meeting. This may include tracking the amount of time spent on agenda items and providing one or more indications to meeting participants; Examiner notes that each agenda item (e.g., topic) is associated with one or more users); and in response to a deviation from the agenda, deploying a corrective action to each device, the deviation including one or more of a deviation from: the respective amount of time for each selected topic to present; and the respective amount of time for each of the one or more users to present the respective topic to present (Paragraph 0154, According to one embodiment, artificial intelligence is used to provide agenda management functionality during electronic meetings. Agenda management functionality may include a wide variety of functionality that may vary depending upon a particular implementation. 
Example functionality includes, without limitation, enforcing time constraints for agenda items, changing designated amounts of time for agenda items, changing, deleting and adding agenda items, including providing missing or supplemental information for agenda items, and agenda navigation); wherein: the deployed talking points are adjusted based on [topics for discussion that are determined to be relevant] (Paragraph 0112, Electronic meeting agendas may be created manually by users and may be created with the assistance of artificial intelligence provided by meeting intelligence apparatus 102. According to one embodiment, meeting intelligence apparatus 102 participates in the creation of electronic meeting agendas by providing suggested items to be included on an electronic meeting agenda. The electronic meeting application may request that meeting intelligence apparatus 102 provide suggested agenda items for an electronic meeting; Paragraph 0115, Agenda information 252 also includes suggested agenda items 256 that are generated with the assistance of meeting intelligence apparatus 102; Paragraph 0116, Suggested agenda items 256 are topics for discussion that are determined to be relevant and appropriate for a particular new electronic meeting); the sources include websites, private servers and databases (Paragraph 0079, In an embodiment, meeting intelligence apparatus 102 is communicatively coupled to any of a number of external data sources (not shown), such as websites, other data available via the World Wide Web, databases managed by Salesforce, Oracle, SAP, Workday, or any entity other than the entity managing meeting intelligence apparatus 102. Meeting intelligence apparatus 102 may be communicatively coupled to the external data sources via network infrastructure 106. The external data sources may provide meeting intelligence apparatus 102 with access to any of a variety of data, meeting-related or otherwise); the corrective action includes one or more of additional talking points, adjustment of the one or more topics to present, adjustment of the respective amount of time for each selected topic to present, adjustment of the respective amount of time for one or more users to present the respective topic and adjustment of the one or more users to present each topic (Paragraph 0154, According to one embodiment, artificial intelligence is used to provide agenda management functionality during electronic meetings. Agenda management functionality may include a wide variety of functionality that may vary depending upon a particular implementation. Example functionality includes, without limitation, enforcing time constraints for agenda items, changing designated amounts of time for agenda items, changing, deleting and adding agenda items, including providing missing or supplemental information for agenda items, and agenda navigation; It can be noted that the claim language is written in alternative form. The limitation taught by Nelson et al. 
is based on “adjustment of the respective amount of time for each selected topic to present" and “one or more additional talking points”); processing the conversation includes passing the conversation through a speech spectrogram to generate a speech evaluation record (Paragraph 0156, Speech and text recognition may also be used to ensure that all agenda items and action items are addressed during an electronic meeting, which may include discussion, deferral, etc.); the generative AI engine extracts a text context from the speech evaluation record via a speech-text context extraction engine (Paragraph 0155, Speech or text recognition logic 400 may process first meeting content data 302 by parsing to detect keywords that are mapped to a meeting agenda. In the present example, speech or text recognition logic 400 detects the keywords “next quarter.” These keywords are a cue 304 for meeting intelligence apparatus 102 to generate intervention data 310 that indicates a corresponding agenda topic. The intervention data 310 may be used by the electronic meeting application to determine a correspondence between a current point in an electronic meeting and a meeting agenda. This correspondence is used to provide agenda management functionality, including tracking the current agenda topic); the generative AI engine extracts a voice analysis from the speech evaluation record via a speech-voice analysis engine (Paragraph 0109, Sentiment analysis may use various cues that occur in speech during an electronic meeting, such as tone of voice, volume of voice, velocity of speech, lack of pauses in speech, profanity, sounds such as grunts, exhalation of air, etc.; Paragraph 0177, FIG. 4B is a block diagram that depicts an arrangement for performing sentiment analysis with respect to an ongoing discussion 402; Paragraph 0189, FIG. 4D is a block diagram that depicts an arrangement for supplementing meeting content with participant identification data. Referring to FIG. 4D, meeting intelligence apparatus 102 includes voice or face recognition logic 412, which performs voice or face recognition on first meeting content data 302 to detect a voice or a face); the generative AI engine deploys the corrective action based on the text context and the voice analysis; and the generative AI engine generates the additional talking points based on the text context and the voice analysis (Paragraph 0149, FIG. 3 is a block diagram that depicts an arrangement for generating intervention data. Referring to FIG. 3, meeting intelligence apparatus 102 receives audio/video data 300 from node 104A. Audio/video data 300 may be one or more data packets, a data stream, and/or any other form of data that includes audio and/or video information related to an electronic meeting. In the example depicted in FIG. 3, audio/video data 300 includes first meeting content data 302 which, in turn, includes cue 304. Cue 304 may take many forms that may vary depending upon a particular implementation. Examples of cue 304 include, without limitation, one or more keywords, tones, sentiments, facial recognitions, etc., that can be discerned from audio/video data 300; Paragraph 0150, Meeting intelligence apparatus 102 includes cue detection logic 306, which analyzes audio/video data 300 to determine whether audio/video data 300 includes cue 304. Cue detection logic 306 may analyze audio/video data 300 on a continuous basis, or on a periodic basis, depending upon a particular implementation. 
Meeting intelligence apparatus 102 also includes data generation logic 308, which generates intervention data 310 if audio/video data 300 includes cue 304. Meeting intelligence apparatus 102 transmits intervention data 310 to node 104A during and/or after an electronic meeting. Intervention data 310 includes second meeting content data 312 that may supplement or replace first meeting content data 302, as described in more detail hereinafter. Meeting intelligence apparatus 102 may intervene in an electronic meeting in a wide variety of ways. Non-limiting examples include intervening to manage meeting flow, to provide information retrieval services, and/or to supplement meeting content; Examiner interprets “supplement the meeting content” as the “additional talking points”). Although Nelson et al. discloses detecting a sensitivity level for the live session by performing a sentiment analysis with respect to an ongoing discussion (Paragraph 0177, angry tone), Nelson et al. does not specifically disclose how the method selects a sensitivity level for the live session. However, Dotan-Cohen et al. discloses selecting a sensitivity level for the live session (Paragraph 0003, Embodiments described in the present disclosure are directed toward technologies for improving the functionality of multimedia content generated or presented by computing applications accessible on user computing devices. In particular, this disclosure provides certain technologies to programmatically provide a modified meeting presentation that sanitizes an occurrence of unwanted content in the meeting. In one example, the modified meeting presentation is a version of the meeting presentation that has been altered, based on a sensitivity mitigation action being applied to at least partially remove the unwanted or otherwise sensitive content associated with the meeting. In one example, the sensitivity mitigation action is a modification applied to a meeting presentation, based on a comparison of aspects of the meeting. In this example, the comparison of aspects of the meeting indicates that the segment of the meeting associated with the aspects contains sensitive content. In another example, the sensitivity mitigation action is a modification applied to a segment of a meeting presentation, based on a comparison of aspects of the segment, based on aspects of different segments, and the like; Paragraph 0005, Embodiments described in the present disclosure include applying the sensitivity mitigation action to cause sensitive content to at least be partially removed either through altering, editing, obscuring, hiding, or removing visual or audio aspects of the meeting; Paragraph 0104, The sensitivity analyzer 280 may employ any suitable ranking or classification scheme to rank or classify the aspects or segments. For example, the ranking or classification scheme may include a three-tier system for classifying or ranking sensitive content. In this example, the sensitive content may include high sensitivity content, medium sensitivity content, and low sensitivity content) …; wherein: the deployed talking points are adjusted based on the selected sensitivity level (Paragraph 0003, Embodiments described in the present disclosure are directed toward technologies for improving the functionality of multimedia content generated or presented by computing applications accessible on user computing devices.
In particular, this disclosure provides certain technologies to programmatically provide a modified meeting presentation that sanitizes an occurrence of unwanted content in the meeting. In one example, the modified meeting presentation is a version of the meeting presentation that has been altered, based on a sensitivity mitigation action being applied to at least partially remove the unwanted or otherwise sensitive content associated with the meeting. In one example, the sensitivity mitigation action is a modification applied to a meeting presentation, based on a comparison of aspects of the meeting. In this example, the comparison of aspects of the meeting indicates that the segment of the meeting associated with the aspects contains sensitive content. In another example, the sensitivity mitigation action is a modification applied to a segment of a meeting presentation, based on a comparison of aspects of the segment, based on aspects of different segments, and the like; Paragraph 0005, Embodiments described in the present disclosure include applying the sensitivity mitigation action to cause sensitive content to at least be partially removed either through altering, editing, obscuring, hiding, or removing visual or audio aspects of the meeting; Paragraph 0104, The sensitivity analyzer 280 may employ any suitable ranking or classification scheme to rank or classify the aspects or segments. For example, the ranking or classification scheme may include a three-tier system for classifying or ranking sensitive content. In this example, the sensitive content may include high sensitivity content, medium sensitivity content, and low sensitivity content); It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for generating an agenda for the live session by selecting one or more topics to present based on a plurality of factors (e.g., time constraints, questions, action items, etc.) of the invention of Nelson et al. to further incorporate modifying the agenda based on a selected sensitivity level of the live session of the invention of Dotan-Cohen et al. because doing so would allow the method to modify a meeting presentation, based on a sensitivity mitigation action being applied to at least partially remove the unwanted or otherwise sensitive content associated with the meeting (see Dotan-Cohen et al., Paragraph 0003). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claims 2 and 10 (Original), which depend from claims 1 and 9, the combination of Nelson et al. and Dotan-Cohen et al. discloses all the limitations in claims 1 and 9. Nelson et al. further discloses upon detection of a question from text context and voice analysis, the generative AI engine: generates an answer to the question; and deploys the answer to the user devices in real-time; wherein the generative AI engine generates the answer from data from the plurality of sources (Paragraph 0148, According to one embodiment, artificial intelligence is used to manage various aspects of electronic meetings.
For example, meeting intelligence apparatus 102 may intervene during electronic meetings to provide any of a variety of intervention data, such as visual indications, messages in message window 224, participant information, recommendation information, and/or any other data that meeting intelligence apparatus 102 transmits during an electronic meeting; Paragraph 0181, Speech or text recognition logic 400 parses and interprets first meeting content data 302 to detect natural language request 406, which is a cue 304 for meeting intelligence apparatus 102 to generate intervention data 310 to be sent to at least node 104A during an electronic meeting; Paragraph 0182, In the example of FIG. 4C, meeting intelligence apparatus 102 may interpret the question as a command to search and analyze prior meeting data to determine an answer to the question. Determining the answer to the question may involve analyzing meeting content data related to an ongoing meeting and/or a prior meeting, thereby increasing the relevance of the answer to the question. For example, the question “Where did we leave off at the last meeting?” may be analyzed using contextual data (e.g., metadata) from the current meeting, such as the identities of participants 108A-N, the topic of the current discussion, etc. Meeting intelligence apparatus 102 may search the meeting data repository for information that most closely matches the contextual data from the current meeting). Regarding claims 3 and 11 (Original), which depend from claims 1 and 9, the combination of Nelson et al. and Dotan-Cohen et al. discloses all the limitations in claims 1 and 9. Nelson et al. further discloses wherein: the generative AI engine deploys the corrective action based on the text context and the voice analysis; and the generative AI engine generates the additional talking points based on the text context and the voice analysis (Paragraph 0149, FIG. 3 is a block diagram that depicts an arrangement for generating intervention data. Referring to FIG. 3, meeting intelligence apparatus 102 receives audio/video data 300 from node 104A. Audio/video data 300 may be one or more data packets, a data stream, and/or any other form of data that includes audio and/or video information related to an electronic meeting. In the example depicted in FIG. 3, audio/video data 300 includes first meeting content data 302 which, in turn, includes cue 304. Cue 304 may take many forms that may vary depending upon a particular implementation. Examples of cue 304 include, without limitation, one or more keywords, tones, sentiments, facial recognitions, etc., that can be discerned from audio/video data 300; Paragraph 0150, Meeting intelligence apparatus 102 includes cue detection logic 306, which analyzes audio/video data 300 to determine whether audio/video data 300 includes cue 304. Cue detection logic 306 may analyze audio/video data 300 on a continuous basis, or on a periodic basis, depending upon a particular implementation. Meeting intelligence apparatus 102 also includes data generation logic 308, which generates intervention data 310 if audio/video data 300 includes cue 304. Meeting intelligence apparatus 102 transmits intervention data 310 to node 104A during and/or after an electronic meeting. Intervention data 310 includes second meeting content data 312 that may supplement or replace first meeting content data 302, as described in more detail hereinafter.
Meeting intelligence apparatus 102 may intervene in an electronic meeting in a wide variety of ways. Non-limiting examples include intervening to manage meeting flow, to provide information retrieval services, and/or to supplement meeting content; Examiner interprets “supplement the meeting content” as the “additional talking points”). Regarding claims 4 and 12 (Original), which depend from claims 3 and 11, the combination of Nelson et al. and Dotan-Cohen et al. discloses all the limitations in claims 3 and 11. Nelson et al. further discloses wherein the generative AI engine dynamically generates the additional talking points using information from the selected sources (Paragraph 0079, In an embodiment, meeting intelligence apparatus 102 is communicatively coupled to any of a number of external data sources (not shown), such as websites, other data available via the World Wide Web, databases managed by Salesforce, Oracle, SAP, Workday, or any entity other than the entity managing meeting intelligence apparatus 102. Meeting intelligence apparatus 102 may be communicatively coupled to the external data sources via network infrastructure 106. The external data sources may provide meeting intelligence apparatus 102 with access to any of a variety of data, meeting-related or otherwise; Paragraph 0149, FIG. 3 is a block diagram that depicts an arrangement for generating intervention data. Referring to FIG. 3, meeting intelligence apparatus 102 receives audio/video data 300 from node 104A. Audio/video data 300 may be one or more data packets, a data stream, and/or any other form of data that includes audio and/or video information related to an electronic meeting. In the example depicted in FIG. 3, audio/video data 300 includes first meeting content data 302 which, in turn, includes cue 304. Cue 304 may take many forms that may vary depending upon a particular implementation. Examples of cue 304 include, without limitation, one or more keywords, tones, sentiments, facial recognitions, etc., that can be discerned from audio/video data 300; Paragraph 0150, Meeting intelligence apparatus 102 includes cue detection logic 306, which analyzes audio/video data 300 to determine whether audio/video data 300 includes cue 304. Cue detection logic 306 may analyze audio/video data 300 on a continuous basis, or on a periodic basis, depending upon a particular implementation. Meeting intelligence apparatus 102 also includes data generation logic 308, which generates intervention data 310 if audio/video data 300 includes cue 304. Meeting intelligence apparatus 102 transmits intervention data 310 to node 104A during and/or after an electronic meeting. Intervention data 310 includes second meeting content data 312 that may supplement or replace first meeting content data 302, as described in more detail hereinafter. Meeting intelligence apparatus 102 may intervene in an electronic meeting in a wide variety of ways. Non-limiting examples include intervening to manage meeting flow, to provide information retrieval services, and/or to supplement meeting content; Examiner interprets “supplement the meeting content” as the “additional talking points”). Regarding claims 5, 13, and 18 (Original), which depend from claims 1, 9, and 17, the combination of Nelson et al. and Dotan-Cohen et al. discloses all the limitations in claims 1, 9, and 17. Nelson et al.
further discloses: selecting a moderator for the live session; alerting the moderator when the deviation is detected, before deploying the corrective action (Paragraph 0156, A determined correspondence between a current point in an electronic meeting and a meeting agenda may be used to monitor the progress of an electronic meeting and enforce time constraints with respect to individual agenda items, groups of agenda items, and/or an entire electronic meeting. This may include tracking the amount of time spent on agenda items and providing one or more indications to meeting participants. For example, in addition to the timer provided in agenda window 218 (FIG. 2D), a visual and/or audible indication may be provided when an amount of time designated for an agenda item, group of agenda items, or an entire electronic meeting, is nearing expiration or has expired. If the timer value exceeds the specified time limit, the electronic meeting application may cause a message to be displayed in message window 224. The message may also be spoken by the electronic meeting application. The message may indicate, for example, that the time limit for the current agenda item has expired and the electronic meeting will be progressing to the next agenda item. Additionally or alternatively, the electronic meeting application may move a visual indication to a different agenda topic. Speech and text recognition may also be used to ensure that all agenda items and action items are addressed during an electronic meeting, which may include discussion, deferral, etc.); and receiving instructions in real-time from the moderator, whether to deploy the corrective action upon receipt of the alert (Paragraph 0157, As previously described herein, agenda items may be designated as requiring a decision, for example via one or more meeting rules templates, or via user-designation. According to one embodiment, an electronic meeting application ensures that a decision is made for all agenda items requiring a decision during an electronic meeting. If a user attempts to navigate to another agenda item or action item before a decision has been made on a current agenda item, the electronic meeting application may display a message in message window 224, or speak the message, indicating that the current agenda item or action item requires a decision. This may include preventing navigation to other agenda items or action items until the current agenda item is addressed. A meeting owner may be permitted to override this functionality and move to another agenda item or action item). Regarding claims 6, 14, and 19 (Original), which depend from claims 5, 13, and 18, the combination of Nelson et al. and Dotan-Cohen et al. discloses all the limitations in claims 5, 13, and 18. Nelson et al. further discloses wherein the user associated with the user device on which the corrective action is deployed is prompted to accept or decline the corrective action (Paragraph 0103, Missing information may be presented in a manner to visually indicate that the information was automatically provided, for example, via highlighting, coloring, special effects, etc., and a user may be given an opportunity to accept, reject, or edit the missing information that was automatically provided; Paragraph 0157, As previously described herein, agenda items may be designated as requiring a decision, for example via one or more meeting rules templates, or via user-designation.
According to one embodiment, an electronic meeting application ensures that a decision is made for all agenda items requiring a decision during an electronic meeting. If a user attempts to navigate to another agenda item or action item before a decision has been made on a current agenda item, the electronic meeting application may display a message in message window 224, or speak the message, indicating that the current agenda item or action item requires a decision. This may include preventing navigation to other agenda items or action items until the current agenda item is addressed. A meeting owner may be permitted to override this functionality and move to another agenda item or action item). Regarding claims 7, 15, and 20 (Original), which depend from claims 6, 14, and 19, the combination of Nelson et al. and Dotan-Cohen et al. discloses all the limitations in claims 6, 14, and 19. Nelson et al. further discloses wherein, upon the user declining the corrective action, the moderator is prompted to accept or decline blocking the user from further participation in the live session (Paragraph 0103, Missing information may be presented in a manner to visually indicate that the information was automatically provided, for example, via highlighting, coloring, special effects, etc., and a user may be given an opportunity to accept, reject, or edit the missing information that was automatically provided; Paragraph 0157, As previously described herein, agenda items may be designated as requiring a decision, for example via one or more meeting rules templates, or via user-designation. According to one embodiment, an electronic meeting application ensures that a decision is made for all agenda items requiring a decision during an electronic meeting. If a user attempts to navigate to another agenda item or action item before a decision has been made on a current agenda item, the electronic meeting application may display a message in message window 224, or speak the message, indicating that the current agenda item or action item requires a decision. This may include preventing navigation to other agenda items or action items until the current agenda item is addressed. A meeting owner may be permitted to override this functionality and move to another agenda item or action item). Regarding claims 8 and 16 (Original), which depend from claims 1 and 9, the combination of Nelson et al. and Dotan-Cohen et al. discloses all the limitations in claims 1 and 9. Nelson et al. further discloses wherein the deployed talking points are adjusted based on the … sensitivity level (Paragraph 0177, FIG. 4B is a block diagram that depicts an arrangement for performing sentiment analysis with respect to an ongoing discussion 402. Referring to FIG. 4B, meeting intelligence apparatus 102 includes sentiment analysis logic 404 that performs sentiment analysis on first meeting content data 302 related to ongoing discussion 402. For example, meeting intelligence apparatus 102 may detect an angry tone or sentiment that is a cue 304 for meeting intelligence apparatus 102 to generate intervention data 310 indicating that another electronic meeting has been automatically scheduled for continuing ongoing discussion 402). Although Nelson et al. discloses detecting a sensitivity level for the live session by performing a sentiment analysis with respect to an ongoing discussion (Paragraph 0177, angry tone), Nelson et al. does not specifically disclose how the method selects a sensitivity level for the live session.
However, Dotan-Cohen et al. discloses wherein the deployed talking points are adjusted based on the selected sensitivity level (Paragraph 0003, Embodiments described in the present disclosure are directed toward technologies for improving the functionality of multimedia content generated or presented by computing applications accessible on user computing devices. In particular, this disclosure provides certain technologies to programmatically provide a modified meeting presentation that sanitizes an occurrence of unwanted content in the meeting. In one example, the modified meeting presentation is a version of the meeting presentation that has been altered, based on a sensitivity mitigation action being applied to at least partially remove the unwanted or otherwise sensitive content associated with the meeting. In one example, the sensitivity mitigation action is a modification applied to a meeting presentation, based on a comparison of aspects of the meeting. In this example, the comparison of aspects of the meeting indicates that the segment of the meeting associated with the aspects contains sensitive content. In another example, the sensitivity mitigation action is a modification applied to a segment of a meeting presentation, based on a comparison of aspects of the segment, based on aspects of different segments, and the like; Paragraph 0005, Embodiments described in the present disclosure include applying the sensitivity mitigation action to cause sensitive content to at least be partially removed either through altering, editing, obscuring, hiding, or removing visual or audio aspects of the meeting; Paragraph 0104, The sensitivity analyzer 280 may employ any suitable ranking or classification scheme to rank or classify the aspects or segments. For example, the ranking or classification scheme may include a three-tier system for classifying or ranking sensitive content. In this example, the sensitive content may include high sensitivity content, medium sensitivity content, and low sensitivity content). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method for generating an agenda for the live session by selecting one or more topics to present based on a plurality of factors (e.g., time constraints, questions, action items, etc.) of the invention of Nelson et al. to further incorporate modifying the agenda based on a selected sensitivity level of the live session of the invention of Dotan-Cohen et al. because doing so would allow the method to modify a meeting presentation, based on a sensitivity mitigation action being applied to at least partially remove the unwanted or otherwise sensitive content associated with the meeting (see Dotan-Cohen et al., Paragraph 0003). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Li et al. (US 10,825,470 B2) – discloses FIG. 2 is a speech spectrogram of a speech data in the prior art. FIG. 3 is a schematic diagram of states of the present disclosure. As shown in FIG. 3, with respect to the speech spectrogram shown in FIG.
2, 1 may be used to represent that the user speaks, 0 used to represent that the user does not speak, A, B, C and D are used to represent states such as mute, the starting point of the speech, retention of the speech, and the finishing point of the speech in turn (see at least Fig. 2 & Fig. 3). Jeon (WO 2023/136505 A1) – discloses a method for automatizing a check on a meeting agenda, performed by at least one processor of a user terminal. This method comprises the steps of: receiving one or more meeting agendas in a text form; converting uttered speech of meeting participants into dialogue text; segmenting the dialogue text into one or more pieces of topic unit text; and on the basis of a similarity level between the segmented one or more pieces of topic unit text and the one or more meeting agendas, identifying whether or not a discussion on the one or more meeting agendas has been started or completed (see at least Abstract). Deng et al. (US 2023/0208898 A1) – discloses a dialog monitor 452 that tracks the dialog flow of all the participants to figure out topics being discussed, number of times a participant raising or answering questions, willingness to adopt suggestions, and number of attempts to counter support a view (see at least Paragraph 0056). Swerdlow (US 2022/0351149 A1) – discloses a transcription processing tool 702 that uses the agenda input 706 and the agenda items 708 to determine when the existing agenda items 708 and when other topics to consider as agenda items for the new agenda 712 are being discussed during a multi-participant communication. The transcription processing tool 702 may, for example, refer to software which processes an agenda generated in real-time during the multi-participant communication to determine whether an agenda item has been completed during the multi-participant communication or whether it is incomplete when the multi-participant communication ends (see at least Paragraph 0097). THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARJORIE PUJOLS-CRUZ whose telephone number is (571)272-4668. The examiner can normally be reached Mon-Thu 7:30 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia H Munson, can be reached at (571)270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /M.P./Examiner, Art Unit 3624 /HAMZEH OBAID/Primary Examiner, Art Unit 3624
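
To make the architecture at the center of the §103 dispute concrete, below is a minimal sketch of the pipeline the independent claims recite: the conversation is passed through a spectrogram to form a speech evaluation record, and two engines then work from that shared record, one extracting text context and one extracting signal-level voice features, with a simple time-deviation check standing in for the corrective-action step. All names and structure here are hypothetical illustrations, not the applicant's implementation or Nelson's system; the transcription backend is stubbed because neither is disclosed in the record.

```python
# Illustrative sketch only: hypothetical names, not the claimed implementation.
import numpy as np
from scipy.signal import spectrogram

# Keyword-to-agenda-topic cues, in the spirit of Nelson's "next quarter" example.
AGENDA_CUES = {"next quarter": "Quarterly sales forecast"}

def speech_evaluation_record(audio: np.ndarray, fs: int) -> dict:
    """Pass the conversation through a spectrogram to form an evaluation record."""
    freqs, times, power = spectrogram(audio, fs=fs)
    return {"freqs": freqs, "times": times, "power": power, "fs": fs}

def extract_text_context(record: dict, transcribe) -> dict:
    """Speech-text context extraction engine: transcript -> agenda-topic cues."""
    text = transcribe(record)  # any STT backend; stubbed in the demo below
    cues = {kw: topic for kw, topic in AGENDA_CUES.items() if kw in text.lower()}
    return {"transcript": text, "agenda_cues": cues}

def extract_voice_analysis(record: dict) -> dict:
    """Speech-voice analysis engine: signal-level features from the same record."""
    frame_energy = record["power"].sum(axis=0)     # energy per time frame
    speaking = frame_energy > frame_energy.mean()  # crude voice-activity mask
    return {"mean_volume": float(frame_energy.mean()),
            "speaking_ratio": float(speaking.mean())}  # proxy for pace/pauses

def time_deviation(elapsed_s: float, allotted_s: float) -> str | None:
    """Monitoring step: flag a deviation when a topic exceeds its allotted time."""
    if elapsed_s > allotted_s:
        return f"corrective action: {elapsed_s - allotted_s:.0f}s over allotted time"
    return None

if __name__ == "__main__":
    fs = 16_000
    audio = np.random.default_rng(0).standard_normal(fs * 3)  # 3 s stand-in audio
    record = speech_evaluation_record(audio, fs)
    print(extract_text_context(record, lambda r: "Gross sales next quarter"))
    print(extract_voice_analysis(record))
    print(time_deviation(elapsed_s=540, allotted_s=480))
```

The point of the sketch is the shape of the disagreement: both engines consume the same spectrogram-derived record, whereas the examiner reads Nelson's speech/text recognition, which operates on meeting content semantically, as an equivalent under broadest reasonable interpretation.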

Prosecution Timeline

Aug 02, 2024
Application Filed
Oct 01, 2025
Non-Final Rejection — §101, §103
Jan 09, 2026
Response Filed
Feb 03, 2026
Final Rejection — §101, §103 (current)
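
The reply clock set by the action's conclusion runs from the Feb 03, 2026 mailing date above: a three-month shortened statutory period, extendable for a fee under 37 CFR 1.136(a) up to the six-month statutory maximum. A quick sketch of the date arithmetic (it does not model the advisory-action variant described in the action or weekend/holiday roll-forward, so verify against the USPTO's own computation):

```python
# Reply-deadline arithmetic for the Feb 03, 2026 final action (sketch only;
# 37 CFR 1.7(a) weekend/holiday roll-forward and the two-month advisory-action
# wrinkle from the action's conclusion are intentionally not modeled).
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

mailing = date(2026, 2, 3)
shortened = mailing + relativedelta(months=3)  # shortened statutory period ends
maximum = mailing + relativedelta(months=6)    # absolute statutory cutoff

print(f"Reply due (no extension): {shortened}")
for months in (1, 2, 3):                       # extensions under 37 CFR 1.136(a)
    print(f"With {months}-month extension: {shortened + relativedelta(months=months)}")
print(f"No reply possible after: {maximum}")
```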

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12106240
SYSTEMS AND METHODS FOR ANALYZING USER PROJECTS
2y 5m to grant · Granted Oct 01, 2024
Patent 12014298
AUTOMATICALLY SCHEDULING AND ROUTE PLANNING FOR SERVICE PROVIDERS
2y 5m to grant · Granted Jun 18, 2024
Patent 11966927
Multi-Task Deep Learning of Client Demand
2y 5m to grant · Granted Apr 23, 2024
Patent 11941651
LCP Pricing Tool
2y 5m to grant · Granted Mar 26, 2024
Patent 11847602
SYSTEM AND METHOD FOR DETERMINING AND UTILIZING REPEATED CONVERSATIONS IN CONTACT CENTER QUALITY PROCESSES
2y 5m to grant · Granted Dec 19, 2023
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
18%
Grant Probability
46%
With Interview (+27.9%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
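
The with-interview figure is consistent with simply stacking the interview lift on top of the base grant probability shown above. A one-line check, assuming (as the projection labels suggest) the +27.9% lift is additive in percentage points:

```python
# Sanity check of the projection arithmetic (assumes the lift is additive).
base_grant = 0.18        # examiner's career grant probability, as displayed
interview_lift = 0.279   # lift observed in resolved cases with an interview
print(f"with interview: {base_grant + interview_lift:.0%}")  # -> 46%
```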
