DETAILED ACTION
Status of Claims
The following is a Final Office Action in response to the amendment received on 09/30/2025.
Claims 1, 9, and 17 are amended. Claims 2-6, 11-14, and 18-19 are cancelled. Claims 1, 7-10, 15-17, and 20 are currently pending and are considered in this Office Action.
Examiner’s Note: Claims 7, 8, 15, 16, and 20 do not comply with the requirements for proper claim status identifiers under 37 C.F.R. 1.121(c). The claims are identified as “(Currently amended)” but lack markings to indicate the changes that have been made relative to the immediate prior version of the claims. All claims being currently amended in an amendment paper shall be presented in the claim listing, shall indicate a status of “currently amended,” and shall be submitted with markings to indicate the changes that have been made relative to the immediate prior version of the claims. The text of any added subject matter must be shown by underlining the added text. The text of any deleted matter must be shown by strike-through, except that double brackets placed before and after the deleted characters may be used to show deletion of five or fewer consecutive characters. The text of any deleted subject matter must be shown by being placed within double brackets if strike-through cannot be easily perceived. Only claims having the status of “currently amended” shall include markings. See MPEP 714.
Response to Arguments
Applicant’s amendment necessitated the new ground(s) of rejection set forth in this Office Action.
Applicant’s arguments with respect to the 35 U.S.C. § 101 rejection of the claims have been considered but are not persuasive.
Applicant asserts that, based on the MPEP’s guidance, the claims do not recite a judicial exception. Specifically, Applicant argues that the claims do not recite mental processes or methods of organizing human activity; instead, the claims recite a specific determination by a computer monitoring program that generates a suggested expertise level of the presentation based on particular monitoring, by programs of attendee computers, that uses at least image capture devices. Applicant further argues that the claims do not recite fundamental economic practices and/or commercial or legal interactions such as financing for purchasing a product (Credit Acceptance), local processing of payments for remotely purchased goods (Inventor Holdings), a third-party guaranty (buySAFE), evaluating loan financing (Mortgage Grader), or mitigating settlement risk (Alice). And, because the claimed invention is a computer-based solution including features that cannot be performed in the human mind, such as monitoring attendees using image capture devices and communicating between computers as a result, Applicant argues that it cannot be accomplished in the human mind and does not organize human activity.
The examiner respectfully disagrees. The examiner notes that the claims recite an abstract idea by reciting the concept of monitoring the engagement level of participants during a presentation and modifying the presentation based on the engagement level, which can be categorized as a “mental process” and as “certain methods of organizing human activity” (e.g., managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions). The abstract idea can be categorized as a “mental process” because it is directed to a concept that can be performed in the human mind (including an observation, evaluation, judgment, or opinion) or with the aid of pen and paper. The abstract idea can be categorized as “certain methods of organizing human activity” because it is directed toward managing personal behavior or relationships or interactions between people to determine the engagement level of attendees, which falls within the enumerated groupings of abstract ideas.
Accordingly, the examiner notes that applicant’s arguments are raised in light of applicant’s amendments, and the updated 35 U.S.C. § 101 rejection below addresses applicant’s amendments.
Applicant’s arguments and amendments with respect to the 35 U.S.C. § 103 rejection of the claims have been considered; however, they are primarily raised in light of applicant’s amendments. The updated 35 U.S.C. § 103 rejection below addresses applicant’s amendments.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 7-10, 15-17, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter, namely an abstract idea without significantly more. The judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of these findings is provided below, in accordance with the “Patent Subject Matter Eligibility Guidance.”
With respect to Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is first noted that the method (claims 1, 7, and 8), the system (claims 9, 10, 15, and 16), and the non-transitory computer readable storage medium (claims 17 and 20) are directed to eligible categories of subject matter (i.e., a process, a machine, and an article of manufacture, respectively). Thus, Step 1 is satisfied.
With respect to Step 2, and in particular Step 2A Prong One, it is next noted that the claims recite an abstract idea by reciting the concept of monitoring the engagement level of participants during a presentation, which can be categorized as a “mental process” and as “certain methods of organizing human activity” (e.g., managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions). The abstract idea can be categorized as a “mental process” because it is directed to a concept that can be performed in the human mind (including an observation, evaluation, judgment, or opinion) or with the aid of pen and paper. The abstract idea can be categorized as “certain methods of organizing human activity” because it is directed toward managing personal behavior or relationships or interactions between people to determine the engagement level of attendees, which falls within the enumerated groupings of abstract ideas. The limitations reciting the abstract idea are highlighted in italics and the limitations directed to additional elements are highlighted in bold, as set forth in exemplary claim 1: A method for using real time interactive data with artificial intelligence capabilities to improve presentations comprising: receiving, at a moderator computer program executed by a moderator computer, a presentation for a presenter to present to an audience comprising a plurality of attendees, wherein the presentation is a slide-based presentation; monitoring, by an attendee computer program executed by an attendee computer based on reporting provided to the moderator computer program, the plurality of attendees to determine an engagement level for the audience by capturing audio data from an attendee capture device of the attendee computer and image capture data from an image capture device (recited at a high level of generality; amounts to a data gathering means and extra-solution activity); generating, by the moderator computer program, a suggested expertise level of presentation for the presenter based on the engagement level by tracking user engagement time and a face of an attendee during the presentation based on the image capture data from the image capture device, wherein the moderator computer program monitors facial expressions, audio feedback, and received questions for one or more of the plurality of attendees, and determines the engagement level using the facial expressions, audio feedback, and received questions; providing, by the moderator computer program, the suggested level of presentation to an electronic device associated with the presenter; providing, by the moderator computer program, a summary of the presentation to the presenter and the plurality of attendees; identifying, by the moderator computer program and by using a trained machine learning engine, a plurality of potential audience questions based on the presentation; identifying, by the moderator computer program, answers to the plurality of potential questions; making, by the moderator computer program, the answers available to the audience during the presentation, and automatically displaying, by the moderator computer program, a recommendation to the presenter including speed up or slow down based on a timing goal and a speed of the presentation (amounts to displaying results and extra-solution activity). Claim 17 recites substantially the same limitations as claim 1 and is therefore subject to the same rationale.
The limitations reciting the abstract idea are highlighted in italics and the limitations directed to additional elements are highlighted in bold, as set forth in exemplary claim 9: A system, comprising: a presenter electronic device with a presenter executing a presenter computer application; a plurality of attendee electronic devices, each executing an attendee computer program and including an image capture device, the image capture device capable of capturing an image of each attendee of a plurality of attendees (amounts to extra-solution activity); and a moderator electronic device executing a moderator computer program; wherein the moderator computer program receives a presentation for a presenter to present to an audience comprising the plurality of attendees; the moderator computer program receives the images of the plurality of attendees; the moderator computer program determines a sentiment for the plurality of attendees based from the images; the moderator computer program generates a suggested expertise level of presentation for the presenter based on the sentiment by tracking user engagement time and a face of an attendee during the presentation based on the image capture data from the image capture device, wherein the moderator computer program monitors facial expressions, audio feedback, and received questions for one or more of the plurality of attendees, and determines the engagement level using the facial expressions, audio feedback, and received questions; the moderator computer program provides the suggested level of presentation to the presenter computer program (amounts to extra-solution activity); the moderator computer program provides a summary of the presentation to the presenter and the plurality of attendees; the moderator computer program, using a trained machine learning engine, identifying a plurality of potential audience questions based on the presentation; the moderator computer program identifying answers to the plurality of potential questions; the moderator computer program making the answers available to the audience during the presentation, and automatically creating, by the moderator computer program, a clip of interactive points of the presentation (amounts to extra-solution activity).
With respect to Step 2A Prong Two, the judicial exception is not integrated into a practical application. The additional elements are directed to: a system; a presenter electronic device with a presenter executing a presenter computer application; the image capture device capturing facial expressions; each attendee electronic device further comprising an audio capture device, wherein the moderator computer program receives audio feedback from one or more of the plurality of attendees; a plurality of attendee electronic devices, each executing an attendee computer program and including an image capture device, the image capture device capable of capturing an image of each attendee of a plurality of attendees; the moderator computer program receives audio feedback from one or more of the plurality of attendees and determines the sentiment using the audio feedback; a moderator electronic device executing a moderator computer program (recited at a high level of generality as a means to gather and collect data); the attendee image capture device captures images of a plurality of attendees (recited at a high level of generality as a means to collect and gather data); the moderator computer program provides the suggested level of presentation to the presenter computer program (recited at a high level of generality); using a trained machine learning engine (recited at a high level of generality); the moderator computer program provides the insights to the presenter computer program; a non-transitory computer readable storage medium, including instructions stored thereon, which when read and executed by one or more computer processors, cause the one or more computer processors to perform steps; automatically displaying, by the moderator computer program, a recommendation to the presenter including speed up or slow down based on a timing goal and a speed of the presentation (amounts to displaying results and extra-solution activity); automatically creating, by the moderator computer program, a clip of interactive points of the presentation (amounts to extra-solution activity); and providing to the electronic device associated with the presenter a bundle of questions selected from the received questions, the bundle of questions determined by the trained machine learning engine to be similar to each other (amounts to extra-solution activity), all used to implement the abstract idea. However, these elements fail to integrate the abstract idea into a practical application because they fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to effect a transformation of a particular article to a different state or thing, and fail to apply or use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
Furthermore, these elements have been fully considered; however, they are directed to the use of generic computing elements (Applicant’s Specification, paragraph [0027], describes a high-level, general purpose computer) to perform the abstract idea, which is not sufficient to amount to a practical application and is tantamount to simply saying “apply it” using a general purpose computer. This merely serves to tie the abstract idea to a particular technological environment (a computer-based operating environment) by using the computer as a tool to perform the abstract idea, which is not sufficient to amount to a practical application. See MPEP 2106.05(f) and 2106.05(h). The steps to receive data, although part of the abstract idea itself, also encompass insignificant extra-solution data gathering activity, which is not indicative of a practical application. In accordance with MPEP 2106, claims do recite a mental process when they contain limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions. Examples of claims that recite mental processes include: a claim to “collecting information, analyzing it, and displaying certain results of the collection and analysis,” where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016); and a claim to collecting and comparing known information (claim 1), which are steps that can be practically performed in the human mind, Classen Immunotherapies, Inc. v. Biogen IDEC, 659 F.3d 1057, 1067, 100 USPQ2d 1492, 1500 (Fed. Cir. 2011).
Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted to determine whether any claim element or combination of elements amounts to significantly more than the judicial exception.
With respect to Step 2B of the eligibility inquiry, it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional limitations are: a system; a presenter electronic device with a presenter executing a presenter computer application; the image capture device capturing facial expressions; each attendee electronic device further comprising an audio capture device, wherein the moderator computer program receives audio feedback from one or more of the plurality of attendees; a plurality of attendee electronic devices, each executing an attendee computer program and including an image capture device, the image capture device capable of capturing an image of each attendee of a plurality of attendees; the moderator computer program receives audio feedback from one or more of the plurality of attendees and determines the sentiment using the audio feedback; a moderator electronic device executing a moderator computer program (recited at a high level of generality as a means to gather and collect data); the attendee image capture device captures images of a plurality of attendees (recited at a high level of generality as a means to collect and gather data); the moderator computer program provides the suggested level of presentation to the presenter computer program (recited at a high level of generality); using a trained machine learning engine (recited at a high level of generality); the moderator computer program provides the insights to the presenter computer program; a non-transitory computer readable storage medium, including instructions stored thereon, which when read and executed by one or more computer processors, cause the one or more computer processors to perform steps; automatically displaying, by the moderator computer program, a recommendation to the presenter including speed up or slow down based on a timing goal and a speed of the presentation (amounts to displaying results and extra-solution activity); automatically creating, by the moderator computer program, a clip of interactive points of the presentation (amounts to extra-solution activity); and providing to the electronic device associated with the presenter a bundle of questions selected from the received questions, the bundle of questions determined by the trained machine learning engine to be similar to each other (amounts to extra-solution activity), all used to implement the abstract idea. These elements have been considered, but they merely serve to tie the invention to a particular operating environment (i.e., a computer-based implementation), at a very high level of generality and without imposing meaningful limitations on the scope of the claims. In addition, Applicant’s Specification (paragraph [0027]) describes generic, off-the-shelf computer-based elements for implementing the claimed invention, which does not amount to significantly more than the abstract idea and is not enough to transform an abstract idea into eligible subject matter. Such generic, high-level, and nominal involvement of a computer or computer-based elements for carrying out the invention merely serves to tie the abstract idea to a particular technological environment, which is not enough to render the claims patent-eligible, as noted at pg. 74624 of Federal Register/Vol. 79, No. 241, citing Alice, which in turn cites Mayo.
It is further noted that the claimed use of a moderator computer program and a trained machine learning or artificial intelligence engine is recited at a high level of generality; these elements amount to well-understood, routine, and conventional activity in the art, which fails to add significantly more to the claims. See, e.g., Mane et al., US 2024/0095446 A1 (paragraph [0038]: “Pre-training includes a machine learning model, e.g., BERT or the IBM Watson® Natural Language Classifier, that is trained on a dataset of transcripts and previously asked questions and answers that are saved in past session database 254…. The machine learning model of the text classifier is trained using feature extraction based on past observations, such as the stored transcripts. To produce a classification model, a training data set is fed to the algorithm consisting of pairs of features and tags. After training, the model is fed with unseen text to predict which label to apply upon it. The classification algorithms predict the category of testing data sets based on the labels of training datasets.”). Similarly, the receiving activity is directed to insignificant extra-solution activity for transmitting/receiving data over a network, which has been recognized as well-understood, routine, and conventional activity and/or insignificant extra-solution activity that fails to amount to significantly more. See Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).
In addition, when taken as an ordered combination, the combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide conventional computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a practical application of the abstract idea, nor does the ordered combination amount to significantly more than the abstract idea itself.
The dependent claims have been fully considered as well; however, similar to the findings for the claims above, these claims are similarly directed to the abstract ideas of certain methods of organizing human activity and a mental process, without integrating them into a practical application and with, at most, a general-purpose computer that serves to tie the idea to a particular technological environment, which does not add significantly more to the claims. The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 7, and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Samuel Osebe (US 2021/0264929 A1, hereinafter “Osebe”) in view of Marc William Crawford (US 2020/0380468 A1, hereinafter “Crawford”), in further view of Avinash Tukaram Mane (US 2024/0095446 A1, hereinafter “Mane”), and in further view of Darren Edge (US 2014/0344702 A1, hereinafter “Edge”).
Claim 1
Osebe teaches:
A method for using real time interactive data with artificial intelligence capabilities to improve presentations comprising: receiving, at a moderator computer program executed by a moderator computer, a presentation for a presenter to present to an audience comprising a plurality of attendees, wherein the presentation is a slide-based presentation([0016] In an example scenario, e.g., scenario 1, a presenter may be speaking and presenting a series of presentation “slides” (e.g., pages), or material on a particular topic to an audience, some of who may be in the same room and others may be in remote locations, e.g., watching on a webcast. [0053] analyzing a presentation material used in a presentation);
monitoring, by an attendee computer program executed by an attendee computer based on reporting provided to the moderator computer program, the plurality of attendees to determine an engagement level for the audience by capturing audio data from an attendee capture device of the attendee computer and image capture data from an image capture device ([0054] At 104, the method can include monitoring one or more viewers of the presentation in real time. For instance, voice or speech analysis and natural language processing techniques can be employed to detect questions and comments generated by the viewers; image analysis such as facial expression analysis can be performed to determine a state of a viewer or audience. Voice or speech data can be received via a microphone or another sound detection device, for example, installed in the vicinity of the audience, or for example, connected or coupled to the audience or viewer's device or devices. Similarly, image data can be received via a camera or another photo or video taking device, for example, installed in the vicinity of the audience, or for example, connected or coupled to the audience or viewer's device or devices. [0064] considering audiences' (including remote audiences) current state of knowledge, mood, engagement level, and context);
and a face of an attendee during the presentation based on the image capture data from the image capture device, wherein the moderator computer program monitors facial expressions, audio feedback, and received questions for one or more of the plurality of attendees, and determines the engagement level using the facial expressions, audio feedback, and received questions ([0031] estimating overall sentiment or mood of the audience and using this estimation to determine content and style of slides to be presented as a presentation evolves, while a presenter is giving the presentation. For example, facial analysis may be performed with known technique or techniques to estimate the mood. Other sentiment analysis may be performed based on words spoken and prosody. [0054] voice or speech analysis and natural language processing techniques can be employed to detect questions and comments generated by the viewers; image analysis such as facial expression analysis can be performed to determine a state of a viewer or audience. [0055] Monitoring of viewers may be performed via any one or more of, but not limited to: analysis of words spoken (e.g., comment and feedback given, questions asked); language of viewers; analysis of ambient illumination or noise level and quality; analysis of gestures; analysis of collective movement (people entering or leaving); analysis of viewer cohort, cognitive state, social network connections; analysis of participants’ live comments on a smart e-presentation system, social media; determining one or more viewer positions in an organizational hierarchy. [0030] The metadata may be extracted from data associated with monitored viewers and may include data resulting from analysis of the real time comments and feedback given, questions asked, analysis of gestures, and analysis of viewer cohort. [0064] considering audiences' (including remote audiences) current state of knowledge, mood, engagement level, and context).
While Osebe teaches estimating the overall sentiment or mood of the audience and using this estimation to determine the content and style of slides to be presented as a presentation evolves, while a presenter is giving the presentation ([0031]); employing voice or speech analysis, natural language processing, and facial expression analysis to detect questions and comments and to determine a state of a viewer or audience ([0054]); monitoring viewers via analysis of words spoken, gestures, collective movement, viewer cohort, cognitive state, and participants’ live comments ([0055]); extracting metadata from data associated with monitored viewers ([0030]); and considering audiences’ (including remote audiences’) current state of knowledge, mood, engagement level, and context ([0064]), as quoted above, Osebe does not explicitly teach the following limitation. However, analogous reference Crawford teaches:
generating, by the moderator computer program, a suggested expertise level of presentation for the presenter based on the engagement level by tracking user engagement time and [...] ([0059] a report may include documents and/or raw data downloads that may be provided to a presenter and/or organizer. The report may provide a summary of engagement data. Reports may include visualizations of the slides and/or presentations a participant interacts with, key metrics, and a timeline of engagement. [0068] an organizer may monitor the number of participants logged in, the number of live connections, counts of all the engagement metrics (number of slides saved, presenter questions, etc.), responses to polls the participants are submitting, the time spent on a slide, time since the last engagement action, the number of slides left in the deck, and the like in real time. Figs. 5A-5D illustrate GUIs of insights for the presenter based on the sentiment or engagement level. [0078] FIG. 5D illustrates a report 571 that may be provided to organizers or other stakeholders that provides a summary 573 of engagement activity with live meetings. [0103]-[0104] where the platform is integrated into a web conferencing platform, the presentation data may be augmented with audio transcript data from the presentation; where the platform is integrated into a web conferencing platform, the presentation may also be augmented with video data from the presentation. Audio transcript data may be used in connection with a sentiment analysis module. The sentiment analysis module may be configured to apply natural language processing to the audio transcript data to determine how participants and presenters feel about the presentation (as a whole, and at individual moments of the presentation). The results from the sentiment analysis module may be integrated into engagement data and/or key metrics. [0088]-[0090] such a system may be configured to apply artificial intelligence to key metrics, combinations of metrics, and patterns to provide summaries and suggestions for areas of presentation improvement. For example, natural language processing, sentiment analysis, and topic modeling may be used to review the ratings and reviews for each meeting's evaluations and generate data. Based on the generated data, the system can automatically determine which sessions or areas are underperforming and provide alerts to the client. Additionally, some systems may be used to analyze data across a plurality of meetings in order to identify patterns across meetings, such as a specific presenter on a specific topic underperforming compared to another presenter. For example, such a system may be configured to review the comments across a plurality of meetings, flag “negative” comments, and identify that a particular presenter was ineffective because he or she was speaking too fast);
providing, by the moderator computer program, the suggested level of presentation to an electronic device associated with the presenter (Fig. 1 illustrates 115A-115C, electronic devices associated with the presenter; Figs. 5A-5D illustrate GUIs of insights for the presenter based on the sentiment or engagement level. [0078] FIG. 5D illustrates a report 571 that may be provided to organizers or other stakeholders that provides a summary 573 of engagement activity with live meetings. [0079] Examples of data that may be displayed to a user may include an overall meeting summary including a graphical indication of engagement levels 575. For example, the summary may display what percentage of the participants were engaged, or highly engaged. [0089] Modifications to the presentation may be based upon reviewing the determined key metrics for each slide. For example, by reviewing the top slides with the most actions and/or engagement, a presenter may determine that the slides with the most important content did not resonate with the participants and therefore review and update the indicated slides. In some embodiments, the system may provide a list of slides most likely to require revisions. In another example, a presenter may determine, based on the slide with the most questions submitted with it, that more detail is required for a slide. [0090] Modifications to presentations may also be made by reviewing responses to polling questions. For example, if the responses to the polling questions indicate that the participant did not understand the content related to the polling question, the presenter may create new slides, change the existing slides, or the like);
providing, by the moderator computer program, a summary of the presentation to the presenter and the plurality of attendees ([0059] a report may include documents and/or raw data downloads that may be provided to a presenter and/or organizer. The report may provide a summary of engagement data. Reports may include visualizations of the slides and/or presentations a participant interacts with, key metrics, and a timeline of engagement. [0068] an organizer may monitor the number of participants logged in, the number of live connections, counts of all the engagement metrics (number of slides saved, presenter questions, etc.), responses to polls the participants are submitting, the time spent on a slide, time since the last engagement action, the number of slides left in the deck, and the like in real time. Figs. 5A-5D illustrate GUIs of insights for the presenter based on the sentiment or engagement level. [0078] FIG. 5D illustrates a report 571 that may be provided to organizers or other stakeholders that provides a summary 573 of engagement activity with live meetings).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Osebe with Crawford to include generating, by the moderator computer program, a suggested expertise level of presentation for the presenter based on the engagement level by tracking user engagement time; providing, by the moderator computer program, the suggested level of presentation to an electronic device associated with the presenter; and providing, by the moderator computer program, a summary of the presentation to the presenter and the plurality of attendees, because doing so would provide efficient meeting management by gauging the engagement level of attendees.
While Osebe teaches estimating the overall sentiment or mood of the audience and using this estimation to determine the content and style of slides to be presented as a presentation evolves, while a presenter is giving the presentation ([0031]); employing voice or speech analysis, natural language processing, and facial expression analysis to detect questions and comments and to determine a state of a viewer or audience ([0054]); monitoring viewers via analysis of words spoken, gestures, collective movement, viewer cohort, cognitive state, and participants’ live comments ([0055]); extracting metadata from data associated with monitored viewers ([0030]); and considering audiences’ (including remote audiences’) current state of knowledge, mood, engagement level, and context ([0064]), as quoted above, Osebe does not explicitly teach the following limitation. However, analogous reference Mane teaches:
identifying, by the moderator computer program and by using a trained machine learning engine, a plurality of potential audience questions based on the presentation ([0014] In real-time, during a meeting, the artificial intelligence (AI) based NLP engine identifies questions from the chat transcript, and groups similar questions together. [0040] The program 205 performs question detection through the pre-trained AI Transformer model with the addition of the chat transcript captured in the live meeting and the presentation content that was loaded prior to the meeting. [0041] The program 205 further identifies and groups similar topics together through the pre-trained AI Transformer model and semantic analysis, with the addition of the chat transcript captured in the live meeting and the presentation content that was loaded prior to the meeting. Several factors may be considered, such as a count of the times the question was recognized, and a count of keywords that are found to match between the questions);
identifying, by the moderator computer program, answers to the plurality of potential questions, and making, by the moderator computer program, the answers available to the audience during the presentation ([0049] At 315, where possible, a question is identified for possibly being answered automatically by the program 205. [0050] At 320, the program 205 determines whether the question has already been answered, either previously or the answer can be located in the past session database 254. [0051] At 325, if the answer is available, both the question and the answer are displayed in the meeting chat, and processing of the question ends).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Osebe and Crawford with Mane to include identifying, by the moderator computer program and by using a trained machine learning engine, a plurality of potential audience questions based on the presentation; identifying, by the moderator computer program, answers to the plurality of potential questions; and making, by the moderator computer program, the answers available to the audience during the presentation, because doing so would provide efficient meeting management by gauging the engagement level of attendees and optimizing the engagement level.
While Osebe teaches estimating the overall sentiment or mood of the audience and using this estimation to determine the content and style of slides to be presented as a presentation evolves, while a presenter is giving the presentation ([0031]); employing voice or speech analysis, natural language processing, and facial expression analysis to detect questions and comments and to determine a state of a viewer or audience ([0054]); monitoring viewers via analysis of words spoken, gestures, collective movement, viewer cohort, cognitive state, and participants’ live comments ([0055]); extracting metadata from data associated with monitored viewers ([0030]); and considering audiences’ (including remote audiences’) current state of knowledge, mood, engagement level, and context ([0064]), as quoted above, Osebe does not explicitly teach the following limitation. However, analogous reference Edge teaches:
and automatically displaying, by the moderator computer program, a recommendation to the presenter including speed up or slow down based on a timing goal and a speed of the presentation ([0020] The adaptive timing engine 102 may provide timing signals to ensure that the user 106 finishes the presentation on time while covering all the sections in the presentation. The timing signals may be dynamically adjusted based on an amount of time that the user 106 actually spends on each slide of the presentation. In this way, each of the timing signals may cue the user 106 to advance to a subsequent slide of the presentation, while at the same time providing the user 106 with some degree of freedom in choosing the amount of time to spend on each slide. Accordingly, by using the cues to move through sections, the user 106 may finish the presentation on time without resorting to skipping sections. [0021] The adaptive timing engine 102 may space the timing signals based on a target time duration. The target time duration may be a total presentation time that the user 106 is seeking to achieve for the verbal delivery of the presentation. The adaptive timing engine 102 may allocate the target time duration among the number of slides in the presentations so that each slide has an allocated time interval).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Osebe, Crawford, and Mane with Edge to include automatically displaying, by the moderator computer program, a recommendation to the presenter including speed up or slow down based on a timing goal and a speed of the presentation, because doing so would provide efficient meeting management by gauging the engagement level of attendees and optimizing the engagement level.
Claim 7
While Osebe teaches estimating the overall sentiment or mood of the audience and using this estimation to determine the content and style of slides to be presented as a presentation evolves, while a presenter is giving the presentation ([0031]); employing voice or speech analysis, natural language processing, and facial expression analysis to detect questions and comments and to determine a state of a viewer or audience ([0054]); monitoring viewers via analysis of words spoken, gestures, collective movement, viewer cohort, cognitive state, and participants’ live comments ([0055]); extracting metadata from data associated with monitored viewers ([0030]); and considering audiences’ (including remote audiences’) current state of knowledge, mood, engagement level, and context ([0064]), as quoted above, Osebe does not explicitly teach the following limitation. However, analogous reference Edge teaches:
The method of claim 1, further comprising generating, by the moderator computer program, a suggestion of adjusting a rate at which the presentation is given ([0021] The adaptive timing engine 102 may space the timing signals based on a target time duration. The target time duration may be a total presentation time that the user 106 is seeking to achieve for the verbal delivery of the presentation. The adaptive timing engine 102 may allocate the target time duration among the number of slides in the presentations so that each slide has an allocated time interval. The time intervals may be allocated equally among the slides, or disproportionally allocated among the slides based on specific user inputs. The machine learning techniques may enable the adaptive timing engine 102 to infer a time interval to allocate to each of the slides based on the amount of text, images, notes, and/or embedded multimedia content in each slide. For example, the adaptive timing engine 102 may use machine learning to analyze the content of each slide, and allocate similar time intervals to slides that have similar amounts and/or types of content. In another example, the adaptive timing engine 102 may project a user allocated time interval for a slide to another slide that has similar amounts and/or types of content. The adaptive timing engine 102 may be configured to provide timing signals with respect to an approach, a completion, and/or an overrun with respect to the ends of the time intervals, in which the timing signals may be provided continuously, periodically, or with systematically varying intervals).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Osebe, Crawford, and Mane with Edge to include generating, by the moderator computer program, a suggestion of adjusting a rate at which the presentation is given, because doing so would provide efficient meeting management by gauging the engagement level of attendees and optimizing the engagement level.
Claim 8
While Osebe teaches estimating the overall sentiment or mood of the audience and using this estimation to determine the content and style of slides to be presented as a presentation evolves, while a presenter is giving the presentation ([0031]); employing voice or speech analysis, natural language processing, and facial expression analysis to detect questions and comments and to determine a state of a viewer or audience ([0054]); monitoring viewers via analysis of words spoken, gestures, collective movement, viewer cohort, cognitive state, and participants’ live comments ([0055]); extracting metadata from data associated with monitored viewers ([0030]); and considering audiences’ (including remote audiences’) current state of knowledge, mood, engagement level, and context ([0064]), as quoted above, Osebe does not explicitly teach the following limitation. However, analogous reference Crawford teaches:
The method of claim 1, further comprising generating, by the moderator computer program, a suggestion of adjusting a level of detail of the presentation ([0089] Modifications to the presentation may be based upon reviewing the determined key metrics for each slide. For example, by reviewing the top slides with most actions and/or engagement, a presenter may determine that the slides with the most important content did not resonate with the participants and therefore review and update the indicated slides. In some embodiments, the system may provide a list of slides most likely to require revisions. In another example, a presenter may determine, based on the slide with most questions submitted with it, that more detail is required for a slide. In another example, after reviewing the notes associated with a particular slide, the presenter may determine that the format (i.e., highlight, bolding), or order of content on the slide requires updating).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Osebe with Crawford to include generating, by the moderator computer program, a suggestion of adjusting a level of detail of the presentation, because doing so would provide efficient meeting management by gauging the engagement level of attendees.
Claims 9, 10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Osebe in view of Crawford, in further view of Mane, and in further view of Sven Kratz (US 2016/0286164 A1, hereinafter “Kratz”).
Claim 9
Osebe teaches:
A system, comprising: a plurality of attendee electronic devices ([0039] viewers' devices), each executing an attendee computer program and including an image capture device, the image capture device capable of capturing an image of each attendee of a plurality of attendees ([0054] monitoring one or more viewers of the presentation in real time; image analysis such as facial expression analysis can be performed to determine a state of a viewer or audience; image data can be received via a camera or another photo or video taking device, for example, installed in the vicinity of the audience, or for example, connected or coupled to the audience or viewer's device or devices);
and a moderator electronic device executing a moderator computer program ([0052] one or more hardware processors, for example, may include components such as programmable logic devices, microcontrollers, memory devices, and/or other hardware components, which may be configured to perform respective tasks);
the moderator computer program receives the images of the plurality of attendees ([0054] At 104, the method can include monitoring one or more viewers of the presentation in real time. For instance, voice or speech analysis and natural language processing techniques can be employed to detect questions and comments generated by the viewers; image analysis such as facial expression analysis can be performed to determine a state of a viewer or audience. Voice or speech data can be received via a microphone or another sound detection device, for example, installed in the vicinity of the audience, or for example, connected or coupled to the audience or viewer's device or devices. Similarly, image data can be received via a camera or another photo or video taking device, for example, installed in the vicinity of the audience, or for example, connected or coupled to the audience or viewer's device or devices. [0064] considering audiences' (including remote audiences) current state of knowledge, mood, engagement level, and context);
the moderator computer program determines a sentiment for the plurality of attendees based from the images; a face of an attendee during the presentation based on the image capture data from the image capture device ([0054] At 104, the method can include monitoring one or more viewers of the presentation in real time; image analysis such as facial expression analysis can be performed to determine a state of a viewer or audience. Similarly, image data can be received via a camera or another photo or video taking device, for example, installed in the vicinity of the audience, or for example, connected or coupled to the audience or viewer's device or devices. [0064] considering audiences' (including remote audiences) current state of knowledge, mood, engagement level, and context);
wherein the moderator computer program monitors facial expressions, audio feedback, and received questions for one or more of the plurality of attendees, and determines the engagement level using the facial expressions, audio feedback, and received questions ([0031] estimating overall sentiment or mood of the audience and using this estimation to determine content and style of slides to be presented as a presentation evolves, while a presenter is giving the presentation. For example, facial analysis may be performed with known technique or techniques to estimate the mood. Other sentiment analysis may be performed based on words spoken and prosody. [0054] voice or speech analysis and natural language processing techniques can be employed to detect questions and comments generated by the viewers; image analysis such as facial expression analysis can be performed to determine a state of a viewer or audience. [0055] Monitoring of viewers may be performed via any one or more of, but not limited to: analysis of words spoken (e.g., comment and feedback given, questions asked); language of viewers; analysis of ambient illumination or noise level and quality; analysis of gestures; analysis of collective movement (people entering or leaving); analysis of viewer cohort, cognitive state, social network connections; analysis of participants’ live comments on a smart e-presentation system, social media; determining one or more viewer positions in an organizational hierarchy. [0030] The metadata may be extracted from data associated with monitored viewers and may include data resulting from analysis of the real time comments and feedback given, questions asked, analysis of gestures, and analysis of viewer cohort. [0064] considering audiences' (including remote audiences) current state of knowledge, mood, engagement level, and context).
While Osebe teaches in [0031] estimating overall sentiment or mood of the audience and using this estimation to determine content and style of slides to be presented as a presentation evolves, while a presenter is giving the presentation (for example, facial analysis may be performed with known techniques to estimate the mood, and other sentiment analysis may be performed based on words spoken and prosody); in [0054] that voice or speech analysis and natural language processing techniques can be employed to detect questions and comments generated by the viewers, and that image analysis such as facial expression analysis can be performed to determine a state of a viewer or audience; in [0055] that monitoring of viewers may be performed via any one or more of, but not limited to: analysis of words spoken (e.g., comments and feedback given, questions asked); language of viewers; analysis of ambient illumination or noise level and quality; analysis of gestures; analysis of collective movement (people entering or leaving); analysis of viewer cohort, cognitive state, and social network connections; analysis of participants' live comments on a smart e-presentation system or social media; and determining one or more viewer positions in an organizational hierarchy; in [0030] that the metadata may be extracted from data associated with monitored viewers and may include data resulting from analysis of the real-time comments and feedback given, questions asked, analysis of gestures, and analysis of viewer cohort; and in [0064] considering audiences' (including remote audiences') current state of knowledge, mood, engagement level, and context, Osebe does not explicitly teach the following limitation. However, analogous reference Crawford teaches:
a presenter electronic device with a presenter executing a presenter computer application ([0039]-[0040] The server system 105 may configure an online platform that can be accessed by the participant computing device 101 and/or the presenter computing device 115. The server system 105 may receive presentation data from the presenter computing device 115. The presenter computing device 115 may generate presentation data including a time-stamp for the display of each slide within a presentation);
wherein the moderator computer program receives a presentation for a presenter to present to an audience comprising the plurality of attendees ([0062] receiving presentation data from a presenter of a live meeting environment 203);
the moderator computer program generates a suggested expertise level of presentation for the presenter based on the sentiment by tracking user engagement time and [...] ([0059] a report may include documents and/or raw data downloads that may be provided to a presenter and/or organizer. The report may provide a summary of engagement data. Reports may include visualizations of the slides and/or presentations a participant interacts with, key metrics, and a timeline of engagement. [0068] an organizer may monitor the number of participants logged in, the number of live connections, counts of all the engagement metrics (number of slides saved, presenter questions, etc.), responses to polls the participants are submitting, the time spent on a slide, time since the last engagement action, the number of slides left in the deck, and the like in real time. Figs. 5A-5D illustrate GUIs of insights for the presenter based on the sentiment or engagement level. [0078] FIG. 5D illustrates a report 571 that may be provided to organizers or other stakeholders and that provides a summary 573 of engagement activity with live meetings. [0103]-[0104] where the platform is integrated into a web conferencing platform, the presentation data may be augmented with audio transcript data from the presentation and may also be augmented with video data from the presentation; audio transcript data may be used in connection with a sentiment analysis module. The sentiment analysis module may be configured to apply natural language processing to the audio transcript data to determine how participants and presenters feel about the presentation (as a whole, and at individual moments of the presentation). The results from the sentiment analysis module may be integrated into engagement data and/or key metrics. [0088]-[0090] such a system may be configured to apply artificial intelligence to key metrics, combinations of metrics, and patterns to provide summaries and suggestions for areas of presentation improvement. For example, natural language processing, sentiment analysis, and topic modeling may be used to review the ratings and reviews for each meeting's evaluations and generate data. Based on the generated data, the system can automatically determine which sessions or areas are underperforming and provide alerts to the client. Additionally, some systems may be used to analyze data across a plurality of meetings in order to identify patterns across meetings, such as a specific presenter on a specific topic underperforming compared to another presenter. For example, such a system may be configured to review the comments across a plurality of meetings, flag “negative” comments, and identify that a particular presenter was ineffective because he or she was speaking too fast);
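For illustration of the engagement-time tracking mapped above, the following minimal sketch maps tracked per-slide engagement metrics to a suggested expertise level. The thresholds and suggestion labels are hypothetical assumptions, not taken from Crawford.

```python
# Minimal sketch, assuming hypothetical thresholds and labels (not from Crawford):
# map tracked per-slide engagement time and question counts to a suggestion.

def suggest_expertise_level(seconds_per_slide: list[float],
                            questions_per_slide: list[int]) -> str:
    """Suggest an expertise level for the presentation from engagement metrics."""
    avg_time = sum(seconds_per_slide) / len(seconds_per_slide)
    total_questions = sum(questions_per_slide)
    # Long dwell times plus many questions suggest the material is too advanced;
    # short dwell times plus few questions suggest it is too basic.
    if avg_time > 120 and total_questions > 10:
        return "lower the expertise level: simplify content, add background"
    if avg_time < 30 and total_questions < 2:
        return "raise the expertise level: add depth, trim introductory material"
    return "keep the current expertise level"
```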
the moderator computer program provides the suggested level of presentation to the presenter computer program (Fig. 1 illustrates 115A-115C as electronic devices associated with the presenter; Figs. 5A-5D illustrate GUIs of insights for the presenter based on the sentiment or engagement level. [0078] FIG. 5D illustrates a report 571 that may be provided to organizers or other stakeholders and that provides a summary 573 of engagement activity with live meetings. [0079] Examples of data that may be displayed to a user may include an overall meeting summary including a graphical indication of engagement levels 575. For example, the summary may display what percentage of the participants were engaged, or highly engaged. [0089] Modifications to the presentation may be based upon reviewing the determined key metrics for each slide. For example, by reviewing the top slides with the most actions and/or engagement, a presenter may determine that the slides with the most important content did not resonate with the participants and therefore review and update the indicated slides. In some embodiments, the system may provide a list of slides most likely to require revisions. In another example, a presenter may determine, based on the slide with the most questions submitted with it, that more detail is required for a slide. [0090] Modifications to presentations may also be made by reviewing responses to polling questions. For example, if the responses to the polling questions indicate that the participant did not understand the content related to the polling question, the presenter may create new slides, change the existing slides, or the like);
the moderator computer program provides a summary of the presentation to the presenter and the plurality of attendees ([0059] a report may include documents and/or raw data downloads that may be provided to a presenter and/or organizer. The report may provide a summary of engagement data. Reports may include visualizations of the slides and/or presentations a participant interacts with, key metrics, and a timeline of engagement. [0068] an organizer may monitor the number of participants logged in, the number of live connections, counts of all the engagement metrics (number of slides saved, presenter questions, etc.), responses to polls the participants are submitting, the time spent on a slide, time since the last engagement action, the number of slides left in the deck, and the like in real time. Figs. 5A-5D illustrate GUIs of insights for the presenter based on the sentiment or engagement level. [0078] FIG. 5D illustrates a report 571 that may be provided to organizers or other stakeholders and that provides a summary 573 of engagement activity with live meetings).
It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the teaching of Osebe with Crawford to include a presenter electronic device with a presenter executing a presenter computer application, wherein the moderator computer program receives a presentation for a presenter to present to an audience comprising the plurality of attendees, the moderator computer program generates a suggested expertise level of presentation for the presenter based on the sentiment by tracking user engagement time, the moderator computer program provides the suggested level of presentation to the presenter computer program, and the moderator computer program provides a summary of the presentation to the presenter and the plurality of attendees, because doing so would provide efficient meeting management by gauging the engagement level of attendees and optimizing engagement.
While Osebe teaches estimating the overall sentiment or mood of the audience and monitoring viewers in real time, as detailed above with respect to Osebe [0030]-[0031], [0054]-[0055], and [0064], Osebe does not explicitly teach the following limitation. However, analogous reference Mane teaches:
the moderator computer program, using a trained machine learning engine, identifying a plurality of potential audience questions based on the presentation ([0014] In real-time, during a meeting, the artificial intelligence (AI) based NLP engine identifies questions from the chat transcript, and groups similar questions together. [0040] The program 205 performs question detection through the pre-trained AI Transformer model with the addition of the chat transcript captured in the live meeting and the presentation content that was loaded prior to the meeting. [0041] The program 205 further identifies and groups similar topics together through the pre-trained AI Transformer model and semantic analysis, with the addition of the chat transcript captured in the live meeting and the presentation content that was loaded prior to the meeting. Several factors may be considered, such as a count of the times the question was recognized, and a count of keywords that are found to match between the questions);
the moderator computer program identifying answers to the plurality of potential questions; the moderator computer program making the answers available to the audience during the presentation ([0049] At 315, where possible, a question is identified as possibly being answered automatically by the program 205. [0050] At 320, the program 205 determines whether the question has already been answered, either previously or because the answer can be located in the past session database 254. [0051] At 325, if the answer is available, both the question and the answer are displayed in the meeting chat, and processing of the question ends).
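For illustration of the question grouping and automatic answering mapped above: Mane describes a pre-trained AI Transformer model; the sketch below substitutes a simple keyword-overlap similarity so that the example is self-contained. All names are hypothetical assumptions.

```python
# Minimal sketch only. Mane uses a pre-trained AI Transformer model; a simple
# keyword-overlap (Jaccard) similarity stands in here so the example runs
# without external models. All names are hypothetical assumptions.

def similarity(q1: str, q2: str) -> float:
    """Keyword-overlap similarity, a stand-in for semantic analysis."""
    w1, w2 = set(q1.lower().split()), set(q2.lower().split())
    return len(w1 & w2) / len(w1 | w2) if w1 | w2 else 0.0


def group_questions(questions: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Group similar chat questions together, as in Mane [0014], [0041]."""
    groups: list[list[str]] = []
    for q in questions:
        for group in groups:
            if similarity(q, group[0]) >= threshold:
                group.append(q)  # join the first sufficiently similar group
                break
        else:
            groups.append([q])  # no similar group found: start a new one
    return groups


def answer_if_known(question: str, past_sessions: dict[str, str]) -> str | None:
    """Return a stored answer when the question was answered before (cf. [0050])."""
    for past_q, answer in past_sessions.items():
        if similarity(question, past_q) >= 0.5:
            return answer
    return None
```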
It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the teaching of Osebe and Crawford with Mane to include identifying, by using a trained machine learning engine, a plurality of potential audience questions based on the presentation, identifying answers to the plurality of potential questions, and making the answers available to the audience during the presentation, because doing so would provide efficient meeting management by gauging the engagement level of attendees and optimizing engagement.
While Osebe teaches estimating the overall sentiment or mood of the audience and monitoring viewers in real time, as detailed above with respect to Osebe [0030]-[0031], [0054]-[0055], and [0064], Osebe does not explicitly teach the following limitation. However, analogous reference Kratz teaches:
and automatically creating, by the moderator computer program, a clip of interactive points of the presentation ([0118] the reporting module 542 prepares (730) a report on overall audience interest and on individual interest in the presentation. For example, automated video-based meeting summary can be generated by the reporting module 542. In the meeting summary, clips from the first-person view captured by the head mounted device 306 can be used to highlight portions of content the viewers are interested in the most. In some implementations, based on the viewing activities stored at the server in the viewing activities database, the additional video feeds module 544 sends additional content relevant to a viewer for display on one or more devices associated with the viewer (e.g., the presentation screen 304, the computing device 308, and/or the head mounted device 306)).
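For illustration of the automated clip creation mapped above, the following minimal sketch cuts clip boundaries around runs of high audience interest; the threshold and padding parameters are hypothetical assumptions, not taken from Kratz.

```python
# Minimal sketch, assuming hypothetical parameters (not from Kratz): turn a
# per-second audience-interest signal into clip boundaries for a summary video.

def clip_boundaries(interest: list[float], threshold: float = 0.7,
                    pad: int = 5) -> list[tuple[int, int]]:
    """Return (start, end) second offsets covering runs of high interest."""
    clips: list[tuple[int, int]] = []
    start: int | None = None
    for t, score in enumerate(interest):
        if score >= threshold and start is None:
            start = max(0, t - pad)  # open a clip, padded a few seconds back
        elif score < threshold and start is not None:
            clips.append((start, min(len(interest), t + pad)))
            start = None
    if start is not None:  # close a clip still open at the end of the feed
        clips.append((start, len(interest)))
    return clips
```

Adjacent clips produced by the padding could subsequently be merged; that step is omitted for brevity.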
It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the teaching of Osebe, Crawford, and Mane with Kratz to include creating, by the moderator computer program, a clip of interactive points of the presentation, because doing so would provide efficient meeting management by gauging the engagement level of attendees and optimizing engagement.
Claim 10
Osebe further teaches:
The system of claim 9, wherein the presentation is a slide-based presentation ([0016] In an example scenario, e.g., scenario 1, a presenter may be speaking and presenting a series of presentation “slides” (e.g., pages), or material on a particular topic to an audience, some of whom may be in the same room and others may be in remote locations, e.g., watching on a webcast. [0053] analyzing a presentation material used in a presentation).
Claim 16
While Osebe teaches estimating the overall sentiment or mood of the audience and monitoring viewers in real time, as detailed above with respect to Osebe [0030]-[0031], [0054]-[0055], and [0064], Osebe does not explicitly teach the following limitation. However, analogous reference Crawford teaches:
The system of claim 9, further comprising the moderator computer program generates a suggestion of adjusting a level of detail of the presentation ([0089] Modifications to the presentation may be based upon reviewing the determined key metrics for each slide. For example, by reviewing the top slides with most actions and/or engagement, a presenter may determine that the slides with the most important content did not resonate with the participants and therefore review and update the indicated slides. In some embodiments, the system may provide a list of slides most likely to require revisions. In another example, a presenter may determine, based on the slide with most questions submitted with it, that more detail is required for a slide. In another example, after reviewing the notes associated with a particular slide, the presenter may determine that the format (i.e., highlight, bolding), or order of content on the slide requires updating).
It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the teaching of Osebe with Crawford to include generating a suggestion of adjusting a level of detail of the presentation, because doing so would provide efficient meeting management by gauging the engagement level of attendees and optimizing engagement.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Osebe in view of Crawford in view of Mane in view of Kratz, as applied in claim 9, and further in view of Edge.
Claim 15
While Osebe teaches estimating the overall sentiment or mood of the audience and monitoring viewers in real time, as detailed above with respect to Osebe [0030]-[0031], [0054]-[0055], and [0064], Osebe does not explicitly teach the following limitation. However, analogous reference Edge teaches:
The system of claim 9, further comprising the moderator computer program generates a suggestion of adjusting a rate at which the presentation is given ([0021] The adaptive timing engine 102 may space the timing signals based on a target time duration. The target time duration may be a total presentation time that the user 106 is seeking to achieve for the verbal delivery of the presentation. The adaptive timing engine 102 may allocate the target time duration among the number of slides in the presentations so that each slide has an allocated time interval. The time intervals may be allocated equally among the slides, or disproportionally allocated among the slides based on specific user inputs. The machine learning techniques may enable the adaptive timing engine 102 to infer a time interval to allocate to each of the slides based on the amount of text, images, notes, and/or embedded multimedia content in each slide. For example, the adaptive timing engine 102 may use machine learning to analyze the content of each slide, and allocate similar time intervals to slides that have similar amounts and/or types of content. In another example, the adaptive timing engine 102 may project a user allocated time interval for a slide to another slide that has similar amounts and/or types of content. The adaptive timing engine 102 may be configured to provide timing signals with respect to an approach, a completion, and/or an overrun with respect to the ends of the time intervals, in which the timing signals may be provided continuously, periodically, or with systematically varying intervals).
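For illustration of the time allocation described in Edge [0021], the following minimal sketch divides a target presentation duration among slides in proportion to the amount of content on each slide; the content-unit weighting is a hypothetical assumption standing in for the machine-learned inference.

```python
# Minimal sketch of the allocation in Edge [0021]; the content-unit weighting
# is a hypothetical assumption standing in for the machine-learned inference.

def allocate_slide_times(target_seconds: float,
                         content_units: list[int]) -> list[float]:
    """Divide the target duration among slides in proportion to their content.

    content_units approximates the amount of text, images, and notes per slide.
    """
    total = sum(content_units)
    if total == 0:
        # No content information: fall back to an equal split across slides.
        return [target_seconds / len(content_units)] * len(content_units)
    return [target_seconds * units / total for units in content_units]


# Example: a 30-minute talk over slides with 4, 1, and 5 content units is
# allocated 720 s, 180 s, and 900 s respectively.
assert allocate_slide_times(1800, [4, 1, 5]) == [720.0, 180.0, 900.0]
```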
It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the teaching of Osebe, Crawford, Mane, and Kratz with Edge to include generating a suggestion of adjusting a rate at which the presentation is given, because doing so would provide efficient meeting management by gauging the engagement level of attendees and optimizing engagement.
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Osebe in view of Crawford in view of Mane.
Claim 17
Osebe teaches:
A non-transitory computer readable storage medium, including instructions stored thereon, which when read and executed by one or more computer processors, cause the one or more computer processors to perform steps ([0084] a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to perform steps/functions) comprising: receiving a presentation for a presenter to present to an audience comprising a plurality of attendees ([0016] In an example scenario, e.g., scenario 1, a presenter may be speaking and presenting a series of presentation “slides” (e.g., pages), or material on a particular topic to an audience, some of whom may be in the same room and others may be in remote locations, e.g., watching on a webcast. [0053] analyzing a presentation material used in a presentation);
monitoring, by an attendee electronic device executed by an attendee computer based on reporting provided to the moderator computer program, the plurality of attendees to determine a sentiment for the audience by capturing audio data from an attendee capture device of the attendee computer and image capture data from an image capture device ([0054] At 104, the method can include monitoring one or more viewers of the presentation in real time. For instance, voice or speech analysis and natural language processing techniques can be employed to detect questions and comments generated by the viewers; image analysis such as facial expression analysis can be performed to determine a state of a viewer or audience. Voice or speech data can be received via a microphone or another sound detection device, for example, installed in the vicinity of the audience, or for example, connected or coupled to the audience or viewer's device or devices. Similarly, image data can be received via a camera or another photo or video taking device, for example, installed in the vicinity of the audience, or for example, connected or coupled to the audience or viewer's device or devices. [0064] considering audiences' (including remote audiences) current state of knowledge, mood, engagement level, and context);
and a face of an attendee during the presentation based on the image capture data from the image capture device, wherein the moderator computer program monitors facial expressions, audio feedback, and received questions for one or more of the plurality of attendees, and determines the engagement level using the facial expressions, audio feedback, and received questions ([0031] estimating overall sentiment or mood of the audience and using this estimation to determine content and style of slides to be presented as a presentation evolves, while a presenter is giving the presentation. For example, facial analysis may be performed with known technique or techniques to estimate the mood. Other sentiment analysis may be performed based on words spoken and prosody. [0054] voice or speech analysis and natural language processing techniques can be employed to detect questions and comments generated by the viewers; image analysis such as facial expression analysis can be performed to determine a state of a viewer or audience. [0055] Monitoring of viewers may be performed via any one or more of, but not limited to: analysis of words spoken (e.g., comments and feedback given, questions asked); language of viewers; analysis of ambient illumination or noise level and quality; analysis of gestures; analysis of collective movement (people entering or leaving); analysis of viewer cohort, cognitive state, and social network connections; analysis of participants' live comments on a smart e-presentation system or social media; determining one or more viewer positions in an organizational hierarchy. [0030] The metadata may be extracted from data associated with monitored viewers and may include data resulting from analysis of the real-time comments and feedback given, questions asked, analysis of gestures, and analysis of viewer cohort. [0064] considering audiences' (including remote audiences') current state of knowledge, mood, engagement level, and context).
While Osebe teaches estimating the overall sentiment or mood of the audience and monitoring viewers in real time, as detailed above with respect to Osebe [0030]-[0031], [0054]-[0055], and [0064], Osebe does not explicitly teach the following limitation. However, analogous reference Crawford teaches:
generating a suggested expertise level of presentation for the presenter based on the engagement level by tracking user engagement time and [...] ([0059] a report may include documents and/or raw data downloads that may be provided to a presenter and/or organizer. The report may provide a summary of engagement data. Reports may include visualizations of the slides and/or presentations a participant interacts with, key metrics, and a timeline of engagement. [0068] an organizer may monitor the number of participants logged in, the number of live connections, counts of all the engagement metrics (number of slides saved, presenter questions, etc.), responses to polls the participants are submitting, the time spent on a slide, time since the last engagement action, the number of slides left in the deck, and the like in real time. Figs. 5A-5D illustrate GUIs of insights for the presenter based on the sentiment or engagement level. [0078] FIG. 5D illustrates a report 571 that may be provided to organizers or other stakeholders and that provides a summary 573 of engagement activity with live meetings. [0103]-[0104] where the platform is integrated into a web conferencing platform, the presentation data may be augmented with audio transcript data from the presentation and may also be augmented with video data from the presentation; audio transcript data may be used in connection with a sentiment analysis module. The sentiment analysis module may be configured to apply natural language processing to the audio transcript data to determine how participants and presenters feel about the presentation (as a whole, and at individual moments of the presentation). The results from the sentiment analysis module may be integrated into engagement data and/or key metrics. [0088]-[0090] such a system may be configured to apply artificial intelligence to key metrics, combinations of metrics, and patterns to provide summaries and suggestions for areas of presentation improvement. For example, natural language processing, sentiment analysis, and topic modeling may be used to review the ratings and reviews for each meeting's evaluations and generate data. Based on the generated data, the system can automatically determine which sessions or areas are underperforming and provide alerts to the client. Additionally, some systems may be used to analyze data across a plurality of meetings in order to identify patterns across meetings, such as a specific presenter on a specific topic underperforming compared to another presenter. For example, such a system may be configured to review the comments across a plurality of meetings, flag “negative” comments, and identify that a particular presenter was ineffective because he or she was speaking too fast);
providing the suggested level of presentation to an electronic device associated with the presenter (Fig. 1 illustrates 115A-115C as electronic devices associated with the presenter; Figs. 5A-5D illustrate GUIs of insights for the presenter based on the sentiment or engagement level. [0078] FIG. 5D illustrates a report 571 that may be provided to organizers or other stakeholders and that provides a summary 573 of engagement activity with live meetings. [0079] Examples of data that may be displayed to a user may include an overall meeting summary including a graphical indication of engagement levels 575. For example, the summary may display what percentage of the participants were engaged, or highly engaged. [0089] Modifications to the presentation may be based upon reviewing the determined key metrics for each slide. For example, by reviewing the top slides with the most actions and/or engagement, a presenter may determine that the slides with the most important content did not resonate with the participants and therefore review and update the indicated slides. In some embodiments, the system may provide a list of slides most likely to require revisions. In another example, a presenter may determine, based on the slide with the most questions submitted with it, that more detail is required for a slide. [0090] Modifications to presentations may also be made by reviewing responses to polling questions. For example, if the responses to the polling questions indicate that the participant did not understand the content related to the polling question, the presenter may create new slides, change the existing slides, or the like);
providing a summary of the presentation to the presenter and the plurality of attendees ([0059] a report may include documents and/or raw data downloads that may be provided to a presenter and/or organizer. The report may provide a summary of engagement data. Reports may include visualizations of the slides and/or presentations a participant interacts with, key metrics, and a timeline of engagement. [0068] an organizer may monitor the number of participants logged in, the number of live connections, counts of all the engagement metrics (number of slides saved, presenter questions, etc.), responses to polls the participants are submitting, the time spent on a slide, time since the last engagement action, the number of slides left in the deck, and the like in real time. Figs. 5A-5D illustrate GUIs of insights for the presenter based on the sentiment or engagement level. [0078] FIG. 5D illustrates a report 571 that may be provided to organizers or other stakeholders and that provides a summary 573 of engagement activity with live meetings).
While Osebe teaches estimating the overall sentiment or mood of the audience and monitoring viewers in real time, as detailed above with respect to Osebe [0030]-[0031], [0054]-[0055], and [0064], Osebe does not explicitly teach the following limitation. However, analogous reference Mane teaches:
identifying, by a trained machine learning engine, a plurality of potential audience questions based on the presentation ([0014] In real-time, during a meeting, the artificial intelligence (AI) based NLP engine identifies questions from the chat transcript, and groups similar questions together. [0040] The program 205 performs question detection through the pre-trained AI Transformer model with the addition of the chat transcript captured in the live meeting and the presentation content that was loaded prior to the meeting. [0041] The program 205 further identifies and groups similar topics together through the pre-trained AI Transformer model and semantic analysis, with the addition of the chat transcript captured in the live meeting and the presentation content that was loaded prior to the meeting. Several factors may be considered, such as a count of the times the question was recognized, and a count of keywords that are found to match between the questions);
identifying answers to the plurality of potential questions; making the answers available to the audience during the presentation ([0049] At 315, where possible, a question is identified as possibly being answered automatically by the program 205. [0050] At 320, the program 205 determines whether the question has already been answered, either previously or because the answer can be located in the past session database 254. [0051] At 325, if the answer is available, both the question and the answer are displayed in the meeting chat, and processing of the question ends);
and providing to the electronic device associated with the presenter a bundle of questions selected from the received questions, the bundle of questions determined by the trained machine learning engine to be similar to each other ([0014] In real-time, during a meeting, the artificial intelligence (AI) based NLP engine identifies questions from the chat transcript, and groups similar questions together. The AI based NLP engine further understands the relevancy and complexity of a question and suggests answers from the presentation content and/or chat transcript. [0015] For each participant and for the presenter, the dashboard can include a web-based representation of questions grouped by topic, and for each group, the number of questions in the group as well as a suggested answer. [0047] At 305, the program 205 extracts each question from the live chat, using an available API. Similar questions are grouped together. [0049] At 315, where possible, a question is identified as possibly being answered automatically by the program 205).
It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the teaching of Osebe and Crawford with Mane to include identifying, by using a trained machine learning engine, a plurality of potential audience questions based on the presentation, identifying answers to the plurality of potential questions, making the answers available to the audience during the presentation, and providing to the electronic device associated with the presenter a bundle of questions selected from the received questions, the bundle of questions determined by the trained machine learning engine to be similar to each other, because doing so would provide efficient meeting management by gauging the engagement level of attendees and optimizing engagement.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Osebe in view of Crawford in view of Mane, as applied in claim 17, and further in view of Edge.
Claim 20
While Osebe teaches estimating the overall sentiment or mood of the audience and monitoring viewers in real time, as detailed above with respect to Osebe [0030]-[0031], [0054]-[0055], and [0064], Osebe does not explicitly teach the following limitation. However, analogous reference Crawford further teaches:
The non-transitory computer readable storage medium of claim 17, further comprising [...] and adjusting a level of detail of the presentation ([0089] Modifications to the presentation may be based upon reviewing the determined key metrics for each slide. For example, by reviewing the top slides with most actions and/or engagement, a presenter may determine that the slides with the most important content did not resonate with the participants and therefore review and update the indicated slides. In some embodiments, the system may provide a list of slides most likely to require revisions. In another example, a presenter may determine, based on the slide with most questions submitted with it, that more detail is required for a slide. In another example, after reviewing the notes associated with a particular slide, the presenter may determine that the format (i.e., highlight, bolding), or order of content on the slide requires updating).
It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the teaching of Osebe with Crawford to include generating a suggestion of adjusting a level of detail of the presentation, because doing so would provide efficient meeting management by gauging the engagement level of attendees.
While Osebe teaches estimating the overall sentiment or mood of the audience and monitoring viewers in real time, as detailed above with respect to Osebe [0030]-[0031], [0054]-[0055], and [0064], Osebe does not explicitly teach the following limitation. However, analogous reference Edge teaches:
adjusting a rate at which the presentation is given ([0021] The adaptive timing engine 102 may space the timing signals based on a target time duration. The target time duration may be a total presentation time that the user 106 is seeking to achieve for the verbal delivery of the presentation. The adaptive timing engine 102 may allocate the target time duration among the number of slides in the presentations so that each slide has an allocated time interval. The time intervals may be allocated equally among the slides, or disproportionally allocated among the slides based on specific user inputs. The machine learning techniques may enable the adaptive timing engine 102 to infer a time interval to allocate to each of the slides based on the amount of text, images, notes, and/or embedded multimedia content in each slide. For example, the adaptive timing engine 102 may use machine learning to analyze the content of each slide, and allocate similar time intervals to slides that have similar amounts and/or types of content. In another example, the adaptive timing engine 102 may project a user allocated time interval for a slide to another slide that has similar amounts and/or types of content. The adaptive timing engine 102 may be configured to provide timing signals with respect to an approach, a completion, and/or an overrun with respect to the ends of the time intervals, in which the timing signals may be provided continuously, periodically, or with systematically varying intervals).
It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the teaching of Osebe, Crawford, and Mane with Edge to include adjusting a rate at which the presentation is given, because doing so would provide efficient meeting management by gauging the engagement level of attendees and optimizing engagement.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 20150350269 A1
Information Processing Device and Information Processing Method
Shibata; Yukihiro
US 20160125426 A1
Determining Engagement Levels Based on Topical Interest
Francolla; Steven J. et al.
US 20200034408 A1
Dynamic Management of Content in An Electronic Presentation
Gourley; Sean et al.
US 20200403817 A1
Generating Customized Meeting Insights Based on User Interactions and Meeting Media
Daredia; Shehzad et al.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REHAM K ABOUZAHRA whose telephone number is (571)272-0419. The examiner can normally be reached M-F 7:00 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Epstein, can be reached at (571) 270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/REHAM K ABOUZAHRA/Examiner, Art Unit 3625
/BRIAN M EPSTEIN/Supervisory Patent Examiner, Art Unit 3625