DETAILED ACTION
This action is responsive to the communication filed 06/25/2025.
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 6, 8-10, 13-17, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Poteet (US Pub. 2020/0380264).
Regarding claim 1, Poteet discloses a method (see fig. 2) comprising: at a device including a display (a graphical user interface (GUI) display 110 on a screen of computing device (e.g., 324)-see figs. 1 and 3), one or more processors (see, for example, [0005], [0041] and fig. 3-one or more processors, e.g., processor 304), and non-transitory memory (a non-transitory memory component 306-see fig. 3): displaying, on the display, one or more slides of a presentation (audio-visual screen 734 configured to display a presentation 114-see fig. 7 and [0074]); recording audio of a user during a performance of the presentation (recording verbal presentation-see block 202 of fig. 2); generating feedback based on comparing the audio of the user to previously recorded audio of the user during a prior performance of the presentation; and providing the feedback to the user (analyzing video recording of the verbal presentation (see block 204) and automatically providing feedback based on analysis of the video recording (see block 206) … video analysis tool 10 may further automatically compare the recorded verbal presentation of the individual 102 with previous recordings of the individual 102, whether of the same or different verbal presentations, to generate a report on trends or other useful automated feedback with respect to the individual-see [0040]).
Regarding claim 13, Poteet discloses a device comprising: a display (a graphical user interface (GUI) display 110 on a screen of computing device (e.g., 324)-see figs. 1 and 3); a non-transitory memory (a non-transitory memory component 306-see fig. 3); and one or more processors (see, for example, [0005], [0041] and fig. 3-one or more processors, e.g., processor 304) to: display, on the display, one or more slides of a presentation (audio-visual screen 734 configured to display a presentation 114-see fig. 7 and [0074]); record audio of a user during a performance of the presentation (recording verbal presentation-see block 202 of fig. 2); generate feedback based on comparing the audio of the user to previously recorded audio of the user during a prior performance of the presentation; and provide the feedback to the user (analyzing video recording of the verbal presentation (see block 204) and automatically providing feedback based on analysis of the video recording (see block 206) … video analysis tool 10 may further automatically compare the recorded verbal presentation of the individual 102 with previous recordings of the individual 102, whether of the same or different verbal presentations, to generate a report on trends or other useful automated feedback with respect to the individual-see [0040]).
Regarding claims 2 and 14, Poteet discloses wherein generating the feedback includes comparing content of the audio of the user to content of the previously recorded audio of the user (see [0040]).
Regarding claims 3 and 15, Poteet discloses wherein generating the feedback includes comparing a time of the audio of the user to a corresponding time of the previously recorded audio of the user (video analysis may include timestamp feedback. Timestamp feedback parameters may include feedback based on specific moments in time within the video recording to show exactly what the feedback is related to and what prompted it, such as moments at which a certain gesture or word was used or moments at which the focus of the individual 102 shifted with respect to the audience-see [0026]-[0027]).
Regarding claim 4, Poteet discloses wherein generating the feedback includes comparing an intelligibility of speech of the audio of the user to an intelligibility of speech of the previously recorded audio of the user (see [0029] and figs. 5-12-audio and/or speech components may be analyzed and provided … audio and/or speech feedback components associated with presentation 114 by the video analysis tool may include, but are not limited to, talk speed, vocal quality, vocal fillers, conclusion signaling, conclusion summarization, planning, etc.).
Regarding claims 6 and 16, Poteet discloses wherein generating the feedback is further based on comparing the audio of the user to the one or more slides of the presentation (video analysis tool 10 may further automatically compare the recorded verbal presentation of the individual 102 with previous recordings of the individual 102, whether of the same or different verbal presentations-see [0046] and [0074]).
Regarding claim 8, Poteet discloses wherein generating the feedback is further based on movement of the user during the performance of the presentation (see fig. 6 with description in [0060]-[0062]-using a body heat map 600 to track and analyze movement of a body of the individual 102 during presentation 114).
Regarding claims 9 and 17, Poteet discloses wherein providing the feedback to the user is performed during the performance of the presentation (video analysis system 300 may generate and automatically provide one or more items of feedback information or recommendations, through feedback generation module 316-see fig. 7 and [0081]).
Regarding claim 10, Poteet discloses wherein providing the feedback to the user includes displaying, on the display, a feedback notification (see figs. 7-11).
Regarding claim 20, Poteet discloses a non-transitory memory storing one or more programs (a non-transitory memory component 306-see fig. 3 and [0041]), which, when executed by one or more processors (see, for example, [0005], [0041] and fig. 3-one or more processors, e.g., processor 304) of a device including a display (a graphical user interface (GUI) display 110 on a screen of computing device (e.g., 324)-see figs. 1 and 3), cause the device to: display, on the display, one or more slides of a presentation (audio-visual screen 734 configured to display a presentation 114-see fig. 7 and [0074]); record audio of a user during a performance of the presentation (recording verbal presentation-see block 202 of fig. 2); generate feedback based on comparing the audio of the user to previously recorded audio of the user during a prior performance of the presentation; and provide the feedback to the user (analyzing video recording of the verbal presentation (see block 204) and automatically providing feedback based on analysis of the video recording (see block 206) … video analysis tool 10 may further automatically compare the recorded verbal presentation of the individual 102 with previous recordings of the individual 102, whether of the same or different verbal presentations, to generate a report on trends or other useful automated feedback with respect to the individual-see [0040]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5, 7, 11-12, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Poteet in view of Gupta et al. (US Pub. 2016/0049094), hereinafter Gupta.
Regarding claim 5, Poteet does not appear to expressly disclose wherein generating the feedback is further based on comparing slide transition times during the performance of the presentation to previously recorded slide transition times during the prior performance of the presentation.
Gupta is relied upon to teach wherein generating the feedback is further based on comparing slide transition times during the performance of the presentation to previously recorded slide transition times during the prior performance of the presentation (see, for example, [0082]-presentation features extracted from presentation materials (by materials analysis engine 158-see fig. 7) include when user 10 advances to the next slide).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Gupta with the invention of Poteet to include analyzing slide transition times in rehearsing presentations, as taught by Gupta, which constitutes combining prior art elements according to known methods to yield predictable results (i.e., providing an enhanced presentation and public speaking training system with environmental simulation and real-time feedback-see [0009]).
Regarding claim 7, Gupta is further relied upon to teach wherein generating the feedback is further based on a biometric of the user during the performance of the presentation (see Gupta et al. 2016/0049094-see figs. 2-7 with description in [0029], [0054], and [0081], which teaches a biometric reader that reads biometrics of a user 10, transmits data feed representing the biometrics to a biometrics analysis engine 156, and a training system provides real-time feedback during a presentation).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Gupta with the invention of Poteet to include generating feedback based on biometric information of the user, as taught by Gupta, in order to provide a dynamic, goal-based, educational experience (see [0029]).
Regarding claims 11 and 18, Gupta is further relied upon to teach wherein providing the feedback to the user includes altering display of the one or more slides of the presentation (see fig. 12d with description in [0111]-examples of real-time feedback 224, wherein application 100 displays a feature or metric graph 260 while user 10 is presenting).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Gupta with the invention of Poteet to include providing feedback by altering display of one or more slides of the presentation, as taught by Gupta, which constitutes combining prior art elements according to known methods to yield predictable results (i.e., providing an enhanced presentation and public speaking training system with environmental simulation and real-time feedback-see [0009]).
Regarding claims 12 and 19, Gupta is further relied upon to teach further comprising: displaying an XR environment representation of the prior performance of the presentation based on the previously recorded audio of the user during the prior performance of the presentation and additional data of the user during the prior performance of the presentation (see, for example, [0100], which teaches recording, analyzing and reviewing past presentations of user 10 by application 100, and fig. 12e with description in [0114]-[0117], which further teach using a virtual reality (VR) headset 266 or augmented reality for application 100).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a virtual or mixed reality environment within the invention of Poteet, as taught by Gupta, so that user 10 is able to give a practice presentation in an actual room where a real performance will later be given, with application 100 simulating a realistic audience in the room (see [0116]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARDIS F AZONGHA whose telephone number is (571)270-7706. The examiner can normally be reached 10:00 AM-7:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ke Xiao can be reached at (571)272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SARDIS F AZONGHA/Primary Examiner, Art Unit 2627