DETAILED ACTION
Information Disclosure Statement
1. The information disclosure statements submitted are being considered by the examiner.
Double Patenting
2. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
3. Claims 1, 4, 11, 12 and 15 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4, 9, 11 and 15 of co-pending Application No. 18/572,585. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are broader in scope than the claims of the co-pending application, as shown in the table below.
Instant Application No. 18/661,314:

1. A method for providing on-video content during a video presentation by at least one user, the method comprising, during execution of one or more applications by an electronic device associated with at least a display unit and a capture element having a field of view including the at least one user: generating in a screen area of the display unit a first image layer comprising content associated with at least one of the one or more applications; generating in the screen area of the display unit a second image layer comprising a content window, wherein content displayed in the content window is provided in accordance with the at least one of the one or more applications, and wherein a generated location of the content window within the screen area can be arranged at a user-preferred position, the user-preferred position being: dependent at least in part on a determined location of the capture element; disposed within a side window of the one or more applications, such that the content window is accessible within an environment of the one or more applications; at least partially overlapping the first image layer; or a combination thereof.

Co-pending Application No. 18/572,585:

1. A method for providing on-video content during a video presentation by at least one user, the method comprising, during execution of one or more applications by an electronic device associated with at least a display unit and a capture element having a field of view including the at least one user: generating in a screen area of the display unit a first image layer comprising content associated with at least one of the one or more applications; generating in the screen area of the display unit a second image layer comprising an at least partially transparent content window, wherein the second image layer at least partially overlaps the first image layer, wherein content displayed in the content window is provided in accordance with the at least one of the one or more applications, and wherein a generated location of the content window within the screen area is dependent at least in part on a determined location of the capture element.

Instant Application No. 18/661,314:

4. The method of claim 1, further comprising automatically ascertaining a location of the at least one user relative to the capture element.

Co-pending Application No. 18/572,585:

4. The method of claim 2, further comprising automatically ascertaining a location of the at least one user relative to the capture element.

Instant Application No. 18/661,314:

11. A system for providing on-video content during a video presentation by at least one user, the system comprising: an electronic device comprising a processor functionally linked to at least a display unit and a capture element having a field of view including the at least one user, wherein the processor is configured, during execution of one or more applications via the electronic device, to: generate in a screen area of the display unit a first image layer comprising content associated with at least one of the one or more applications; generate in the screen area of the display unit a second image layer comprising a content window; and wherein a generated location of the content window within the screen area is arranged at a user-preferred position, the user-preferred position being: dependent at least in part on a determined location of the capture element; disposed within a side window of the one or more applications, such that the content window is accessible within an environment of the one or more applications; at least partially overlapping the first image layer; or a combination thereof.

Co-pending Application No. 18/572,585:

9. A system for providing on-video content during a video presentation by at least one user, the system comprising: an electronic device comprising a processor functionally linked to at least a display unit and a capture element having a field of view including the at least one user, wherein the processor is configured, during execution of one or more applications via the electronic device, to: generate in a screen area of the display unit a first image layer comprising content associated with at least one of the one or more applications; generate in the screen area of the display unit a second image layer comprising an at least partially transparent content window, wherein the second image layer at least partially overlaps the first image layer, wherein content displayed in the content window is provided in accordance with the at least one of the one or more applications, and wherein a generated location of the content window within the screen area is dependent at least in part on a determined location of the capture element.

Instant Application No. 18/661,314:

12. The system of claim 11, wherein the at least one of the one or more applications comprises a web conferencing platform.

Co-pending Application No. 18/572,585:

11. The system of claim 9, wherein the at least one of the one or more applications comprises a web conferencing platform.

Instant Application No. 18/661,314:

15. The system of claim 11, the processor being further configured to automatically ascertain a location of the at least one user relative to the capture element.

Co-pending Application No. 18/572,585:

15. The system of claim 13, wherein the processor is further configured to automatically ascertain a location of the at least one user relative to the capture element.
4. Claims 5, 6, 16 and 17 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of co-pending Application No. 18/572,585 in view of Cossar et al., U.S. Patent No. 12,052,299 (hereinafter Cossar).
Regarding claim 5, claim 1 of co-pending Application No. 18/572,585 does not teach automatically detecting a performance metric of the user via the capture element, analyzing the performance metric of the user, and automatically providing feedback to the user. However, Cossar discloses automatically detecting a performance metric of the user via the capture element, analyzing the performance metric of the user, and automatically providing feedback to the user (from Figure 3D, see eye contact). Therefore, it would have been obvious to one of ordinary skill in the art to modify claim 1 of co-pending Application No. 18/572,585 with automatically detecting a performance metric of the user via the capture element, analyzing the performance metric of the user, and automatically providing feedback to the user as taught by Cossar. This modification would have made the experience more effective by projecting confidence as suggested by Cossar.
Regarding claim 6, the combination of claim 1 of co-pending Application No. 18/572,585 and Cossar discloses that the performance metric comprises any one or more of a frequency of use of filler words, the user’s tone or confidence, a speed or pace of the presentation, the user’s adherence to the content, and an amount of user eye contact (from Figure 3D of Cossar, see eye contact) with the capture element.
Claim 16 is rejected for the same reasons as claim 5.
Claim 17 is rejected for the same reasons as claim 6.
5. Claims 2, 3, 7, 10, 13, 14 and 18 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of co-pending Application No. 18/572,585 in view of Afrasiabi, U.S. Patent Application Publication No. 2022/0286625 (hereinafter Afrasiabi).
Regarding claim 2, claim 1 of co-pending Application No. 18/572,585 does not teach automatically revising the content according to a user-preferred parameter, wherein the user-preferred parameter comprises a time limit for the video presentation. However, Afrasiabi discloses automatically revising the content according to a user-preferred parameter, wherein the user-preferred parameter comprises a time limit for the video presentation (from paragraph 0049, see Moreover, different fields or boxes 220 may be configured with different parameters. For example, a user may log into the digital overlay application 101, create an event 200 and upload one field 220 of content 240 comprising scrolling functions and three fields 220 static in position. Additionally, the speech and text parameters may be pre-set at a desired pace (like a teleprompter) so as to initiate the scrolling function for a field 220 comprising a script for a speech once the user begins a speech or live presentation). Therefore, it would have been obvious to one of ordinary skill in the art to modify claim 1 of co-pending Application No. 18/572,585 with automatically revising the content according to a user-preferred parameter, wherein the user-preferred parameter comprises a time limit for the video presentation as taught by Afrasiabi. This modification would have improved the system’s convenience by providing a teleprompter function as suggested by Afrasiabi.
Regarding claim 3, claim 1 of co-pending Application No. 18/572,585 as modified by Afrasiabi discloses that automatically revising the content comprises any one or more of: automatically adding additional content to satisfy the time limit, automatically increasing a scrolling speed of the content on the display unit to satisfy the time limit, removing content to satisfy the time limit, decreasing the scrolling speed (from paragraph 0049 of Afrasiabi, see stop the scrolling function of a particular field 220 based on a timer elapsing, an interaction with the user interface 115 indicating an instruction to stop scrolling, latency or no audio being detected, or the user going off script by way of recognizing the voice has departed from the pre-set speech so the speech pauses) of the content to satisfy the time limit, or a combination thereof.
Regarding claim 7, claim 1 of co-pending Application No. 18/572,585 does not teach the method of claim 1, further comprising providing an audience member with one or more applications for viewing the video presentation on an audience member electronic device that is associated with a second display unit; generating in a screen area of the second display unit a content-viewing window; wherein the video presentation is provided in the content-viewing window in accordance with the at least one of the one or more applications; generating and displaying a feedback input portion of the content-viewing window; accepting a feedback input from the audience member; automatically transmitting the feedback input to the electronic device; and displaying the feedback input to the user via the content window.
However, Afrasiabi discloses the method of claim 1,
further comprising providing an audience member with one or more applications (from Figure 1, see Network conferencing platform) for viewing the video presentation (from Figure 1, see Video feed 120) on an audience member electronic device (from Figure 1, see 150) that is associated with a second display unit;
generating in a screen area of the second display unit a content-viewing window; wherein the video presentation is provided in the content-viewing window in accordance with the at least one of the one or more applications (from paragraph 0028, see the content 240 projected from the digital overlay screen 112 is visible to the presenter (and any invited associates who are helping or communicating with the presenter by pre-approval));
generating and displaying a feedback input portion of the content-viewing window (from Figure 6, see Associate may view/edit content for event);
accepting a feedback input from the audience member (from Figure 6, see Associate may modify/add);
automatically transmitting the feedback input to the electronic device (from paragraph 0027, see transmitted to the presenter during the event); and
displaying the feedback input to the user via the content window (from paragraph 0028, see visible to the presenter).
Therefore, it would have been obvious to one of ordinary skill in the art to modify claim 1 of co-pending Application No. 18/572,585 with the method of claim 1, further comprising providing an audience member with one or more applications for viewing the video presentation on an audience member electronic device that is associated with a second display unit; generating in a screen area of the second display unit a content-viewing window; wherein the video presentation is provided in the content-viewing window in accordance with the at least one of the one or more applications; generating and displaying a feedback input portion of the content-viewing window; accepting a feedback input from the audience member; automatically transmitting the feedback input to the electronic device; and displaying the feedback input to the user via the content window as taught by Afrasiabi. This modification would have improved the system’s convenience by allowing for help as suggested by Afrasiabi.
Regarding claim 10, claim 1 of co-pending Application No. 18/572,585 does not teach that the content is displayed in the content window according to one or more parameters set via user input from the at least one user. However, Afrasiabi discloses that the content is displayed in the content window according to one or more parameters (from Figure 6, see 610) set via user input from the at least one user. Therefore, it would have been obvious to one of ordinary skill in the art to modify claim 1 of co-pending Application No. 18/572,585 wherein the content is displayed in the content window according to one or more parameters set via user input from the at least one user as taught by Afrasiabi. This modification would have improved the flexibility of claim 1 of co-pending Application No. 18/572,585 by providing different accessibility options as suggested by Afrasiabi.
Claim 13 is rejected for the same reasons as claim 2.
Claim 14 is rejected for the same reasons as claim 3.
Claim 18 is rejected for the same reasons as claim 7.
6. Claims 8, 9, 19 and 20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of co-pending Application No. 18/572,585 in view of Afrasiabi, and further in view of Christmas et al., U.S. Patent Application Publication No. 2019/0318883 (hereinafter Christmas).
Regarding claim 8, the combination of claim 1 of co-pending Application No. 18/572,585 and Afrasiabi does not teach:
an audience member capture element having a field of view including the audience member;
capturing an audience sentiment metric from the audience member via the capture element;
automatically analyzing the audience sentiment metric;
automatically generating a performance score, wherein the performance score is dependent on the audience sentiment metric;
and displaying the performance score to the user via the content window.
However, Christmas discloses:
an audience member capture element having a field of view including the audience member (from paragraph 0066, see attendee devices 1320 may comprise one or more sensors 1322. For example, sensors 1322 may comprise one or more cameras, microphones, thermal imaging sensors, accelerometers, compasses, etc. The sensors 1322 may be configured to detect actions of the attendees, capture feedback from the attendees, and/or the like. In various embodiments, the sensors 1322 may comprise a camera which captures still images or video of attendees. The sensors 1322 may comprise a microphone which detects audio, such as words, tones, or sounds. The sensors 1322 may comprise a thermal imaging (e.g., infrared) sensor which detects the number and/or locations of persons in view of the sensor);
capturing an audience sentiment metric from the audience member via the capture element (from paragraph 0068, see the sensor data processing software may comprise facial recognition software. The facial recognition software may be configured to detect where the attendee's eyes are looking, or the number or location of persons or faces in view of the sensor 1322 (e.g., a camera). The facial recognition software may detect particular locations on attendee devices 1320 where the attendees are looking, which may indicate which portion of the content the attendees are currently viewing (or not viewing). For example, a presenter may be talking about a first bullet point on a slide for several minutes, but the facial recognition software may detect that 90% of attendees were viewing a subsequent bullet point, indicating that the attendees were moving faster through the content than the presenter. Additionally, the facial recognition software may determine what portion of the attendees are looking at the attendee devices 1320 versus a different location entirely (e.g., five attendees were present and one was looking at attendee device 1320));
automatically analyzing the audience sentiment metric; automatically generating a performance score (from Figure 8, see SCORE), wherein the performance score is dependent on the audience sentiment metric; and
displaying the performance score to the user via the content window (from paragraph 0067, see In various embodiments, one or more of presenting device 1310, attendee devices 1320, or portable storage device 1100 may comprise sensor data processing software. The sensor data processing software may be configured to analyze the data captured by the sensors 1322 and present the data to the presenter. The analyzed data may be presented to the presenter using any suitable technique. For example, the analyzed data may be presented to the presenter in real time via presentation UI 1315. The analyzed data may also be presented in a generated report and transmitted via email, SMS, or the like).
Therefore, it would have been obvious to one of ordinary skill in the art to further modify the combination of claim 1 of co-pending Application No. 18/572,585 and Afrasiabi with an audience member capture element having a field of view including the audience member; capturing an audience sentiment metric from the audience member via the capture element; automatically analyzing the audience sentiment metric; automatically generating a performance score, wherein the performance score is dependent on the audience sentiment metric; and displaying the performance score to the user via the content window as taught by Christmas. This modification would have improved the system’s convenience by allowing the presenter to gauge the interest of attendees as suggested by Christmas.
Regarding claim 9, the combination of references as modified by Christmas discloses that the audience sentiment metric comprises the amount of time the audience member's eyes are directed toward the content-viewing window (from paragraph 0068 of Christmas, see the sensor data processing software may comprise facial recognition software. The facial recognition software may be configured to detect where the attendee's eyes are looking, or the number or location of persons or faces in view of the sensor 1322 (e.g., a camera). The facial recognition software may detect particular locations on attendee devices 1320 where the attendees are looking, which may indicate which portion of the content the attendees are currently viewing (or not viewing). For example, a presenter may be talking about a first bullet point on a slide for several minutes, but the facial recognition software may detect that 90% of attendees were viewing a subsequent bullet point, indicating that the attendees were moving faster through the content than the presenter. Additionally, the facial recognition software may determine what portion of the attendees are looking at the attendee devices 1320 versus a different location entirely (e.g., five attendees were present and one was looking at attendee device 1320)), a number of questions asked verbally by the audience member, a number of questions asked in a chat feature of the one or more applications, a time the audience member spends in front of the capture element, a number of times the audience member looks away from the content-viewing window, the total amount of time the audience member spends looking away from the content-viewing window during the video presentation, audience participation in polls, audience time spent speaking as compared to the speaker time spent speaking, or a combination thereof.
Claim 19 is rejected for the same reasons as claim 8.
Claim 20 is rejected for the same reasons as claim 9.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim Rejections - 35 USC § 102
7. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
8. Claims 1, 2, 3, 7, 10-14 and 18 are rejected under 35 U.S.C. § 102(a)(2) as being anticipated by Afrasiabi.
Regarding claim 1, Afrasiabi discloses a method for providing on-video content during a video presentation by at least one user, the method comprising, during execution of one or more applications by an electronic device (from Figure 1, see 110) associated with at least a display unit (from Figure 1, see 102) and a capture element (from Figure 1, see 103) having a field of view including the at least one user:
generating in a screen area (from Figure 7, see 115) of the display unit a first image layer comprising content associated with at least one of the one or more applications (from Figure 7, see 120);
generating in the screen area of the display unit a second image layer comprising a content window (from Figure 7, see 220), and
wherein content displayed in the content window is provided in accordance with the at least one of the one or more applications (from Figure 7, see 240), and
wherein a generated location of the content window (from Figure 7, see 220) within the screen area (from Figure 7, see 115) can be arranged at a user-preferred position (from paragraph 0007, see The content may be customizable as to location on the digital overlay screen or may have pre-determined locations), the user-preferred position being:
dependent at least in part on a determined location of the capture element;
disposed within a side window of the one or more applications, such that the content window is accessible within an environment of the one or more applications;
at least partially overlapping the first image layer (from abstract, see the digital overlay screen may overlay on a video feed of the telecommunications without distorting said feed); or
a combination thereof.
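For illustration only, the camera-dependent positioning limitation mapped above can be expressed in a few lines of code. The following Python sketch is hypothetical: its names, coordinates, and clamping policy are assumptions for illustration, not taken from Afrasiabi or the claims; it merely shows one way a content window's location could be made dependent on a determined capture-element location.

```python
# Hypothetical sketch: position a content window near a detected camera
# location so the presenter's gaze stays close to the lens. None of these
# names or values come from Afrasiabi or the claims.
from dataclasses import dataclass

@dataclass
class Rect:
    x: int  # left edge, pixels
    y: int  # top edge, pixels
    w: int  # width, pixels
    h: int  # height, pixels

def position_near_camera(screen: Rect, camera_x: int, window: Rect) -> Rect:
    """Center the window horizontally under the capture element, clamped
    so it stays within the screen area and near the top bezel."""
    x = camera_x - window.w // 2
    x = max(screen.x, min(x, screen.x + screen.w - window.w))
    return Rect(x=x, y=screen.y, w=window.w, h=window.h)

# Example: a 1920x1080 screen with the camera detected at x = 960.
print(position_near_camera(Rect(0, 0, 1920, 1080), 960, Rect(0, 0, 400, 150)))
```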
Regarding claim 2, Afrasiabi discloses automatically revising the content according to a user-preferred parameter, wherein the user-preferred parameter comprises a time limit for the video presentation (from paragraph 0049, see Moreover, different fields or boxes 220 may be configured with different parameters. For example, a user may log into the digital overlay application 101, create an event 200 and upload one field 220 of content 240 comprising scrolling functions and three fields 220 static in position. Additionally, the speech and text parameters may be pre-set at a desired pace (like a teleprompter) so as to initiate the scrolling function for a field 220 comprising a script for a speech once the user begins a speech or live presentation).
Regarding claim 3, Afrasiabi discloses that automatically revising the content comprises any one or more of: automatically adding additional content to satisfy the time limit, automatically increasing a scrolling speed of the content on the display unit to satisfy the time limit, removing content to satisfy the time limit, decreasing the scrolling speed (from paragraph 0049, see stop the scrolling function of a particular field 220 based on a timer elapsing, an interaction with the user interface 115 indicating an instruction to stop scrolling, latency or no audio being detected, or the user going off script by way of recognizing the voice has departed from the pre-set speech so the speech pauses) of the content to satisfy the time limit, or a combination thereof.
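The scroll-speed revision recited in claims 3 and 14 reduces to simple pacing arithmetic. A minimal Python sketch follows, assuming a hypothetical fixed line height; nothing here is taken from Afrasiabi's paragraph 0049.

```python
# Hypothetical sketch: pick a teleprompter scroll rate so the remaining
# script fits within the remaining presentation time.
def scroll_speed(lines_remaining: int, seconds_remaining: float,
                 line_height_px: int = 24) -> float:
    """Scroll rate in pixels per second needed to finish on time."""
    if seconds_remaining <= 0:
        raise ValueError("time limit already reached")
    return lines_remaining * line_height_px / seconds_remaining

# Example: 120 script lines left with four minutes remaining.
print(f"{scroll_speed(120, 240):.1f} px/s")  # 12.0 px/s
```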
Regarding claim 7, Afrasiabi discloses the method of claim 1,
further comprising providing an audience member with one or more applications (from Figure 1, see Network conferencing platform) for viewing the video presentation (from Figure 1, see Video feed 120) on an audience member electronic device (from Figure 1, see 150) that is associated with a second display unit;
generating in a screen area of the second display unit a content-viewing window; wherein the video presentation is provided in the content-viewing window in accordance with the at least one of the one or more applications (from paragraph 0028, see the content 240 projected from the digital overlay screen 112 is visible to the presenter (and any invited associates who are helping or communicating with the presenter by pre-approval));
generating and displaying a feedback input portion of the content-viewing window (from Figure 6, see Associate may view/edit content for event);
accepting a feedback input from the audience member (from Figure 6, see Associate may modify/add);
automatically transmitting the feedback input to the electronic device (from paragraph 0027, see transmitted to the presenter during the event); and
displaying the feedback input to the user via the content window (from paragraph 0028, see visible to the presenter).
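The feedback path mapped above (accept an input, transmit it, display it in the content window) can be sketched as follows; the in-process queue is a hypothetical stand-in for the transmission described in Afrasiabi's paragraph 0027, and all names are illustrative assumptions.

```python
# Hypothetical sketch: relay an audience member's feedback to the
# presenter's content window. A queue stands in for the network link.
import queue

feedback_queue: "queue.Queue[str]" = queue.Queue()

def submit_feedback(text: str) -> None:
    """Runs on the audience member's device when feedback is entered."""
    feedback_queue.put(text)

def drain_to_content_window() -> list[str]:
    """Runs on the presenter's device; returns lines to overlay."""
    lines = []
    while not feedback_queue.empty():
        lines.append(feedback_queue.get())
    return lines

submit_feedback("Could you go back to the previous slide?")
print(drain_to_content_window())
```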
Regarding claim 10, Afrasiabi discloses that the content is displayed in the content window according to one or more parameters (from Figure 6, see 610) set via user input from the at least one user.
Regarding claim 11, Afrasiabi discloses a system for providing on-video content during a video presentation by at least one user, the system comprising:
an electronic device (from Figure 1, see 110) comprising a processor (from Figure 1, see 109) functionally linked to at least a display unit (from Figure 1, see 102) and a capture element (from Figure 1, see 103) having a field of view including the at least one user,
wherein the processor is configured, during execution of one or more applications via the electronic device, to:
generate in a screen area (from Figure 7, see 115) of the display unit a first image layer comprising content associated with at least one of the one or more applications (from Figure 7, see 120);
generate in the screen area of the display unit a second image layer comprising a content window (from Figure 7, see 220), and
wherein a generated location of the content window (from Figure 7, see 220) within the screen area (from Figure 7, see 115) is arranged at a user-preferred position (from paragraph 0007, see The content may be customizable as to location on the digital overlay screen or may have pre-determined locations), the user-preferred position being:
dependent at least in part on a determined location of the capture element;
disposed within a side window of the one or more applications, such that the content window is accessible within an environment of the one or more applications;
at least partially overlapping the first image layer (from abstract, see the digital overlay screen may overlay on a video feed of the telecommunications without distorting said feed); or
a combination thereof.
Regarding claim 12, Afrasiabi discloses that the at least one of the one or more applications comprises a web conferencing platform (from paragraph 0024, see network conferencing program 100 may be Zoom, Microsoft Teams, BlueJeans, FaceTime, Skype, Webex Meetings, GoTo Meeting, or any other program which allows individuals to video conference efficiently and in real-time to participants on remote devices in remote locations).
Claim 13 is rejected for the same reasons as claim 2.
Claim 14 is rejected for the same reasons as claim 3.
Claim 18 is rejected for the same reasons as claim 7.
Claim Rejections - 35 USC § 103
9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
10. Claims 4-6 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Afrasiabi in view of Cossar.
Regarding claim 4, Afrasiabi does not teach automatically ascertaining a location of the at least one user relative to the capture element. However, Cossar discloses automatically ascertaining a location of the at least one user relative to the capture element (from Figure 4B, see Lens Too). Therefore, it would have been obvious to one of ordinary skill in the art to modify Afrasiabi with automatically ascertaining a location of the at least one user relative to the capture element as taught by Cossar. This modification would have made the experience more effective by providing proper framing as suggested by Cossar.
Regarding claim 5, Afrasiabi does not teach automatically detecting a performance metric of the user via the capture element, analyzing the performance metric of the user, and automatically providing feedback to the user. However, Cossar discloses automatically detecting a performance metric of the user via the capture element, analyzing the performance metric of the user, and automatically providing feedback to the user (from Figure 3D, see eye contact). Therefore, it would have been obvious to one of ordinary skill in the art to modify Afrasiabi with automatically detecting a performance metric of the user via the capture element, analyzing the performance metric of the user, and automatically providing feedback to the user as taught by Cossar. This modification would have made the experience more effective by projecting confidence as suggested by Cossar.
Regarding claim 6, the combination of Afrasiabi and Cossar discloses that the performance metric comprises any one or more of a frequency of use of filler words, the user’s tone or confidence, a speed or pace of the presentation, the user’s adherence to the content, and an amount of user eye contact (from Figure 3D of Cossar, see eye contact) with the capture element.
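For illustration, one of the recited performance metrics, the frequency of use of filler words, can be computed directly from a transcript. The word list and feedback threshold in the sketch below are assumptions for illustration, not taken from Cossar.

```python
# Hypothetical sketch: detect a filler-word performance metric and emit
# simple automatic feedback. Word list and threshold are illustrative.
FILLERS = {"um", "uh", "er", "like"}

def filler_rate(transcript: str) -> float:
    """Fraction of transcript words that are filler words."""
    words = transcript.lower().split()
    if not words:
        return 0.0
    return sum(1 for w in words if w in FILLERS) / len(words)

rate = filler_rate("So um this is uh the main point of the talk")
feedback = ("Try pausing instead of using filler words."
            if rate > 0.05 else "Good pacing.")
print(f"filler rate {rate:.0%}: {feedback}")  # filler rate 18%: Try pausing...
```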
Claim 15 is rejected for the same reasons as claim 4.
Claim 16 is rejected for the same reasons as claim 5.
Claim 17 is rejected for the same reasons as claim 6.
11. Claims 8, 9, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Afrasiabi in view of Christmas.
Regarding claim 8, Afrasiabi does not teach:
an audience member capture element having a field of view including the audience member;
capturing an audience sentiment metric from the audience member via the capture element;
automatically analyzing the audience sentiment metric;
automatically generating a performance score, wherein the performance score is dependent on the audience sentiment metric;
and displaying the performance score to the user via the content window.
However, Christmas discloses:
an audience member capture element having a field of view including the audience member (from paragraph 0066, see attendee devices 1320 may comprise one or more sensors 1322. For example, sensors 1322 may comprise one or more cameras, microphones, thermal imaging sensors, accelerometers, compasses, etc. The sensors 1322 may be configured to detect actions of the attendees, capture feedback from the attendees, and/or the like. In various embodiments, the sensors 1322 may comprise a camera which captures still images or video of attendees. The sensors 1322 may comprise a microphone which detects audio, such as words, tones, or sounds. The sensors 1322 may comprise a thermal imaging (e.g., infrared) sensor which detects the number and/or locations of persons in view of the sensor);
capturing an audience sentiment metric from the audience member via the capture element (from paragraph 0068, see the sensor data processing software may comprise facial recognition software. The facial recognition software may be configured to detect where the attendee's eyes are looking, or the number or location of persons or faces in view of the sensor 1322 (e.g., a camera). The facial recognition software may detect particular locations on attendee devices 1320 where the attendees are looking, which may indicate which portion of the content the attendees are currently viewing (or not viewing). For example, a presenter may be talking about a first bullet point on a slide for several minutes, but the facial recognition software may detect that 90% of attendees were viewing a subsequent bullet point, indicating that the attendees were moving faster through the content than the presenter. Additionally, the facial recognition software may determine what portion of the attendees are looking at the attendee devices 1320 versus a different location entirely (e.g., five attendees were present and one was looking at attendee device 1320));
automatically analyzing the audience sentiment metric; automatically generating a performance score (from Figure 8, see SCORE), wherein the performance score is dependent on the audience sentiment metric; and
displaying the performance score to the user via the content window (from paragraph 0067, see In various embodiments, one or more of presenting device 1310, attendee devices 1320, or portable storage device 1100 may comprise sensor data processing software. The sensor data processing software may be configured to analyze the data captured by the sensors 1322 and present the data to the presenter. The analyzed data may be presented to the presenter using any suitable technique. For example, the analyzed data may be presented to the presenter in real time via presentation UI 1315. The analyzed data may also be presented in a generated report and transmitted via email, SMS, or the like).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Afrasiabi with an audience member capture element having a field of view including the audience member; capturing an audience sentiment metric from the audience member via the capture element; automatically analyzing the audience sentiment metric; automatically generating a performance score, wherein the performance score is dependent on the audience sentiment metric; and displaying the performance score to the user via the content window as taught by Christmas. This modification would have improved the system’s convenience by allowing the presenter to gauge the interest of attendees as suggested by Christmas.
Regarding claim 9, the combination of Afrasiabi and Christmas discloses that the audience sentiment metric comprises the amount of time the audience member's eyes are directed toward the content-viewing window (from paragraph 0068 of Christmas, see the sensor data processing software may comprise facial recognition software. The facial recognition software may be configured to detect where the attendee's eyes are looking, or the number or location of persons or faces in view of the sensor 1322 (e.g., a camera). The facial recognition software may detect particular locations on attendee devices 1320 where the attendees are looking, which may indicate which portion of the content the attendees are currently viewing (or not viewing). For example, a presenter may be talking about a first bullet point on a slide for several minutes, but the facial recognition software may detect that 90% of attendees were viewing a subsequent bullet point, indicating that the attendees were moving faster through the content than the presenter. Additionally, the facial recognition software may determine what portion of the attendees are looking at the attendee devices 1320 versus a different location entirely (e.g., five attendees were present and one was looking at attendee device 1320)), a number of questions asked verbally by the audience member, a number of questions asked in a chat feature of the one or more applications, a time the audience member spends in front of the capture element, a number of times the audience member looks away from the content-viewing window, the total amount of time the audience member spends looking away from the content-viewing window during the video presentation, audience participation in polls, audience time spent speaking as compared to the speaker time spent speaking, or a combination thereof.
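The gaze-time sentiment metric and performance score recited in claims 8 and 9 can be illustrated with a short sketch. The per-frame gaze samples and the 0-100 scaling below are assumptions for illustration, not Christmas's facial-recognition pipeline.

```python
# Hypothetical sketch: an audience sentiment metric (fraction of sampled
# frames with eyes on the content-viewing window) and a derived score.
def gaze_fraction(gaze_samples: list[bool]) -> float:
    """gaze_samples holds one boolean per sampled frame: True if the
    attendee's eyes were directed toward the content-viewing window."""
    return sum(gaze_samples) / len(gaze_samples) if gaze_samples else 0.0

def performance_score(fractions: list[float]) -> int:
    """Average attention across attendees, scaled to 0-100."""
    return round(100 * sum(fractions) / len(fractions)) if fractions else 0

attendees = [[True, True, False, True], [False, False, True, True]]
print(performance_score([gaze_fraction(a) for a in attendees]))  # 62
```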
Claim 19 is rejected for the same reasons as claim 8.
Claim 20 is rejected for the same reasons as claim 9.
Conclusion
12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLISA ANWAH whose telephone number is 571-272-7533. The examiner can normally be reached Monday through Friday from 8:30 AM to 6:00 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carolyn Edwards, can be reached at 571-270-7136. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the receptionist whose telephone number is 571-272-2600.
Olisa Anwah
Patent Examiner
December 10, 2025
/OLISA ANWAH/Primary Examiner, Art Unit 2692
/CAROLYN R EDWARDS/Supervisory Patent Examiner, Art Unit 2692