DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. Claims 1-3, 5-17 and 19-21 are currently pending in this application.
Claims 1, 11-12, and 16-17 are amended as filed on 02/04/2026.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 8, 10-13, 15-17, and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Masi et al. (Pre-Grant Publication No. US 2021/0352120 A1), hereinafter Masi, in view of Munoz et al. (Patent No. US 10,893,087 B1), hereinafter Munoz, and in further view of Daw et al. (Pre-Grant Publication No. US 2021/0136447 A1), hereinafter Daw.
2. With respect to claim 1, Masi taught a system for integrating third-party video content into a video conference session of a plurality of a presenter attendee and non-presenter attendees (0052, where the presenters can be toggled on and off and assigned to different participants. Accordingly, the different participants, when not in presenter mode, are the non-presenters. See also, 0071, the groups of presenters) comprising: a plurality of user devices corresponding to a presenter attendee and a plurality of non-presenter attendees of the video conference session (0071, where the different groups have their own devices, where the video conference can be seen in 0023), where each of the plurality of user devices comprising: a) at least one processor (0162); b) a video conferencing application (0023, the meeting room), comprising: i) a first interface for receiving connection information to the third-party video content as selected by the presenter attendee (figure 15, item 1502, where the link is provided, and then selecting it would open the embedded window. See also 0056, where the videos are dragged and dropped into the room); ii) a second interface for embedding and displaying the third-party video content on each of the plurality of user devices, wherein the video content is streamed directly from a third-party content provider to each of the plurality of user devices (0098, where the iframe or embedded window is the second interface); and a video conference server for relaying state changes of the third-party video content as the video content is streamed to the plurality of non-presenter user devices (0048, where the user’s input would initiate the state changes as the collaboration server presents the content).
However, Masi did not explicitly state monitoring the actions of the user device of the presenter attendee and to identify state changes made by the presenter attendee to the third-party video content and in response, relaying information regarding the state changes; wherein the system causes each of the user devices of the non-presenter attendees to receive a separate respective stream of the video content upon using the communicated connection information, and selecting to make identified state changes made by the presenter attendee to the third-party video content. On the other hand, Munoz did teach monitoring the actions of the user device of the presenter attendee and to identify state changes made by the presenter attendee to the third-party video content and in response, relaying information regarding the state changes (9:10-21, where the third-party video content platform can be seen in 16:51-66); wherein the system causes each of the user devices of the non-presenter attendees to receive a separate respective stream of the video content upon using the communicated connection information (23:34-67), and selecting to make identified state changes made by the presenter attendee to the third-party video content (12:29-39). Both of the systems of Masi and Munoz are directed towards relaying state information from video feeds and therefore, it would have been obvious to a person having ordinary skill in the art, at the time of the effective filing of the invention, to modify the teachings of Masi, to utilize relaying the state information from a third-party website, as taught by Munoz, in order to increase the system’s usability by providing third party direct video sources.
However, Masi did not explicitly state wherein the video content is concurrently streamed during the video conference session; wherein the non-presenter user devices are enabled to select whether to make or not make the same state changes to the respective separate streams. On the other hand, Daw did teach wherein the video content is concurrently streamed during the video conference session (0017); wherein the non-presenter user devices are enabled to select whether to make or not make the same state changes to the respective separate streams (0063). Both of the systems of Masi and Daw are directed towards managing embedded content and therefore, it would have been obvious to a person having ordinary skill in the art, at the time of the effective filing of the invention, to modify the teachings of Masi, to allow user selection of embedded content, as taught by Daw, in order to allow video conferences to seamlessly manipulate embedded video content.
3. As for claim 2, it is rejected on the same basis as claim 1. In addition, Masi taught wherein the video conference server relays state changes of the third-party video content when a state change is made on the user device corresponding to the presenter attendee (0048, where the user’s input would initiate the state changes as the collaboration server presents the content).
4. As for claim 3, it is rejected on the same basis as claim 1. In addition, Masi taught wherein the second interface is a third-party iFrame comprising an embedded third-party video player (0098, where the iFrame can be seen).
5. As for claim 8, it is rejected on the same basis as claim 1. In addition, Masi taught wherein the video conferencing application further comprises audio control capabilities from a user device corresponding to a non-presenter attendee, wherein audio control information includes adjusting a volume of the video content and/or adjusting closed captioning, wherein audio control changes audio settings for the user device corresponding to the non-presenter attendee only (0127, where the audio stream can be seen. Likewise, the user’s particular control can be seen in 0051).
6. As for claim 10, it is rejected on the same basis as claim 1. In addition, Masi taught wherein the video conference session attendees corresponding to the plurality of user devices are displayed alongside the video content (figure 15, where the attendees can be seen).
7. With respect to claim 11, Masi taught a computer-implemented method of embedding third-party video content in a video conference session (0098, where the iframe & embedded window can be seen), comprising: receiving, from a user device of a presenter attendee in the video conference session, a link to third-party video content selected by the presenter attendee to be presented in the video conference session (0098, where the embedded video can be seen. figure 15, item 1502, where the link is provided, and then selecting it would open the embedded window. See also 0056, where the videos are dragged and dropped into the room); and communicating the link to the third-party video content to a plurality of non-presenter user devices in the video conference session (0098, where the embedded window is presented to the users as can be seen in 0035), wherein execution of the link by any of the plurality of the non-presenter user devices causes each of the executing plurality of the non-presenter user devices to embed and display, using a third party iFrame application programming interface (0098, where the iframe can be seen and the link can be posted by anyone with access to the conference session that is allowed access under broadest reasonable interpretation), the third-party video content on each of the plurality of the non-presenter user devices locally (0048, the application executing on the local participant's computing device), and wherein the third-party video content is streamed directly from a third-party content provider to each of the plurality of the non-presenter user devices using the link (figure 15, item 1502, where the link is provided, and then selecting it would open the embedded window. See also 0056, where the videos are dragged and dropped into the room).
However, Masi did not explicitly state monitoring the actions of the user device of the presenter attendee during the video conference session. On the other hand, Munoz did teach monitoring the actions of the user device of the presenter attendee during the video conference session (9:10-21, where the third-party video content platform can be seen in 16:51-66). Both of the systems of Masi and Munoz are directed towards relaying state information from video feeds and therefore, it would have been obvious to a person having ordinary skill in the art, at the time of the effective filing of the invention, to modify the teachings of Masi, to utilize relaying the state information from a third-party website, as taught by Munoz, in order to increase the system’s usability by providing third party direct video sources.
However, Masi did not explicitly state wherein the video content is concurrently streamed during the video conference session. On the other hand, Daw did teach wherein the video content is concurrently streamed during the video conference session (0017). Both of the systems of Masi and Daw are directed towards managing embedded content and therefore, it would have been obvious to a person having ordinary skill in the art, at the time of the effective filing of the invention, to modify the teachings of Masi, to allow user selection of embedded content, as taught by Daw, in order to allow video conferences to seamlessly manipulate embedded video content.
8. As for claim 12, it is rejected on the same basis as claim 11. In addition, Munoz taught identifying a state change of a playback of the third-party video content on the user device of the presenter attendee via the monitoring; and communicating the change of playback state to the plurality of the non-presenter user devices, wherein communicating the change of playback state enables each of the plurality of the non-presenter user devices to select whether to make or not make the same state changes to the respective, separate streams as the identified state changes made by the presenter attendee to the third-party video content, using the third-party iFrame API locally (9:10-21 & 12:29-39, where the third-party video content platform can be seen in 16:51-66).
9. As for claim 13, it is rejected on the same basis as claim 11. In addition, Munoz taught wherein the link is a universal resource location to a third-party video content provider (31:35-42).
10. As for claim 15, it is rejected on the same basis as claim 11. However, Munoz taught wherein the communication of the state change comprises sending the state change to a video conference server that is coordinating the video conference session (1:37-40).
11. With respect to claim 16, Masi taught a method of embedding third-party video content in a video conference session on a respective user device of at least one non-presenter attendee (0071, where the groups of presenters and non-presenters can be seen and the video content can be seen in 0035), comprising: receiving, on a respective user device of the at least one non-presenter attendee from a user device of a presenter attendee in a video conference session, a link to video content (figure 15, where the links are presented into the user's device. See also: 0098, where the embedded window is presented to the users as can be seen in 0035); and communicating the link to the third-party video content to the plurality of non-presenter user devices, wherein execution of the link by any of the plurality of the non-presenter user devices causes each of the executing plurality of the non-presenter user devices (0035 & 0043. See also: 0098, where the iframe can be seen and the link can be posted by anyone with access to the conference session that is allowed access under broadest reasonable interpretation) to embed and display the third-party video content into the video conference session on the plurality of user devices using a third party iFrame application programming interface (0098, where the iframe can be seen), and wherein the third-party video content is streamed directly from a third-party content provider to each of the plurality of the non-presenter user devices using the link (figure 15, item 1502, where the link is provided, and then selecting it would open the embedded window. See also 0056, where the videos are dragged and dropped into the room).
However, Masi did not explicitly state wherein the link is a universal resource location to third-party video content for presentation by a presenter attendee to one or more second user devices in the video conference session. On the other hand, Munoz did teach wherein the link is a universal resource location to third-party video content for presentation by a presenter attendee to one or more second user devices in the video conference session (31:35-44). Both of the systems of Masi and Munoz are directed towards relaying state information from video feeds and therefore, it would have been obvious to a person having ordinary skill in the art, at the time of the effective filing of the invention, to modify the teachings of Masi, to utilize relaying the state information from a third-party website, as taught by Munoz, in order to increase the system’s usability by providing third party direct video sources.
However, Masi did not explicitly state wherein the video content is concurrently streamed during the video conference session. On the other hand, Daw did teach wherein the video content is concurrently streamed during the video conference session (0017). Both of the systems of Masi and Daw are directed towards managing embedded content and therefore, it would have been obvious to a person having ordinary skill in the art, at the time of the effective filing of the invention, to modify the teachings of Masi, to allow user selection of embedded content, as taught by Daw, in order to allow video conferences to seamlessly manipulate embedded video content.
12. As for claim 17, it is rejected on the same basis as claim 16. In addition, Masi taught identifying a state change of a playback of the third-party video content on the user device of the presenter attendee via the monitoring (0015, where the child iframe is changed based on the third-party parent and where the video messaging session can be seen in 0020 & 0024); and relaying a notification of a state change of the video playback on the second user device (0048, where the user's input would initiate the state changes as the collaboration server presents the content); and in response to receiving the notification of the state change, using the third-party iFrame API to apply the change of the playback state locally on the first user device (0098, where these actions are performed using the iFrame); wherein the change of state was based on a playback control of the third-party video content; and wherein the user devices are enabled to select whether to make or not make the same state changes to the respective, separate streams as the identified state changes made by the presenter attendee to the third-party video content (9:10-21 & 12:29-39, where the third-party video content platform can be seen in 16:51-66).
13. As for claim 19, it is rejected on the same basis as claim 17. In addition, Masi taught wherein the received the state change is received from a video conference server that is coordinating the video conference session (0048).
14. As for claim 20, it is rejected on the same basis as claim 16. In addition, Munoz taught receiving, on the first user device, audio control information of the video playback, wherein the audio control information comprising adjusting a volume of the video playback and/or changing closed captioning settings for the video playback, wherein audio adjustment is applied locally on the first user device without altering the audio of the remaining plurality of user devices in the video conference session (6:35-59, where this, at least, teaches the volume limitation).
15. As for claim 21, it is rejected on the same basis as claim 16. In addition, Masi taught receiving, on the first user device, a change to a streaming quality of the video content, wherein the change to the streaming is applied locally to the first user device without altering the streaming quality of the video content of the remaining plurality of user devices in the video conference session (0051).
Claim(s) 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Masi, in view of Munoz, in view of Daw, and in further view of Baker (Pre-Grant Publication No. US 2021/0250195 A1).
16. As for claim 5, it is rejected on the same basis as claim 1. However, Masi did not explicitly state wherein relaying state changes comprises communicating playback control information of the video content from the presenter attendee to the plurality of user devices corresponding to the non-presenter attendees. On the other hand, Baker did teach wherein relaying state changes comprises communicating playback control information of the video content from the presenter attendee to the plurality of user devices corresponding to the non-presenter attendees (0055 & 0073, where the playback control can be explicitly seen. Accordingly, the third party links can be seen in 0032). Both of the systems of Masi and Baker are directed towards managing video presentations in a collaboration session and therefore, it would have been obvious to a person having ordinary skill in the art, at the time of the effective filing of the invention, to modify the teachings of Masi, to utilize playback controls, as taught by Baker, as Masi undoubtedly performed such a feature based on, at least, the embedded YouTube video controls. However, Masi does not explicitly state such a feature.
17. As for claim 6, it is rejected on the same basis as claim 5. In addition, Masi taught wherein the playback control information includes one selected from a list of a pause function, a resume function, a rewind function, and a fast forward function (0058, where it is given that the YouTube video feature, at least, contains pause and resume functions, and it was previously shown that the attendees are able to interact with the linked embedded content. See also Baker: 0055 & 0073).
18. As for claim 7, it is rejected on the same basis as claim 1. However, Masi did not explicitly state wherein playback control information on a user device corresponding to a non-presenter attendee changes the playback for the user device corresponding to the non-presenter attendee only. On the other hand, Baker did teach wherein playback control information on a user device corresponding to a non-presenter attendee changes the playback for the user device corresponding to the non-presenter attendee only (0055 & 0073). Both of the systems of Masi and Baker are directed towards managing video presentations in a collaboration session and therefore, it would have been obvious to a person having ordinary skill in the art, at the time of the effective filing of the invention, to modify the teachings of Masi, to utilize playback controls, as taught by Baker, as Masi undoubtedly performed such a feature based on, at least, the embedded YouTube video controls. However, Masi does not explicitly state such a feature.
Claim(s) 9 is rejected under 35 U.S.C. 103 as being unpatentable over Masi, in view of Munoz, in view of Daw, and in further view of Official Notice.
19. As for claim 9, it is rejected on the same basis as claim 1. However, Masi did not explicitly state wherein a live streaming quality of the video content is selectable to be changed locally on each user device of the plurality of user devices. On the other hand, the examiner takes official notice that adjusting the quality of streaming videos was well known and therefore, it would have been obvious to a person having ordinary skill in the art, at the time of the effective filing of the invention, to modify the teachings of Masi, such that video quality control is provided to the user, in order to ensure that the user's experience is not degraded by low bandwidth. Accordingly, it is likely that such a feature is provided by Masi, as YouTube video playback provides such a feature.
Claim(s) 14 is rejected under 35 U.S.C. 103 as being unpatentable over Masi, in view of Munoz, in view of Daw, and in further view of Dunne (Patent No. US 10,990,642 B2).
20. As for claim 14, it is rejected on the same basis as claim 11. However, Masi did not explicitly state validating the link to the third-party content to ensure the link is from a supported third-party video content provider. On the other hand, Dunne did teach validating the link to the third-party content to ensure the link is from a supported third-party video content provider (column 6, lines 44-56). Both of the systems of Masi and Dunne are directed towards managing embedded content and therefore, it would have been obvious to a person having ordinary skill in the art, at the time of the effective filing of the invention, to modify the teachings of Masi, to utilize a validated URL, as taught by Dunne, in order to utilize standard links that were contemporary to the time of the invention.
Response to Arguments
Applicant’s arguments with respect to the claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
(a) Ducheneaut et al. (Pre-Grant Publication No. US 2006/0112325 A1), 0069, 0137, 0186.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH L GREENE whose telephone number is (571)270-3730. The examiner can normally be reached Monday - Thursday, 10:00am - 4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nicholas R. Taylor, can be reached at (571) 272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH L GREENE/Primary Examiner, Art Unit 2443