DETAILED ACTION
This FINAL action is in response to Application No. 17/296,615 filed 10/04/2021, which claims priority from PCT/EP2018/082745 filed 11/27/2018. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The amendment presented on 12/10/2025, which amends claims 14, 23, 24, and 27, is hereby acknowledged. Claims 14-24 and 26-28 are currently pending.
Claim Rejections – Withdrawn
The previous rejections of claims 14-24 and 26-28 under 35 U.S.C. § 112 have been withdrawn in view of the amendment.
Response to Arguments
Applicant’s arguments with respect to the prior art rejections have been considered but are unpersuasive.
Applicant contends that Yu does not teach or suggest “the received first image is displayed on a display of the second client" and "transmitting the first selection area from the media server to the second client for display on the second client as a second image, whereby only the second image is displayed on the second client.” Applicant argues the discussion in Yu is different from the claims because “both the first image and the second image are displayed on the same second client” while the “video playback device 110” displays the target video.
However, the Examiner relies on Ouyang to teach the majority of the language argued by Applicant; indeed, if Yu alone taught that language, a rejection under 35 U.S.C. § 102(a)(1) over Yu would likely be applicable. As shown in the previous Office action, the rejection relies on Yu only for suggesting “whereby only the second image is displayed on the second client.” Specifically, Yu teaches that when a device (“portable communication device”) specifies a region to zoom into during video playback, only that specified region is displayed. Figures 6 and 7 of Yu describe the flow of the portable communication devices selecting a region of interest (steps 610 and 710) and receiving and displaying scaled video according to the selections (reusing steps 322 and 422 from Figures 3 and 4), as described by “the portable communication device only needs to display the video content of the first [and second] scaled video on the screen in the operation 322 [and 422]” (see Yu, col 8, lines 11-14; col 9, lines 60-63; col 11, lines 1-5). What the “video playback device 110” displays or does not display is immaterial to this rationale because it is a separate device altogether whose function is not relied upon.
With (1) Ouyang teaching a device receiving and displaying video, where a user can select a region of interest of the displayed video, a server enhances the selected region of interest, and the device then receives and displays the enhanced region of interest from the server (Ouyang, abstract, [0019], [0035], [0037]), and with (2) Yu describing a device on which a user can select a region of interest of a video, a server enhances the selected region of interest, and the device then receives and displays only the enhanced region of interest from the server (Yu, Figures 5-8; col 7, line 60 – col 8, line 3; col 9, lines 42-52; col 10, lines 17-33), the combination clearly suggests at least a device receiving and displaying video, where a user can select a region of interest of the displayed video, a server enhances the selected region of interest, and the device then receives and displays only the enhanced region of interest from the server. In other words, the combination of Ouyang and Yu suggests “the received first image is displayed on a display of the second client" and "transmitting the first selection area from the media server to the second client for display on the second client as a second image, whereby only the second image is displayed on the second client.” Therefore, the rejections are maintained.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 14, 15, 20-24, 26, and 27, as best understood, are rejected under 35 U.S.C. § 103 as being unpatentable over Ouyang et al. (US 2015/0052200 A1, hereinafter “Ouyang”) in view of Yu (US 8,965,172 B1, hereinafter “Yu”).
Regarding claim 14, Ouyang teaches a computer-implemented method for sharing a data stream displayed on a display of a first client with a second client, wherein the first client and the second client communicate with each other via a communication network that comprises a central media server (Ouyang, abstract, Figure 1: server 130, host device 110, attendee devices 120; the second endpoint shares a data stream with a first endpoint), the method comprising:
receiving, on the media server, video and/or image data to be shared, which is transmitted by the first client and is displayed on the first client. More specifically, the second endpoint (first client) shares content with a first endpoint (second client) (Ouyang, abstract). The server 130 receives content from the host device 110 for redistribution to one or more attendee devices 120 (Ouyang, [0019]).
forwarding the video and/or image data from the central media server to the second client, wherein the second client receives the video and/or image data as a first image, and the received first image is displayed on a display of the second client. More specifically, the first endpoint device (second client) receives an initial image of content shared by a second endpoint (first client) in the online/web-based meeting/conference. At the first endpoint, the initial image of the shared content is displayed (Ouyang, abstract). The server 130 receives content from the host device 110 for redistribution to one or more attendee devices 120 (Ouyang, [0019]).
a first selection area is determined on the second client from the first image. More specifically, a user selection of a selected region of the initial image is received at the first endpoint (Ouyang, abstract). At 901, the attendee device determines the location of the selected region in its display and calculates relative coordinate values for this region (Ouyang, [0035]).
receiving, on the media server from the second client, first data which describes the first selection area from the first image. More specifically, at 904, the attendee device (second client) sends the relative coordinate values to the server (Ouyang, [0035]).
cutting, on the media server, the first selection area out of the image data; and transmitting the first selection area from the media server to the second client for display on the second client as a second image. More specifically, the server 130 receives the relative coordinate values and reads raw data for the region defined by the relative coordinates from the image data cache. At 905, the server sends the raw data for the region to the attendee device (cutting the first selection area out of the video and image data to be shared). The attendee device receives the raw data and renders an improved resolution image (Ouyang, [0035]). Ouyang further describes a “part mode,” as opposed to a “whole mode,” which transfers less data than would be transferred in the whole mode (Ouyang, [0037]).
However, Ouyang may not explicitly teach every aspect of
whereby only the second image is displayed on the second client.
Yu discloses a multi-screen video playback system that includes a video playback device having a main display, a portable communication device having a screen, and a multi-screen display controlling server. While the main display displays the target video, if the multi-screen display controlling server receives a selection message corresponding to a position of a partial region of the main display, the multi-screen display controlling server dynamically generates a scaled video corresponding to images displayed on the partial region and transmits the scaled video to the portable communication device via a network so that the portable communication device displays the scaled video on the screen (Yu, abstract). Steps 610 and 710 are described as receiving a selection of a portion of the video on the portable devices (depicted in Figure 8 with UI controls 810 and 820 being invoked), and steps 322 and 422 are described as the displaying of the selected portions on the portable devices. Figure 5, which is described as depicting steps 322 and 422, depicts the results of the selections, where only the selected portions (second images) are displayed on the portable devices (Yu, Figures 5-8; col 7, line 60 – col 8, line 3; col 9, lines 42-52; col 10, lines 17-33).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Ouyang and Yu, that a method for sharing a data stream, in which a portion of a shared display is selected to be magnified on a receiving client, would include that only the second image is displayed on the second client. With Ouyang and Yu disclosing video streams where a selection is received of a portion of the shared content to magnify at a receiving client, and with Yu additionally disclosing that only the selected portion is displayed on the receiving client, one of ordinary skill in the art implementing such a method would include that only the second image is displayed on the second client in order to allow a receiving client to provide better focus and attention to the shared content and to reduce the amount of data transmission during the video conference. One would therefore be motivated to combine these teachings, as doing so would create this method for sharing a data stream where a portion of a shared display is selected to be magnified on a receiving client.
Regarding claim 15, Ouyang and Yu teach the method of claim 14, wherein the first data that describes the first selection area comprises a relative size and position of the first selection area. More specifically, at 701, the attendee locates the magnifier element 610 over a region of the initial image of the shared content displayed on the attendee device to define the selected region 630. Region acquisition module 204 calculates the relative coordinate values of the selected region (Ouyang, [0030]-[0031], [0035], [0041], [0044]). Additionally, the first configuration message may comprise the information of the coordination, the shape, the size, the boundary of the first partial region 510, or other suitable position information (Yu, at least col 6, lines 38-47).
Regarding claim 20, Ouyang and Yu teach the method of claim 14, wherein the media server processes and forwards the image and/or video data to be shared by the first client to the second client according to the first data that describes the first selection area. More specifically, the server 130 receives the relative coordinate values and reads raw data for the region defined by the relative coordinates from the image data cache. At 905, the server sends the raw data for the region to the attendee device. The attendee device receives the raw data and renders an improved resolution image (Ouyang, [0035]). Additionally, in the operation 318, the scaled video providing module 240 of the multi-screen display controlling server 130 obtains the position information of the first partial region 510 according to the first selection message, and dynamically generates a first scaled video corresponding to the images displayed on the first partial region 510 (Yu, at least col 6, lines 55-61).
Regarding claim 21, Ouyang and Yu teach the method of claim 14, wherein the media server receives second data from the second client that describes a second selection area from the first image. More specifically, an attendee moves the magnifier element to select a new region (Ouyang, [0037]).
Regarding claim 22, Ouyang and Yu teach the method of claim 14, wherein the method is performed in a real-time conference on a web-based communication and collaboration platform, in which content displayed on the display on the first client is shareable via screen sharing with the second client. More specifically, the conference session is a real-time collaboration session (Ouyang, [0017]). Communication occurs over network 115 to and from server 130 from attendees 120 and host device 110 (Ouyang, [0018], Figure 1).
Regarding claim 23, Ouyang and Yu teach the method of claim 22, wherein the display on the first client has a higher resolution than the display on the second client. More specifically, Ouyang addresses online/web-based meetings in which a party with a high resolution display shares desktop content with another participant having a lower resolution display (Ouyang, [0002]).
Regarding claim 24, Ouyang and Yu teach the method of claim 14, wherein the method further comprises: receiving, on the media server, third data that describes a third selection area from the first image from a third client in real-time, wherein the third selection area received from the third client is different from the first selection area received from the second client; cutting, on the media server, the third selection area out of the image data; and transmitting the third selection area from the media server to the third client for display on the third client as a third image, whereby only the third image is displayed on the third client. More specifically, Figures 3-8 describe two portable devices (second and third clients) receiving selections of distinct portions of video and displaying only the selected portions on the respective portable devices (Yu, Figures 3-8 and associated description).
Regarding claim 26, Ouyang and Yu teach a collaboration and conversation platform with a central media server communicatively connectable to a number of clients which communicate with one another via a communication network for carrying out a computer-implemented method for sharing a data stream which comprises video and/or image data displayed on a first display of a first client according to claim 14. More specifically, Ouyang enables a meeting participant/attendee at a first endpoint device in an online/web-based meeting/conference to acquire shared content with higher resolution. The first endpoint device receives an initial image of content shared by a second endpoint in the online/web-based meeting/conference (Ouyang, [0016]). A conference session can be any suitable communication session (e.g., instant messaging, video conferencing, web or other on-line conferencing/meeting, remote log-in and control of one computing device by another computing device, etc.) in which audio, video, document, screen image and/or any other type of content is shared between two or more computing devices (Ouyang, [0017]).
Regarding claim 27, this claim recites a collaboration and conversation platform comprising a central media server that performs the steps of the method of claim 14, therefore, the same rationale of rejection is applicable.
Claim 16 is rejected under 35 U.S.C. § 103 as being unpatentable over Ouyang and Yu, and further in view of Wang et al. (US 2016/0073052 A1, hereinafter “Wang”).
Regarding claim 16, Ouyang and Yu teach the method of claim 14, including an embodiment where the attendee device sends the relative coordinate values directly to the host device without going through a server (Ouyang, [0031]); however, they may not explicitly teach wherein the method comprises forwarding the first data from the media server to the first client.
Wang discloses a first request is received from a first attendee device participating in an online meeting. The first request includes information indicating/describing a first particular region of shared content being presented by a presenter device during the online meeting, to be magnified at the first attendee device (Wang, abstract). Meeting server software 300 enables the server 130 to deliver shared content from the host to attendee endpoints in an online conference/meeting. The meeting server software 300 includes a magnification content delivery module 304 to relay requests from an attendee device for magnified image data corresponding to a region of the shared content (Wang, [0024], [0027]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Ouyang, Yu, and Wang, that a method for allowing an attendee of a video conference to select a portion of what is shared to be magnified using a meeting server would include wherein the method comprises forwarding the first data from the media server to the first client. With Ouyang and Wang disclosing video conferences where a server can receive attendee selections of portions of the shared content to magnify, and with Wang disclosing that the selection can be forwarded from the server to the presenter device, one of ordinary skill in the art implementing such a method would include wherein the method comprises forwarding the first data from the media server to the first client in order to relieve the processing load at the server, or in case the server is not configured to magnify selected portions of content. One would therefore be motivated to combine these teachings, as doing so would create this method for allowing an attendee of a video conference to select a portion of what is shared to be magnified using a meeting server.
Claims 17 and 18 are rejected under 35 U.S.C. § 103 as being unpatentable over Ouyang and Yu, and further in view of Kim et al. (US 2014/0022329 A1, hereinafter “Kim”).
Regarding claim 17, Ouyang and Yu teach the method of claim 14, including that the conference session is a real-time collaboration session (Ouyang, [0017]), however, may not explicitly teach every aspect of wherein the first data that describes the first selection area is sent by the second client via a second real-time transport protocol data channel to the media server.
Kim discloses an image providing method including transmitting video to an external device, receiving a selection of an area of interest of the received video at the external device, and transmitting a second video image based on the selection to the external device (Kim, abstract). Only a portion of the original video is received at the external device (Kim, [0023], [0030], [0041]-[0043]). The video images are communicated via the real-time transport protocol (Kim, [0026]-[0027], [0089], [0254]-[0255], and [0257]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Ouyang, Yu, and Kim, that a method for video conferencing in which selected portions of content are shared would use the real-time transport protocol for the video conference. With Ouyang and Kim disclosing video conferences where a server can receive selections of portions of the shared content, and with Kim additionally disclosing use of the real-time transport protocol for the video conference, one of ordinary skill in the art implementing such a method would use the real-time transport protocol in order to employ a typical networking protocol in video conferencing. One would therefore be motivated to combine these teachings, as doing so would create this method for video conferencing in which selected portions of content are shared.
Regarding claim 18, Ouyang and Yu teach the method of claim 14, including that the conference session is a real-time collaboration session (Ouyang, [0017]), however, may not explicitly teach every aspect of wherein the media server forwards the first data that describes the first selection area to the first client via a first real-time transport protocol data channel.
Kim discloses an image providing method including transmitting video to an external device, receiving a selection of an area of interest of the received video at the external device, and transmitting a second video image based on the selection to the external device (Kim, abstract). Only a portion of the original video is received at the external device (Kim, [0023], [0030], [0041]-[0043]). The video images are communicated via the real-time transport protocol (Kim, [0026]-[0027], [0089], [0254]-[0255], and [0257]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Ouyang, Yu, and Kim, that a method for video conferencing in which selected portions of content are shared would use the real-time transport protocol for the video conference. With Ouyang and Kim disclosing video conferences where a server can receive selections of portions of the shared content, and with Kim additionally disclosing use of the real-time transport protocol for the video conference, one of ordinary skill in the art implementing such a method would use the real-time transport protocol in order to employ a typical networking protocol in video conferencing. One would therefore be motivated to combine these teachings, as doing so would create this method for video conferencing in which selected portions of content are shared.
Claim 19 is rejected under 35 U.S.C. § 103 as being unpatentable over Ouyang and Yu, and further in view of Choi et al. (US 2017/0237986 A1, hereinafter “Choi”).
Regarding claim 19, Ouyang and Yu teach the method of claim 14, however, may not explicitly teach every aspect of wherein the media server receives encoded image and/or video data from the first client for the first selection area and forwards them to the second client.
Choi discloses a video encoding method and an electronic device adapted to the method. The electronic device includes a wireless communication circuit configured to communicate with a first electronic device and a touchscreen configured to display a user interface for performing a video call (Choi, abstract). A region of interest (ROI) may be set based on a graphic tool for a rectangle, a circle, etc., a closed loop, a touch input, a duration of a touch input, etc. (Choi, [0239]). The ROI is set in one device and transmitted to another electronic device, which adapts/adjusts its video stream based on at least one of focus, crop, change in picture quality, or exposure adjustment with respect to the ROI (Choi, [0145], [0217], [0241]). The ROI is encoded and transmitted (Choi, [0218]-[0219], [0240]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Ouyang and Yu with Choi, that a method for allowing attendees of a video conference to select a portion of what is shared to be magnified would include that the selected and magnified portion is encoded. With Ouyang and Choi disclosing video conferences where portions of the shared content are selected to magnify on attendee devices, and with Choi disclosing that the selected regions of interest are encoded when transmitted to other attendee devices, one of ordinary skill in the art implementing such a method would include that the selected and magnified portion is encoded in order to allow the selected portions to be adjusted for the different display resolutions and bandwidths of the attendee devices. One would therefore be motivated to combine these teachings, as doing so would create this method for allowing attendees of a video conference to select a portion of what is shared to be magnified.
Claim 28 is rejected under 35 U.S.C. § 103 as being unpatentable over Ouyang and Yu, and further in view of Julin et al. (US 2008/0198178 A1, hereinafter “Julin”).
Regarding claim 28, Ouyang and Yu teach the method of claim 14, however, may not explicitly teach every aspect of wherein the first selection area maintains an aspect ratio of the first image.
Julin discloses a technique intended for use in connection with digital cameras, such as pan/tilt/zoom digital cameras that are often used in various types of surveillance applications or video conferencing systems (Julin, [0003]). An original image recorded by the camera is received. The original image has an original aspect ratio. A first user input defining a center point of an area of interest within the original image is received. A second user input defining a perimeter for the area of interest of the original image is received. The perimeter for the area of interest has the same aspect ratio as the original aspect ratio. The camera view is adjusted based on the first user input and the second user input to provide a zoomed image of the defined area of interest. The zoomed image is centered on the center point defined by the first user input. Finally, the zoomed image is displayed to the user on a display screen (Julin, abstract).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, given the teachings of Ouyang and Yu with Julin, that a method for allowing attendees of a video conference to select a portion of what is shared to be magnified would include wherein the first selection area maintains an aspect ratio of the first image. With Ouyang and Julin suggesting video conferences where portions of the shared content are selected to magnify on attendee devices, and with Julin additionally disclosing that the aspect ratio of the zoom perimeter is the same as the original image aspect ratio, one of ordinary skill in the art implementing such a method would include wherein the first selection area maintains an aspect ratio of the first image in order to maximize use of the display and prevent image warping during enlarging of the image on the client device. One would therefore be motivated to combine these teachings, as doing so would create this method for allowing attendees of a video conference to select a portion of what is shared to be magnified.
Pertinent Prior Art
The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to fully consider these references when responding to this action. A sample of these references follows:
Oyman (US 2015/0195490 A1) – a device attending a video conference can select a region of interest of the displayed conference, send a request for the ROI, and the device receives and displays only the ROI.
Lindbergh (US 2013/0083153 A1) – ROIs are selected and cropped for attendee devices during video conferences.
Roman (US 2008/0250458 A1) – receiving device can select a region for enlargement on their display during video conference.
Yada (US 8,823,602 B2) – regions of a video conference are cropped for the display sizes of attendee devices.
Kim (US 2014/0022329 A1) – attendee device selects region to enhance in a video conference.
Reuschel (US 2023/0376326 A1) – collaborating environment where attendees have a viewpoint portion of the entire environment.
Edmonds (US 2019/0172177 A1) – an environment where mobile devices select and receive enhanced regions of interest of a shared image.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK F RIEGLER whose telephone number is (571)270-3625. The examiner can normally be reached M-F 9:30am-6:00pm, ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kieu Vu, can be reached at (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PATRICK F RIEGLER/ Primary Examiner, Art Unit 2171