DETAILED ACTION
This action is in response to the Amendment dated 25 September 2025. Claims 55, 66 and 70 are amended. No claims have been added or cancelled. Claims 55-74 remain pending and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Examiner’s Suggestion
Examiner suggests amending the independent claims to further detail how the “injection” into the camera occurs (e.g. see applicant’s specification paragraphs 0115, 0379, 0429). Examiner believes that an amendment in this manner would facilitate the advancement of prosecution.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 55-74 are rejected under 35 U.S.C. 103 as being unpatentable over Jayaweera (US 2021/0352128 A1) in view of Koyama et al. (US 2022/0083306 A1) and further in view of Chong et al. (US 8,988,558 B2).
As for independent claim 55, Jayaweera teaches a method comprising:
establishing, by a first terminal, a second connection to a third terminal, wherein the second connection comprises a video call connection established through a video call application [(e.g. see Jayaweera paragraph 0037 and Fig. 1) ”a first user 102 may utilize a primary device 120 to establish a video conference session through the server 106 with another primary device 140 associated with a second user 104”].
playing, by the first terminal, a second picture received from the third terminal and captured by a camera of the third terminal, and sending to the third terminal, a first picture captured by a camera of the first terminal [(e.g. see Jayaweera paragraphs 0039, 0042 and Fig. 2A) ” The host session window 204 may include a presentation window 210, a scratch pad window 212, participant video feeds 226 … Each participant may have a device A or B which displays a participant session window 206 and 208 that includes a presentation window 216 and 222 sent by the host, a participant window 220 and 224, and/or a video feed 228 and 230 sent by the host”].
displaying, by the first terminal, a first user interface, wherein the first user interface comprises a first window and a second window, wherein the second picture is displayed in the first window [(e.g. see Jayaweera paragraph 0039 and Fig. 2A numeral 226) ”The host session window 204 may include a presentation window 210, a scratch pad window 212, participant video feeds 226, and/or one or more participant windows 214”].
establishing, by the first terminal, a first connection to a second terminal, wherein the first terminal and the second terminal log in to a same account or a different account [(e.g. see Jayaweera paragraphs 0048, 0052 and Fig. 1 numerals 122, 124, 126, 130, 132, 134) ”users may dynamically extend their video conference session presence onto other devices. For example, referring again to FIG. 1, the first user 102 may have joined the video conference session (e.g., via a website) from its primary device 120. During the video conference session, the first user 102 may wish to add a second camera or share a screen with other participants of the video conference session. Having the ability to dynamically add devices to an established or ongoing video conference session allows users to expand their video conference session capabilities to one or more peripheral devices 122, 124, 126, 128, and/or 142 via wired or wireless links 130, 132, 134, 136, and/or 144 … a user 102 or 104 may, for example, add a peripheral device 122, 124, 126, 128, and/or 142 to add an additional camera, share a work screen (e.g., for documents, or drawings) … Various methods of adding a peripheral device to a previously established video conference are contemplated herein. In one example, the user of the first participating device A 120 may initiate joining 314 of the session by using the same user/participant/account identifier (User ID X) as used with the primary device A 120 and/or using the device identifier C for the peripheral device C 122 … Upon receipt of the join request 316 from the peripheral device C 122, the server 106 may add 318 the peripheral device C 122 to the video conference session”].
wherein the screen interface content of the second terminal is injected into the camera of the first terminal [(e.g. see Jayaweera paragraphs 0064, 0065) ”The primary device may transmit a stream of session data, audio, and/or video to the host server 706 … the first device may also relay streams of session data, audio, and/or video between the second device and the host server 710. For instance, the primary device may use a short range or local link to communicate to/from the second device, and a network interface to communicate to/from the host server. The streams between the first device and host server may be used to also transmit/receive the audio, video, and/or data for the session to/from the peripheral device. For instance, the streams of session data, audio, and/or video to/from the peripheral device may be encoded or interleaved within the streams transmitted between the first device and the host server”]. Examiner notes that the data stream from the second device is interleaved/encoded into the video stream of the first device.
so that the third terminal displays the screen interface content of the second terminal while the third terminal is having a video call with the first terminal [(e.g. see Jayaweera paragraphs 0048, 0051, 0071) ”a user may wish to share an idea or document and thus may add a peripheral device (e.g., tablet, screen, etc.) to the video conference session through which the user can share such idea or document with other participant(s) … While the video conference session is still active, a first participating device A 120 (or user thereof) may seek to add a peripheral device C 122. Adding the peripheral device 122 to the existing video conference session 302 may permit a user of the first participating device A 120 to add a camera or share a document on an as-needed basis … dynamic extension of a session onto another device may include adding a peripheral device, such as a mobile phone, tablet, touchscreen television or monitor, which can be used to make and share a drawing, document, and/or picture during the session”].
Jayaweera does not specifically teach triggering, by the first terminal in response to a first operation, the second terminal to share screen interface content of the second terminal by using the video call application. However, in the same field of invention, Koyama teaches:
triggering, by the first terminal in response to a first operation, the second terminal to share screen interface content of the second terminal by using the video call application [(e.g. see Koyama paragraph 0051 and Fig. 5) ”FIG. 5 is an illustration of an example of a conference screen according to the present embodiment. The conference screen 1040 illustrated in FIG. 5 has a screen sharing button (“SHARE SCREEN”) 1042 at the bottom of the screen. In response to a user operation of a user who operates (uses) the information processing device 10 of pressing the screen sharing button 1042, a screen sharing target list is displayed on the conference screen 1040. The screen sharing target list is a list for selecting a target for screen sharing (screen sharing target). The screen sharing target list illustrated in FIG. 5 includes an external PC input 1044 used for selecting an external input screen as one of the screen sharing targets. In response to a user operation of selecting the external PC input 1044, start of screen sharing of external input screen is requested. A request for start of screen sharing of external input screen may be referred to as an external input screen sharing start request”].
Therefore, considering the teachings of Jayaweera and Koyama, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add triggering, by the first terminal in response to a first operation, the second terminal to share screen interface content of the second terminal by using the video call application, as taught by Koyama, to the teachings of Jayaweera because it allows a user to reduce the time and the effort taken for performing screen sharing (e.g. see Koyama paragraph 0099).
Jayaweera and Koyama do not specifically teach the first picture is displayed in the second window. However, in the same field of invention, Chong teaches:
the first picture is displayed in the second window [(e.g. see Chong col 1 lines 32-34, col 3 line 16, lines 22-24, col 4 lines 4-16 and Fig. 4) ”When used as a video telephone, the display shows a video of the remote party and, in some cases, a video also of the user superimposed on the screen … showing the user 105 as captured by the front camera … The image of the user is overlaid onto the image of the trees. This provides an inset 107 of the user together with the larger background image … The mobile device may also offer options to substitute one of the images for another image, for example an image stored in a memory of the device or an image received from another mobile device. In this way, the user can switch between narrating the background to narrating a previous background or video to receiving a video of another background or discussion from another connected user. In one example, the inset is used for video chat or video telephony with another user, while the primary image can be switched between the rear camera's view, the other user's rear camera view and other images or video that is considered to be interesting or relevant to the video chat session”].
Therefore, considering the teachings of Jayaweera, Koyama and Chong, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add the first picture is displayed in the second window, as taught by Chong, to the teachings of Jayaweera and Koyama because it allows the user to customize the content for each window to what the user finds interesting or relevant (e.g. see Chong col 4 lines 4-16).
As for dependent claim 56, Jayaweera, Koyama and Chong teach the method as described in claim 55, and Jayaweera further teaches:
wherein the second connection comprises a communication connection for transferring data through the Internet or a local area network [(e.g. see Jayaweera paragraph 0008, 0036, 0039 and Fig. 1) ”session host device to host a multi-participant conference session. The host device may host an online session having a plurality of participant devices over a network, each participant device identified by a participant account and/or a unique device identifier. The host device may receive a (first) request from a first device to be added to the online session, the first device having a first participant account and a first unique device identifier. Similarly, the host device may receive another (second) request from a second device to be added to the online session, the second device having the first participant account and a second unique device identifier … the server 106 may communicate with user devices via a wired or wireless network … the video conference session and communications between devices/participants may occur through a hosting server (e.g., an online session)”].
As for dependent claim 57, Jayaweera, Koyama and Chong teach the method as described in claim 55, but Jayaweera does not specifically teach the following limitation. However, Koyama teaches:
wherein the first operation comprises an operation applied to a first option displayed by the first terminal [(e.g. see Koyama paragraph 0051 and Fig. 5 numeral 1044) ”The conference screen 1040 illustrated in FIG. 5 has a screen sharing button (“SHARE SCREEN”) 1042 at the bottom of the screen. In response to a user operation of a user who operates (uses) the information processing device 10 of pressing the screen sharing button 1042, a screen sharing target list is displayed on the conference screen 1040. The screen sharing target list is a list for selecting a target for screen sharing (screen sharing target). The screen sharing target list illustrated in FIG. 5 includes an external PC input 1044 used for selecting an external input screen as one of the screen sharing targets. In response to a user operation of selecting the external PC input 1044, start of screen sharing of external input screen is requested”].
The motivation to combine is the same as that used for claim 55.
As for dependent claim 58, Jayaweera, Koyama and Chong teach the method as described in claim 55, and Jayaweera further teaches:
wherein the first operation comprises an operation applied to a second option displayed by the second terminal [(e.g. see Jayaweera paragraphs 0048, 0052) ”the peripheral device C 122 may simply pull up a website for the conference session and logs into the session. For instance, the peripheral device C 122 may send a message or request to join 316 the conference session using the user/participant/account identifier (User ID X) … a user may wish to share an idea or document and thus may add a peripheral device (e.g., tablet, screen, etc.) to the video conference session through which the user can share such idea or document with other participant(s)”].
As for dependent claim 59, Jayaweera, Koyama and Chong teach the method as described in claim 55, but Jayaweera and Koyama do not specifically teach the following limitation. However, Chong teaches:
wherein the method further comprises: after the triggering, by the first terminal in response to the first operation, the second terminal to share the screen interface content of the second terminal through the video call application, displaying by the first terminal, the screen interface content of the second terminal in the first window, and displaying the second picture in the second window [(e.g. see Chong col 1 lines 34-36, col 3 lines 38-42, col 4 lines 4-16 and Figs. 5 and 6) ”When used as a video telephone, the display shows a video of the remote party and, in some cases, a video also of the user superimposed on the screen … The mobile device may also offer options to substitute one of the images for another image, for example an image stored in a memory of the device or an image received from another mobile device. In this way, the user can switch between narrating the background to narrating a previous background or video to receiving a video of another background or discussion from another connected user. In one example, the inset is used for video chat or video telephony with another user, while the primary image can be switched between the rear camera's view, the other user's rear camera view and other images or video that is considered to be interesting or relevant to the video chat session … In FIG. 6, the images are reversed so that the image of the trees 103 now fills the inset, while the image of the user 105 is the primary image. This and many other manipulations of the two images may be performed”].
The motivation to combine is the same as that used for claim 55.
As for dependent claim 60, Jayaweera, Koyama and Chong teach the method as described in claim 55, but Jayaweera and Koyama do not specifically teach the following limitation. However, Chong teaches:
wherein the first window is a main window, the second window is a floating window, and the size of the second window is smaller than the size of the first window [(e.g. see Chong col 1 lines 34-36, col 3 lines 34-38, 64-66, col 4 lines 11-16 and Figs. 4 and 5) ”When used as a video telephone, the display shows a video of the remote party and, in some cases, a video also of the user superimposed on the screen … As shown in FIG. 5, the position of the inset may be moved to different places on the primary image. This can allow the user to expose different parts of the primary image that may otherwise have been obscured by the inset … Other mobile devices feature a touch screen, in which case the inset may be touched and then dragged to a new position … In one example, the inset is used for video chat or video telephony with another user, while the primary image can be switched between the rear camera's view, the other user's rear camera view and other images or video that is considered to be interesting or relevant to the video chat session”]. Examiner notes that, as depicted in Fig. 5, the inset is smaller than the primary.
The motivation to combine is the same as that used for claim 55.
As for dependent claim 61, Jayaweera, Koyama and Chong teach the method as described in claim 55, and Jayaweera further teaches:
wherein the first terminal and the second terminal are trusted devices to each other, and wherein the establishing, by the first terminal, the first connection to the second terminal comprises: automatically establishing, by the first terminal, the first connection to the second terminal [(e.g. see Jayaweera paragraphs 0055, 0104) ”the primary device A 120 may establish a communication link 315 (e.g., a short range link such as Bluetooth) and then registers the peripheral device C 122 with the server 106 by sending a join conference session message 317 which may identify the user/participant identifier ID X and/or the device identifier ID C … application automatically start and join the conference session”].
As for dependent claim 62, Jayaweera, Koyama and Chong teach the method as described in claim 55, and Jayaweera further teaches:
wherein the screen interface content of the second terminal comprises a conference material, a photo, or a video [(e.g. see Jayaweera paragraph 0071) ”extension of a session onto another device may include adding a peripheral device, such as a mobile phone, tablet, touchscreen television or monitor, which can be used to make and share a drawing, document, and/or picture during the session”].
As for dependent claim 63, Jayaweera, Koyama and Chong teach the method as described in claim 55, but Jayaweera does not specifically teach the following limitation. However, Koyama teaches:
wherein the screen interface content of the second terminal comprises a screen recording picture of the second terminal [(e.g. see Koyama paragraph 0057) ”the external input display application 30 selects the capture device 16 from among the peripheral devices connected via USBs or the like. In step S16, after confirming the activation of the external input display application 30, the display control unit 50 of the video conference application 32 causes the display device 12 to display an output image of the information processing terminal 14, thereby starting screen sharing”].
The motivation to combine is the same as that used for claim 55.
As for dependent claim 64, Jayaweera, Koyama and Chong teach the method as described in claim 55, and Jayaweera further teaches:
further comprising: receiving, by the first terminal, an operation from the third terminal applied to a shared content during the video call, so that the shared content is consistent [(e.g. see Jayaweera paragraphs 0040, 0044, 0095) ” The participant window 220 and/or 224 may allow each participant device 206 and 208 to share content with the host (e.g., documents, graphics, video, etc.) … each participant may make its participation in the conference session either public (e.g., where other participants are aware of each other's participation in the conference session) … According to one aspect, any audio/video content, whiteboard content, and/or presentation/notes content within a session may be … timestamped (for synchronization purposes)”].
As for dependent claim 65, Jayaweera, Koyama and Chong teach the method as described in claim 55, and Jayaweera further teaches:
sending, by the first terminal to the third terminal, an operation from the second terminal applied to a shared content during the video call, so that the shared content is consistent [(e.g. see Jayaweera paragraphs 0040, 0044, 0055, 0095) ”the primary device A 120 may act as a relay (e.g., through a short range or local link) for the video conference session stream to/from the peripheral device C 122 … The participant window 220 and/or 224 may allow each participant device 206 and 208 to share content with the host (e.g., documents, graphics, video, etc.) … each participant may make its participation in the conference session either public (e.g., where other participants are aware of each other's participation in the conference session) … According to one aspect, any audio/video content, whiteboard content, and/or presentation/notes content within a session may be … timestamped (for synchronization purposes)”].
As for independent claim 66, Jayaweera, Koyama and Chong teach a device. Claim 66 recites substantially the same limitations as claim 55. Therefore, it is rejected with the same rationale as claim 55.
As for dependent claim 67, Jayaweera, Koyama and Chong teach the device as described in claim 66; further, claim 67 recites substantially the same limitations as claim 56. Therefore, it is rejected with the same rationale as claim 56.
As for dependent claim 68, Jayaweera, Koyama and Chong teach the device as described in claim 66; further, claim 68 recites substantially the same limitations as claim 57. Therefore, it is rejected with the same rationale as claim 57.
As for dependent claim 69, Jayaweera, Koyama and Chong teach the device as described in claim 66; further, claim 69 recites substantially the same limitations as claim 58. Therefore, it is rejected with the same rationale as claim 58.
As for dependent claim 70, Jayaweera, Koyama and Chong teach the device as described in claim 66; further, claim 70 recites substantially the same limitations as claim 59. Therefore, it is rejected with the same rationale as claim 59.
As for dependent claim 71, Jayaweera, Koyama and Chong teach the device as described in claim 66; further, claim 71 recites substantially the same limitations as claim 60. Therefore, it is rejected with the same rationale as claim 60.
As for dependent claim 72, Jayaweera, Koyama and Chong teach the device as described in claim 66; further, claim 72 recites substantially the same limitations as claim 61. Therefore, it is rejected with the same rationale as claim 61.
As for dependent claim 73, Jayaweera, Koyama and Chong teach the device as described in claim 66; further, claim 73 recites substantially the same limitations as claim 62. Therefore, it is rejected with the same rationale as claim 62.
As for dependent claim 74, Jayaweera, Koyama and Chong teach the device as described in claim 66; further, claim 74 recites substantially the same limitations as claim 64. Therefore, it is rejected with the same rationale as claim 64.
Response to Arguments
Applicant's arguments, filed 25 September 2025, have been fully considered but they are not persuasive.
Applicant argues that “[n]one of the cited references disclose the above features [‘wherein screen interface content of the second terminal is injected into the camera of the first terminal’]” (Page 7).
Examiner respectfully disagrees. Jayaweera teaches wherein the screen interface content of the second terminal is injected into the camera of the first terminal in paragraphs 0064, 0065 of Jayaweera’s disclosure [“The primary device may transmit a stream of session data, audio, and/or video to the host server 706 … the first device may also relay streams of session data, audio, and/or video between the second device and the host server 710. For instance, the primary device may use a short range or local link to communicate to/from the second device, and a network interface to communicate to/from the host server. The streams between the first device and host server may be used to also transmit/receive the audio, video, and/or data for the session to/from the peripheral device. For instance, the streams of session data, audio, and/or video to/from the peripheral device may be encoded or interleaved within the streams transmitted between the first device and the host server”]. One of ordinary skill in the art, namely a software developer, would recognize that the first device may act as a relay for the second device by receiving the data stream from the second device and interleaving or encoding it into the video stream of the first device. Thus, the combination adequately teaches applicant’s amended claim limitation.
Applicant argues that “Jayaweera is clear that the shared document is from device A 120 (the asserted first terminal), which is not the screen interface content of the asserted second terminal” (Page 8).
Examiner respectfully disagrees. Jayaweera teaches that the shared document can be from the added device (i.e. second terminal) in paragraphs 0048, 0071 of Jayaweera’s disclosure [“a user 102 or 104 may, for example, add a peripheral device 122, 124, 126, 128, and/or 142 to add an additional camera, share a work screen (e.g., for documents, or drawings) … a user may wish to share an idea or document and thus may add a peripheral device (e.g., tablet, screen, etc.) to the video conference session through which the user can share such idea or document with other participant(s) … dynamic extension of a session onto another device may include adding a peripheral device, such as a mobile phone, tablet, touchscreen television or monitor, which can be used to make and share a drawing, document, and/or picture during the session”]. One of ordinary skill in the art, namely a software developer, would recognize that the added/peripheral device (i.e. second terminal) can be used to make and share a document. Thus, the combination adequately teaches applicant’s claimed limitation.
Citation of Pertinent Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. PGPub 2012/0221960 A1 by Robinson et al., published 30 August 2012. The subject matter disclosed therein is pertinent to that of claims 55-74 (e.g. video calling while mixing a camera image of a target object/scene).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER J FIBBI whose telephone number is (571) 270-3358. The examiner can normally be reached Monday - Thursday (8am-6pm).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER J FIBBI/Primary Examiner, Art Unit 2174