DETAILED ACTION
This action is in response to the remarks filed 12/12/2025. Claims 1 - 21 are pending and have
been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 12/12/2025 have been fully considered but they are not persuasive.
In regards to the Remarks filed 12/12/2025, Applicant states:
“Applicant respectfully submits that Agarwal, taken alone or in any proper combination with Fang, fails to disclose,
responsive to detecting the first trigger condition, the first device concurrently performing:
(a) causing, by the first device, display of the video stream, corresponding to the video call, on the second device;
(b) executing, by the first device, the first application to utilize the voice data and the video data captured by the first device for the video call; and
(c) displaying, by the first device on the first display screen comprised in the first device, application data corresponding to a second application that is different than the first application without displaying on the first display screen the video stream being displayed on the second device,
as presently recited in claim 1 (representative claim).
The Office acknowledges, "Agarwal does not expressively [sic] teach, "concurrently with causing [by the first device] display of the video stream on the second device [of the video call being conducted by the first application executing on the first device]: displaying, by the first device on the first screen, application data corresponding to a second application that is different than the first application." (Office Action, page 4). The Office relies on Fang for curing the deficiencies of Agarwal.
With reference to FIGS. 4(a)-4(d) of Fang, reproduced below, Fang discloses a video call being transferred between a first device, i.e., a mobile phone, and a second device, i.e., a smart television. As illustrated in FIG. 4(a), the video call is initiated on the mobile phone. A user action, i.e., engaging control 212 on the mobile phone, causes the video call to switch to the smart television. As illustrated in FIG. 4(d), after a video call is switched to a smart television, the mobile phone displays a first notification message 214 indicating, "[t]he video call is being performed on the living room television." Fang discloses, "when the video call is performed on the smart television, a built-in or external camera of the smart television may be used." [0251]. In this manner, when the video call is switched from the mobile phone to the smart television, the voice data and the video data for the video call are captured by the smart television. The voice data and the video data for the call are no longer captured by the mobile phone, as recited presently in claim 1.”
In regards to the Non-Final Rejection dated 09/15/2025, claims 1 – 21 were rejected under 35 U.S.C. 103 as being unpatentable over Agarwal in view of Fang. Agarwal teaches causing a display of a video stream to be cast from a first device to a second device upon user input, in which the first device captures local audio and video to send to the video conference. Agarwal does not teach the ability to then display an application other than the video conference on the first device. Fang is therefore relied upon to cure this deficiency. Fang teaches switching a video call entirely from a first device (phone) to a second device (television), after which the user can use the first device for other applications. Fang does not teach maintaining the capture of audio and video data locally at the first device after the switch to the second device. In combination, however, Agarwal in view of Fang teaches all the limitations of Claim 1. As stated in the Non-Final Rejection dated 09/15/2025, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of utilizing a primary device associated with a user in a video conference and displaying the video conference stream on a secondary device also associated with said user while utilizing input data from the first device (as taught in Agarwal) with the teaching of concurrently using two devices associated with a user in a video conference and running separate applications on each device (as taught in Fang), the motivation being to improve the video conferencing experience by enabling the ability to utilize superior components across multiple devices, such as a television for a larger display or a higher-quality microphone or camera from a mobile phone (see Fang Paragraphs [0003] and [0004]), and to enable a video conference user to multitask (see Fang Figures 4C and 4D).
Therefore, Claims 1 – 21 remain rejected under 35 U.S.C. 103 as being unpatentable over Agarwal in view of Fang.
Response to Amendment
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 – 21 are rejected under 35 U.S.C. 103 as being unpatentable over Agarwal (U.S. Patent No. 9948888) in view of Fang et al. (E.P. Pub. No. 4109891, hereinafter “Fang”).
Regarding Claim 1, Agarwal teaches
A non-transitory computer readable medium comprising instructions, which when executed by one or more hardware processors, cause performance of operations (see Agarwal Column 2, lines 4 – 8, The computer-readable medium can have a set of instructions stored thereon that, when executed by one or more processors of a first computing device, causes the first computing device to perform operations) comprising:
executing, by a first device, a first application for conducting a video call, the first application (a) causing display of a video stream corresponding to the video call on a first display screen comprised in the first device and (b) utilizing voice data and video data captured by the first device for the video call (see Agarwal Column 7, lines 42 – 53, Referring now to FIG. 4, an example technique 400 for utilizing the display 144 for a video chat session requested is illustrated. At 404, the mobile computing device 104 can detect a video chat request. The video chat request could be received at the mobile computing device 104, or the mobile computing device 104 could be notified of the receipt of the video chat request by the server computing device 108. When the video chat request is detected, the technique 400 can proceed to 408. Otherwise, the technique 400 can end or return to 404. At 408, the video chat session can be established between the mobile computing device 104 and the other computing device 116, utilizing audio and video input from computing device(s));
detecting a first trigger condition for transferring display of the video stream from being displayed on the first device to being displayed on a second device (see Agarwal Figure 4, item 412, “input received?”, item 416, transmit configuration information, Column 7, lines 53 – 56, At 412, the mobile computing device 104 can determine whether an input has been received by the casting device 140 (e.g., from the user 124) to operate the display 144 in an output mode and Column 8, lines 1 – 4, At 416, the mobile computing device 104 can transmit configuration information for enabling the casting device 140 to output, at the display 144, audio/video information received from computing device); and
responsive to detecting the first trigger condition (see Agarwal Column 7, lines 53 – 56, At 412, the mobile computing device 104 can determine whether an input has been received by the casting device 140 (e.g., from the user 124) to operate the display 144 in an output mode and Figure 4, item 412, “input received?”), the first device concurrently performing:
(a) causing, by the first device, display of the video stream, corresponding to the video call, on the second device (see Agarwal Column 7, lines 53 – 63, At 412, the mobile computing device 104 can determine whether an input has been received by the casting device 140 (e.g., from the user 124) to operate the display 144 in an output mode. As previously discussed, in this mode, the display 144 receives (via the casting device 140) and outputs the audio/video information from the other computing device 116, but does not receive/transmit any audio/video information captured locally. Instead, this can be done using the mobile computing device 104. Thus, this may also be referred to as an output-only mode, Column 8, lines 1 – 4, At 416, the mobile computing device 104 can transmit configuration information for enabling the casting device 140 to output, at the display 144, audio/video information received from the other computing device 116 and Column 8, lines 16 – 21, receipt of the configuration information causes the casting device 140 to (i) receive the first audio/video information directly from the other computing device 116 and (ii) output the first audio/video information directly via the display 144);
(b) executing, by the first device, the first application to utilize the voice data and the video data captured by the first device for the video call (see Agarwal Column 7, lines 53 – 63, At 412, the mobile computing device 104 can determine whether an input has been received by the casting device 140 (e.g., from the user 124) to operate the display 144 in an output mode. As previously discussed, in this mode, the display 144 receives (via the casting device 140) and outputs the audio/video information from the other computing device 116, but does not receive/transmit any audio/video information captured locally. Instead, this can be done using the mobile computing device 104. Thus, this may also be referred to as an output-only mode, Column 8, lines 1 – 4, At 416, the mobile computing device 104 can transmit configuration information for enabling the casting device 140 to output, at the display 144, audio/video information received from the other computing device 116 and Column 8, lines 16 – 21, receipt of the configuration information causes the casting device 140 to (i) receive the first audio/video information directly from the other computing device 116 and (ii) output the first audio/video information directly via the display 144);
Agarwal does not expressly teach
(c) displaying, by the first device on the first display screen comprised in the first device, application data corresponding to a second application that is different than the first application without displaying on the first display screen the video stream displayed on the second device.
However, Fang teaches
(c) displaying, by the first device on the first display screen comprised in the first device, application data corresponding to a second application that is different than the first application without displaying on the first display screen the video stream displayed on the second device (see Fang Figures 4C and 4D, which are simultaneous in time, displaying a user’s primary or first device, and a secondary device being a television and Paragraph [0221], After the mobile phone switches the video call to the smart television 102, the mobile phone returns to the user interface. For example, the mobile phone displays a home screen before receiving a video call request. As shown in FIG. 4(d), after the mobile phone switches the video call to the smart television 102, the mobile phone returns to the home screen (and thus the phone can be used for another application simultaneously). In some embodiments, the graphical user interface may include a first notification message 214. The first notification message 214 may display text information, for example, "The video call is being performed on the living room television", to prompt the user that the video call is being performed on the living room television and Paragraph [0221], In some other embodiments, if the mobile phone displays an interface of another application (such as gallery or reading) or an interface of a video call application before receiving the video call request, after the mobile phone switches the video call to the smart television 102, the mobile phone may display a corresponding interface before the switching. Therefore, after the video call is switched to the second device, the first device may use an application other than the video call application, such as a gallery application or reading application, in which the first application is the video call application and the second application is a gallery or reading application).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of utilizing a primary device associated with a user in a video conference and displaying the video conference stream on a secondary device also associated with said user while utilizing input data from the first device (as taught in Agarwal) with the teaching of concurrently using two devices associated with a user in a video conference and running separate applications on each device (as taught in Fang), the motivation being to improve the video conferencing experience by enabling the ability to utilize superior components across multiple devices, such as a television for a larger display or a higher-quality microphone or camera from a mobile phone (see Fang Paragraphs [0003] and [0004]), and to enable a video conference user to multitask (see Fang Figures 4C and 4D).
Regarding Claim 2, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 1, wherein the operations further comprise:
concurrently with displaying the video stream corresponding to the video call on the first display screen, playing an audio stream corresponding to the video call on a first audio component corresponding to the first device (see Agarwal Column 7, lines 42 – 53, Referring now to FIG. 4, an example technique 400 for utilizing the display 144 for a video chat session requested is illustrated. At 404, the mobile computing device 104 can detect a video chat request. The video chat request could be received at the mobile computing device 104, or the mobile computing device 104 could be notified of the receipt of the video chat request by the server computing device 108. When the video chat request is detected, the technique 400 can proceed to 408. Otherwise, the technique 400 can end or return to 404. At 408, the video chat session can be established between the mobile computing device 104 and the other computing device 116, utilizing audio and video input from computing device(s)),
wherein further responsive to detecting the first trigger condition, the first device causing playing of the audio stream on a second audio component corresponding to the second device (see Agarwal Column 7, lines 53 – 63, At 412, the mobile computing device 104 can determine whether an input has been received by the casting device 140 (e.g., from the user 124) to operate the display 144 in an output mode. As previously discussed, in this mode, the display 144 receives (via the casting device 140) and outputs the audio/video information from the other computing device 116, but does not receive/transmit any audio/video information captured locally. Instead, this can be done using the mobile computing device 104. Thus, this may also be referred to as an output-only mode, Column 8, lines 1 – 4, At 416, the mobile computing device 104 can transmit configuration information for enabling the casting device 140 to output, at the display 144, audio/video information received from the other computing device 116 and Column 8, lines 16 – 21, receipt of the configuration information causes the casting device 140 to (i) receive the first audio/video information directly from the other computing device 116 and (ii) output the first audio/video information directly via the display 144).
Regarding Claim 3, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 1, wherein the operations further comprise:
detecting a second trigger condition for transferring display of the video stream back to being displayed on the first device from being displayed on the second device (see Fang Figure 3D, text information displayed on primary (mobile) device with an input option to “switch to the mobile phone” therefore switching the video conference back to the phone from the television); and
responsive to detecting the second trigger condition:
the first device causing display of the video stream, corresponding to the video call, on the first display screen of the first device while the first application continues to utilize the voice data and the video data captured by the first device for the video call (see Fang Figure 3D, text information displayed on primary (mobile) device with an input option (215) to “switch to the mobile phone” therefore switching the video conference back to the phone from the television and Paragraph [0221], The graphical user interface may further include a control 215. If the mobile phone detects an operation performed on the control, the mobile phone may send a third switching message to the smart television 102. In response to the third switching message, the mobile phone may continue the video call and display data collected by the camera of the other party, and the smart television 102 may display an interface displayed before a moment at which the mobile phone switches the video call to the smart television 102).
Regarding Claim 4, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 1, wherein detecting the first trigger condition includes detecting user engagement with a user interface element on the first device for initiating transfer of the video stream to the second device (see Agarwal Figure 4, item 412, “input received?”, item 416, transmit configuration information, Column 7, lines 53 – 56, At 412, the mobile computing device 104 can determine whether an input has been received by the casting device 140 (e.g., from the user 124) to operate the display 144 in an output mode and Column 8, lines 1 – 4, At 416, the mobile computing device 104 can transmit configuration information for enabling the casting device 140 to output, at the display 144, audio/video information received from computing device).
Regarding Claim 5, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 1, wherein detecting the first trigger condition includes detecting that the second device is available (see Fang Paragraph [0192], For example, after accessing the local area network, the electronic device 100 may broadcast, to the network, that the electronic device 100 has entered the network, and simultaneously obtain attribute information of another device from the network, for example, a device type, a device identifier, and device description information. In some other embodiments, the electronic device 100 may obtain a list of available devices in the network from one device (for example, a router or a control device in the network) in the network. The device list may include a device type, a device identifier, device description information, and the like of each available device. The device type may distinguish between types of terminal devices, such as a television set, a camera, and a PC. The device identifier is used to distinguish different devices. The device description information indicates more specific description information of the device, for example, capability information such as a service or a protocol supported by the device, Paragraph [0195], After accessing the network again or generating an interaction request, the electronic device 100 may monitor whether the smart television 102 is available and whether the stored attribute information changes, Paragraph [0298], After determining the service status, the electronic device 100 may determine whether there is currently an available device that can perform interaction. For example, after determining that a mobile phone is currently in a video call state, the mobile phone may determine whether there is a device having a video playback capability in a current device group, for example, a smart television. 
In some embodiments, the mobile phone may determine, by using the device description capability obtained during device discovery, whether the device has the video playback capability).
Regarding Claim 6, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 1, wherein detecting the first trigger condition includes detecting opening of the second application (see Fang Paragraph [0322], the data channel establishment signal may include an application start indication. The indication is used to start the video call application. After receiving the application start indication, the smart television opens the video call application).
Regarding Claim 7, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 1, wherein detecting the first trigger condition includes detecting user engagement with the first device to minimize the first application (see Fang Figures 4A to 4D, in which when switching, application is minimized on first or primary device and video conference is displayed on television).
Regarding Claim 8, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 3, wherein detecting the second trigger condition includes detecting user engagement with a user interface element on the first device terminating transfer of the video stream to the second device (see Fang Figure 3D, text information displayed on primary (mobile) device with an input option (215) to “switch to the mobile phone” therefore switching the video conference back to the phone from the television and Paragraph [0221], The graphical user interface may further include a control 215. If the mobile phone detects an operation performed on the control, the mobile phone may send a third switching message to the smart television 102. In response to the third switching message, the mobile phone may continue the video call and display data collected by the camera of the other party, and the smart television 102 may display an interface displayed before a moment at which the mobile phone switches the video call to the smart television 102).
Regarding Claim 9, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 3, wherein detecting the second trigger condition includes detecting that the second device is unavailable (see Fang Paragraph [0222], after the mobile phone switches the video call to the smart television 102, the user may switch, by using an operation shown in FIG. 5(a) and FIG. 5(b) , the video call back to the mobile phone for continuing. As shown in FIG. 5(a) , when the mobile phone detects a downward interaction gesture on a status bar, in response to the gesture, the mobile phone may display a window 216, a first notification message 214, and a control 215 on a graphical user interface, as shown in FIG. 5(b) . An on/off control of a function such as Bluetooth or Wi-Fi may be displayed in the window 216. When the mobile phone detects an operation performed on the control 215, the mobile phone may trigger switching of the video call from the smart television 102 back to the mobile phone for continuing).
Regarding Claim 10, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 3, wherein detecting the second trigger condition includes detecting closing of the second application (see Fang Figure 3D, text information displayed on primary (mobile) device with an input option (215) to “switch to the mobile phone” therefore switching the video conference back to the phone from the television and Paragraph [0221], The graphical user interface may further include a control 215. If the mobile phone detects an operation performed on the control, the mobile phone may send a third switching message to the smart television 102. In response to the third switching message, the mobile phone may continue the video call and display data collected by the camera of the other party, and the smart television 102 may display an interface displayed before a moment at which the mobile phone switches the video call to the smart television 102).
Regarding Claim 11, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 1, wherein an operating system of the first device causes display of the video stream on the second device (see Agarwal Column 10, lines 61 – 67, Certain aspects of the described techniques include process steps and instructions described herein in the form of an algorithm. It should be noted that the described process steps and instructions could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems and Column 1, lines 54 – 60, The operations can include detecting a request to establish a video chat session between the first computing device and a second computing device; in response to the detecting, establishing, between the first and second computing devices, the video chat session; receiving, from a user, an input to operate a casting device in an output mode for the video chat session).
Regarding Claim 12, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 1, wherein an operating system of the first application causes display of the video stream on the second device (see Agarwal Column 10, lines 61 – 67, Certain aspects of the described techniques include process steps and instructions described herein in the form of an algorithm. It should be noted that the described process steps and instructions could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems and Column 1, lines 54 – 60, The operations can include detecting a request to establish a video chat session between the first computing device and a second computing device; in response to the detecting, establishing, between the first and second computing devices, the video chat session; receiving, from a user, an input to operate a casting device in an output mode for the video chat session).
Regarding Claim 13, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 1, wherein the second device is a television (see Agarwal Column 4, lines 44 – 48, The mobile computing device 104 can also be associated with a casting device 140 that is connected to a display 144, such as a non-smart television that is not configured for communication via the network 112, such as for the execution of video chatting/streaming software applications and FIG. 4, which is a flow diagram of an example technique for utilizing a television for a video chat session according to some implementations of the present disclosure, in which the television is the second device to display video conference).
Regarding Claim 14, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 1, wherein causing display of the video stream on the second device includes transmitting the video stream to a streaming device (see Agarwal Column 4, lines 44 – 48, The mobile computing device 104 can also be associated with a casting device 140 that is connected to a display 144, such as a non-smart television that is not configured for communication via the network 112, such as for the execution of video chatting/streaming software applications and Figure 4, transmit captured audio/video information for display).
Regarding Claim 15, Agarwal in view of Fang teaches
The non-transitory computer readable medium of claim 1, wherein the voice data and the video data for the video call are captured by a microphone of the first device and a camera of the first device (see Agarwal Column 7, lines 53 – 63, At 412, the mobile computing device 104 can determine whether an input has been received by the casting device 140 (e.g., from the user 124) to operate the display 144 in an output mode. As previously discussed, in this mode, the display 144 receives (via the casting device 140) and outputs the audio/video information from the other computing device 116, but does not receive/transmit any audio/video information captured locally. Instead, this can be done using the mobile computing device 104. Thus, this may also be referred to as an output-only mode, Column 8, lines 1 – 4, At 416, the mobile computing device 104 can transmit configuration information for enabling the casting device 140 to output, at the display 144, audio/video information received from the other computing device 116 and Column 8, lines 16 – 21, receipt of the configuration information causes the casting device 140 to (i) receive the first audio/video information directly from the other computing device 116 and (ii) output the first audio/video information directly via the display 144).
Regarding Claims 16 – 18, they are rejected similarly to Claims 1 – 3, respectively. The method can be found in Agarwal (Column 9, line 1, method).
Regarding Claims 19 – 21, they are rejected similarly to Claims 1 – 3, respectively. The system can be found in Agarwal (Column 9, line 1, system).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to PTO-892, Notice of References Cited for a listing of analogous art.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARISSA ANNE JONES, whose telephone number is (703) 756-1677. The examiner can normally be reached via telework, M-F, 6:30 AM - 4:00 PM CT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen, can be reached at 571-272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CARISSA A JONES/ Examiner, Art Unit 2691
/DUC NGUYEN/ Supervisory Patent Examiner, Art Unit 2691