DETAILED ACTION
Response to Amendment
The Applicants’ amendment, filed 03/04/2026, was received and entered. As a result, dependent claims 3 and 15 have been cancelled and new claims 21 and 22 have been added. Therefore, claims 1-2, 4-14 and 16-22 are pending in this application at this time.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 8, 10, 16-17 and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Small et al. (US 9,009,631) in view of Koons et al. (US 2010/0057872).
Regarding claim 1, Small et al. (hereinafter “Small”) teaches a system (i.e., a system, as shown in figure 1, to transmit context data before connecting a voice call between users of mobile devices; col.2, lines 27-29) comprising:
one or more processors configured to cause the system at least to:
send, to a second device, a video stream captured by a camera of a first device (i.e., a calling mobile device receives a video taken or captured by a video recorder, i.e., a camera, associated with the calling mobile device; col.4, line 59 through col.5, line 2);
send, to the second device, a request to establish a bi-directional connection between the first device and the second device (i.e., the calling mobile device initiates a Session Initiation Protocol (SIP) connection, as a request or an indication of the placed voice call, used to setup a communication session with a destination mobile device (as the second device); col.5, lines 11-27);
wherein sending the video stream causes displaying of the video stream on the second device to notify a user associated with the second device of the request to establish the bi-directional connection (i.e., the destination mobile device presents or displays the context information (the video taken by the video recorder) along with the indication of the received voice call and one or more options requesting guidance in handling the call; col.5, lines 34-45 and lines 52-58);
establish the bi-directional connection between the first device and the second device based, at least in part, on detecting an interaction with the second device (i.e., detecting a user's selection of one of the options, such as the “accept” option to answer the call, after he or she reviews the context information; col.5, lines 59-64); and
receiving audio captured by the second device (i.e., upon receiving a selection to accept the voice call, the destination mobile device connects to the calling mobile device, establishing a voice call for audio conversation; col.5, lines 64-67).
It should be noted that Small teaches the SIP message comprising the video recorded by the video recorder and the request or indication of the received voice call transmitted to the destination mobile device, as discussed above. Small fails to clearly teach the message further comprising an audio stream captured by the first device and transmitted to the second device. However, Koons et al. (hereinafter “Koons”) teaches a media file being sent from either a media-enabled mobile device or a personal computer (para. [0046]). Koons further teaches that the media file is created and comprises a video file, an audio file and/or a picture file to be transmitted to a selected recipient (para. [0047]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of sending, to the second device, an audio stream captured by the first device, as taught by Koons, into the system of Small in order to audibly indicate to a user associated with the second device that an incoming call has been received.
Regarding claim 8, Small further teaches the limitations of the claim, as shown in figure 2, such as mobile telecommunication devices 200a, 200b…200n performing as the first device and the second device, a communication channel performing as one or more wireless connections, and a web server performing as a server to communicate communication data (content and other information) transmitted between the mobile devices (col.3, lines 10-38).
Regarding claim 10, Small teaches a method comprising:
sending, to a second device, a video stream captured by a camera of a first device (i.e., a calling mobile device receives a video taken or captured by a video recorder, i.e., a camera, associated with the calling mobile device; col.4, line 59 through col.5, line 2);
sending, to the second device, a request to establish a bi-directional connection between the first device and the second device (i.e., the calling mobile device initiates a Session Initiation Protocol (SIP) connection, as a request or an indication of the placed voice call, used to setup a communication session with a destination mobile device (as the second device); col.5, lines 11-27);
wherein sending the video stream causes displaying of the video stream on the second device to notify a user associated with the second device of the request to establish the bi-directional connection (i.e., the destination mobile device presents or displays the context information (the video taken by the video recorder) along with the indication of the received voice call and one or more options requesting guidance in handling the call; col.5, lines 34-45 and lines 52-58);
detecting an interaction with the second device based, at least in part, on the video stream (i.e., detecting a user's selection of one of the options, such as the “accept” option to answer the call, after he or she reviews the context information; col.5, lines 59-64);
establishing the bi-directional connection between the first device and the second device based, at least in part, on detecting the interaction with the second device (i.e., upon receiving a selection to accept the voice call, the destination mobile device connects to the calling mobile device, establishing a voice call; col.5, lines 64-67); and
receiving audio captured by the second device (i.e., upon establishing the bi-directional connection, a voice call is connected for audio conversations between the calling and destination mobile devices; col.5, lines 64-67).
It should be noted that Small teaches the SIP message comprising the video recorded by the video recorder and the request or indication of the received voice call transmitted to the destination mobile device, as discussed above. Small fails to clearly teach the message further comprising an audio stream captured by the first device and transmitted to the second device. However, Koons teaches a media file being sent from either a media-enabled mobile device or a personal computer (para. [0046]). Koons further teaches that the media file is created and comprises a video file, an audio file and/or a picture file to be transmitted to a selected recipient (para. [0047]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of sending, to the second device, an audio stream captured by the first device, as taught by Koons, into the method of Small in order to audibly indicate to a user associated with the second device that an incoming call has been received.
Regarding claim 16, Small further teaches the limitations of the claim, as shown in figure 2, such as mobile telecommunication devices 200a, 200b…200n performing as the first device and the second device, a communication channel performing as one or more wireless connections, and a web server performing as a server to send communication data (i.e., content or the video taken by the video recorder, etc.) transmitted between the mobile devices (col.3, lines 10-38).
Regarding claim 17, Small further teaches the limitations of the claim, such as instructions that cause the destination mobile device to record a video of a user that receives the voice call, the destination mobile device performing as the second device (col.8, line 65 - col.9, line 4).
Regarding claim 20, Small teaches a non-transitory computer-readable recording medium having instructions recorded thereon, the instructions, when executed by one or more processors, causing the one or more processors to perform operations comprising:
sending, to a second device, a video stream captured by a camera of a first device (i.e., a calling mobile device receives a video taken or captured by a video recorder, i.e., a camera, associated with the calling mobile device; col.4, line 59 through col.5, line 2);
sending, to the second device, a request to establish a bi-directional connection between the first device and the second device (i.e., the calling mobile device initiates a Session Initiation Protocol (SIP) connection, as a request or an indication of the placed voice call, used to setup a communication session with a destination mobile device (as the second device); col.5, lines 11-27);
wherein sending the video stream causes displaying of the video stream on the second device to notify a user associated with the second device of the request to establish the bi-directional connection (i.e., the destination mobile device presents or displays the context information (the video taken by the video recorder) along with the indication of the received voice call and one or more options requesting guidance in handling the call; col.5, lines 34-45 and lines 52-58);
detecting an interaction with the second device based, at least in part, on the video stream (i.e., detecting a user's selection of one of the options, such as the “accept” option to answer the call, after he or she reviews the context information; col.5, lines 59-64);
establishing the bi-directional connection between the first device and the second device based, at least in part, on detecting the interaction with the second device (i.e., upon receiving a selection to accept the voice call, the destination mobile device connects to the calling mobile device, establishing a voice call; col.5, lines 64-67); and
receiving audio captured by the second device (i.e., upon establishing the bi-directional connection, a voice call is connected for audio conversations between the calling and destination mobile devices; col.5, lines 64-67).
It should be noted that Small teaches the SIP message comprising the video recorded by the video recorder and the request or indication of the received voice call transmitted to the destination mobile device, as discussed above. Small fails to clearly teach the message further comprising an audio stream captured by the first device and transmitted to the second device. However, Koons teaches a media file being sent from either a media-enabled mobile device or a personal computer (para. [0046]). Koons further teaches that the media file is created and comprises a video file, an audio file and/or a picture file to be transmitted to a selected recipient (para. [0047]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of sending, to the second device, an audio stream captured by the first device, as taught by Koons, into the medium of Small in order to audibly indicate to a user associated with the second device that an incoming call has been received.
Regarding claim 21, Small further teaches the SIP message transmitted via the SIP connection to the destination mobile device, wherein the message comprises context information, such as the video taken by the video recorder, along with the indication of the placed voice call (the request to establish the voice call connection; col.5, lines 11-27). Koons further teaches the media file (as the message) being created and sent comprising a video file, an audio file, etc. (para. [0047]). Therefore, in the combination of Small and Koons, the message comprises the video stream, the audio stream and the request to establish the voice call connection.
Claims 2, 4, 9, 11-13 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Small et al. (US 9,009,631) in view of Koons et al. (US 2010/0057872), as applied to claims 1 and 10 above, and further in view of Kumar (US 2011/017667 as recited in the previous Office Action).
Regarding claims 2, 9, 11 and 18, Small and Koons, in combination, teach all subject matter as claimed above, except for the features of performing a facial recognition operation on an image from the video stream captured by the camera of the first device; identifying user information based, at least in part, on the facial recognition operation; and sending the user information to the second device. However, Kumar teaches a system comprising a calling party communication device 50, a called party communication device 70 and a biometric communication system 105, as shown in figure 1. When a calling party initiates a call using the calling party communication device 50, the calling party's image or video (as facial recognition data) is captured by a biometric input on the calling party communication device 50 and transmitted to the biometric communication system 105 (para. [0019] and [0022]). The biometric ID application 110 utilizes the biometric information input by the calling party to validate or authenticate the biometric information of the calling party by querying the biometric database 115 located in the biometric communication system 105. If there is a match, a user profile is identified, retrieved and provided to the called party communication device 70 (para. [0023]-[0024]). The user profile contains the actual name of the calling party (read on the user information). The actual name of the calling party, or the user information, is displayed on the called party communication device 70 (para. [0025]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of identifying user information, such as a profile containing the actual name of the calling party, and transmitting the profile containing the actual name of the calling party to the second device, as taught by Kumar, into the combination of Small and Koons in order to display the authenticated caller identification information to the called party associated with the second device.
Regarding claim 4, Small further teaches the feature of the claim, such as a picture of the user (caller) being decoded as user information and provided as the input in selecting the context information to be sent and displayed at the destination mobile device (col.5, lines 3-5 and lines 52-58).
Regarding claim 12, Kumar further teaches the limitations of the claim, such as the called party communication device 70 receiving the user profile containing the actual name of the calling party (read on the user information) from the biometric ID application 110 (para. [0024]). The actual name of the calling party, or the user information, is displayed on the called party communication device 70 (para. [0025]).
Regarding claim 13, Kumar further teaches the biometric ID application 110 utilizing the biometric information (i.e., an image including a face of the calling party) input by the calling party to validate or authenticate the biometric information of the calling party by querying the biometric database 115 located in the biometric communication system 105. If there is a match, a user profile is identified, retrieved and provided to the called party communication device 70 (para. [0023]-[0024]). The user profile contains the actual name of the calling party (read on the user information).
Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Small et al. (US 9,009,631) in view of Koons et al. (US 2010/0057872) and Kumar (US 2011/017667), as applied to claims 1-2, 10-11 and 13 above, and further in view of Jackson et al. (US 2009/0225745 also recited in the previous Office Action).
Regarding claims 5 and 14, Kumar teaches the feature of identifying a user profile based on the image or face of the calling party, as discussed in the rejection of claim 2 above. Kumar, in combination with Small and Koons, fails to clearly teach the feature of wherein the second device performs an action based, at least in part, on the user profile, the action comprising announcing, through a speaker of the second device, a text identifier associated with the user profile via at least one of a voice generator or a speech synthesizer. However, Jackson et al. (hereinafter "Jackson") teaches a "talking caller identification service" implemented on a called device. Jackson further teaches that text-based caller identification information (the text identifier) is translated, by the called device, into an audible ringtone using a text-to-speech synthesizer in order to alert a user of the called device (para. [0002]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the features of wherein the second device performs an action based, at least in part, on the user profile, the action comprising announcing, through a speaker of the second device, a text identifier associated with the user profile via at least one of a voice generator or a speech synthesizer, as taught by Jackson, into the combination of Small, Koons and Kumar in order to alert a user of the called device to the incoming call.
Claims 6-7 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Small et al. (US 9,009,631) in view of Koons et al. (US 2010/0057872) and Kumar (US 2011/017667), as applied to claims 1-2, 10-11 and 13 above, and further in view of Kenoyer (as recited in the previous Office Action).
Regarding claim 6, Small, Koons and Kumar, in combination, teach all subject matter as claimed above, except for the feature of wherein the second device performs an action based, at least in part, on a failure to authenticate a user associated with the video stream. However, Kenoyer teaches a videoconference environment 100 including a local videoconference system 102 coupled to a remote videoconference system 104 through a communication network 106, as shown in figure 1 (para. [0024]). Kenoyer further teaches the local videoconference system 102 (performing as the second device), which uses biometric authentication for authorizing inbound videoconference requests over the network 106 from the remote videoconference system 104 (para. [0037]). Kenoyer further teaches the local videoconference system 102 as a videoconference system 200, shown in figure 2, which includes a face match module 406 (figure 4) to access the biometric database 302 (figure 3) for biometric information of authorized user faces. If the face match module 406 finds a match between the currently captured facial image and an authorized user face from the biometric database 302, the current user is automatically logged into the videoconference network. However, if no match is found (a failure), the local videoconference system 200 denies (performs an action) the user access to the videoconference network (para. [0041]-[0042]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of wherein the second device performs an action based, at least in part, on a failure to authenticate a user associated with the video stream, as taught by Kenoyer, into the combination of Small, Koons and Kumar in order to prevent an unauthorized user from participating in the video call.
Regarding claims 7 and 19, Kenoyer further teaches the limitations of the claims, such as the videoconference system 200, performing as the second device, capturing and sending a live video image (para. [0026]-[0027]).
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Small et al. (US 9,009,631) in view of Koons et al. (US 2010/0057872), as applied to claims 1 and 10 above, and further in view of Velius (US 5,594,784 also recited in the previous Office Action).
Regarding claim 22, Small and Koons, in combination, teach all subject matter as claimed above, except for the feature of wherein reception of the audio captured by the second device occurs while at least a portion of the video stream is displayed on the second device. However, Velius teaches such features in col.7, lines 34-45 for the purpose of providing a voice command as a selected one of a plurality of options for handling the incoming call, i.e., to either accept or reject it.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of wherein reception of the audio captured by the second device occurs while at least a portion of the video stream is displayed on the second device, as taught by Velius, into the combination of Small and Koons in order to provide a voice command as a selected one of a plurality of options for handling the incoming call, i.e., to either accept or reject it.
Response to Arguments
Applicant’s arguments with respect to claims 1-2, 4-14 and 16-22 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for response to this final action is set to expire THREE MONTHS from the date of this action. In the event a first response is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event will the statutory period for response expire later than SIX MONTHS from the date of this final action.
Any response to this final action should be mailed to:
BOX AF
Commissioner of Patents and Trademarks
Washington, D.C. 20231
Or faxed to:
(703) 872-9314 or (301) 273-8300 (for formal communications; please mark “EXPEDITED PROCEDURE”)
Or, if it is an informal or draft communication, please label it “PROPOSED” or “DRAFT”
Hand Carry Deliveries to:
Customer Service Window
(Randolph Building)
407 Dulany Street
Alexandria, VA 22314
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BINH TIEU, whose telephone number is (571) 272-7510. The examiner can normally be reached from 9:00 to 5:00. The examiner's fax number is (571) 273-7510 and e-mail address is BINH.TIEU@USPTO.GOV.
Examiner interviews are available via telephone or video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
It should be noted that interview time per new application or RCE (utility) is available when, during prosecution, the examiner conducts an interview. When more than one interview is needed in an application, supervisors have the flexibility to approve additional time to advance prosecution.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, FAN S. TSANG can be reached on (571) 272-7547.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. If you have any questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/Binh Kien Tieu/Primary Examiner, Art Unit 2694
Date: February 2026