DETAILED ACTION
This action is in response to the original filing on 02/01/2024 and the preliminary amendment on 02/26/2024. Claims 23-42 are pending and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 23-42 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The specification generally discloses generating, via a camera, image data that is reproducible as an image of at least a portion of a subject (see abs.). The specification further discloses that a front-facing camera is caused to take a picture of at least a portion of a face of the user on the scale simultaneously with the rear-facing camera taking a picture of: (a) at least a portion of feet of the user on the scale, and (b) at least a portion of the display of the scale displaying the unique indicium and the determined body weight of the user ([0016], [0089-0090], [0102-0103]). The specification further discloses a first user interface for a weigh-in procedure (Figs. 10A-10D, [0139-0143]). The user may capture a video ([0140-0141]). A scale measures a user's weight and a recorded weight is displayed as a numeric value (1018 in Fig. 10D, [0143]). The specification further discloses a second graphical user interface displaying a profile of a user, including a picture, name, and location (Fig. 11, [0144]). The specification discloses that user profile 1100 also includes a weight graph 1108 which provides a visual indicator of the results of the user's submissions. For example, weight graph 1108 plots the recorded weights for each of the user's submissions to the contest and provides a line between each recorded weight ([0145]). The specification further discloses a third graphical user interface including the profile and a plurality of frames of other users (Fig. 12, [0146]). Each verification area 1206 includes the image data 1208 of another user's weigh-in, a user-selectable play button 1210, a user-selectable “Verify” button 1212, and a user-selectable “Dispute” button 1214 ([0146]). The user may view the image data 1208 submitted by other users by pressing the play button 1210. The user then determines whether there is any reason to dispute the other users' weigh-ins.
The recorded weight for each submission may be displayed on verification screen 1200 (e.g., overlaid thereon and/or baked into the video) simultaneously with the recorded video of the weigh-in to allow a user to view both the weigh-in video and the recorded weight to help determine whether the weigh-in is legitimate and if it should be verified ([0147]).
Regarding claim 23, the disclosure of the parent applications appears to disclose taking pictures using front- and rear-facing cameras. The parent applications further appear to disclose simultaneously displaying a profile of a second user and a plurality of frames of first users. The specification also discloses that a recorded weight may be displayed as a numeric value on a page (Figs. 10, 11, [0143], [0145]), as a graph (Fig. 11, [0145]), or simultaneously with a video ([0146]). However, the parent applications do not disclose the limitations of independent claim 23, including generating, via a first camera of a first electronic device, first video data; generating, via a second camera of the first electronic device, second video data, wherein at least some of the second video data is generated simultaneously with the generating of the first video data; generating a data file including (i) at least a portion of the first video data, wherein the at least a portion of the first video data is reproducible as a first visual video clip of at least a portion of a first user of the first electronic device and (ii) at least a portion of the second video data, wherein the at least a portion of the second video data is reproducible as a second visual video clip; transmitting, from the first electronic device, the generated data file to a server; receiving, by a second electronic device associated with a second user, the generated data file; and simultaneously displaying, on a display of the second electronic device associated with the second user, (i) a first frame of the first visual video clip, (ii) a second frame of the second visual video clip, and (iii) a first profile image associated with the first user.
Regarding claim 40, similar to the discussion above with respect to claim 23, the instant specification further does not disclose the limitations of independent claim 40, including generating, via a first camera of a first electronic device, first video data; generating, via a second camera of the first electronic device, second video data, wherein at least some of the second video data is generated simultaneously with the generating of the first video data; displaying, on the first electronic device, a user-selectable re-do element, wherein selecting the user-selectable re-do element causes the first electronic device to: generate, via the first camera of the first electronic device, third video data; generate, via the second camera of the first electronic device, fourth video data, wherein at least some of the fourth video data is generated simultaneously with the generating of the third video data; generating a data file including (i) at least a portion of the third video data, wherein the at least a portion of the third video data is reproducible as a third visual video clip of at least a part of the first user of the first electronic device and (ii) at least a portion of the fourth video data, wherein the at least a portion of the fourth video data is reproducible as a fourth visual video clip; transmitting, from the first electronic device, the generated data file to a server; receiving, by a second electronic device associated with a second user, the generated data file; and simultaneously displaying, on a display of the second electronic device associated with the second user, (i) a first frame of the third visual video clip, (ii) a second frame of the fourth visual video clip, and (iii) a first profile image associated with the first user.
Claims 24-39, 41, and 42 are also rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as being dependent on parent claims failing to comply with the written description requirement.
Regarding claim 24, claim 24 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “wherein the simultaneously displaying further includes simultaneously displaying a first name associated with the first user”.
Regarding claim 25, claim 25 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “wherein the simultaneously displaying further includes simultaneously displaying a location associated with the first user”.
Regarding claim 26, claim 26 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “further comprising causing the first electronic device to communicate a first prompt to the first user to generate the first video data, the second video data, or both”.
Regarding claim 27, claim 27 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “further comprising overlaying a first user-selectable element on at least a portion of the first frame of the first visual video clip, the second frame of the second visual video clip, or a combination thereof”.
Regarding claim 28, claim 28 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “wherein the first user-selectable element is a play button, wherein selecting the play button causes at least a portion of the first visual video clip, the second visual video clip, or a combination thereof, to play”.
Regarding claim 29, claim 29 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “wherein the simultaneously displaying further includes simultaneously displaying a name of the first user and a location of the first user adjacent to the first frame of the first visual video clip, the second frame of the second visual video clip, or a combination thereof”.
Regarding claim 30, claim 30 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “further comprising displaying, on the first electronic device, a second user-selectable element”.
Regarding claim 31, claim 31 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “wherein the first user is notified when a predetermined number of other users select the second user-selectable element”.
Regarding claim 32, claim 32 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “further comprising simultaneously displaying, on a display of the first electronic device, (1) a first profile image associated with the first user, (2) a first name associated with the first user, and (3) a list of submissions of the first user”.
Regarding claim 33, claim 33 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “further comprising simultaneously displaying, on the display of the second electronic device, (1) a second profile image associated with the second user, (2) a second name associated with the second user, and (3) a list of submissions of the second user”.
Regarding claim 34, claim 34 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “further comprising: analyzing the at least a portion of the first video data; and creating a first user appearance model of the first user”.
Regarding claim 35, claim 35 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “wherein the first user appearance model includes a first user face model”.
Regarding claim 36, claim 36 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “further comprising creating a second user appearance model based at least on the first user appearance model and a weight associated with the first user”.
Regarding claim 37, claim 37 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “wherein the second visual video clip is of at least a second portion the first user of the first electronic device, the first portion of the first user includes a face of the first user and the second portion of the first user includes a foot of the first user”.
Regarding claim 38, claim 38 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “wherein the second visual video clip further includes at least a portion of a display device”.
Regarding claim 39, claim 39 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “wherein the display device is built into a scale and is configured to display a weight of the first user thereon”.
Regarding claim 41, claim 41 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “further comprising displaying, on the first electronic device, a user-selectable accept element, wherein selecting the user-selectable accept element causes the generated date file to be transmitted to the server”.
Regarding claim 42, claim 42 includes subject matter not described in the specification, as discussed above with respect to the parent claim. The specification further does not disclose “wherein the simultaneously displaying further includes simultaneously displaying a first name associated with the first user”.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 26-29 and 39 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 26, claim 26 recites “or both”. It is unclear to which specific limitations this limitation is intended to refer. For the purposes of examination, this limitation is interpreted as: generate the first video data or the second video data.
Regarding claim 27, claim 27 recites “or a combination thereof”. It is unclear to which specific limitations this limitation is intended to refer. For the purposes of examination, this limitation is interpreted as: at least a portion of the first frame of the first visual video clip or the second frame of the second visual video clip.
Regarding claim 28, claim 28 recites “or a combination thereof”. It is unclear to which specific limitations this limitation is intended to refer. For the purposes of examination, this limitation is interpreted as: at least a portion of the first visual video clip or the second visual video clip.
Regarding claim 29, claim 29 recites “or a combination thereof”. It is unclear to which specific limitations this limitation is intended to refer. For the purposes of examination, this limitation is interpreted as: the first frame of the first visual video clip or the second frame of the second visual video clip.
Regarding claim 39, claim 39 recites “a weight of the first user thereon”. It is unclear to which element “thereon” is intended to refer. For the purposes of examination, this limitation is interpreted as: a weight of the first user.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 23-27, 29-35, and 40-42 are rejected under 35 U.S.C. 103 as being unpatentable over Sim et al. (US 20100128103 A1, published 05/27/2010), hereinafter Sim, in view of Chong et al. (US 20120274808 A1, published 11/01/2012), hereinafter Chong.
Regarding claim 23, Sim teaches the limitations of the claim, comprising:
A method comprising: generating, via a first camera of a first electronic device, first video data; generating, via a second camera of the first electronic device, second video data, wherein at least some of the second video data is generated simultaneously with the generating of the first video data (Sim Figs. 1-6; [0047], more than one camera is included in the system for capturing the first location on video for the webcast; [0051], The WLAN 102 includes a control unit 104, a first camera 110 connected to the control unit 104 for capturing a first camera view of the webcast at the first location; The first communication unit 114 includes a second camera 118 that is capable of capturing a second camera view of the webcast at the first location; [0055], The first and second cameras 110 and 118 respectively, may be digital cameras where each includes a microphone and adjustment means for panning and/or zooming in or out the captured view. If the camera supports optical panning and zooming, there would be provided one or more electric motor and mechanical parts to enable these features. The cameras 110 and 118 capture both moving images and sound produced at the first location; [0056], During operation, the control unit 104 streams the captured video in the camera views of the first and second cameras 110 and 118 respectively to the server 108. The server 108 in turn streams the captured sound and moving images to the first communication unit 114 and the N communication units 116 via the Internet 112. Depending on user request, the control unit 104 may stream only one video captured by one of the first and second cameras 110 and 118, or stream both videos captured by both cameras 110 and 118. The streaming can be done in real time; [0156], It is appreciated that the first communication unit 114, the camera 110 and the control unit 104 described with reference to FIG. 1 may be components of a single device. 
For instance, the single device may be a laptop computer, mobile phone, personal digital assistant, media player, entertainment device or communication device);
generating a data file including (i) at least a portion of the first video data, wherein the at least a portion of the first video data is reproducible as a first visual video clip of at least a portion and (ii) at least a portion of the second video data, wherein the at least a portion of the second video data is reproducible as a second visual video clip; transmitting, from the first electronic device, the generated data stream to a server; receiving, by a second electronic device associated with a second user, the generated data file (Sim Figs. 1-6; [0006], one or more control interfaces for controlling the camera view of the one or more videos displayed in the first display, wherein controlling the camera view comprises switching the one or more videos displayed in the first display between videos captured by the two or more cameras; [0019], The one or more videos captured may be recorded and stored as a video file in a database; [0044], users can access the web portal to view a live webcast at a public WiFi hotspot of their choice and on a camera view of their preference; the webcasts can be recorded for future viewing; [0048], The web portal enables users accessing the web portal on the Internet to select the live or recorded webcast they are interested to view; [0056], During operation, the control unit 104 streams the captured video in the camera views of the first and second cameras 110 and 118 respectively to the server 108. The server 108 in turn streams the captured sound and moving images to the first communication unit 114 and the N communication units 116 via the Internet 112; The main web page 200 provides users with a list of existing or most recently created live webcasts in a first window 202 and a list of previous webcasts (recorded footages) in a second window 204. By selecting one of the webcasts in the lists 202, 204, users can call out a webcasting webpage (400 in FIG. 4) dedicated for the selected webcast; [0109], If the webcaster chose to stream both videos captured by the first and second cameras (110 and 118 in FIG. 1), webcast viewers would be allowed to switch between camera views captured by the two cameras (110 and 118 in FIG. 1). In this case, the Switch Camera View button (418 in FIG. 4) would be enabled; [0140], Recorded footages of past webcasts can also be viewed by other users by selecting and accessing the webcasting webpage of a previous webcast entry from the second window 204 in the main page 200);
and simultaneously displaying, on a display of the second electronic device associated with the second user, (i) a first frame of the first visual video clip, (ii) a second frame of a visual video clip, and (iii) a first profile image associated with the first user (Sim Figs. 1-6; [0060], By selecting one of the webcasts in the lists 202, 204, users can call out a webcasting webpage (400 in FIG. 4) dedicated for the selected webcast; [0102], there is provided one or more avatars (e.g. 412 in FIG. 4) selectable for use to identify each user accessing the web portal via any one of the first communication unit (114 in FIG. 1) or N other communication units (116 in FIG. 1) upon user access at the web portal; Users may have the option to change the avatar and upload their own picture or avatar if they decide not to use one of the pre-stored avatars; [0109], If the webcaster chose to stream both videos captured by the first and second cameras (110 and 118 in FIG. 1), webcast viewers would be allowed to switch between camera views captured by the two cameras (110 and 118 in FIG. 1). In this case, the Switch Camera View button (418 in FIG. 4) would be enabled; [0127], FIG. 4 shows an illustration of a webcasting webpage 400 of an existing webcast that is displayed after a webcast viewer, which is a Level 3 user in the example embodiment, login to the web portal and selected one of the webcast entries from the first window 202 on the main web page (200 in FIG. 2). 
The webcasting webpage 400 includes a specific location field 416 showing, for example, the venue and address of the webcast; a geographical location field 420 showing, for example, the city and country; a webcast title field 422 showing the webcast title; a display window 404 for displaying the webcast; and an online contact list 402 containing the avatars and username of all the users who have accessed the webcasting webpage 400; [0130], The online contact list 402 is visible to all the users (Levels 1 to 4 only) in the contact list 402. In this case, there is provided in the contact list 402, a first contact field 412 (webcasting webpage) belonging to the webcaster, a second contact field 414 (webcasting webpage) of another user who has previously accessed the webcasting webpage 400 and a third contact field 406 (webcasting webpage) belonging to the new user who has just accessed the webcasting webpage 400; [0132], FIG. 4 shows that the first user (webcaster) has selected an option to communicate with the second user in a video conference and the second user has approved it. After the option is selected, a second display window 410 displaying the webcam view of the second user appears. The second display window 410 can be resized and dragged around the screen display of the webcasting webpage 400 by a mouse cursor or finger contact (if touch screen is being used). In the example embodiment, only the user who has selected to hold a video conference with the second user can see the second display window 410. 
However, it is appreciated that in other example embodiments, the second display window 410 may be visible to more users when other users in the contact list 406 selects to engage the second user in a video conference and the second user approves it; [0141], viewers of a live or recorded webcast can provide comments about the live or recorded webcast by writing in the chat window 408, by speaking to the webcaster, which can be heard by other viewers, or by gesturing in a video conference display window 410; [0160], the user webpage (300 in FIG. 3) (webcaster's view) and the webcasting webpage (400 in FIG. 4) (other users' view) may contain more than one displays for displaying the several camera views. Also, one display may be split up into two or more smaller displays to display two or more camera views, for instance, in a television picture-in-picture manner)
However, Sim fails to expressly disclose a first visual video clip of at least a portion of a first user of the first electronic device; transmitting, from the first electronic device, the generated data file to a server; simultaneously displaying, on a display of the second electronic device associated with the second user, (i) a first frame of the first visual video clip, (ii) a second frame of the second visual video clip. In the same field of endeavor, Chong teaches:
a first visual video clip of at least a portion of a first user of the first electronic device; transmitting, from the first electronic device, the generated data file to a server; simultaneously displaying, on a display of the second electronic device associated with the second user, (i) a first frame of the first visual video clip, (ii) a second frame of the second visual video clip (Chong Figs. 1-12; [0020], a user is able to record video of the background from the rear camera and also record a simultaneous narration from the front camera using a single mobile device. The narration can be combined as a live overlay of the user over the background image or video. This function can be provided using two separate image or video processors or a single controller can take in the two video streams and merge them in software or a special hardware image processing or graphics processing module. Alternatively, this function can be provided by a master image sensor used to control a slave image sensor; [0024], In FIG. 3, the mobile display is configured to combine the image of the trees from the rear camera with the image of the user from the front camera; [0027], In FIG. 6, the images are reversed so that the image of the trees 103 now fills the inset; [0028], In the example of FIG. 7, the two images are positioned beside each other; [0032], Any one or more of the views of FIGS. 1-7 can be sent by the mobile device to local memory, to remote memory, or to another user; The call may also be a session with a remote server from which data may be relayed to one or more other users; [0035], The exchange of messages, images, video, and audio may continue throughout the course of the call or session; [0048], FIG. 10 is block diagram of an alternative configuration of a mobile device hardware platform 1000 suitable for use with embodiments of the present invention; [0064], FIG. 12 shows a more detailed view of the two camera modules 1025, 1031 of FIG. 10; [0076], The overlay combination block 1219 is coupled to receive the same first two inputs as the overlay multiplexer, the primary camera and the secondary camera direct input as a raw or other type of image file. The overlay combination block takes these two image files and combines them based on commands received from the controller or based on an automated or fixed routine. In one example, as shown in FIG. 3, the overlay combination block takes the pixels of an image from the secondary camera and uses them to replace a portion of the pixels of an image from the primary camera)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated a first visual video clip of at least a portion of a first user of the first electronic device; transmitting, from the first electronic device, the generated data file to a server; simultaneously displaying, on a display of the second electronic device associated with the second user, (i) a first frame of the first visual video clip, (ii) a second frame of the second visual video clip as suggested in Chong into Sim. Doing so would be desirable because many mobile devices are equipped with both a front camera with a view of the user and a rear camera with a view of the background or what lies in front of the user (see Chong [0002]). When used as a video telephone, the display shows a video of the remote party and, in some cases, a video also of the user superimposed on the screen. However, these devices do not permit the display to show captured images from both cameras on the same display at the same time. They also do not permit images captured from the two cameras at the same time to be transmitted to another mobile device. This limits the usefulness of the two cameras on the mobile device (see Chong [0004]). Many different types of mobile devices offer or may be adapted to offer multiple cameras with different views (see Chong [0031]). Additionally, the system of Chong would improve the system of Sim by enabling viewers to watch both streams of video generated by the first user simultaneously, without the need to press a button to switch the view. The file of combined streams would save the viewers time and better enable the viewers to watch desired multi-camera content.
Regarding claim 40, Sim teaches the limitations of the claim, comprising:
A method comprising: generating, via a first camera of a first electronic device, first video data; generating, via a second camera of the first electronic device, second video data, wherein at least some of the second video data is generated simultaneously with the generating of the first video data (Sim Figs. 1-6; [0047], more than one camera is included in the system for capturing the first location on video for the webcast; [0051], The WLAN 102 includes a control unit 104, a first camera 110 connected to the control unit 104 for capturing a first camera view of the webcast at the first location; The first communication unit 114 includes a second camera 118 that is capable of capturing a second camera view of the webcast at the first location; [0055], The first and second cameras 110 and 118 respectively, may be digital cameras where each includes a microphone and adjustment means for panning and/or zooming in or out the captured view. If the camera supports optical panning and zooming, there would be provided one or more electric motor and mechanical parts to enable these features. The cameras 110 and 118 capture both moving images and sound produced at the first location; [0056], During operation, the control unit 104 streams the captured video in the camera views of the first and second cameras 110 and 118 respectively to the server 108. The server 108 in turn streams the captured sound and moving images to the first communication unit 114 and the N communication units 116 via the Internet 112. Depending on user request, the control unit 104 may stream only one video captured by one of the first and second cameras 110 and 118, or stream both videos captured by both cameras 110 and 118. The streaming can be done in real time; [0156], It is appreciated that the first communication unit 114, the camera 110 and the control unit 104 described with reference to FIG. 1 may be components of a single device. 
For instance, the single device may be a laptop computer, mobile phone, personal digital assistant, media player, entertainment device or communication device);
displaying, on the first electronic device, a user-selectable re-do element, wherein selecting the user-selectable re-do element causes the first electronic device to: generate, via the first camera of the first electronic device, third video data; generate, via the second camera of the first electronic device, fourth video data, wherein at least some of the fourth video data is generated simultaneously with the generating of the third video data (Sim Figs. 1-6; [0047], more than one camera is included in the system for capturing the first location on video for the webcast; [0051], The WLAN 102 includes a control unit 104, a first camera 110 connected to the control unit 104 for capturing a first camera view of the webcast at the first location; The first communication unit 114 includes a second camera 118 that is capable of capturing a second camera view of the webcast at the first location; [0055], The first and second cameras 110 and 118 respectively, may be digital cameras where each includes a microphone and adjustment means for panning and/or zooming in or out the captured view. If the camera supports optical panning and zooming, there would be provided one or more electric motor and mechanical parts to enable these features. The cameras 110 and 118 capture both moving images and sound produced at the first location; [0056], During operation, the control unit 104 streams the captured video in the camera views of the first and second cameras 110 and 118 respectively to the server 108. The server 108 in turn streams the captured sound and moving images to the first communication unit 114 and the N communication units 116 via the Internet 112. Depending on user request, the control unit 104 may stream only one video captured by one of the first and second cameras 110 and 118, or stream both videos captured by both cameras 110 and 118. 
The streaming can be done in real time; [0111], The webcast control button 320 starts and stops the streaming of the webcast from the server; When the webcast control button 320 is selected in a second instance, i.e. signifying `stop`, the streaming of the webcast would end, and a dialogue box (not shown in the Figures) would appear to prompt the webcaster whether to delete the recording made so far; [0112], A Select Camera View button 338 is provided for the webcaster to switch between camera views captured by different cameras; [0117], If a manual text entry based application is implemented, selecting the Add Subtitle control button 328 would call out an Add Subtitle dialogue box; [0122], The user webpage 300 includes a first control button 316 for controlling zooming in the camera view of the camera (110 in FIG. 1) and a second control button 318 for controlling zooming out the camera view. Panning control of the camera view is controlled by selecting four directional control buttons 314 in the desired manner; [0120], there is provided in the user webpage 300 an Add Audio Source control button 330 for the user to provide an alternative source of audio input in addition to the sound captured by the camera microphone of the camera (110 in FIG. 1). The alternative audio source may complement the camera microphone of the camera (110 in FIG. 1) if the camera microphone is activated, or it may serve as the only source of sound for the webcast; [0165], one or more control interfaces may be provided for re-recording a recorded webcast so as to incorporate or change the subtitles, sound effects, sound or music played in the recorded webcast);
generating a data file including (i) at least a portion of the third video data, wherein the at least a portion of the third video data is reproducible as a third visual video clip of at least a part of the first user, and (ii) at least a portion of the fourth video data, wherein the at least a portion of the fourth video data is reproducible as a fourth visual video clip; transmitting, from the first electronic device, the generated data file to a server; receiving, by a second electronic device associated with a second user, the generated data file (Sim Figs. 1-6; [0006], one or more control interfaces for controlling the camera view of the one or more videos displayed in the first display, wherein controlling the camera view comprises switching the one or more videos displayed in the first display between videos captured by the two or more cameras; [0019], The one or more videos captured may be recorded and stored as a video file in a database; [0044], users can access the web portal to view a live webcast at a public WiFi hotspot of their choice and on a camera view of their preference; the webcasts can be recorded for future viewing; [0048], The web portal enables users accessing the web portal on the Internet to select the live or recorded webcast they are interested to view; [0056], During operation, the control unit 104 streams the captured video in the camera views of the first and second cameras 110 and 118 respectively to the server 108. The server 108 in turn streams the captured sound and moving images to the first communication unit 114 and the N communication units 116 via the Internet 112; The main web page 200 provides users with a list of existing or most recently created live webcasts in a first window 202 and a list of previous webcasts (recorded footages) in a second window 204. By selecting one of the webcasts in the lists 202, 204, users can call out a webcasting webpage (400 in FIG.
4) dedicated for the selected webcast; [0109], If the webcaster chose to stream both videos captured by the first and second cameras (110 and 118 in FIG. 1), webcast viewers would be allowed to switch between camera views captured by the two cameras (110 and 118 in FIG. 1). In this case, the Switch Camera View button (418 in FIG. 4) would be enabled; [0140], Recorded footages of past webcasts can also be viewed by other users by selecting and accessing the webcasting webpage of a previous webcast entry from the second window 204 in the main page 200);
and simultaneously displaying, on a display of the second electronic device associated with the second user, (i) a first frame of the third visual video clip, (ii) a second frame of the fourth visual video clip, and (iii) a first profile image associated with the first user (Sim Figs. 1-6; [0060], By selecting one of the webcasts in the lists 202, 204, users can call out a webcasting webpage (400 in FIG. 4) dedicated for the selected webcast; [0102], there is provided one or more avatars (e.g. 412 in FIG. 4) selectable for use to identify each user accessing the web portal via any one of the first communication unit (114 in FIG. 1) or N other communication units (116 in FIG. 1) upon user access at the web portal; Users may have the option to change the avatar and upload their own picture or avatar if they decide not to use one of the pre-stored avatars; [0109], If the webcaster chose to stream both videos captured by the first and second cameras (110 and 118 in FIG. 1), webcast viewers would be allowed to switch between camera views captured by the two cameras (110 and 118 in FIG. 1). In this case, the Switch Camera View button (418 in FIG. 4) would be enabled; [0127], FIG. 4 shows an illustration of a webcasting webpage 400 of an existing webcast that is displayed after a webcast viewer, which is a Level 3 user in the example embodiment, login to the web portal and selected one of the webcast entries from the first window 202 on the main web page (200 in FIG. 2). 
The webcasting webpage 400 includes a specific location field 416 showing, for example, the venue and address of the webcast; a geographical location field 420 showing, for example, the city and country; a webcast title field 422 showing the webcast title; a display window 404 for displaying the webcast; and an online contact list 402 containing the avatars and username of all the users who have accessed the webcasting webpage 400; [0130], The online contact list 402 is visible to all the users (Levels 1 to 4 only) in the contact list 402. In this case, there is provided in the contact list 402, a first contact field 412 (webcasting webpage) belonging to the webcaster, a second contact field 414 (webcasting webpage) of another user who has previously accessed the webcasting webpage 400 and a third contact field 406 (webcasting webpage) belonging to the new user who has just accessed the webcasting webpage 400; [0132], FIG. 4 shows that the first user (webcaster) has selected an option to communicate with the second user in a video conference and the second user has approved it. After the option is selected, a second display window 410 displaying the webcam view of the second user appears. The second display window 410 can be resized and dragged around the screen display of the webcasting webpage 400 by a mouse cursor or finger contact (if touch screen is being used). In the example embodiment, only the user who has selected to hold a video conference with the second user can see the second display window 410. 
However, it is appreciated that in other example embodiments, the second display window 410 may be visible to more users when other users in the contact list 406 selects to engage the second user in a video conference and the second user approves it; [0141], viewers of a live or recorded webcast can provide comments about the live or recorded webcast by writing in the chat window 408, by speaking to the webcaster, which can be heard by other viewers, or by gesturing in a video conference display window 410; [0160], the user webpage (300 in FIG. 3) (webcaster's view) and the webcasting webpage (400 in FIG. 4) (other users' view) may contain more than one displays for displaying the several camera views. Also, one display may be split up into two or more smaller displays to display two or more camera views, for instance, in a television picture-in-picture manner)
However, Sim fails to expressly disclose a third visual video clip of at least a part of the first user of the first electronic device; transmitting, from the first electronic device, the generated data file to a server; simultaneously displaying, on a display of the second electronic device associated with the second user, (i) a first frame of the third visual video clip, (ii) a second frame of the fourth visual video clip. In the same field of endeavor, Chong teaches:
a third visual video clip of at least a part of the first user of the first electronic device; transmitting, from the first electronic device, the generated data file to a server; simultaneously displaying, on a display of the second electronic device associated with the second user, (i) a first frame of the third visual video clip, (ii) a second frame of the fourth visual video clip (Chong Figs. 1-12; [0020], a user is able to record video of the background from the rear camera and also record a simultaneous narration from the front camera using a single mobile device. The narration can be combined as a live overlay of the user over the background image or video. This function can be provided using two separate image or video processors or a single controller can take in the two video streams and merge them in software or a special hardware image processing or graphics processing module. Alternatively, this function can be provided by a master image sensor used to control a slave image sensor; [0024], In FIG. 3, the mobile display is configured to combine the image of the trees from the rear camera with the image of the user from the front camera; [0027], In FIG. 6, the images are reversed so that the image of the trees 103 now fills the inset; [0028], In the example of FIG. 7, the two images are positioned beside each other; [0032], Any one or more of the views of FIGS. 1-7 can be sent by the mobile device to local memory, to remote memory, or to another user; The call may also be a session with a remote server from which data may be relayed to one or more other users; [0035], The exchange of messages, images, video, and audio may continue throughout the course of the call or session; [0048], FIG. 10 is block diagram of an alternative configuration of a mobile device hardware platform 1000 suitable for use with embodiments of the present invention; [0064], FIG. 12 shows a more detailed view of the two camera modules 1025, 1031 of FIG. 
10; [0076], The overlay combination block 1219 is coupled to receive the same first two inputs as the overlay multiplexer, the primary camera and the secondary camera direct input as a raw or other type of image file. The overlay combination block takes these two image files and combines them based on commands received from the controller or based on an automated or fixed routine. In one example, as shown in FIG. 3, the overlay combination block takes the pixels of an image from the secondary camera and uses them to replace a portion of the pixels of an image from the primary camera)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated a third visual video clip of at least a part of the first user of the first electronic device; transmitting, from the first electronic device, the generated data file to a server; simultaneously displaying, on a display of the second electronic device associated with the second user, (i) a first frame of the third visual video clip, (ii) a second frame of the fourth visual video clip as suggested in Chong into Sim. Doing so would be desirable because many mobile devices are equipped with both a front camera with a view of the user and a rear camera with a view of the background or what lies in front of the user (see Chong [0002]). When used as a video telephone, the display shows a video of the remote party and, in some cases, a video also of the user superimposed on the screen. However, these devices do not permit the display to show captured images from both cameras on the same display at the same time. They also do not permit images captured from the two cameras at the same time to be transmitted to another mobile device. This limits the usefulness of the two cameras on the mobile device (see Chong [0004]). Many different types of mobile devices offer or may be adapted to offer multiple cameras with different views (see Chong [0031]). Additionally, the system of Chong would improve the system of Sim by enabling viewers to watch both streams of video generated by the first user simultaneously, without the need to press a button to switch the view. The file of combined streams would save the viewers time and better enable the viewers to watch desired multi-camera content.
Regarding claim 24, Sim in view of Chong teaches all the limitations of claim 23. Sim further teaches:
wherein the simultaneously displaying further includes simultaneously displaying a first name associated with the first user (Sim Figs. 1-6; [0127], FIG. 4 shows an illustration of a webcasting webpage 400 of an existing webcast that is displayed after a webcast viewer, which is a Level 3 user in the example embodiment, login to the web portal and selected one of the webcast entries from the first window 202 on the main web page (200 in FIG. 2). The webcasting webpage 400 includes a specific location field 416 showing, for example, the venue and address of the webcast; a geographical location field 420 showing, for example, the city and country; a webcast title field 422 showing the webcast title; a display window 404 for displaying the webcast; and an online contact list 402 containing the avatars and username of all the users who have accessed the webcasting webpage 400; [0130], The online contact list 402 is visible to all the users (Levels 1 to 4 only) in the contact list 402. In this case, there is provided in the contact list 402, a first contact field 412 (webcasting webpage) belonging to the webcaster, a second contact field 414 (webcasting webpage) of another user who has previously accessed the webcasting webpage 400 and a third contact field 406 (webcasting webpage) belonging to the new user who has just accessed the webcasting webpage 400).
Regarding claim 42, claim 42 contains substantially similar limitations to those found in claim 24. Consequently, claim 42 is rejected for the same reasons.
Regarding claim 25, Sim in view of Chong teaches all the limitations of claim 24. Sim further teaches:
wherein the simultaneously displaying further includes simultaneously displaying a location associated with the first user (Sim Figs. 1-6; [0061], Each webcast entry listed in the main web page 200 displays the title of the webcast in a webcast title field 222, a specific location field 218 and a geographical location field 220. The specific location field 218 and a geographical location field 220 specify where the webcaster and the control unit (104 in FIG. 1) are located; [0127], FIG. 4 shows an illustration of a webcasting webpage 400 of an existing webcast that is displayed after a webcast viewer, which is a Level 3 user in the example embodiment, login to the web portal and selected one of the webcast entries from the first window 202 on the main web page (200 in FIG. 2). The webcasting webpage 400 includes a specific location field 416 showing, for example, the venue and address of the webcast; a geographical location field 420 showing, for example, the city and country; a webcast title field 422 showing the webcast title; a display window 404 for displaying the webcast; and an online contact list 402 containing the avatars and username of all the users who have accessed the webcasting webpage 400; [0130], The online contact list 402 is visible to all the users (Levels 1 to 4 only) in the contact list 402. In this case, there is provided in the contact list 402, a first contact field 412 (webcasting webpage) belonging to the webcaster, a second contact field 414 (webcasting webpage) of another user who has previously accessed the webcasting webpage 400 and a third contact field 406 (webcasting webpage) belonging to the new user who has just accessed the webcasting webpage 400)
Regarding claim 26, Sim in view of Chong teaches all the limitations of claim 23. Sim further teaches:
further comprising causing the first electronic device to communicate a first prompt to the first user to generate the first video data, the second video data, or both (Sim Figs. 1-6; [0105], The user webpage 300 includes a `create webcast` button 302 for the registered user to create a webcast. If the registered user or also herein known as the webcaster selects the `create webcast` button 302, the relevant software drivers would run in the background on the control unit (104 in FIG. 1) to search for the presence of cameras connected to it. In the example embodiment, the first and second cameras (110 and 118 in FIG. 1) are directly connected to the control unit (104 in FIG. 1). When these camera connections are detected, a dialogue box (not shown in the figures) will be displayed to prompt the webcaster to select one of the camera connections; [0106], the dialogue box also prompts the webcaster to select between streaming the video captured from the first camera (110 in FIG. 1), the second camera (118 in FIG. 1), or from both cameras (110 and 118 in FIG. 1); [0109], If the webcaster chose to stream both videos captured by the first and second cameras (110 and 118 in FIG. 1), webcast viewers would be allowed to switch between camera views captured by the two cameras (110 and 118 in FIG. 1); [0111], The webcast control button 320 starts and stops the streaming of the webcast from the server (108 in FIG. 1) to the N communication units (116 in FIG. 1))
Regarding claim 27, Sim in view of Chong teaches all the limitations of claim 23. Sim further teaches:
further comprising overlaying a first user-selectable element on at least a portion of the first frame of the first visual video clip, the second frame of the second visual video clip, or a combination thereof (Sim Figs. 1-6; [0114], subtitles can be added to the webcast. The subtitles entered are located at a lower portion 338 of the webcast display 304. Similarly, subtitles could appear at the same location in the webcast display 404, which is described with reference to FIG. 4. It is appreciated that there could be control interfaces available to relocate the subtitles to other locations on the display 304 (and 404 in FIG. 4) or to other locations on the web pages 300 or 400. The words and/or characters appearing in the webcast may be overlaid on the display 304 (and 404 in FIG. 4) of the webcast or be embedded with the video stream of the webcast; [0127], FIG. 4 shows an illustration of a webcasting webpage 400 of an existing webcast that is displayed after a webcast viewer, which is a Level 3 user in the example embodiment, login to the web portal and selected one of the webcast entries from the first window 202 on the main web page (200 in FIG. 2); [0132], FIG. 4 shows that the first user (webcaster) has selected an option to communicate with the second user in a video conference and the second user has approved it. After the option is selected, a second display window 410 displaying the webcam view of the second user appears. The second display window 410 can be resized and dragged around the screen display of the webcasting webpage 400 by a mouse cursor or finger contact (if touch screen is being used). In the example embodiment, only the user who has selected to hold a video conference with the second user can see the second display window 410. 
However, it is appreciated that in other example embodiments, the second display window 410 may be visible to more users when other users in the contact list 406 selects to engage the second user in a video conference and the second user approves it; [0141], viewers of a live or recorded webcast can provide comments about the live or recorded webcast by writing in the chat window 408, by speaking to the webcaster, which can be heard by other viewers, or by gesturing in a video conference display window 410)
Regarding claim 29, Sim in view of Chong teaches all the limitations of claim 27. Sim further teaches:
wherein the simultaneously displaying further includes simultaneously displaying a name of the first user and a location of the first user adjacent to the first frame of the first visual video clip, the second frame of the second visual video clip, or a combination thereof (Sim Figs. 1-6; [0061], Each webcast entry listed in the main web page 200 displays the title of the webcast in a webcast title field 222, a specific location field 218 and a geographical location field 220. The specific location field 218 and a geographical location field 220 specify where the webcaster and the control unit (104 in FIG. 1) are located; [0127], FIG. 4 shows an illustration of a webcasting webpage 400 of an existing webcast that is displayed after a webcast viewer, which is a Level 3 user in the example embodiment, login to the web portal and selected one of the webcast entries from the first window 202 on the main web page (200 in FIG. 2). The webcasting webpage 400 includes a specific location field 416 showing, for example, the venue and address of the webcast; a geographical location field 420 showing, for example, the city and country; a webcast title field 422 showing the webcast title; a display window 404 for displaying the webcast; and an online contact list 402 containing the avatars and username of all the users who have accessed the webcasting webpage 400; [0130], The online contact list 402 is visible to all the users (Levels 1 to 4 only) in the contact list 402. In this case, there is provided in the contact list 402, a first contact field 412 (webcasting webpage) belonging to the webcaster, a second contact field 414 (webcasting webpage) of another user who has previously accessed the webcasting webpage 400 and a third contact field 406 (webcasting webpage) belonging to the new user who has just accessed the webcasting webpage 400)
Regarding claim 30, Sim in view of Chong teaches all the limitations of claim 27. Sim further teaches:
further comprising displaying, on the first electronic device, a second user-selectable element (Sim Figs. 1-6; [0121], There is provided a contact list 306 containing, in this case, a first contact field 310 (user webpage) displaying an avatar and the username of the webcaster and a second contact field 308 (user webpage) displaying an avatar and the username of a user who has selected to view the webcast from the main web page (200 in FIG. 1). Each of these contact fields are activators, which may be mouse action triggered or finger action triggered (in the case of touch screens), to enable the user of the user webpage 300 to engage in one-to-one user text messaging, speak to the user associated with the contact field selected via audio transmission over the Internet (i.e. teleconferencing), and engage the user associated with the contact field in video conferencing; [0123], There is also provided a common chat window 312 for text messaging between all the users in the contact list 306. The chat window 312 is visible to all the webcaster and user(s) who access the user webpage 300; [0126], There is also provided an `invite friends` control button 336 for inviting users to view the webcast. A dialogue box (not shown in the Figures) would pop up after selecting the button 336. The dialogue box has options for the user to invite/alert friends to view the webcast via email, text messaging, short messaging service (SMS), or the like; [0127], FIG. 4 shows an illustration of a webcasting webpage 400 of an existing webcast that is displayed after a webcast viewer, which is a Level 3 user in the example embodiment, login to the web portal and selected one of the webcast entries from the first window 202 on the main web page (200 in FIG. 2). 
The webcasting webpage 400 includes a specific location field 416 showing, for example, the venue and address of the webcast; a geographical location field 420 showing, for example, the city and country; a webcast title field 422 showing the webcast title; a display window 404 for displaying the webcast; and an online contact list 402 containing the avatars and username of all the users who have accessed the webcasting webpage 400; [0130-0131], if communication is enabled for all users in the contact list 402, one of the other two users in the contact list 402 can communicate with the user associated with the third contact field 406 by selecting the third contact field 406 in the contact list 402 and selecting the means of communication. The users can communicate with each other by, for instance, text messaging all the users in a common chat window 408 viewable by all the users, one to one text messaging, hold a teleconference (audio transmission only) and/or engage in video conference (video and audio transmission) with selected users in the contact list 402; [0132], FIG. 4 shows that the first user (webcaster) has selected an option to communicate with the second user in a video conference and the second user has approved it. After the option is selected, a second display window 410 displaying the webcam view of the second user appears; [0163], With reference to FIGS. 3 and 4, it is appreciated that the contact list (306 in FIG. 3) in the user webpage (300 in FIG. 3) and the contact list (402 in FIG. 4) in the webcasting webpage (400 in FIG. 4) may contain a list of all the user fields who the webcaster has selected as a friend. That is, these user fields are selected to be displayed in the contact list until it is removed by the webcaster. If the friend is online, i.e. accessing the webcasting webpage (400 in FIG. 4), the field of the friend in the list would indicate that the friend is online)
Regarding claim 31, Sim in view of Chong teaches all the limitations of claim 30. Sim further teaches:
wherein the first user is notified when a predetermined number of other users select the second user-selectable element (Sim Figs. 1-6; [0121], There is provided a contact list 306 containing, in this case, a first contact field 310 (user webpage) displaying an avatar and the username of the webcaster and a second contact field 308 (user webpage) displaying an avatar and the username of a user who has selected to view the webcast from the main web page (200 in FIG. 1). Each of these contact fields are activators, which may be mouse action triggered or finger action triggered (in the case of touch screens), to enable the user of the user webpage 300 to engage in one-to-one user text messaging, speak to the user associated with the contact field selected via audio transmission over the Internet (i.e. teleconferencing), and engage the user associated with the contact field in video conferencing; [0123], There is also provided a common chat window 312 for text messaging between all the users in the contact list 306. The chat window 312 is visible to all the webcaster and user(s) who access the user webpage 300; [0126], There is also provided an `invite friends` control button 336 for inviting users to view the webcast. A dialogue box (not shown in the Figures) would pop up after selecting the button 336. The dialogue box has options for the user to invite/alert friends to view the webcast via email, text messaging, short messaging service (SMS), or the like; [0127], FIG. 4 shows an illustration of a webcasting webpage 400 of an existing webcast that is displayed after a webcast viewer, which is a Level 3 user in the example embodiment, login to the web portal and selected one of the webcast entries from the first window 202 on the main web page (200 in FIG. 2). 
The webcasting webpage 400 includes a specific location field 416 showing, for example, the venue and address of the webcast; a geographical location field 420 showing, for example, the city and country; a webcast title field 422 showing the webcast title; a display window 404 for displaying the webcast; and an online contact list 402 containing the avatars and username of all the users who have accessed the webcasting webpage 400; [0130-0131], if communication is enabled for all users in the contact list 402, one of the other two users in the contact list 402 can communicate with the user associated with the third contact field 406 by selecting the third contact field 406 in the contact list 402 and selecting the means of communication. The users can communicate with each other by, for instance, text messaging all the users in a common chat window 408 viewable by all the users, one to one text messaging, hold a teleconference (audio transmission only) and/or engage in video conference (video and audio transmission) with selected users in the contact list 402; [0132], FIG. 4 shows that the first user (webcaster) has selected an option to communicate with the second user in a video conference and the second user has approved it. After the option is selected, a second display window 410 displaying the webcam view of the second user appears; [0163], With reference to FIGS. 3 and 4, it is appreciated that the contact list (306 in FIG. 3) in the user webpage (300 in FIG. 3) and the contact list (402 in FIG. 4) in the webcasting webpage (400 in FIG. 4) may contain a list of all the user fields who the webcaster has selected as a friend. That is, these user fields are selected to be displayed in the contact list until it is removed by the webcaster. If the friend is online, i.e. accessing the webcasting webpage (400 in FIG. 4), the field of the friend in the list would indicate that the friend is online)
Regarding claim 32, Sim in view of Chong teaches all the limitations of claim 23. Sim further teaches:
further comprising simultaneously displaying, on a display of the first electronic device, (1) a first profile image associated with the first user, (2) a first name associated with the first user, and (3) a list of submissions of the first user (Sim Figs. 1-6; [0113], There is further provided a Webcast Title Entry text box 332 for the webcaster to enter the title of the webcast; [0121], There is provided a contact list 306 containing, in this case, a first contact field 310 (user webpage) displaying an avatar and the username of the webcaster and a second contact field 308 (user webpage) displaying an avatar and the username of a user who has selected to view the webcast from the main web page (200 in FIG. 1). Each of these contact fields are activators, which may be mouse action triggered or finger action triggered (in the case of touch screens), to enable the user of the user webpage 300 to engage in one-to-one user text messaging, speak to the user associated with the contact field selected via audio transmission over the Internet (i.e. teleconferencing), and engage the user associated with the contact field in video conferencing; [0123], There is also provided a common chat window 312 for text messaging between all the users in the contact list 306. The chat window 312 is visible to all the webcaster and user(s) who access the user webpage 300; [0124], There is further provided a webpage listing window 322, a page up button 324 and a page down button 326 for scrolling entries page by page in the webpage listing window 322. The webpage listing lists web pages of recorded webcast(s) and current live webcast(s); [0152], Referring back to FIGS. 3 and 4, the user webpage 300 may further include one or more editing tools (not shown in the Figures) for registered users to set up and maintain a personal webpage, i.e. a user blog. 
Examples of such editing tools include text editing interfaces, picture/video/audio file uploading interfaces, blog preview and publishing interfaces, and the like. In this case, blog entries i.e. text, pictures, video file or audio file postings would be added and edited at the user webpage 300. Other users can access these blog entries from the webcasting webpage 400. Also, each recorded webcast may be made available in chronological order for viewing by users accessing the webcasting webpage 400; [0134], The webcasting webpage 400 further includes a scrollable list (not shown in the Figures) containing the web pages of all the recorded webcast and current live webcast, i.e. the works, authored by a specific webcaster. The entries in the list are selectable and upon selecting an entry, the webcasting webpage 400 associated with the selected entry would appear in a separate Internet browser window or a separate Internet browser tab)
Regarding claim 33, Sim in view of Chong teaches all the limitations of claim 32. Sim further teaches:
further comprising simultaneously displaying, on the display of the second electronic device, (1) a second profile image associated with the second user, (2) a second name associated with the second user, and (3) a list of submissions of the second user (Sim Figs. 1-6; [0127], FIG. 4 shows an illustration of a webcasting webpage 400 of an existing webcast that is displayed after a webcast viewer, which is a Level 3 user in the example embodiment, login to the web portal and selected one of the webcast entries from the first window 202 on the main web page (200 in FIG. 2). The webcasting webpage 400 includes a specific location field 416 showing, for example, the venue and address of the webcast; a geographical location field 420 showing, for example, the city and country; a webcast title field 422 showing the webcast title; a display window 404 for displaying the webcast; and an online contact list 402 containing the avatars and username of all the users who have accessed the webcasting webpage 400; [0130-0131], if communication is enabled for all users in the contact list 402, one of the other two users in the contact list 402 can communicate with the user associated with the third contact field 406 by selecting the third contact field 406 in the contact list 402 and selecting the means of communication. The users can communicate with each other by, for instance, text messaging all the users in a common chat window 408 viewable by all the users, one to one text messaging, hold a teleconference (audio transmission only) and/or engage in video conference (video and audio transmission) with selected users in the contact list 402; [0132], FIG. 4 shows that the first user (webcaster) has selected an option to communicate with the second user in a video conference and the second user has approved it. After the option is selected, a second display window 410 displaying the webcam view of the second user appears; [0152], Referring back to FIGS. 
3 and 4, the user webpage 300 may further include one or more editing tools (not shown in the Figures) for registered users to set up and maintain a personal webpage, i.e. a user blog. Examples of such editing tools include text editing interfaces, picture/video/audio file uploading interfaces, blog preview and publishing interfaces, and the like. In this case, blog entries i.e. text, pictures, video file or audio file postings would be added and edited at the user webpage 300. Other users can access these blog entries from the webcasting webpage 400. Also, each recorded webcast may be made available in chronological order for viewing by users accessing the webcasting webpage 400; [0134], The webcasting webpage 400 further includes a scrollable list (not shown in the Figures) containing the web pages of all the recorded webcast and current live webcast, i.e. the works, authored by a specific webcaster. The entries in the list are selectable and upon selecting an entry, the webcasting webpage 400 associated with the selected entry would appear in a separate Internet browser window or a separate Internet browser tab)
Regarding claim 34, Sim in view of Chong teaches all the limitations of claim 23. Sim further teaches:
further comprising: analyzing the at least a portion of the first video data; and creating a first user appearance model of the first user (Sim Figs. 1-6; [0102], there is provided one or more avatars (e.g. 412 in FIG. 4) selectable for use to identify each user accessing the web portal via any one of the first communication unit (114 in FIG. 1) or N other communication units (116 in FIG. 1) upon user access at the web portal. For registered users (Level 1 to 4 users), they are either required to select an avatar or are assigned an avatar to represent their virtual presence. During registration, they may be prompted by the system through message boxes (not shown in the Figures) to select one of the pre-stored avatars on the web portal to represent his or her presence in the web portal. The assigned avatar may be one of the pre-stored avatars. Unregistered users (Level 5 users) can also access the web portal with a temporary avatar, which exists only during the period the user remains connected to the web portal. If a Level 5 user who has not login to the web portal via the login interface (206 in FIG. 2) selects one of the existing webcast entries on the main web page (200 in FIG. 2), a message box would appear to request the user to select an online avatar and to give a name for it. In the example embodiment, all users have the option to name and rename the avatars. Data relating to the pre-stored avatar are stored in the database (106 in FIG. 1) and processed by the server (108 in FIG. 1) for displaying on the web pages of the web portal. The avatar may be an animation (e.g. in .gif file format) or static picture (e.g. in .gif, .jpg, .tiff, .png or .bmp formats) that is represented in two or three dimensions. 
Users may have the option to change the avatar and upload their own picture or avatar if they decide not to use one of the pre-stored avatars; [0105], selecting the camera connection, the video captured by the selected camera connection will be displayed in a webcast display window 304; [0118], If the sound input is not already digitised, it would be converted to a digital format that is suitable for applying a speech-to-text algorithm to determine and generate the words and/or characters in the sound input for displaying in the display 304 (and 404 in FIG. 4) of the webcast; [0119], to prevent the subtitles from appearing too soon or too late at the display (404 in FIG. 4) in the webcasting webpage (400 in FIG. 4), which is viewed across the Internet, timing mechanisms may be implemented to ensure that the video displayed at the webcasting webpage (400 in FIG. 4) is displayed in synchronization with the rate at which the subtitles are appearing in the video displayed at the user webpage 300; [0121], There is provided a contact list 306 containing, in this case, a first contact field 310 (user webpage) displaying an avatar and the username of the webcaster and a second contact field 308 (user webpage) displaying an avatar and the username of a user who has selected to view the webcast from the main web page (200 in FIG. 1); Each of these contact fields are activators, which may be mouse action triggered or finger action triggered (in the case of touch screens), to enable the user of the user webpage 300 to engage in one-to-one user text messaging, speak to the user associated with the contact field selected via audio transmission over the Internet (i.e. 
teleconferencing), and engage the user associated with the contact field in video conferencing; [0127], a display window 404 for displaying the webcast; [0131], Users who have any networked image capture apparatus or a video conferencing apparatus at their respective locations can carry out video conferencing with other users who have accessed the webcasting webpage 400; [0132], a second display window 410 displaying the webcam view of the second user appears. The second display window 410 can be resized and dragged around the screen display of the webcasting webpage 400 by a mouse cursor or finger contact (if touch screen is being used). In the example embodiment, only the user who has selected to hold a video conference with the second user can see the second display window 410. However, it is appreciated that in other example embodiments, the second display window 410 may be visible to more users when other users in the contact list 406 selects to engage the second user in a video conference and the second user approves it; [0135], control interfaces allows system administrators to edit or delete offensive, irrelevant or outdated video, audio, picture or textual data that is streamed or uploaded to the web portal. Advantageously, this allows the system administrator (Level 1 user) to act as a moderator to censor or filter out offensive text/video/audio data;)
Regarding claim 35, Sim in view of Chong teaches all the limitations of claim 34. Sim further teaches:
wherein the first user appearance model includes a first user face model (Sim Figs. 1-6; [0102], there is provided one or more avatars (e.g. 412 in FIG. 4) selectable for use to identify each user accessing the web portal via any one of the first communication unit (114 in FIG. 1) or N other communication units (116 in FIG. 1) upon user access at the web portal. For registered users (Level 1 to 4 users), they are either required to select an avatar or are assigned an avatar to represent their virtual presence. During registration, they may be prompted by the system through message boxes (not shown in the Figures) to select one of the pre-stored avatars on the web portal to represent his or her presence in the web portal. The assigned avatar may be one of the pre-stored avatars. Unregistered users (Level 5 users) can also access the web portal with a temporary avatar, which exists only during the period the user remains connected to the web portal. If a Level 5 user who has not login to the web portal via the login interface (206 in FIG. 2) selects one of the existing webcast entries on the main web page (200 in FIG. 2), a message box would appear to request the user to select an online avatar and to give a name for it. In the example embodiment, all users have the option to name and rename the avatars. Data relating to the pre-stored avatar are stored in the database (106 in FIG. 1) and processed by the server (108 in FIG. 1) for displaying on the web pages of the web portal. The avatar may be an animation (e.g. in .gif file format) or static picture (e.g. in .gif, .jpg, .tiff, .png or .bmp formats) that is represented in two or three dimensions. 
Users may have the option to change the avatar and upload their own picture or avatar if they decide not to use one of the pre-stored avatars; [0121], There is provided a contact list 306 containing, in this case, a first contact field 310 (user webpage) displaying an avatar and the username of the webcaster and a second contact field 308 (user webpage) displaying an avatar and the username of a user who has selected to view the webcast from the main web page (200 in FIG. 1); Each of these contact fields are activators, which may be mouse action triggered or finger action triggered (in the case of touch screens), to enable the user of the user webpage 300 to engage in one-to-one user text messaging, speak to the user associated with the contact field selected via audio transmission over the Internet (i.e. teleconferencing), and engage the user associated with the contact field in video conferencing; [0130], The online contact list 402 is visible to all the users (Levels 1 to 4 only) in the contact list 402; [0131], Users who have any networked image capture apparatus or a video conferencing apparatus at their respective locations can carry out video conferencing with other users who have accessed the webcasting webpage 400; [0132], a second display window 410 displaying the webcam view of the second user appears. The second display window 410 can be resized and dragged around the screen display of the webcasting webpage 400 by a mouse cursor or finger contact (if touch screen is being used). In the example embodiment, only the user who has selected to hold a video conference with the second user can see the second display window 410. However, it is appreciated that in other example embodiments, the second display window 410 may be visible to more users when other users in the contact list 406 selects to engage the second user in a video conference and the second user approves it)
Regarding claim 41, Sim in view of Chong teaches all the limitations of claim 40. Sim further teaches:
further comprising displaying, on the first electronic device, a user-selectable accept element, wherein selecting the user-selectable accept element causes the generated data stream to be transmitted to the server (Sim Figs. 1-6; [0047], more than one camera is included in the system for capturing the first location on video for the webcast; [0056], During operation, the control unit 104 streams the captured video in the camera views of the first and second cameras 110 and 118 respectively to the server 108. The server 108 in turn streams the captured sound and moving images to the first communication unit 114 and the N communication units 116 via the Internet 112. Depending on user request, the control unit 104 may stream only one video captured by one of the first and second cameras 110 and 118, or stream both videos captured by both cameras 110 and 118. The streaming can be done in real time; [0105], If the registered user or also herein known as the webcaster selects the `create webcast` button 302, the relevant software drivers would run in the background on the control unit (104 in FIG. 1) to search for the presence of cameras connected to it; [0110], After selecting the camera view to be displayed in the display window 304 and deciding whether to stream one or both videos captured by the first and second cameras (110 and 118 in FIG. 1), the selected videos would be streamed from the control unit (104 in FIG. 1) to the server (108 in FIG. 1); [0111], The webcast control button 320 starts and stops the streaming of the webcast from the server; When the webcast control button 320 is selected in a second instance, i.e. 
signifying `stop`, the streaming of the webcast would end, and a dialogue box (not shown in the Figures) would appear to prompt the webcaster whether to delete the recording made so far; [0112], A Select Camera View button 338 is provided for the webcaster to switch between camera views captured by different cameras; [0138], lastly, the webcaster selects `start` using the start/stop control button 320. Selecting `start` also indicates the webcaster's status as available for communication. Assuming the webcaster selected the streaming of videos captured by both cameras, when the webcast is started, the captured videos of the cameras 110 and 118 are streamed from the cameras 110 and 118 to the control unit 104. The control unit 104 in turn streams the captured videos to the server 108 hosting the web portal. The server 108 then streams the captured videos to any user communication unit (e.g. one of the N communication units 116) accessing the webcasting page 400; [0165], one or more control interfaces may be provided for re-recording a recorded webcast so as to incorporate or change the subtitles, sound effects, sound or music played in the recorded webcast;);
Chong further teaches:
the generated data file to be transmitted to the server (Chong Figs. 1-12; [0020], a user is able to record video of the background from the rear camera and also record a simultaneous narration from the front camera using a single mobile device. The narration can be combined as a live overlay of the user over the background image or video. This function can be provided using two separate image or video processors or a single controller can take in the two video streams and merge them in software or a special hardware image processing or graphics processing module; [0024-0028], In the example of FIG. 7, the two images are positioned beside each other; [0032], Any one or more of the views of FIGS. 1-7 can be sent by the mobile device to local memory, to remote memory, or to another user; The call may also be a session with a remote server from which data may be relayed to one or more other users; [0076], The overlay combination block 1219 is coupled to receive the same first two inputs as the overlay multiplexer, the primary camera and the secondary camera direct input as a raw or other type of image file. The overlay combination block takes these two image files and combines them based on commands received from the controller or based on an automated or fixed routine. In one example, as shown in FIG. 3, the overlay combination block takes the pixels of an image from the secondary camera and uses them to replace a portion of the pixels of an image from the primary camera)
Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Sim in view of Chong in further view of Eim et al. (US 20180349019 A1, published 12/06/2018), hereinafter Eim.
Regarding claim 28, Sim in view of Chong teaches all the limitations of claim 27. However, Sim in view of Chong fails to expressly disclose wherein the first user-selectable element is a play button, wherein selecting the play button causes at least a portion of the first visual video clip, the second visual video clip, or a combination thereof, to play. In the same field of endeavor, Eim teaches:
wherein the first user-selectable element is a play button, wherein selecting the play button causes at least a portion of the first visual video clip, the second visual video clip, or a combination thereof, to play (Eim Figs. 1-25; [0225-0226], Referring to FIG. 10, if an SNS account of a prescribed user is accessed, posts registered at the SNS account can be outputted. In doing so, if the post includes a video, the controller 180 can control a preview image of the video included in the post and at least one button, which is provided to trigger a video play, to be outputted; [0227], For instance, FIG. 10 shows that a play button 1020 is outputted by overlaying a preview image 1010; [0228], If a user input of touching the play button 1020 is received, the controller 180 can play the video)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated wherein the first user-selectable element is a play button, wherein selecting the play button causes at least a portion of the first visual video clip, the second visual video clip, or a combination thereof, to play as suggested in Eim into Sim in view of Chong. Doing so would be desirable because there are ongoing efforts to support and increase the functionality of mobile terminals. Such efforts include software and hardware improvements, as well as changes and improvements in the structural components which form the mobile terminal (see Eim [0004]). Recently, many ongoing efforts are made to research and develop cameras capable of a multi-view photographing through a plurality of cameras. For instance, if a plurality of cameras are combined together, it is able to photograph a multi-view image having the coverage of 360°. In case of a multi-view image photographed through a plurality of cameras, a different user experience can be provided depending on which part of a video is viewed (see Eim [0005]). It is necessary to consider a method of providing similar experiences to users who view a multi-view image (see Eim [0006]). The present invention is directed to a mobile terminal and controlling method thereof that substantially obviate one or more of the problems due to limitations and disadvantages of the related art (see Eim [0007]). An object of the present invention is to provide a mobile terminal and controlling method thereof, by which user convenience can be enhanced (see Eim [0008]).
Claim 36 is rejected under 35 U.S.C. 103 as being unpatentable over Sim in view of Chong in further view of Lee et al. (US 20110148864 A1, published 06/23/2011), hereinafter Lee.
Regarding claim 36, Sim in view of Chong teaches all the limitations of claim 34. However, Sim in view of Chong fails to expressly disclose further comprising creating a second user appearance model based at least on the first user appearance model and a weight associated with the first user. In the same field of endeavor, Lee teaches:
further comprising creating a second user appearance model based at least on the first user appearance model and a weight associated with the first user (Lee Figs. 1-4; [0054], The method of creating 3D avatars will now be described in more detail with reference to FIG. 3. When the standard 3D face and body geometries 25 and 26 are prepared on the basis of the BRDF data 16, the subsurface scattering data 17, the face photographs 12 and the user information 13 for a user, the avatar creation unit 30 creates a 3D avatar resembling the user; [0061], First, the user body information acquisition unit 10 receives two sheets of user photographs from a user at step S10. The received user photographs 12 include front and side photographs of the face of the user. The received user photographs are stored in the face photograph DB 20; [0062], Thereafter, the user body information acquisition unit 10 receives the user information 13 from the user at step S20. The user information includes the user's height and weight. The avatar creation unit 30 searches the 3D humanoid DB 23 for a standard 3D face geometry 25 and a standard 3D body geometry 26 for a 3D avatar on the basis of the received user information (i.e., body information (i.e., height and weight); [0064], Once the standard 3D face and body geometries 25 and 26 are prepared on the basis of the BRDF data 16, the subsurface scattering data 17, the face photographs 12 and the user information 13 for the user, the avatar creation unit 30 starts to create a 3D avatar which resembles the user. [0066], Thereafter, the avatar creation unit 30 scales up or down the standard 3D body data 26, selected on the basis of the user information, in accordance with the user's height and weight. 
Thereafter, the avatar creation unit 30 generates the 3D face data 81 for the user by modifying the standard 3D face data 25 at step S70; [0067], Thereafter, the avatar creation unit 30 creates a user avatar by combining the 3D body data 82 and the 3D face data 81, which have been modified on the basis of the user, at step S80)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated further comprising creating a second user appearance model based at least on the first user appearance model and a weight associated with the first user as suggested in Lee into Sim in view of Chong. Doing so would be desirable because the present invention relates generally to a method and apparatus for creating a three-dimensional (3D) avatar, and, more particularly, to a method and apparatus for creating a 3D avatar, which are capable of more easily and quickly creating high-quality 3D avatars used in 3D content (see Lee [0003]). Since the term "avatar" originally means a 2D or 3D character representing a user, most of the users want their avatars to resemble them. It is, however, difficult to create an avatar resembling the user, or a high-quality 3D avatar, by performing such a combination (see Lee [0006]). Furthermore, high-quality 3D characters have been used even when video content, such as an existing movie, is created. For this purpose, a character resembling a real human has been generated using the latest in 3D graphics technology. A tremendous cost and time expenditure is, however, required to generate such a character. There is a demand for a method of easily and quickly creating a 3D avatar, which is different from existing methods, in order for a user to produce content that uses a high-quality avatar which resembles the user (see Lee [0007]). The present invention is intended to create a high-quality 3D avatar, which resembles oneself, unlike the low-quality 3D avatars that are being used currently (see Lee [0008]). There is a need for two elements so that a user receives the impression that a 3D avatar resembles the user. The first element is that the face of the 3D avatar resemble the user himself or herself. 
For this purpose, the use of a photograph of the user himself or herself is insufficient and the face geometry data of a 3D avatar must resemble the user. The second element is the bodily shape of the user (see Lee [0009]).
Claims 37-39 are rejected under 35 U.S.C. 103 as being unpatentable over Sim in view of Chong in further view of The Smart Scale for Geeks - SahmReviews.com (https://web.archive.org/web/20150317043654/http://www.sahmreviews.com:80/2014/06/weight-gurus-bathroom-scale.html, archived by the Wayback Machine on 03/17/2015), hereinafter Smart Scale.
Regarding claim 37, Sim in view of Chong teaches all the limitations of claim 23. Chong further teaches:
wherein the second visual video clip is of at least a second portion of the first user of the first electronic device, the first portion of the first user includes a face of the first user and the second portion of the first user includes (Chong Figs. 1-12; [0020], a user is able to record video of the background from the rear camera and also record a simultaneous narration from the front camera using a single mobile device. The narration can be combined as a live overlay of the user over the background image or video. This function can be provided using two separate image or video processors or a single controller can take in the two video streams and merge them in software or a special hardware image processing or graphics processing module; [0024], In FIG. 3, the mobile display is configured to combine the image of the trees from the rear camera with the image of the user from the front camera; [0027], In FIG. 6, the images are reversed so that the image of the trees 103 now fills the inset; [0028], In the example of FIG. 7, the two images are positioned beside each other; [0032], Any one or more of the views of FIGS. 1-7 can be sent by the mobile device to local memory, to remote memory, or to another user; The call may also be a session with a remote server from which data may be relayed to one or more other users; [0035], The exchange of messages, images, video, and audio may continue throughout the course of the call or session)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated wherein the second visual video clip is of at least a second portion of the first user of the first electronic device, the first portion of the first user includes a face of the first user and the second portion of the first user includes as suggested in Chong into Sim. Doing so would be desirable because many mobile devices are equipped with both a front camera with a view of the user and a rear camera with a view of the background or what lies in front of the user (see Chong [0002]). When used as a video telephone, the display shows a video of the remote party and, in some cases, a video also of the user superimposed on the screen. However, these devices do not permit the display to show captured images from both cameras on the same display at the same time. They also do not permit images captured from the two cameras at the same time to be transmitted to another mobile device. This limits the usefulness of the two cameras on the mobile device (see Chong [0004]). Many different types of mobile devices offer or may be adapted to offer multiple cameras with different views (see Chong [0031]). Additionally, the system of Chong would improve the system of Sim by enabling viewers to watch both streams of video generated by the first user simultaneously, without the need to press a button to switch the view. The file of combined streams would save the viewers time and better enable the viewers to watch desired multi-camera content.
However, Sim in view of Chong fails to expressly disclose a foot of the first user. In the same field of endeavor, Smart Scale teaches:
a foot of the first user (p. 2, First let me tell you about the scale itself. It’s square with a nice stepping platform. It’s black with a very low-profile; extremely sleek; see images on p. 2, 5, 6)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated a foot of the first user as suggested in Smart Scale into Sim in view of Chong. Doing so would be desirable because "having things sync with my phone is pretty cool" (see Smart Scale p. 2). Additionally, Smart Scale clarifies new uses for Chong’s background video from the rear camera and simultaneous narration from the front camera (see Chong [0020]), thereby increasing the usefulness of the systems of Sim and Chong by providing additional desired uses of the cameras. Providing additional contexts and capabilities for capturing multi-camera content, such as pointing the rear camera down while still providing a video of the user’s face from the front camera, increases the versatility and desirability of the system. Enabling the users to capture any desired content at any angle in any location would further drive adoption of the system by new users that want to capture different types of content beyond those described in Sim and Chong.
Regarding claim 38, Sim in view of Chong in further view of Smart Scale teaches all the limitations of claim 37. Chong further teaches:
wherein the second visual video clip further includes (Chong Figs. 1-12; [0020], a user is able to record video of the background from the rear camera and also record a simultaneous narration from the front camera using a single mobile device. The narration can be combined as a live overlay of the user over the background image or video. This function can be provided using two separate image or video processors or a single controller can take in the two video streams and merge them in software or a special hardware image processing or graphics processing module; [0024], In FIG. 3, the mobile display is configured to combine the image of the trees from the rear camera with the image of the user from the front camera; [0027], In FIG. 6, the images are reversed so that the image of the trees 103 now fills the inset; [0028], In the example of FIG. 7, the two images are positioned beside each other; [0032], Any one or more of the views of FIGS. 1-7 can be sent by the mobile device to local memory, to remote memory, or to another user; The call may also be a session with a remote server from which data may be relayed to one or more other users; [0035], The exchange of messages, images, video, and audio may continue throughout the course of the call or session)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated wherein the second visual video clip further includes as suggested in Chong into Sim. Doing so would be desirable because many mobile devices are equipped with both a front camera with a view of the user and a rear camera with a view of the background or what lies in front of the user (see Chong [0002]). When used as a video telephone, the display shows a video of the remote party and, in some cases, a video also of the user superimposed on the screen. However, these devices do not permit the display to show captured images from both cameras on the same display at the same time. They also do not permit images captured from the two cameras at the same time to be transmitted to another mobile device. This limits the usefulness of the two cameras on the mobile device (see Chong [0004]). Many different types of mobile devices offer or may be adapted to offer multiple cameras with different views (see Chong [0031]). Additionally, the system of Chong would improve the system of Sim by enabling viewers to watch both streams of video generated by the first user simultaneously, without the need to press a button to switch the view. The file of combined streams would save the viewers time and better enable the viewers to watch desired multi-camera content.
Smart Scale further teaches:
at least a portion of a display device (p. 2, First let me tell you about the scale itself. It’s square with a nice stepping platform. It’s black with a very low-profile; extremely sleek; see images on p. 2, 5, 6)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated at least a portion of a display device, as suggested in Smart Scale, into Sim in view of Chong. Doing so would be desirable because "having things sync with my phone is pretty cool" (see Smart Scale p. 2). Additionally, Smart Scale clarifies new uses for Chong's background video from the rear camera and simultaneous narration from the front camera (see Chong [0020]), thereby increasing the usefulness of the systems of Sim and Chong by providing additional desired uses of the cameras. Providing additional contexts and capabilities for capturing multi-camera content, such as pointing the rear camera down while still providing a video of the user's face from the front camera, increases the versatility and desirability of the system. Enabling the users to capture any desired content at any angle in any location would further drive adoption of the system by new users that want to capture different types of content beyond those described in Sim and Chong.
Regarding claim 39, Sim in view of Chong in further view of Smart Scale teaches all the limitations of claim 38. Smart Scale further teaches:
wherein the display device is built into a scale and is configured to display a weight of the first user thereon (p. 2, First let me tell you about the scale itself. It’s square with a nice stepping platform. It’s black with a very low-profile; extremely sleek; see images on p. 2, 5, 6)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated wherein the display device is built into a scale and is configured to display a weight of the first user thereon, as suggested in Smart Scale, into Sim in view of Chong. Doing so would be desirable because "having things sync with my phone is pretty cool" (see Smart Scale p. 2). Additionally, Smart Scale clarifies new uses for Chong's background video from the rear camera and simultaneous narration from the front camera (see Chong [0020]), thereby increasing the usefulness of the systems of Sim and Chong by providing additional desired uses of the cameras. Providing additional contexts and capabilities for capturing multi-camera content, such as pointing the rear camera down while still providing a video of the user's face from the front camera, increases the versatility and desirability of the system. Enabling the users to capture any desired content at any angle in any location would further drive adoption of the system by new users that want to capture different types of content beyond those described in Sim and Chong.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Ahmed (US 20150172238 A1); see Figs. 1-29 and [0077]-[0079], [0110].
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN T REPSHER III whose telephone number is (571)272-7487. The examiner can normally be reached Monday - Friday, 8AM-5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHN T REPSHER III/ Primary Examiner, Art Unit 2143
/JENNIFER N WELCH/ Supervisory Patent Examiner, Art Unit 2143