DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 10,446,188 by D’Autremont in view of US 2013/0239063 by Ubillos et al.
Regarding claim 1, D’Autremont discloses an electronic device for sharing content using dummy media, the electronic device comprising:
an editing UI display unit configured to provide an editing UI including a dummy media creation UI to a display device (col. 5 lines 6-9 teaches “The computer system 100 may also include one or more network interfaces 108, through which the computer system can receive data and media files from remote sources via a network. Further still, the computer system 100 may include a display 110 (e.g., a monitor) and input(s) 112 (e.g., a keyboard, mouse, etc.) for interacting with users.”, col. 6 lines 11-25 teaches “FIG. 8 shows an example that relates a source file to a target file with respect to an insert edit that replaces various video and audio channel 2 frames of a target multimedia file with various video and audio channel 2 frames of a source multimedia file. The resultant edited target file is shown at the bottom of FIG. 8, where two selected video frames and two audio channel 2 frames from the source file have replaced two selected video and audio channel 2 frames in the original target file. FIG. 9 shows an NLE representation of the insert edit shown by FIG. 8, where the subject files are broken down into their respective video, audio channel 1, and audio channel 2 tracks, each shown in correspondence with the SMPTE time codes. FIG. 10 shows an example user interface for the edits of FIGS. 8 and 9.”);
a user input checking unit configured to check user input information received through the dummy media creation UI (in addition to discussion above, col. 7 line 50-col. 8 line 10 teaches “At step 206, the processor checks whether track Kt exists based on the information extracted in step 202. The reason for this check is that the input from the user about which track should be modified and that track's source is checked against the actual parsed data of the file to ensure that the desired track with the correct characteristics for the actual input data stream actually exists in the target file ...”); and
an editing UI processing unit configured to create at least one dummy media according to input of the dummy media creation UI based on the user input information checked by the user input checking unit and to perform processing a video project including the dummy media according to a request (in addition to discussion above, col. 7 line 50-col. 8 line 10 teaches “At step 208, the processor determines whether the source data and stream/track Ks are compatible with target stream/track Kt by comparing the data extracted at steps 200 and 202. As a first part of this compatibility check, the processor checks to see if track Kt is the track that was targeted for replacement by track Ks ...”).
D’Autremont fails to disclose to perform processing to share a video project according to a request.
Ubillos et al. discloses to perform processing to share a video project according to a request (paragraph 0176 teaches “When the user navigates to the photos tab, some embodiments display a similar set of shelves as for the other tabs, but display individual image thumbnails instead of collections. These shelves include a thumbnail for each image imported into the image viewing application, which enable the user to navigate directly to a specific image (e.g., to edit the image, share the image, etc.)”, paragraph 0447 teaches “With the GUI in state 6605, the user can add one or more images to the image display area. These may replace the currently selected image (e.g., if the user double taps on an unselected image to display all images similar to that image), or be added alongside the currently selected image (e.g., if the user presses and holds down over a thumbnail. When the user performs one of these actions to add one or more images to the preview display area, the GUI transitions to state 6610 and displays the multiple selected images in the image display area and displays secondary selection indicators on the corresponding thumbnail.”, paragraph 0509-0510 teaches “As briefly mentioned in the previous section, some embodiments allow the user to share images directly from the image editing, viewing, and organizing application by uploading the images to a social media or photo sharing website through the user's account on the site. When the user requests to share the image, the application instructs the device on which it operates to connect to the website (e.g., through the Internet), then automatically uploads it to the user's account on the website. In some embodiments, the application additionally identifies when other users of the website have commented on the image, and displays these comments to the user of the application.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the ability to perform processing to share a video project according to a request, as taught by Ubillos et al., into the system of D’Autremont, because such incorporation would allow for the benefit of sharing a video for a user to view, thereby increasing user accessibility of the system.
Regarding claim 2, the electronic device wherein the editing UI processing unit is configured to: detect individual media information as information on at least one media included in a video project and create dummy media information based on the individual media information, and perform processing to share the video project by including the dummy media information in the video project (in addition to discussion above, D’Autremont, col. 8 lines 17-31 teaches “The information used for the compatibility check may include but not be limited to: For Audio—sample rate, sample bit rate, track type (e.g., mono, stereo, interleaved) For Video—resolution, pixel format, pixel bit depth, pixel aspect ratio, SMPTE color range, transfer characteristics, frame type (progressive or interlaced), field dominance for interlaced material, track type (e.g., single, stereo 3D, etc.), track duration, delay values, editing ratio (aka frame rate) For Timecode—Track type, standard, track duration For Closed Captions—Standard, packet type, track duration. For other ancillary data—Packet type and standard referenced, track duration”; Ubillos et al., paragraph 0176, 0447, 0509-0510 as discussed above).
The motivation for combining the references has been discussed in the independent claim above.
Regarding claim 3, the electronic device wherein the individual media information comprises at least one of a type of media, resolution of media or time information of media (in addition to discussion above, D’Autremont, col. 8 lines 17-31 teaches “The information used for the compatibility check may include but not be limited to: For Audio—sample rate, sample bit rate, track type (e.g., mono, stereo, interleaved) For Video—resolution, pixel format, pixel bit depth, pixel aspect ratio, SMPTE color range, transfer characteristics, frame type (progressive or interlaced), field dominance for interlaced material, track type (e.g., single, stereo 3D, etc.), track duration, delay values, editing ratio (aka frame rate) For Timecode—Track type, standard, track duration For Closed Captions—Standard, packet type, track duration. For other ancillary data—Packet type and standard referenced, track duration”; Ubillos et al., paragraph 0176, 0447, 0509-0510 as discussed above).
The motivation for combining the references has been discussed in the independent claim above.
Regarding claim 4, the electronic device wherein the editing UI processing unit generates respectively dummy media information for all media included in the video project or generates dummy media information for selected media among at least one media included in the video project (in addition to discussion above, D’Autremont, col. 8 lines 17-31; Ubillos et al., paragraph 0176, 0447, 0509-0510 as discussed above).
The motivation for combining the references has been discussed in the independent claim above.
Regarding claim 5, the electronic device wherein the editing UI comprises a dummy media replacement UI configured to check whether at least one dummy media exists in the video project and to determine alternative media corresponding to the dummy media (in addition to discussion above, D’Autremont, col. 7 line 50-col. 8 line 10 teaches “At step 206, the processor checks whether track Kt exists based on the information extracted in step 202. The reason for this check is that the input from the user about which track should be modified and that track's source is checked against the actual parsed data of the file to ensure that the desired track with the correct characteristics for the actual input data stream actually exists in the target file ...”; Ubillos et al., paragraph 0176, 0447, 0509-0510 as discussed above).
The motivation for combining the references has been discussed in the independent claim above.
Regarding claim 6, the electronic device wherein the editing UI processing unit comprises a dummy media processing unit configured to check whether at least one dummy media exists in the video project based on the dummy media information in response to input of a shared video project, to determine alternative media corresponding to the dummy media according to a request in response to existence of the dummy media, and to replace the dummy media with the alternative media in the video project (D’Autremont and Ubillos et al., as discussed above).
The motivation for combining the references has been discussed in the independent claim above.
Regarding claim 7, the electronic device wherein the dummy media processing unit is configured to compare a type of the dummy media, resolution of the dummy media and time information of the dummy media with a type of the alternative media, resolution of the alternative media and time information of the alternative media (in addition to discussion above, D’Autremont, col. 7 line 50-col. 8 line 10 teaches “At step 208, the processor determines whether the source data and stream/track Ks are compatible with target stream/track Kt by comparing the data extracted at steps 200 and 202. As a first part of this compatibility check, the processor checks to see if track Kt is the track that was targeted for replacement by track Ks ...”, col. 8 lines 17-31 teaches “The information used for the compatibility check may include but not be limited to: For Audio—sample rate, sample bit rate, track type (e.g., mono, stereo, interleaved) For Video—resolution, pixel format, pixel bit depth, pixel aspect ratio, SMPTE color range, transfer characteristics, frame type (progressive or interlaced), field dominance for interlaced material, track type (e.g., single, stereo 3D, etc.), track duration, delay values, editing ratio (aka frame rate) For Timecode—Track type, standard, track duration For Closed Captions—Standard, packet type, track duration. For other ancillary data—Packet type and standard referenced, track duration”; Ubillos et al., paragraph 0176, 0447, 0509-0510 as discussed above).
The motivation for combining the references has been discussed in the independent claim above.
Regarding claim 8, the electronic device wherein the dummy media processing unit is configured to select the alternative media in consideration of whether the type of the dummy media, the resolution of the dummy media and the time information of the dummy media match the type of the alternative media, the resolution of the alternative media and the time information of the alternative media (in addition to discussion above, D’Autremont, col. 7 line 50-col. 8 line 10 teaches “At step 208, the processor determines whether the source data and stream/track Ks are compatible with target stream/track Kt by comparing the data extracted at steps 200 and 202. As a first part of this compatibility check, the processor checks to see if track Kt is the track that was targeted for replacement by track Ks ...”, col. 8 lines 17-31 teaches “The information used for the compatibility check may include but not be limited to: For Audio—sample rate, sample bit rate, track type (e.g., mono, stereo, interleaved) For Video—resolution, pixel format, pixel bit depth, pixel aspect ratio, SMPTE color range, transfer characteristics, frame type (progressive or interlaced), field dominance for interlaced material, track type (e.g., single, stereo 3D, etc.), track duration, delay values, editing ratio (aka frame rate) For Timecode—Track type, standard, track duration For Closed Captions—Standard, packet type, track duration. For other ancillary data—Packet type and standard referenced, track duration”; Ubillos et al., paragraph 0176, 0447, 0509-0510 as discussed above).
The motivation for combining the references has been discussed in the independent claim above.
Regarding claim 9, the electronic device wherein the dummy media processing unit is configured to control the alternative media in consideration of whether the type of the dummy media, the resolution of the dummy media and the time information of the dummy media match the type of the alternative media, the resolution of the alternative media and the time information of the alternative media (in addition to discussion above, D’Autremont, col. 7 line 50-col. 8 line 10 teaches “At step 208, the processor determines whether the source data and stream/track Ks are compatible with target stream/track Kt by comparing the data extracted at steps 200 and 202. As a first part of this compatibility check, the processor checks to see if track Kt is the track that was targeted for replacement by track Ks ...”, col. 8 lines 17-31 teaches “The information used for the compatibility check may include but not be limited to: For Audio—sample rate, sample bit rate, track type (e.g., mono, stereo, interleaved) For Video—resolution, pixel format, pixel bit depth, pixel aspect ratio, SMPTE color range, transfer characteristics, frame type (progressive or interlaced), field dominance for interlaced material, track type (e.g., single, stereo 3D, etc.), track duration, delay values, editing ratio (aka frame rate) For Timecode—Track type, standard, track duration For Closed Captions—Standard, packet type, track duration. For other ancillary data—Packet type and standard referenced, track duration”; Ubillos et al., paragraph 0176, 0447, 0509-0510 as discussed above).
The motivation for combining the references has been discussed in the independent claim above.
Regarding claim 10, the electronic device wherein the dummy media processing unit is configured to: extract media capable of replacing the dummy media in consideration of the type of the dummy media, the resolution of the dummy media, the time information of the dummy media, the type of the alternative media, the resolution of the alternative media and the time information of the alternative media, providing a list of the extracted media, and determining at least one alternative media from the extracted media (in addition to discussion above, D’Autremont, col. 8 lines 17-43 teaches “The information used for the compatibility check may include but not be limited to: For Audio—sample rate, sample bit rate, track type (e.g., mono, stereo, interleaved) For Video—resolution, pixel format, pixel bit depth, pixel aspect ratio, SMPTE color range, transfer characteristics, frame type (progressive or interlaced), field dominance for interlaced material, track type (e.g., single, stereo 3D, etc.), track duration, delay values, editing ratio (aka frame rate) For Timecode—Track type, standard, track duration For Closed Captions—Standard, packet type, track duration. For other ancillary data—Packet type and standard referenced, track duration. Since from step 200, the processor knows the equivalent information about the new frame, the processor is able to compare the results of steps 200 and 202 to assess compatibility between the source and target frames. Two frames are deemed compatible when there is an exact data format and data size match for all fields required by the target file's wrapper and encoder specifications, for example for video: resolution, pixel format, bit depth; for audio: sample rate, bit rate and bit depth. The exact list of matches will vary depending on the particular wrapper and encoders required for the particular target file. In general they can be said to include at minimum those listed above in this section.”; Ubillos et al., paragraph 0176, 0447, 0509-0510 as discussed above).
The motivation for combining the references has been discussed in the independent claim above.
Claim 11 is rejected for the same reason as discussed in the corresponding claim 1 above.
Claim 12 is rejected for the same reason as discussed in the corresponding claim 2 above.
Claim 13 is rejected for the same reason as discussed in the corresponding claim 3 above.
Claim 14 is rejected for the same reason as discussed in the corresponding claim 4 above.
Claim 15 is rejected for the same reason as discussed in the corresponding claim 5 above.
Claim 16 is rejected for the same reason as discussed in the corresponding claim 6 above.
Claim 17 is rejected for the same reason as discussed in the corresponding claim 7 above.
Claim 18 is rejected for the same reason as discussed in the corresponding claim 8 above.
Claim 19 is rejected for the same reason as discussed in the corresponding claim 9 above.
Claim 20 is rejected for the same reason as discussed in the corresponding claim 1 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NIGAR CHOWDHURY whose telephone number is (571)272-8890. The examiner can normally be reached Monday-Friday 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thai Tran, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NIGAR CHOWDHURY/Primary Examiner, Art Unit 2484