Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9, 13-15 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Anvaripour et al. (US 2022/0206738) and further in view of Lin et al. (US 2023/0129509).
Regarding claims 1 and 19, Anvaripour teaches a video editing method/apparatus, comprising:
a memory (paragraphs 167-168, 177, 179-180 and 190 teaches various memory types); and
a processor coupled to the memory, the processor being configured to perform (paragraphs 167-168, 177 and 179-180 teaches a processor/computer based systems implementing the software stored on the memory), based on instructions stored in the memory, the video editing method comprising:
displaying a main menu in a video editing interface, wherein the main menu comprises a material entry control (Figs. 5 and 7 teaches a main menu, wherein material entry control is met by ability to add video content, add text, add audio, effects/carousel items, etc.);
in response to a triggering operation of a user on the material entry control, displaying a material addition control (Fig. 5, wherein button 520 and buttons in 526, and Fig. 6C, wherein button 612, meet the claimed material entry control, and each of the buttons on the UI allows for adding its own type of “material addition”), wherein the triggering operation is any of triggering operation(s) of the user on the material entry control, the triggering operation(s) being used to trigger the display of the material addition control (Figs. 5 and 6C, wherein button 520, buttons in 526 and button 612 meet the claimed material entry control and each of the buttons on the UI allows for adding its own type of “material addition”);
acquiring a material added by the user in response to a triggering operation of the user on the material addition control (Fig. 5, wherein button 520, buttons in 526 and button 612 meet the claimed material entry control and each of the buttons on the UI allows for adding its own type of “material addition”. In an example, when video clips are to be added, button 520 is used as a material addition control to add video clips. Similarly, buttons 526 allows for entry of text and audio meeting their representative “material additions”); and
displaying, in the video editing interface, a material control corresponding to the material added by the user (E.g.: Figs. 5 and 7, wherein after user inputs a first clip, additional controls are available to the user, including adding more clips, adding carousel items to the video clip, adding text, adding audio, etc. Additionally, when buttons 520, buttons in 526 and button 612 are selected additional material control are displayed as a result).
While Anvaripour teaches the claimed as discussed above, including that the video editing interface comprises a main track and secondary track(s), the main track is for displaying a control corresponding to the edited video, the secondary track(s) are for displaying material control(s) corresponding to the material(s) added into the edited video, and the material control(s) are for controlling the material(s) (Anvaripour partially teaches the secondary track for displaying the materials added into the edited video in Figs. 5, 7 and 8A-8E in the form of video clips that are part of the edit), Anvaripour fails to explicitly teach the main track as claimed.
In an analogous art, Lin teaches the missing portion, also in the graphical user interface: the video editing interface comprises a main track and secondary track(s), the main track is for displaying a control corresponding to the edited video, the secondary track(s) are for displaying material control(s) corresponding to the material(s) added into the edited video, and the material control(s) are for controlling the material(s) (paragraphs 50-55 teach a “main track” that shows the video clips added to the edit, and secondary and additional video tracks, including a reference to virtual tracks that show the edits made to the “main track”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Lin into the system of Anvaripour such that the graphical user interface also utilizes the display of additional video tracks, because such an incorporation would allow for the benefit of improving the user experience (Lin, paragraph 36).
NTCRM claim 20 is rejected for the same reasons as discussed for claim 19 above, wherein the memory storing the instructions meets the claimed non-transitory computer-readable storage medium.
Regarding claim 2, Anvaripour teaches the claimed wherein the material addition control is displayed in a form of a floating menu, the floating menu comprising a plurality of material addition items, each of the material addition items corresponding to a material addition manner (Fig. 5, wherein button 520 and buttons in 526 meet the claimed material entry control, and each of the buttons on the UI allows for adding its own type of “material addition”. The buttons are “floating” over the user interface. Each button allows for adding additional content within its respective category (e.g., second text input, second audio input, second video clip input, etc.)).
Regarding claim 3, Anvaripour teaches the claimed further comprising:
after the floating menu is displayed, closing the floating menu in response to a triggering operation of the user on a blank area or the material entry control in the video editing interface (Figs. 6A–6B, wherein, when pressed, the menu and buttons 612, 526, etc. go away).
Regarding claim 4, Anvaripour teaches the claimed wherein the acquiring the material added by the user in response to the triggering operation of the user on the material addition control comprises:
displaying a material addition panel in response to a triggering operation of the user on a material addition item in the floating menu (see Figs. 5 and 7, wherein each triggering of button 520 results in the material addition panel being displayed); and
determining the material added by the user according to an interaction operation of the user with the material addition panel (see Figs. 5 and 7, wherein each triggering of button 520 results in the material being added by the user).
Regarding claim 5, Anvaripour teaches the claimed wherein:
the material addition items correspond to multiple material addition manners of adding a sound file, adding recording, or adding a sound effect in response to the material entry control being a sound entry control (paragraphs 38, 41, 60, 132 and 139 teach that using sound elements (as one of the material entry controls) allows for input of a sound file or effect); or,
the material addition items correspond to material addition manners of adding input text and adding a subtitle in response to the material entry control being a text entry control (paragraph 38 and buttons 526 allow for text addition in the video editing).
Regarding claim 6, Anvaripour teaches the claimed wherein the material addition control is a material addition panel (Figs. 5 and 7, wherein menu presented allows for material addition of video content, text, audio, effects/carousel items, etc.).
Regarding claim 7, Anvaripour teaches the claimed wherein the material entry control is a picture-in-picture entry control or a special effect entry control (Figs. 6C-6D special effects via button 612 and corresponding UI in 6D).
Regarding claim 8, Anvaripour teaches the claimed wherein the displaying the material control corresponding to the material added by the user comprises:
displaying a thumbnail control, wherein the thumbnail control is associated with a plurality of material controls and occupies a secondary track (paragraph 126 teaches the thumbnail control shown on a track in 524 in Fig. 5; the “thumbnails are individually selectable”, and therefore the “thumbnail control” occupies a “secondary track” in 524); and
displaying the plurality of material controls associated with the thumbnail control in response to a triggering operation of the user on the thumbnail control, wherein the plurality of material controls occupy one or more secondary tracks (paragraph 126 teaches thumbnail control including “the thumbnails are individually selectable for editing/deleting”).
Regarding claim 9, Anvaripour teaches the claimed wherein:
the plurality of material controls associated with the thumbnail control are displayed in response to the triggering operation of the user on the thumbnail control, and the plurality of material controls are in an unselected state (paragraph 126 teaches thumbnail control including “the thumbnails are individually selectable for editing/deleting”); or (examiner notes the alternative aspect of this claim, which requires only one of the listed options to meet the claimed limitations. It is vague as to which limitations are presented in the alternative, and whether the remainder of the claim constitutes a single option or not. However, in applying the prior art, the examiner attempts to address both).
the displaying the plurality of material controls associated with the thumbnail control in response to the triggering operation of the user on the thumbnail control comprises:
determining a video time point corresponding to a triggering point of the user on the thumbnail control in response to the triggering operation of the user on the thumbnail control (paragraph 126 teaches thumbnail control including “the thumbnails are individually selectable for editing/deleting”; therefore, determining the video time point is met by the ability to select the thumbnail to represent the video clip);
moving a main track and the one or more secondary tracks to locate the video time point at a target position (paragraph 126 teaches thumbnail control including “the thumbnails are individually selectable for editing/deleting”; therefore, the moving of the main track and secondary track is met by the ability to select the thumbnail to represent the video clip); and
displaying, in the moved one or more secondary tracks, the plurality of material controls associated with the thumbnail control (paragraph 126 teaches thumbnail control including “the thumbnails are individually selectable for editing/deleting”; therefore, the displaying of the plurality of material controls associated with the thumbnail control is met by the ability to individually select thumbnails for editing/deleting when they are presented in the track).
Regarding claim 13, Anvaripour teaches the claimed further comprising:
hiding the main menu and displaying a material editing component in response to a triggering operation of the user on the material control or (examiner notes the alternative language) in response to displaying the material control corresponding to the material added by the user, wherein the material editing component comprises one or more material editing controls (Figs. 5, 7 and 8A-8E teaches editing content closes the initial menu and allows for the display of a secondary menu for material editing (images/effects/audio/etc.)).
Regarding claim 14, Anvaripour teaches the claimed wherein the material editing component further comprises a return control for closing the material editing component and displaying the main menu (Figs. 5-7 and 8A-8E teaches returning back to the previous menu by closing the current material editing/adding control/menus).
Regarding claim 15, Anvaripour teaches the claimed further comprising:
closing the material editing component and displaying the main menu in response to a triggering operation of the user on a blank area in the video editing interface (Figs. 5-7 and 8A-8E teaches returning back to the previous menu by closing the current material editing/adding control/menus).
Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Anvaripour et al. (US 2022/0206738) in view of Lin et al. (US 2023/0129509), and further in view of Shin (WO 2022/154387), with a publication date of 07/21/2022, using US 2024/0105232 for its translation and citations.
Regarding claim 10, Anvaripour teaches the claimed as discussed above, however fails to explicitly teach, but Shin teaches, wherein the plurality of material controls associated with the thumbnail control belong to a same type, wherein material controls of the same type are added through a same material entry control (Figs. 5-8, 11 and 20 teach wherein thumbnails can be added/edited/modified, during which the thumbnail controls belong to the same type of image or addition features having the same controls for each of the thumbnails. E.g., paragraphs 74-75 and 175 teach a new icon or an effect of flickering indicating a new thumbnail selection using a user interface).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Shin into the system of Anvaripour and Lin because such an incorporation allows for the benefit of improving the system by allowing users to identify specific people (Shin, abstract).
Regarding claim 11, Anvaripour teaches the claimed as discussed above, however fails to explicitly teach, but Shin teaches, wherein a style of the thumbnail control corresponds to the type to which the material controls associated with the thumbnail control belong, the style comprising at least one of a texture, an icon, or a character (Figs. 5-8, 11 and 20 teach wherein thumbnails can be added/edited/modified, during which the thumbnail controls belong to the same type of image or addition features having the same controls for each of the thumbnails. E.g., paragraphs 74-75 and 175 teach a new icon or an effect of flickering indicating a new thumbnail selection using a user interface).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Shin into the system of Anvaripour and Lin because such an incorporation allows for the benefit of improving the system by allowing users to identify specific people (Shin, abstract).
Regarding claim 12, Shin further teaches the claimed wherein the displaying the plurality of material controls associated with the thumbnail control in response to the triggering operation of the user on the thumbnail control comprises:
displaying the plurality of material controls associated with the thumbnail control and a material editing component and hiding the main menu in response to the triggering operation of the user on the thumbnail control, wherein the material editing component comprises one or more material editing controls (specifically, Fig. 11 teaches a set of thumbnails that are added to a track. The thumbnail controls belong to the same type of image or addition features having the same controls for each of the thumbnails. E.g., paragraphs 74-75 and 175 teach a new icon or an effect of flickering indicating a new thumbnail selection using a user interface. Further, the plurality of material controls of the thumbnail control and a material editing component (adding, subtracting, more, etc. in Fig. 11) are displayed on the same user interface for editing thumbnails).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Shin into the system of Anvaripour and Lin because such an incorporation allows for the benefit of improving the system by allowing users to identify specific people (Shin, abstract).
Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Anvaripour et al. (US 2022/0206738) and further in view of Lin et al. (US 2023/0129509) and further in view of Ubillos (WO2013/133895).
Regarding claim 16, Anvaripour teaches the claimed as discussed above, however fails to explicitly teach, but Ubillos teaches, the claimed further comprising:
decreasing brightness of an un-triggered material control or increasing transparency of the un-triggered material control in response to the triggering operation of the user on the material control; or (examiner notes the alternative aspect of this limitation, which requires only one of the listed options to meet the claim)
moving the material control to a target secondary track in response to a moving operation of the user on the material control (Figs. 6-15 teach various manners in which a material control for adding material to content is able to be moved, either by moving the fans anywhere in the user interface or by having the effects move from a fan style into a track at the bottom of the user interface).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Ubillos into the system of Anvaripour and Lin because such an incorporation provides the benefit of allowing users to see what the image will look like based on the editing options presented (Ubillos, Background/page 1).
Regarding claim 17, Anvaripour teaches the claimed as discussed above, however fails to explicitly teach, but Ubillos teaches, the claimed wherein the displaying a material control corresponding to the material added by the user comprises:
displaying, in a secondary track, the material control corresponding to the material added by the user (Figs. 6-15 teach various manners in which a material control for adding material to content is able to be moved, either by moving the fans anywhere in the user interface or by having the effects move from a fan style into a track at the bottom of the user interface).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Ubillos into the system of Anvaripour and Lin because such an incorporation provides the benefit of allowing users to see what the image will look like based on the editing options presented (Ubillos, Background/page 1).
Allowable Subject Matter
Claim 18 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is an examiner’s statement of reasons for allowance: the closest prior art in Anvaripour et al. (US 2022/0206738), Shin (WO 2022/154387), Lin et al. (US 2023/0129509) and Ubillos (WO2013/133895) does not teach or suggest in detail the limitations of “in response to the plurality of material controls being located on a same secondary track and having a temporal overlap, determining a layer sequence of the material controls according to a sequence of adding the material controls, and displaying the plurality of added material controls according to the layer sequence” as recited in claim 18. Anvaripour, Lin, Shin and Ubillos teach various systems that allow for editing video segments/portions with a user interface to preview video edits, edit a thumbnail track, etc., but fail to teach the above-quoted limitations.
Furthermore, the claim limitations quoted above from claim 18 appear to fall outside of the abstract idea groupings (per the 2019 Revised Patent Subject Matter Eligibility Guidance (PEG)), including mathematical concepts, mental processes and certain methods of organizing human activity. The claimed limitations are stated in such a manner that the processes are not broad enough (for each of the claims as a whole) to fall into one of the three groupings of abstract ideas.
As indicated by the above statements, the closest prior art as discussed above, either singularly or in combination, fails to anticipate or render obvious the combination of the discussed features/limitations. Additionally, applicant’s arguments have been considered persuasive in light of the claim limitations as well as the enabling portions of the specification.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GELEK W TOPGYAL whose telephone number is (571)272-8891. The examiner can normally be reached M-F (9:30-6 PST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GELEK W TOPGYAL/ Primary Examiner, Art Unit 2481