Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Amendment
This is in response to applicant’s amendment/response filed on 01/23/2026, which has been entered and made of record. Claims 1-2, 9-10, and 17-18 have been amended. Claims 4, 7, 12 and 15 have been cancelled. No claims have been added. Claims 1-3, 5-6, 8-11, 13-14, 16-20 are pending in the application.
Response to Arguments
Applicant's arguments filed on 01/23/2026 have been fully considered but they are not persuasive.
Applicant submits that amended claim 1 recites “wherein the synchronously playing is automatically triggered in response to generating the first effect,” and asserts that U.S. Patent No. 11,941,728 fails to disclose or suggest this feature. (Remarks, Page 6).
The examiner disagrees with Applicant’s premises and conclusion. Claim 1 of U.S. Patent No. 11,941,728 recites “synchronously playing the virtual video frame and at least one video clip corresponding to the virtual video frame on the video track based on a timeline, to preview an effect of the target effect style applied to the at least one video clip.” This recitation teaches the same feature: by definition, “synchronous” playing is triggered automatically, rather than by a separate user action.
Applicant submits “Amended claim 1 requires that in response to the first effect style being an animation-type effect style, a length corresponding to the identifier of the first effect depends on a display duration of the first effect style (that is, depends on the display duration parameter of the first effect style itself, referring to paragraph [0046] of the originally filed application), and the length corresponding to the identifier of the first effect is unrelated to the content of the video clip on a video track.” (Remarks, Page 8).
The examiner disagrees with Applicant’s premises and conclusion. Applicant asserts that the length “depends on the display duration parameter of the first effect style itself.” Yeh, however, teaches that the duration of the first effect is the same as that of the video clip. Moreover, the claim does not require that the first effect be unrelated to the video clip; it requires only that the length be determined “based on a display duration of the first effect style.”
Applicant submits “Zheng and Lee also fail to disclose or suggest the above-captioned features of amended claim 1 and fail to provide any motivation to realize the above features. Because Zheng and Lee do not involve determining the ‘a length corresponding to the identifier of the first effect’ based on the type of the first effect style.” (Remarks, Page 10).
The examiner disagrees with Applicant’s premises and conclusion. Zheng teaches, in ¶0031, “the preview window layer and the video layer are rendered synchronously, and the audio layer is used to add an audio effect according to the user instruction.” and “the interface where the editing track is located includes a preview interface, the preview interface includes three layers, that is, the preview interface where the editing track is located includes three layers, the first layer is the preview window, the second layer is the video layer, and the third layer is the audio layer, wherein, the first layer and the second layer are implemented by Canvas, and the first layer and the second layer are rendered synchronously, and the WYSIWYG effect is achieved through real-time editing and rendering from the upper layer to the lower layer.” Therefore, Zheng also teaches the amended features.
Lee, in addition, teaches in Fig. 9 and col. 2, lines 60-61, “synchronizing visual effects with the plurality of videos based on the rhythm data.” Thus, Lee also teaches the amended features.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1)-706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-3, 5-6, 8-11, 13-14, 16-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,941,728 B2.
Although the conflicting claims are not identical, they are not patentably distinct from each other because claims 1-20 of U.S. Patent No. 11,941,728 B2 contain substantially all the limitations of the instant application claims.
The following chart compares representative claims of the instant application (Application No. 18/435,760) with the corresponding claims of U.S. Patent No. 11,941,728 B2.

Application No. 18/435,760, claim 1:
1. An effect previewing method, comprising:
acquiring a start point and an end point to generate a first effect for a first effect style, and displaying an identifier of the first effect for the first effect style on an effect track,
synchronously playing the first effect and at least one video clip on a video track, in response to previewing for the first effect, wherein the synchronously playing is automatically triggered in response to generating the first effect,
wherein the previewing is configured to trigger a display of a preview effect of the first effect applied to the at least one video clip,
wherein the effect track has a hierarchical relationship with at least one video track,
wherein the effect track is located on an upper layer than each video track of the at least one video track, and pictures of the effect track block pictures of each video track of the at least one video track, and
wherein the method further comprises: in response to the first effect style being an animation-type effect style, determining a length corresponding to the identifier of the first effect based on a display duration of the first effect style
U.S. Patent No. 11,941,728 B2, claims 1, 2, and 4:

1. An effect application previewing method, comprising:
taking, in response to a preview trigger operation for a first effect style, a position of a pointer on a video track as a start point, to generate a virtual video frame for the first effect style; and
synchronously playing the virtual video frame and at least one video clip corresponding to the virtual video frame on the video track based on a timeline, to preview an effect of the first effect style applied to the at least one video clip;
wherein the taking, in response to the preview trigger operation for the first effect style, the position of the pointer on the video track as the start point, to generate the virtual video frame for the first effect style, comprises:
taking, in response to the preview trigger operation for the first effect style, the position of the pointer on the video track as the start point, to generate the virtual video frame for the first effect style on a virtual track, wherein the virtual track has a hierarchical relationship with the at least one video track, and
wherein the virtual track is located on an upper layer than each video track of the at least one video track, and pictures of the virtual track block pictures of each video track of the at least one video track.
2. The method according to claim 1, wherein a length corresponding to the virtual video frame is same as a display duration of the first effect style, or the length corresponding to the virtual video frame is same as a length from the start point to a preset end point.
4. The method according to claim 1, wherein the target effect style comprises an animation-type effect style and a static-type effect style.
Application No. 18/435,760, claim 9.

U.S. Patent No. 11,941,728 B2, claims 7, 10, and 4:

7. An effect application previewing apparatus, comprising:
a generation module, configured to take, in response to a preview trigger operation for a first effect style, a position of a pointer on a video track as a start point, to generate a virtual video frame for the first effect style; and
a playing module, configured to synchronously play the virtual video frame and at least one video clip corresponding to the virtual video frame on the video track based on a timeline, to preview an effect of the first effect style applied to the at least one video clip,
wherein the generation module is configured to:
take, in response to the preview trigger operation for the first effect style, the position of the pointer on the video track as the start point, to generate the virtual video frame for the first effect style on a virtual track, wherein the virtual track has a hierarchical relationship with the at least one video track, and
wherein the virtual track is located on an upper layer than each video track of the at least one video track, and pictures of the virtual track block pictures of each video track of the at least one video track.
10. The apparatus according to claim 7, wherein a length corresponding to the virtual video frame is same as a display duration of the first effect style, or the length corresponding to the virtual video frame is same as a length from the start point to a preset end point.
4. The method according to claim 1, wherein the target effect style comprises an animation-type effect style and a static-type effect style.
Application No. 18/435,760, claim 17.

U.S. Patent No. 11,941,728 B2, claims 12, 10, and 4:

12. A non-transitory computer-readable storage medium, wherein instructions are stored in the non-transitory computer-readable storage medium, and when the instructions are run on a terminal device, the terminal device is enabled to implement an effect application previewing method,
wherein the effect application previewing method comprises:
taking, in response to a preview trigger operation for a first effect style, a position of a pointer on a video track as a start point, to generate a virtual video frame for the first effect style; and
synchronously playing the virtual video frame and at least one video clip corresponding to the virtual video frame on the video track based on a timeline, to preview an effect of the first effect style applied to the at least one video clip;
wherein the taking, in response to the preview trigger operation for the first effect style, the position of the pointer on the video track as the start point, to generate the virtual video frame for the first effect style, comprises:
taking, in response to the preview trigger operation for the first effect style, the position of the pointer on the video track as the start point, to generate the virtual video frame for the first effect style on a virtual track, wherein the virtual track has a hierarchical relationship with the at least one video track, and
wherein the virtual track is located on an upper layer than each video track of the at least one video track, and pictures of the virtual track block pictures of each video track of the at least one video track.
10. The apparatus according to claim 7, wherein a length corresponding to the virtual video frame is same as a display duration of the first effect style, or the length corresponding to the virtual video frame is same as a length from the start point to a preset end point.
4. The method according to claim 1, wherein the target effect style comprises an animation-type effect style and a static-type effect style.
Dependent claims 2-3, 5-6, 8, 10-11, 13-14, 16, 18-20 recite similar subject matter to claims 1-20 of U.S. Patent No. 11,941,728 B2 and are rejected for the same reasons.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5-6, 8-11, 13-14, 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yeh (US Pub 2022/0319064 A1) in view of Meaney et al. (US Pub 2010/0281376 A1), further in view of Zheng et al. (US Pub 2021/0358524 A1) and Lee (US Pub 2006/0224940 A1).
As to claim 1, Yeh discloses an effect application previewing method (Yeh, abstract), comprising:
acquiring a start point and an end point to generate a first effect for a first effect style, and displaying an identifier of the first effect for the first effect style on an effect track (Fig. 4, ¶0046-0047, “the starting point of the graphical representation shown relative to the time axis 410 denotes the starting point in which the facial region 416 initially appears in the video 118 for that graphical representation.” Fig. 9, ¶0055, “The user interface 902 also includes a horizontal element representing a time axis 910, where a progress bar element 912 indicates playback progression of the video 118. As described earlier, the user input processor 108 (FIG. 1) in the computing device 102 identifies one or more facial regions 416 displayed in the video 118. For each facial region 916, the user input processor 108 generates one or more segments or graphical representations 914 in the user interface 902 where the length of each segment or graphical representation 914 relative to the time axis 910 corresponds to the time duration in which a corresponding facial region 916 is displayed in the video 118.” ¶0056, “the starting point of the graphical representation shown relative to the time axis 910 denotes the starting point in which the facial region 916 initially appears in the video 118 for that graphical representation. Each instance in which a particular facial region 916 is displayed in the video 118 is represented by a corresponding graphical representation 914.” ¶0058, “each row of graphical representations corresponding to Face #1, Face #2, Face #3 includes a corresponding row 920, 922, 924 where each row 920, 922, 924 similarly contains graphical representations 926. The graphical representation 926 for the corresponding rows 920, 922, 924 contain one or more thumbnail previews 928 of the one or more selected facial effects applied to the facial region 916 of the user, thereby allowing the user to view which facial effects are being applied during a given segment for a particular facial region 916. For example, the thumbnail preview 928 shows facial effect #2 and facial effect #4 applied to Face #1 for a given segment, while another thumbnail preview 928 shows facial effect #4 and facial effect #5 applied to Face #2 for a given segment. Yet another thumbnail preview 928 shows facial effect #2, facial effect #3, and facial effect #4 applied to Face #3 for a given segment.”),
synchronously playing the first effect and at least one video clip on a video track, in response to previewing for the first effect (¶0046, “a horizontal element representing a time axis 410, where a progress bar element 412 indicates playback progression of the video 118.” “For each facial region 416, the user input processor 108 generates one or more segments or graphical representations 414 in the user interface 402 where the length of each segment or graphical representation 414 relative to the time axis 410 corresponds to the time duration in which a corresponding facial region 416 is displayed in the video 118.” ¶0054, “the user interface 902 in FIG. 9 includes a window 904 for displaying playback of a video 118 (FIG. 1). A facial effects toolbox 906 in the user interface 902 contains various facial effects that the user can apply to facial regions 916 displayed during playback of the video 118. As described in more detail below, the user can navigate between the facial effects toolbox 906 and various graphical representations 914 to apply desired facial effects to facial regions 916. This can be accomplished, for example, using a mouse 908 or by performing gestures on a touchscreen display.” ¶0058, “The graphical representation 926 for the corresponding rows 920, 922, 924 contain one or more thumbnail previews 928 of the one or more selected facial effects applied to the facial region 916 of the user, thereby allowing the user to view which facial effects are being applied during a given segment for a particular facial region 916.” The playback is synchronous because the effect and the video clip are played according to the same timeline.),
wherein the synchronously playing is automatically triggered in response to generating the first effect (¶0046, “a horizontal element representing a time axis 410, where a progress bar element 412 indicates playback progression of the video 118.” “For each facial region 416, the user input processor 108 generates one or more segments or graphical representations 414 in the user interface 402 where the length of each segment or graphical representation 414 relative to the time axis 410 corresponds to the time duration in which a corresponding facial region 416 is displayed in the video 118.”),
wherein the previewing is configured to trigger a display of a preview effect of the first effect applied to the at least one video clip (Yeh, Fig. 4, ¶0047, “the graphical representation 414 may comprise a bar shaped element where the length of the bar in the horizontal direction corresponds to the length of time in which the facial region is displayed in the video 118. Furthermore, the starting point of the graphical representation shown relative to the time axis 410 denotes the starting point in which the facial region 416 initially appears in the video 118 for that graphical representation. Each instance in which a particular facial region 416 is displayed in the video 118 is represented by a corresponding graphical representation 414.” ¶0065, “The graphical representation 1026 for the corresponding rows 1020, 1022 contain one or more thumbnail previews 1028 of the one or more selected facial effects applied to the facial region 1016 of the user, thereby allowing the user to view which facial effects are being applied during a given segment for a particular facial region 1016. For example, as shown in FIG. 11, the thumbnail preview 1028 shows facial effect #2 and facial effect #4 applied to Face #1 for a given segment, while another thumbnail preview 1028 shows a mask effect (Mask #2) applied to Face #2 for each segment depicting Face #2.”),
wherein pictures of the effect track block pictures of each video track of the at least one video track (¶0027, “Thumbnail previews of applied facial effects are displayed on the graphical representations of corresponding facial regions having applied facial effects.” Fig. 9, ¶0058, “the thumbnail preview 928 shows facial effect #2 and facial effect #4 applied to Face #1 for a given segment, while another thumbnail preview 928 shows facial effect #4 and facial effect #5 applied to Face #2 for a given segment. Yet another thumbnail preview 928 shows facial effect #2, facial effect #3, and facial effect #4 applied to Face #3 for a given segment.”),
wherein the method further comprises: in response to the first effect style being an animation-type effect style, determining a length corresponding to the identifier of the first effect based on a display duration of the first effect style (Yeh, Fig. 4, ¶0047, “the graphical representation 414 may comprise a bar shaped element where the length of the bar in the horizontal direction corresponds to the length of time in which the facial region is displayed in the video 118. Furthermore, the starting point of the graphical representation shown relative to the time axis 410 denotes the starting point in which the facial region 416 initially appears in the video 118 for that graphical representation. Each instance in which a particular facial region 416 is displayed in the video 118 is represented by a corresponding graphical representation 414.” Fig. 5 to Fig. 7, ¶0049-0052; for example, Fig. 6, ¶0051, “the user selects a facial effect from the facial effects toolbox 406 and applies the selected facial effect to a target graphical representation 414” and “the facial effect 502 (FIG. 5) applied in FIG. 5 is removed from the facial region 416 once playback of the video 118 (FIG. 1) reaches the end of the target graphical representation 414a. In this regard, the user is able to apply facial effects on a segment-by-segment basis.” The “graphical representation 414” thus shows a length that corresponds to a display duration of the first effect style, or a length that is the same as the length from the start point to the end point.).
Yeh does not explicitly disclose “wherein the effect track has a hierarchical relationship with at least one video track”, “the effect track is located on an upper layer than each video track of the at least one video track” and “an animation-type effect style”.
Meaney teaches the effect track has a hierarchical relationship with at least one video track (Meaney, ¶0220, “the preview clips may follow the hierarchy of the committed clips, and thus be superseded by committed clips on higher-numbered tracks, but may supersede clips on lower or equal-numbered tracks. In other embodiments, preview clips may supersede committed clips regardless of whether the preview clip is placed in a higher-numbered track than the committed clip.”).
Meaney teaches “an animation-type effect style” (Meaney, Fig. 12, ¶0096, “a representation of a candidate clip 1210 has been added to the composite display area 340. As shown, the representation 1210 of clip 510 has been placed in the composite display area 340 at the playhead 390 location 710 in the selected track 350 and the preview tool 410 has also been invoked (and displayed in the GUI 300).” and ¶0254, “some or all of the video clips are computer-generated animations or include computer generated animations (e.g., animated objects, computer-generated effects, etc.)”. These sections teach “in response to the first effect style being an animation-type effect style, determining a length corresponding to the identifier of the first effect based on a display duration of the first effect style”).
Yeh and Meaney are considered to be analogous art because both pertain to video editing tools. It would have been obvious before the effective filing date of the claimed invention to have modified Yeh with the features of “a hierarchical relationship” and “an animation-type effect style” as taught by Meaney. The suggestion/motivation would have been to arrange the tracks hierarchically in the timeline, so that a clip on a track with a higher number supersedes the display of a track with a lower number (Meaney, ¶0218). The claim would have been obvious because the technique for improving a particular class of devices was part of the ordinary capabilities of a person of ordinary skill in the art, in view of the teaching of the technique for improvement in other situations.
Zheng teaches the effect track is located on an upper layer than each video track of the at least one video track (Zheng, Fig. 2, ¶0027, “The target material includes textures, texts, Mosaic effects and so on. The target material can be displayed on the interface where the editing track is located, after pushing a selected image to the editing track according to the user instruction, target material corresponding to the image can be selected and dragged into the editing track, which can be suspended on the video corresponding to the time axis where the image is located.” ¶0031, “dividing the editing track into a preview window layer, a video layer and an audio layer, wherein the preview window layer and the video layer are rendered synchronously, and the audio layer is used to add an audio effect according to the user instruction. Herein, the interface where the editing track is located includes a preview interface, the preview interface includes three layers, that is, the preview interface where the editing track is located includes three layers, the first layer is the preview window, the second layer is the video layer, and the third layer is the audio layer, wherein, the first layer and the second layer are implemented by Canvas, and the first layer and the second layer are rendered synchronously, and the WYSIWYG effect is achieved through real-time editing and rendering from the upper layer to the lower layer. The audio effect can be added on the editing track of the third layer, such as audio filters and so on. The addition of the effect can be implemented according to the shortcut key instruction or the mouse input instruction.”).
Zheng also teaches synchronously playing the first effect and at least one video clip on a video track, in response to previewing for the first effect, wherein the synchronously playing is automatically triggered in response to generating the first effect (Zheng, ¶0031, “the preview window layer and the video layer are rendered synchronously, and the audio layer is used to add an audio effect according to the user instruction.” “the interface where the editing track is located includes a preview interface, the preview interface includes three layers, that is, the preview interface where the editing track is located includes three layers, the first layer is the preview window, the second layer is the video layer, and the third layer is the audio layer, wherein, the first layer and the second layer are implemented by Canvas, and the first layer and the second layer are rendered synchronously, and the WYSIWYG effect is achieved through real-time editing and rendering from the upper layer to the lower layer.”).
Lee also teaches the effect track is located on an upper layer than each video track of the at least one video track (Lee, Figs. 7-10 and Fig. 17, ¶0009, ¶0038).
Lee teaches synchronously playing the first effect and at least one video clip on a video track, in response to previewing for the first effect, wherein the synchronously playing is automatically triggered in response to generating the first effect (Lee, Fig. 9; col. 2, lines 60-61, “synchronizing visual effects with the plurality of videos based on the rhythm data.”).
Yeh, Meaney, Zheng and Lee are considered to be analogous art because all pertain to video editing tools. It would have been obvious before the effective filing date of the claimed invention to have modified Yeh with the features of “located on an upper layer than each video track of the at least one video track” and “synchronously playing the first effect and at least one video clip on a video track, in response to previewing for the first effect, wherein the synchronously playing is automatically triggered in response to generating the first effect” as taught by Zheng and Lee. The claim would have been obvious because the technique for improving a particular class of devices was part of the ordinary capabilities of a person of ordinary skill in the art, in view of the teaching of the technique for improvement in other situations.
As to claim 2, claim 1 is incorporated and the combination of Yeh, Meaney, Zheng and Lee discloses the synchronously playing the first effect and at least one video clip on a video track, comprises:
synchronously playing the first effect and at least one video clip corresponding to the first effect on the video track, and a duration of the at least one video clip corresponding to the first effect is a period from the start point to the end point (Yeh, Fig. 9, ¶0046, “the user input processor 108 generates one or more segments or graphical representations 414 in the user interface 402 where the length of each segment or graphical representation 414 relative to the time axis 410 corresponds to the time duration in which a corresponding facial region 416 is displayed in the video 118.” ¶0047, “the graphical representation 414 may comprise a bar shaped element where the length of the bar in the horizontal direction corresponds to the length of time in which the facial region is displayed in the video 118. Furthermore, the starting point of the graphical representation shown relative to the time axis 410 denotes the starting point in which the facial region 416 initially appears in the video 118 for that graphical representation. Each instance in which a particular facial region 416 is displayed in the video 118 is represented by a corresponding graphical representation 414.”).
As to claim 3, claim 1 is incorporated and the combination of Yeh, Meaney, Zheng and Lee discloses the displaying an identifier of the first effect for the first effect style on an effect track comprises:
displaying the identifier of the first effect for the first effect style on the effect track based on a position of a pointer on a video track (Yeh, Fig. 4, ¶0047, “the graphical representation 414 may comprise a bar shaped element where the length of the bar in the horizontal direction corresponds to the length of time in which the facial region is displayed in the video 118. Furthermore, the starting point of the graphical representation shown relative to the time axis 410 denotes the starting point in which the facial region 416 initially appears in the video 118 for that graphical representation. Each instance in which a particular facial region 416 is displayed in the video 118 is represented by a corresponding graphical representation 414.”).
As to claim 5, claim 1 is incorporated and Yeh discloses that the first effect style comprises a static-type effect style (Yeh, ¶0025, “The one or more facial effects may include, for example, a face-lift effect, an effect for blurring an entire facial region, and/or an effect for blurring target features in the facial region. The facial effects may also comprise predefined facial effects templates comprising combinations of facial effects. For example, one predefined facial effects template may comprise a particular lipstick and a particular eye shadow.”).
Yeh does not explicitly disclose an animation-type effect style.
Meaney teaches an animation-type effect style (Meaney, ¶0254, “some or all of the video clips are computer-generated animations or include computer generated animations (e.g., animated objects, computer-generated effects, etc.).”).
Yeh and Meaney are considered to be analogous art because both pertain to video editing tools. It would have been obvious before the effective filing date of the claimed invention to have modified Yeh with the feature of “an animation-type effect style” as taught by Meaney. The claim would have been obvious because the substitution of one known element for another would have yielded predictable results to one of ordinary skill in the art.
As to claim 6, claim 1 is incorporated and the combination of Yeh, Meaney, Zheng and Lee discloses displaying, in response to the previewing for the first effect, the first effect style on a video picture being played in a video playing window (Yeh, ¶0054, “the user interface 902 in FIG. 9 includes a window 904 for displaying playback of a video 118 (FIG. 1). A facial effects toolbox 906 in the user interface 902 contains various facial effects that the user can apply to facial regions 916 displayed during playback of the video 118. As described in more detail below, the user can navigate between the facial effects toolbox 906 and various graphical representations 914 to apply desired facial effects to facial regions 916. This can be accomplished, for example, using a mouse 908 or by performing gestures on a touchscreen display.”).
As to claim 8, claim 1 is incorporated and the combination of Yeh, Meaney, Zheng and Lee discloses the first effect comprises a virtual effect (Yeh, ¶0002, “performing segment-based virtual application of facial effects to facial regions displayed in video frames”).
As to claim 9, the combination of Yeh, Meaney, Zheng and Lee discloses an effect previewing apparatus, comprising:
a generation module, configured to acquire a start point and an end point to generate a first effect for a first effect style, and display an identifier of the first effect for the first effect style on an effect track; and
a playing module, configured to synchronously play the first effect and at least one video clip on a video track, in response to previewing for the first effect, wherein the synchronously playing is automatically triggered in response to generating the first effect,
wherein the previewing is configured to trigger a display of a preview effect of the first effect applied to the at least one video clip,
wherein the effect track has a hierarchical relationship with at least one video track, wherein the effect track is located on an upper layer than each video track of the at least one video track, and pictures of the effect track block pictures of each video track of the at least one video track,
and wherein the generation module is further configured to: in response to the first effect style being an animation-type effect style, determine a length corresponding to the identifier of the first effect based on a display duration of the first effect style (See claim 1 for detailed analysis.).
As to claim 10, claim 9 is incorporated and the combination of Yeh, Meaney, Zheng and Lee discloses the synchronously playing the first effect and at least one video clip on a video track, comprises:
synchronously playing the first effect and at least one video clip corresponding to the first effect on the video track, and a duration of the at least one video clip corresponding to the first effect is a period from the start point to the end point (See claim 2 for detailed analysis.).
As to claim 11, claim 9 is incorporated and the combination of Yeh, Meaney, Zheng and Lee discloses the generation module is further configured to display the identifier of the first effect for the first effect style on the effect track based on a position of a pointer on a video track (See claim 3 for detailed analysis.).
As to claim 13, claim 9 is incorporated and the combination of Yeh, Meaney, Zheng and Lee discloses the first effect style comprises the animation-type effect style and a static-type effect style (See claim 5 for detailed analysis.).
As to claim 14, claim 9 is incorporated and the combination of Yeh, Meaney, Zheng and Lee discloses a displaying module, configured to display, in response to the previewing for the first effect, the first effect style on a video picture being played in a video playing window (See claim 6 for detailed analysis.).
As to claim 16, claim 9 is incorporated and the combination of Yeh, Meaney, Zheng and Lee discloses the first effect comprises a virtual effect (See claim 8 for detailed analysis.).
As to claim 17, the combination of Yeh, Meaney, Zheng and Lee discloses a non-transitory computer-readable storage medium, wherein instructions are stored in the non-transitory computer-readable storage medium, and when the instructions are run on a terminal device, the terminal device is enabled to implement an effect previewing method, wherein the effect previewing method comprises:
acquiring a start point and an end point to generate a first effect for a first effect style, and displaying an identifier of the first effect for the first effect style on an effect track,
synchronously playing the first effect and at least one video clip on a video track, in response to previewing for the first effect, wherein the synchronously playing is automatically triggered in response to generating the first effect,
wherein the previewing is configured to trigger a display of a preview effect of the first effect applied to the at least one video clip,
wherein the effect track has a hierarchical relationship with at least one video track,
wherein the effect track is located on an upper layer than each video track of the at least one video track, and pictures of the effect track block pictures of each video track of the at least one video track
wherein the method further comprises: in response to the first effect style being an animation-type effect style, determining a length corresponding to the identifier of the first effect based on a display duration of the first effect style (See claim 1 for detailed analysis.).
As to claim 18, claim 17 is incorporated and the combination of Yeh, Meaney, Zheng and Lee discloses wherein the synchronously playing the first effect and at least one video clip on a video track, comprises:
synchronously playing the first effect and at least one video clip corresponding to the first effect on the video track, and a duration of the at least one video clip corresponding to the first effect is a period from the start point to the end point (See claim 2 for detailed analysis.).
As to claim 19, claim 17 is incorporated and the combination of Yeh, Meaney, Zheng and Lee discloses wherein the displaying an identifier of the first effect for the first effect style on an effect track comprises:
displaying the identifier of the first effect for the first effect style on the effect track based on a position of a pointer on a video track (See claim 3 for detailed analysis.).
As to claim 20, the combination of Yeh, Meaney, Zheng and Lee discloses a device, comprising: a memory, a processor, and a computer program stored in the memory and run on the processor,
wherein upon executing the computer program, the method according to claim 1 is achieved by the processor (See claim 1 for detailed analysis.).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN whose telephone number is (571)270-7951. The examiner can normally be reached on M-F 8-5 PST Mid-day flex.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached on 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YU CHEN/
Primary Examiner, Art Unit 2613