DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 11 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As per Claim 11, the claim recites “wherein if the target cover is a dynamic cover”. Claim 10 recites “wherein the target cover is a static cover or a dynamic cover”. Hence, if the target cover is a static cover, the claim is indefinite. The claim recitation should incorporate language that defines the target cover as a dynamic cover and then address the functionality that corresponds to such a target cover.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-4, 15, 17 & 20-22 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bodie (U.S. Pub 2006/0092291).
As per Claim 1, Bodie teaches An audio processing method, comprising: in response to a first instruction, displaying a first control on a first interface, wherein the first control is associated with audio editing; (Fig. 3, ¶28 wherein selecting the appropriate icon 36 causes the CPU 124 to enable the image 170 and audio capture 172 routines. Enabling the image capture routines 170 customizes certain user interface controls to operate as the user interface of the digital imaging system. For example, in the exemplary data processing system 20, the function of the selector button 38 is customized to operate as a shutter button when the digital imaging system is invoked and references herein to the shutter button are intended to refer to the selector button of the data processing system when operating as a digital imaging system and device. In addition, activating the digital imaging system causes the CPU 124 to display one or more menus on the touch screen to enable the user to select among several optional operating modes for the digital imaging system)
in response to a touch control operation on the first control on the first interface, displaying one or more third controls, wherein the one or more third controls are configured to trigger corresponding audio editing on audio to be processed; and (Fig. 3, Fig. 6, ¶28, ¶37 wherein activating the digital imaging system causes the CPU 124 to display one or more menus on the touch screen to enable the user to select among several optional operating modes for the digital imaging system, wherein the audio capture routines 172 of the data processing system 20 also include editing routines permitting the user to edit the audio data file 228. Referring to FIG. 6, a menu of audio editing options 370 can be displayed on the touch screen display. By selecting an appropriate option, the user can invoke the audio editing routines to display a visual representation of the spectrum of the audio data 372, delete a portion of the audio data 374, record a new audio annotation or a new portion of the annotation in the audio data file 376, splice a new portion of the audio annotation to the audio data in the audio file 378, or apply audio effects to an audio annotation 380)
in response to a touch control operation on a third control of the one or more third controls, triggering audio editing corresponding to the third control on the audio to be processed, to acquire target audio. (Fig. 6, ¶37 wherein the user can invoke the audio editing routines to display a visual representation of the spectrum of the audio data 372, delete a portion of the audio data 374, record a new audio annotation or a new portion of the annotation in the audio data file 376, splice a new portion of the audio annotation to the audio data in the audio file 378. By way of examples, a "tunnel" effect 382, an echo 384, or background music 386 may be added to the audio data included in an audio file)
As per Claim 2, the rejection of claim 1 is hereby incorporated by reference; Bodie further teaches wherein the first interface further comprises a second control, wherein the second control is associated with a processing of an audio effect. (Fig. 15, ¶133 wherein by selecting an appropriate option, the user can invoke the audio editing routines to apply audio effects to an audio annotation 380)
As per Claim 3, the rejection of claim 2 is hereby incorporated by reference; Bodie further teaches wherein the audio effect comprises one or more of the following: reverberation, equalization, electronic sound, phase shifting, flanger, filter, chorus; and (Fig. 6, ¶37 wherein by selecting an appropriate option, the user can invoke the audio editing routines to apply audio effects to an audio annotation 380 wherein by way of examples, a "tunnel" effect 382, an echo 384, or background music 386 may be added to the audio data included in an audio file)
the audio editing comprises one or more of the following: editing audio to optimize the audio; extracting vocals and/or an accompaniment from audio; extracting vocals from audio and mixing the extracted vocals with a preset accompaniment; or extracting vocals from first audio, extracting an accompaniment from second audio, and mixing the extracted vocals with the extracted accompaniment. (Fig. 6, ¶37 wherein the user can invoke the audio editing routines to display a visual representation of the spectrum of the audio data 372, delete a portion of the audio data 374, record a new audio annotation or a new portion of the annotation in the audio data file 376, splice a new portion of the audio annotation to the audio data in the audio file 378)
As per Claim 4, the rejection of claim 1 is hereby incorporated by reference; Bodie further teaches wherein the displaying one or more third controls comprises: displaying a first window on the first interface, wherein the first window comprises the one or more third controls; or displaying the one or more third controls on a second interface. (Fig. 3, Fig. 6, ¶28 wherein selecting the appropriate icon 36 causes the CPU 124 to enable the image 170 and audio capture 172 routines. Enabling the image capture routines 170 customizes certain user interface controls to operate as the user interface of the digital imaging system. For example, in the exemplary data processing system 20, the function of the selector button 38 is customized to operate as a shutter button when the digital imaging system is invoked and references herein to the shutter button are intended to refer to the selector button of the data processing system when operating as a digital imaging system and device. In addition, activating the digital imaging system causes the CPU 124 to display one or more menus on the touch screen to enable the user to select among several optional operating modes for the digital imaging system, wherein a menu of audio editing options 370 can be displayed on the touch screen display)
Claim 15 is similar in scope to Claim 1; therefore, Claim 15 is rejected under the same rationale as Claim 1.
Claim 17 is similar in scope to Claim 1; therefore, Claim 17 is rejected under the same rationale as Claim 1.
Claim 20 is similar in scope to Claim 2; therefore, Claim 20 is rejected under the same rationale as Claim 2.
Claim 21 is similar in scope to Claim 3; therefore, Claim 21 is rejected under the same rationale as Claim 3.
Claim 22 is similar in scope to Claim 4; therefore, Claim 22 is rejected under the same rationale as Claim 4.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 5, 6, 9, 12-14 & 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bodie in view of WANG et al. (U.S. Pub 2024/0169962) hereinafter Wang.
As per Claim 5, the rejection of claim 1 is hereby incorporated by reference; Bodie previously taught the first instruction. However, Bodie does not explicitly teach wherein the first instruction is a touch control operation on a fourth control on a third interface, wherein the fourth control is configured to trigger displaying of the first control and/or the second control, or the first instruction is a swiping operation on a third interface, wherein the first instruction is used to trigger switching from the third interface to the first interface.
Wang teaches wherein the first instruction is a touch control operation on a fourth control on a third interface, wherein the fourth control is configured to trigger displaying of the first control and/or the second control, or the first instruction is a swiping operation on a third interface, wherein the first instruction is used to trigger switching from the third interface to the first interface. (Fig. 4a-4c, ¶126, ¶147 wherein the user may operate (for example, tap by using a finger/stylus) a type tag displayed on the audio editing interface 401. For example, the user may separately tap the “Running” tag, the “Happy” tag, the “Excited” tag, and the “Rhythm and blues” tag by using a finger. After the user operates (for example, tap by using a finger/stylus) an “OK” button on the audio editing interface 401, in response to the operation, the mobile phone 10 may display an interface of all audio that is recommended by the system based on the type tags selected by the user and that has the “Running” tag, the “Happy” tag, the “Excited” tag, and the “Rhythm and blues” tag, for example, a target audio selection interface 402 shown in FIG. 4(b), wherein after the user selects the k pieces of target audio on the target audio selection interface 402 shown in FIG. 4(b) and performs the operation (for example, taps by using a finger/stylus) on the “OK” button on the target audio selection interface 402, the terminal device (namely, the mobile phone 10) may display a medley composition order selection interface 403 shown in FIG. 4(c))
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of audio data processing of Wang with the teaching of digital imaging system of Bodie because Wang teaches obtaining a richer and more diversified medley audio through obtaining m audio clips, where m is an integer greater than or equal to 2; determining m−1 pieces of transition audio information based on the m audio clips; and generating target medley audio based on the m audio clips and the m−1 pieces of transition audio information. The m−1 pieces of transition audio information are used to splice the m audio clips. For first transition audio information in the m−1 pieces of transition audio information, the first transition audio information is used to splice a first audio clip and a second audio clip that are sorted consecutively in the m audio clips. Herein, sorting of the m audio clips is a medley composition order of the m audio clips. (¶5, ¶8)
As per Claim 6, Bodie previously taught the first instruction, displaying the first control on the first interface. However, Bodie does not explicitly teach wherein before in response to the first instruction, displaying the first control on the first interface, the method further comprises: displaying a third interface for importing media, wherein the media comprises audio and/or a video; in response to a touch control operation on a fifth control on the third interface, popping up a second window, wherein the second window comprises a sixth control and/or a seventh control and/or an eighth control, and the sixth control is configured to trigger importing audio directly, the seventh control is configured to trigger importing of audio, and vocal extracting and/or accompaniment extracting on the imported audio, and the eighth control is configured to trigger importing of audio, and timbre optimization on the imported audio; and in response to a touch control operation on the sixth control or the seventh control or the eighth control on the second window, displaying the audio to be processed on the first interface.
Wang teaches wherein before in response to the first instruction, displaying the first control on the first interface, the method further comprises: displaying a third interface for importing media, wherein the media comprises audio and/or a video; in response to a touch control operation on a fifth control on the third interface, popping up a second window, wherein the second window comprises a sixth control and/or a seventh control and/or an eighth control, and the sixth control is configured to trigger importing audio directly, (Fig. 4a, Fig. 7, ¶142, ¶143 wherein the user may select a music database based on a requirement/preference of the user by operating a “Music library” button on the audio editing interface 701, wherein after the user selects the music database by operating the “Music library” button on the audio editing interface 701, the mobile phone 10 may select, based on the value of k input by the user in the input box 702, k pieces of audio as the target audio from the music database selected by the user)
the seventh control is configured to trigger importing of audio, and vocal extracting and/or accompaniment extracting on the imported audio, and the eighth control is configured to trigger importing of audio, and timbre optimization on the imported audio; and in response to a touch control operation on the sixth control or the seventh control or the eighth control on the second window, displaying the audio to be processed on the first interface. (¶144, ¶145 wherein the mobile phone 10 may select, based on a preset rule and the value of k input by the user in the input box 702, k pieces of audio as the target audio from the music database selected by the user, wherein after the terminal device determines the k pieces of target audio, the terminal device may extract m audio clips from the k pieces of target audio by using a preset algorithm. For example, the preset algorithm may be an algorithm used to extract a chorus/climax part in a song)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of audio data processing of Wang with the teaching of digital imaging system of Bodie because Wang teaches obtaining a richer and more diversified medley audio through obtaining m audio clips, where m is an integer greater than or equal to 2; determining m−1 pieces of transition audio information based on the m audio clips; and generating target medley audio based on the m audio clips and the m−1 pieces of transition audio information. The m−1 pieces of transition audio information are used to splice the m audio clips. For first transition audio information in the m−1 pieces of transition audio information, the first transition audio information is used to splice a first audio clip and a second audio clip that are sorted consecutively in the m audio clips. Herein, sorting of the m audio clips is a medley composition order of the m audio clips. (¶5, ¶8)
As per Claim 9, the rejection of claim 1 is hereby incorporated by reference; Bodie previously taught the target audio. However, Bodie does not explicitly teach further comprising: displaying the target audio on a fifth interface, wherein the fifth interface comprises a twelfth control, and the twelfth control is configured to trigger playing of the target audio.
Wang teaches further comprising: displaying the target audio on a fifth interface, wherein the fifth interface comprises a twelfth control, and the twelfth control is configured to trigger playing of the target audio. (Fig. 10a, ¶99, ¶200 wherein the terminal device 21 may interact with a user by using a client app (for example, a client app for audio editing), for example, receive an instruction input by the user, and transmit the received instruction to the server 22. Then, the server 22 is configured to: perform, according to the instruction received from the terminal device 21, the audio data processing method provided in this embodiment of this application, and send MIDI information of generated target medley audio and/or the target medley audio to the terminal device 21, wherein after the user performs an operation (for example, tap by using a finger or a stylus) on a play icon 1002 on the medley audio editing interface 1001, the mobile phone 10 plays the target medley audio for the user in response to the operation)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of audio data processing of Wang with the teaching of digital imaging system of Bodie because Wang teaches obtaining a richer and more diversified medley audio through obtaining m audio clips, where m is an integer greater than or equal to 2; determining m−1 pieces of transition audio information based on the m audio clips; and generating target medley audio based on the m audio clips and the m−1 pieces of transition audio information. The m−1 pieces of transition audio information are used to splice the m audio clips. For first transition audio information in the m−1 pieces of transition audio information, the first transition audio information is used to splice a first audio clip and a second audio clip that are sorted consecutively in the m audio clips. Herein, sorting of the m audio clips is a medley composition order of the m audio clips. (¶5, ¶8)
As per Claim 12, the rejection of claim 1 is hereby incorporated by reference; Bodie does not explicitly teach wherein the method further comprises: in response to an exporting instruction for a fifth interface, exporting data associated with the target audio to a target location, wherein the target location comprises an album or a file system.
Wang teaches wherein the method further comprises: in response to an exporting instruction for a fifth interface, exporting data associated with the target audio to a target location, wherein the target location comprises an album or a file system. (Fig. 10b, Fig. 11b, ¶222, ¶223 wherein after receiving the third operation of the user and receiving the operation (for example, a tap) of the user on the “OK” button on the audio rendering interface 1101, the mobile phone 10 may display an audio release interface 1102 displayed in FIG. 10(b). In this way, the mobile phone 10 may interact with the user through the audio release interface 1102, and export the target medley audio according to an indication input by the user, wherein the mobile phone 10 may receive, under an “Export formats” option on the audio release interface 1102, a selection operation that is input by the user and that is of selecting an audio format for export, for example, an operation of selecting an “Audio format 1”. The mobile phone 10 may receive, under an “Export path” option on the audio release interface 1102, an operation that the user inputs a name (for example, a name A) and a path of the target medley audio. The mobile phone 10 may further receive, under a “Save project” option on the audio release interface 1102, an operation that is input by the user and that is of enabling a “Save project” function)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of audio data processing of Wang with the teaching of digital imaging system of Bodie because Wang teaches obtaining a richer and more diversified medley audio through obtaining m audio clips, where m is an integer greater than or equal to 2; determining m−1 pieces of transition audio information based on the m audio clips; and generating target medley audio based on the m audio clips and the m−1 pieces of transition audio information. The m−1 pieces of transition audio information are used to splice the m audio clips. For first transition audio information in the m−1 pieces of transition audio information, the first transition audio information is used to splice a first audio clip and a second audio clip that are sorted consecutively in the m audio clips. Herein, sorting of the m audio clips is a medley composition order of the m audio clips. (¶5, ¶8)
As per Claim 13, the rejection of claim 1 is hereby incorporated by reference; Bodie does not explicitly teach wherein the method further comprises: in response to a sharing instruction for the fifth interface, sharing data associated with the target audio to a target application.
Wang teaches wherein the method further comprises: in response to a sharing instruction for the fifth interface, sharing data associated with the target audio to a target application. (Fig. 11b, ¶223 wherein the mobile phone 10 may receive, under an “Export formats” option on the audio release interface 1102, a selection operation that is input by the user and that is of selecting an audio format for export, for example, an operation of selecting an “Audio format 1”. The mobile phone 10 may receive, under an “Export path” option on the audio release interface 1102, an operation that the user inputs a name (for example, a name A) and a path of the target medley audio. The mobile phone 10 may further receive, under a “Save project” option on the audio release interface 1102, an operation that is input by the user and that is of enabling a “Save project” function)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of audio data processing of Wang with the teaching of digital imaging system of Bodie because Wang teaches obtaining a richer and more diversified medley audio through obtaining m audio clips, where m is an integer greater than or equal to 2; determining m−1 pieces of transition audio information based on the m audio clips; and generating target medley audio based on the m audio clips and the m−1 pieces of transition audio information. The m−1 pieces of transition audio information are used to splice the m audio clips. For first transition audio information in the m−1 pieces of transition audio information, the first transition audio information is used to splice a first audio clip and a second audio clip that are sorted consecutively in the m audio clips. Herein, sorting of the m audio clips is a medley composition order of the m audio clips. (¶5, ¶8)
As per Claim 14, the rejection of claim 12 is hereby incorporated by reference; Bodie as modified further teaches wherein the data associated with the target audio comprises at least one of the following: the target audio, vocals, an accompaniment, a static cover of the target audio, and a dynamic cover of the target audio. (Fig. 11b, ¶223 wherein the mobile phone 10 may receive, under an “Export formats” option on the audio release interface 1102, a selection operation that is input by the user and that is of selecting an audio format for export, for example, an operation of selecting an “Audio format 1”. The mobile phone 10 may receive, under an “Export path” option on the audio release interface 1102, an operation that the user inputs a name (for example, a name A) and a path of the target medley audio. The mobile phone 10 may further receive, under a “Save project” option on the audio release interface 1102, an operation that is input by the user and that is of enabling a “Save project” function; as taught by Wang)
Claim 23 is similar in scope to Claim 5; therefore, Claim 23 is rejected under the same rationale as Claim 5.
Claim(s) 7, 8, 10 & 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bodie in view of Wang, as applied to claims 6 & 9 above, and further in view of ZHENG et al. (U.S. Pub 2022/0238139) hereinafter Zheng.
As per Claim 7, the rejection of claim 6 is hereby incorporated by reference; Bodie as modified does not explicitly teach further comprising: in response to a touch control operation on the second control on the first interface, displaying a third window on the first interface, wherein the third window comprises a ninth control and a tenth control, wherein the ninth control is configured to trigger custom audio effect processing on audio, and the tenth control is configured to trigger preset audio effect processing on audio.
Zheng teaches further comprising: in response to a touch control operation on the second control on the first interface, displaying a third window on the first interface, wherein the third window comprises a ninth control and a tenth control, wherein the ninth control is configured to trigger custom audio effect processing on audio, and (Fig. 6A-6F, ¶122 wherein the terminal may determine the playback start time instant and the playback end time instant of the target audio data based on the cutting instruction triggered by the dragging operation by the user on the sound spectrum line of the target audio data, and perform cutting to obtain audio data between the playback start time instant and the playback end time instant. An example is shown in FIG. 6E, which is a schematic diagram of an editing interface according to an embodiment of the present disclosure. In FIG. 6E, the user performs dragging from the 10th second to the 25th second on the sound spectrum line of the target audio data, to perform cutting to obtain the target audio data between the 10th and 25th seconds, to play the target audio data obtained by cutting. In this way, the replacing of the audio data in the target resource template and customized cutting of the playback duration are implemented, thereby meeting individual requirements of users)
the tenth control is configured to trigger preset audio effect processing on audio. (Fig. 6A-6F, ¶116, ¶117 wherein a duration of the target audio data selected by the user may be different from a duration of the audio data in the resource set of the target resource template, and the user does not cut the duration of the target audio data. In some embodiments, to adapt the user-selected target audio data to the target video template, the terminal may replace the audio data in the resource set with the target audio data by obtaining a playback timeline of the audio data in the resource set, where the playback timeline indicates at least a start time instant and an end time instant of audio playback; adjusting a playback timeline of the target audio data based on the playback timeline; and replace the audio data in the resource set with the target audio data having the adjusted playback timeline)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of video file generation method of Zheng with the teaching of digital imaging system of Bodie as modified because Zheng teaches that audio data in a resource set corresponding to an editable target resource template is edited to obtain edited audio data, a playback parameter of image data in the resource set is adjusted based on the edited audio data, and video file synthesis is performed based on the edited audio data and the adjusted playback parameter to obtain the target video file. In this way, by changing or replacing the audio data, the timeline of the video file becomes flexible, so as to obtain a changeable resource template, thereby improving operability for the user. (¶65)
As per Claim 8, the rejection of claim 7 is hereby incorporated by reference; Bodie as modified further teaches further comprising: in response to a touch control operation on the ninth control or the tenth control on the third window, displaying a fourth interface, wherein the fourth interface comprises an eleventh control, and the eleventh control is configured to trigger switching between the preset audio effect processing and the custom audio effect processing. (¶123, ¶124 wherein the terminal may further edit the audio data in the following manner to obtain the edited audio data, wherein presenting, in response to the clicking operation on the editing button, a volume adjustment axis for adjusting a playback volume of the audio data; adjusting, in response to a dragging operation on an adjustment node in the volume adjustment axis, a volume of the audio data at a playback position; and replacing the audio data in the resource set with the audio data having the adjusted volume; as taught by Zheng)
As per Claim 10, the rejection of claim 9 is hereby incorporated by reference; Bodie as modified previously taught the fifth interface. However, Bodie as modified does not explicitly teach wherein the fifth interface further comprises a thirteenth control, and the method further comprises: in response to a touch control operation on the thirteenth control on the fifth interface, displaying a fourth window, wherein the fourth window comprises a cover import control, one or more preset static cover controls, and one or more preset animation effect controls; and in response to a control selecting operation on the fourth window, acquiring a target cover; wherein the target cover is a static cover or a dynamic cover.
Zheng teaches wherein the fifth interface further comprises a thirteenth control, and the method further comprises: in response to a touch control operation on the thirteenth control on the fifth interface, displaying a fourth window, wherein the fourth window comprises a cover import control, one or more preset static cover controls, and one or more preset animation effect controls; and in response to a control selecting operation on the fourth window, acquiring a target cover; wherein the target cover is a static cover or a dynamic cover. (Fig. 4C, Fig. 5, Fig. 6A, ¶106, ¶108 wherein after the user determines the target video template, a user-defined preset number of images may be imported into the target video template, so that the preset number of images present an effect as in the target video template, wherein on an editing interface according to an embodiment of the present disclosure, multiple editing buttons are displayed, such as Audio Selection, Special Effect, Text, Sticker, and the like. Clicking on different buttons triggers different editing manners)
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of video file generation method of Zheng with the teaching of digital imaging system of Bodie as modified because Zheng teaches that audio data in a resource set corresponding to an editable target resource template is edited to obtain edited audio data, a playback parameter of image data in the resource set is adjusted based on the edited audio data, and video file synthesis is performed based on the edited audio data and the adjusted playback parameter to obtain the target video file. In this way, by changing or replacing the audio data, the timeline of the video file becomes flexible, so as to obtain a changeable resource template, thereby improving operability for the user. (¶65)
As per Claim 11, the rejection of claim 10 is hereby incorporated by reference; Bodie as modified further teaches wherein if the target cover is a dynamic cover, the in response to the control selecting operation on the fourth window, acquiring the target cover comprises: in response to a control selecting operation on the fourth window, acquiring a static cover and an animation effect; (Fig. 3, Fig. 6A, ¶101, wherein the terminal is provided with a client, such as an instant messaging client, a microblog client, a short video client, and the like, and a user performs social interaction by loading a prop resource on the client, where the prop resources include at least one of: a video prop, an audio prop, and a user interface (UI) animation prop. The video prop may include, for example, a video template, a video cover, and text associated with the video, such as a title, a video tag, and the like. The audio prop may be background music, and the UI animation may be an interface for network interaction; as taught by Zheng)
generating, according to an audio feature of the target audio, the static cover and the animation effect, a dynamic cover which changes with the audio feature of the target audio; wherein the audio feature comprises audio tempo and/or volume. (Fig. 3, Fig. 6F, ¶102, ¶125, wherein the user may click an editing button for the video on the client to trigger a corresponding editing instruction to the terminal; and on reception of the editing instruction triggered by the user, the terminal correspondingly presents multiple video templates corresponding to the video; as taught by Zheng)
Related Art
Related art not relied upon includes Iyer (U.S. Pub. 2014/0372109), which teaches a method implemented by processing and other audio components of an electronic device that provides smart audio output volume control, correlating a volume level of an audio output to that of the audio input that triggered generation of the audio output.
Inquiry
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANGIE BADAWI, whose telephone number is (571) 270-7590. The examiner can normally be reached Monday through Wednesday, 9:00 am - 5:00 pm EST, with Thursdays and Fridays off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fred Ehichioya, can be reached at (571) 272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANGIE BADAWI/ Primary Examiner, Art Unit 2179