DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This office action is responsive to the Request for Continued Examination (RCE) filed on 1/28/2026. Claims 1-11 and 13-20 are pending in the case.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Snibbe et al. (US 20150220249 A1, hereinafter Snibbe) in view of Sun et al. (US 20240061560 A1, hereinafter Sun), and further in view of LI et al. (US 20230070812 A1, hereinafter LI).
As to independent claim 1, Snibbe teaches an audio publishing method, comprising:
acquiring an original audio material (FIG. 4A also illustrates detecting contact 420 (e.g., a tap gesture) on touch screen 406 at a location corresponding to audio track affordance 416-c, and FIG. 4B illustrates client device 104 displaying the audio track corresponding to audio track affordance 416-c. In FIG. 4B, the user interface includes album cover art 426, audio track information 428, and a waveform 430 for the audio track corresponding to audio track affordance 416-c. paragraph 0068-0069);
generating an audio clip corresponding to clipping information of the original audio material in response to the clipping information inputted by a user (“FIG. 4C illustrates moving end indicator 434 left-to-right and displaying start indicator 440 in response to the detecting the dragging gesture in FIG. 4B. For example, selected portion 436 remains a 30 second interval of the audio track between end indicator 434 and start indicator 440.” paragraph 0071, “In FIG. 4F, the user interface further includes representation 462 of the video clip recorded in FIGS. 4D-4E.” paragraph 0074, “In FIG. 4V, the user interface includes text entry box 4124 for adding a comment or hashtag to the media item and hashtags 4126 entered by the user of client device 104.” Paragraph 0040, “In some embodiments, the user of client device 104 is also able to apply, in real-time, overlay text, such as a title, to the video clip being recorded.” Paragraph 0121, last sentence); and
publishing the audio clip into a feed in response to a publishing operation of the user (“FIG. 4Z illustrates client device 104 displaying the publication user interface for the media item generated in FIGS. 4A-4U in response to detecting contact 4150 selecting back navigation affordance 4142 in FIG. 4Y. FIG. 4Z also illustrates client device 104 detecting contact 4152 on touch screen 406 at a location corresponding to social media application A 4134-a.” Paragraph 0095).
Snibbe does not appear to expressly teach generating visual content of an audio clip corresponding to clipping information of the original audio material, and publishing the visual content of the audio clip into a feed.
Sun teaches generating visual content of an audio corresponding to target audio information (“The target video may be automatically generated according to the target audio. Further, the target video may comprise the visualization material that is automatically generated according to the target audio. The visualization material is a video element which can be viewed in the target video. Optionally, the visualization material may comprise an image and/or text generated according to associated information of the target audio, which will be described in detail below.” Paragraph 0056-0058, 0122,0123, Fig. 4), and posting the visual content of the audio clip into a feed in response to a posting operation of the user (“For example, the first application may be a short-video application to which the target interaction interface belongs, and the sharing the target video may specifically be posting the target video in the short-video application to which the target interaction interface belong,… Continually referring to FIG. 4, a “post” button 410 may be displayed in the preset playing interface, so that when the user wants to share a finally edited target video, he/she may tap the “post” button 410 to post the target video as a daily video. 2. posting the target video in a second application other than the first application.” Paragraph 0207-0209, “As shown in FIG. 12, an electronic device 1201 displays a video playing interface, in which a target video 1202 automatically generated based on a song “Hulahula XXX” and finally edited and posted by Xiao C can be displayed” paragraph 0216). Sun further teaches “the first application may be a short-video application” paragraph 0207.
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Snibbe to comprise generating visual content of an audio clip corresponding to clipping information of the original audio material and publishing the visual content of the audio clip into a feed. One would have been motivated to make such a combination to lower a threshold for video production, so that the user can conveniently achieve the audio sharing with the video content, without the need of shooting or uploading the video (Sun, paragraph 0059,0221).
Snibbe and Sun do not appear to expressly teach wherein the visual content is provided with a Redirect-to control for redirecting from the visual content in a short video feed to the original audio material in an audio feed; wherein the audio feed comprises multiple audios; and
publishing, in response to a publishing operation of the user, the visual content of the audio clip into the short video feed and the original audio material into the audio feed, wherein the short video feed and the audio feed are different.
LI teaches wherein the visual content is provided with a Redirect-to control for redirecting from the visual content in a short video feed to the original audio material in an audio feed (“As shown in FIG. 4c, a certain multimedia such as a short video is presented on a first program interface on the user terminal. The target audio playing control 401 is presented in the area of the interface, or it can be superimposed over the multimedia. The target audio playing control 401 includes: a sound quality mark 402, a popularity mark 403, a music mark 404, a music video mark 405, and a jump anchor 406.” Paragraph 0067,0065, “S6021: operating the jump anchor or link corresponding to the audio playing application. In this step, the user can click the jump anchor or link corresponding to any of the audio playing applications to trigger the jump or backend invoking of the corresponding audio playing application.” Paragraph 0090-0091, “when the multimedia is a short video and the background music in the short video is an audio clip cut from some music, the target audio is the full version corresponding to the audio clip.” Paragraph 0045), wherein the audio feed comprises multiple audios (“the second application searches for the corresponding target audio in a licensed target audio library according to the identifier, and then transmits the target audio through the network and downloads it to the user terminal for playing.” Paragraph 0079); and
publishing, in response to a publishing operation of the user, the visual content of the audio clip into the short video feed and the original audio material into the audio feed (“In this step, the target audio includes an audio in the multimedia presented in the first application. For example, when the multimedia is a short video and the background music in the short video is an audio clip cut from some music, the target audio is the full version corresponding to the audio clip. According to the audio playing method provided in this embodiment, a second application is invoked in response to an operation on an interface of a first application, and then the second application is used to play a target audio including an audio in a multimedia in the interface of the first application.” Paragraph 0099-0101), wherein the short video feed and the audio feed are different (“the first application is a program including a short video playing application, and the second application includes an audio playing application. According to one or more embodiments of the present disclosure, the multimedia includes at least one of: audio, video, short video, text material including background music, and web page contents including audio and/or video.” Paragraph 0146-0147).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Snibbe and Sun to comprise wherein the visual content is provided with a Redirect-to control for redirecting from the visual content in a short video feed to the original audio material in an audio feed; wherein the audio feed comprises multiple audios; and publishing, in response to a publishing operation of the user, the visual content of the audio clip into the short video feed and the original audio material into the audio feed, wherein the short video feed and the audio feed are different. One would have been motivated to make such a combination because “[b]y directly invoking the second application to play the target audio, the technical effect of skipping the searching step and improving the user experience is achieved” (LI, paragraph 0021, last sentence).
As to dependent claim 2, Snibbe teaches the audio publishing method according to claim 1, and further teaches the method comprising:
displaying a preview of the visual content of the audio clip before the publishing of the visual content of the audio clip (“FIG. 4V illustrates client device 104 displaying a preview of the media item generated in FIGS. 4A-4U in response to detecting contact 4118 selecting forward navigation affordance 460 in FIG. 4U.” Paragraph 0090). Snibbe does not appear to expressly teach automatically generating visual content of the audio clip. However, Sun teaches that a target video may be automatically generated according to the target audio (“The target video may be automatically generated according to the target audio. Further, the target video may comprise the visualization material that is automatically generated according to the target audio” paragraph 0056-0058).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Snibbe to comprise automatically generating the visual content of the audio clip. One would have been motivated to make such a combination to lower a threshold for video production, so that the user can conveniently achieve the audio sharing with the video content, without the need of shooting or uploading the video (Sun, paragraph 0059,0221).
As to dependent claim 3, Snibbe teaches the audio publishing method according to claim 1. Snibbe does not appear to expressly teach wherein the visual content further comprises description information of the audio clip.
Sun teaches wherein the visual content further comprises description information of the audio clip (“Optionally, the visualization material may comprise an image and/or text generated according to associated information of the target audio, which will be described in detail below.” Paragraph 0058).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Snibbe to comprise wherein the visual content further comprises description information of the audio clip. One would have been motivated to make such a combination to lower a threshold for video production, so that the user can conveniently achieve the audio sharing with the video content, without the need of shooting or uploading the video (Sun, paragraph 0059,0221).
As to dependent claim 4, Snibbe teaches the audio publishing method according to claim 3. Snibbe does not appear to expressly teach wherein the description information of the audio clip comprises at least one of a title, a cover image, or a text description.
Sun teaches wherein the description information of the audio clip comprises at least one of a title, a cover image, or a text description (“Optionally, the visualization material may comprise an image and/or text generated according to associated information of the target audio, which will be described in detail below.” Paragraph 0058).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Snibbe to comprise wherein the description information of the audio clip comprises at least one of a title, a cover image, or a text description. One would have been motivated to make such a combination to lower a threshold for video production, so that the user can conveniently achieve the audio sharing with the video content, without the need of shooting or uploading the video (Sun, paragraph 0059,0221).
As to dependent claim 5, Snibbe teaches the audio publishing method according to claim 3, and further teaches:
obtaining description information of the original audio material through an RSS feed link input by the user (“In FIG. 4B, the user interface includes album cover art 426, audio track information 428, and a waveform 430 for the audio track corresponding to audio track affordance 416-c. For example, audio track information 428 includes artist name(s), track title, the number of media items created with the audio track, and hashtags associated with the audio track corresponding to audio track affordance 416-c.” Paragraph 0069), wherein the original audio material is obtained through the RSS feed link (“In some embodiments, server-side module 106 communicates with one or more external services such as audio sources 124a . . . 124n (e.g., streaming audio service providers such as Spotify, SoundCloud, Rdio, Pandora, and the like) and media file sources 126a . . . 126n (e.g., service providers of images and/or video such as YouTube, Vimeo, Vine, Flickr, Imgur, and the like) through one or more networks 110. I/O interface to one or more external services 120 facilitates such communications.” Paragraph 0020);
displaying one or more input controls, each input control comprising the description information of the original audio material (“In FIG. 4B, the user interface includes album cover art 426, audio track information 428, and a waveform 430 for the audio track corresponding to audio track affordance 416-c. For example, audio track information 428 includes artist name(s), track title, the number of media items created with the audio track, and hashtags associated with the audio track corresponding to audio track affordance 416-c.” paragraph 0069).
Snibbe does not appear to expressly teach wherein generating visual content of an audio clip corresponding to clipping information comprises:
generating the visual content of the original audio material according to the clipping information and the description information of the original audio material.
Sun teaches generating the visual content of the original audio material according to the audio information and the description information of the original audio material (“The target video may be automatically generated according to the target audio. Further, the target video may comprise the visualization material that is automatically generated according to the target audio. The visualization material is a video element which can be viewed in the target video. Optionally, the visualization material may comprise an image and/or text generated according to associated information of the target audio, which will be described in detail below.” Paragraph 0056-0058, 0122,0123, Fig. 4).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Snibbe to comprise generating the visual content of the original audio material according to the clipping information and the description information of the original audio material. One would have been motivated to make such a combination to lower a threshold for video production, so that the user can conveniently achieve the audio sharing with the video content, without the need of shooting or uploading the video (Sun, paragraph 0059,0221).
As to dependent claim 6, Snibbe teaches the audio publishing method according to claim 3. Snibbe does not appear to expressly teach wherein generating visual content of an audio clip corresponding to clipping information comprises:
generating a background color for the visual content based on a cover image in the case that the description information comprises the cover image.
Sun teaches generating a background color for the visual content based on a cover image in the case that the description information comprises the cover image (“The background material is a dynamic or static background image with the image feature of the associated image. Optionally, the image feature may comprise at least one of a color feature, a brightness feature, or a saturation feature.” Paragraph 0098-0099).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Snibbe to comprise generating a background color for the visual content based on a cover image in the case that the description information comprises the cover image. One would have been motivated to make such a combination to improve the user experience of media publishing.
As to dependent claim 7, Snibbe teaches the audio publishing method according to claim 1. Snibbe does not appear to expressly teach wherein generating visual content of an audio clip corresponding to clipping information comprises:
generating an animation effect of the visual content.
Sun teaches generating an animation effect of the visual content (“the visualization material may comprise a third visualization material, which may have an animation effect generated according to the associated information.” Paragraph 0132).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Snibbe to comprise generating an animation effect of the visual content. One would have been motivated to make such a combination to improve the user experience of media publishing.
As to dependent claim 8, Snibbe teaches the audio publishing method according to claim 7. Snibbe does not appear to expressly teach wherein the animation effect is associated with a change in an audio attribute of the audio clip.
Sun teaches wherein the animation effect is associated with a change in an audio attribute of the audio clip (“The associated information may comprise a music theory characteristic of the target audio. The music theory characteristic may comprise a characteristic feature of the target audio that is related to music theory, such as a rhythm feature, drumbeat feature, tone feature, and timbre feature.” Paragraph 0133).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Snibbe to comprise wherein the animation effect is associated with a change in an audio attribute of the audio clip. One would have been motivated to make such a combination to improve the user experience of media publishing.
As to dependent claim 9, Snibbe teaches the audio publishing method according to claim 1. Snibbe does not appear to expressly teach wherein the visual content is further provided with user interaction control(s).
Sun teaches the visual content is further provided with user interaction control(s) (“FIG. 4, in the preset playing interface may be displayed a text button 405, a sticker button 406, an effect button 407, and a filter button 408. The text button 405 may be used for adding new text, the sticker button 406 may be used for adding a new sticker, the effect button 407 may be used for adding an effect for the target video, and the filter button 408 may be used for adding a filter for the target video.” paragraph 0174). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Snibbe to comprise wherein the visual content is further provided with user interaction control(s). One would have been motivated to make such a combination to improve the user experience of media publishing.
As to dependent claim 13, Snibbe teaches the audio publishing method according to claim 1, and further teaches wherein generating visual content of an audio clip corresponding to clipping information of the original audio material in response to the clipping information inputted by a user comprises:
displaying an audio clip editing interface in response to the user starting to edit the audio clip, wherein the audio clip editing interface comprises a track area of the original audio material, a first movement control, and a second movement control, and the first movement control and the second movement control are respectively used to identify a start position and an end position of the audio clip in the track area (“FIG. 4C illustrates moving end indicator 434 left-to-right and displaying start indicator 440 in response to the detecting the dragging gesture in FIG. 4B. For example, selected portion 436 remains a 30 second interval of the audio track between end indicator 434 and start indicator 440.” Paragraph 0071);
using playback time points corresponding to the first movement control and the second movement control as the clipping information of the original audio material in response to the user accomplishing editing the audio clip (“FIG. 4C illustrates moving end indicator 434 left-to-right and displaying start indicator 440” paragraph 0071); and
Snibbe does not appear to expressly teach generating the visual content of the audio clip according to the clipping information.
Sun teaches generating the visual content of the audio clip according to the clipping information (“The target video may be automatically generated according to the target audio. Further, the target video may comprise the visualization material that is automatically generated according to the target audio. The visualization material is a video element which can be viewed in the target video. Optionally, the visualization material may comprise an image and/or text generated according to associated information of the target audio, which will be described in detail below.” Paragraph 0056-0058, 0122,0123, Fig. 4).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Snibbe to comprise generating visual content of an audio clip corresponding to clipping information. One would have been motivated to make such a combination to lower a threshold for video production, so that the user can conveniently achieve the audio sharing with the video content, without the need of shooting or uploading the video (Sun, paragraph 0059,0221).
Claims 14-20 are substantially the same as claims 1-3 and 5-6 and are therefore rejected under the same rationale as above.
Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Snibbe in view of Sun and LI, and further in view of Kim (US 20240231577 A1, hereinafter Kim).
As to dependent claim 10, Snibbe teaches the audio publishing method according to claim 1. Snibbe does not appear to expressly teach wherein acquiring an original audio material comprises:
displaying an upload page in response to an operation on a Create control of the user, wherein the upload page comprises a Video Upload control and an Audio Upload control;
displaying an audio upload interface in response to a selection on the Audio Upload control of the user; and
acquiring the original audio material through the upload interface.
Kim teaches wherein acquiring an original audio material comprises:
displaying an upload page in response to an operation on a Create control of the user, wherein the upload page comprises a Video Upload control and an Audio Upload control (“the electronic device 101 may upload the edited video and project to a shared video service-related device according to the request of a user at the same time as or after the video data is stored through the export menu.” Paragraph 0110, “the operations and functions described below may be applied even when the user uploads original media and edits it for the first time.” Paragraph 0144);
displaying an audio upload interface in response to a selection on the Audio Upload control of the user (“Content may include not only videos and images, but also various types of media objects, such as audio, voice, music, text, and graphics” paragraph 0046); and
acquiring the original audio material through the upload interface (“the user uploads original media and edits it for the first time.” Paragraph 0144).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Snibbe to comprise displaying an upload page in response to an operation on a Create control of the user, wherein the upload page comprises a Video Upload control and an Audio Upload control; displaying an audio upload interface in response to a selection on the Audio Upload control of the user; and acquiring the original audio material through the upload interface. One would have been motivated to make such a combination to improve the user experience of media publishing.
As to dependent claim 11, Snibbe teaches the audio publishing method according to claim 10. Snibbe further teaches wherein the audio upload interface comprises an input box for an RSS feed link to obtain the original audio material (in FIGS. 4A-4B, the first user interface prompts the user of client device 104 to choose an audio track for the media item).
Response to Arguments
Applicant’s prior art arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Huang et al. US 20220327154 A1 teaches displaying a first interface; playing a segment of a first playing object on the first interface, where the first interface includes prompt information, and the prompt information is used to prompt that the segment of the first playing object is being played.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHELET SHIBEROU whose telephone number is (571)270-7493. The examiner can normally be reached Monday-Friday 9:00 AM-5:00 PM Eastern Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kieu Vu can be reached at 571-272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MAHELET SHIBEROU/Primary Examiner, Art Unit 2171