DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
2. Applicant's arguments received 02/13/2026 have been considered but are moot in view of the new ground(s) of rejection. A detailed response is given in sections 3-4 set forth below in this Office action.
Claim Rejections - 35 USC § 103
3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 1-4 and 6-20 are rejected under 35 U.S.C. 103 as being unpatentable over DENG et al. (CN 111131867 A, machine translation) in view of Yan (CN 102456340 A, machine translation) and CAMERON (US 20140365887 A1).
Regarding claims 1, 13 and 17, DENG discloses a method/system, including computer programs encoded on a storage device, for implementing the method/system, said system comprising a computer device having a memory configured to store executable instructions, and a processor configured to, when executing the executable instructions stored in the memory, implement a song processing method (see Fig. 2 and related text) including:
presenting a song recording interface (e.g., the first terminal of the first user) in response to a singing instruction triggered in a session interface of a group chat session (para. 0027, 0042-0043, 0052);
recording a song in response to a song recording instruction triggered in the song recording interface (para. 0029-0030);
synthesizing the recorded song with a music sound effect (e.g., an accompaniment audio) corresponding to the recorded song (para. 0114-0120: the first terminal 10 displays a lead-singer interface for a target song; the first song segment is a song segment sung by the lead singer and has a preset start time point, for example, the beginning of the song, or the first user may select any song time point from the target song; when the user wants to begin singing the first song segment, the recording option can be triggered to activate the audio recording function and record the audio sung by the user; after the recording is completed, the first terminal 10 synthesizes the recorded audio with the accompaniment audio of the first song segment to obtain the sung audio of the first song segment);
transmitting, in response to a song transmitting instruction, a target song obtained by processing the song based on the determined music sound effect to members of the group chat session, and
presenting a session message corresponding to the target song in the session interface, and presenting a pick-up singing function item corresponding to the target song in the session interface, the pick-up singing function item being used for implementing pick-up singing of the target song by a member of the group chat session (para. 0130, 0132, 0134-0135, 0138, 0140, 0142-0150: the second terminal can not only play the accompaniment audio of the second song segment but also sequentially display the lyrics of the second song segment in the order of their playback time, so that the second user can sing the second song segment based on the accompaniment audio and lyrics of the second song segment; the second terminal can further activate the audio recording function so that the audio sung by the second user is recorded; the server connects the sung audio of the first song segment and the sung audio of the second song segment to obtain the sung audio of the target song, and distributes the sung audio of the target song to the lead-singer interface or the pick-up singing interface, so that the sung audio of the target song can be played from the singing interface; see also Figs. 1-16 and related text).
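For illustration only (a hypothetical sketch, not part of DENG's disclosure or the record), the server-side step of connecting the sung audio of the first and second song segments into the sung audio of the target song amounts to concatenating the segment audio buffers in singing order:

```python
def connect_segments(sung_segments):
    """Concatenate the sung audio of each song segment, in singing
    order, into a single audio track for the target song."""
    target_audio = []
    for segment in sung_segments:
        target_audio.extend(segment)
    return target_audio

# Two hypothetical sung segments (audio samples as floats).
first_segment = [0.1, 0.2, 0.3]
second_segment = [0.4, 0.5]
target_song_audio = connect_segments([first_segment, second_segment])
```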
DENG does not mention explicitly: said step of synthesizing includes determining a reverberation effect and a corresponding reverberation effect image for the recorded song; wherein said pick-up singing function item is an icon presented next to the session message in the session interface; and wherein the session message has an icon for playing the target song in response to a user selection of the icon and the reverberation effect image.
Yan discloses an Internet-based karaoke method/system (Abstract; para. 0008), comprising: recording a song performed by a user (para. 0011, 0033: the vocal part sung by the karaoke user is recorded and buffered temporarily so that it can be mixed with the accompaniment of the song and the reverberation effect, forming an audio track of the song with both accompaniment and vocals); determining a reverberation effect (i.e., an audio effect that creates the sound of a space, such as a room, hall, or cathedral, by mimicking how sound waves bounce off surfaces and fade away, adding depth, fullness, and natural blending to instruments and vocals so that they sound as if they are in a real environment rather than a dry studio) corresponding to the song performed by the user, and synthesizing the recorded song with an accompaniment audio as well as the determined reverberation effect to generate a target song (para. 0033); and transmitting the target song to other Internet-connected members of a group karaoke session (para. 0034, 0039).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate into DENG Yan's teaching of determining a reverberation effect corresponding to the recorded song and synthesizing/mixing the recorded song with an accompaniment audio as well as the determined reverberation effect to generate the target song. Doing so would make the target song sound spacious, rich, and cohesive, as is well known in the field of music. Furthermore, it is deemed that the skilled person would conceive of and apply such a modification without needing inventive skill, but depending on practical considerations and according to the dictates of the circumstances, since adding acoustic effects such as reverberation during singing is a widely performed, well-known practice.
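For illustration only (a minimal hypothetical sketch, not drawn from Yan's disclosure or the record), the mixing step discussed above — applying a reverberation effect to the recorded vocal and combining it with the accompaniment audio — can be modeled as convolving the dry vocal with a room impulse response and then summing the two tracks sample by sample:

```python
def apply_reverb(signal, impulse_response):
    """FIR reverb: convolve the dry signal with a room impulse response."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def mix(vocal, accompaniment):
    """Sum two tracks sample by sample, zero-padding the shorter one."""
    n = max(len(vocal), len(accompaniment))
    v = vocal + [0.0] * (n - len(vocal))
    a = accompaniment + [0.0] * (n - len(accompaniment))
    return [x + y for x, y in zip(v, a)]

# Hypothetical data: a short impulse response (direct sound plus two
# decaying reflections), a recorded vocal, and an accompaniment track.
ir = [1.0, 0.0, 0.4, 0.0, 0.16]
vocal = [1.0, 0.0, 0.0, 0.0]
accompaniment = [0.5] * 8

wet_vocal = apply_reverb(vocal, ir)   # vocal with reverberation applied
target = mix(wet_vocal, accompaniment)  # synthesized target song audio
```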
The combination of DENG and Yan is silent on: said determining a reverberation effect comprises determining a corresponding reverberation effect image for the song; said pick-up singing function item is an icon presented next to the session message in the session interface; and the session message has an icon for playing the target song in response to a user selection of the icon and the reverberation effect image.
CAMERON discloses a computer-implemented method/system for generating an interactive platform of multimedia based on data associated with a user (Abstract), comprising: obtaining input data from a member of a group chat session comprising a session interface (para. 0013, 0015: “user may communicate with the platform server 105 via a device connected over a network 120 (e.g., the Internet)”, para. 0037), wherein said input data identifies an audio media (para. 0041, 0046: “the caption 309 may store audio input from the user”); determining a reverberation effect and a corresponding reverberation effect image for the audio media (para. 0030: “Similar to image processing effects, audio signal processing effects used by the collage generator component 215 may manipulate audio media to be used with (or instead of) manipulated imagery. Such effects may include delay, reverberation …”; para. 0031, 0049); presenting a graphical prompt or icon next to a session message in the session interface through which the user can interact with or manipulate the audio media (para. 0030-0032); wherein the session message has an icon for playing a target audio media in response to a user selection of the graphical prompt and the reverberation effect image (para. 0031: “Other outputs could be in the form of visible and audible presentations consisting of tables, graphs, and charts that identify user response characteristics and magnitudes. Visible media events may include moving images, animations, video, photos, drawings, textures, patterns, colors, shapes, and text”; see also para. 0048-0050, 0054-0055).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of DENG and Yan in view of CAMERON's teaching of the interactive platform, including manipulation of the source media before generating the collage by using digital effects techniques, to remedy the deficiency of DENG/Yan as recited in instant claims 1, 13 and 17. Doing so would make the interactive platform more user-friendly and provide the user with an immersive musical and audio experience (CAMERON, para. 0001-0003).
Regarding claim 2, DENG discloses: wherein before the presenting a song recording interface in response to a singing instruction triggered in a session interface, the method further comprises: presenting the session interface (para. 0105-0107) and presenting a voice function item (note, the term “a voice function item” is given a broad interpretation) in the session interface (e.g., the lyrics of the first song segment displayed in chronological order of playback time); presenting at least two voice modes (a voice operation and a gesture operation) in response to a trigger operation on the voice function item (para. 0114); and receiving a selection operation for the voice mode as a singing mode, and triggering the singing instruction (para. 0114-0115).
Regarding claims 3, 14 and 18, the combination of DENG and Yan is silent on: wherein the determining a reverberation effect corresponding to the recorded song comprises: presenting a reverberation effect selection function item in the song recording interface; presenting a reverberation effect selection interface in response to a trigger operation on the reverberation effect selection function item; presenting at least two reverberation effects in the reverberation effect selection interface; and determining a corresponding target reverberation effect as the reverberation effect corresponding to the recorded song in response to a reverberation effect selection instruction triggered for a target reverberation effect.
However, the Examiner takes official notice that a karaoke system comprising a control panel, either physical (on a mixer or effects processor) or virtual (in software or an app), that allows a user to choose and adjust the type and intensity of a reverberation effect applied to a singer's voice is well known in the art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate such a well-known feature into the combination of DENG/Yan to arrive at the claimed invention. One of ordinary skill in the art would have recognized that the results of such a combination were predictable, assisting the user in optimizing the performance output. The mere application of a known technique to a specific instance by those skilled in the art would have been obvious.
Regarding claim 4, DENG discloses: wherein the recording a song in response to a song recording instruction triggered in the song recording interface comprises: obtaining a song recording background image (e.g., the lyrics of the first song segment displayed in chronological order of playback time) corresponding to said sound effect (e.g., an accompaniment audio) (para. 0107, 0114-0115); using the song recording background image as a background of the song recording interface, presenting a song recording button (e.g., a touch screen operation) in the song recording interface, and recording the song in response to a press operation on the song recording button (para. 0121, 0208; see also the “recording” button shown in Fig. 4).
DENG does not mention that said sound effect is a reverberation effect. However, Yan teaches the reverberation effect (see the discussion for claim 1 above). As such, the combination of DENG/Yan renders obvious the claim limitation directed to the reverberation effect.
As to the feature of finishing recording the song when the press operation is stopped, to obtain the recorded song, the Examiner takes official notice that a karaoke system comprising a user interface (hardware or virtual) that allows the user to control recording of the performance such that recording finishes when a press operation is stopped is well known in the art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate such a well-known feature into the combination of DENG/Yan to arrive at the claimed invention. One of ordinary skill in the art would have recognized that the results of such a combination were predictable, facilitating the user's operation of the system. The mere application of a known technique to a specific instance by those skilled in the art would have been obvious.
Regarding claim 6, DENG discloses: wherein the presenting a session message corresponding to the target song in the session interface comprises: obtaining a song poster (a graphic identifier) corresponding to the target song (para. 0050, 0127); and using the song poster as a background of a message card of the session message, and presenting the session message corresponding to the target song in the session interface through the message card (para. 0050, 0057, 0111, 0127).
Regarding claims 7, 15 and 19, DENG discloses: wherein the presenting a session message corresponding to the target song in the session interface comprises: matching (e.g., via the song identifier) the target song with a song in a song library, to obtain a matching result (para. 0099, 0111); determining, when the matching result represents that there is a song matching the target song, song information (e.g., the first song segment information displayed on the lead singer interface) of the target song according to the song matching the target song (para. 0111, 0118, 0141); and presenting the session message that carries the song information and corresponds to the target song in the session interface (para. 0111, 0118, 0141).
Regarding claim 8, DENG discloses: wherein after the presenting a pick-up singing function item corresponding to the target song, the method further comprises: presenting a recording interface (e.g., second terminal 20 in Fig. 2) of a pick-up song in response to a trigger operation for the pick-up singing function item (para. 0103, 0130); obtaining, when the target song is a song episode, a melody (e.g., the accompaniment audio in original vocal mode) of a song corresponding to the song episode, and playing at least a part of the melody of the song episode (para. 0113-0115); receiving a song recording instruction during playing of the at least a part of the melody (see discussion of step 302 in Fig. 3); stopping playing the at least a part of the melody, and playing a melody of a pick-up singing part, in response to the song recording instruction (see discussion of steps 303/304 in Fig. 3); and recording a song based on the melody of the pick-up singing part, to obtain a recorded pick-up song (see discussion of steps 305/306/307 in Fig. 3).
Regarding claim 9, the combination of DENG/Yan/CAMERON teaches the method of claim 1. DENG discloses: wherein after the presenting a pick-up singing function item corresponding to the target song, the method further comprises: presenting a recording interface of a pick-up song in response to a trigger operation for the pick-up singing function item (see discussion of steps 305/306 in Fig. 3); determining, when the pick-up song is recorded based on the recording interface of the pick-up song, a position of the recorded pick-up song in a song corresponding to the target song, the position being used as a start position of pick-up singing, transmitting a session message that carries the position and corresponds to the pick-up song, and presenting the session message of the pick-up song in the session interface, the session message of the pick-up song indicating the start position of pick-up singing (see discussion of step 307 in Fig. 3). As such, the combination of DENG/Yan/CAMERON renders the claimed invention obvious.
Regarding claims 10, 16 and 20, DENG discloses: presenting at least two pick-up singing modes (a single-second-user mode vs. a multiple-user sing-along mode) in the group chat session interface (para. 0077, 0099, 0165-0167); and determining, in response to a pick-up singing mode selection instruction triggered for a target pick-up singing mode, a selected pick-up singing mode as a target pick-up singing mode, the pick-up singing mode being used for indicating a session member having a pick-up singing permission, wherein the presenting a pick-up singing function item corresponding to the target song comprises: presenting, when it is determined that there is the pick-up singing permission according to the target pick-up singing mode, the pick-up singing function item corresponding to the target song (para. 0077, 0099, 0130, 0132, 0134-0135, 0138, 0140, 0142-0150, 0165-0167). As such, the combination of DENG/Yan/CAMERON renders the claimed invention obvious.
Regarding claim 11, DENG discloses: wherein after the presenting a pick-up singing function item corresponding to the target song, the method further comprises: receiving and presenting a session message corresponding to a pick-up song, the session message carrying prompt information indicating that pick-up singing is completed; and presenting a details page in response to a viewing operation for the prompt information, the details page being used for sequentially playing, when a trigger operation of playing a song is received, a song recorded by a session member participating in pick-up singing in an order of participating in pick-up singing (para. 0077, 0099, 0130, 0132, 0134-0135, 0138, 0140, 0142-0150, 0165-0167; see also Fig. 15 and related text). As such, the combination of DENG/Yan/CAMERON renders the claimed invention obvious.
Regarding claim 12, DENG discloses: presenting a chorus function item corresponding to the target song; the chorus function item being used for presenting, when a trigger operation for the chorus function item is received, a recording interface of a chorus song, and recording a song the same as the target song based on the recording interface of the chorus song (see discussion of the multiple users sing along mode in claim 10 above).
Allowable Subject Matter
5. Claim 5 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Reasons for Allowance
6. The following is a statement of reasons for the indication of allowable subject matter:
The primary reason for the allowance of claim 5 is the inclusion of the limitations: wherein the presenting a session message corresponding to the target song in the session interface comprises: obtaining a bubble style corresponding to the reverberation effect; determining, according to a duration of the target song, a bubble length matching the duration; and presenting, based on the bubble style and the bubble length, the session message corresponding to the target song by using a bubble card. It is these limitations, in combination with the rest of the limitations recited in independent claim 1, that have not been found, taught, or suggested by the prior art of record, and that distinguish claim 5 over the prior art.
Conclusion
7. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Contact Information
8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIANCHUN QIN whose telephone number is (571)272-5981. The examiner can normally be reached 9AM-5:30PM EST M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dedei Hammond can be reached at (571)270-7938. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JIANCHUN QIN/Primary Examiner, Art Unit 2837