DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
IDS Forms (SB08) submitted on 20 May 2024 and 18 June 2024 have been considered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because the broadest reasonable interpretation of a claim drawn to a computer-readable medium (also called a machine-readable medium, among other variations) typically covers both non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer-readable media, in particular when the specification is silent or open-ended (see applicant's own disclosure, ¶0029: "The functions of the above units are implemented by, for instance, arranging the acoustic device configured as a computer hardware and operating a processor of the acoustic device in accordance with a computer program." ¶0037: "It should be noted that the acoustic device 100 optionally includes a communication interface (not illustrated), through which music data stored in an external storage, a computer, or the like is imported and stored in the music data storage 140. In this case, the acoustic device 100 does not include the music data storage 140 but the external storage serves as the music data storage 140."). See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. The USPTO recognizes that applicants may have claims directed to computer-readable media that cover signals per se. In an effort to assist the patent community in overcoming a rejection or potential rejection under 35 U.S.C. § 101 in this situation, the USPTO suggests the following approach: a claim drawn to such a computer-readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments, and thereby avoid a rejection under 35 U.S.C. § 101, by adding the limitation "non-transitory" to the claim.
The specification or claims must be amended to limit the computer-readable storage medium to non-transitory embodiments and to state the exclusion of transitory signals (see Official Gazette Notice 1351 OG 212, dated February 23, 2010).
Claim 9 recites "a non-transitory computer-readable storage medium storing a program …". However, the applicant's own disclosure is silent regarding a "non-transitory computer-readable storage medium"; indeed, the disclosure is silent regarding "computer-readable media" altogether. The Examiner suggests amending the disclosure to recite "a non-transitory computer-readable storage medium …" to overcome the rejection under 35 U.S.C. § 101.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“A reservation unit” in Claims 1, 3, 4, 6, and 8 [¶0052; Fig. 1: 132, ¶0066; Fig. 5].
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 6, and 9-10 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Morsey et al. (US #2023/0335091).
Regarding Claim 1, Morsey discloses an acoustic device (Fig. 1: 35) configured to mix a first music piece (Fig. 1: 12 A) and a second music piece (Fig. 1: B), the acoustic device comprising:
a reservation unit (Morsey playback unit; playback function; ¶0019 discloses recombining the at least three decomposed tracks in such a manner that a user does not need to select individual volume levels for each of the three or more decomposed tracks, but instead is able to control the recombination result and thus the playback of the decomposed tracks by only setting first and second volume levels. Controlling first and second volume levels may be easily achieved by using two separate control elements [such as buttons or faders]) configured to set a playback start point of the second music piece (Morsey Figs. 4-5, ¶0032 disclose receiving mixed input data, decomposing the mixed input data, generating and playing output data are carried out in a continuous process. This means that processing of the audio data from input [receiving mixed input data] to output [playing output data] is carried out continuously, or on the fly, i.e. without substantial time delay. For example, playback of decomposed audio data can be started within a time period smaller than 2 seconds, preferably smaller than 150 milliseconds, most preferably smaller than 50 milliseconds, from the receipt of the mixed input data. ¶0046 discloses another advantage of segment-wise decomposition is that playback of the output data can be started at any desired position [at any desired playing time]. …Thus, it is possible to quickly and precisely jump forward and backward to arbitrary positions within an audio file with low or even without any recognizable delay, independent of the size and playback duration of the entire audio file. 
¶0052 discloses in particular methods which process segments of the input audio file to decrease processing time to a level suitable for a live performance it would in principle be possible to start playback of the decomposed tracks at any desired position [time position within the output track] by processing a segment of specified size which starts right at the desired playing position. ¶0053 discloses … receiving a play position command from a user representing a user's command to play the input audio file from a certain start play position, identifying a first segment out of the plurality of predetermined segments such that the start play position is within the time interval …. ¶0139 discloses first and second decomposed tracks may be visualized in an overlaid manner such as to share a common baseline/time axis, but using different signal axes and/or different drawing styles so as to be visually distinguishable from one another, see Figs. 4-5); and
a display controller configured to display a first screen (Morsey Figs. 4-6: display) on which a first image indicating a playback status of the second music piece and time information of the playback start point is moved based on a second image indicating a current playback point on a display device when a reservation set by the reservation unit is executed (Morsey see Figs. 4-5, cursor 55-1 or 55-2, time marks of the music piece 58; ¶0139 discloses the first waveform section 52-1 may display a zoom-out version 55-1 of the first and second waveforms, in which first and second waveforms are displayed in an overlaid manner using a common baseline that is scaled to view a time interval containing the current play position and preferably having a size corresponding to the length of an input audio file, for example the whole song A and/or a size between 60 seconds and 20 minutes. ¶0140 discloses likewise, device 10 can be configured to display a second waveform section 52-2 in which waveforms representing the third and fourth decomposed tracks are displayed in the same manner [as described above] for the first waveform section 52-1 and the first and second decomposed tracks, in particular by means of a zoom-in version 53-2 and a zoom-out version 55-2. ¶0169 discloses an audio player including a recompose controlling section 24 having a control element 26-13 for controlling the first and second volume levels of respective first and second decomposed tracks [here decomposed vocal track and decomposed instrumental track] obtained from one audio file, and optionally a display region 66 displaying an overlaid representation of the first and second decomposed tracks, Fig. 14).
Regarding Claim 9 (a non-transitory computer-readable storage medium), Claim 9 is rejected for the same reasons as set forth for Claim 1 (Morsey ¶0111 discloses audio files A and B may be provided, downloaded or streamed from a remote server via Internet or other network connection, or may be provided by a local computer or a storage device integrated in the device 10 itself).
Claim 10 is rejected for the same reasons as set forth for Claim 1.
Regarding Claim 2, Morsey discloses the acoustic device according to claim 1,
wherein the first image comprises an image indicating a music waveform of the second music piece and the playback start point provided on the music waveform (Morsey Figs. 4-5, cursor 55-1 or 55-2, time marks of the music piece 58. ¶0032 discloses receiving mixed input data, decomposing the mixed input data, generating and playing output data are carried out in a continuous process. This means that processing of the audio data from input [receiving mixed input data] to output [playing output data] is carried out continuously, or on the fly, i.e. without substantial time delay. For example, playback of decomposed audio data can be started within a time period smaller than 2 seconds, preferably smaller than 150 milliseconds, most preferably smaller than 50 milliseconds, from the receipt of the mixed input data. ¶0046 discloses another advantage of segment-wise decomposition is that playback of the output data can be started at any desired position [at any desired playing time]).
Regarding Claim 3, Morsey discloses the acoustic device according to claim 1,
wherein the reservation unit is configured to set the playback start point of the second music piece while the first music piece is being played (Morsey ¶0113 discloses device 10 further comprises a recompose controlling section 24 including at least one recompose control element 26, for example a first control element 26-1, a second recompose control element 26-2 and a mix control element 28. Recompose controlling section 24 may further comprise a first play control element 30-1 and a second play control element 30-2 for starting or stopping playback of audio signals originating from the first or second mixed input data, respectively. ¶0128 discloses in the recombination unit 32 the first and second decomposed tracks are then recombined with one another in a first recombination stage 32-1 based on the volume levels set by the first control element 26-1 to obtain a recombination A' from the first input audio file A. Further, the third and fourth decomposed tracks may be recombined in a second recombination stage 32-2 of the recombination unit 32 according to the third and fourth volume levels set by the second control element 26-2 such as to obtain a second recombination B' from the second input audio file B. Figs. 4-5).
Regarding Claim 6, Morsey discloses the acoustic device according to claim 1, wherein the reservation unit is configured to set a mixing point on each of the first music piece and the second music piece and set the playback start point corresponding to the mixing point (Morsey ¶0053 discloses receiving an input audio file having a predetermined file size and a predetermined playback duration, which contains audio data to play the mixed input data, partitioning the input audio file into a plurality of segments in succession, which contain audio data to play the mixed input data within a plurality of time intervals following each other, receiving a play position command from a user representing a user's command to play the input audio file from a certain start play position, identifying a first segment out of the plurality of predetermined segments such that the start play position is within the time interval which corresponds to the first segment, decomposing the first segment of the input audio file [segment to be processed first, not necessarily starting segment of the input audio file] to obtain a first segment of the first decomposed track and optionally a first segment of the second decomposed track, generating a first segment of the output data based on the first segment of the first decomposed track, preferably by recombining at least the first segment of the first decomposed track at the first volume level with the first segment of the second decomposed track at the second volume level, and playing the first segment of the output data starting at the start play position, which is a play position later than or equal to the start of the time interval of the first segment of the output data. For clarity, the first segment is not necessarily the starting segment of the audio file, but a segment containing the desired start play position and therefore to be decomposed first in the process. 
¶0094 discloses there may be provided a device for representing audio data, for example a display device of a computer, said audio data comprising at least a first track and a second track, which are adapted to be played in a mix, said device comprising a first waveform generator generating a first waveform representative of the first track, a second waveform generator generating a second waveform representative of the second track, and an overlay-waveform generator generating an overlay-waveform showing the first waveform and the second waveform in an overlaid manner using one single baseline, wherein the waveforms are overlaid by the overlay-waveform generator using different signal axes and/or different drawing styles such as to be visually distinguishable from one another).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4-5 and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Morsey et al. (US #2023/0335091) in view of Takahashi (US #2022/0394354).
Regarding Claim 4, Morsey discloses the acoustic device according to claim 1, but may not explicitly disclose wherein the display controller is configured to switchably display, on the display device, the first screen and a second screen on which the reservation unit sets the playback start point of the second music piece.
However, Takahashi (title, abstract, Figs. 1-7) teaches wherein the display controller (Takahashi Fig. 2: display controller 74, judging unit 73) is configured to switchably display, on the display device, the first screen and a second screen on which the reservation unit sets the playback start point of the second music piece (Takahashi ¶0116 discloses in response to the operation type determined by the judging unit 73 [i.e., reservation unit], the display controller 74 changes the display contents [i.e. switches] of the main display unit 4 to the display contents corresponding to the operation type and displays on the sub display unit 5 at least a part of the display contents having been displayed on the main display unit 4. ¶0118 discloses the playback controller 72 for controlling the playback of the music data is further provided. The display controller 74 displays the first playback screen S1 including the playback state information showing the playback state of the music data on the main display unit 4. When the judging unit 73 judges that the operation type is the track selection operation, the display controller 74 changes the display contents of the main display unit 4 from the first playback screen S1 to the track selection screen SS and displays the third playback screen S3, which includes at least a part of the playback state information, on the sub display unit 5. ¶0128 discloses the display contents of the main display unit 4 can be switched from the track selection screen SS to the first playback screen S1 simultaneously with the completion of the track selection operation. ¶0132 discloses however, the display controller 74 optionally displays all of the display contents having been displayed on the main display unit 4 on the sub display unit 5. 
¶0133 discloses further, in the exemplary embodiment, the display contents of the main display unit 4 [the first display unit] and the display contents of the sub display unit 5 [the second display unit] are switched when it is judged that the user's operation in a form of the track selection operation is performed. Refer to Figs. 1-3 and 5).
Morsey and Takahashi are analogous art, as both pertain to communicating with multimedia devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Morsey's device, which supports the creative contribution of a disc jockey during a live show, with Takahashi's display switching, so that, while performing the track selection operation by checking the track selection screen SS displayed on the main display unit 4, the user can recognize the playback state of the music data by checking at least a part of the playback state information displayed on the sub display unit 5 (as taught by Takahashi, ¶0119); accordingly, the playback state of the music data can be controlled while the track selection operation is performed (Takahashi, ¶0119).
Regarding Claim 5, Morsey in view of Takahashi discloses the acoustic device according to claim 4. But Morsey may not explicitly disclose wherein the display controller is configured to: on the first screen, display the first image in motion on the display device since before the current playback point of the second music piece reaches the playback start point; and on the second screen, display an image indicating a playback status of the first music piece in motion and the image indicating the playback status of the second music piece in motion on the display device in response to a user's operation.
However, Takahashi (title, abstract, Figs. 1-7) teaches wherein the display controller (Takahashi Fig. 2: display controller 74, judging unit 73) is configured to:
on the first screen, display the first image in motion on the display device since before the current playback point of the second music piece reaches the playback start point (Takahashi ¶0120 discloses the playback state information includes the playback time information including at least one of the playback elapsed time or the playback remaining time of the music data. The display controller 74 displays the third playback screen S3 including the playback time information on the sub display unit 5 when the judging unit 73 judges that the operation type is the track selection operation. ¶0121 discloses according to the above arrangement, even when a large part of the display area of the main display unit 4 is occupied by the track selection screen SS, the user can recognize the time until the end of the currently played music data by seeing the playback time information displayed on the sub display unit 5. In other words, the user can recognize the playback time information while performing the track selection operation. Accordingly, the user can recognize the time available for the selection of the next music data to be played, thereby facilitating the track selection operation during a period until the end of the playback of the music data. Refer to Figs. 1-3 and 5); and
on the second screen, display an image indicating a playback status of the first music piece in motion (Takahashi ¶0088 discloses an area SS4 showing information on the music data loaded in the left deck 31 and currently played and is provided at a lower left side of the track selection screen SS [Fig. 5]. The area SS4 includes a time display area SS41, a playback position display area SS42, a marker MK and a BPM display area SS44) and the image indicating the playback status of the second music piece in motion on the display device in response to a user's operation (Takahashi ¶0092 discloses an area SS5 showing information on the music data loaded in the right deck 31 and currently played is provided at a lower right side of the track selection screen SS. The area SS5 includes a time display area SS51, a playback position display area SS52, a marker MK and a BPM display area SS54 [Fig. 5]. ¶0123 discloses since the value indicating the playback time information [i.e. the numerical value indicating at least one of the playback elapsed time or the playback remaining time] is displayed, a user can accurately recognize the at least one of the times. ¶0127 discloses the display controller 74 displays the first playback screen S1 on the main display unit 4 when the judging unit 73 judges that the track selection operation ends. ¶0128 discloses the display contents of the main display unit 4 can be switched from the track selection screen SS to the first playback screen S1 simultaneously with the completion of the track selection operation. ¶0133 discloses further, in the exemplary embodiment, the display contents of the main display unit 4 [the first display unit] and the display contents of the sub display unit 5 [the second display unit] are switched when it is judged that the user's operation in a form of the track selection operation is performed. 
However, the display contents of the main display unit 4 and the display contents of the sub display unit 5 are optionally switched when it is judged that a setting operation on the acoustic device 1 or an operation for displaying a menu screen is performed. In other words, the user's operation for displaying the display contents having been displayed on the main display unit 4 on the sub display unit 5 is changeable as necessary. Refer to Figs. 1-3 and 5).
Morsey and Takahashi are analogous art, as both pertain to communicating with multimedia devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Morsey's device, which supports the creative contribution of a disc jockey during a live show, with Takahashi's display switching, so that, while performing the track selection operation by checking the track selection screen SS displayed on the main display unit 4, the user can recognize the playback state of the music data by checking at least a part of the playback state information displayed on the sub display unit 5 (as taught by Takahashi, ¶0119); accordingly, the playback state of the music data can be controlled while the track selection operation is performed (Takahashi, ¶0119).
Regarding Claim 7, Morsey discloses the acoustic device according to claim 1, but may not explicitly disclose wherein the display controller is configured to, on the display device, fixedly display the second image and display the first image in a manner moving based on the second image in accordance with the current playback point on the first screen.
However, Takahashi (title, abstract, Figs. 1-7) teaches wherein the display controller (Takahashi Fig. 2: display controller 74, judging unit 73) is configured to, on the display device, fixedly display the second image and display the first image in a manner moving based on the second image in accordance with the current playback point on the first screen (Takahashi ¶0116 discloses in response to the operation type determined by the judging unit 73 [i.e., reservation unit], the display controller 74 changes the display contents [i.e. switches] of the main display unit 4 to the display contents corresponding to the operation type and displays on the sub display unit 5 at least a part of the display contents having been displayed on the main display unit 4. ¶0118 discloses the playback controller 72 for controlling the playback of the music data is further provided. The display controller 74 displays the first playback screen S1 including the playback state information showing the playback state of the music data on the main display unit 4. When the judging unit 73 judges that the operation type is the track selection operation, the display controller 74 changes the display contents of the main display unit 4 from the first playback screen S1 to the track selection screen SS and displays the third playback screen S3, which includes at least a part of the playback state information, on the sub display unit 5).
Morsey and Takahashi are analogous art, as both pertain to communicating with multimedia devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Morsey's device, which supports the creative contribution of a disc jockey during a live show, with Takahashi's display switching, so that, while performing the track selection operation by checking the track selection screen SS displayed on the main display unit 4, the user can recognize the playback state of the music data by checking at least a part of the playback state information displayed on the sub display unit 5 (as taught by Takahashi, ¶0119); accordingly, the playback state of the music data can be controlled while the track selection operation is performed (Takahashi, ¶0119).
Regarding Claim 8, Morsey discloses the acoustic device according to claim 1, but may not explicitly disclose wherein the display controller is configured to display the first image with a display mode of a part of the first image indicating the playback status of the second music piece being changed on the first screen on the display device when the reservation set by the reservation unit is executed and the current playback point reaches the playback start point.
However, Takahashi (title, abstract, Figs. 1-7) teaches wherein the display controller (Takahashi Fig. 2: display controller 74, judging unit 73) is configured to display the first image with a display mode of a part of the first image indicating the playback status of the second music piece being changed on the first screen on the display device when the reservation set by the reservation unit is executed and the current playback point reaches the playback start point (Takahashi ¶0120 discloses the playback state information includes the playback time information including at least one of the playback elapsed time or the playback remaining time of the music data. The display controller 74 displays the third playback screen S3 including the playback time information on the sub display unit 5 when the judging unit 73 judges that the operation type is the track selection operation. ¶0121 discloses according to the above arrangement, even when a large part of the display area of the main display unit 4 is occupied by the track selection screen SS, the user can recognize the time until the end of the currently played music data by seeing the playback time information displayed on the sub display unit 5. In other words, the user can recognize the playback time information while performing the track selection operation. Accordingly, the user can recognize the time available for the selection of the next music data to be played, thereby facilitating the track selection operation during a period until the end of the playback of the music data. Refer to Figs. 1-3 and 5).
Morsey and Takahashi are analogous art, as both pertain to communicating with multimedia devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Morsey's device, which supports the creative contribution of a disc jockey during a live show, with Takahashi's display switching, so that, while performing the track selection operation by checking the track selection screen SS displayed on the main display unit 4, the user can recognize the playback state of the music data by checking at least a part of the playback state information displayed on the sub display unit 5 (as taught by Takahashi, ¶0119); accordingly, the playback state of the music data can be controlled while the track selection operation is performed (Takahashi, ¶0119).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YOGESHKUMAR G PATEL whose telephone number is (571)272-3957. The examiner can normally be reached 7:30 AM-4 PM PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen can be reached at (571) 272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YOGESHKUMAR PATEL/Primary Examiner, Art Unit 2691