DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Arguments
Claims 1-5 and 7-16 are pending in this application and are considered below. Claims 6 and 17 are cancelled by Applicant.
Applicant’s arguments, see pages 8-14, filed February 12, 2026, with respect to claims 1-5 and 7-16, have been fully considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-4, 7-9, 11, and 14-16 are rejected under 35 U.S.C. 103 as unpatentable over Itoyama et al. (US 20100131086 A1, May 27, 2010), hereinafter Itoyama, in view of Furukawa (US 20030101862 A1, June 5, 2003), hereinafter Furukawa, and further in view of Ura et al. (US 5824930 A, October 20, 1998), hereinafter Ura.
Regarding claim 1, Itoyama teaches a signal processing apparatus comprising: a sound source separation unit configured to perform sound source separation on a mixed sound signal obtained by mixing a plurality of sound source signals (Itoyama ¶0001: "The present invention relates to a system, a method, and a program for sound source separation that enable separation of an instrument sound signal corresponding to each musical instrument from an input audio signal containing a plurality of types of instrument sound signals. The present invention relates in particular to a system, a method, and a computer program for sound source separation that separate an 'audio signal of sound mixtures obtained by playing a plurality of musical instruments' containing both harmonic-structure and inharmonic-structure signal components into sound sources for respective instrument parts."); and a sound source type determination unit configured to determine a type of a predetermined sound source signal obtained by the sound source separation (Itoyama ¶0084: "Fundamentally, sound source separation includes a step of separating and extracting sound sources (instrument sound signals) from a sound mixture, and a sound source estimation step of estimating what musical instruments correspond to the separated sound sources (instrument sound signals). The latter step... is implemented by estimating sound sources used in a musical piece played, for example a piano… given an ensemble audio signal as an input signal.").
Itoyama does not explicitly disclose an output destination control unit configured to output the predetermined sound source signal to a corresponding output device in a plurality of output devices on a basis of a determination result of the sound source type determination unit, wherein the plurality of output devices includes at least one automatic playing musical instrument, wherein the plurality of output devices includes a speaker separate from the automatic playing musical instruments, wherein the plurality of output devices includes a headphone separate from the automatic playing musical instruments, wherein the output destination control unit is connected to the plurality of output devices by wire or by wireless connections.
However, Furukawa teaches an output destination control unit configured to output the predetermined sound source signal to a corresponding output device in a plurality of output devices (Furukawa ¶0093: "the music player synchronously reproduces the two parts through the automatic player piano 15 and speakers 7") on a basis of a determination result of the sound source type determination unit (Furukawa ¶0058: "When a user instructs the music player to reproduce an ensemble through the manipulating panel 5… The controller 4 checks every event code D3 to see whether or not the event code D3 is representative of a piece of music data or the initiation of reading out the time series audio data. When the controller 4 acknowledges that the event code D3 is representative of a piece of music data, the controller 4 supplies the event code D3 to the automatic playing controller 9." Furukawa ¶0065: "The automatic playing controller 9 determines trajectories for the plungers of the solenoid-operated key/pedal actuators 14 a associated with the keys/pedals to be moved on the basis of the event codes D3 representative of the note-on… The depressed keys give rise to free rotation of the hammers, and the hammers strike the strings at the end of the free rotation. The strings vibrate, and generate acoustic piano tones."), wherein the plurality of output devices includes at least one automatic playing musical instrument (Furukawa ¶0093: "the music player synchronously reproduces the two parts through the automatic player piano 15 and speakers 7"), wherein the plurality of output devices includes a speaker separate from the automatic playing musical instruments (Furukawa ¶0093: "the music player synchronously reproduces the two parts through the automatic player piano 15 and speakers 7").
Furthermore, Ura teaches that the plurality of output devices includes a headphone separate from the automatic playing musical instruments (Ura col. 8, lines 12-19: "The electronic system 12 includes a plurality of key sensors 12a respectively associated with the black and white keys 10c/10d for monitoring the key motions, a plurality of solenoid-operated actuator units 12b respectively provided beneath the black and white keys 10c/10d, a controller 12c connected to the key sensors 12a and the solenoid-operated actuator units 12b and a headphone 12d and/or a speaker system 12e."), wherein the output destination control unit is connected to the plurality of output devices by wire or by wireless connections (such connections are inherent, because each output device must be connected to the controller by wire or wirelessly in order to receive the signal it reproduces).
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing apparatus of Itoyama by adding the output destination control unit and output devices of Furukawa and Ura to create a music player for ensemble between different sorts of sound sources (Furukawa ¶0001).
Regarding claim 3, Itoyama (in view of Furukawa and further in view of Ura) teaches a signal processing apparatus comprising the features of claim 1.
Furukawa further teaches that the output destination control unit acquires reproduction apparatus information including at least a type of the output device, and decides an output device that outputs the predetermined sound source signal on a basis of the type of the predetermined sound source signal and the type of the output device indicated by the reproduction apparatus information (Furukawa ¶0058: "When a user instructs the music player to reproduce an ensemble through the manipulating panel 5… The controller 4 checks every event code D3 to see whether or not the event code D3 is representative of a piece of music data or the initiation of reading out the time series audio data. When the controller 4 acknowledges that the event code D3 is representative of a piece of music data, the controller 4 supplies the event code D3 to the automatic playing controller 9." Furukawa ¶0065: "The automatic playing controller 9 determines trajectories for the plungers of the solenoid-operated key/pedal actuators 14 a associated with the keys/pedals to be moved on the basis of the event codes D3 representative of the note-on… The depressed keys give rise to free rotation of the hammers, and the hammers strike the strings at the end of the free rotation. The strings vibrate, and generate acoustic piano tones.").
Regarding claim 4, Itoyama (in view of Furukawa and further in view of Ura) teaches a signal processing apparatus comprising the features of claim 1.
Itoyama further teaches that a plurality of sound source signals is obtained by the sound source separation (Itoyama ¶0056: "single tones (equivalent to notes on a musical score) of each instrument part in the SMF are completely synchronized, in the onset time (time at which each sound is produced) and the duration, with single tones of each instrument part in the actually input audio signal of a musical piece").
Furukawa further teaches that the output destination control unit outputs each of the plurality of the sound source signals to a corresponding output device (Furukawa ¶0059: "The third major task is to produce an analog audio signal from a digital audio signal. The digital audio signal is supplied from the tone generator for ensembles 8. The analog audio signal is supplied from the controller 4 to the mixer 13.").
Regarding claim 7, Itoyama (in view of Furukawa and further in view of Ura) teaches a signal processing apparatus comprising the features of claim 1.
Ura further suggests that the output device includes an automatic playing musical instrument (Ura abstract: "The automatic player piano is equipped with solenoid-operated actuators beneath a keyboard, and a controller selectively energizes the solenoid-operated actuators for generating acoustic sounds on the basis of music data codes representative of an original performance.") and a speaker (Ura col. 27, line 60 to col. 28, line 2: "While the player is practicing the fingering on the keyboard 11a, the controller 12c produces the music data codes through the execution of the A/D interruption routine program, the main routine program, the sub-routine programs and the timer interruption routine program, and produces the audio signal from the music data codes. The audio signal is supplied to the headphone 12d and/or the speaker system 12e, and the headphone 12d and/or the speaker system 12e generates the electronic sounds corresponding to the depressed keys 10c/10d."), and the output destination control unit outputs the predetermined sound source signal to the speaker in a case where the automatic playing musical instrument that outputs the predetermined sound source signal does not exist (Ura col. 1, lines 41-53: "However, when the stopper is changed to the blocking position, the hammer stopper prevents the strings from the hammers, and the hammers rebound on the hammer stopper before a strike at the string. For this reason, the silent piano does not generate an acoustic sound. The key sensors and the controller are incorporated in the electronic sound system. The key sensors monitors the key motions, and the controller generates the music data codes. The music data codes are immediately supplied to a tone generator, and the tone generator tailors an audio signal. The audio signal is, by way of example, supplied to a headphone, and produces electronic sounds in synchronism with the fingering on the keyboard.").
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing apparatus of Itoyama by adding the signal routing of Ura to enable a player to silently practice fingering on a keyboard (Ura col. 27, lines 36-37).
Regarding claim 8, Itoyama (in view of Furukawa and further in view of Ura) teaches a signal processing apparatus comprising the features of claim 7.
Ura further teaches that the output destination control unit outputs a plurality of sound source signals (Ura col. 10, lines 11-14: "The tone generator 12ai has sixteen channels, and sixteen electronic sounds are concurrently produced through the headphone 12d and/or the speaker system 12e at the maximum.") from the speaker in a case where there is the plurality of the sound source signals in which the automatic playing musical instrument to be output does not exist (Ura col. 28, lines 14-16: "when the cushion units 11b is in the blocking position BP, the electronic sounds are produced through the headphone 12d/speaker system 12e").
Regarding claim 9, Itoyama (in view of Furukawa and further in view of Ura) teaches a signal processing apparatus comprising the features of claim 8.
Ura further teaches that an output level of each of the plurality of the sound source signals output from the speaker can be set (Ura col. 10, lines 9-11: "The amplitude of the audio signal is controlled on the basis of the volume specified through the manipulation of the switch on the panel 12af.").
Regarding claim 11, Itoyama (in view of Furukawa and further in view of Ura) teaches a signal processing apparatus comprising the features of claim 4.
Furukawa further teaches a synchronization adjustment unit (Furukawa ¶0080: "The part reproduced through the automatic player piano 15 is delayed for the part produced through the speakers 7. The adjuster 241 converts the time lug, i.e., difference DF to the number DN of tempo clocks CT by dividing the difference DF by the pulse period τ. The product (TFD−TCD)/τ is equivalent to the time delay. The adjuster 241 fetches the delta-time code D4 from the delta-time register 203, and subtracts the number DN from the value ND4 of the delta-time code D4.") configured to set a delay amount for reproduction timing of each of the plurality of sound source signals (Furukawa ¶0109: "As will be understood from the foregoing description, the timing regulator monitors the time codes D 2 to see whether or not the transmission of event codes D3 is synchronized with the transmission of audio data codes D1. If the transmission of event codes D3 is advanced from or delayed for the transmission of audio data codes D1, the timing regulator sets the clock, i.e., N back or ahead so as to establish the synchronization between the plural parts of the piece of music.").
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing apparatus of Itoyama by adding the synchronization adjustment unit of Furukawa to establish synchronization between the plural parts of the piece of music (Furukawa ¶0109).
Regarding claim 14, Itoyama teaches a signal processing method comprising: performing, by a sound source separation unit, sound source separation on a mixed sound signal obtained by mixing a plurality of sound source signals (Itoyama ¶0001: "The present invention relates to a system, a method, and a program for sound source separation that enable separation of an instrument sound signal corresponding to each musical instrument from an input audio signal containing a plurality of types of instrument sound signals. The present invention relates in particular to a system, a method, and a computer program for sound source separation that separate an 'audio signal of sound mixtures obtained by playing a plurality of musical instruments' containing both harmonic-structure and inharmonic-structure signal components into sound sources for respective instrument parts."); and determining, by a sound source type determination unit, a type of a predetermined sound source signal obtained by the sound source separation (Itoyama ¶0084: "Fundamentally, sound source separation includes a step of separating and extracting sound sources (instrument sound signals) from a sound mixture, and a sound source estimation step of estimating what musical instruments correspond to the separated sound sources (instrument sound signals). The latter step... is implemented by estimating sound sources used in a musical piece played, for example a piano… given an ensemble audio signal as an input signal.").
Itoyama does not explicitly disclose outputting, by an output destination control unit, the predetermined sound source signal to a corresponding output device in a plurality of output devices on a basis of a determination result of the sound source type determination unit, wherein the plurality of output devices includes at least one automatic playing musical instrument, wherein the plurality of output devices includes a speaker separate from the automatic playing musical instruments, wherein the plurality of output devices includes a headphone separate from the automatic playing musical instruments, wherein the output destination control unit is connected to the plurality of output devices by wire or by wireless connections.
However, Furukawa teaches outputting, by an output destination control unit, the predetermined sound source signal to a corresponding output device in a plurality of output devices (Furukawa ¶0093: "the music player synchronously reproduces the two parts through the automatic player piano 15 and speakers 7") on a basis of a determination result of the sound source type determination unit (Furukawa ¶0058: "When a user instructs the music player to reproduce an ensemble through the manipulating panel 5… The controller 4 checks every event code D3 to see whether or not the event code D3 is representative of a piece of music data or the initiation of reading out the time series audio data. When the controller 4 acknowledges that the event code D3 is representative of a piece of music data, the controller 4 supplies the event code D3 to the automatic playing controller 9." Furukawa ¶0065: "The automatic playing controller 9 determines trajectories for the plungers of the solenoid-operated key/pedal actuators 14 a associated with the keys/pedals to be moved on the basis of the event codes D3 representative of the note-on… The depressed keys give rise to free rotation of the hammers, and the hammers strike the strings at the end of the free rotation. The strings vibrate, and generate acoustic piano tones."), wherein the plurality of output devices includes at least one automatic playing musical instrument (Furukawa ¶0093: "the music player synchronously reproduces the two parts through the automatic player piano 15 and speakers 7"), wherein the plurality of output devices includes a speaker separate from the automatic playing musical instruments (Furukawa ¶0093: "the music player synchronously reproduces the two parts through the automatic player piano 15 and speakers 7").
Furthermore, Ura teaches that the plurality of output devices includes a headphone separate from the automatic playing musical instruments (Ura col. 8, lines 12-19: "The electronic system 12 includes a plurality of key sensors 12a respectively associated with the black and white keys 10c/10d for monitoring the key motions, a plurality of solenoid-operated actuator units 12b respectively provided beneath the black and white keys 10c/10d, a controller 12c connected to the key sensors 12a and the solenoid-operated actuator units 12b and a headphone 12d and/or a speaker system 12e."), wherein the output destination control unit is connected to the plurality of output devices by wire or by wireless connections (such connections are inherent, because each output device must be connected to the controller by wire or wirelessly in order to receive the signal it reproduces).
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing method of Itoyama by adding the output destination control unit and output devices of Furukawa and Ura to create a music player for ensemble between different sorts of sound sources (Furukawa ¶0001).
Regarding claim 15, Itoyama teaches a program causing a computer to execute a signal processing method comprising: performing, by a sound source separation unit, sound source separation on a mixed sound signal obtained by mixing a plurality of sound source signals (Itoyama ¶0001: "The present invention relates to a system, a method, and a program for sound source separation that enable separation of an instrument sound signal corresponding to each musical instrument from an input audio signal containing a plurality of types of instrument sound signals. The present invention relates in particular to a system, a method, and a computer program for sound source separation that separate an 'audio signal of sound mixtures obtained by playing a plurality of musical instruments' containing both harmonic-structure and inharmonic-structure signal components into sound sources for respective instrument parts."); and determining, by a sound source type determination unit, a type of a predetermined sound source signal obtained by the sound source separation (Itoyama ¶0084: "Fundamentally, sound source separation includes a step of separating and extracting sound sources (instrument sound signals) from a sound mixture, and a sound source estimation step of estimating what musical instruments correspond to the separated sound sources (instrument sound signals). The latter step... is implemented by estimating sound sources used in a musical piece played, for example a piano… given an ensemble audio signal as an input signal.").
Itoyama does not explicitly disclose outputting, by an output destination control unit, the predetermined sound source signal to a corresponding output device in a plurality of output devices on a basis of a determination result of the sound source type determination unit, wherein the plurality of output devices includes at least one automatic playing musical instrument, wherein the plurality of output devices includes a speaker separate from the automatic playing musical instruments, wherein the plurality of output devices includes a headphone separate from the automatic playing musical instruments, wherein the output destination control unit is connected to the plurality of output devices by wire or by wireless connections.
However, Furukawa teaches outputting, by an output destination control unit, the predetermined sound source signal to a corresponding output device in a plurality of output devices (Furukawa ¶0093: "the music player synchronously reproduces the two parts through the automatic player piano 15 and speakers 7") on a basis of a determination result of the sound source type determination unit (Furukawa ¶0058: "When a user instructs the music player to reproduce an ensemble through the manipulating panel 5… The controller 4 checks every event code D3 to see whether or not the event code D3 is representative of a piece of music data or the initiation of reading out the time series audio data. When the controller 4 acknowledges that the event code D3 is representative of a piece of music data, the controller 4 supplies the event code D3 to the automatic playing controller 9." Furukawa ¶0065: "The automatic playing controller 9 determines trajectories for the plungers of the solenoid-operated key/pedal actuators 14 a associated with the keys/pedals to be moved on the basis of the event codes D3 representative of the note-on… The depressed keys give rise to free rotation of the hammers, and the hammers strike the strings at the end of the free rotation. The strings vibrate, and generate acoustic piano tones."), wherein the plurality of output devices includes at least one automatic playing musical instrument (Furukawa ¶0093: "the music player synchronously reproduces the two parts through the automatic player piano 15 and speakers 7"), wherein the plurality of output devices includes a speaker separate from the automatic playing musical instruments (Furukawa ¶0093: "the music player synchronously reproduces the two parts through the automatic player piano 15 and speakers 7").
Furthermore, Ura teaches that the plurality of output devices includes a headphone separate from the automatic playing musical instruments (Ura col. 8, lines 12-19: "The electronic system 12 includes a plurality of key sensors 12a respectively associated with the black and white keys 10c/10d for monitoring the key motions, a plurality of solenoid-operated actuator units 12b respectively provided beneath the black and white keys 10c/10d, a controller 12c connected to the key sensors 12a and the solenoid-operated actuator units 12b and a headphone 12d and/or a speaker system 12e."), wherein the output destination control unit is connected to the plurality of output devices by wire or by wireless connections (such connections are inherent, because each output device must be connected to the controller by wire or wirelessly in order to receive the signal it reproduces).
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing program of Itoyama by adding the output destination control unit and output devices of Furukawa and Ura to create a music player for ensemble between different sorts of sound sources (Furukawa ¶0001).
Regarding claim 16, Itoyama teaches a signal processing system comprising: a transmission apparatus (Itoyama ¶0012: "A sound source separation system"), wherein the transmission apparatus comprises: a sound source separation unit configured to perform sound source separation on a mixed sound signal obtained by mixing a plurality of sound source signals (Itoyama ¶0001: "The present invention relates to a system, a method, and a program for sound source separation that enable separation of an instrument sound signal corresponding to each musical instrument from an input audio signal containing a plurality of types of instrument sound signals. The present invention relates in particular to a system, a method, and a computer program for sound source separation that separate an 'audio signal of sound mixtures obtained by playing a plurality of musical instruments' containing both harmonic-structure and inharmonic-structure signal components into sound sources for respective instrument parts."); and a sound source type determination unit configured to determine a type of a predetermined sound source signal obtained by the sound source separation (Itoyama ¶0084: "Fundamentally, sound source separation includes a step of separating and extracting sound sources (instrument sound signals) from a sound mixture, and a sound source estimation step of estimating what musical instruments correspond to the separated sound sources (instrument sound signals). The latter step... is implemented by estimating sound sources used in a musical piece played, for example a piano… given an ensemble audio signal as an input signal.").
Itoyama does not explicitly disclose a reception apparatus, wherein the reception apparatus is configured to output the predetermined sound source signal to a corresponding output device in a plurality of output devices on a basis of a determination result of the sound source type determination unit, wherein the plurality of output devices includes at least one automatic playing musical instrument, wherein the plurality of output devices includes a speaker separate from the automatic playing musical instruments, wherein the plurality of output devices includes a headphone separate from the automatic playing musical instruments, wherein the output destination control unit is connected to the plurality of output devices by wire or by wireless connections.
However, Furukawa teaches a reception apparatus (Furukawa abstract: "A music player/recorder"), wherein the reception apparatus is configured to output the predetermined sound source signal to a corresponding output device in a plurality of output devices (Furukawa ¶0093: "the music player synchronously reproduces the two parts through the automatic player piano 15 and speakers 7") on a basis of a determination result of the sound source type determination unit (Furukawa ¶0058: "When a user instructs the music player to reproduce an ensemble through the manipulating panel 5… The controller 4 checks every event code D3 to see whether or not the event code D3 is representative of a piece of music data or the initiation of reading out the time series audio data. When the controller 4 acknowledges that the event code D3 is representative of a piece of music data, the controller 4 supplies the event code D3 to the automatic playing controller 9." Furukawa ¶0065: "The automatic playing controller 9 determines trajectories for the plungers of the solenoid-operated key/pedal actuators 14 a associated with the keys/pedals to be moved on the basis of the event codes D3 representative of the note-on… The depressed keys give rise to free rotation of the hammers, and the hammers strike the strings at the end of the free rotation. The strings vibrate, and generate acoustic piano tones."), wherein the plurality of output devices includes at least one automatic playing musical instrument (Furukawa ¶0093: "the music player synchronously reproduces the two parts through the automatic player piano 15 and speakers 7"), wherein the plurality of output devices includes a speaker separate from the automatic playing musical instruments (Furukawa ¶0093: "the music player synchronously reproduces the two parts through the automatic player piano 15 and speakers 7").
Furthermore, Ura teaches that the plurality of output devices includes a headphone separate from the automatic playing musical instruments (Ura col. 8, lines 12-19: "The electronic system 12 includes a plurality of key sensors 12a respectively associated with the black and white keys 10c/10d for monitoring the key motions, a plurality of solenoid-operated actuator units 12b respectively provided beneath the black and white keys 10c/10d, a controller 12c connected to the key sensors 12a and the solenoid-operated actuator units 12b and a headphone 12d and/or a speaker system 12e."), wherein the output destination control unit is connected to the plurality of output devices by wire or by wireless connections (such connections are inherent, because each output device must be connected to the controller by wire or wirelessly in order to receive the signal it reproduces).
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing system of Itoyama by adding the output destination control unit and output devices of Furukawa and Ura to create a music player for ensemble between different sorts of sound sources (Furukawa ¶0001).
Claim 2 is rejected under 35 U.S.C. 103 as unpatentable over Itoyama in view of Furukawa, and further in view of Ura and Flamini et al. (US 20090255396 A1, October 15, 2009), hereinafter Flamini.
Regarding claim 2, Itoyama (in view of Furukawa and further in view of Ura) teaches a signal processing apparatus comprising the features of claim 1.
Itoyama (in view of Furukawa and further in view of Ura) does not explicitly disclose a data format conversion unit configured to convert the predetermined sound source signal into a data format reproducible by the output device.
However, Flamini teaches a data format conversion unit configured to convert the predetermined sound source signal into a data format reproducible by the output device (Flamini abstract: "recording musical notes with a digital device, storing the recorded musical notes in memory as a WAV file, converting the WAV file into a second MIDI file.").
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing apparatus of Itoyama (as modified by Furukawa and Ura) by adding the WAV-to-MIDI converter of Flamini, so that the separated audio output of Itoyama is converted into a MIDI format reproducible by the output devices of Furukawa and Ura, consistent with Itoyama's synchronization of the input audio signal with musical score information (Itoyama ¶0056).
Claim 5 is rejected under 35 U.S.C. 103 as unpatentable over Itoyama in view of Furukawa, and further in view of Ura and Sone (US 5569869 A, October 29, 1996), hereinafter Sone.
Regarding claim 5, Itoyama (in view of Furukawa and further in view of Ura) teaches a signal processing apparatus comprising the features of claim 4.
Itoyama (in view of Furukawa and further in view of Ura) does not explicitly disclose that in a case where one sound source signal of the plurality of the sound source signals is output to a predetermined output device, the output destination control unit sets an output level of another sound source signal corresponding to the predetermined output device to mute.
However, Sone teaches that in a case where one sound source signal of the plurality of the sound source signals is output to a predetermined output device (Sone col. 1, line 66 through col. 2, line 2: "In such a case, the karaoke apparatus sounds a mixture of the karaoke performance and the additional performance from a common built-in loudspeaker."), the output destination control unit sets an output level of another sound source signal corresponding to the predetermined output device to mute (Sone col. 8, lines 60-65: "When an external MIDI instrument is connected to the karaoke system and a particular timbre is specified by means of a panel interface 21B, the internal MIDI data of the same timbre is selectively blocked to silence a corresponding part of the karaoke accompaniment.").
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing apparatus of Itoyama (as modified by Furukawa and Ura) by adding the muting of Sone to substitute a different performance part in place of a silenced original performance part of the same timbre (Sone col. 8, lines 65-67).
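Purely as an illustrative sketch (not part of the prosecution record), the same-device muting Sone describes can be pictured as a per-device level table in which routing one source to a device zeroes the competing sources on that device. The class and method names are hypothetical:

```python
class OutputDestinationControl:
    """Hypothetical sketch of an output destination control unit: it holds
    a per-device level for each source signal, 1.0 = full, 0.0 = mute."""

    def __init__(self, devices):
        self.levels = {d: {} for d in devices}

    def route(self, source, device, other_sources):
        """Send `source` to `device` and, in the manner Sone describes for
        same-timbre karaoke parts, mute the other sources on that device
        so the routed part is not doubled by the internal one."""
        self.levels[device][source] = 1.0
        for other in other_sources:
            self.levels[device][other] = 0.0
```

For example, routing a piano part to a speaker while muting the internal piano accompaniment on that same speaker leaves the remaining parts untouched on other devices.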
Claim 10 is rejected under 35 U.S.C. 103 as unpatentable over Itoyama in view of Furukawa and further in view of Ura, McKillop et al. (US 20090003620 A1, January 1, 2009), hereinafter McKillop, and Coffman et al. (US 20110182441 A1, July 28, 2011), hereinafter Coffman.
Regarding claim 10, Itoyama (in view of Furukawa and further in view of Ura) teaches a signal processing apparatus comprising the features of claim 7.
Itoyama (in view of Furukawa and further in view of Ura) does not explicitly disclose that in a case where the speaker does not exist, the output destination control unit sets an output level of the predetermined sound source signal to mute and performs notification that the predetermined sound source signal is not output.
However, McKillop teaches that in a case where the speaker does not exist, the output destination control unit performs notification that the predetermined sound source signal is not output (McKillop ¶0111: "If there are no external audio devices (block 1903), at block 1905 the method 1900 displays an application control screen with a routing toggle button, such as screen 1701, and the audio routing module 1802 uses the default internal routing audio device unless the user presses the routing toggle button as described below.").
Furthermore, Coffman suggests that in a case where the speaker does not exist, the output destination control unit sets an output level of the predetermined sound source signal to mute (Coffman ¶0036: "If only disabled or unauthorized audio output routes are available… the audio circuitry may prevent the audio from being provided to the audio output routes.").
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing apparatus of Itoyama (as modified by Furukawa and Ura) by adding the muting of Coffman and the notification of McKillop to prevent audio from being output to unauthorized or incompatible audio output routes or devices (Coffman ¶0036).
Claim 12 is rejected under 35 U.S.C. 103 as unpatentable over Itoyama in view of Furukawa and further in view of Ura and Johnston et al. (US 20070121955 A1, May 31, 2007), hereinafter Johnston.
Regarding claim 12, Itoyama (in view of Furukawa and further in view of Ura) teaches a signal processing apparatus comprising the features of claim 11.
Itoyama (in view of Furukawa and further in view of Ura) does not explicitly disclose a synchronization adjustment unit configured to compare the mixed sound signal with a mixed sound signal which includes the predetermined sound source signal, is reproduced from the output device, and is collected by a sound pickup apparatus, and to set the delay amount for each of the plurality of the sound source signals on a basis of a comparison result.
However, Johnston suggests that the synchronization adjustment unit compares the mixed sound signal with a mixed sound signal which includes the predetermined sound source signal (Johnston ¶0031: "The Fourier transform of the captured signal is multiplied by the conjugate of the Fourier transform of the calibration pulse (or, equivalently, divided by the Fourier transform of the calibration pulse). The resulting product or ratio is the complex system response."), is reproduced from the output device, and is collected by a sound pickup apparatus (Johnston ¶0030: "At the same time, a microphone is recording the emitted pulse as actually reproduced at the microphone position. The signal captured at the microphone is sent to the calibration computing device."), and sets the delay amount for each of the plurality of the sound source signals on a basis of a comparison result (Johnston ¶0028: "Given the calculations made by the calibration computing device, the time delays and gain in each speaker can be adjusted in order to cause the sound generated from each speaker to reach the preferred listening position simultaneously with the same acoustic level if the sound occurs simultaneously and at the same level in each channel of the program material.").
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing apparatus of Itoyama (as modified by Furukawa and Ura) by adding the synchronization adjustment unit of Johnston to establish synchronization between the plural parts of the piece of music (Furukawa ¶0109).
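Purely as an illustrative sketch (not part of the prosecution record), the delay measurement Johnston describes amounts to comparing a reference signal against the copy a microphone captures and finding the lag at which they align; the brute-force cross-correlation below is the time-domain counterpart of the frequency-domain division Johnston quotes, and the function name is hypothetical:

```python
def estimate_delay(reference, captured):
    """Estimate, in samples, how much `captured` lags `reference` by
    brute-force cross-correlation: slide the reference across the captured
    signal and keep the lag with the largest inner product."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(captured) - len(reference) + 1):
        score = sum(r * captured[lag + i] for i, r in enumerate(reference))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

In the arrangement Johnston describes, one such delay estimate per speaker lets the time delays be adjusted so that sound from every speaker reaches the listening position simultaneously.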
Claim 13 is rejected under 35 U.S.C. 103 as unpatentable over Itoyama in view of Furukawa and further in view of Ura and France et al. (US 20160029138 A1, January 28, 2016), hereinafter France.
Regarding claim 13, Itoyama (in view of Furukawa and further in view of Ura) teaches a signal processing apparatus comprising the features of claim 1.
Itoyama (in view of Furukawa and further in view of Ura) does not explicitly disclose that processing by the sound source type determination unit is passed in a case where the predetermined sound source signal is obtained by the sound source separation by the sound source separation unit and the type of the predetermined sound source signal is determined.
However, France suggests that processing by the sound source type determination unit is passed in a case where the predetermined sound source signal is obtained by the sound source separation by the sound source separation unit and the type of the predetermined sound source signal is determined (France ¶0025: "In some embodiments, object related metadata of the program includes durable metadata… Examples of durable metadata include an object ID for each user-selectable object or other object or set of objects of the program, and synchronization words (e.g., time codes) indicative of timing of each user-selectable object, or other object, relative to audio content of the bed of speaker channels or other elements of the program." France ¶0027: "In some embodiments, object related metadata provides a default mix of object content and bed (speaker channel) content, with default rendering parameters (e.g., default spatial locations of rendered objects).").
It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing apparatus of Itoyama (as modified by Furukawa and Ura) by adding the predetermined sound source signal separation of France to obtain an immersive perception of the audio content of the program (France, abstract).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHILIP SCOLES whose telephone number is (703)756-1831. The examiner can normally be reached Monday-Friday 8:30-4:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dedei Hammond, can be reached at 571-270-7938. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHILIP G SCOLES/
Examiner, Art Unit 2837
/DEDEI K HAMMOND/Supervisory Patent Examiner, Art Unit 2837