Prosecution Insights
Last updated: April 19, 2026
Application No. 17/635,073

SIGNAL PROCESSING DEVICE AND SIGNAL PROCESSING METHOD

Non-Final OA (§103, §112)
Filed: Feb 14, 2022
Examiner: SCOLES, PHILIP GRANT
Art Unit: 2837
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Sony Group Corporation
OA Round: 3 (Non-Final)

Grant Probability: 56% (Moderate)
OA Rounds: 3-4
To Grant: 3y 10m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 56% of resolved cases granted (30 granted / 54 resolved; -12.4% vs TC avg)
Interview Lift: +21.3% (strong; allow rate in resolved cases with interview vs. without)
Typical Timeline: 3y 10m avg prosecution; 36 currently pending
Career History: 90 total applications across all art units
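The headline figures above follow from simple arithmetic on the stated counts. A minimal sketch, assuming the dashboard derives grant probability directly from the career allow rate and applies the interview lift additively (the +21.3% figure itself is taken from the panel, not recomputed):

```python
# Sketch of how the headline examiner figures appear to be derived.
# Counts come from the panel above; the aggregation method the
# dashboard actually uses is an assumption.

granted = 30           # granted cases (from panel)
resolved = 54          # resolved cases (from panel)

career_allow_rate = granted / resolved        # 0.5555... -> 56%
interview_lift = 0.213                        # +21.3 points (from panel)

grant_probability = career_allow_rate         # "derived from career allow rate"
with_interview = grant_probability + interview_lift   # 0.769 -> ~77%

print(f"Career allow rate:       {career_allow_rate:.1%}")  # 55.6%
print(f"Grant probability:       {grant_probability:.0%}")  # 56%
print(f"With interview (+21.3%): {with_interview:.0%}")     # 77%
```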

Statute-Specific Performance

§101: 1.6% (-38.4% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 22.0% (-18.0% vs TC avg)
§112: 20.2% (-19.8% vs TC avg)
Comparison baseline: Tech Center average estimate • Based on career data from 54 resolved cases
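A quick consistency check on the deltas: if each "vs TC avg" figure is a percentage-point difference, every statute implies the same Tech Center baseline of 40.0%, consistent with the single average estimate noted above. A sketch of that back-calculation (the interpretation of the deltas as simple point differences is an assumption):

```python
# Back out the implied Tech Center average from each (rate, delta) pair,
# assuming delta = examiner rate - TC average, in percentage points.
rates = {"§101": (1.6, -38.4), "§103": (53.3, +13.3),
         "§102": (22.0, -18.0), "§112": (20.2, -19.8)}

for statute, (rate, delta) in rates.items():
    implied_tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1f}%, implied TC avg {implied_tc_avg:.1f}%")
# Every row implies a TC average estimate of 40.0%.
```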

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Response to Arguments

Claims 1, 4, and 7-14 are pending in this application. Claims 2, 3, 5, and 6 are canceled by Applicant. Applicant's Request for Continued Examination of the last Office action is persuasive and, therefore, the finality of that action is withdrawn.

Applicant's arguments, see page 8, lines 15-17, filed 7/22/2025, with respect to claim 9, have been considered and are persuasive. In response to Applicant's amendment of claim 9, the objection to claim 9 is withdrawn.

Applicant's arguments, see page 8, line 18 – page 10, line 12, filed 7/22/2025, with respect to claims 1, 4, and 7-14 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant argues on page 10, lines 3-6 that Izumisawa does not disclose "functional processing or branching logic that determines different outcomes based on different 'types' of motion." The scope of Applicant's argument is incommensurate with the scope of the claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). The claims recite a "specific motion" and a "type of movement associated with the specific motion." (Examiner understands this to mean, within BRI, "type of motion associated with the specific motion," as explained in the related 112(b) rejection below.) The claims in their current form do not recite "different types of motion;" they recite instead a "specific motion."

Applicant argues on page 10, lines 7-8 that "Izumisawa does not classify or distinguish types of motion. Accordingly, Izumisawa also does not associate different motions with distinct animation effects." Applicant's argument is incommensurate with the scope of the claims. The BRI of Applicant's invention as recited in amended claim 1 does not necessitate associating a plurality of types of motion with respective distinct animation effects; claim 1 within its BRI stipulates at minimum associating only one motion with an animation effect.

Applicant argues on page 10, lines 9-12 that "independent claim 1 recites that the animation effect is determined based on the type of motion." Applicant's argument is incommensurate with the scope of the claims. Claim 1 does not recite "determining" an animation effect. Claim 1 recites that the animation effect is "based on" the type of movement.

Applicant's arguments, see page 10, lines 19-22, filed 7/22/2025, with respect to claims 12 and 13, have been considered but are unpersuasive. See response to arguments above regarding claim 1.

Applicant's arguments, see page 10, line 23 – page 11, line 6, filed 7/22/2025, with respect to claims 7, 10, and 11, have been considered but are unpersuasive. See response to arguments above regarding incorporated limitations from claim 1.
Applicant's argument that each of claims 7, 10, and 11 "separately recites subject matter not described or suggested by Izumisawa" fails to comply with 37 CFR 1.111(b) because it amounts to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. See below for the rejections of claims 7, 10, and 11.

Applicant's arguments, see page 11, lines 7-13, filed 7/22/2025, with respect to claim 4, have been considered but are unpersuasive. See response to arguments above regarding incorporated limitations from claim 1. Applicant's argument that claim 4 "separately recites subject matter not taught or suggested by any of the cited references" fails to comply with 37 CFR 1.111(b) because it amounts to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. See below for the rejection of claim 4.

Applicant's arguments, see page 11, lines 14-21, filed 7/22/2025, with respect to claims 8 and 9, have been considered but are unpersuasive. See response to arguments above regarding incorporated limitations from claim 1. Applicant's argument that each of claims 8 and 9 "separately recites subject matter not taught or suggested by any of the references" fails to comply with 37 CFR 1.111(b) because it amounts to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. See below for the rejections of claims 8 and 9.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1, 4, and 7-14 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claims 1 and 12-14 contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The claims recite "type of movement," which has no apparent support in the specification. Although the phrase "type of motion" is described in the specification, the phrase "type of movement" is not. Claims 4 and 7-11 are likewise rejected for depending, directly or indirectly, from claim 1.
The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 4, and 7-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Regarding claims 1 and 12-14, it is unknown what is specifically meant by the phrase "type of movement," which overlaps in scope with, but is not identical to, "type of motion" as described in the specification. In the interest of advancing prosecution, Examiner will interpret "type of movement" as "type of motion." Claims 4 and 7-11 are likewise rejected for depending, directly or indirectly, from claim 1.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1 and 7-14 are rejected under 35 U.S.C. 103 as unpatentable over Izumisawa (Japanese Patent Publication No. JP-H09237087 A, September 9, 1997), hereinafter Izumisawa, in view of Nishitani et al. (United States Patent Publication No. 20030066413 A1, April 10, 2003), hereinafter Nishitani, to the extent understood.
Regarding claim 1, Izumisawa teaches a signal processing device, comprising: circuitry (Izumisawa ¶0007: "The CPU 20 is a central processing unit that controls the entire electronic piano based on a control program stored in the ROM 23"; Izumisawa ¶0007: "The panel circuit 21 comprises various switches on the panel for selecting tone colors, automatic performance music selection, etc.") configured to: acquire a conversion function (Izumisawa ¶0028: "As described above, by arbitrarily setting the characteristics of the variation width optimization conversion table 3, it is possible to adjust the way in which a musical tone changes in response to a change in distance in various ways."), wherein the conversion function is one of a non-linear curve or a polygonal line (Izumisawa ¶0027: "FIG. 9 is an explanatory diagram showing an example of characteristics of the variation width optimization conversion table 3… (f) The tone changes only when the distance is within a specific range." Fig. 9(f) comprises a polygonal line. Polygonal lines are inherently nonlinear.); acquire an acoustic signal (Izumisawa ¶0015: "The pitch information generating means 10 converts the pressed keyboard note number into a frequency number"; ¶0017: "the waveform data is read out from the built-in waveform memory at intervals corresponding to the key numbers"); acquire a sensing value that indicates one of a motion of a specific portion of a body of a user or a motion of a musical instrument (Izumisawa ¶0013: "A voltage corresponding to the distance/movement between the electronic piano and the player detected/measured by the distance sensor section 1 is converted into a digital signal by the A/D converter 2"); calculate a first acoustic parameter based on the sensing value and the conversion function (Izumisawa ¶0023: "In step S20, the A/D converted output value SD of the distance sensor unit is read from the A/D converter 2. In step S21, the SD is converted into the SDC using a variation width optimization conversion table."), wherein the first acoustic parameter changes non-linearly based on the sensing value (Izumisawa ¶0027: "FIG. 9 is an explanatory diagram showing an example of characteristics of the variation width optimization conversion table 3… (f) The tone changes only when the distance is within a specific range." Therefore, SD and SDC are non-linear over the entire range of movement.); and perform a first non-linear acoustic process on the acoustic signal based on the first acoustic parameter (Izumisawa ¶0017: "the waveform data is read out from the built-in waveform memory at intervals corresponding to the key numbers, and an interpolation calculation process is performed to output a musical tone waveform signal. The filter section 14 filters the musical tone waveform signal based on parameters such as the cutoff frequency set by the CPU, and the amplitude control section 15 multiplies the musical tone waveform signal by an envelope signal generated by an internal envelope signal generator based on the parameters set by the CPU, thereby performing amplitude control.").
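For readers less familiar with the cited mechanism: Izumisawa's "variation width optimization conversion table" (¶0023, ¶0027-¶0028) maps the raw distance reading SD to a converted value SDC through a polygonal characteristic such as Fig. 9(f), where the tone changes only inside a specific distance range. A minimal sketch under that reading; the breakpoint values below are hypothetical, not taken from the reference:

```python
import numpy as np

# Hypothetical polygonal conversion table in the spirit of Izumisawa's
# Fig. 9(f): the converted value SDC changes only while the distance
# reading SD lies inside a specific range (here 60-160) and is clamped
# flat outside it. Breakpoints are illustrative, not from the reference.
SD_BREAKPOINTS  = [0,  60, 160, 255]   # raw A/D distance values
SDC_BREAKPOINTS = [0,   0, 255, 255]   # converted control values

def convert(sd: float) -> float:
    """Piecewise-linear lookup, as a conversion table would perform."""
    return float(np.interp(sd, SD_BREAKPOINTS, SDC_BREAKPOINTS))

# Outside the active range the parameter is constant (no tone change);
# inside it, it varies linearly: a polygonal, hence non-linear, map.
for sd in (30, 60, 110, 160, 200):
    print(f"SD={sd:3d} -> SDC={convert(sd):6.1f}")
```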
Izumisawa does not explicitly disclose: detect a specific motion of one of the specific portion of the body of the user or the musical instrument based on the sensing value; and add, based on the detection of the specific motion, an animation effect to a sound over a specific period of time, wherein the animation effect is based on a type of motion associated with the specific motion, and the sound is selected based on the specific motion.

However, Nishitani teaches: detect a specific motion of one of the specific portion of the body of the user or the musical instrument based on the sensing value (Nishitani ¶0165: "In FIG. 7, as the performance participant swings or operates otherwise such a performance operator held with his or her hand, the one-dimensional acceleration sensor MSa generates a detection signal Ma only representative of acceleration α in a predetermined single direction (x-axis direction) from among acceleration applied by the participant's operation and outputs the detection signal Ma to the main system 1M."); and add, based on the detection of the specific motion (Nishitani ¶0166: "The information analyzation section AN analyzes the acceleration data, and extracts a peak time point Tp indicative of a time of occurrence of a local peak in a time-varying waveform |α|(t) of the absolute acceleration |α|, peak value Vp indicative of a height of the local peak, peak Q value Qp indicative of acuteness of the local peak, peak-to-peak interval indicative of a time interval between adjacent local peaks, depth of a bottom between adjacent local peaks, high-frequency component intensity at the peak, polarity of the local peak of the acceleration α(t), etc. trajectory (a) and an exemplary acceleration waveform (a) when the performance participant makes conducting motions for a two-beat 'espressivo' (=expressive) performance."), an animation effect to a sound over a specific period of time (Nishitani ¶0168: "Time period between the tone-generation start timing and the tone-generation end timing, i.e. tone-sounding time length, is called a 'gate time'. A staccato-like performance can be obtained by making an actual gate time GT shorter than a gate time value defined in the music piece data, e.g. multiplying the gate time value (provisionally represented here by GT0) by a coefficient Agt"), wherein the animation effect is based on a type of motion associated with the specific motion (Nishitani ¶0167: "Thus, in response to such conducting motions of the performance participant… the articulation parameter AR is determined by the local peak Q value Qp"), and the sound is selected based on the specific motion (Nishitani ¶0169: "Thus, the above-mentioned gate time coefficient Agt is used as the articulation parameter AR, which is varied in accordance with the local peak Q value Qp.").

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing device of Izumisawa by adding the animation effect over a specific period of time of Nishitani to allow an inexperienced or unskilled performer to take part in the performance (Nishitani ¶0022).
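Likewise, the two Nishitani mechanisms the combination leans on can be sketched compactly: extraction of a local peak from the absolute-acceleration waveform |α|(t) (¶0166), and shortening of the sounded note by multiplying the gate time by a coefficient Agt (¶0168). The sketch below is an illustrative reading of those paragraphs only; the function names, sample waveform, and tick values are hypothetical:

```python
# Illustrative reading of Nishitani's motion analysis (para. 0166) and
# gate-time scaling (para. 0168). Names and values are hypothetical.

def find_local_peaks(abs_accel: list[float]) -> list[tuple[int, float]]:
    """Return (index, value) of local peaks in the |alpha|(t) waveform."""
    peaks = []
    for t in range(1, len(abs_accel) - 1):
        if abs_accel[t - 1] < abs_accel[t] >= abs_accel[t + 1]:
            peaks.append((t, abs_accel[t]))
    return peaks

def gate_time(gt0: float, agt: float) -> float:
    """Actual gate time GT = GT0 * Agt; Agt < 1 yields a staccato feel."""
    return gt0 * agt

# A swing produces a rising-then-falling |alpha|(t); the peak marks the
# detected "specific motion", and its shape drives the effect parameter.
wave = [0.1, 0.3, 0.9, 2.4, 1.1, 0.4, 0.2]
for t, vp in find_local_peaks(wave):
    print(f"peak at t={t}, Vp={vp}")                    # peak at t=3, Vp=2.4
    print(f"gate time: {gate_time(480.0, 0.5)} ticks")  # 240.0 ticks
```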
Regarding claim 7, Izumisawa (in view of Nishitani) teaches a signal processing device comprising the features of claim 1 as discussed above. Izumisawa further suggests that the circuitry is further configured to determine the animation effect for a type of the motion of the musical instrument (Izumisawa ¶0031: "it is also possible to detect the movement of the musical instrument itself and control the musical tones.").

Regarding claim 8, Izumisawa (in view of Nishitani) teaches a signal processing device comprising the features of claim 1 as discussed above. Nishitani further teaches that the circuitry is further configured to: detect a first wave-shaped peak value of the sensing value (Nishitani ¶0024: "the sensor included in the motion detector may be an acceleration sensor, and the detection data may be data indicative of acceleration of the motion detected via the acceleration sensor. The plurality of analyzed data generated by the analyzer may include at least any one of peak point data indicative of an occurrence time of a local peak in a time-varying waveform of absolute acceleration of the motion"); obtain an initial value of the first acoustic parameter of the acoustic processing on a basis of the detected first wave-shaped peak value (Nishitani ¶0111: "the performance-parameter determination section PS determines performance parameters corresponding to the analyzed results"); and add the animation effect to the acoustic signal based on the initial value of the first acoustic parameter (Nishitani ¶0111: "the performance-parameter determination section PS determines performance parameters corresponding to the analyzed results, and the tone reproduction section 1S generates tone performance data based on the performance parameters thus determined by the performance-parameter determination section PS"). Izumisawa further suggests that the circuitry is further configured to: change a specific value of the first acoustic parameter from the initial value (Izumisawa ¶0028: "by arbitrarily setting the characteristics of the variation width optimization conversion table 3, it is possible to adjust the way in which a musical tone changes in response to a change in distance in various ways."); and perform a second non-linear acoustic process on the acoustic signal, to which the animation effect is added, while changing based on the changed specific value of the first acoustic parameter (Izumisawa ¶0019: "The pitch change touch curve table 43 generates pitch change coefficients based on the converted touch data. This coefficient is multiplied by a predetermined constant K1 by multiplier 46 and further multiplied by a coefficient based on distance by multiplier 7. The information is then input to the pitch information generating means 10, and the pitch changes under the influence of both the touch information and the distance information").

Regarding claim 9, Izumisawa (in view of Nishitani) teaches a signal processing device comprising the features of claim 8 as discussed above. Nishitani further teaches a case where, at a specific time in an animation period, a second acoustic parameter corresponding to a second wave-shaped peak value is greater than the first acoustic parameter, the circuitry is further configured to perform the second non-linear acoustic process, the animation effect is performed in the animation period, and in the second non-linear acoustic process, the animation effect is added to the acoustic signal based on the initial value at the specific time (Nishitani ¶0365: "If no local peak has been detected at step S272, the personal computer 103 reverts from step S273 to step S270. If, on the other hand, a local peak has been detected at step S272, a swinging-motion tempo is determined, at step S274, on the basis of a time interval from the last or several previous detected local peaks, and is edited into tempo control data for transmission to the corresponding automatic performance control process and display control process at step S275. If a rewrite mode is being currently selected for rewriting the data of the tempo control data track of the corresponding performance data with the tempo control data generated under the user control (S276), then the data of the tempo control data track of the corresponding performance data is rewritten with the user-controlled tempo control data at step S277.").

Regarding claim 10, Izumisawa (in view of Nishitani) teaches a signal processing device comprising the features of claim 1 as discussed above. Izumisawa further teaches that the acoustic signal includes a signal of the sound of the musical instrument that is played (Izumisawa ¶0015: "The pitch information generating means 10 converts the pressed keyboard note number into a frequency number"; ¶0017: "the waveform data is read out from the built-in waveform memory at intervals corresponding to the key numbers").

Regarding claim 11, Izumisawa (in view of Nishitani) teaches a signal processing device comprising the features of claim 1 as discussed above. Izumisawa further teaches that the acoustic signal includes a signal corresponding to one of a type of the motion of the specific portion of the body of the user or a type of motion of the musical instrument (Izumisawa ¶0031: "it is also possible to detect the movement of the musical instrument itself and control the musical tones").

Regarding claim 12, Izumisawa teaches a signal processing method, comprising: in a signal processing device: acquiring, by circuitry (Izumisawa ¶0007: "The CPU 20 is a central processing unit that controls the entire electronic piano based on a control program stored in the ROM 23"; Izumisawa ¶0007: "The panel circuit 21 comprises various switches on the panel for selecting tone colors, automatic performance music selection, etc."), a conversion function (Izumisawa ¶0028: "As described above, by arbitrarily setting the characteristics of the variation width optimization conversion table 3, it is possible to adjust the way in which a musical tone changes in response to a change in distance in various ways."), wherein the conversion function is one of a non-linear curve or a polygonal line (Izumisawa ¶0027: "FIG. 9 is an explanatory diagram showing an example of characteristics of the variation width optimization conversion table 3… (f) The tone changes only when the distance is within a specific range." Fig. 9(f) comprises a polygonal line. Polygonal lines are inherently nonlinear.); acquiring, by the circuitry, an acoustic signal (Izumisawa ¶0015: "The pitch information generating means 10 converts the pressed keyboard note number into a frequency number"; ¶0017: "the waveform data is read out from the built-in waveform memory at intervals corresponding to the key numbers"); acquiring, by the circuitry, a sensing value that indicates one of a motion of a specific portion of a body of a user or a motion of a musical instrument (Izumisawa ¶0013: "A voltage corresponding to the distance/movement between the electronic piano and the player detected/measured by the distance sensor section 1 is converted into a digital signal by the A/D converter 2"); calculating, by the circuitry, an acoustic parameter based on the sensing value and the conversion function (Izumisawa ¶0023: "In step S20, the A/D converted output value SD of the distance sensor unit is read from the A/D converter 2. In step S21, the SD is converted into the SDC using a variation width optimization conversion table."), wherein the acoustic parameter changes non-linearly based on the sensing value (Izumisawa ¶0027: "FIG. 9 is an explanatory diagram showing an example of characteristics of the variation width optimization conversion table 3… (f) The tone changes only when the distance is within a specific range." Therefore, SD and SDC are non-linear over the entire range of movement.); and performing, by the circuitry, a non-linear acoustic processing on the acoustic signal based on the acoustic parameter (Izumisawa ¶0017: "the waveform data is read out from the built-in waveform memory at intervals corresponding to the key numbers, and an interpolation calculation process is performed to output a musical tone waveform signal. The filter section 14 filters the musical tone waveform signal based on parameters such as the cutoff frequency set by the CPU, and the amplitude control section 15 multiplies the musical tone waveform signal by an envelope signal generated by an internal envelope signal generator based on the parameters set by the CPU, thereby performing amplitude control.").

Izumisawa does not explicitly disclose detecting, by the circuitry, a specific motion of one of the specific portion of the body of the user or the musical instrument based on the sensing value; and based on the detection of the specific motion, adding, by the circuitry, an animation effect to a sound over a specific period of time, wherein the animation effect is based on a type of motion associated with the specific motion, and the sound is selected based on the specific motion. However, Nishitani teaches: detecting, by the circuitry, a specific motion of one of the specific portion of the body of the user or the musical instrument based on the sensing value (Nishitani ¶0165: "In FIG. 7, as the performance participant swings or operates otherwise such a performance operator held with his or her hand, the one-dimensional acceleration sensor MSa generates a detection signal Ma only representative of acceleration α in a predetermined single direction (x-axis direction) from among acceleration applied by the participant's operation and outputs the detection signal Ma to the main system 1M."); and based on the detection of the specific motion, adding, by the circuitry (Nishitani ¶0166: "The information analyzation section AN analyzes the acceleration data, and extracts a peak time point Tp indicative of a time of occurrence of a local peak in a time-varying waveform |α|(t) of the absolute acceleration |α|, peak value Vp indicative of a height of the local peak, peak Q value Qp indicative of acuteness of the local peak, peak-to-peak interval indicative of a time interval between adjacent local peaks, depth of a bottom between adjacent local peaks, high-frequency component intensity at the peak, polarity of the local peak of the acceleration α(t), etc. trajectory (a) and an exemplary acceleration waveform (a) when the performance participant makes conducting motions for a two-beat 'espressivo' (=expressive) performance."), an animation effect to a sound over a specific period of time (Nishitani ¶0168: "Time period between the tone-generation start timing and the tone-generation end timing, i.e. tone-sounding time length, is called a 'gate time'. A staccato-like performance can be obtained by making an actual gate time GT shorter than a gate time value defined in the music piece data, e.g. multiplying the gate time value (provisionally represented here by GT0) by a coefficient Agt"), wherein the animation effect is based on a type of motion associated with the specific motion (Nishitani ¶0167: "Thus, in response to such conducting motions of the performance participant… the articulation parameter AR is determined by the local peak Q value Qp"), and the sound is selected based on the specific motion (Nishitani ¶0169: "Thus, the above-mentioned gate time coefficient Agt is used as the articulation parameter AR, which is varied in accordance with the local peak Q value Qp."). It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing method of Izumisawa by adding the animation effect over a specific period of time of Nishitani to allow an inexperienced or unskilled performer to take part in the performance (Nishitani ¶0022).

Regarding claim 13, Izumisawa teaches a non-transitory computer readable medium having stored thereon, computer executable instructions (Izumisawa ¶0007: "The CPU 20 is a central processing unit that controls the entire electronic piano based on a control program stored in the ROM 23"), which when executed by a computer, cause the computer to execute operations, the operations comprising: acquiring a conversion function (Izumisawa ¶0028: "As described above, by arbitrarily setting the characteristics of the variation width optimization conversion table 3, it is possible to adjust the way in which a musical tone changes in response to a change in distance in various ways."), wherein the conversion function is one of a non-linear curve or a polygonal line (Izumisawa ¶0027: "FIG. 9 is an explanatory diagram showing an example of characteristics of the variation width optimization conversion table 3… (f) The tone changes only when the distance is within a specific range." Fig. 9(f) comprises a polygonal line. Polygonal lines are inherently nonlinear.); acquiring an acoustic signal (Izumisawa ¶0015: "The pitch information generating means 10 converts the pressed keyboard note number into a frequency number"; ¶0017: "the waveform data is read out from the built-in waveform memory at intervals corresponding to the key numbers"); acquiring a sensing value that indicates one of a motion of a specific portion of a body of a user or a motion of a musical instrument (Izumisawa ¶0013: "A voltage corresponding to the distance/movement between the electronic piano and the player detected/measured by the distance sensor section 1 is converted into a digital signal by the A/D converter 2"); calculating an acoustic parameter based on the sensing value and the conversion function (Izumisawa ¶0023: "In step S20, the A/D converted output value SD of the distance sensor unit is read from the A/D converter 2. In step S21, the SD is converted into the SDC using a variation width optimization conversion table."), wherein the acoustic parameter changes non-linearly based on the sensing value (Izumisawa ¶0027: "FIG. 9 is an explanatory diagram showing an example of characteristics of the variation width optimization conversion table 3… (f) The tone changes only when the distance is within a specific range." Therefore, SD and SDC are non-linear over the entire range of movement.); and performing a non-linear acoustic processing on the acoustic signal based on the acoustic parameter (Izumisawa ¶0017: "the waveform data is read out from the built-in waveform memory at intervals corresponding to the key numbers, and an interpolation calculation process is performed to output a musical tone waveform signal. The filter section 14 filters the musical tone waveform signal based on parameters such as the cutoff frequency set by the CPU, and the amplitude control section 15 multiplies the musical tone waveform signal by an envelope signal generated by an internal envelope signal generator based on the parameters set by the CPU, thereby performing amplitude control.").

Izumisawa does not explicitly disclose detecting a specific motion of one of the specific portion of the body of the user or the musical instrument based on the sensing value; and adding, based on the detection of the specific motion, an animation effect to a sound over a specific period of time, wherein the animation effect is based on a type of motion associated with the specific motion, and the sound is selected based on the specific motion. However, Nishitani teaches detecting a specific motion of one of the specific portion of the body of the user or the musical instrument based on the sensing value (Nishitani ¶0165: "In FIG. 7, as the performance participant swings or operates otherwise such a performance operator held with his or her hand, the one-dimensional acceleration sensor MSa generates a detection signal Ma only representative of acceleration α in a predetermined single direction (x-axis direction) from among acceleration applied by the participant's operation and outputs the detection signal Ma to the main system 1M."); and adding, based on the detection of the specific motion (Nishitani ¶0166: "The information analyzation section AN analyzes the acceleration data, and extracts a peak time point Tp indicative of a time of occurrence of a local peak in a time-varying waveform |α|(t) of the absolute acceleration |α|, peak value Vp indicative of a height of the local peak, peak Q value Qp indicative of acuteness of the local peak, peak-to-peak interval indicative of a time interval between adjacent local peaks, depth of a bottom between adjacent local peaks, high-frequency component intensity at the peak, polarity of the local peak of the acceleration α(t), etc. trajectory (a) and an exemplary acceleration waveform (a) when the performance participant makes conducting motions for a two-beat 'espressivo' (=expressive) performance."), an animation effect to a sound over a specific period of time (Nishitani ¶0168: "Time period between the tone-generation start timing and the tone-generation end timing, i.e. tone-sounding time length, is called a 'gate time'. A staccato-like performance can be obtained by making an actual gate time GT shorter than a gate time value defined in the music piece data, e.g. multiplying the gate time value (provisionally represented here by GT0) by a coefficient Agt"), wherein the animation effect is based on a type of motion associated with the specific motion (Nishitani ¶0167: "Thus, in response to such conducting motions of the performance participant… the articulation parameter AR is determined by the local peak Q value Qp"), and the sound is selected based on the specific motion (Nishitani ¶0169: "Thus, the above-mentioned gate time coefficient Agt is used as the articulation parameter AR, which is varied in accordance with the local peak Q value Qp."). It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the non-transitory computer readable medium of Izumisawa by adding the animation effect over a specific period of time of Nishitani to allow an inexperienced or unskilled performer to take part in the performance (Nishitani ¶0022).

Regarding claim 14, Izumisawa teaches a signal processing device, comprising: circuitry (Izumisawa ¶0007: "The CPU 20 is a central processing unit that controls the entire electronic piano based on a control program stored in the ROM 23"; Izumisawa ¶0007: "The panel circuit 21 comprises various switches on the panel for selecting tone colors, automatic performance music selection, etc.") configured to acquire a sensing value that indicates one of a motion of a specific portion of a body of a user or a motion of a musical instrument (Izumisawa ¶0013: "A voltage corresponding to the distance/movement between the electronic piano and the player detected/measured by the distance sensor section 1 is converted into a digital signal by the A/D converter 2").
Izumisawa does not explicitly disclose: detect a specific motion of one of the specific portion of the body of the user or the musical instrument based on the sensing value; and add, based on the detection of the specific motion, an animation effect to a sound to be reproduced over a specific period of time, wherein the animation effect is based on a type of motion associated with the specific motion, and the sound to be reproduced is selected based on the specific motion.

However, Nishitani teaches: detect a specific motion of one of the specific portion of the body of the user or the musical instrument based on the sensing value (Nishitani ¶0165: "In FIG. 7, as the performance participant swings or operates otherwise such a performance operator held with his or her hand, the one-dimensional acceleration sensor MSa generates a detection signal Ma only representative of acceleration α in a predetermined single direction (x-axis direction) from among acceleration applied by the participant's operation and outputs the detection signal Ma to the main system 1M."); and add, based on the detection of the specific motion (Nishitani ¶0166: "The information analyzation section AN analyzes the acceleration data, and extracts a peak time point Tp indicative of a time of occurrence of a local peak in a time-varying waveform |α|(t) of the absolute acceleration |α|, peak value Vp indicative of a height of the local peak, peak Q value Qp indicative of acuteness of the local peak, peak-to-peak interval indicative of a time interval between adjacent local peaks, depth of a bottom between adjacent local peaks, high-frequency component intensity at the peak, polarity of the local peak of the acceleration α(t), etc. trajectory (a) and an exemplary acceleration waveform (a) when the performance participant makes conducting motions for a two-beat 'espressivo' (=expressive) performance."), an animation effect to a sound to be reproduced over a specific period of time (Nishitani ¶0168: "Time period between the tone-generation start timing and the tone-generation end timing, i.e. tone-sounding time length, is called a 'gate time'. A staccato-like performance can be obtained by making an actual gate time GT shorter than a gate time value defined in the music piece data, e.g. multiplying the gate time value (provisionally represented here by GT0) by a coefficient Agt"), wherein the animation effect is based on a type of motion associated with the specific motion (Nishitani ¶0167: "Thus, in response to such conducting motions of the performance participant… the articulation parameter AR is determined by the local peak Q value Qp"), and the sound to be reproduced is selected based on the specific motion (Nishitani ¶0169: "Thus, the above-mentioned gate time coefficient Agt is used as the articulation parameter AR, which is varied in accordance with the local peak Q value Qp."). It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing device of Izumisawa by adding the animation effect over a specific period of time of Nishitani to allow an inexperienced or unskilled performer to take part in the performance (Nishitani ¶0022).

Claim 4 is rejected under 35 U.S.C. 103 as unpatentable over Izumisawa in view of Nishitani, and further in view of Sato (Japanese Patent Publication No. JP-H04125693 A, April 27, 1992), hereinafter Sato, to the extent understood.
Regarding claim 4, Izumisawa (in view of Nishitani) teaches a signal processing device comprising the features of claim 1 as discussed above. Izumisawa (as modified by Nishitani) does not explicitly disclose that the circuitry is further configured to receive, via a touch panel, a user input corresponding to selection of the conversion function among a plurality of conversion functions.

However, Sato discloses that the circuitry is further configured to receive, via a touch panel, a user input (Sato lines 326-419: "FIG. 13(A) is a schematic diagram showing the configuration. The parameter input device 22 has an operation panel on the front surface, which has switches 65 for selecting the type of parameter, switches 66 for selecting the type of variable, a series of volumes 67 for adjusting the magnitude of the parameter, and a display 69.") corresponding to selection of the conversion function among a plurality of conversion functions (Sato lines 420-461: "The manner in which the parameters change is displayed on the display 69. FIGS. 13(B) and (C) show examples of the manner in which parameters thus inputted change. FIG. 13(B) shows how the parameters change like a line graph in accordance with the values input at each point." Fig. 13(b) discloses a polygonal function, and fig. 13(c) discloses a nonlinear curve.).

It would have been prima facie obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the signal processing device of Izumisawa (as modified by Nishitani) by adding the user input corresponding to selection of the conversion function among a plurality of conversion functions of Sato to choose the amount of discontinuity at the inflection points in the function (Sato lines 420-461).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHILIP SCOLES whose telephone number is (703)756-1831. The examiner can normally be reached Monday-Friday, 8:30-4:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Dedei Hammond, can be reached on 571-270-7938. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PHILIP G SCOLES/
Examiner, Art Unit 2837

/JEFFREY DONELS/
Primary Examiner, Art Unit 2837

Prosecution Timeline

Feb 14, 2022
Application Filed
Sep 26, 2024
Non-Final Rejection — §103, §112
Dec 30, 2024
Response Filed
Apr 16, 2025
Final Rejection — §103, §112
Jul 22, 2025
Request for Continued Examination
Jul 23, 2025
Response after Non-Final Action
Oct 14, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12603073
ELECTRONIC PERCUSSION INSTRUMENT, CONTROL DEVICE FOR ELECTRONIC PERCUSSION INSTRUMENT, AND CONTROL METHOD THEREFOR
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12597405
AUTO-RECORDING FOR MUSICAL INSTRUMENT
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12597406
ELECTRONIC CYMBAL AND STRIKING DETECTION METHOD
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12586552
MULTI-LEVEL AUDIO SEGMENTATION USING DEEP EMBEDDINGS
Granted Mar 24, 2026 • 2y 5m to grant

Patent 12579962
DEVICE AND ELECTRONIC MUSICAL INSTRUMENT
Granted Mar 17, 2026 • 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 56%
With Interview (+21.3%): 77%
Median Time to Grant: 3y 10m
PTA Risk: High
Based on 54 resolved cases by this examiner. Grant probability derived from career allow rate.
