Prosecution Insights
Last updated: April 19, 2026
Application No. 18/729,842

INFORMATION PROCESSING DEVICE, ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT SYSTEM, METHOD, AND STORAGE MEDIUM

Non-Final OA · §101 · §102 · §103 · §112
Filed
Jul 17, 2024
Examiner
SULTANA, NADIRA
Art Unit
2653
Tech Center
2600 — Communications
Assignee
Casio Computer Co. Ltd.
OA Round
1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Grants 74% of resolved cases — above average
Career Allow Rate: 74% (72 granted / 97 resolved; +12.2% vs TC avg)
Interview Lift: strong, +31.1% (allowance rate with vs. without an interview, among resolved cases with an interview)
Typical timeline: 3y 0m average prosecution · 29 applications currently pending
Career history: 126 total applications across all art units
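For readers who want to trace where these headline figures come from, the arithmetic is simple, though the tool's exact formulas are not disclosed; this is one consistent reading: 72 grants out of 97 resolved cases gives the 74% career allow rate, and subtracting the +31.1-point interview lift from the 99% with-interview probability implies a without-interview baseline near 68%. A minimal Python sketch under those assumptions:

```python
# Back-of-the-envelope check of the examiner stats shown above.
# The dashboard's exact formulas are not disclosed; this is one
# consistent reading of the displayed numbers, not the tool's method.

granted, resolved = 72, 97
career_allow_rate = granted / resolved
print(f"career allow rate: {career_allow_rate:.1%}")  # ~74.2%, shown as 74%

with_interview = 0.99    # "With Interview" probability shown on the page
interview_lift = 0.311   # "+31.1%" interview lift shown on the page
without_interview = with_interview - interview_lift
print(f"implied without-interview rate: {without_interview:.1%}")  # ~67.9%
```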

Statute-Specific Performance

§101: 25.4% (-14.6% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§103: 54.8% (+14.8% vs TC avg)
§112: 3.6% (-36.4% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 97 resolved cases
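Notably, all four displayed deltas back out the same Tech Center average of 40% (25.4 + 14.6 = 54.8 - 14.8 = 12.0 + 28.0 = 3.6 + 36.4 = 40), consistent with the original chart's single reference line. A short sketch that reproduces the deltas under that reading; note the page does not say what each per-statute rate measures (e.g., share of this examiner's Office Actions raising that statute):

```python
# Recompute the "vs TC avg" deltas shown above. Every displayed delta
# backs out the same Tech Center average (40%), matching the chart's
# single reference line. The meaning of each rate is not stated on the page.

TC_AVG = 0.40  # implied by all four displayed deltas

examiner_rates = {"101": 0.254, "102": 0.120, "103": 0.548, "112": 0.036}

for statute, rate in examiner_rates.items():
    delta_pts = (rate - TC_AVG) * 100
    print(f"§{statute}: {rate:.1%} ({delta_pts:+.1f} pts vs TC avg)")
```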

Office Action

§101 §102 §103 §112
DETAILED ACTION

Notice of AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement
The information disclosure statements (IDS) submitted on 07/17/2024 and 08/27/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Priority
Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in priority Application No. JP2022-006321, filed on 01/19/2022.

Status of Claims
Claims were amended pursuant to a preliminary amendment filed together with the initial set on 07/17/2024; for examination purposes, the claim set with the amended claims has been used. Claims 1 and 3-10 were amended, and claims 11-14 were newly added. Claims 1-14 are pending, of which claims 1, 9, and 10 are independent.

Claim Objections
Claim 2 is objected to because of the following informalities: claim 2 recites the limitation in line 2, "the controller outputs the parameters", which should be either "the controller output the parameters" or "the controller outputs the parameter". Appropriate correction is required.

Claim Rejections - 35 USC § 101
Claims 1-14 were evaluated under 35 U.S.C. 101 for being directed to an abstract idea without significantly more; however, because the "operation element" is defined as a plurality of keys in a musical instrument, no 35 U.S.C. 101 rejection has been given.

Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2-5 and 11-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.
Independent claim 1 recites the limitation "a parameter for a syllable start frame" in line 2 and "a parameter for a vowel frame" in line 6. But claim 2 (lines 2 and 4), claim 3 (line 2), claim 4 (line 2), and claim 5 (line 5) recite "the parameters", which is ambiguous and lacks proper antecedent basis, because it is unclear which parameter of claim 1 is being referred to in those claims. The claimed invention is not clear. A similar issue also exists in independent claim 9 and dependent claims 11-14. Therefore, claims 2-5 and 11-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter.

Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 6-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hamano et al. (US 10504502 B2), hereinafter referenced as Hamano.

Regarding claim 1, Hamano teaches an information processing device (Hamano: Column 3, lines 31-35, Fig. 1; a sound control device (information processing device) corresponds to the singing sound generating apparatus 1) comprising: a controller that, in response to detection of an operation on an operation element, causes sound emission of a syllable to start based on a parameter for a syllable start frame and (Hamano: Column 3, lines 21-42; Fig. 1 illustrates a singing sound generating apparatus 1 (information processing device) where CPU 10 is a central processing unit (controller) that controls the whole singing sound generating apparatus 1. Column 4, lines 11-20, Fig. 4: keyboard 40 (operation element) is provided as performance operator 16 of the singing sound generating apparatus 1. Column 7, lines 32-50, Figs. 2B, 4: when one of the keys on the keyboard 40 has started to be pressed and reaches the upper position, the first sensor 41a is turned on, and a sound generation instruction of the first key-on is accepted. The first syllable c1 is acquired (steps S20-S22) and the sound generation timing (parameter) corresponding to the consonant sound type is set. Consonant component 43a of syllable c1 (which is "h") is generated), in a case where the operation (Hamano: Column 7, lines 50-67, and column 8, lines 1-6, Figs. 2B, 4: when the key corresponding to the key-on is pressed down to the intermediate position b and the second sensor 41b is turned on at time t2, sound generation of the vowel sound of the acquired syllable c1 is started in the sound source 13 (steps S30 to S34). "a" in the vowel component 43b is repeatedly reproduced until time t3 (parameter), at which the finger moves away from the key corresponding to the key-on and the first sensor 41a turns from on to off).

Claim 9 is a method claim that is performed by a controller of an information processing device (Hamano: Column 3, lines 21-42; Fig. 1 illustrates a singing sound generating apparatus 1 (information processing device) where CPU 10 is a central processing unit (controller) that controls the whole singing sound generating apparatus 1), performing the steps in device claim 1 above. As such, claim 9 is similar in scope and content to claim 1, and therefore claim 9 is rejected under a similar rationale as presented against claim 1 above.

Claim 10 is a non-transitory computer-readable storage medium claim storing a program that causes a controller of an information processing device (Hamano: Column 3, lines 21-49; Fig. 1 illustrates a singing sound generating apparatus 1 (information processing device) where CPU 10 is a central processing unit (controller) that controls the whole singing sound generating apparatus 1, with a ROM (Read Only Memory) 11, a RAM (Random Access Memory) 12, and a data memory 18. The ROM 11 is a nonvolatile memory in which a control program and various data are stored. The RAM 12 is a volatile memory used for a work area of the CPU 10), performing the steps in device claim 1 above. As such, claim 10 is similar in scope and content to claim 1, and therefore claim 10 is rejected under a similar rationale as presented against claim 1 above.

Regarding claim 6, Hamano teaches the information processing device according to claim 1. Hamano further teaches wherein the case where the operation (Hamano: Column 7, lines 50-63, Figs. 2B, 4: when the key corresponding to the key-on is pressed down to the intermediate position b and the second sensor 41b is turned on at time t2, sound generation of the vowel sound of the acquired syllable c1 is started in the sound source 13 (steps S30 to S34). The envelope ENV1 is an envelope of a sustain sound in which the sustain persists while the key is pressed), and wherein the operation (Hamano: Column 7, lines 64-67, and column 8, lines 1-6, Figs. 2B, 4: "a" in the vowel component 43b is repeatedly reproduced until time t3 (parameter), at which the finger moves away from the key corresponding to the key-on (pressed key released) and the first sensor 41a turns from on to off).

Regarding claim 7, Hamano teaches the information processing device according to claim 1. Hamano further teaches an electronic musical instrument comprising: the information processing device according to claim 1 (Hamano: Column 3, lines 21-35 and 55-64, Fig. 1; a sound control device (information processing device) corresponds to the singing sound generating apparatus 1 (electronic musical instrument), which consists of a CPU (Central Processing Unit) 10, a ROM 11, a RAM 12, a sound source 13, a sound system 14, a display unit (display) 15, a performance operator 16, a setting operator 17, a data memory 18, and a bus 19. The performance operator 16 generates performance information such as key-on and key-off, pitch, and velocity, based on the on/off of the plurality of sensors, as a MIDI (musical instrument digital interface) message); and a plurality of operation elements (Hamano: Column 4, lines 34-41, Fig. 4: keyboard 40 includes a plurality of white keys 40a and black keys 40b (operation elements)).

Regarding claim 8, Hamano teaches the information processing device according to claim 1. Hamano further teaches an electronic musical instrument system comprising: the information processing device according to claim 1 (Hamano: Column 3, lines 21-35 and 55-64, Fig. 1; a sound control device (information processing device) corresponds to the singing sound generating apparatus 1 (electronic musical instrument), which consists of a CPU (Central Processing Unit) 10, a ROM 11, a RAM 12, a sound source 13, a sound system 14, a display unit (display) 15, a performance operator 16, a setting operator 17, a data memory 18, and a bus 19. The performance operator 16 generates performance information such as key-on and key-off, pitch, and velocity, based on the on/off of the plurality of sensors, as a MIDI (musical instrument digital interface) message. Column 14, lines 17-25: a computer system may realize the functions of the singing sound generating apparatus 1 (electronic musical instrument) by recording a program on a computer-readable recording medium, reading the program recorded on this recording medium into a computer system, and executing the program); and an electronic musical instrument including a plurality of operation elements (Hamano: Column 4, lines 34-41, Fig. 4: the singing sound generating apparatus 1 (electronic musical instrument) includes a performance operator 16, which is keyboard 40 and includes a plurality of white keys 40a and black keys 40b (operation elements)).

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-5 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hamano et al. (US 10504502 B2), hereinafter referenced as Hamano, in view of Nakamura et al. (US 20190318712 A1), hereinafter referenced as Nakamura.

Regarding claim 2, Hamano teaches the information processing device according to claim 1. Hamano fails to explicitly teach wherein the controller outputs the parameters to a vocal synthesizer of an electronic musical instrument, causes the vocal synthesizer to generate sound waveform data based on the parameters, and causes a sound based on the sound waveform data to be emitted. However, Nakamura does teach the claimed wherein the controller outputs the parameters to a vocal synthesizer of an electronic musical instrument (Nakamura: Para. [0030], Fig. 2: the electronic keyboard instrument 100 consists of a central processing unit 201 (controller), a sound source large-scale integrated circuit (LSI) 204, and a voice synthesis LSI 205. Para. [0037], [0039]: Fig. 3 illustrates the voice synthesis LSI 205, which includes a voice training section 301 and a voice synthesis section 302. The voice synthesis LSI 205 is input with music data 215 instructed by the CPU 201. With this, the voice synthesis LSI 205 synthesizes and outputs singing voice inference data for a given singer 217), causes the vocal synthesizer to generate sound waveform data based on the parameters (Nakamura: Para. [0044], [0048], Fig. 3: the voice synthesis section 302 includes a vocalization model unit 308, which includes a sound source generator 309 and a synthesis filter 310.
The sound source generator 309 generates a sound source signal that periodically repeats at a fundamental frequency (F0) contained in the sound source information 319), and causes a sound based on the sound waveform data to be emitted (Nakamura: Para. [0048], Fig. 3: the synthesis filter 310 forms a digital filter that models the vocal tract on the basis of a spectral information 318 sequence sequentially input thereto from the acoustic model unit 306 and, using the sound source signal input from the sound source generator 309 as an excitation signal, generates and outputs singing voice inference data for a given singer 217 in the form of a digital signal). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Nakamura's teaching of an electronic musical instrument into the method of a sound control device taught by Hamano, because the parameter generation algorithm that employs dynamic features would allow the quality of voice synthesis to be improved (Nakamura, Para. [0059]).

Claim 11 is a method claim performing the steps in device claim 2 above. As such, claim 11 is similar in scope and content to claim 2, and therefore claim 11 is rejected under a similar rationale as presented against claim 2 above.

Regarding claim 3, Hamano teaches the information processing device according to claim 1 [[or 2]]. Hamano fails to explicitly teach wherein the parameters are parameters inferred by a learned model generated by machine learning of a human voice. However, Nakamura does teach the claimed wherein the parameters are parameters inferred by a learned model generated by machine learning of a human voice (Nakamura: Para. [0039], [0041], [0043]: Fig. 3 illustrates the voice synthesis LSI 205, which includes a voice training section 301 and a voice synthesis section 302. The voice training section 301 includes a training text analysis unit 303, a training acoustic feature extraction unit 304, and a model training unit 305. The training acoustic feature extraction unit 304 receives singing voice data 312 of a given singer who sang the aforementioned lyric text, and extracts and outputs a training acoustic feature sequence 314 representing the singing voice data for a given singer 312. The model training unit 305 outputs, as training result 315, model parameters expressing the acoustic model). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Nakamura's teaching of an electronic musical instrument into the method of a sound control device taught by Hamano, because the parameter generation algorithm that employs dynamic features would allow the quality of voice synthesis to be improved (Nakamura, Para. [0059]).

Claim 12 is a method claim performing the steps in device claim 3 above. As such, claim 12 is similar in scope and content to claim 3, and therefore claim 12 is rejected under a similar rationale as presented against claim 3 above.

Regarding claim 4, Hamano teaches the information processing device according to claim 1. Hamano fails to explicitly teach wherein the parameters each include a spectrum parameter. However, Nakamura does teach the claimed wherein the parameters each include a spectrum parameter (Nakamura: Para. [0048], Fig. 3: the acoustic features expressed by the training acoustic feature sequence 314 and the acoustic feature sequence 317 include spectral information that models the vocal tract of a person and sound source information that models the vocal cords of a person. A mel-cepstrum, line spectral pairs (LSP), or the like may be employed for the spectral parameters). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Nakamura's teaching of an electronic musical instrument into the method of a sound control device taught by Hamano, because the parameter generation algorithm that employs dynamic features would allow the quality of voice synthesis to be improved (Nakamura, Para. [0059]).

Claim 13 is a method claim performing the steps in device claim 4 above. As such, claim 13 is similar in scope and content to claim 4, and therefore claim 13 is rejected under a similar rationale as presented against claim 4 above.

Regarding claim 5, Hamano teaches the information processing device according to claim 1. Hamano fails to explicitly teach wherein, in response to a change instructing operation for a tone of a sound to be emitted, the change instructing operation being made by a user at a timing including a timing during a performance, the controller changes the parameters to parameters for another tone. However, Nakamura does teach the claimed wherein, in response to a change instructing operation for a tone of a sound to be emitted, the change instructing operation being made by a user at a timing including a timing during a performance, the controller changes the parameters to parameters for another tone (Nakamura: Para. [0062]-[0064]: Fig. 5B illustrates a scenario where the pitch G4, specified by a user pressing a key on the keyboard at timing t3, which corresponds to an original (correct) vocalization timing, does not match the third pitch B4 that should have been vocalized at timing t3. The CPU controls the progression of lyrics, and the progression of automatic accompaniment has been stopped at timing t3 in Fig. 5B; if the user specifies the pitch B4 at a timing t3' (changing parameter), which matches the third pitch B4 that should have been specified, the CPU 201 will output the "i/in" (third character(s)') singing voice in the "Ki/twin" (the third character(s)) lyric data corresponding to the third pitch B4, for which sound was supposed to be produced at timing t3, and resumes the progression of lyrics and the progression of automatic accompaniment). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Nakamura's teaching of an electronic musical instrument into the method of a sound control device taught by Hamano, because the parameter generation algorithm that employs dynamic features would allow the quality of voice synthesis to be improved (Nakamura, Para. [0059]).

Claim 14 is a method claim performing the steps in device claim 5 above. As such, claim 14 is similar in scope and content to claim 5, and therefore claim 14 is rejected under a similar rationale as presented against claim 5 above.

Conclusion
Listed below is the prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure.

Iwamoto et al. (US 20210065669 A1) teaches a musical sound generation device that includes a control device which, when a performance operator among a plurality of performance operators has been operated for a part that has been set to sound a predetermined number of simulated voices of an analog synthesizer, assigns to a sound generation circuit a sounding parameter of one or two or more voices, which form a timbre of a simulated voice of the analog synthesizer corresponding to the operated performance operator and are selected from a plurality of sounding voices, and assigns to the sound generation circuit an information set selected from a plurality of information sets, each including a variation value that applies a variation to the sounding parameter of the one or two or more voices; and a sound generation circuit that performs a sounding process of the one or two or more voices using the sounding parameter and the information set.

Ackerman et al. (US 20210312897 A1) teaches a method and system that may provide for interactive song generation. In one aspect, a computer system may present options for selecting a background track. The computer system may generate suggested lyrics based on parameters entered by the user. User interface elements allow the computer system to receive input of lyrics. As the user inputs lyrics, the computer system may update its suggestions of lyrics based on the previously input lyrics. In addition, the computer system may generate proposed melodies to go with the lyrics and the background track. The user may select from among the melodies created for each portion of lyrics. The computer system may optionally generate a computer-synthesized vocal(s) or capture a vocal track of a human voice singing the song. The background track, lyrics, melodies, and vocals may be combined to produce a complete song without requiring musical training or experience by the user.

Kashiwase et al. (WO 2019003350 A1) teaches a singing sound generation device capable of defining the sound production pitch of the singing sound to be generated at a period that corresponds to the syllable to be produced. A CPU 10 obtains a sound production or sound production removal instruction specifying a pitch, determines the determination duration T according to the obtained syllable information, defines a single sound production pitch after the determination duration T elapses on the basis of the obtained sound production or sound production removal instruction, and generates a singing sound on the basis of the obtained syllable information and the defined sound production pitch.

Lu et al. (WO 2021101665 A1) teaches methods and apparatuses for singing voice synthesis. First music score phoneme information extracted from a music score may be received, the first music score phoneme information comprising a first phoneme, and a pitch and a beat of a note corresponding to the first phoneme. A fundamental frequency residual and spectral parameters corresponding to the first phoneme may be generated based on the first music score phoneme information. A fundamental frequency corresponding to the first phoneme may be obtained through regulating the pitch of the note with the fundamental frequency residual. An acoustic waveform corresponding to the first phoneme may be generated based at least in part on the fundamental frequency and the spectral parameters.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NADIRA SULTANA, whose telephone number is (571) 272-4048.
The examiner can normally be reached M-F, 7:30 am-5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras D. Shah, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NADIRA SULTANA/
Examiner, Art Unit 2653
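To make the anticipation mapping easier to follow: the Hamano mechanism cited against claim 1 amounts to a three-event key state machine. First sensor on (key passes the upper position) acquires the next syllable and schedules its consonant with a timing that depends on consonant type; second sensor on (key reaches the intermediate position) starts the vowel, which loops while the key is held; first sensor off (key released) stops the vowel loop. Below is a minimal Python sketch of that sequence; all names and delay values are hypothetical illustrations, not Hamano's actual implementation (which is not public):

```python
# Hypothetical sketch of the two-sensor key-to-syllable sequence the Office
# Action attributes to Hamano (US 10504502 B2). Names and delay values are
# illustrative assumptions only.

CONSONANT_DELAY_MS = {"plosive": 10, "fricative": 40, "nasal": 25}  # assumed values

class KeySyllablePlayer:
    def __init__(self, lyrics):
        self.lyrics = list(lyrics)   # e.g. [("h", "a"), ("r", "u")]
        self.index = 0
        self.current = None          # (consonant, vowel) currently sounding
        self.vowel_looping = False

    def on_first_sensor_on(self, consonant_type="fricative"):
        # Key reached the upper position: accept the key-on instruction,
        # acquire the next syllable, and schedule its consonant component
        # with a timing that depends on the consonant type (the "parameter").
        self.current = self.lyrics[self.index]
        delay = CONSONANT_DELAY_MS.get(consonant_type, 20)
        print(f"t1: consonant '{self.current[0]}' scheduled after {delay} ms")

    def on_second_sensor_on(self):
        # Key reached the intermediate position: start the vowel component,
        # which repeats while the key stays pressed.
        self.vowel_looping = True
        print(f"t2: vowel '{self.current[1]}' starts and loops while key is held")

    def on_first_sensor_off(self):
        # Finger left the key: stop the vowel loop and advance to the next lyric.
        self.vowel_looping = False
        self.index += 1
        print("t3: vowel loop stops, syllable complete")

player = KeySyllablePlayer([("h", "a")])
player.on_first_sensor_on()   # key passes the upper position
player.on_second_sensor_on()  # key reaches the intermediate position
player.on_first_sensor_off()  # key released
```

On the examiner's reading, the consonant timing set at t1 and the vowel repetition ending at t3 are what correspond to the claimed "parameter for a syllable start frame" and "parameter for a vowel frame."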

Prosecution Timeline

Jul 17, 2024
Application Filed
Jan 31, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner in similar technology areas

Patent 12603086
CONTEXTUAL EDITABLE SPEECH RECOGNITION METHODS AND SYSTEMS
2y 5m to grant · Granted Apr 14, 2026
Patent 12591747
ENTITY-CONDITIONED SENTENCE GENERATION
2y 5m to grant · Granted Mar 31, 2026
Patent 12573413
AUDIO CODING METHOD AND RELATED APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant · Granted Mar 10, 2026
Patent 12567420
METHOD AND APPARATUS FOR CONTROLLING SOUND RECEIVING DEVICE BASED ON DUAL-MODE AUDIO THREE-DIMENSIONAL CODE
2y 5m to grant · Granted Mar 03, 2026
Patent 12536992
ELECTRONIC DEVICE AND METHOD FOR PROVIDING VOICE RECOGNITION SERVICE
2y 5m to grant · Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 99% (+31.1%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 97 resolved cases by this examiner. Grant probability is derived from the career allow rate.
