Prosecution Insights
Last updated: April 19, 2026
Application No. 17/694,412

INFORMATION PROCESSING DEVICE, ELECTRONIC MUSICAL INSTRUMENT, AND INFORMATION PROCESSING METHOD

Status: Final Rejection — §103
Filed: Mar 14, 2022
Examiner: QIN, JIANCHUN
Art Unit: 2837
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Casio Computer Co., Ltd.
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 69% (691 granted / 999 resolved; +1.2% vs TC avg — above average)
Interview Lift: +13.8% (moderate) for resolved cases with an interview
Typical Timeline: 2y 6m average prosecution; 39 applications currently pending
Career History: 1,038 total applications across all art units
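The headline figures above are simple arithmetic on the career counts. A minimal sketch, assuming the interview lift is additive on the base allow rate (the report implies this but does not state it):

```python
# Deriving the dashboard's headline figures from the raw career counts
# (counts taken from the report; additive interview lift is an assumption).
granted, resolved, pending = 691, 999, 39

allow_rate = granted / resolved           # 0.6917 -> reported as 69%
total_apps = resolved + pending           # 1038 total applications
with_interview = allow_rate + 0.138       # 0.8297 -> reported as 83%

print(round(allow_rate * 100), total_apps, round(with_interview * 100))  # 69 1038 83
```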

Statute-Specific Performance

§101: 4.7%  (-35.3% vs TC avg)
§103: 51.9% (+11.9% vs TC avg)
§102: 34.3% (-5.7% vs TC avg)
§112: 4.9%  (-35.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 999 resolved cases.
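The per-statute deltas can be checked against the Tech Center baseline they were computed from. A quick sketch (the report does not say exactly what the percentages measure, so the dictionary labels here are assumptions):

```python
# Back out the Tech Center average estimate from each examiner figure and its
# stated delta: tc_avg = examiner_value - delta.
examiner = {"§101": 4.7, "§103": 51.9, "§102": 34.3, "§112": 4.9}
delta    = {"§101": -35.3, "§103": 11.9, "§102": -5.7, "§112": -35.1}

tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # every statute backs out to the same 40.0 baseline estimate
```

Every delta backs out to the same 40.0% baseline, which suggests the tool uses a single Tech Center estimate across statutes.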

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

2. Applicant's arguments received 02/09/2026 have been considered but are moot in view of the new ground(s) of rejection. A detailed response is given in sections 3-6 as set forth below in this Office action.

Claim Rejections - 35 USC § 103

3. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1, 4, 6, 9, 11-14 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Lezzoum et al. (US 12087284 B1) in view of SAKA (JP H09258758 A, machine translation).

Regarding claims 1 and 14, Lezzoum discloses an information processing device and a method for practicing the device for voice synthesis (Abstract), comprising: at least one processor (col. 7, lines 8-16) implementing a first voice model and a second voice model different from the first voice model (col. 11, lines 60-62), the at least one processor performing the following: receiving data indicating a specified pitch (col. 8, lines 12-15 and 40-43; col. 9, lines 5-7 and 21-26); and causing the first voice model to output first data (e.g., a male voice) and the second voice model to output second data (e.g., a female voice; see col. 11, lines 49-68), and generating and outputting third data (e.g., the synthesized speech, or an audio signal having sounds corresponding to the text generated from the response generation component 116) corresponding to the specified pitch based on the first data and second data (col. 11, lines 49-68; col. 12, lines 1-17).

Lezzoum does not mention explicitly: wherein the first voice model has a first sound range, and the second voice model has a second sound range different from and not overlapping with the first sound range, so that there is a non-overlapping sound range between the first and second sound ranges; and, when the specified pitch belongs to the non-overlapping range, causing the first voice model to output first data and the second voice model to output second data, and generating and outputting third data corresponding to the specified pitch based on the first data and second data.

SAKA discloses an information processing device and a method for practicing the device for voice synthesis (Abstract), comprising at least one processor configured to: implement a first (male) voice model (Fig. 9: S53-S55) and a second (female) voice model (Fig. 9: S56-S58) different from the first voice model, wherein the first voice model has a first sound range and the second voice model has a second sound range different from and not overlapping with the first sound range, so that there is a non-overlapping sound range between the first and second sound ranges (para. 0027: "male and female singers have different singing ranges"); receive data indicating a specified pitch (see the discussion of pitch changing unit 45 of Fig. 2; see also para. 0029: "This pitch change data is sent from the I/F 42 in FIG. 2 to the pitch change section 45, where conditions for pitch change are set"); and, when the specified pitch belongs to the non-overlapping range (para. 0027-0029), cause the first voice model to output first data and the second voice model to output second data, and generate and output third data corresponding to the specified pitch based on the first data and second data (see the discussion of S59-S60 of Fig. 9).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate SAKA's teaching of male/female voice models into Lezzoum to arrive at the claimed invention. The motivation would have been to use the computer-based synthesizer to generate information of synthetic voices having a harmonic overtone relation with the input voice, based on the information of the voice (input voice) input from the voice inputting means (SAKA, para. 0005-0006).

Regarding claim 4, Lezzoum discloses: wherein the at least one processor further receives information (acoustic cues 136 or non-acoustic cues 138) on a music piece to be played, and when the information on the music piece indicates that a sound range of the music piece does not correspond to a sound range of the first voice model or a sound range of the second voice model, the at least one processor generates the third data (see the discussion about acoustic cues 136 and non-acoustic cues 138).

Regarding claim 6, Lezzoum discloses: a performance unit for specifying a pitch (col. 8, lines 12-15 and 40-43; col. 9, lines 5-7 and 21-26); and the information processing device including the at least one processor (col. 7, lines 7-17), as set forth in claim 1, the at least one processor receiving the data indicating the specified pitch from the performance unit (col. 8, lines 12-15 and 40-43; col. 9, lines 5-7 and 21-26).

Regarding claim 9, Lezzoum discloses the claimed invention (see the discussion of claim 4 above).

Regarding claim 11, Lezzoum discloses an electronic musical instrument (Abstract; Fig. 1), comprising: a performance unit (the unit for receiving the user's utterance 102) for specifying a pitch (col. 8, lines 12-15 and 40-43; col. 9, lines 5-7 and 21-26); a processor (col. 7, lines 7-17); and a communication interface configured to communicate with an information processing device (e.g., 100 in Fig. 1) that is externally provided (col. 8, lines 22-34; col. 9, lines 51-54). The rest of the claimed limitations are rendered obvious by the combination of Lezzoum and SAKA as discussed for claim 1 above.

Regarding claims 12 and 13, Lezzoum discloses the claimed invention (see the discussion of claim 1 above).

Regarding claim 17, Lezzoum discloses the claimed invention (see the discussion of claim 4 above).

5. Claims 2, 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Lezzoum et al. in view of SAKA, and further in view of Ackerman et al. (US 20210312897 A1).

Regarding claims 2, 7 and 15, Lezzoum does not mention explicitly: wherein the first voice model includes a trained model that has been trained with a singing voice of a first singer, and wherein the second voice model includes a trained model that has been trained with a singing voice of a second singer different from the first singer.

Ackerman discloses an information processing device for voice synthesis (Abstract; para. 0091), comprising: a synthesizer implemented based on a first voice model and a second voice model different from the first voice model, wherein the first voice model includes a trained model that has been trained with a singing voice of a first singer, and wherein the second voice model includes a trained model that has been trained with a singing voice of a second singer different from the first singer (para. 0096).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Ackerman's teaching of male/female voice models into the combination of Lezzoum and SAKA to arrive at the claimed invention. The motivation would have been to use the computer-based synthesizer to generate a vocal track in response to a selection of male/female voice(s), or multiple voices, for the vocal track of the song (Ackerman, para. 0091, 0096).

6. Claims 3, 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Lezzoum et al. in view of SAKA, and further in view of Kagoshima et al. (US 7251601 B2).

Regarding claims 3, 8 and 16, Lezzoum does not mention explicitly: the at least one processor generating the third data by an interpolation calculation between formant frequencies corresponding to the first data and formant frequencies corresponding to the second data.

Kagoshima discloses a speech synthesizer (Abstract; Fig. 1), comprising a processor configured to generate output data by an interpolation calculation between formant frequencies (402) corresponding to first input data and formant frequencies (404) corresponding to second input data (col. 4, lines 16-50).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Kagoshima's teaching of interpolation between formant frequencies into the combination of Lezzoum and SAKA to arrive at the claimed invention. The motivation would have been to configure and program the synthesizer to generate a plurality of pitch waveforms and superpose the pitch waveforms according to a pitch period to generate a speech signal (Kagoshima, col. 2, lines 24-26).

Conclusion

7. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Contact Information

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIANCHUN QIN, whose telephone number is (571) 272-5981. The examiner can normally be reached 9 AM-5:30 PM EST, M-F.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Dedei Hammond, can be reached at (571) 270-7938. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/JIANCHUN QIN/Primary Examiner, Art Unit 2837
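The reply-period rules in the final action reduce to calendar-month arithmetic. A simplified sketch using the Feb 20, 2026 mailing date from the prosecution timeline (it ignores the weekend/federal-holiday rollover of 37 CFR 1.7 and the two-month advisory-action rule, so treat the dates as approximations, not docketing advice):

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of short months."""
    m = d.month - 1 + months
    y, m = d.year + m // 12, m % 12 + 1
    return date(y, m, min(d.day, calendar.monthrange(y, m)[1]))

mailed = date(2026, 2, 20)             # Final Rejection mailing date (from timeline)
shortened = add_months(mailed, 3)      # shortened statutory period (THREE MONTHS)
statutory_max = add_months(mailed, 6)  # absolute statutory cutoff (SIX MONTHS)

print(shortened, statutory_max)  # 2026-05-20 2026-08-20
```

Extension fees under 37 CFR 1.136(a) accrue for each month the reply lands after `shortened`, and no extension can push the reply past `statutory_max`.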

Prosecution Timeline

Mar 14, 2022 — Application Filed
Sep 06, 2025 — Non-Final Rejection (§103)
Feb 09, 2026 — Response Filed
Feb 20, 2026 — Final Rejection (§103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592215 — Wireless Switching System for Musical Instruments and Related Methods — granted Mar 31, 2026 (2y 5m to grant)
Patent 12583712 — Elevator Communication System — granted Mar 24, 2026 (2y 5m to grant)
Patent 12586553 — Information Processing Method and Electronic Musical Instrument — granted Mar 24, 2026 (2y 5m to grant)
Patent 12570498 — Brake Static Plate Assembly, Brake and Elevator System — granted Mar 10, 2026 (2y 5m to grant)
Patent 12562074 — Playback System for Synchronizing Audiovisual Works — granted Feb 24, 2026 (2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 83% (+13.8%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate

Based on 999 resolved cases by this examiner. Grant probability is derived from the career allow rate.
