Prosecution Insights
Last updated: April 19, 2026
Application No. 18/557,336

ACOUSTIC DEVICE, ACOUSTIC DEVICE CONTROL METHOD, AND PROGRAM

Status: Final Rejection (§102)
Filed: Oct 26, 2023
Examiner: FLANDERS, ANDREW C
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: AlphaTheta Corporation
OA Round: 2 (Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 74% (above average; +12.1% vs TC avg; 574 granted / 775 resolved)
Interview Lift: +14.0% (moderate; measured across resolved cases with an interview)
Avg Prosecution: 3y 3m (typical timeline); 9 applications currently pending
Total Applications: 784 (career history, across all art units)

Statute-Specific Performance

§101: 10.3% (-29.7% vs TC avg)
§103: 38.7% (-1.3% vs TC avg)
§102: 31.6% (-8.4% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 775 resolved cases.
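The "vs TC avg" deltas above all point back to a single implied Tech Center baseline. A quick consistency check (rates and deltas copied from the table; the baseline is derived, not sourced from the report):

```python
# Statute-specific allowance rates from the table above:
# (statute, examiner rate in %, delta vs Tech Center average in points)
rows = [
    ("§101", 10.3, -29.7),
    ("§103", 38.7, -1.3),
    ("§102", 31.6, -8.4),
    ("§112", 8.3, -31.7),
]

# delta = rate - tc_avg, so the implied TC average is rate - delta.
implied = {statute: round(rate - delta, 1) for statute, rate, delta in rows}
print(implied)  # every row implies the same 40.0% baseline
```

All four rows recover the same 40.0% figure, so the deltas are internally consistent with one Tech Center average.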

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claim(s) under Yamashita et al. and/or McNeeney have been considered but are moot because the new ground of rejection necessitated by applicant's amendment does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Information Disclosure Statement

The information disclosure statement filed 05 December 2024 fails to comply with 37 CFR 1.98(a)(3)(i) because it does not include a concise explanation of the relevance, as it is presently understood by the individual designated in 37 CFR 1.56(c) most knowledgeable about the content of the information, of each reference listed that is not in the English language. Specifically, citation 1 under Non-Patent Literature Documents, "Japanese Notice of Allowance dated November 19, 2024, Application No. 2023-516000; English translation included," does not include the stated English translation, nor a concise explanation of the relevance. It has been placed in the application file, but the information referred to therein has not been considered.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1 – 16 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Morsy et al. (hereinafter Morsy, U.S. Patent Application Publication 2023/0335091).

Regarding Claim 1, Morsy discloses: An acoustic device for mixing at least two music pieces (e.g. device of fig. 1 which implements the mixing detailed in the twelfth embodiment of Figs. 17 and 18, which may comprise any or all of the above-mentioned features; [0173]; in other words indicating that embodiment 12 encompasses all prior teachings of embodiments 1-11 for the purposes of the following rejection) comprising a first music piece (e.g. input audio file A) and a second music piece (e.g. input audio file B), the acoustic device comprising:

a detector configured to detect a tempo of the first music piece and a tempo of the second music piece (e.g. device 10 determines a tempo (e.g. a BPM value (beats per minute)) of track A and track B; [0160]), the first and second music pieces each comprising a plurality of parts comprising a first part and a second part (e.g. input audio files A and B both have multiple decomposed tracks, see First and Second corresponding to A, and Third and Fourth corresponding to B in Fig. 2; and further see specific decomposed tracks of A and B in Figs. 17 and 18), the first part corresponding to a drum sound of the respective first and second music pieces (e.g. first note [0182] indicating audio input file B may be processed in the manner as input audio file A; further note drum track D1 decomposed from audio file A in Figs. 17, 18; which is also decomposed of B per [0182]'s teaching of processing B in the same manner as A), the second part corresponding to a sound of a musical instrument other than the drum sound of the respective first and second music pieces (e.g. any of bass, vocal, or complement tracks D2, D3, and D4 respectively, decomposed from audio file A in Figs. 17, 18; which are also decomposed of B per [0182]'s teaching of processing B in the same manner as A); and

a playback controller configured to gradually change (e.g. tempo matching done in real-time using crossfades allowing the songs to be audible at the same time and not disturb the flow of the music; [0069]) a tempo of the first part of the first music piece or a tempo of the first part of the second music piece (e.g. tempo matching processing; [0069]; further see tempo matching of a decomposed track to match track A; [0160]) and synchronize the tempo of the first part of the first music piece and the tempo of the first part of the second music piece (e.g. tempo matching such that the first and second output data can be synchronized to each other; [0068]; further note after tempo matching of decomposed track, the beat phase of the decomposed track is shifted in a synchronization step to match the beat phase of Track A; [0160]) at a mixing start time (e.g. user can start or stop playback by operation of play control element and/or change playback position; [0169]) or after an elapse of a predetermined time from the mixing start time (e.g. 2 seconds {required for processing}; para [0051]; starting frame and progression; [0118]).
Regarding Claim 2, Morsy discloses: An acoustic device for mixing at least two music pieces (e.g. device of fig. 1 which implements the mixing detailed in the twelfth embodiment of Figs. 17 and 18, which may comprise any or all of the above-mentioned features; [0173]; in other words indicating that embodiment 12 encompasses all prior teachings of embodiments 1-11 for the purposes of the following rejection) comprising a first music piece (e.g. input audio file A) and a second music piece (e.g. input audio file B), the acoustic device comprising:

a detector configured to detect a tempo of the first music piece and a tempo of the second music piece (e.g. device 10 determines a tempo (e.g. a BPM value (beats per minute)) of track A and track B; [0160]), the first and second music pieces each comprising a plurality of parts comprising a first part and a second part (e.g. input audio files A and B both have multiple decomposed tracks, see First and Second corresponding to A, and Third and Fourth corresponding to B in Fig. 2; and further see specific decomposed tracks of A and B in Figs. 17 and 18), the first part corresponding to a drum sound of the respective first and second music pieces (e.g. first note [0182] indicating audio input file B may be processed in the manner as input audio file A; further note drum track D1 decomposed from audio file A in Figs. 17, 18; which is also decomposed of B per [0182]'s teaching of processing B in the same manner as A), the second part corresponding to a sound of a musical instrument other than the drum sound of the respective first and second music pieces (e.g. any of bass, vocal, or complement tracks D2, D3, and D4 respectively, decomposed from audio file A in Figs. 17, 18; which are also decomposed of B per [0182]'s teaching of processing B in the same manner as A); and

a playback controller configured to gradually change (e.g. tempo matching done in real-time using crossfades allowing the songs to be audible at the same time and not disturb the flow of the music; [0069]; see further cross fading to achieve a more continuous flow of the music; [0148]) a tempo of the first part of the first music piece and a tempo of the first part of the second music piece (e.g. tempo matching processing; [0069]; further see tempo matching of a decomposed track to match track A; [0160]; note that a decomposed track can be further processed by applying effects for resampling, time stretching, and seeking, e.g. for tempo and beat matching [0119]; in other words any and/or all decomposed tracks may have their tempo adjusted, which indicates that in at least some circumstances both would or could be modified) and synchronize the tempo of the first part of the first music piece and the tempo of the first part of the second music piece (e.g. tempo matching such that the first and second output data can be synchronized to each other; [0068]; further note after tempo matching of decomposed track, the beat phase of the decomposed track is shifted in a synchronization step to match the beat phase of Track A; [0160]) at a mixing start time (e.g. user can start or stop playback by operation of play control element and/or change playback position; [0169]) or after an elapse of a predetermined time from the mixing start time (e.g. 2 seconds {required for processing}; para [0051]; starting frame and progression; [0118]).

Regarding Claim 3, in addition to the elements stated above regarding claim 1, Morsy further discloses: wherein the playback controller is configured to gradually shift the tempo of the first part of the first music piece or the second music piece to an original tempo thereof (e.g. tempo is synchronized with the master track; [0160]; tempo matching done in real-time using crossfades allowing the songs to be audible at the same time and not disturb the flow of the music; [0069]; see further cross fading to achieve a more continuous flow of the music; [0148]) after synchronizing the tempos of the respective first parts of the first and second music pieces (e.g. note that effect chains such as tempo changing effects could be inserted at different positions in the signal flow, including after recombination; [0133]).

Regarding Claim 4, in addition to the elements stated above regarding claim 1, Morsy further discloses: wherein the playback controller is configured to align a playback position of the first part of the first music piece or the first part of the second music piece with a playback position of the second part of the first music piece or the second part of the second music piece (e.g. after tempo matching of decomposed track, the beat phase of the decomposed track is shifted ["aligned"] in a synchronization step to match the beat phase of Track A; [0160]) after gradually shifting the tempo of the first part of the first music piece or the first part of the second music piece to an original tempo thereof (e.g. tempo is synchronized with the master track; [0160]; tempo matching done in real-time using crossfades allowing the songs to be audible at the same time and not disturb the flow of the music; [0069]; see further cross fading to achieve a more continuous flow of the music; [0148]).

Regarding Claim 5, in addition to the elements stated above regarding claim 1, Morsy further discloses: further comprising a cross fader (e.g. cross-fading; see paras [0137], [0138], [0147]), wherein the playback controller is configured to determine a time when the cross fader starts moving from one end toward the other end thereof as the mixing start time (e.g. when the crossfader is used/operated by a user; [0080], [0081]; note further that the volume adjustment can be user configurable if needed; [0137]).

Regarding Claim 6, in addition to the elements stated above regarding claim 1, Morsy further discloses: wherein in a predetermined section comprising the mixing start time or a point after the elapse of the predetermined time from the mixing start time (e.g. user can start or stop playback by operation of play control element and/or change playback position; [0169]; note also using 2 seconds {required for processing}; para [0051]; starting frame and progression; [0118]), the playback controller is configured to play the second part of the first music piece at an original tempo of the first music piece, play the second part of the second music piece at an original tempo of the second music piece (e.g. tempo is synchronized with the master track; [0160]; tempo matching done in real-time using crossfades allowing the songs to be audible at the same time and not disturb the flow of the music; [0069]; see further cross fading to achieve a more continuous flow of the music; [0148]), and switch the second part of the first music piece to the second part of the second music piece (e.g. note the ability for a user to individually switch ON or OFF a selected one of the decomposed tracks; [0153]; and swapping tracks; [0082]).

Regarding Claim 7, in addition to the elements stated above regarding claim 1, Morsy further discloses: wherein the first part is a part at least corresponding to a bass drum sound of the drum sound (e.g. see exemplary indication of presence of certain instruments such as kick drums; [0092]; note further the detail disclosed regarding timbre, particularly with regard to certain drum set sounds; [0013] – [0017], [0022]).

Regarding Claim 8, claim 8 is directed to the method corresponding to the device claimed in claim 1 and is rejected under the same grounds stated above.
Regarding Claim 9, claim 9 is directed to the method corresponding to the device claimed in claim 2 and is rejected under the same grounds stated above.

Regarding Claim 10, claim 10 is directed to the non-transitory computer-readable recording medium corresponding to the device claimed in claim 1 and is rejected under the same grounds stated above.

Regarding Claim 11, claim 11 is directed to the non-transitory computer-readable recording medium corresponding to the device claimed in claim 2 and is rejected under the same grounds stated above.

Regarding Claims 12-16, claims 12, 13, 14, 15, and 16 are rejected under the same grounds as stated above regarding the rejections of claims 3, 4, 5, 6, and 7, respectively.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Andrew C Flanders, whose telephone number is (571) 272-7516. The examiner can normally be reached M-F 8:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW C FLANDERS/
Supervisory Patent Examiner, Art Unit 2655
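For illustration only (this sketch is not from the record or the cited references): the claim 1 behavior at issue, gradually shifting one part's tempo until it matches the other track's tempo at or after the mixing start time, amounts to a ramp between two BPM values. All names and the linear ramp shape below are assumptions:

```python
def tempo_ramp(bpm_from: float, bpm_to: float, t: float, ramp_seconds: float) -> float:
    """Playback tempo at time t (seconds after the mixing start time),
    ramping linearly from the part's original tempo to the target tempo.
    Hypothetical helper; the claim does not specify a ramp shape."""
    if t <= 0:
        return bpm_from          # ramp has not started yet
    if t >= ramp_seconds:
        return bpm_to            # tempos are now synchronized
    frac = t / ramp_seconds      # fraction of the ramp completed
    return bpm_from + (bpm_to - bpm_from) * frac

# Example: the drum part of track B (128 BPM) ramps to track A's 120 BPM over 8 s.
print(tempo_ramp(128.0, 120.0, 0.0, 8.0))   # 128.0
print(tempo_ramp(128.0, 120.0, 4.0, 8.0))   # 124.0
print(tempo_ramp(128.0, 120.0, 8.0, 8.0))   # 120.0
```

A real implementation would drive a time-stretch effect with this value and then phase-align beats, but the ramp is the "gradually change ... and synchronize" step the rejection maps to Morsy's tempo matching.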

Prosecution Timeline

Oct 26, 2023: Application Filed
Jun 12, 2025: Non-Final Rejection (§102)
Sep 12, 2025: Response Filed
Jan 05, 2026: Final Rejection (§102)
Apr 07, 2026: Request for Continued Examination
Apr 13, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12562160: ARBITRATION BETWEEN AUTOMATED ASSISTANT DEVICES BASED ON INTERACTION CUES
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12547835: AUTOMATIC EXTRACTION OF SEMANTICALLY SIMILAR QUESTION TOPICS
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12512089: TESTING CASCADED DEEP LEARNING PIPELINES COMPRISING A SPEECH-TO-TEXT MODEL AND A TEXT INTENT CLASSIFIER
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12394416: DETECTING NEAR MATCHES TO A HOTWORD OR PHRASE
Granted Aug 19, 2025 (2y 5m to grant)

Patent 11328007: GENERATING A DOMAIN-SPECIFIC PHRASAL DICTIONARY
Granted May 10, 2022 (2y 5m to grant)

Based on the 5 most recent grants. Study what changed to get past this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 88% (+14.0%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 775 resolved cases by this examiner. Grant probability derived from career allow rate.
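The headline figures above are mutually consistent under two simple assumptions: the allow rate is granted/resolved, and the interview lift is applied as additive percentage points. A quick check using only numbers from this report:

```python
# Career figures from the Examiner Intelligence section.
granted, resolved = 574, 775

base = round(100 * granted / resolved)   # career allow rate, % -> 74
lift = 14.0                              # interview lift, percentage points
with_interview = base + lift             # assumed additive, as the report implies

print(base, with_interview)  # 74 88.0
```

This reproduces both the 74% grant probability and the 88% with-interview figure, supporting the report's note that the probability is derived directly from the career allow rate.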
