Prosecution Insights
Last updated: April 19, 2026
Application No. 18/506,510

FULL-BAND AUDIO SIGNAL RECONSTRUCTION ENABLED BY OUTPUT FROM A MACHINE LEARNING MODEL

Final Rejection — §102, §103
Filed: Nov 10, 2023
Examiner: ROBERTS, SHAUN A
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: Shure Acquisition Holdings Inc.
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 86%

Examiner Intelligence

Grants 76% — above average

Career Allow Rate: 76% (491 granted / 647 resolved; +13.9% vs TC avg)
Interview Lift: +10.3% across resolved cases with interview (moderate, roughly +10%)
Typical timeline: 2y 10m average prosecution; 31 applications currently pending
Career history: 678 total applications across all art units
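As a sanity check, the headline percentages above follow directly from the raw counts shown. A minimal sketch of the arithmetic (assuming the interview lift is additive in percentage points, which matches the displayed jump from 76% to 86%):

```python
# Derive the dashboard's headline figures from the raw counts above.
granted, resolved = 491, 647

career_allow_rate = granted / resolved               # ~0.759, displayed as 76%
interview_lift = 0.103                               # +10.3 points with an interview
with_interview = career_allow_rate + interview_lift  # ~0.862, displayed as 86%

print(f"Career allow rate: {career_allow_rate:.0%}")
print(f"With interview:    {with_interview:.0%}")
```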

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§102: 29.5% (-10.5% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)

Tech Center averages are estimates • Based on career data from 647 resolved cases
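The per-statute deltas can be checked against the examiner's rates: subtracting each reported delta recovers the implied Tech Center baseline. A sketch of that arithmetic (values are percentage points taken from the figures above):

```python
# Examiner allowance rate per statute and the reported delta vs the
# Tech Center average, both in percentage points.
rates = {
    "101": (7.6, -32.4),
    "103": (49.2, +9.2),
    "102": (29.5, -10.5),
    "112": (3.5, -36.5),
}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta  # implied Tech Center baseline
    print(f"Sec. {statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
```

Each implied baseline works out to 40.0%, suggesting the dashboard compares every statute against a single Tech-Center-wide estimate rather than per-statute averages.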

Office Action

§102 §103
DETAILED ACTION

1. This action is responsive to remarks filed 12/19/2025.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

3. Claims 1-6, 12-17, and 20 have been amended. The §101 rejection has been overcome based on the amendment to claim 20.

Response to Arguments

4. Applicant's arguments filed have been fully considered but are moot based on the new grounds of rejection responsive to the amendments.

Claim Rejections - 35 USC § 102

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

7. Claims 1-2, 4-10, 12-13, and 15-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Lund et al. (2024/0005930).
Regarding claim 1, Lund teaches: An audio signal processing apparatus comprising at least one processor and a memory storing instructions that are operable, when executed by the processor, to cause the audio signal processing apparatus to ([0001]: The present disclosure relates to methods for performing personalized bandwidth extension on an audio signal, and related audio devices configured for carrying out the methods; audio device: 17-18; 0026: processor; 0082: method; 0088: computer implemented method):

identify a first frequency portion of a full-band audio signal based on a hybrid audio processing frequency threshold (12-13; 39-40; 41; 94);

generate a model input audio feature set for the first frequency portion of the full-band audio signal (0012: obtaining an input microphone signal with a first bandwidth; 0039: input microphone signal; 0040: input microphone signal having a first bandwidth; 0013: obtaining a first user parameter; 0014: determining based on the first user parameter a bandwidth extension model; 0037: where audio being played through the headset is processed based on one or more characteristics of the user wearing the headset. A personalized bandwidth extension model may for example have defined an upper and/or lower perceivable threshold for the user, i.e., a threshold frequency for which the user will be able to perceive sound; such thresholds may then define the extent to which bandwidth extension is performed, e.g., if the user cannot perceive frequencies above 14 kHz there is no reason to bandwidth extend an incoming signal to 20 kHz, therefore a personalized bandwidth extension model may be limited to 14 kHz; 45; 117);

input the model input audio feature set to a machine learning model configured to generate a frequency characteristics output related to the first frequency portion of the full-band audio signal (13-15; [0045]: The bandwidth extension model is a model configured for generating an output signal with a second bandwidth, based on the input microphone signal with the first bandwidth. The bandwidth extension model may generate the output signal by generating spectral content to the input microphone signal, e.g., adding spectral content to the received input microphone signal. The bandwidth extension model may generate the output signal by generating spectral content based on the input microphone signal, e.g., fully generating a new signal based on the input microphone signal. The bandwidth extension model used by the audio device is personalized, i.e., determined based on the user of the audio device. The bandwidth extension model may be configured to generate spectral content based on the input microphone signal. The bandwidth extension model may be configured to generate spectral content based on the first user parameter and the input microphone signal; 0046; 67; 0073-74: neural network; 117);

apply the frequency characteristics output to at least a second frequency portion of the full-band audio signal to generate a reconstructed full-band audio signal, wherein the second frequency portion is different from the first frequency portion (0015: generating an output signal with a second bandwidth by applying the determined bandwidth extension model; 45; 0071; 117); and

output the reconstructed full-band audio signal to an audio output device (15; audio device: 17-18; 117).

Regarding claim 2, Lund teaches: The audio signal processing apparatus of claim 1, wherein the instructions are further operable to cause the audio signal processing apparatus to: generate, based on a magnitude of the first frequency portion and the model input audio feature set, a scaled magnitude of the first frequency portion of the full-band audio signal (0046: The audible levels of the user of the audio device may be defined by masking thresholds within an audio signal, where the masking thresholds define masked and unmasked components within an audio signal. The audible levels may be defined within different frequency bins; 0097).

Regarding claim 4, Lund teaches: The audio signal processing apparatus of claim 1, wherein the instructions are further operable to cause the audio signal processing apparatus to: using one or more digital signal processing (DSP) techniques, apply the frequency characteristics output to the second frequency portion of the full-band audio signal (0121: The processor 503 then determines a bandwidth extension model based on the first user parameter, and generates an output signal 504 with a second bandwidth using the determined bandwidth extension model. The output signal 504 may undergo further processing in a digital signal processing module 505.).
Regarding claim 5, Lund teaches: The audio signal processing apparatus of claim 1, wherein the instructions are further operable to cause the audio signal processing apparatus to: generate the model input audio feature set based on a digital transform of the first frequency portion of the full-band audio signal, wherein the digital transform is defined based on the hybrid audio processing frequency threshold (0012-0014; 37; 39-40; 45).

Regarding claim 6, Lund teaches: The audio signal processing apparatus of claim 1, wherein the instructions are further operable to cause the audio signal processing apparatus to: apply the frequency characteristics output to the second frequency portion of the full-band audio signal to generate digitized audio data (0037; 0045); and transform the digitized audio data into a time domain format to generate the reconstructed full-band audio signal (0029: digital to analogue converter; 0117).

Regarding claim 7, Lund teaches: The audio signal processing apparatus of claim 1, wherein the instructions are further operable to cause the audio signal processing apparatus to: select the hybrid audio processing frequency threshold from a plurality of hybrid audio processing frequency thresholds, wherein each hybrid audio processing frequency threshold of the plurality of hybrid audio processing frequency thresholds is based on a type of audio processing associated with the machine learning model (0037; 0045; 0052; 95: user hearing profile; [0048]: The bandwidth extension model may be determined by a mapping function, where the mapping function maps different first user parameters to different bandwidth extension models. The different bandwidth extension models may be pre-generated models. The mapping function may also take into consideration additional parameters, such as the first bandwidth of the input microphone signal. The bandwidth extension model may be determined/generated in real-time based on an obtained first user parameter.).
Regarding claim 8, Lund teaches: The audio signal processing apparatus of claim 1, wherein the first frequency portion is a lower frequency portion defined below the hybrid audio processing frequency threshold, and the second frequency portion is a higher frequency portion defined above the hybrid audio processing frequency threshold (0042: The second bandwidth may comprise a plurality of bandwidth ranges, e.g., if the user of the audio device has a notch hearing loss in the frequency range of 3 kHz to 6 kHz, the second bandwidth may then comprise two bandwidth ranges from 50 Hz to 3 kHz and 6 kHz to 7 kHz, thereby providing a personalized bandwidth based on the hearing loss of the user of the audio device; 45: adding spectral content; 0094).

Regarding claim 9, Lund teaches: The audio signal processing apparatus of claim 1, wherein the first frequency portion is a higher frequency portion defined above the hybrid audio processing frequency threshold, and the second frequency portion is a lower frequency portion defined below the hybrid audio processing frequency threshold (0042: The second bandwidth may comprise a plurality of bandwidth ranges, e.g., if the user of the audio device has a notch hearing loss in the frequency range of 3 kHz to 6 kHz, the second bandwidth may then comprise two bandwidth ranges from 50 Hz to 3 kHz and 6 kHz to 7 kHz, thereby providing a personalized bandwidth based on the hearing loss of the user of the audio device; 0094).
Regarding claim 10, Lund teaches: The audio signal processing apparatus of claim 1, wherein the machine learning model is trained during a training phase based on training data extracted from frequency portions of prior audio signals, wherein the frequency portions correspond to frequencies of the first frequency portion ([0074]: The neural network may be trained to bandwidth extend an input microphone signal with a first bandwidth to a second bandwidth to maximize the amount of perceptually relevant information for the user of the audio device; 88-93; 103).

Regarding claim 12, Lund teaches: A computer-implemented method, comprising: identifying a first frequency portion of a full-band audio signal based on a hybrid audio processing frequency threshold; generating a model input audio feature set for the first frequency portion of the full-band audio signal; inputting the model input audio feature set to a machine learning model configured to generate a frequency characteristics output related to the first frequency portion of the full-band audio signal; applying the frequency characteristics output to at least a second frequency portion of the full-band audio signal to generate a reconstructed full-band audio signal, wherein the second frequency portion is different from the first frequency portion; and outputting the reconstructed full-band audio signal to an audio output device.
Claim 12 recites limitations similar to claim 1 and is rejected for similar rationale and reasoning. Claim 13 recites limitations similar to claim 2 and is rejected for similar rationale and reasoning. Claim 15 recites limitations similar to claim 4 and is rejected for similar rationale and reasoning. Claim 16 recites limitations similar to claim 5 and is rejected for similar rationale and reasoning. Claim 17 recites limitations similar to claim 6 and is rejected for similar rationale and reasoning. Claim 18 recites limitations similar to claim 7 and is rejected for similar rationale and reasoning. Claim 19 recites limitations similar to claim 10 and is rejected for similar rationale and reasoning.

Regarding claim 20, Lund teaches: A computer program product, stored on a non-transitory computer readable storage medium, comprising instructions that, when executed by one or more processors of an audio signal processing apparatus, cause the one or more processors to: identify a first frequency portion of a full-band audio signal based on a hybrid audio processing frequency threshold; generate a model input audio feature set for the first frequency portion of the full-band audio signal; input the model input audio feature set to a machine learning model configured to generate a frequency characteristics output related to the first frequency portion of the full-band audio signal; apply the frequency characteristics output to at least a second frequency portion of the full-band audio signal to generate a reconstructed full-band audio signal, wherein the second frequency portion is different from the first frequency portion; and output the reconstructed full-band audio signal to an audio output device. Claim 20 recites limitations similar to claim 1 and is rejected for similar rationale and reasoning.

Claim Rejections - 35 USC § 103

8. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

9. Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Lund in view of Xia et al. (2023/0040515).

Regarding claim 3: Lund does not specifically teach The audio signal processing apparatus of claim 1, wherein the instructions are further operable to cause the audio signal processing apparatus to: calculate a spectrum power ratio for an overlapped frequency range proximate to the hybrid audio processing frequency threshold; and based on the spectrum power ratio, apply the frequency characteristics output to the full-band audio signal. Xia teaches a power spectrum ratio of a current frequency in a current frequency area (abstract; 0008-0009: power spectrum ratio). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Xia and the power spectrum ratio to improve the quality of the audio signal (Xia, 7) and accurately reconstruct the audio signal (Xia, 9). Lund already teaches bandwidth extension using the model and adjusting to account for the received bandwidth, level, and a frequency threshold: [0045] The bandwidth extension model is a model configured for generating an output signal with a second bandwidth, based on the input microphone signal with the first bandwidth.
The bandwidth extension model may generate the output signal by generating spectral content to the input microphone signal, e.g., adding spectral content to the received input microphone signal. The bandwidth extension model may generate the output signal by generating spectral content based on the input microphone signal, e.g., fully generating a new signal based on the input microphone signal. Lund also teaches a perceivable threshold (37). One could thus look to Xia to incorporate and utilize the power spectrum ratio to properly reconstruct the bandwidth-extended signal with the first and second bandwidths while accommodating the hybrid audio processing frequency threshold, therefore teaching: calculate a spectrum power ratio for an overlapped frequency range proximate to the hybrid audio processing frequency threshold; and based on the spectrum power ratio, apply the frequency characteristics output to the audio signal.

Claim 14 recites limitations similar to claim 3 and is rejected for similar rationale and reasoning.

10. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Lund in view of Talwar et al. (10,061,554).

Regarding claim 11: Lund teaches the audio signal processing apparatus of claim 1, wherein the hybrid audio processing frequency threshold (0037; 45; 117), but does not specifically teach the limitation {is one of 500 Hz, 4 kHz, or 8 kHz}, where Talwar teaches is one of 500 Hz, 4 kHz, or 8 kHz (claim 1: 4 kHz). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Talwar, presenting a reasonable expectation of success in still having/obtaining/utilizing the hybrid audio processing frequency threshold to generate a reconstructed full-band audio signal.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAUN A ROBERTS whose telephone number is (571)270-7541. The examiner can normally be reached Monday-Friday 9-5 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached on 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAUN ROBERTS/
Primary Examiner, Art Unit 2655
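Setting the legal posture aside, the independent claims recited in the rejection above describe a split-band pipeline: divide the spectrum at the hybrid audio processing frequency threshold, derive a feature set from the first frequency portion, feed it to a model, and apply the model's frequency-characteristics output to the second portion before reconstructing the time-domain signal. A minimal sketch of that flow, with a toy stand-in for the trained model and an FFT standing in for the claimed digital transform (all names and parameter values here are hypothetical, not taken from the application):

```python
import numpy as np

FS = 16_000           # sample rate in Hz (hypothetical)
THRESHOLD_HZ = 4_000  # hybrid audio processing frequency threshold
                      # (4 kHz is one of the values recited in claim 11)

def toy_model(low_band_mags: np.ndarray, n_high_bins: int) -> np.ndarray:
    """Toy stand-in for the trained ML model: maps the first-portion
    feature set (here, bin magnitudes) to a frequency-characteristics
    output, i.e. target magnitudes for the second portion."""
    avg = low_band_mags.mean()
    return avg * np.linspace(1.0, 0.1, n_high_bins)  # decaying extension

def reconstruct_full_band(x: np.ndarray) -> np.ndarray:
    spectrum = np.fft.rfft(x)                         # digital transform
    freqs = np.fft.rfftfreq(len(x), d=1 / FS)
    low = freqs < THRESHOLD_HZ                        # first frequency portion
    high = ~low                                       # second frequency portion

    features = np.abs(spectrum[low])                  # model input audio feature set
    char_out = toy_model(features, int(high.sum()))   # frequency characteristics output

    # Apply the output to the second portion (keeping the original phase),
    # then transform back to the time domain.
    recon = spectrum.copy()
    recon[high] = char_out * np.exp(1j * np.angle(spectrum[high]))
    return np.fft.irfft(recon, n=len(x))

# Band-limited input: a 1 kHz tone with no energy above the threshold.
t = np.arange(FS) / FS
narrow = np.sin(2 * np.pi * 1_000 * t)
full = reconstruct_full_band(narrow)
```

Run on a band-limited tone, this yields a signal with synthesized content above the threshold — the "reconstructed full-band audio signal" the claims are directed to. The application's actual feature set and model architecture are not disclosed in the excerpt above.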

Prosecution Timeline

Nov 10, 2023
Application Filed
Sep 17, 2025
Non-Final Rejection — §102, §103
Dec 09, 2025
Applicant Interview (Telephonic)
Dec 09, 2025
Examiner Interview Summary
Dec 19, 2025
Response Filed
Jan 29, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586599: AUDIO SIGNAL PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM WITH MACHINE LEARNING AND FOR MICROPHONE MUTE STATE FEATURES IN A MULTI PERSON VOICE CALL (2y 5m to grant; granted Mar 24, 2026)
Patent 12586568: SYNTHETICALLY GENERATING INNER SPEECH TRAINING DATA (2y 5m to grant; granted Mar 24, 2026)
Patent 12573376: Dynamic Language and Command Recognition (2y 5m to grant; granted Mar 10, 2026)
Patent 12562157: GENERATING TOPIC-SPECIFIC LANGUAGE MODELS (2y 5m to grant; granted Feb 24, 2026)
Patent 12555562: VOICE SYNTHESIS FROM DIFFUSION GENERATED SPECTROGRAMS FOR ACCESSIBILITY (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 86% (+10.3%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate

Based on 647 resolved cases by this examiner. Grant probability derived from career allow rate.
