Prosecution Insights
Last updated: April 19, 2026
Application No. 18/483,506

ELECTRONIC DEVICE, METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE DEVICE ADAPTIVELY PROCESSING AUDIO BITSTREAM BY CONDITIONALLY EXECUTING BANDWIDTH EXTENSION

Status: Non-Final OA (§103)
Filed: Oct 09, 2023
Examiner: ROBERTS, SHAUN A
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)

Grant Probability: 76% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 10m
Grant Probability With Interview: 86%

Examiner Intelligence

Career Allow Rate: 76% (above average; 491 granted / 647 resolved; +13.9% vs TC avg)
Interview Lift: +10.3% (moderate lift, among resolved cases with interview)
Typical Timeline: 2y 10m average prosecution; 31 currently pending
Career History: 678 total applications across all art units

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§102: 29.5% (-10.5% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Based on career data from 647 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

1. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/23/2025 has been entered.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

3. Claims 1-2, 5, 21-24, 26-28, 30 have been amended.

Response to Arguments

4. Applicant's arguments have been fully considered but are moot in view of the new grounds of rejection responsive to the amendments (see the art rejections below). The cited prior art Lee teaches previous (and potentially subsequent) bitstreams with wider frequency range information and a higher bitrate (0006: high definition (HD) voice (wideband (WB)) sound quality; 0079: if packet data are received at a bit rate decreased based on network congestion detected during the call session (e.g., if a bandwidth is decreased to a narrow band (NB)), then the bandwidth controller 324 turns on the ABE 325 to synthesize sound quality of a standard definition (SD) voice as sound quality of a high definition (HD) voice). As to the other previously presented dependent claims, Applicant argues that the limitations are not taught by the cited art and by virtue of dependency; the examiner respectfully disagrees based on the prior art rejections presented below.

Claim Rejections - 35 USC § 103

5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

6. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

7. Claims 1-3, 6, 8, 21-23, 26, 28 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (2017/0013503) in view of Lecomte et al (2016/0180854).
Regarding claim 1, Lee (2017/0013503) teaches an electronic device (fig 2 electronic device) comprising: communication circuitry (fig 2 230 communication module); a speaker (fig 2 250 output device; 0132: speaker); memory comprising one or more storage media storing one or more instructions (fig 2 210 memory); and at least one processor comprising processing circuitry (fig 2 220 processor), wherein the instructions, when executed by the at least one processor, cause the electronic device to: identify a bitrate of a first audio bitstream received via the communication circuitry from an external electronic device (fig 6; 79-81; 105-106; 79: bit rate of the received packet data); in response to the bitrate being identified as lower than a bitrate threshold, obtain an audio signal based on executing a bandwidth extension (BWE) for the first audio bitstream [based on at least one coding parameter obtained from a second audio bitstream received via the communication circuitry from the external electronic device before the first audio bitstream], wherein the second audio bitstream is obtained from a signal in both a first frequency range lower than a reference frequency and a second frequency range higher than the reference frequency (fig 6; 0006: HD/WB; 79-81; 81: wideband; 97: threshold; wideband; 105-106; 0079: if packet data are received at a bit rate decreased based on network congestion detected during the call session (e.g., if a bandwidth is decreased to a narrow band (NB)), then the bandwidth controller 324 turns on the ABE 325 to synthesize sound quality of a standard definition (SD) voice as sound quality of a high definition (HD) voice); in response to the bitrate being identified as higher than the bitrate threshold, obtain the audio signal based on bypassing to execute the BWE (fig 6; 79-81; 0097: threshold; 105-106; [0081] If network congestion is not detected after a network state becomes relatively good, then a bandwidth may be increased to a wideband (WB) as a bit rate is adaptively adjusted. In this case, the bandwidth controller 324 may suppress and/or prevent unnecessary calculation by dynamically turning off (or deactivating) the ABE 325 during a call session.); and output audio via the speaker using the audio signal ([0086] The voice decoder 323 of the receiving end 320 may decode the received packet data into voice data. The decoded voice data may be provided to a user of the receiving-end electronic device in the form of being output through an output device such as a speaker, which may be included in the receiving-end electronic device; 0132: output through an output device such as a speaker).

Lee does not specifically teach, but Lecomte teaches, obtaining an audio signal based on executing a bandwidth extension (BWE) for the first audio bitstream based on at least one coding parameter obtained from a second audio bitstream received via the communication circuitry from the external electronic device before the first audio bitstream (abstract: produce an audio signal from a bitstream; 0070: wherein the bandwidth extension module includes an energy adjusting module configured such that, in a current audio frame in which an audio frame loss occurs, an adjusted signal energy for the current audio frame for the at least one frequency band is set based on a current gain factor for the current audio frame, wherein the current gain factor is derived from a gain factor from a previous audio frame or from the bitstream, and based on an estimated signal energy for the at least one frequency band, wherein the estimated signal energy is derived from a spectrum of the current audio frame of the core band audio signal).
It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Lecomte to allow execution of the bandwidth extension to compensate for a low bit rate and poor audio quality. Lee already teaches determining to execute bandwidth extension, and that the second audio bitstream spans below and above a reference frequency (high definition/wideband). One could look to Lecomte to further execute the bandwidth extension using a previous coding parameter to create a more continuous, consistent, and accurate wideband signal for improved bandwidth extension and high band reconstruction (Lecomte 0130), while still using an ABE algorithm for synthesizing sound quality of a high-definition (HD) voice again from sound quality of a standard-definition (SD) voice (Lee 0009).

Regarding claim 2, Lee teaches the electronic device of claim 1, wherein the first audio bitstream having the bitrate lower than the bitrate threshold is obtained from a signal on the first frequency range among the first frequency range and the second frequency range (0079: received packet data; if packet data are received at a bit rate decreased based on network congestion detected during the call session (e.g., if a bandwidth is decreased to a narrow band (NB)); 0081: wideband), and wherein the first audio bitstream having the bitrate higher than or equal to the bitrate threshold is obtained from a signal on the first frequency range and the second frequency range ([0081] If network congestion is not detected after a network state becomes relatively good, then a bandwidth may be increased to a wideband (WB) as a bit rate is adaptively adjusted).
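The conditional gating the independent claims recite, and that Lee describes in terms of the bandwidth controller 324 turning the ABE 325 on and off, can be sketched as below. This is an illustrative sketch only: every name and the threshold value are hypothetical, and the stand-in decoder and BWE bodies are placeholders rather than Lee's or the application's actual processing.

```python
# Illustrative sketch of the claimed conditional bandwidth extension (BWE).
# All names and the threshold value are hypothetical.

BITRATE_THRESHOLD_BPS = 13_200  # hypothetical narrowband/wideband cutoff

def decode_core(bitstream):
    """Stand-in for the core decoder; treats the payload as decoded samples."""
    return list(bitstream)

def run_bwe(core_signal, cached_params):
    """Stand-in for artificial bandwidth extension that reuses a coding
    parameter cached from an earlier, wideband bitstream."""
    gain = cached_params.get("gain", 1.0)
    # Append a synthesized high band derived from the core signal.
    return core_signal + [s * gain for s in core_signal]

def process_frame(bitstream, bitrate_bps, cached_params):
    core = decode_core(bitstream)
    if bitrate_bps < BITRATE_THRESHOLD_BPS:
        # Low bitrate (narrowband): execute BWE, as when Lee's controller
        # turns the ABE on under network congestion.
        return run_bwe(core, cached_params)
    # High bitrate (wideband): bypass BWE to avoid unnecessary calculation.
    return core
```

At a hypothetical 12.2 kbps frame the sketch runs BWE; at 23.85 kbps it bypasses it, mirroring Lee's turning the ABE on during congestion and off when the network recovers.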
Regarding claim 3, Lee does not specifically teach, but Lecomte teaches, the electronic device of claim 2, wherein the instructions, when executed by the at least one processor, cause the electronic device to: in response to the bitrate being identified as lower than the bitrate threshold, obtain the audio signal further based on energy information for each of the frequency bands obtained by encoding the second audio bitstream (abstract: produce an audio signal from a bitstream; 0070: wherein the bandwidth extension module includes an energy adjusting module configured such that, in a current audio frame in which an audio frame loss occurs, an adjusted signal energy for the current audio frame for the at least one frequency band is set based on a current gain factor for the current audio frame, wherein the current gain factor is derived from a gain factor from a previous audio frame or from the bitstream, and based on an estimated signal energy for the at least one frequency band, wherein the estimated signal energy is derived from a spectrum of the current audio frame of the core band audio signal).
Rejected for similar rationale and reasoning as claim 1.

Regarding claim 6, Lecomte teaches the electronic device of claim 2, wherein the instructions, when executed by the at least one processor, cause the electronic device to: in response to the bitrate being identified as lower than the bitrate threshold, obtain a signal on a frequency domain by executing an inverse quantization for the first audio bitstream (0007: de-quantized); and execute the BWE with respect to the signal on the frequency domain (abstract; 0124: bandwidth extension audio signal is based on a frequency domain). Rejected for similar rationale and reasoning as claim 1, where it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate these well-known steps to convert the signal to the proper domain for processing (as the signal would first need to be put in the frequency domain to then perform the additional steps of extension, as later explained in Lecomte).

Regarding claim 8, Lecomte teaches the electronic device of claim 2, wherein the instructions, when executed by the at least one processor, cause the electronic device to: in response to the bitrate being identified as lower than the bitrate threshold, obtain the audio signal further based on a parameter converted to at least one parameter for the BWE (abstract; 0070).
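Claim 6's ordering (inverse quantization yields a frequency-domain signal, and the BWE then operates on that spectrum, per the citations to Lecomte 0007 and 0124) can be illustrated as follows. The uniform scalar dequantizer and the fold-with-attenuation extension are assumptions for illustration, not Lecomte's actual method.

```python
# Sketch of claim 6's ordering: inverse quantization first yields
# frequency-domain coefficients, and BWE then operates on that spectrum.
# The uniform dequantizer and the folding extension are illustrative
# assumptions, not Lecomte's actual method.

def inverse_quantize(indices, step=0.25):
    """Uniform scalar dequantization: bitstream index -> coefficient."""
    return [i * step for i in indices]

def extend_spectrum(low_band):
    """Toy frequency-domain BWE: mirror the low band into the missing
    high band with attenuation (a classic spectral-folding heuristic)."""
    return low_band + [0.5 * c for c in reversed(low_band)]

indices = [4, 2, -1, 0]               # quantization indices from the bitstream
spectrum = inverse_quantize(indices)  # frequency-domain signal
full_spectrum = extend_spectrum(spectrum)
```

The point of the sketch is the order of operations: the dequantized spectrum exists before any extension runs on it.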
Rejected for similar rationale and reasoning as claim 1.

Regarding claim 21, Lee (2017/0013503) teaches an electronic device (fig 2 electronic device) comprising: communication circuitry (fig 2 230 communication module); a speaker (fig 2 240 output device; 0132: speaker); memory comprising one or more storage media storing one or more instructions (fig 2 210 memory); and at least one processor comprising processing circuitry (fig 2 220 processor), wherein the instructions, when executed by the at least one processor, cause the electronic device to: identify a bitrate of a first audio bitstream received via the communication circuitry from an external electronic device (fig 6; 79-81; 105-106; 79: bit rate of the received packet data); in response to the bitrate being identified as higher than a bitrate threshold: obtain a first audio signal based on bypassing to execute a bandwidth extension (BWE) for the first audio bitstream (fig 6; 79-81; 0097: threshold; 105-106; [0081] If network congestion is not detected after a network state becomes relatively good, then a bandwidth may be increased to a wideband (WB) as a bit rate is adaptively adjusted. In this case, the bandwidth controller 324 may suppress and/or prevent unnecessary calculation by dynamically turning off (or deactivating) the ABE 325 during a call session.), and output audio via the speaker using the first audio signal ([0086] The voice decoder 323 of the receiving end 320 may decode the received packet data into voice data. The decoded voice data may be provided to a user of the receiving-end electronic device in the form of being output through an output device such as a speaker, which may be included in the receiving-end electronic device; 0132: output through an output device such as a speaker); in response to the bitrate being identified as lower than the bitrate threshold: obtain a second audio signal by executing a bandwidth extension (BWE) for the first audio bitstream [based at least in part on at least one coding parameter obtained from a second audio bitstream received via the communication circuitry from the external electronic device before the first audio bitstream], wherein the second audio bitstream is obtained from a signal in both a first frequency range lower than a reference frequency and a second frequency range higher than the reference frequency (fig 6; 0006: HD/WB sound quality; 79-81: wideband; 97: threshold; wideband; 105-106; 0079: if packet data are received at a bit rate decreased based on network congestion detected during the call session (e.g., if a bandwidth is decreased to a narrow band (NB)), then the bandwidth controller 324 turns on the ABE 325 to synthesize sound quality of a standard definition (SD) voice as sound quality of a high definition (HD) voice), [processing a boundary of the second audio signal adjacent to a third audio signal corresponding to the second audio bitstream], and output audio via the speaker using the second audio signal (0086; 0132: output through an output device such as a speaker).

Lee does not specifically teach, but Lecomte teaches, obtaining a second audio signal by executing a bandwidth extension (BWE) for the first audio bitstream based at least in part on at least one coding parameter obtained from a second audio bitstream received via the communication circuitry from the external electronic device before the first audio bitstream (abstract: produce an audio signal from a bitstream; 0070: wherein the bandwidth extension module includes an energy adjusting module configured such that, in a current audio frame in which an audio frame loss occurs, an adjusted signal energy for the current audio frame for the at least one frequency band is set based on a current gain factor for the current audio frame, wherein the current gain factor is derived from a gain factor from a previous audio frame or from the bitstream, and based on an estimated signal energy for the at least one frequency band, wherein the estimated signal energy is derived from a spectrum of the current audio frame of the core band audio signal), and processing a boundary of the second audio signal adjacent to a third audio signal corresponding to the second audio bitstream (abstract; 44: It attends for a smooth transition from the concealed signal to the correctly decoded signal in terms of energy gaps that may result from mismatched frame borders; 0070; 0081; 87-88; 104-105; 127; 144).
It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Lecomte to allow execution of the bandwidth extension to compensate for a low bit rate and poor audio quality. Lee already teaches determining to execute bandwidth extension, and that the second audio bitstream spans below and above a reference frequency (high definition/wideband). One could look to Lecomte to further execute the bandwidth extension using a previous coding parameter and boundary information to create a more continuous, consistent, and accurate wideband signal for improved bandwidth extension and high band reconstruction (Lecomte 0130), while still using an ABE algorithm for synthesizing sound quality of a high-definition (HD) voice again from sound quality of a standard-definition (SD) voice (Lee 0009).

Regarding claim 22, Lee teaches the electronic device of claim 21, wherein the first audio bitstream having the bitrate lower than the bitrate threshold is obtained from a signal on the first frequency range among the first frequency range and the second frequency range (0079: received packet data; if packet data are received at a bit rate decreased based on network congestion detected during the call session (e.g., if a bandwidth is decreased to a narrow band (NB)); 0081: wideband), and wherein the first audio bitstream having the bitrate higher than or equal to the bitrate threshold is obtained from a signal on the first frequency range and the second frequency range ([0081] If network congestion is not detected after a network state becomes relatively good, then a bandwidth may be increased to a wideband (WB) as a bit rate is adaptively adjusted).
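Claim 21's boundary-processing limitation, which the rejection maps to Lecomte's "smooth transition from the concealed signal to the correctly decoded signal in terms of energy gaps that may result from mismatched frame borders" (0044), can be pictured as a blend across the seam between the BWE-synthesized signal and the adjacent decoded signal. The linear cross-fade below is an illustrative assumption; Lecomte describes energy adjustment generally rather than this exact blend, and all names are hypothetical.

```python
# Sketch of boundary processing between a BWE-synthesized segment and the
# adjacent correctly decoded segment. The linear cross-fade is an
# illustrative assumption standing in for Lecomte's energy-gap smoothing.

def crossfade_boundary(prev_tail, next_head):
    """Blend overlapping samples so no energy discontinuity is audible
    at the frame border."""
    n = min(len(prev_tail), len(next_head))
    out = []
    for i in range(n):
        w = (i + 1) / (n + 1)  # fade-in weight ramps toward the new signal
        out.append((1.0 - w) * prev_tail[i] + w * next_head[i])
    return out

# A unit-level old segment fading into silence produces a smooth ramp
# instead of an abrupt energy step at the border:
seam = crossfade_boundary([1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
```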
Regarding claim 23, Lee does not specifically teach, but Lecomte teaches, the electronic device of claim 21, wherein the instructions, when executed by the at least one processor, cause the electronic device to: in response to the bitrate being identified as lower than the bitrate threshold, obtain the second audio signal further based on energy information for each of the frequency bands obtained by encoding the second audio bitstream (abstract: produce an audio signal from a bitstream; 0070: wherein the bandwidth extension module includes an energy adjusting module configured such that, in a current audio frame in which an audio frame loss occurs, an adjusted signal energy for the current audio frame for the at least one frequency band is set based on a current gain factor for the current audio frame, wherein the current gain factor is derived from a gain factor from a previous audio frame or from the bitstream, and based on an estimated signal energy for the at least one frequency band, wherein the estimated signal energy is derived from a spectrum of the current audio frame of the core band audio signal).
Rejected for similar rationale and reasoning as claim 21.

Regarding claim 26, Lecomte teaches the electronic device of claim 21, wherein the instructions, when executed by the at least one processor, cause the electronic device to: in response to the bitrate being identified as lower than the bitrate threshold, obtain a signal on a frequency domain by executing an inverse quantization for the first audio bitstream (0007: de-quantized); and execute the BWE with respect to the signal on the frequency domain (abstract; 0124: bandwidth extension audio signal is based on a frequency domain). Rejected for similar rationale and reasoning as claim 21, where it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate these well-known steps to convert the signal to the proper domain for processing (as the signal would first need to be put in the frequency domain to then perform the additional steps of extension, as later explained in Lecomte).

Regarding claim 28, Lecomte teaches the electronic device of claim 21, wherein the instructions, when executed by the at least one processor, cause the electronic device to: in response to the bitrate being identified as lower than the bitrate threshold, obtain the second audio signal further based on a parameter converted to at least one parameter for the BWE (abstract; 0070). Rejected for similar rationale and reasoning as claim 21.

8. Claims 4, 7, 9, 24, 27, 29 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (2017/0013503) in view of Lecomte et al (2016/0180854) in further view of Sung et al (2020/0234720).
Regarding claim 4, Lee and Lecomte do not specifically teach, but Sung teaches, the electronic device of claim 3, wherein the instructions, when executed by the at least one processor, cause the electronic device to: in response to the bitrate being identified as lower than the bitrate threshold, obtain the audio signal further based on information for a signal having a strength greater than a reference strength within a predetermined time interval, obtained by encoding the second bitstream (0019: obtaining a gain of a current frame, based on at least one of the plurality of decoding parameters, obtaining an average of gains of the current frame and frames adjacent to the current frame, selecting a machine learning model for a transient signal when a difference between the gain of the current frame and the average of the gains is greater than a threshold, determining whether a window type included in the plurality of decoding parameters indicates short, when the difference between the gain of the current frame and the average of the gains is less than the threshold, selecting the machine learning model for the transient signal when the window type indicates short, and selecting a machine learning model for a stationary signal when the window type does not indicate short; 195; 200; 203: bandwidth extension technique). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Sung to allow execution of the bandwidth extension to compensate for a low bit rate and poor audio quality. Lee and Lecomte already teach determining to execute bandwidth extension and obtaining parameters to perform the extension. One could look to Sung to further execute the bandwidth extension using the additional parameters (of Sung) to allow the extension to better adapt to different characteristics so the decoded audio signal may be efficiently reconstructed (Sung 0223), while still using an ABE algorithm for synthesizing sound quality of a high-definition (HD) voice again from sound quality of a standard-definition (SD) voice (Lee 0009).

Regarding claim 7, Sung teaches the electronic device of claim 2, wherein the instructions, when executed by the at least one processor, cause the electronic device to: in response to the bitrate being identified as lower than the bitrate threshold, obtain the audio signal by executing an inverse transform for a signal on a frequency domain obtained through the BWE (fig 2 234 inverse converter; 0060: The inverse converter 234 may convert the stereo signal of the frequency domain and output a decoded audio signal of the time domain.). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the well-known inverse transform to convert the signal back from the frequency domain to output the audio through the speaker for presentation.

Regarding claim 9, Lee and Lecomte do not specifically teach, but Sung teaches, the electronic device of claim 8, wherein the at least one parameter is converted using a trained model (abstract: obtaining a reconstructed parameter by applying a machine learning model; 0001; 8; 0203). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Sung and a model for bandwidth extension to allow the extension to better adapt to different characteristics so the decoded audio signal may be efficiently reconstructed (Sung 0223) and higher-quality audio may be reconstructed using the reconstructed decoding parameters (Sung 0007).
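Sung's selection logic as quoted above (0019), comparing the current frame's gain to the average gain of the current and adjacent frames and falling back to the window type, maps fairly directly to code. The threshold value and all names below are illustrative assumptions, not Sung's values.

```python
# Sketch of Sung's cited model selection (0019): compare the current frame's
# gain to the average gain of the current and adjacent frames, then fall
# back to the window type. GAIN_DIFF_THRESHOLD is an illustrative value.

GAIN_DIFF_THRESHOLD = 0.5

def select_model(gain_current, gains_adjacent, window_type):
    gains = gains_adjacent + [gain_current]
    avg = sum(gains) / len(gains)
    if abs(gain_current - avg) > GAIN_DIFF_THRESHOLD:
        return "transient"    # large gain jump suggests a transient signal
    if window_type == "short":
        return "transient"    # short windows also indicate transients
    return "stationary"
```

A sudden gain jump selects the transient model even with a long window; steady gains with a long window select the stationary model.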
Regarding claim 24, Lee and Lecomte do not specifically teach, but Sung teaches, the electronic device of claim 21, wherein the instructions, when executed by the at least one processor, cause the electronic device to: in response to the bitrate being identified as lower than the bitrate threshold, obtain the second audio signal further based on information for a signal having a strength greater than a reference strength within a predetermined time interval, obtained by encoding the second bitstream (0019: obtaining a gain of a current frame, based on at least one of the plurality of decoding parameters, obtaining an average of gains of the current frame and frames adjacent to the current frame, selecting a machine learning model for a transient signal when a difference between the gain of the current frame and the average of the gains is greater than a threshold, determining whether a window type included in the plurality of decoding parameters indicates short, when the difference between the gain of the current frame and the average of the gains is less than the threshold, selecting the machine learning model for the transient signal when the window type indicates short, and selecting a machine learning model for a stationary signal when the window type does not indicate short; 195; 200; 203: bandwidth extension technique). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Sung to allow execution of the bandwidth extension to compensate for a low bit rate and poor audio quality. Lee and Lecomte already teach determining to execute bandwidth extension and obtaining parameters to perform the extension. One could look to Sung to further execute the bandwidth extension using the additional parameters (of Sung) to allow the extension to better adapt to different characteristics so the decoded audio signal may be efficiently reconstructed (Sung 0223), while still using an ABE algorithm for synthesizing sound quality of a high-definition (HD) voice again from sound quality of a standard-definition (SD) voice (Lee 0009).

Regarding claim 27, Sung teaches the electronic device of claim 21, wherein the instructions, when executed by the at least one processor, cause the electronic device to: in response to the bitrate being identified as lower than the bitrate threshold, obtain the second audio signal by executing an inverse transform for a signal on a frequency domain obtained through the BWE (fig 2 234 inverse converter; 0060: The inverse converter 234 may convert the stereo signal of the frequency domain and output a decoded audio signal of the time domain.). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the well-known inverse transform to convert the signal back from the frequency domain to output the audio through the speaker for presentation.

Regarding claim 29, Lee and Lecomte do not specifically teach, but Sung teaches, the electronic device of claim 28, wherein the at least one parameter is converted using a trained model (abstract: obtaining a reconstructed parameter by applying a machine learning model; 0001; 8; 0203). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Sung and a model for bandwidth extension to allow the extension to better adapt to different characteristics so the decoded audio signal may be efficiently reconstructed (Sung 0223) and higher-quality audio may be reconstructed using the reconstructed decoding parameters (Sung 0007).

9. Claims 5 and 25 are rejected under 35 U.S.C.
103 as being unpatentable over Lee (2017/0013503) in view of Lecomte et al (2016/0180854) in further view of Nongpiur et al (2007/0150269).

Regarding claim 5, Lee and Lecomte do not specifically teach, but Nongpiur teaches, the electronic device of claim 1, wherein the instructions, when executed by the at least one processor, cause the electronic device to: in response to the bitrate being identified as lower than the bitrate threshold, obtain the audio signal further based on the at least one coding parameter including pitch information or harmonic overtone information that was obtained when the second bitstream was encoded. Nongpiur teaches pitch information and harmonic overtone information (0049: harmonics; pitch analysis; These systems and/or acts may use a pitch analysis, code books, linear mapping, or other methods to reconstruct missing harmonics before or during the bandwidth extension; 38). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Nongpiur to allow execution of the bandwidth extension to compensate for a low bit rate and poor audio quality. Lee and Lecomte already teach bandwidth extension and obtaining parameters from previous portions of the audio signal to execute the extension. One could look to Nongpiur to additionally obtain pitch or harmonic information to reconstruct missing harmonics before or during the bandwidth extension (0049) and enhance the quality and intelligibility of speech signals by reconstructing missing bands that may make speech sound more natural and robust (Nongpiur 0050).

Regarding claim 25, Lee and Lecomte do not specifically teach, but Nongpiur teaches, the electronic device of claim 21, wherein the instructions, when executed by the at least one processor, cause the electronic device to: in response to the bitrate being identified as lower than the bitrate threshold, obtain the second audio signal further based on pitch information or harmonic overtone information obtained by encoding the second audio bitstream. Nongpiur teaches pitch information and harmonic overtone information (0049: harmonics; pitch analysis; These systems and/or acts may use a pitch analysis, code books, linear mapping, or other methods to reconstruct missing harmonics before or during the bandwidth extension; 38). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Nongpiur to allow execution of the bandwidth extension to compensate for a low bit rate and poor audio quality. Lee and Lecomte already teach bandwidth extension and obtaining parameters from previous portions of the audio signal to execute the extension. One could look to Nongpiur to additionally obtain pitch or harmonic information to reconstruct missing harmonics before or during the bandwidth extension (0049) and enhance the quality and intelligibility of speech signals by reconstructing missing bands that may make speech sound more natural and robust (Nongpiur 0050).

10. Claims 10 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Lee (2017/0013503) in view of Lecomte et al (2016/0180854) in further view of Lee et al (8,959,015).
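The pitch-based harmonic reconstruction cited from Nongpiur (0049) for claims 5 and 25 can be sketched as below: given an estimated pitch, candidate harmonics at integer multiples of the fundamental are regenerated above the transmitted band. The 1/k amplitude roll-off is an illustrative assumption; Nongpiur also mentions code books and linear mapping as alternative methods, and all names are hypothetical.

```python
# Sketch of pitch-based reconstruction of missing harmonics (cf. Nongpiur
# 0049). The 1/k amplitude roll-off is an illustrative assumption.

def missing_harmonics(pitch_hz, band_top_hz, extended_top_hz):
    """Return (frequency, relative amplitude) pairs for harmonics of the
    estimated pitch that lie above the transmitted band but within the
    extended bandwidth."""
    harmonics = []
    k = 1
    while k * pitch_hz <= extended_top_hz:
        freq = k * pitch_hz
        if freq > band_top_hz:
            harmonics.append((freq, 1.0 / k))  # assumed 1/k roll-off
        k += 1
    return harmonics

# Narrowband speech (up to 3.4 kHz) extended toward 7 kHz, pitch 200 Hz:
regen = missing_harmonics(200.0, 3400.0, 7000.0)
```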
Regarding claim 10: Lee and Lecomte do not specifically teach, but Lee '015 teaches, the electronic device of claim 2, wherein the instructions, when executed by the at least one processor, cause the electronic device to: obtain another audio signal from the second audio bitstream (col. 2, ll. 3-11; col. 12, ll. 4-13; claim 9: bitstream; previous frame), wherein a part of the audio signal is obtained by processing a part of the other audio signal that is overlapped with the part of the audio signal (col. 1, ll. 17-26: selects and operates one of the at least two encoding/decoding modules according to an input characteristic for each frame; col. 2, ll. 3-11: previous module information for overlapping; col. 12, ll. 4-13: overlapping; claim 9: overlap between previous frame and current frame). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Lee '015 (and its overlap information) to more properly select and operate one of the at least two encoding/decoding modules according to an input characteristic for each frame (col. 1, ll. 25-27), and ultimately provide for more continuous execution and reproduction of the audio signal.

Regarding claim 30: Lee and Lecomte do not specifically teach, but Lee '015 teaches, the electronic device of claim 21, wherein a part of the second audio signal is obtained by processing a part of the third audio signal that is overlapped with the part of the second audio signal (col. 1, ll. 17-26: selects and operates one of the at least two encoding/decoding modules according to an input characteristic for each frame; col. 2, ll. 3-11: previous module information for overlapping; col. 12, ll. 4-13: overlapping; claim 9: overlap between previous frame and current frame).
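The overlap processing cited from Lee '015 for claims 10 and 30 (blending the overlapped region between a previously decoded frame and the current frame so a switch between decoding modules does not produce a discontinuity) is, in spirit, an overlap-add crossfade. A minimal sketch, assuming a linear crossfade window; the window shape and function name are assumptions for illustration, not the reference's disclosure:

```python
import numpy as np

def crossfade_frames(prev_tail, curr_head):
    """Blend the overlapping region between the previous decoded frame's
    tail and the current frame's head with a linear crossfade, so a
    module or bandwidth switch does not produce an audible click.
    """
    assert len(prev_tail) == len(curr_head)
    n = len(prev_tail)
    fade_out = np.linspace(1.0, 0.0, n)  # weight on the previous frame
    fade_in = 1.0 - fade_out             # weight on the current frame
    return prev_tail * fade_out + curr_head * fade_in
```

Here `prev_tail` would be the last samples of the previously decoded frame and `curr_head` the first samples of the current one; summing the weighted copies keeps the transition continuous across the frame boundary.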
It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Lee '015 (and its overlap information) to more properly select and operate one of the at least two encoding/decoding modules according to an input characteristic for each frame (col. 1, ll. 25-27), and ultimately provide for more continuous execution and reproduction of the audio signal.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAUN A ROBERTS, whose telephone number is (571) 270-7541. The examiner can normally be reached Monday-Friday, 9-5 EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached at 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAUN ROBERTS/
Primary Examiner, Art Unit 2655

Prosecution Timeline

Oct 09, 2023: Application Filed
May 16, 2025: Non-Final Rejection — §103
Jul 02, 2025: Interview Requested
Jul 17, 2025: Applicant Interview (Telephonic)
Jul 18, 2025: Examiner Interview Summary
Aug 20, 2025: Response Filed
Sep 19, 2025: Final Rejection — §103
Nov 23, 2025: Request for Continued Examination
Dec 02, 2025: Response after Non-Final Action
Jan 16, 2026: Non-Final Rejection — §103
Apr 09, 2026: Interview Requested
Apr 14, 2026: Examiner Interview Summary
Apr 14, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586599: AUDIO SIGNAL PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM WITH MACHINE LEARNING AND FOR MICROPHONE MUTE STATE FEATURES IN A MULTI PERSON VOICE CALL (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586568: SYNTHETICALLY GENERATING INNER SPEECH TRAINING DATA (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573376: Dynamic Language and Command Recognition (granted Mar 10, 2026; 2y 5m to grant)
Patent 12562157: GENERATING TOPIC-SPECIFIC LANGUAGE MODELS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12555562: VOICE SYNTHESIS FROM DIFFUSION GENERATED SPECTROGRAMS FOR ACCESSIBILITY (granted Feb 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 86% (+10.3%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 647 resolved cases by this examiner. Grant probability derived from career allow rate.
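The headline projections can be reproduced from the examiner's career counts shown earlier (491 granted of 647 resolved). A quick check, assuming the interview lift is applied as an additive percentage-point adjustment:

```python
# Sanity-check the dashboard's figures from its own counts.
granted, resolved = 491, 647
base_rate = granted / resolved                  # career allow rate
interview_lift = 0.103                          # reported +10.3-point lift

grant_probability = round(base_rate * 100)                  # 76
with_interview = round((base_rate + interview_lift) * 100)  # 86
print(grant_probability, with_interview)
```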
