Prosecution Insights
Last updated: April 19, 2026
Application No. 18/000,443

MAINTAINING INVARIANCE OF SENSORY DISSONANCE AND SOUND LOCALIZATION CUES IN AUDIO CODECS

Status: Non-Final OA (§103)
Filed: Dec 01, 2022
Examiner: KAZEMINEZHAD, FARZAD
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 3 (Non-Final)
Grant Probability: 71% (Favorable)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% (379 granted / 534 resolved; +9.0% vs TC avg), above average
Interview Lift: +67.2% (allow rate of resolved cases with an interview vs. without)
Typical Timeline: 3y 6m average prosecution; 24 applications currently pending
Career History: 558 total applications across all art units

Statute-Specific Performance

§101: 13.6% (-26.4% vs TC avg)
§103: 36.9% (-3.1% vs TC avg)
§102: 18.3% (-21.7% vs TC avg)
§112: 18.5% (-21.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 534 resolved cases.

Office Action

Non-Final Rejection (§103), mailed Mar 06, 2026
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/17/2026 has been entered.

Response to Amendment

In response to the Office action of 11/19/2025, the applicant has submitted a request for continued examination, amending claims 1, 10, and 19-20 while arguing to traverse the prior art and 112 rejections. Applicant's arguments have been fully considered and found persuasive with respect to the 112(a) rejections, but the amended claims are rejected further in view of Eguchi (US 2006/0004565), as necessitated by the latest amendments.

Response to Arguments

After some broad remarks on page 8, the 112(a) rejection is discussed in the remainder of that page. The arguments were found persuasive and the 112(a) rejection is overcome. The arguments on page 10 concern why the references of record fail to teach the latest amendments. Since a new reference is relied upon in combination with the references of record, the examiner respectfully directs applicant to the new rejection further in view of Eguchi to see how the latest amendments are treated.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 10, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Guilford et al. (US 2017/0077964) in view of Moriya et al. (US 2003/0046064), and further in view of Eguchi (US 2006/0004565).

Regarding claim 1, Guilford et al. teach a method (Title, Abstract) comprising:

receiving a plurality of audio channels based on an audio stream (¶ 0020 lines 6+: "The compression engine 108 may compress an input stream" (receiving an audio stream that comprises two channels as shown in Fig. 1: one from the "input stream" into the "string matcher" 110, and the other from the "input stream" into the "Comparator" 130); ¶ 0036 sentence 2: "The system 200 may begin with compressing an input stream in a first compression stage using a string matcher" (a channel from the "input stream" into the "string matcher"); ¶ 0023 sentence 1: "The system" "may further include a comparator 130 to compare the output bit stream with the input stream of data" (a second channel from the "input stream" into the "comparator 130"); the "input stream" is audio, as it requires an "audio I/O" (¶ 0068 sentence before last));

applying a model based on at least one acoustic perception algorithm to the plurality of audio channels to generate a first modelled audio stream (¶ 0021 lines 1-3: "the compression engine 108 may include a string matcher 110, an entropy code generator 114", i.e., the "input stream" (audio stream), when it enters the "engine 108", is processed by "an entropy code generator 114" (an acoustic perception algorithm is applied) to generate, per ¶ 0017 lines 14+, "an output of an entropy code generator" (a first modelled audio stream));

quantizing the plurality of audio channels using a first set of quantization parameters (¶ 0020 lines 6-7: "The compression engine 108 may compress" (quantize) "an input stream" (the audio stream), using e.g. ¶ 0114 lines 11-12: "512 bit, 256 bit, 128 bit, 64 bit, 32 bit, or 16 bit data" (a first set of quantization parameters), "to generate a final compressed output");

dequantizing the quantized plurality of audio channels using the first set of quantization parameters (¶ 0020 lines 8+: "The decompression engine 120 may decompress" (dequantize) "the final compressed output" (the quantized audio stream, using the same quantization parameters, as it involves the same "compressed output"));

applying the model based on at least one acoustic perception algorithm to the dequantized plurality of audio channels to generate a second modelled audio stream (¶ 0019 lines 11+: "A second decompression stage may include a hardware decoder to partially decompress the final compressed output" (for the dequantized audio stream) "to reverse encoding of the entropy code encoder" (apply the model based on the acoustic perception algorithm), "generating a partially decompressed output" (to generate a second modelled audio stream));

determining a difference; in response to determining the difference does not meet a criterion, generating a second set of quantization parameters (¶ 0023 sentence 2: "The comparator 130 may compare" (determine the difference between) "every byte in the input" (the "input stream", i.e. the first modelled audio stream) "and output buffers" (the buffer of the decompression engine, i.e. the second modelled audio stream (¶ 0022 line 5)) "for equality" (to determine whether they meet a criterion); ¶ 0039 last sentence: "The embodiments of the error correcting that carry additional bits" (due to the "error", or lack of "equality" (the criterion), new "bits" are generated (a second set of quantization parameters)));

and quantizing the plurality of audio channels using the second set of quantization parameters (¶ 0039 last sentence: "The embodiments of the error correcting code that carry additional bits" (the second set of quantization parameters) "may be implemented" "by processor core 500" (on, e.g., the audio stream)).

Guilford et al. do not specifically disclose: determining a difference in perception by comparing [a] first modelled audio stream and [a] second modelled audio stream; in response to determining the difference does not meet a criterion, generating a second set of quantization parameters configured to reduce compression relative to the first set of quantization parameters to change a qualitative impact of quantization.

Moriya et al. do teach: determining a difference in perception by comparing [a] first modelled audio stream and [a] second modelled audio stream (¶ 0203 S1: "It is customary in the prior art that the high-compression-ratio coding of an acoustic signal is designed to minimize perceptual distortion" (i.e., a "distortion difference" (a difference) in perception is determined) "in view of the perceptual characteristics"; per ¶ 0565 S1: "calculating a distortion difference" (the distortion difference is determined by finding a difference) "between the spectral envelope of said provisional samples or restored samples" (a first modelled audio stream) "and said reconstructed spectral envelope" (a second modelled audio stream), "setting said provisional samples as restored samples").

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of "compression-ratio coding" based on "perceptual distortion" for "quantization" of Moriya et al. into the "quantization" techniques of Guilford et al., as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. to obtain an effective recipe for "improv[ing] the signal quality" based on "quantization results" (i.e., quantization parameters), as disclosed in Moriya et al. ¶ 0226 last sentence.

Guilford et al. in view of Moriya et al. do not specifically disclose: generating a second set of quantization parameters configured to reduce compression relative to the first set of quantization parameters to change a qualitative impact of quantization.

Eguchi does teach: generating a second set of quantization parameters configured to reduce compression relative to the first set of quantization parameters to change a qualitative impact of quantization (¶ 0092 last sentence: "Thus, the size of quantizing noise" (generating a second set of quantization parameters) "in an audio signal encoding process can be reduced" (to reduce "encoding" (compression)) "which can contribute to the improvement of the tone quality" (to change the qualitative impact of the quantization) "of audio signal encoding/decoding equipment").

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the "quantization" techniques of Eguchi into those of Guilford et al. in view of Moriya et al., as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. in view of Moriya et al. to adjust their "tone quality" by adjusting the "quantization" "step", as discussed in Eguchi ¶ 0092 last sentence.
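Stripped of the citations, independent claims 1, 10, 19 and 20 all recite the same analysis-by-synthesis loop: run a perceptual model on the input, quantize, reconstruct, run the model again, compare the two modelled streams, and relax the quantizer when the perceptual difference fails the criterion. The Python sketch below is a minimal illustration of that loop; the magnitude-spectrum "model", the max-difference metric, and the scalar step size are placeholder assumptions, not the application's actual codec.

```python
import numpy as np

def perceptual_model(channels: np.ndarray) -> np.ndarray:
    """Placeholder acoustic-perception model (assumption: a magnitude
    spectrum stands in for dissonance/localization analysis)."""
    return np.abs(np.fft.rfft(channels, axis=-1))

def quantize(channels: np.ndarray, step: float) -> np.ndarray:
    return np.round(channels / step)

def dequantize(codes: np.ndarray, step: float) -> np.ndarray:
    return codes * step

def encode_with_perceptual_check(channels, step=0.05, tolerance=1.0, max_iters=8):
    """Quantize, reconstruct, re-run the perceptual model, and relax the
    quantization step until the modelled difference meets the criterion."""
    reference = perceptual_model(channels)           # first modelled audio stream
    for _ in range(max_iters):
        codes = quantize(channels, step)             # current parameter set
        candidate = perceptual_model(dequantize(codes, step))  # second modelled stream
        difference = float(np.max(np.abs(candidate - reference)))
        if difference <= tolerance:                  # criterion met: keep this set
            break
        step *= 0.5                                  # reduce compression (finer step)
    else:
        codes = quantize(channels, step)             # finest parameter set tried
    return codes, step

# usage: two channels of a toy stereo stream
stereo = np.random.default_rng(0).standard_normal((2, 1024))
codes, final_step = encode_with_perceptual_check(stereo)
```

Halving the step on each failed check is the simplest way to "reduce compression relative to the first set of quantization parameters"; a real codec would reallocate bits per band instead.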
Regarding claim 10, Guilford et al. teach a method (Title, Abstract) comprising:

receiving an audio stream (¶ 0020 lines 6+: "The compression engine 108 may compress an input stream" (receiving an audio stream); the "input stream" is audio, as it requires an "audio I/O" (¶ 0068 sentence before last));

applying a model based on at least one acoustic perception algorithm to the audio stream to generate a first modelled audio stream (¶ 0021 lines 1-3: "the compression engine 108 may include a string matcher 110, an entropy code generator 114", i.e., the "input stream" (audio stream), when it enters the "engine 108", is processed by "an entropy code generator 114" (an acoustic perception algorithm is applied) to generate, per ¶ 0017 lines 14+, "an output of an entropy code generator" (a first modelled audio stream));

compressing the audio stream using a first set of quantization parameters (¶ 0020 lines 6-7: "The compression engine 108 may compress" "an input stream" (the audio stream), using e.g. ¶ 0114 lines 11-12: "512 bit, 256 bit, 128 bit, 64 bit, 32 bit, or 16 bit data" (a first set of quantization parameters), "to generate a final compressed output");

decompressing the compressed audio stream using the first set of quantization parameters (¶ 0020 lines 8+: "The decompression engine 120 may decompress" "the final compressed output" (the compressed audio stream, using the same quantization parameters, as it involves the same "compressed output"));

applying the model based on at least one acoustic perception algorithm to the decompressed audio stream to generate a second modelled audio stream (¶ 0019 lines 11+: "A second decompression stage may include a hardware decoder to partially decompress the final compressed output" (for the decompressed audio stream) "to reverse encoding of the entropy code encoder" (apply the model based on the acoustic perception algorithm), "generating a partially decompressed output" (to generate a second modelled audio stream));

determining a difference; in response to determining the difference does not meet a criterion, generating a second set of quantization parameters (¶ 0023 sentence 2: "The comparator 130 may compare" (determine the difference between) "every byte in the input" (the "input stream", i.e. the first modelled audio stream) "and output buffers" (the buffer of the decompression engine, i.e. the second modelled audio stream (¶ 0022 line 5)) "for equality" (to determine whether they meet a criterion); ¶ 0039 last sentence: "The embodiments of the error correcting that carry additional bits" (due to the "error", or lack of "equality" (the criterion), new "bits" are generated (a second set of quantization parameters)));

and compressing the audio stream using the second set of quantization parameters (¶ 0039 last sentence: "The embodiments of the error correcting code that carry additional bits" (the second set of quantization parameters) "may be implemented" "by processor core 500" (on, e.g., the audio stream)).

Guilford et al. do not specifically disclose: determining a difference in perception by comparing [a] first modelled audio stream and [a] second modelled audio stream; in response to determining the difference does not meet a criterion, generating a second set of quantization parameters configured to reduce compression relative to the first set of quantization parameters to change a qualitative impact of quantization.

Moriya et al. do teach: determining a difference in perception by comparing [a] first modelled audio stream and [a] second modelled audio stream (¶ 0203 S1: "It is customary in the prior art that the high-compression-ratio coding of an acoustic signal is designed to minimize perceptual distortion" (i.e., a "distortion difference" (a difference) in perception is determined) "in view of the perceptual characteristics"; per ¶ 0565 S1: "calculating a distortion difference" (the distortion difference is determined by finding a difference) "between the spectral envelope of said provisional samples or restored samples" (a first modelled audio stream) "and said reconstructed spectral envelope" (a second modelled audio stream), "setting said provisional samples as restored samples").

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of "compression-ratio coding" based on "perceptual distortion" for "quantization" of Moriya et al. into the "quantization" techniques of Guilford et al., as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. to obtain an effective recipe for "improv[ing] the signal quality" based on "quantization results" (i.e., quantization parameters), as disclosed in Moriya et al. ¶ 0226 last sentence.

Guilford et al. in view of Moriya et al. do not specifically disclose: generating a second set of quantization parameters configured to reduce compression relative to the first set of quantization parameters to change a qualitative impact of quantization.

Eguchi does teach: generating a second set of quantization parameters configured to reduce compression relative to the first set of quantization parameters to change a qualitative impact of quantization (¶ 0092 last sentence: "Thus, the size of quantizing noise" (generating a second set of quantization parameters) "in an audio signal encoding process can be reduced" (to reduce "encoding" (compression)) "which can contribute to the improvement of the tone quality" (to change the qualitative impact of the quantization) "of audio signal encoding/decoding equipment").

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the "quantization" techniques of Eguchi into those of Guilford et al. in view of Moriya et al., as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. in view of Moriya et al. to adjust their "tone quality" by adjusting the "quantization" "step", as discussed in Eguchi ¶ 0092 last sentence.

Regarding claim 19, Guilford et al. teach an apparatus (Title, Abstract) comprising one or more processors, and a memory storing instructions which, when executed by the one or more processors (¶ 0040: "The processor core 500 includes a front end unit 530 coupled to an execution engine unit 550, and both are coupled to a memory unit 570. The processor core 500 may include a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, processor core 500 may include a special-purpose core, such as, for example, a network or communication core, compression engine, graphics core, or the like. In one embodiment, processor core 500 may be a multi-core processor or may be part of a multi-processor system"), cause the one or more processors to:

receive a plurality of audio channels based on an audio stream (¶ 0020 lines 6+: "The compression engine 108 may compress an input stream" (receiving an audio stream that comprises two channels as shown in Fig. 1: one from the "input stream" into the "string matcher" 110, and the other from the "input stream" into the "Comparator" 130); ¶ 0036 sentence 2: "The system 200 may begin with compressing an input stream in a first compression stage using a string matcher" (a channel from the "input stream" into the "string matcher"); ¶ 0023 sentence 1: "The system" "may further include a comparator 130 to compare the output bit stream with the input stream of data" (a second channel from the "input stream" into the "comparator 130"); the "input stream" is audio, as it requires an "audio I/O" (¶ 0068 sentence before last));

apply a model based on at least one acoustic perception algorithm to the plurality of audio channels to generate a first modelled audio stream (¶ 0021 lines 1-3: "the compression engine 108 may include a string matcher 110, an entropy code generator 114", i.e., the "input stream" (audio stream), when it enters the "engine 108", is processed by "an entropy code generator 114" (an acoustic perception algorithm is applied) to generate, per ¶ 0017 lines 14+, "an output of an entropy code generator" (a first modelled audio stream));

quantize the plurality of audio channels using a first set of quantization parameters (¶ 0020 lines 6-7: "The compression engine 108 may compress" (quantize) "an input stream" (the audio stream), using e.g. ¶ 0114 lines 11-12: "512 bit, 256 bit, 128 bit, 64 bit, 32 bit, or 16 bit data" (a first set of quantization parameters), "to generate a final compressed output");

dequantize the quantized plurality of audio channels using the first set of quantization parameters (¶ 0020 lines 8+: "The decompression engine 120 may decompress" (dequantize) "the final compressed output" (the quantized audio stream, using the same quantization parameters, as it involves the same "compressed output"));

apply the model based on at least one acoustic perception algorithm to the dequantized plurality of audio channels to generate a second modelled audio stream (¶ 0019 lines 11+: "A second decompression stage may include a hardware decoder to partially decompress the final compressed output" (for the dequantized audio stream) "to reverse encoding of the entropy code encoder" (apply the model based on the acoustic perception algorithm), "generating a partially decompressed output" (to generate a second modelled audio stream));

determine a difference; in response to determining the difference does not meet a criterion, generate a second set of quantization parameters (¶ 0023 sentence 2: "The comparator 130 may compare" (determine the difference between) "every byte in the input" (the "input stream", i.e. the first modelled audio stream) "and output buffers" (the buffer of the decompression engine, i.e. the second modelled audio stream (¶ 0022 line 5)) "for equality" (to determine whether they meet a criterion); ¶ 0039 last sentence: "The embodiments of the error correcting that carry additional bits" (due to the "error", or lack of "equality" (the criterion), new "bits" are generated (a second set of quantization parameters)));

and quantize the plurality of audio channels using the second set of quantization parameters (¶ 0039 last sentence: "The embodiments of the error correcting code that carry additional bits" (the second set of quantization parameters) "may be implemented" "by processor core 500" (on, e.g., the audio stream)).

Guilford et al. do not specifically disclose: determining a difference in perception by comparing [a] first modelled audio stream and [a] second modelled audio stream; in response to determining the difference does not meet a criterion, generating a second set of quantization parameters configured to reduce compression relative to the first set of quantization parameters to change a qualitative impact of quantization.

Moriya et al. do teach: determining a difference in perception by comparing [a] first modelled audio stream and [a] second modelled audio stream (¶ 0203 S1: "It is customary in the prior art that the high-compression-ratio coding of an acoustic signal is designed to minimize perceptual distortion" (i.e., a "distortion difference" (a difference) in perception is determined) "in view of the perceptual characteristics"; per ¶ 0565 S1: "calculating a distortion difference" (the distortion difference is determined by finding a difference) "between the spectral envelope of said provisional samples or restored samples" (a first modelled audio stream) "and said reconstructed spectral envelope" (a second modelled audio stream), "setting said provisional samples as restored samples").

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of "compression-ratio coding" based on "perceptual distortion" for "quantization" of Moriya et al. into the "quantization" techniques of Guilford et al., as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. to obtain an effective recipe for "improv[ing] the signal quality" based on "quantization results" (i.e., quantization parameters), as disclosed in Moriya et al. ¶ 0226 last sentence.

Guilford et al. in view of Moriya et al. do not specifically disclose: generate a second set of quantization parameters configured to reduce compression relative to the first set of quantization parameters to change a qualitative impact of quantization.

Eguchi does teach: generate a second set of quantization parameters configured to reduce compression relative to the first set of quantization parameters to change a qualitative impact of quantization (¶ 0092 last sentence: "Thus, the size of quantizing noise" (generating a second set of quantization parameters) "in an audio signal encoding process can be reduced" (to reduce "encoding" (compression)) "which can contribute to the improvement of the tone quality" (to change the qualitative impact of the quantization) "of audio signal encoding/decoding equipment").

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the "quantization" techniques of Eguchi into those of Guilford et al. in view of Moriya et al., as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. in view of Moriya et al. to adjust their "tone quality" by adjusting the "quantization" "step", as discussed in Eguchi ¶ 0092 last sentence.

Regarding claim 20, Guilford et al. teach a non-transitory computer readable medium containing instructions that, when executed, cause a processor of a computer system (¶ 0083: "The computer-readable storage medium 1124 may also be used to store instructions 1126 utilizing the processing device 1102, such as described with respect to FIGS. 1-4, and/or a software library containing methods that call the above applications. While the computer-readable storage medium 1124 is shown in an example embodiment to be a single medium, the term 'computer-readable storage medium' should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions") to:

receive a plurality of audio channels based on an audio stream (¶ 0020 lines 6+: "The compression engine 108 may compress an input stream" (receiving an audio stream that comprises two channels as shown in Fig. 1: one from the "input stream" into the "string matcher" 110, and the other from the "input stream" into the "Comparator" 130); ¶ 0036 sentence 2: "The system 200 may begin with compressing an input stream in a first compression stage using a string matcher" (a channel from the "input stream" into the "string matcher"); ¶ 0023 sentence 1: "The system" "may further include a comparator 130 to compare the output bit stream with the input stream of data" (a second channel from the "input stream" into the "comparator 130"); the "input stream" is audio, as it requires an "audio I/O" (¶ 0068 sentence before last));

apply a model based on at least one acoustic perception algorithm to the plurality of audio channels to generate a first modelled audio stream (¶ 0021 lines 1-3: "the compression engine 108 may include a string matcher 110, an entropy code generator 114", i.e., the "input stream" (audio stream), when it enters the "engine 108", is processed by "an entropy code generator 114" (an acoustic perception algorithm is applied) to generate, per ¶ 0017 lines 14+, "an output of an entropy code generator" (a first modelled audio stream));

quantize the plurality of audio channels using a first set of quantization parameters (¶ 0020 lines 6-7: "The compression engine 108 may compress" (quantize) "an input stream" (the audio stream), using e.g. ¶ 0114 lines 11-12: "512 bit, 256 bit, 128 bit, 64 bit, 32 bit, or 16 bit data" (a first set of quantization parameters), "to generate a final compressed output");

dequantize the quantized plurality of audio channels using the first set of quantization parameters (¶ 0020 lines 8+: "The decompression engine 120 may decompress" (dequantize) "the final compressed output" (the quantized audio stream, using the same quantization parameters, as it involves the same "compressed output"));

apply the model based on at least one acoustic perception algorithm to the dequantized plurality of audio channels to generate a second modelled audio stream (¶ 0019 lines 11+: "A second decompression stage may include a hardware decoder to partially decompress the final compressed output" (for the dequantized audio stream) "to reverse encoding of the entropy code encoder" (apply the model based on the acoustic perception algorithm), "generating a partially decompressed output" (to generate a second modelled audio stream));

determine a difference; in response to determining the difference does not meet a criterion, generate a second set of quantization parameters (¶ 0023 sentence 2: "The comparator 130 may compare" (determine the difference between) "every byte in the input" (the "input stream", i.e. the first modelled audio stream) "and output buffers" (the buffer of the decompression engine, i.e. the second modelled audio stream (¶ 0022 line 5)) "for equality" (to determine whether they meet a criterion); ¶ 0039 last sentence: "The embodiments of the error correcting that carry additional bits" (due to the "error", or lack of "equality" (the criterion), new "bits" are generated (a second set of quantization parameters)));

and quantize the plurality of audio channels using the second set of quantization parameters (¶ 0039 last sentence: "The embodiments of the error correcting code that carry additional bits" (the second set of quantization parameters) "may be implemented" "by processor core 500" (on, e.g., the audio stream)).

Guilford et al. do not specifically disclose: determine a difference in perception by comparing [a] first modelled audio stream and [a] second modelled audio stream; in response to determining the difference does not meet a criterion, generate a second set of quantization parameters configured to reduce compression relative to the first set of quantization parameters to change a qualitative impact of quantization.

Moriya et al. do teach: determine a difference in perception by comparing [a] first modelled audio stream and [a] second modelled audio stream (¶ 0203 S1: "It is customary in the prior art that the high-compression-ratio coding of an acoustic signal is designed to minimize perceptual distortion" (i.e., a "distortion difference" (a difference) in perception is determined) "in view of the perceptual characteristics"; per ¶ 0565 S1: "calculating a distortion difference" (the distortion difference is determined by finding a difference) "between the spectral envelope of said provisional samples or restored samples" (a first modelled audio stream) "and said reconstructed spectral envelope" (a second modelled audio stream), "setting said provisional samples as restored samples").

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of "compression-ratio coding" based on "perceptual distortion" for "quantization" of Moriya et al. into the "quantization" techniques of Guilford et al., as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. to obtain an effective recipe for "improv[ing] the signal quality" based on "quantization results" (i.e., quantization parameters), as disclosed in Moriya et al. ¶ 0226 last sentence.

Guilford et al. in view of Moriya et al. do not specifically disclose: generate a second set of quantization parameters configured to reduce compression relative to the first set of quantization parameters to change a qualitative impact of quantization.

Eguchi does teach: generate a second set of quantization parameters configured to reduce compression relative to the first set of quantization parameters to change a qualitative impact of quantization (¶ 0092 last sentence: "Thus, the size of quantizing noise" (generating a second set of quantization parameters) "in an audio signal encoding process can be reduced" (to reduce "encoding" (compression)) "which can contribute to the improvement of the tone quality" (to change the qualitative impact of the quantization) "of audio signal encoding/decoding equipment").

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the "quantization" techniques of Eguchi into those of Guilford et al. in view of Moriya et al., as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. in view of Moriya et al. to adjust their "tone quality" by adjusting the "quantization" "step", as discussed in Eguchi ¶ 0092 last sentence.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Guilford et al. in view of Moriya et al. and Eguchi, and further in view of JPWO2011083849A1.

Regarding claims 2 and 11, Guilford et al. in view of Moriya et al. and Eguchi do not specifically disclose the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is a dissonance model. JPWO2011083849A1 does teach the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is a dissonance model (¶ 0042 sentence 2: "The pitch period encoding unit" (a quantization technique) "encodes the differences between the integer part of the pitch periods" (based on "differences" of "pitch periods" (dissonance))).

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the "encoding" techniques of JPWO2011083849A1 into the "compression engine 108" of Guilford et al. in view of Moriya et al. and Eguchi, as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. in view of Moriya et al. and Eguchi to resolve the "smallest value of pitch periods", as disclosed in JPWO2011083849A1 ¶ 0008 lines 9-11.
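The rejection reads "dissonance model" onto pitch-period difference coding. In psychoacoustics, a sensory dissonance model usually means something closer to the Plomp-Levelt roughness curve: score every pair of spectral partials by their frequency separation relative to critical bandwidth. A minimal sketch, using the commonly cited Sethares-style constants (illustrative only, not taken from any reference of record):

```python
import numpy as np

def pair_dissonance(f1, a1, f2, a2):
    """Plomp-Levelt-style roughness for one pair of partials.
    Constants follow the widely used Sethares fit (assumption: any
    calibrated curve would serve as a dissonance model)."""
    lo, hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * lo + 19.0)   # critical-bandwidth scaling
    x = s * (hi - lo)
    return min(a1, a2) * (np.exp(-3.5 * x) - np.exp(-5.75 * x))

def sensory_dissonance(freqs, amps):
    """Total dissonance of a spectrum: sum over all partial pairs."""
    total = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            total += pair_dissonance(freqs[i], amps[i], freqs[j], amps[j])
    return total

# usage: a minor second scores rougher than a perfect fifth
fifth = sensory_dissonance([440.0, 660.0], [1.0, 1.0])
second = sensory_dissonance([440.0, 466.2], [1.0, 1.0])
assert second > fifth
```

The final assertion captures the kind of ordering a codec that maintains invariance of sensory dissonance, per the application's title, would need to preserve across quantization.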
Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Guilford et al. in view of Moriya et al. and Eguchi, and further in view of HAGIWARA (JP2010109609A).

Regarding claims 3 and 12, Guilford et al. in view of Moriya et al. and Eguchi do not specifically disclose the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is a localization model. HAGIWARA does teach the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is a localization model (¶ 0031 lines 1+: "sound" (audio) "effect encoded data in which the localization" (a perception algorithm based on localization) "information identification bit value 52" (specific quantization) "selected by the localization information selection unit 25 is added to the sound effect").

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the "bit" allocation based on "localization" of HAGIWARA into the "compression engine 108" of Guilford et al. in view of Moriya et al. and Eguchi, as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. in view of Moriya et al. and Eguchi to provide "background sound in addition to voice to a receiver", as disclosed in HAGIWARA ¶ 0001.
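HAGIWARA is cited only at the level of localization-information bits. For context, a localization model over a stereo pair typically reduces to the interaural time and level differences (ITD/ILD), the sound localization cues named in the application's title. A rough sketch, assuming FFT cross-correlation for the time delta and an RMS ratio for the level delta:

```python
import numpy as np

def localization_cues(left: np.ndarray, right: np.ndarray, sample_rate: int = 48_000):
    """Estimate interaural cues for one stereo frame (illustrative only).

    Time delta (ITD): lag of the cross-correlation peak between channels.
    Level delta (ILD): dB ratio of the two channel RMS values.
    """
    n = len(left) + len(right) - 1
    # Cross-correlate via FFT; the peak lag approximates the time delta.
    spec = np.fft.rfft(left, n) * np.conj(np.fft.rfft(right, n))
    xcorr = np.fft.irfft(spec, n)
    lag = int(np.argmax(np.abs(xcorr)))
    if lag > n // 2:              # map wrapped lags to negative delays
        lag -= n
    itd_seconds = lag / sample_rate

    eps = 1e-12                   # avoid log(0) on silent frames
    rms_left = np.sqrt(np.mean(left ** 2)) + eps
    rms_right = np.sqrt(np.mean(right ** 2)) + eps
    ild_db = 20.0 * np.log10(rms_left / rms_right)
    return itd_seconds, ild_db

# usage: shifting one channel by 10 samples moves the correlation peak
rng = np.random.default_rng(1)
signal = rng.standard_normal(4096)
itd, ild = localization_cues(signal, np.roll(signal, 10))
```

A codec preserving localization cues would compare these ITD/ILD values before and after quantization, much like the modelled-stream comparison in the independent claims.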
Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Guilford et al. in view of Moriya et al. and Eguchi, and further in view of Shi (US 2014/0269901).

Regarding claims 4 and 13, Guilford et al. in view of Moriya et al. and Eguchi do not specifically disclose the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is a salience model. Shi does teach the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is a salience model (¶ 0042 sentence 1: "determine a saliency score" (a salience acoustic perceptual model) "for adjusting" (for generating and updating) "a quantization parameter" (a quantization model); ¶ 0049 lines 1-2: the "source data" "may be" "audio").

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the "QUANTIZATION" techniques of Shi into the "compression engine 108" of Guilford et al. in view of Moriya et al. and Eguchi, as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. in view of Moriya et al. and Eguchi to have their "video" (including its "audio" content) "quality" "improved", as disclosed in Shi ¶ 0017 lines 2-3.

Claims 5-6 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Guilford et al. in view of Moriya et al. and Eguchi, and further in view of SKORDILIS et al. (TW 202111692A).

Regarding claims 5 and 14, Guilford et al. in view of Moriya et al. and Eguchi do not specifically disclose the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is a trained machine learning model trained using at least one of a supervised learning algorithm and an unsupervised learning algorithm. SKORDILIS et al. do teach the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is a trained machine learning model trained using at least one of a supervised learning algorithm and an unsupervised learning algorithm (¶ 0122 last sentence: "Using a neural network" (a trained machine learning model) "model" "can compensate for higher quantization errors in the LP and pitch parameters" (to reduce "pitch" (an acoustic perception feature) error when "quantization" is used); ¶ 0122 sentence 1: this technique is also identified as a "Machine-Learning (ML)" technique, which supports both supervised and unsupervised learning).

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the "neural network" techniques based on "pitch" and "quantization" parameters of SKORDILIS et al. into the "compression engine 108" of Guilford et al. in view of Moriya et al. and Eguchi, as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. in view of Moriya et al. and Eguchi to "compensate" "for higher quantization errors" (or "compression" errors), as disclosed in SKORDILIS et al. ¶ 0122 last sentence.

Regarding claims 6 and 15, Guilford et al. in view of Moriya et al. and Eguchi do not specifically disclose the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is based on a frequency and a level algorithm applied to the audio channels in the frequency domain. SKORDILIS et al. do teach the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is based on a frequency and a level algorithm applied to the audio channels in the frequency domain (¶ 0122 last sentence: "Using a neural network" "model" (an acoustic perception algorithm) "can compensate for higher quantization errors in the LP and pitch parameters" (based on reducing "pitch" (an acoustic perception feature) error when "quantization" is used), where per ¶ 0103 last sentence "pitch" is "deriv[ed]" from "the speech spectrum", the "spectrum" being obtained per ¶ 0072 last sentence using "e.g., DFT, FFT, or other spectrum" (i.e., a fast Fourier transform) of speech, which implies the frequency domain, and the "spectrum" also being associated per ¶ 0103 lines 12-13 with a "power spectral density" (a level)).

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the techniques based on "pitch" "quantization" parameters of SKORDILIS et al. into the "compression engine 108" of Guilford et al. in view of Moriya et al. and Eguchi, as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. in view of Moriya et al. and Eguchi to "compensate" "for higher quantization errors" (or "compression" errors), as disclosed in SKORDILIS et al. ¶ 0122 last sentence.

Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Guilford et al. in view of Moriya et al. and Eguchi, and further in view of Jang et al. (US 2004/0162720).

Regarding claims 7 and 16, Guilford et al. in view of Moriya et al. and Eguchi do not specifically disclose the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is based on a calculation of a masking level between at least two frequency components. Jang et al. do teach the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is based on a calculation of a masking level between at least two frequency components (¶ 0014 sentence 1: "In this way, quantization using a psychoacoustic model is done to divide the audible frequency range into a number of frequency sub-bands" (at least two frequency components) "of equal width and quantize" (a quantization based on) "only audio data having a sound pressure level above the masking threshold" (a masking level between the components; only components "above" the "threshold" are determined to be acoustically perceptible and worth bit allocation)).

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the "quantization using [a] psychoacoustic model" of Jang et al. into the "compression engine 108" of Guilford et al. in view of Moriya et al. and Eguchi, as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. in view of Moriya et al. and Eguchi to avoid unnecessary quantization of frequency sub-bands not assessed perceptible under the "masking threshold" criterion.
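Jang's cited teaching is the textbook psychoacoustic recipe: split the spectrum into sub-bands and spend bits only where energy clears the masking level. The toy allocation below assumes a flat threshold-in-quiet and a single 30 dB spread from the strongest band; real models (e.g., the MPEG psychoacoustic models) are far more detailed:

```python
import numpy as np

def subband_bit_allocation(frame: np.ndarray, n_bands: int = 16, bit_pool: int = 64):
    """Toy masking-based allocation: only sub-bands whose energy exceeds
    a masking level receive bits (a sketch, not the MPEG model)."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spectrum, n_bands)
    energy_db = np.array([10 * np.log10(np.sum(b) + 1e-12) for b in bands])

    # Masking level: the louder of a fixed floor (threshold in quiet,
    # assumption) and the strongest band minus a 30 dB spread.
    masking_db = max(-60.0, energy_db.max() - 30.0)

    audible = energy_db > masking_db          # perceptible sub-bands only
    smr = np.where(audible, energy_db - masking_db, 0.0)
    if smr.sum() == 0:
        return np.zeros(n_bands, dtype=int)
    return np.floor(bit_pool * smr / smr.sum()).astype(int)

# usage: a loud tone grabs most bits; masked neighbours get none
t = np.arange(1024) / 48_000
frame = np.sin(2 * np.pi * 1000 * t) + 1e-4 * np.random.default_rng(2).standard_normal(1024)
allocation = subband_bit_allocation(frame)
```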
Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Guilford et al. in view of Moriya et al. and Eguchi, and further in view of Yao (US 2007/0033027).

Regarding claims 8 and 17, Guilford et al. in view of Moriya et al. and Eguchi do not specifically disclose the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is based on at least one of a time delta comparison, a level delta comparison and a transfer function applied to transients associated with a left audio channel and a right audio channel. Yao does teach the method of claim 1 (or 10), wherein the model based on at least one acoustic perception algorithm is based on at least one of a time delta comparison, a level delta comparison and a transfer function applied to transients associated with a left audio channel and a right audio channel (¶ 0042 last sentence: the "estimate of channel distortion" (between a left and a right channel, e.g. the two channels of the "input stream" in Guilford et al.) is

$H_l = \bar{H}_l - \epsilon \, \dfrac{\Delta_{H_l} Q(\lambda \mid \bar{\lambda})}{\Delta_{H_l}^2 Q(\lambda \mid \bar{\lambda})} \Big|_{H_l = \bar{H}_l} \quad (12)$

(a time delta is determined), where $\epsilon$ is a factor between 0.0 and 1.0).

It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the calculations of paragraph 0042 of Yao into the "compression engine 108" of Guilford et al. in view of Moriya et al. and Eguchi, as doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Guilford et al. in view of Moriya et al. and Eguchi to use the "channel distortion" calculations to "estimate" the "current background noise", as disclosed in Yao, Abstract, last sentence.
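The Yao excerpt survives only as garbled OCR in the record; its shape (a first difference of Q over a second difference of Q, scaled by a factor ε between 0.0 and 1.0) reads as a damped Newton update, and the rendering of Eq. (12) above assumes exactly that. A minimal sketch of such an update, with a made-up quadratic distortion standing in for Yao's Q:

```python
def damped_newton_step(grad, hess, x: float, eps: float = 0.5) -> float:
    """One damped Newton update, x <- x - eps * grad(x)/hess(x), with
    eps in [0, 1]; the form Eq. (12) appears to take (an assumption,
    given the garbled excerpt)."""
    return x - eps * grad(x) / hess(x)

# usage: refine an estimate toward the minimum of Q(H) = (H - 2.0)**2
grad = lambda h: 2 * (h - 2.0)   # hypothetical first difference of Q
hess = lambda h: 2.0             # hypothetical second difference of Q
h = 0.0
for _ in range(10):
    h = damped_newton_step(grad, hess, h, eps=0.5)
# h converges toward 2.0; eps < 1 slows but stabilizes the update
```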
Allowable Subject Matter

Claims 9 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARZAD KAZEMINEZHAD, whose telephone number is (571) 270-5860. The examiner can normally be reached 10:30 am to 11:30 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras D. Shah, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Farzad Kazeminezhad/
Art Unit 2653
March 6, 2026

Prosecution Timeline

Dec 01, 2022: Application Filed
Jun 15, 2025: Non-Final Rejection — §103
Sep 09, 2025: Interview Requested
Sep 19, 2025: Applicant Interview (Telephonic)
Sep 19, 2025: Examiner Interview Summary
Sep 25, 2025: Response Filed
Nov 15, 2025: Final Rejection — §103
Feb 12, 2026: Applicant Interview (Telephonic)
Feb 13, 2026: Examiner Interview Summary
Feb 17, 2026: Request for Continued Examination
Feb 19, 2026: Response after Non-Final Action
Mar 02, 2026: Examiner Interview (Telephonic)
Mar 06, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603080: GAZE-BASED AND AUGMENTED AUTOMATIC INTERPRETATION METHOD AND SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12592242: MACHINE LEARNING (ML) BASED EMOTION, IDENTITY AND VOICE CONVERSION IN AUDIO USING VIRTUAL DOMAIN MIXING AND FAKE PAIR-MASKING
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586596: SYSTEM AND METHOD FOR BACKGROUND NOISE SUPPRESSION BY PROJECTING AN INPUT AUDIO INTO A HIGHER DIMENSION SPACE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12555587: APPARATUS AND METHOD FOR ENCODING AN AUDIO SIGNAL USING AN OUTPUT INTERFACE FOR OUTPUTTING A PARAMETER CALCULATED FROM A COMPENSATION VALUE
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12537019: ACTIVITY CHARTING WHEN USING PERSONAL ARTIFICIAL INTELLIGENCE ASSISTANTS INCLUDING DIFFERENTIATING A PATIENT FROM A DIFFERENT PERSON BASED ON AUDIO ASSOCIATED WITH TOILETTING
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 99% (+67.2% lift)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 534 resolved cases by this examiner. Grant probability is derived from the career allow rate.
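The headline numbers above are simple ratios over the examiner's career record; the sketch below reproduces them. How the dashboard folds the +67.2% interview lift into the 99% figure is not published, so that step is left as a comment rather than guessed at:

```python
# Headline figures from the examiner's career record (534 resolved cases).
granted, resolved = 379, 534
career_allow_rate = granted / resolved        # 0.7097 -> the "71%" tile

# "+9.0% vs TC avg" implies a Tech Center average of roughly 62%.
implied_tc_average = career_allow_rate - 0.090

# Assumption: the "+67.2% interview lift" compares allow rates of resolved
# cases with vs. without an interview; the exact formula behind the 99%
# "with interview" projection is not published by the dashboard.
print(f"career allow rate:  {career_allow_rate:.1%}")
print(f"implied TC average: {implied_tc_average:.1%}")
```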
