Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1, 5-14, 17, 19, 22, and 25 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Robinson et al. (US 10897570 B1, hereinafter “Robinson”).
Regarding claim 1, Robinson teaches an apparatus comprising: at least one processor; (processor 702)
and at least one memory storing instructions that, when executed with the at least one processor, cause the apparatus at least to: (see Column 16, line 52 - Column 17, line 19: memory device 706 and storage device 708 hold instructions and data used by processor 702)
obtain audio content representing at least one audio space; (see Column 11, lines 1-12: headset 110 includes frame 505 that presents media content, such as audio, to the user within the parameters of the room)
enable at least one digital signal processing operation to render the audio content such that the rendered audio content comprises at least one target response for the at least one audio space, wherein the enabling of the at least one digital signal processing operation to render the audio content is controlled based on obtaining the at least one target response for the at least one audio space; (see Fig. 2A; Column 4, lines 1-41: the audio processing system 130, a specific application of digital signal processing, receives room parameters of room 102 from headset 110, determines a room impulse response and acoustic parameters, and then outputs an audio signal.)
and obtain at least one parameter for the at least one digital signal processing operation, and use the obtained at least one parameter to enable the at least one digital signal processing operation to reproduce an acoustic effect with the at least one target response for a user position within the at least one audio space; (see Column 9, line 36 – Column 10, line 21: the audio rendering module 220 of the audio processing system 130 outputs an audio signal and determines the room impulse response based on a target location of an object and a position of the headset within the room, using parameters tracked by the headset 110. When the system obtains the target response, it means that the target response is known.)
or obtain at least one parameter for a neural network, based on the obtained at least one target response and use the neural network to determine at least one parameter for the at least one digital signal processing operation, wherein the determined at least one parameter enables the at least one digital signal processing operation to reproduce an acoustic effect with the at least one target response for the user position within the at least one audio space. (see Column 6, line 5-53: neural network 216 and neural network model store 219 determines room parameters. When the system does not go through the calculating process to obtain the impulse response, it means that the target response is considered unknown and the neural network is used.)
Under the broadest reasonable interpretation, obtaining at least one parameter for both the digital signal processing operation and the neural network based on the obtained at least one target response is not required to be present or operable. This operation only needs to be capable of using either a digital signal processing operation or neural network, not necessarily both, in accordance with the interpretation of “or”.
Regarding claim 5, Robinson teaches wherein the target response comprises target control gains for an output audio signal to enable an audio scene to be rendered to a user based on the user position within the at least one audio space. (see Column 9, line 52 – Column 10, line 21: audio rendering module 220 comprises room impulse response based on target location to output audio signal based on the user position in the room)
Regarding claim 6, Robinson teaches wherein the instructions, when executed with the at least one processor, cause the apparatus to at least one of: receive one or more acoustic effect parameters; or enable the one or more acoustic effect parameters, wherein the neural network is used to obtain the parameters for the at least one digital signal processing operation. (see Column 9, line 36 – Column 10, line 21: the audio rendering module 220 of the audio processing system 130 outputs an audio signal and determines the room impulse response based on a target location of an object and a position of the headset within the room, using parameters tracked by the headset 110) (see Column 6, lines 5-53: neural network 216 and neural network model store 219 determine room parameters)
Under the broadest reasonable interpretation, both receiving acoustic effect parameters and obtaining parameters through the neural network are not required to be present or operable. This operation only needs to be capable of using either process, not necessarily both, in accordance with the interpretation of “or”.
Regarding claim 7, Robinson teaches wherein the one or more acoustic effect parameters comprise information indicative of the at least one target response for an audio signal. (see Fig. 2A; Column 4, lines 1-41: the audio processing system 130, a specific application of digital signal processing, receives room parameters of room 102 from headset 110, determines a room impulse response and acoustic parameters, and then outputs an audio signal.)
Regarding claim 8, Robinson teaches wherein the instructions, when executed with the at least one processor, cause the apparatus to at least one of: receive one or more parameters for the neural network; use the parameters for the neural network to generate the neural network; or obtain the parameters for the digital signal processing operation. (see Column 6, lines 5-53: audio processing uses the neural network to determine parameters, generate the neural network, and obtain parameters for audio processing)
Regarding claim 9, Robinson teaches wherein the one or more parameters for the neural network are received from an encoding device. (see Column 6, lines 5-20: neural network parameters may include a GPU or integrated circuit to implement)
Regarding claim 10, Robinson teaches wherein the instructions, when executed with the at least one processor, cause the apparatus to at least one of: receive information indicative of one or more weights for the neural network; use the information indicative of one or more weights for the neural network to adjust the neural network; or use the adjusted neural network to obtain the parameters for the digital signal processing operation. (see Column 6, lines 5-20; Column 9, lines 19-35: the neural networks can receive information from different room parameters assigned to different weights, the closest match being selected based on the weighting of the room)
Regarding claim 11, Robinson teaches wherein the information indicative of one or more weights for the neural network comprises at least one of: one or more values for one or more weights of the neural network; or one or more references to a stored set of weights for the neural network. (see Column 6, lines 5-20; Column 9, lines 19-35: the neural networks can receive information from different room parameters assigned to different weights, the closest match being selected based on the weighting of the room stored in the neural network model store 219)
Regarding claim 12, Robinson teaches wherein the instructions, when executed with the at least one processor, cause the apparatus to at least one of: update one or more weights for the neural network; use the updated weights to adjust the neural network; or use the adjusted neural network to obtain the parameters for the digital signal processing operation. (see Column 6, lines 5-20; Column 9, lines 19-35: the neural networks can receive information from different room parameters assigned to different weights, the closest match being selected based on the weighting of the room stored in the neural network model store 219 and the updated room impulse response in the database 212)
Regarding claim 13, Robinson teaches wherein the instructions, when executed with the at least one processor, cause the apparatus to determine a position of a user within the at least one audio space. (see Fig. 2A; Column 4, lines 1-41: the audio processing system 130, a specific application of digital signal processing, receives room parameters of room 102 from headset 110, determines a room impulse response and acoustic parameters, and then outputs an audio signal.)
Regarding claim 14, Robinson teaches wherein the instructions, when executed with the at least one processor, cause the apparatus to provide a binaural audio output. (see Column 5, lines 14-49: a binaural room impulse response can be used to generate a binaural audio output)
Regarding claim 17, the claimed limitations are a method claim directly corresponding to apparatus claim 1; therefore, it is rejected for substantially similar reasons as claim 1, as discussed above.
Regarding claim 19, the claimed limitations are a claim directly corresponding to apparatus claim 1; therefore, it is rejected for substantially similar reasons as claim 1, as discussed above.
Regarding claim 22, Robinson teaches wherein the apparatus is at least one of: an audio rendering device; or an encoding device. (audio rendering module 220, GPU)
Regarding claim 25, the claimed limitations are a method claim directly corresponding to apparatus claim 8; therefore, it is rejected for substantially similar reasons as claim 8, as discussed above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 2-4, 18, and 23-24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Robinson et al. (US 10897570 B1, hereinafter “Robinson”).
Regarding claims 2-4, Robinson teaches the idea of a filter in the signal path to perform any one or more of reverberator attenuation filtering, reverberator diffuse-to-direct ratio control, or directivity filtering. (see Column 5, line 64 – Column 6, line 41; Column 7, lines 33-67: filter in the signal path 312; the apparatus applies reverberation time or direct-to-reverberation ratio control. The direct signal path may undergo attenuation, filtering, time delay, and reflection.)
Robinson does not mention that the filtering unit can alternatively be filterbanks. Official notice is taken that it is well known that filterbanks contain parameters, such as filterbank gain parameters and a graphic equalizer filterbank, and perform functions such as reverberation or attenuation filtering.
Therefore, it would have been obvious to one of ordinary skill in the art that the filtering unit of Robinson could have been replaced by filterbanks. Filterbanks are well known in the art; hence, it would have been desirable to use known methods to enhance the device by extending the filtering unit to cover multiple frequency bands through a filterbank, further adding gain control to such a filterbank as a function to manage signal levels.
Regarding claims 18, 23, and 24, note the discussion of the rejections for claims 17 and 2-4; Robinson as modified meets the method limitations for similar reasons as set forth above.
Response to Arguments
Applicant's arguments filed October 28, 2025 have been fully considered but they are not persuasive.
On pages 9-12 of applicant’s remarks, applicant mainly argues that the art of record fails to disclose "obtain at least one parameter for the at least one digital signal processing operation and use the obtained at least one parameter to enable the at least one digital signal processing operation to reproduce an acoustic effect with the at least one target response for a user position within the at least one audio space" and "obtain at least one parameter for a neural network, based on the obtained at least one target response, and use the neural network to determine at least one parameter for the at least one digital signal processing operation, wherein the determined at least one parameter enables the at least one digital signal processing operation to reproduce an acoustic effect with the at least one target response for the user position within the at least one audio space." The Examiner disagrees and maintains, as pointed out in the rejection above, that Robinson clearly teaches obtain at least one parameter for the at least one digital signal processing operation, and use the obtained at least one parameter to enable the at least one digital signal processing operation to reproduce an acoustic effect with the at least one target response for a user position within the at least one audio space; (see Column 9, line 36 – Column 10, line 21: the audio rendering module 220 of the audio processing system 130 outputs an audio signal and determines the room impulse response based on a target location of an object and a position of the headset within the room, using parameters tracked by the headset 110. When the system obtains the target response, it means that the target response is known.)
or obtain at least one parameter for a neural network, based on the obtained at least one target response, and use the neural network to determine at least one parameter for the at least one digital signal processing operation, wherein the determined at least one parameter enables the at least one digital signal processing operation to reproduce an acoustic effect with the at least one target response for the user position within the at least one audio space. (see Column 6, lines 5-53: neural network 216 and neural network model store 219 determine room parameters. When the system does not go through the calculating process to obtain the impulse response, it means that the target response is considered unknown and the neural network is used.)
Under the broadest reasonable interpretation, obtaining at least one parameter for both the digital signal processing operation and the neural network based on the obtained at least one target response is not required to be present or operable. This operation only needs to be capable of using either a digital signal processing operation or neural network, not necessarily both, in accordance with the interpretation of “or”.
Firstly, the acoustic analysis module uses the room parameters to reference the room impulse response database and retrieve a room impulse response. Because the retrieved impulse response governs the acoustic rendering applied to the signal, the room parameters function as control inputs that determine the signal processing configuration. Accordingly, they operate as parameters of the digital signal processing system by determining which acoustic transformation is applied.
Secondly, although the room impulse response is ultimately derived from the target response, the neural network can operate on the available target response to estimate or infer room characteristics. The inferred parameters can then be used to obtain the room impulse response. Furthermore, the system only requires at least one parameter to operate when the target response is known or unknown, so it does not depend on having both the target response and the room impulse response fully obtained simultaneously. Therefore, the sequence does not prevent the neural network from operating on the target response.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNABELLE KANG whose telephone number is (571)270-3403. The examiner can normally be reached Monday-Thursday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANNABELLE KANG/Examiner, Art Unit 2695
/VIVIAN C CHIN/Supervisory Patent Examiner, Art Unit 2695