Prosecution Insights
Last updated: April 19, 2026
Application No. 18/692,741

SYSTEMS AND METHODS FOR EFFICIENT AND ACCURATE VIRTUAL ACOUSTIC RENDERING

Filed: Mar 15, 2024
Examiner: SELLERS, DANIEL R
Art Unit: 2694
Tech Center: 2600 — Communications
Assignee: UNIVERSITY OF LOUISVILLE RESEARCH FOUNDATION, INC.
Status: Non-Final OA, round 1 (§103)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
Grant Probability With Interview: 84%

Examiner Intelligence

Career Allow Rate: 67%, above average (401 granted / 595 resolved; +5.4% vs TC avg)
Interview Lift: +16.9%, strong (resolved cases with an interview vs. without)
Typical Timeline: 3y 6m avg prosecution; 28 currently pending
Career History: 623 total applications across all art units

Statute-Specific Performance

§101: 5.9% (-34.1% vs TC avg)
§103: 63.6% (+23.6% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§112: 6.8% (-33.2% vs TC avg)

Tech Center averages are estimates • Based on career data from 595 resolved cases

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Audfray et al. (US 2020/0112815 A1, hereafter Audfray) further in view of Kinoshita et al. (US 5,982,903 A, hereafter Kinoshita).
Regarding claim 1, Audfray teaches: “A system for generating a virtual acoustic rendering, comprising: an input connection, configured to receive an input audio signal comprising at least one sound source signal” by teaching networking features (e.g., Wi-Fi capability), audio-visual content memory, and input audio signals for further processing (see Audfray, figure 4, unit 418, figure 5, unit 501, and ¶ 0023, 0029, 0039, and 0043-0044); “an output connection, configured to transmit modified output signals to at least two speakers” by teaching left and right speakers mounted on a wearable head device and an auxiliary unit to send modified audio to the speakers (see Audfray, figure 1, units 100 and 120A-120B, figure 4, units 400A, 400C, 412, 414, 422, and ¶ 0023, 0026, and 0029); “a processor” by teaching one or more processors in the system (see Audfray, figure 4, units 416, 420, and 422 and ¶ 0029); and “a memory having a set of instructions stored thereon” by teaching a non-transitory computer-readable medium that stores instructions for the one or more processors to execute (see Audfray, figure 4, unit 418, ¶ 0029, and claim 31) “which, when executed by the processor, cause the processor to: receive the input audio signal from the input connection” where the one or more processors of the augmented reality system process the received input audio signal (see Audfray, figure 5, unit 501 and ¶ 0033 and 0039).

Audfray teaches the above features, but does not appear to teach the feature to “apply PC weights to the at least one sound source signal of the input audio signal to obtain at least one weighted audio stream, wherein the PC weights were obtained from a principal components analysis of a set of head-related transfer functions (HRTFs)”, where PC weights are interpreted as “Principal Component weights”.
Kinoshita teaches a method for construction of acoustic transfer functions measured at both ears for a large number of subjects and using principal component analysis of the acoustic transfer functions to reduce the storage requirement of a large set of acoustic transfer functions (see Kinoshita, abstract, column 4, line 59 - column 5, line 27, and column 21, lines 19-43). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify Audfray with the teachings of Kinoshita for the purpose of reducing the storage requirements for a large number of acoustic transfer functions (see Audfray, figure 4, unit 425 in view of Kinoshita, column 4, lines 59-67 and column 21, lines 19-43).

Therefore, the combination of Audfray and Kinoshita makes obvious the features to: “apply PC weights to the at least one sound source signal of the input audio signal to obtain at least one weighted audio stream, wherein the PC weights were obtained from a principal components analysis of a set of head-related transfer functions (HRTFs)” by making it obvious to obtain principal component (PC) weights from a principal components analysis of a set of HRTFs such that the number of weights needed to represent an HRTF in a set of HRTFs is reduced (see Kinoshita, column 5, lines 9-27 and column 8, lines 24-34); “apply a set of PC filters to the at least one weighted audio stream to obtain filtered audio streams, wherein the PC filters were obtained from a principal components analysis of the set of HRTFs” by making it obvious to filter the audio stream with the weights and filters to obtain filtered audio streams (see Audfray, figure 5, units 503 and 540 in view of Kinoshita, column 11, line 48 - column 12, line 11 and figure 7, units 23HR, 23HL, and 24-27); “sum the filtered audio streams into at least two output channels” by teaching the mixing module to sum several virtual sound sources and the filtered output audio streams output by the HRTF filter bank are summed for a left and right output (see Audfray, figure 5, units 530, 540, 550, and 560 and ¶ 0040-0041 and 0044); and “transmit the at least two output channels for playback by the at least two speakers, to generate a virtual acoustic rendering to a listener” by teaching the output of the left and right output to left and right speakers mounted on the wearable head device, such as headphones, earbuds, etc. (see Audfray, figure 1, units 100 and 120A-120B, figure 4, units 400A, 400C, 412, 414, 422, and ¶ 0023, 0026, 0029, and 0033).

Regarding claim 2, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “system of claim 1, wherein the instructions further cause the processor to apply a time delay to the at least one sound source signal, prior to applying the PC weights” by teaching delays for far-field and/or near-field sources (see Audfray, ¶ 0035-0039 and 0083-0084, figure 5, unit 502, figure 12, units 1240A-1240B, and figure 13, units 1340A-1340B).

Regarding claim 4, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “system of claim 1 wherein the set of PC filters comprises a quantity of filters that is less than the HRTFs of the set of HRTFs” by making it obvious to reduce the amount of data by only using the most significant components after applying PCA to the HRTF filters (i.e., the set of PC filters is smaller than the set of HRTF filters) (see Kinoshita, column 7, lines 7-15 and column 9, lines 13-29).

Regarding claim 23, see the preceding rejection with respect to claim 1 above. Similar to claim 1, Audfray does not appear to teach the feature “applying PC weights and PC filters to each of the multiple sound source signals, to result in a set of weighted, filtered channels, the PC weights and PC filters having been derived from a set of device-related transfer functions (DRTFs)”, where PC weights are interpreted as “Principal Component weights”.
Kinoshita teaches a method for construction of acoustic transfer functions measured at both ears for a large number of subjects and using principal component analysis of the acoustic transfer functions to reduce the storage requirement of a large set of acoustic transfer functions (see Kinoshita, abstract, column 4, line 59 - column 5, line 27, and column 21, lines 19-43). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify Audfray with the teachings of Kinoshita for the purpose of reducing the storage requirements for a large number of acoustic transfer functions (see Audfray, figure 4, unit 425 in view of Kinoshita, column 4, lines 59-67 and column 21, lines 19-43).

Therefore, the combination of Audfray and Kinoshita makes obvious: “A method for simulating an acoustic environment of a virtual reality setting comprising: receiving an audio signal comprising multiple sound source signals in an audio environment, the audio environment corresponding to a visual environment to be displayed to a user via virtual reality” by receiving audio signals for virtual, augmented, and/or mixed reality for a combined presentation with displays and speakers mounted on a wearable head device (see Audfray, figure 1, units 100, 110A-110B and 120A-120B, figure 4, units 400A, 400C, 408, 410, 412, 414, 420, 422, 424, and 426, and ¶ 0023, 0026, and 0029); “applying PC weights and PC filters to each of the multiple sound source signals, to result in a set of weighted, filtered channels, the PC weights and PC filters having been derived from a set of device-related transfer functions (DRTFs)” by making it obvious to obtain principal component (PC) weights from a principal components analysis of a set of HRTFs such that the number of weights needed to represent an HRTF in a set of HRTFs is reduced (see Kinoshita, column 5, lines 9-27 and column 8, lines 24-34), where it is obvious to apply these teachings to DRTFs, such as HRTFs with included EQ filters (see Audfray, ¶ 0084); “summing the weighted, filtered channels into at least two outputs” by teaching the mixing module to sum several virtual sound sources and the filtered output audio streams output by the HRTF filter bank are summed for a left and right output (see Audfray, figure 5, units 530, 540, 550, and 560 and ¶ 0040-0041 and 0044); and “rendering a simulated audio environment to a listener via at least two speakers” by teaching the output of the left and right output to left and right speakers mounted on the wearable head device, such as headphones, earbuds, etc. (see Audfray, figure 1, units 100 and 120A-120B, figure 4, units 400A, 400C, 412, 414, 422, and ¶ 0023, 0026, 0029, and 0033).

Claims 3 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Audfray and Kinoshita as applied to claim 2 above, and further in view of Chen et al. (US 5,500,900 A, hereafter Chen).

Regarding claim 3, see the preceding rejection with respect to claim 2 above. The combination makes obvious the system of claim 2 where time delays, such as the ITD delays, are set according to direction information (see Audfray, ¶ 0035-0039 and figure 5, unit 502). However, the combination of Audfray and Kinoshita does not appear to teach the features where “the ITD delays having been obtained from a principal components analysis of the HRTFs”.

Chen teaches methods and apparatus for producing directional sound (see Chen, abstract and figures 1-4). Herein, Chen teaches a method to determine basic filters and weights using eigenvector analysis of measured HRTFs (see Chen, column 3, line 50 - column 4, line 23, column 5, lines 49-65, and figure 4).
It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the combination of Audfray and Kinoshita with the teachings of Chen for the purpose of trying to improve the sound quality of the reduced set of HRTFs by using a spline model (see Chen, column 4, line 7 - column 6, line 15 and figure 4, steps 33-36).

Therefore, the combination of Audfray, Kinoshita, and Chen makes obvious the “system of claim 2 wherein each time delay is selected from a set of ITD delays according to directional information of a source signal, the ITD delays having been obtained from a principal components analysis of the HRTFs, for each direction in which an HRTF was measured” by teaching the ITD delays (see Audfray, ¶ 0035-0039 and figure 5, unit 502), and by making obvious the PCA of the HRTFs to determine the ITD, where it is obvious to remove the ITDs before modeling and apply them separately when using HRTF filters (see Audfray, figure 5, unit 502 in view of Chen, column 7, lines 6-21 and figure 5a, units 130-132).

Regarding claim 10, see the preceding rejection with respect to claims 1 and 3 above. The combination of Audfray and Kinoshita makes obvious the system of claim 1 using HRTFs, but does not appear to teach the feature wherein “the set of HRTFs is transformed into a set of head-related impulse responses (HRIRs) prior to principal components analysis”. For the same reasons as stated above with respect to claim 3, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the combination of Audfray and Kinoshita with the teachings of Chen for the purpose of trying to improve the sound quality of the reduced set of HRTFs by using a spline model (see Chen, column 4, line 7 - column 6, line 15 and figure 4, steps 33-36).
Therefore, the combination of Audfray, Kinoshita, and Chen makes obvious the “system of claim 1 wherein the set of HRTFs is transformed into a set of head-related impulse responses (HRIRs) prior to principal components analysis” where Chen teaches that it is obvious to perform the analysis in the time domain, because the determined weights are not dependent on frequency (see Chen, column 6, line 56 - column 7, line 5).

Regarding claim 11, see the preceding rejection with respect to claims 1 and 3 above. The combination of Audfray and Kinoshita makes obvious the system of claim 1 wherein the principal components analysis is used to generate the PC filters from the set of HRTFs. However, the combination does not appear to teach that the analysis is specifically “performed on combined real and imaginary components of the set of HRTFs, to generate the PC filters in the frequency domain” (see Kinoshita, column 7, lines 26-42). For the same reasons as stated above with respect to claim 3, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the combination of Audfray and Kinoshita with the teachings of Chen for the purpose of trying to improve the sound quality of the reduced set of HRTFs by using a spline model (see Chen, column 4, line 7 - column 6, line 15 and figure 4, steps 33-36).

Therefore, the combination of Audfray, Kinoshita, and Chen makes obvious the “system of claim 1 wherein the principal components analysis to generate the PC filters is performed on combined real and imaginary components of the set of HRTFs, to generate the PC filters in the frequency domain” by making it obvious to perform the PCA in the frequency domain using the FFT, wherein eigenvalue analysis is performed on the HRTFs in the frequency domain and the eigenvectors and eigenvalues are determined in part using complex conjugate transposition (see Chen, column 4, lines 39-62).
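For readers unfamiliar with the technique at issue in claims 11-12, the operation described there (a principal components analysis over the combined real and imaginary components of a complex HRTF set, with the resulting frequency-domain PC filters later converted to time-domain FIR filters) can be sketched in a few lines of NumPy. Everything below is illustrative: the shapes, names, and random stand-in data are assumptions, not values taken from the cited references.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-in for a measured HRTF set:
# n_dirs directions x n_bins frequency bins, complex-valued.
n_dirs, n_bins = 72, 128
hrtfs = rng.standard_normal((n_dirs, n_bins)) + 1j * rng.standard_normal((n_dirs, n_bins))

# "Combined real and imaginary components": stack the two halves so an
# ordinary real-valued PCA (computed here via the SVD) applies.
X = np.hstack([hrtfs.real, hrtfs.imag])          # (n_dirs, 2 * n_bins)
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)

# Keep only the most significant components (the storage-reduction
# rationale the rejection relies on).
k = 8
pc_weights = U[:, :k] * s[:k]                    # k weights per direction
pc_basis = Vt[:k]                                # k basis vectors

# Recombine the halves into k complex frequency-domain PC filters.
pc_filters = pc_basis[:, :n_bins] + 1j * pc_basis[:, n_bins:]

# Claim 12's frequency-to-time step: an inverse FFT per filter (treating
# each row as the half-spectrum of a real FIR filter).
fir_filters = np.fft.irfft(pc_filters, axis=1)   # (k, 2 * (n_bins - 1)) taps

# Each HRTF is now approximated by k scalars instead of n_bins complex bins.
approx = mean + pc_weights @ pc_basis
```

Each direction's HRTF is reconstructed as a weighted sum of the shared basis, which is what lets a renderer store k filters plus small per-direction weight vectors rather than the full measured set.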
Regarding claim 12, see the preceding rejection with respect to claim 11 above. The combination makes obvious the “system of claim 11 wherein frequency-domain PC filters resulting from the principal components analysis are transformed to time-domain FIR filters” by teaching that the filtering is performed in the time domain, which makes obvious the use of time-domain FIR or IIR filters (see Chen, column 6, line 56 - column 7, line 5).

Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Shennib (US 5,825,894 A) further in view of Kinoshita.

Regarding claim 14, Shennib teaches: “A method for allowing a listener to hear an effect of hearing aids in a simulated environment, comprising: receiving an audio signal comprising a multiple sound source signals in an audio environment” (see Shennib, column 9, line 28 - column 10, line 14, where the system provides simulated hearing aid parameters and acoustic models to provide various listening conditions; see column 22, lines 8-13, column 22, line 66 - column 23, line 7, and figure 25, where the system provides a spatialization mode to present audio sources at chosen spatial positions; and see column 26, line 50 - column 27, line 37, figure 23, step 267, figure 27, and figure 35, unit 107, where digital audio files are used to present multiple sound sources in the simulated audio environment). Herein, Shennib teaches that sets of HRTFs are used to process the multiple sound sources so that a user perceives the sounds in an audio environment and to process the sounds as they would be heard through the hearing aid device (see Shennib, column 15, lines 12-48 and column 16, lines 14-32).
However, Shennib does not appear to teach the feature for “applying PC weights and PC filters to each of the sound source signals to result in a set of weighted, filtered channels, wherein some of the PC weights and PC filters are based upon a set of HRTFs and some of the PC weights and PC filters are based upon a set of HARTFs”, where PC weights are interpreted as “Principal Component weights”.

Kinoshita teaches a method for construction of acoustic transfer functions measured at both ears for a large number of subjects and using principal component analysis of the acoustic transfer functions to reduce the storage requirement of a large set of acoustic transfer functions (see Kinoshita, abstract, column 4, line 59 - column 5, line 27, and column 21, lines 19-43). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify Shennib with the teachings of Kinoshita for the purpose of reducing the storage requirements for a large number of acoustic transfer functions (see Shennib, column 15, lines 12-48 and column 16, lines 14-50, in view of Kinoshita, column 4, lines 59-67 and column 21, lines 19-43).
Therefore, the combination of Shennib and Kinoshita makes obvious the features for: “applying PC weights and PC filters to each of the sound source signals to result in a set of weighted, filtered channels, wherein some of the PC weights and PC filters are based upon a set of HRTFs and some of the PC weights and PC filters are based upon a set of HARTFs” by making it obvious to filter the audio stream with the weights and filters to obtain filtered audio streams according to the desired position of sound sources using HRTFs and according to the HARTFs to simulate a hearing aid (see Shennib, column 27, lines 8-37 and figures 34-35, in view of Kinoshita, column 11, line 48 - column 12, line 11 and figure 7, units 23HR, 23HL, and 24-27); “summing the weighted, filtered channels into at least one unaided output and at least one aided output” by teaching the mixing module to sum several virtual sound sources and the filtered output audio streams output by the HRTF filter bank are summed for a left and right output (see Shennib, column 16, lines 39-50 and figure 34, units 112 and 113); and “rendering a simulated audio environment to the listener, wherein the simulated audio environment can selectively be based upon the unaided output or a combination of the unaided output and the aided output to thereby allow the listener to hear the effect of using a hearing aid or not in the simulated environment” by teaching that the simulated hearing aid mode or unaided mode is selected for the user to compare the differences (see Shennib, column 29, lines 23-42).

Regarding claim 15, see the preceding rejection with respect to claim 14 above.
The combination makes obvious the “method of claim 14 wherein the at least one unaided output and at least one aided output are transmitted to one or more hearing aids for playback to the listener” where Shennib makes it obvious to provide unaided and aided outputs to the hearing aid (see Shennib, column 28, lines 38-59, column 29, lines 23-42, and figure 36, units 350-351).

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shennib and Kinoshita as applied to claim 14 above, and further in view of Audfray.

Regarding claim 17, see the preceding rejection with respect to claim 14 above. The combination of Shennib and Kinoshita makes obvious the method of claim 14, but does not appear to teach the features “further comprising applying a time delay to the multiple sound source signals, prior to applying the PC weights”. Audfray teaches near-field sound source rendering, where HRTFs are used to filter sound sources so that they appear at virtual speaker positions (see Audfray, abstract). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the combination of Shennib and Kinoshita with the teachings of Audfray for the purpose of providing virtual sound positioning for presenting audio signals in a mixed reality environment (see Audfray, ¶ 0002).

Therefore, the combination of Shennib, Kinoshita, and Audfray makes obvious the “method of claim 14, further comprising applying a time delay to the multiple sound source signals, prior to applying the PC weights” by teaching delays for far-field and/or near-field sources (see Audfray, ¶ 0035-0039 and 0083-0084, figure 5, unit 502, figure 12, units 1240A-1240B, and figure 13, units 1340A-1340B).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shennib, Kinoshita, and Audfray as applied to claim 17 above, and further in view of Chen.
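The time-delay handling discussed for claims 17-18 (select a per-direction interaural time difference, apply it to the source signal before the PC weights, with the delays factored out of the responses before modeling) can be illustrated with a small sketch. The cross-correlation estimator and all names here are assumptions chosen for illustration, not the method of any cited reference.

```python
import numpy as np

def itd_samples(hrir_l, hrir_r):
    """Lag of the right-ear impulse response relative to the left, in
    samples (positive when the right ear receives the sound later),
    estimated from the cross-correlation peak."""
    xcorr = np.correlate(hrir_r, hrir_l, mode="full")
    return int(np.argmax(xcorr)) - (len(hrir_l) - 1)

def apply_delay(signal, n):
    """Delay a signal by n >= 0 samples, keeping its length."""
    return np.concatenate([np.zeros(n), signal])[: len(signal)]

# Constructed example: a right-ear response lagging the left by 5 samples.
rng = np.random.default_rng(2)
hrir_l = rng.standard_normal(64)
hrir_r = apply_delay(hrir_l, 5)
itd = itd_samples(hrir_l, hrir_r)      # 5 for this constructed pair

# The per-direction delay is applied to the source signal first; the
# delay-free responses are what the PC weights and filters then model.
source = rng.standard_normal(256)
right_input = apply_delay(source, itd)
```

Removing the ITDs before the PCA and reapplying them as plain sample delays keeps the basis filters free of direction-dependent time shifts, which is the separation the rejection attributes to Chen.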
Regarding claim 18, see the preceding rejection with respect to claim 17 above. The combination of Shennib, Kinoshita, and Audfray makes obvious the method of claim 17, where time delays, such as the ITD delays, are set according to direction information (see Audfray, ¶ 0035-0039 and figure 5, unit 502). However, the combination of Shennib, Kinoshita, and Audfray does not appear to teach the features where “the ITD delays having been obtained from a principal components analysis of the HRTFs”.

Chen teaches methods and apparatus for producing directional sound (see Chen, abstract and figures 1-4). Herein, Chen teaches a method to determine basic filters and weights using eigenvector analysis of measured HRTFs (see Chen, column 3, line 50 - column 4, line 23, column 5, lines 49-65, and figure 4). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the combination of Shennib, Kinoshita, and Audfray with the teachings of Chen for the purpose of trying to improve the sound quality of the reduced set of HRTFs by using a spline model (see Chen, column 4, line 7 - column 6, line 15 and figure 4, steps 33-36).

Therefore, the combination of Shennib, Kinoshita, Audfray, and Chen makes obvious the “method of claim 17 wherein each time delay is selected from a set of ITD delays according to directional information of a sound source signal, the ITD delays having been obtained from a principal components analysis of the set of HRTFs, for each direction in which an HRTF was measured” by teaching the ITD delays (see Audfray, ¶ 0035-0039 and figure 5, unit 502), and by making obvious the PCA of the HRTFs to determine the ITD, where it is obvious to remove the ITDs before modeling and apply them separately when using HRTF filters (see Audfray, figure 5, unit 502 in view of Chen, column 7, lines 6-21 and figure 5a, units 130-132).

Claims 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shennib and Kinoshita as applied to claim 14 above, and further in view of Chen.

Regarding claim 19, see the preceding rejection with respect to claim 14 above. The combination of Shennib and Kinoshita makes obvious the method of claim 14 using HRTFs and HARTFs, but does not appear to teach the feature wherein “the set of HRTFs and the set of HARTFs are transformed into a set of head-related impulse responses (HRIRs) and hearing aid-related impulse responses (HARIRs), respectively, prior to a principal components analysis”. Chen teaches methods and apparatus for producing directional sound (see Chen, abstract and figures 1-4). Herein, Chen teaches a method to determine basic filters and weights using eigenvector analysis of measured HRTFs (see Chen, column 3, line 50 - column 4, line 23, column 5, lines 49-65, and figure 4). It would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the combination of Shennib and Kinoshita with the teachings of Chen for the purpose of trying to improve the sound quality of the reduced set of HRTFs by using a spline model (see Chen, column 4, line 7 - column 6, line 15 and figure 4, steps 33-36).

Therefore, the combination of Shennib, Kinoshita, and Chen makes obvious the “method of claim 14 wherein the set of HRTFs and the set of HARTFs are transformed into a set of head-related impulse responses (HRIRs) and hearing aid-related impulse responses (HARIRs), respectively, prior to a principal components analysis” where Chen teaches that it is obvious to perform the analysis in the time domain, because the determined weights are not dependent on frequency (see Chen, column 6, line 56 - column 7, line 5).

Regarding claim 20, see the preceding rejection with respect to claim 14 above.
The combination of Shennib and Kinoshita makes obvious the method of claim 14 wherein the principal components analysis is used to generate the PC filters from the set of HRTFs and HARTFs. However, the combination does not appear to teach that the analysis specifically involves “performing a principal components analysis to generate the PC filters on combined real and imaginary components of the set of HRTFs and the set of HARTFs, to generate the PC filters in the frequency domain”. For the same reasons as stated above with respect to claim 19, it would have been obvious to one of ordinary skill in the art at the time of the effective filing date to modify the combination of Shennib and Kinoshita with the teachings of Chen for the purpose of trying to improve the sound quality of the reduced set of HRTFs by using a spline model (see Chen, column 4, line 7 - column 6, line 15 and figure 4, steps 33-36).

Therefore, the combination of Shennib, Kinoshita, and Chen makes obvious the “method of claim 14 further comprising performing a principal components analysis to generate the PC filters on combined real and imaginary components of the set of HRTFs and the set of HARTFs, to generate the PC filters in the frequency domain” by making it obvious to perform the PCA in the frequency domain using the FFT, wherein eigenvalue analysis is performed on the HRTFs in the frequency domain and the eigenvectors and eigenvalues are determined in part using complex conjugate transposition (see Chen, column 4, lines 39-62).

Regarding claim 21, see the preceding rejection with respect to claim 20 above. The combination makes obvious the “method of claim 20 wherein frequency-domain PC filters resulting from the principal components analysis are transformed to time-domain FIR filters” by teaching that the filtering is performed in the time domain, which makes obvious the use of time-domain FIR or IIR filters (see Chen, column 6, line 56 - column 7, line 5).
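Taken together, the signal path recited in claims 1 and 14 (weight each source, mix the weighted streams, run a small bank of shared PC filters, and sum into two output channels) reduces to a few matrix operations. The sketch below uses random stand-ins for the PCA outputs; all shapes and names are illustrative assumptions, not values from the application or the cited references.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical PCA outputs: k shared basis (PC) filters plus scalar
# weights per direction and per ear. Random stand-ins here; in practice
# these would come from a PCA of a measured HRTF/HRIR (or DRTF) set.
k, n_dirs, taps = 4, 36, 64
pc_filters = rng.standard_normal((k, taps))
weights_l = rng.standard_normal((n_dirs, k))
weights_r = rng.standard_normal((n_dirs, k))

def render(sources, directions):
    """Weight each source, mix the weighted streams across sources, run
    the k shared PC filters once per ear, and sum into two channels."""
    mix_l = weights_l[directions].T @ sources    # (k, n_samples) weighted streams
    mix_r = weights_r[directions].T @ sources
    out_l = sum(np.convolve(m, f) for m, f in zip(mix_l, pc_filters))
    out_r = sum(np.convolve(m, f) for m, f in zip(mix_r, pc_filters))
    return out_l, out_r

sources = rng.standard_normal((3, 512))          # three sound source signals
left, right = render(sources, directions=[0, 9, 18])
```

By linearity this equals convolving each source with its reconstructed per-direction response (`weights[d] @ pc_filters`) and summing, but it costs only k convolutions per ear no matter how many sources play, which is the efficiency argument behind the PC structure.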
Allowable Subject Matter

Claims 5-7, 9, and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Chalupper et al. (US 2004/0218771 A1, hereafter Chalupper) teaches methods for production of an approximated partial transfer function (see Chalupper, abstract, figures 1-4, and ¶ 0049-0053 and 0059); and Faure et al. (US 2009/0067636 A1, hereafter Faure) teaches optimization of binaural sound spatialization based on multichannel encoding (see Faure, abstract, figures 1 and 8-9, and ¶ 0023-0024 and 0052-0061).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daniel R Sellers whose telephone number is (571)272-7528. The examiner can normally be reached Mon - Fri 10:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fan S Tsang, can be reached at (571)272-7547. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Daniel R Sellers/
Primary Examiner, Art Unit 2694

Prosecution Timeline

Mar 15, 2024
Application Filed
Dec 24, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604151: COMPUTER SYSTEM FOR PROCESSING AUDIO CONTENT AND METHOD THEREOF. Granted Apr 14, 2026 (2y 5m to grant).
Patent 12562144: ACOUSTIC ECHO CANCELLATION UNIT. Granted Feb 24, 2026 (2y 5m to grant).
Patent 12556879: SHARED POINT OF VIEW. Granted Feb 17, 2026 (2y 5m to grant).
Patent 12556190: Startup Calibration and Digital Temperature Compensation for an Open-Loop VCO Based ADC Architecture. Granted Feb 17, 2026 (2y 5m to grant).
Patent 12532139: AUDIO SIGNAL PROCESSING METHOD AND AUDIO SIGNAL PROCESSING APPARATUS. Granted Jan 20, 2026 (2y 5m to grant).
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 84% (+16.9%)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 595 resolved cases by this examiner. Grant probability derived from career allow rate.
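The headline figures above follow from simple arithmetic on the reported counts; a quick check (assuming, as the pairing of 67% and 84% suggests, that the interview lift is additive in percentage points):

```python
# Counts and lift as reported on this page.
granted, resolved = 401, 595
interview_lift_pp = 16.9                          # percentage points

allow_rate_pct = 100 * granted / resolved
print(round(allow_rate_pct))                      # 67: Grant Probability
print(round(allow_rate_pct + interview_lift_pp))  # 84: With Interview
```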
