Prosecution Insights
Last updated: April 19, 2026
Application No. 18/277,065

SOUND SOURCE SEPARATION APPARATUS, SOUND SOURCE SEPARATION METHOD, AND PROGRAM

Non-Final OA — §101, §103
Filed: Aug 12, 2023
Examiner: QUIGLEY, KYLE ROBERT
Art Unit: 2857
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Nippon Telegraph and Telephone Corporation
OA Round: 1 (Non-Final)
Grant Probability: 54% (Moderate); 87% with interview
Expected OA Rounds: 1-2
Time to Grant: 3y 10m

Examiner Intelligence

Career Allow Rate: 54% (254 granted / 466 resolved), -13.5% vs TC avg
Interview Lift: +32.7% across resolved cases with interview (a strong lift)
Typical Timeline: 3y 10m average prosecution; 72 applications currently pending
Career History: 538 total applications across all art units

Statute-Specific Performance

§101: 20.7% (-19.3% vs TC avg)
§103: 43.7% (+3.7% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 19.9% (-20.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 466 resolved cases.

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite the abstract idea of a mathematical algorithm (i.e., use of the recited separation matrix) for estimating acoustic sources collected by a microphone array. This judicial exception is not integrated into a practical application because no improvement to the operation of the underlying microphone array is realized through performance of the algorithm. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the recited microphone array must be present for gathering the data needed to implement the algorithm. The recited computer component elements amount to the recitation of the components of a general-purpose computer for implementing the algorithm and do not serve to amount to the recitation of significantly more than the abstract idea itself (see Alice Corp. v. CLS Bank International, 573 U.S. 208 (2014)).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Zia et al. (CN 110554357 A) [hereinafter "Xia"] and Unknown Author, MathWorks, Sensor spatial covariance matrix, Sept. 29, 2020 (obtained using the WayBack Machine Internet Archive by the Examiner on 11/5/2025; available at https://web.archive.org/web/20200929112615/https://www.mathworks.com/help/phased/ref/sensorcov.html) [hereinafter "MathWorks"].

Regarding Claims 1, 4, and 5, Xia discloses a sound source separation method (including a corresponding device comprising a processor and computer program stored on a computer-readable medium [see the last two paragraphs of Page 2]) [Abstract – "The invention claims a sound source locating method and device, wherein the method comprises: the signal received by the microphone array to calculate to obtain the spatial spectrum; determining the number of spectrum peaks of the spatial spectrum, if a spatial spectrum having a plurality of peaks.
using a fixed beam former to form a beam in a plurality of different directions corresponding to the microphone array, wherein the beam of said plurality of different directions at least comprises a first directional beam and a second directional beam, calculating the first directional beam energy, the second directional beam energy, and the energy difference between the first directional beam and the second directional beam"] configured to execute operations comprising:

estimating each sound source signal using a separation matrix from an observation signal obtained by collecting a mixed acoustic signal in which a plurality of sound source signals and diffusive noise are mixed by a microphone array formed by a plurality of microphones [Background – "using the microphone array in the real scene, when locating the speaker direction, interference inevitably will receive from the other direction of the, for example, television, music and other interference noise. at the same time due to the limitation of power supply, placing position of the microphone array will be close to the wall, accuracy reflected wave caused by wall is easy to influence the positioning." Page 5, 4th paragraph – "In a further optional embodiment, the separation matrix using independent vector analysis to obtain a plurality of microphones corresponding to the receiving signal comprises: based on the short time Fourier transform modeling the signal received by the microphone array is X(t, f); using independent vector analysis calculating separating matrix W(t, f), the filtered signal received by the microphone array to obtain the estimation signal Y(t, f) of the sound source signal, wherein Y(t, f) = W(t, f) X(t, f), the estimation signal to wake-up modules in the device, and determining the separating matrix corresponding to the wake-up signal, so as to more quickly determine the separation matrix corresponding to the new mode in the above way, thereby facilitating calculating a spatial spectrum."], wherein the separation matrix is configured to convert steering vectors from each sound source to the microphone into unit vectors [Page 6 – "calculating the spatial correlation matrix of the desired signal using the separation matrix corresponding to the wake-up signal, Rss(t, f) = W1(t, f) X(t, f) X^H(t, f) W1^H(t, f), performing eigenvalue decomposition of Rss(t, f); the vector corresponding to the largest eigenvalue is the signal space Us, the remaining K-1 vectors form the noise space Un; the spatial spectrum function uses the following formula [equation image omitted], wherein d(t, f, θ) represents the steering vector in the θ direction, so that as the direction θ changes, the estimated angle is found from the peak. if the spatial spectrum corresponds to only a single spectral peak, the corresponding angle is output." The Examiner considers this a unit vector because the vector only represents direction.]

Xia discloses that the separation matrix is configured to use a spatial covariance matrix of the diffusive noise [Page 5, 5th paragraph – "calculating a spatial spectrum of the separating matrix corresponding to the wake-up signal comprises: using the separation matrix corresponding to the wake-up signal to calculate the spatial covariance matrix, performing eigenvalue decomposition of the spatial covariance matrix to obtain the maximum eigenvalue, whose corresponding vector is the signal space, the remaining vectors being the noise space; calculating a spatial spectrum based on the signal space and the noise space."], but fails to disclose that the separation matrix is configured to convert a spatial covariance matrix of the diffusive noise into a diagonal matrix.
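To make the math quoted from Xia concrete, here is a minimal NumPy sketch of the chain it describes: source estimation Y(t, f) = W(t, f) X(t, f), the spatial covariance Rss = (W1 X)(W1 X)^H, the eigendecomposition into signal and noise subspaces, and a MUSIC-style spatial-spectrum peak search. This is an editorial illustration only, not code from the record; the random data, the single frequency bin, and the uniform-linear-array geometry are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, T = 4, 200  # microphones, STFT frames (a single frequency bin shown)

# Observation X(t, f) and separation matrix W(t, f); random stand-ins here.
X = rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))
W = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))

# Source estimate, as quoted from Xia: Y(t, f) = W(t, f) X(t, f)
Y = W @ X

# Spatial covariance of the separated signal (W standing in for the
# wake-signal matrix W1 in the quotation): Rss = (W X)(W X)^H
Rss = Y @ Y.conj().T / T

# Eigendecomposition: the eigenvector of the largest eigenvalue spans the
# signal subspace Us; the remaining M-1 eigenvectors span the noise
# subspace Un, the split the quoted passage describes.
vals, vecs = np.linalg.eigh(Rss)     # eigenvalues in ascending order
Us, Un = vecs[:, -1:], vecs[:, :-1]

def music_spectrum(theta, spacing=0.5):
    """MUSIC-style spectrum P(theta) = 1 / (d^H Un Un^H d) for a uniform
    linear array; the geometry is assumed, not taken from the record."""
    d = np.exp(-2j * np.pi * spacing * np.arange(M) * np.sin(theta))
    return 1.0 / np.real(d.conj() @ Un @ Un.conj().T @ d)

# Sweep theta and take the peak ("finding the peak to an estimated angle")
peak_angle = max(np.linspace(-np.pi / 2, np.pi / 2, 181), key=music_spectrum)
```

With random stand-in data the peak is not physically meaningful; the point is only the shape of the computation: covariance, subspace split, then a 1/projection-onto-noise-subspace scan over candidate directions.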
However, MathWorks discloses that a covariance matrix of such a type is an effective manner of representing diffusive noise [Page 3 – "Noise spatial covariance matrix, specified as a non-negative, real-valued scalar, a non-negative, 1-by-N real-valued vector, or an N-by-N, positive definite, complex-valued matrix. In this argument, N is the number of sensor elements. Using a non-negative scalar results in a noise spatial covariance matrix that has identical white noise power values (in watts) along its diagonal and has off-diagonal values of zero. Using a non-negative real-valued vector results in a noise spatial covariance that has diagonal values corresponding to the entries in ncov and has off-diagonal entries of zero. The diagonal entries represent the independent white noise power values (in watts) in each sensor. If ncov is an N-by-N matrix, this value represents the full noise spatial covariance matrix between all sensor elements."].

It would have been obvious to convert the spatial covariance matrix of the diffusive noise into a diagonal matrix in order to effectively represent diffusive noise.
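The three input forms in the MathWorks quotation (scalar, per-sensor vector, full matrix) map onto a diagonal or full noise spatial covariance as sketched below. This helper is an editorial illustration of that mapping only; the function name and signature are invented and are not MathWorks' `sensorcov` API.

```python
import numpy as np

def noise_spatial_cov(ncov, n_sensors):
    """Illustrative helper (not MathWorks' API): build a noise spatial
    covariance matrix from the three input forms the quoted page describes."""
    arr = np.asarray(ncov)
    if arr.ndim == 0:
        # Scalar: identical white-noise power on the diagonal, zeros elsewhere
        return float(arr) * np.eye(n_sensors)
    if arr.ndim == 1:
        # Vector: independent per-sensor noise powers on the diagonal
        return np.diag(arr)
    # N-by-N matrix: full spatial covariance between all sensor elements
    return arr
```

The scalar and vector cases both yield diagonal matrices, which is the representation the obviousness rationale leans on; a full N-by-N matrix passes through unchanged.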
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

CN 117037836 A – Real-time Sound Source Separation Method And Device Based On Signal Covariance Matrix Reconstruction
US 20180366135 A1 – SPATIAL CORRELATION MATRIX ESTIMATION DEVICE, SPATIAL CORRELATION MATRIX ESTIMATION METHOD, AND SPATIAL CORRELATION MATRIX ESTIMATION PROGRAM
US 20110123046 A1 – SIGNAL PROCESSING APPARATUS, SIGNAL PROCESSING METHOD, AND PROGRAM THEREFOR
US 20200152222 A1 – PROCESSING OF SOUND DATA FOR SEPARATING SOUND SOURCES IN A MULTICHANNEL SIGNAL
US 20200411031 A1 – SIGNAL ANALYSIS DEVICE, SIGNAL ANALYSIS METHOD, AND RECORDING MEDIUM
US 20180286423 A1 – AUDIO PROCESSING DEVICE, AUDIO PROCESSING METHOD, AND PROGRAM
US 20110307251 A1 – Sound Source Separation Using Spatial Filtering And Regularization Phases
US 20150256956 A1 – MULTI-MICROPHONE METHOD FOR ESTIMATION OF TARGET AND NOISE SPECTRAL VARIANCES FOR SPEECH DEGRADED BY REVERBERATION AND OPTIONALLY ADDITIVE NOISE
US 20090132245 A1 – Denoising Acoustic Signals Using Constrained Non-Negative Matrix Factorization
US 20110044462 A1 – SIGNAL ENHANCEMENT DEVICE, METHOD THEREOF, PROGRAM, AND RECORDING MEDIUM
Anemuller et al., MULTI-CHANNEL SIGNAL ENHANCEMENT WITH SPEECH AND NOISE COVARIANCE ESTIMATES COMPUTED BY A PROBABILISTIC LOCALIZATION MODEL, IEEE, 2017
Chen et al., An Intelligent Hearing Aid System Based on Real-Time Signal Processing, IEEE, 2014
Dubnov et al., A METHOD FOR DIRECTIONALLY-DISJOINT SOURCE SEPARATION IN CONVOLUTIVE ENVIRONMENT, IEEE, 2004
Epain et al., Spherical Harmonic Signal Covariance and Sound Field Diffuseness, arXiv, 2016
Nikunen et al., Separation of Moving Sound Sources Using Multichannel NMF and Acoustic Tracking, IEEE, 2017

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE ROBERT QUIGLEY, whose telephone number is (313) 446-4879. The examiner can normally be reached 9AM-5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Arleen Vazquez, can be reached at (571) 272-2619. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE R QUIGLEY/
Primary Examiner, Art Unit 2857

Prosecution Timeline

Aug 12, 2023
Application Filed
Dec 05, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601396
PREDICTIVE MODELING OF HEALTH OF A DRIVEN GEAR IN AN OPEN GEAR SET
Granted Apr 14, 2026 · 2y 5m to grant
Patent 12566218
BATTERY PACK MONITORING DEVICE
Granted Mar 03, 2026 · 2y 5m to grant
Patent 12566162
AUTOMATED CONTAMINANT SEPARATION IN GAS CHROMATOGRAPHY
Granted Mar 03, 2026 · 2y 5m to grant
Patent 12523698
Battery Management Apparatus and Method
Granted Jan 13, 2026 · 2y 5m to grant
Patent 12509981
Parametric Attribute of Pore Volume of Subsurface Structure from Structural Depth Map
Granted Dec 30, 2025 · 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 54% (87% with interview, +32.7%)
Median Time to Grant: 3y 10m
PTA Risk: Low
Based on 466 resolved cases by this examiner. Grant probability derived from career allow rate.
