Prosecution Insights
Last updated: April 19, 2026
Application No. 17/339,752

MACHINE LEARNING BASED INTERFERENCE WHITENER SELECTION

Non-Final OA · §101, §103
Filed: Jun 04, 2021
Examiner: HAN, KYU HYUNG
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 43% (Moderate)
OA Rounds: 3-4
To Grant: 4y 6m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 43% (3 granted / 7 resolved; -12.1% vs TC avg)
Interview Lift: +41.7% for resolved cases with interview
Avg Prosecution: 4y 6m (typical timeline)
Total Applications: 37 across all art units (30 currently pending)

Statute-Specific Performance

§101: 38.4% (-1.6% vs TC avg)
§103: 50.9% (+10.9% vs TC avg)
§102: 4.2% (-35.8% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)
Tech Center averages are estimates; based on career data from 7 resolved cases.

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/05/2025 has been entered.

Response to Remarks

Interview Summary

Applicant claims that in the interview, there was agreement that the amendments overcome the cited references. Examiner respectfully disagrees: Examiner clearly stated that Venkatesan may teach the amendment, and Examiner did not explicitly agree that the amendment fully overcomes Xu either. Examiner recalls that Applicant agreed to review Venkatesan during the interview. Furthermore, the proposed amendment in the interview summary is different from the actual amendment filed, which excludes the “computation block”. During the interview, Applicant was advised to define the difference between terms such as resource block, computation block, covariance matrices, and their pluralities. Examiner asserts that agreement was not reached in that interview. See Examiner Interview Summary Record.

Claim Rejections – 35 U.S.C. 101

Applicant’s amendments have been fully considered and they are persuasive. The rejection of claims 1-20 under 35 U.S.C. 101 has been withdrawn.

Claim Rejections – 35 U.S.C. 103

Applicant asserts (pg. 11-12) that the cited references do not teach the amendments "computing, using a first neural network, an output value corresponding to a first resource block (RB), based on the first set of features, the output value being an indication of estimated signal to interference ratio in the first resource block" and "selecting a first covariance matrix, from a plurality of covariance matrices, based on the output value, the selecting of the first covariance matrix comprising selecting a method for computing a covariance matrix for interference whitening" in claim 1. Applicant further asserts, “from the interview,” that the amendments to claim 1 made herein may overcome the Wu, Xu, and Venkatesan references. Examiner respectfully disagrees.

Examiner recalls that during the interview, the attorney focused on the Xu reference not teaching the amended limitations (please see the interview agenda, as it supports this assertion). However, Examiner told the attorney during the interview that the Venkatesan reference may teach the amendments, and that Examiner would have to look deeper into Venkatesan to confirm with certainty. Examiner did not state with certainty that the amendments overcome Xu either. Examiner suggested during the interview that the attorney also look into Venkatesan, as the attorney did not object to Venkatesan during the interview at all. In addition, the proposed amendment in the interview summary is different from the actual amendment filed, which excludes the “computation block”. During the interview, Applicant was advised to define the difference between terms such as resource block, computation block, covariance matrices, and their pluralities.

The rest of Applicant’s arguments in the Remarks are a generic statement that the cited references do not teach the amendments. Examiner asserts that the cited references do indeed teach the amended limitations. Please see the rejection below for further details.
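For orientation, the two disputed claim 1 limitations describe a per-resource-block pipeline: a neural network maps extracted features to an output value (an estimated signal-to-interference ratio for the RB), and that value drives selection of a method for computing the whitening covariance matrix. The following is a minimal editorial sketch only, not the applicant's or any reference's implementation; the one-layer model, threshold value, and method names are all hypothetical.

```python
# Editorial sketch of the amended claim 1 limitations (illustrative only).
# The dense-layer "neural network", the 10 dB threshold, and the method
# names "per_rb_estimate"/"wideband_average" are assumptions, not from the
# application or the cited references.
import numpy as np

def nn_output_value(features: np.ndarray, w: np.ndarray, b: np.ndarray) -> float:
    """Stand-in 'first neural network': one dense layer with ReLU, reduced to
    a scalar interpreted as the estimated SIR (dB) for the first RB."""
    hidden = np.maximum(w @ features + b, 0.0)
    return float(hidden.mean())

def select_covariance_method(output_value: float, threshold_db: float = 10.0) -> str:
    """'Selecting a first covariance matrix' here means selecting the METHOD
    used to compute it, keyed on the NN output value."""
    return "per_rb_estimate" if output_value >= threshold_db else "wideband_average"

rng = np.random.default_rng(0)
features = rng.normal(size=8)                    # first set of features for RB 0
w, b = rng.normal(size=(4, 8)), rng.normal(size=4)
sir_estimate = nn_output_value(features, w, b)   # output value for the first RB
method = select_covariance_method(sir_estimate)  # covariance-computation method
assert method in ("per_rb_estimate", "wideband_average")
```

The point of the sketch is only the claimed data flow: features in, scalar output value out, selection keyed on that value.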
The foregoing applies to all independent claims and their dependent claims.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step for”) in a claim with functional language creates a rebuttable presumption that the claim element is to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is invoked is rebutted when the function is recited with sufficient structure, material, or acts within the claim itself to entirely perform the recited function.

Absence of the word “means” (or “step for”) in a claim creates a rebuttable presumption that the claim element is not to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is not invoked is rebutted when the claim element recites function but fails to recite sufficiently definite structure, material or acts to perform that function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, except as otherwise indicated in an Office action.

The following limitations are interpreted as invoking 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

Claim 17: “and means for processing, the means for processing being configured to: receive, through the radio, a signal; extract a first set of features from the received signal; computing, using a first neural network, an output value corresponding to a first resource block (RB), based on the first set of features, the output value being an indication of estimated signal to interference ratio in the first resource block; select a first covariance matrix, from a plurality of covariance matrices, based on the output value, the selecting of the first covariance matrix comprising selecting a method for computing a covariance matrix for interference whitening; and improving the received signal by performing interference whitening on the received signal based on the selected first covariance matrix”

The corresponding structure in the disclosure for performing the claimed processing is any combination of hardware, firmware, and software employed to process data or digital signals (see detailed description [0062]). Therefore, the “means for processing…” is interpreted as a generic processor with software enabling it to receive signals, extract features from said signal, make a selection via neural network, and select a covariance matrix.

Claim 20: “means for processing is further configured to: extract a second set of features from the signal; and computing, using a second neural network, a second output value based on the second set of features, wherein the first set of features corresponds to a first resource block, and the second set of features corresponds to a second resource block.”

The corresponding structure in the disclosure for performing the claimed processing is any combination of hardware, firmware, and software employed to process data or digital signals (see detailed description [0062]). Therefore, the “means for processing…” is interpreted as a generic processor with software enabling it to extract features from a signal and make a selection via neural network.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections – 35 U.S.C. § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7-14, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wu et al. (EP 3739356 A1), hereinafter known as Wu, in view of Fang et al. (US 20210218483 A1), hereinafter known as Fang, in view of Xu et al. (US 20220101831 A1), hereinafter known as Xu, and further in view of Venkatesan et al. (“An iterative algorithm for computing a spatial whitening filter”), hereinafter known as Venkatesan.

Regarding independent claim 1, Wu teaches:

A method, comprising: receiving a signal; (Wu [Page 6, Lines 1-2]: “The receiver is configured for: receiving the wireless signal through the wireless multipath channel, extracting a plurality of time series of channel information (TSCI) of the wireless multipath channel from the wireless signal.” Wu teaches a receiver that can receive a wireless signal.)

extracting a first set of features from the received signal; (Wu [Page 6, Lines 1-2]: “The receiver is configured for: receiving the wireless signal through the wireless multipath channel, extracting a plurality of time series of channel information (TSCI) of the wireless multipath channel from the wireless signal.” Wu teaches a receiver that can extract a plurality of time-series channel information sets from the signal.)
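The "extracting a first set of features" step can be pictured with a toy example. This is an editorial assumption for illustration only: grouping subcarrier samples into resource blocks and taking simple per-RB power statistics is not Wu's TSCI method, merely a generic stand-in for per-RB feature extraction.

```python
# Toy per-resource-block feature extraction (editorial illustration; the
# RB size of 12 subcarriers and the (mean power, power spread) features
# are assumptions, not taken from Wu or the application).
import numpy as np

def extract_rb_features(rx: np.ndarray, rb_size: int = 12) -> np.ndarray:
    """Split a received symbol vector into resource blocks of `rb_size`
    subcarriers and compute a per-RB (mean power, power spread) pair."""
    n_rb = rx.shape[0] // rb_size
    rbs = rx[: n_rb * rb_size].reshape(n_rb, rb_size)
    power = np.abs(rbs) ** 2
    return np.stack([power.mean(axis=1), power.std(axis=1)], axis=1)

rng = np.random.default_rng(3)
rx = rng.normal(size=48) + 1j * rng.normal(size=48)  # 4 RBs of 12 subcarriers
feats = extract_rb_features(rx)
assert feats.shape == (4, 2)  # one feature pair per resource block
```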
… … …

Wu does not explicitly teach: computing, using a first neural network, an output value corresponding to a first resource block (RB), based on the first set of features, the output value being an indication of estimated signal to interference ratio in the first resource block; … for interference whitening;

However, Fang teaches: computing, using a first neural network, an output value corresponding to a first resource block (RB), based on the first set of features, the output value being an indication of estimated signal to interference ratio in the first resource block; (Fang [¶ 0103]: “calculate a post-signal-to-noise ratio (SINR) value for a sub-band of the wireless communications; determine, using a neural network (NN) and the post-SINR value as input to the NN, a label for each of a plurality of channel quality indicator” Fang teaches that the post-SINR of a resource block, or sub-band of wireless communication, is calculated. This value is used to create the label of the channel quality indicator, which thus reflects the signal to noise/interference ratio.)

… for interference whitening; (Fang [Figure 15]: Fang teaches that the generation of the covariance matrix R is dependent on the whitening matrix W; in the figure, an equation is listed as R = sum((WH)^2(WH)). This shows that the selection of the whitening matrix and the selection of the covariance matrix are thus for interference whitening.)

Wu and Fang are in the same field of endeavor as the present invention, since the references are directed to processing signals and generating a selection of the features using a neural network, and to determining the channel quality indicator of a signal using a neural network and covariances of the signal, respectively.
It would have been obvious, before the effective filing date of the claimed invention, to a person of ordinary skill in the art, to combine making a selection based on features as taught in Wu with making decisions in the neural network using the signal to interference ratio as taught in Fang. Fang provides this additional functionality. As such, it would have been obvious to one of ordinary skill in the art to modify the teachings of Wu to include the teachings of Fang because the combination would allow for decreasing the error of the selection when there are inaccuracies in transferring data through the signal in a network. This has the potential benefit of mitigating the effect of interference in signal processing.

Wu and Fang do not explicitly teach: selecting a first covariance matrix, from a plurality of covariance matrices, based on the output value, the selecting of the first covariance matrix comprising selecting a method for computing a covariance matrix …;

However, Xu teaches: selecting a first covariance matrix, from a plurality of covariance matrices, based on the output value, the selecting of the first covariance matrix comprising selecting a method for computing a covariance matrix …; (Xu ¶ [0032]: “Additionally, replacing the matrix inversion with the GRU-Nets 208A,B may resolve an instability issue during joint training with NNs. MVDR coefficients can be obtained via the GRU-Nets… where the real and imaginary parts of the complex-valued covariance matrix Φ are concatenated together as input to the GRU-Nets 208A,B.” Xu teaches that the selection of a covariance matrix is made when one is being formed through concatenation of real and imaginary components, which generates mask-based minimum variance distortionless response (MVDR) coefficients.
Xu ¶ [0036]: “At 306, the method 300 includes generating a predicted target waveform corresponding to a target speaker from among the one or more speakers by a minimum variance distortionless response function based on the estimated covariance matrices.” Xu teaches a prediction model, which is a selection of features, that is calculated from the covariance matrix above. A plurality of predictions can be made for a plurality of speakers, so there is a plurality of selections.

Xu ¶ [0016]: “The minimum variance distortionless response (MVDR) filters aim to reduce the noise while keeping the target speech undistorted.” Xu teaches that the method is to lower the noise relative to the content speech, which is a form of signal-quality improvement similar to whitening.)

Xu is in the same field of endeavor as the present invention, since it is directed to processing signals and generating a selection of the features using a neural network. It would have been obvious, before the effective filing date of the claimed invention, to a person of ordinary skill in the art, to combine making a selection based on the features as taught in Wu as modified by Fang with using covariance estimates from contiguous resource blocks to discern which selection to make as taught in Xu. Xu provides this additional functionality. As such, it would have been obvious to one of ordinary skill in the art to modify the teachings of Wu as modified by Fang to include the teachings of Xu because the combination would allow for making a selection based on features using covariance estimates from contiguous resource blocks. This has the potential benefit of being able to make predictions on the error in contiguous data, such as audio.

Wu, Fang, and Xu do not explicitly teach: and improving the received signal by performing interference whitening on the received signal based on the selected first covariance matrix.

However, Venkatesan teaches: and improving the received signal by performing interference whitening on the received signal based on the selected first covariance matrix. (Venkatesan [Page 338, Column 1, Paragraph 1]: “iterative algorithm to compute a spatial whitening filter for a given covariance matrix.” Venkatesan teaches an algorithm for a signal-whitening filter based on a given covariance matrix. Venkatesan [Page 342, Column 2, Paragraph 1]: “Through simulation, we demonstrated its usefulness in the context of a multiple-antenna wireless link with spatially colored interference at the receiver.” Venkatesan teaches that this filtering improved the wireless-link signal.)

Venkatesan is in the same field as the present invention, since it is directed to signal whitening using a given covariance matrix. It would have been obvious, before the effective filing date of the claimed invention, to a person of ordinary skill in the art, to combine the selection of the covariance matrix as taught in Wu as modified by Fang and Xu with using the covariance matrix to whiten the signal as taught in Venkatesan. Venkatesan provides this additional functionality. As such, it would have been obvious to one of ordinary skill in the art to modify the teachings of Wu as modified by Fang and Xu to include the teachings of Venkatesan because the combination would allow signals in wireless links to have their noise reduced, or whitened. This has the potential benefit of improving communications between wireless devices, as the signal can be more accurate.

Regarding dependent claim 2, Wu, Fang, Xu, and Venkatesan teach: The method of claim 1,

Xu teaches: wherein the computing of the output value comprises computing the output value based on a plurality of initial covariance estimates, each corresponding to a respective resource block (RB) of a contiguous set of resource blocks.
(Xu ¶ [0035]: “At 304, the method 300 includes estimating covariance matrices of target speech and noise associated with the received audio data based on a gated recurrent unit-based network.” Xu teaches that a plurality of covariance matrices is generated. The audio data can be considered a resource block, and the continuous format of audio can be considered contiguous resource blocks.

Xu ¶ [0032]: “Additionally, replacing the matrix inversion with the GRU-Nets 208A,B may resolve an instability issue during joint training with NNs. MVDR coefficients can be obtained via the GRU-Nets… where the real and imaginary parts of the complex-valued covariance matrix Φ are concatenated together as input to the GRU-Nets 208A,B.” Xu teaches that the selection of a covariance matrix is made when one is being formed through concatenation of real and imaginary components, which generates mask-based minimum variance distortionless response (MVDR) coefficients.

Xu ¶ [0036]: “At 306, the method 300 includes generating a predicted target waveform corresponding to a target speaker from among the one or more speakers by a minimum variance distortionless response function based on the estimated covariance matrices.” Xu teaches a prediction model, which is a selection of features, that is calculated from the covariance matrix above. A plurality of predictions can be made for a plurality of speakers, so there is a plurality of selections.)

The reasons to combine are substantially similar to those of claim 1.

Regarding dependent claim 3, Wu, Fang, Xu, and Venkatesan teach: The method of claim 2,

Xu teaches: wherein the contiguous set of resource blocks comprises all of the resource blocks in a bandwidth part. (Xu ¶ [0035]: “At 304, the method 300 includes estimating covariance matrices of target speech and noise associated with the received audio data based on a gated recurrent unit-based network.” Xu teaches that the audio data is received via a gated recurrent network, which uses bandwidth parts. The continuous audio data necessarily occupies all of some unit of bandwidth, which is a bandwidth part.)

The reasons to combine are substantially similar to those of claim 1.

Regarding dependent claim 4, Wu, Fang, Xu, and Venkatesan teach: The method of claim 2,

Wu teaches: further comprising: extracting a second set of features from the signal; (Wu [Page 6, Lines 1-2]: “The receiver is configured for: receiving the wireless signal through the wireless multipath channel, extracting a plurality of time series of channel information (TSCI) of the wireless multipath channel from the wireless signal.” Wu teaches a receiver that can extract a plurality of time-series channel information sets from the signal.)

and computing, using a second neural network, a second output value based on the second set of features, wherein the first set of features corresponds to a first resource block, and the second set of features corresponds to a second resource block. (Wu ¶ [0092]: “The classifier may be applied to at least one of: each first section of the first time duration of the first TSCI, and/or each second section of the second time duration of the second TSCI, to obtain at least one tentative classification results. Each tentative classification result may be associated with a respective first section and a respective second section.” Wu teaches that a classifier is applied to the second set of time-series features and makes a second selection by obtaining the second tentative classification result. This is based on the second resource block, which is the second set of time-series data.
Wu ¶ [0094]: “A projection for each Cl may be trained using a dimension reduction method based on the training TSCI. The dimension reduction method may comprise at least one of: … neural network, deep neural network … The projection may be applied to at least one of: the training TSCI associated with the at least one event, and/or the current TSCI, for the classifier.” Wu teaches that the dimension reduction for the classification can be done using a neural network, which would necessarily be different from the first neural network.)

The reasons to combine are substantially similar to those of claim 1.

Regarding dependent claim 5, Wu, Fang, Xu, and Venkatesan teach: The method of claim 4,

… … and the selecting of the first covariance matrix comprises selecting a covariance matrix based on a first initial covariance estimate, the first initial covariance estimate corresponding to the first resource block. (Xu ¶ [0035]: “At 304, the method 300 includes estimating covariance matrices of target speech and noise associated with the received audio data based on a gated recurrent unit-based network.” Xu teaches that a plurality of covariance matrices is generated. The audio data can be considered a resource block, and the continuous format of audio can be considered contiguous resource blocks.

Xu ¶ [0032]: “Additionally, replacing the matrix inversion with the GRU-Nets 208A,B may resolve an instability issue during joint training with NNs. MVDR coefficients can be obtained via the GRU-Nets… where the real and imaginary parts of the complex-valued covariance matrix Φ are concatenated together as input to the GRU-Nets 208A,B.” Xu teaches that the selection of a covariance matrix is made when one is being formed through concatenation of real and imaginary components, which generates mask-based minimum variance distortionless response (MVDR) coefficients.
Xu ¶ [0036]: “At 306, the method 300 includes generating a predicted target waveform corresponding to a target speaker from among the one or more speakers by a minimum variance distortionless response function based on the estimated covariance matrices.” Xu teaches a prediction model, which is a selection of features, that is calculated from the covariance matrix above. A plurality of predictions can be made for a plurality of speakers, so there is a plurality of selections.)

Fang teaches: … wherein: the output value is an indication of estimated signal to interference ratio in the first resource block; (Fang ¶ [0048]: “User device 100 circuitry (e.g., baseband processor 110) can calculate a SINR of a specified sub-band of the plurality of sub-bands 606 at blocks 610. When the training process is complete, the user device 100 may have calculated all, or a subset of all, of the post-SINR for each sub-band or resource block (RB) thereof. In some available systems, Mutual Information Effective SNR Mapping (MIESM) can be used to generate the effective SINR for CQI mapping. Then, the best-M method is used to filter the sub-band with best channel status to provide a best MCS at block 612.” Fang teaches generating a model via selection based on the estimated signal to interference ratio.)

the output value corresponds to a signal to interference ratio less than a first threshold; (Fang ¶ [0049]: “FIG. 7 illustrates fields of the circle buffer database 618 according to some aspects. For each MCS, the database 618 stores information 702 for a plurality of sub-bands. Data 704 for each sub-band includes at least of r-dimensional ordered post-SINR 706, where r is determined by the number of RBs used (and is related to system bandwidth). The CRC calibration result 708 is used to verify that the packet error rate (PER) is below a threshold, e.g., below about 10% although aspects are not limited thereto.” Fang teaches that a threshold on the PER is used. As PER and SIR are inversely related, a threshold on the PER is effectively a threshold on the SIR.)

… The reasons to combine are substantially similar to those of claim 1.

Regarding dependent claim 6, Wu, Fang, Xu, and Venkatesan teach: The method of claim 4,

Fang teaches: wherein: the output value is an indication of estimated signal to interference ratio in the first resource block; (Fang ¶ [0048]: “User device 100 circuitry (e.g., baseband processor 110) can calculate a SINR of a specified sub-band of the plurality of sub-bands 606 at blocks 610. When the training process is complete, the user device 100 may have calculated all, or a subset of all, of the post-SINR for each sub-band or resource block (RB) thereof. In some available systems, Mutual Information Effective SNR Mapping (MIESM) can be used to generate the effective SINR for CQI mapping. Then, the best-M method is used to filter the sub-band with best channel status to provide a best MCS at block 612.” Fang teaches generating a plurality of models via selection based on the estimated signal to interference ratios of the plurality of resource blocks.)

the output value corresponds to a signal to interference ratio greater than a first threshold; (Fang ¶ [0049]: “FIG. 7 illustrates fields of the circle buffer database 618 according to some aspects. For each MCS, the database 618 stores information 702 for a plurality of sub-bands. Data 704 for each sub-band includes at least of r-dimensional ordered post-SINR 706, where r is determined by the number of RBs used (and is related to system bandwidth). The CRC calibration result 708 is used to verify that the packet error rate (PER) is below a threshold, e.g., below about 10% although aspects are not limited thereto.” Fang teaches that a threshold on the PER is used. As PER and SIR are inversely related, a threshold on the PER is effectively a threshold on the SIR. Since there are a plurality of sub-bands, there may be a plurality of thresholds.)
the second output value is an indication of estimated signal to interference ratio in the second resource block; (Fang ¶ [0048]: “User device 100 circuitry (e.g., baseband processor 110) can calculate a SINR of a specified sub-band of the plurality of sub-bands 606 at blocks 610. When the training process is complete, the user device 100 may have calculated all, or a subset of all, of the post-SINR for each sub-band or resource block (RB) thereof In some available systems, Mutual Information Effective SNR Mapping (MIESM) can be used to generate the effective SINR for CQI mapping. Then, the best-M method is used to filter the sub-band with best channel status to provide a best MCS at block 612.” Fang teaches generating a plurality of models via selection based on the estimated signal to interference ratios of the plurality of resource blocks.) the second output value corresponds to a signal to interference ratio greater than the first threshold; (Fang ¶ [0049]: “FIG. 7 illustrates fields of the circle buffer database 618 according to some aspects. For each MCS, the database 618 stores information 702 for a plurality of sub-bands. Data 704 for each sub-band includes at least of r-dimensional ordered post-SINR 706, where r is determined by the number of RBs used (and is related to system bandwidth). The CRC calibration result 708 is used to verify that the packet error rate (PER) is below a threshold, e.g., below about 10% although aspects are not limited thereto.” Fang teaches that a threshold on the PER is used. As PER and SIR are inversely causally related, there is effectively a threshold on the SIR. Since there are a plurality of sub-bands, there may be a plurality of thresholds.) 
Xu teaches: the selecting of the first covariance matrix comprises selecting a covariance matrix based on a first initial covariance estimate and on a second initial covariance estimate; (Xu ¶ [0035]: “At 304, the method 300 includes estimating covariance matrices of target speech and noise associated with the received audio data based on a gated recurrent unit-based network.” Xu teaches that there is a plurality of covariance matrices generated. The audio data can be considered as a resource block, and the continuous format of audio can be considered as contiguous resource blocks. Xu ¶ [0032]: “Additionally, replacing the matrix inversion with the GRU-Nets 208A,B may resolve an instability issue during joint training with NNs. MVDR coefficients can be obtained via the GRU-Nets… where the real and imaginary parts of the complex-valued covariance matrix Φ are concatenated together as input to the GRU-Nets 208A,B.” Xu teaches that the selection of a plurality of covariance matrices are made when one is being formed through concatenation of real and imaginary components, which generates mask-based minimum variance distortionless response (MVDR) coefficients. Xu ¶ [0036]: “At 306, the method 300 includes generating a predicted target waveform corresponding to a target speaker from among the one or more speakers by a minimum variance distortionless response function based on the estimated covariance matrices.” Xu teaches a prediction model, which is a selection of features, that is calculated from the covariance matrix above. A plurality of predictions can be made for a plurality of speakers, so there is a plurality of selections.) the first initial covariance estimate corresponds to the first resource block; (Xu ¶ [0035]: “At 304, the method 300 includes estimating covariance matrices of target speech and noise associated with the received audio data based on a gated recurrent unit-based network.” Xu teaches that a plurality of covariance estimates is generated.) 
and the second initial covariance estimate corresponds to the second resource block. (Xu ¶ [0035]: “At 304, the method 300 includes estimating covariance matrices of target speech and noise associated with the received audio data based on a gated recurrent unit-based network.” Xu teaches that a plurality of covariance estimates is generated.) The reasons to combine are substantially similar to those of claim 1. Regarding dependent claim 7, Wu, Xu, and Venkatesan teach: The method of claim 1, Xu teaches: further comprising calculating a first initial covariance estimate, wherein a first feature of the first set of features is based on the first initial covariance estimate. (Xu ¶ [0035]: “At 304, the method 300 includes estimating covariance matrices of target speech and noise associated with the received audio data based on a gated recurrent unit-based network.” Xu teaches that a plurality of covariance matrices is calculated based on the first set of audio data and thus the first set of features.) The reasons to combine are substantially similar to those of claim 1. Regarding dependent claim 8, Wu, Xu, and Venkatesan teach: The method of claim 7, Wu teaches: wherein the first feature includes an eigenvalue of the first initial covariance estimate. (Wu ¶ [0353]: “Therefore, the correlation matrix is used instead, which can be expressed accordingly as … The eigenvalues λ_1, …, λ_M of R are sorted in a non-descending order.” Wu teaches that the correlation matrix, which is effectively a covariance matrix when the standard deviation is known, has eigenvalues. The first feature is based on the covariance matrix, so the first feature includes an eigenvalue of the first initial covariance estimate.) The reasons to combine are substantially similar to those of claim 1.
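A minimal sketch of the eigenvalue feature discussed for claim 8 — estimating a covariance matrix from observations and sorting its eigenvalues in non-descending order, as in the quoted Wu passage. The helper function is hypothetical:

```python
import numpy as np

def covariance_eigen_features(samples):
    """Estimate a sample covariance matrix from row-wise observations and
    return its eigenvalues sorted in non-descending order (illustrative)."""
    cov = np.cov(np.asarray(samples, dtype=float), rowvar=False)
    # eigvalsh is appropriate because a covariance matrix is symmetric.
    return np.sort(np.linalg.eigvalsh(cov))

rng = np.random.default_rng(0)
lams = covariance_eigen_features(rng.standard_normal((200, 3)))
print(lams)  # three non-negative eigenvalues, non-descending
```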
Regarding dependent claim 9, Wu, Xu, and Venkatesan teach: The method of claim 7, Wu teaches: wherein the first feature includes a QR decomposition of the first initial covariance estimate. (Wu ¶ [0116]: “An event may be monitored based on the TSCI. … The task or the wireless smart sensing task may comprise: … eigen-decomposition … other decomposition” Wu teaches that the first feature, which is based on the time series information, includes types of matrix decomposition other than eigen-decomposition, which include QR decomposition.) The reasons to combine are substantially similar to those of claim 1. Regarding dependent claim 10, Wu, Xu, and Venkatesan teach: The method of claim 7, Xu teaches: wherein the first feature includes an element of the first initial covariance estimate. (Xu ¶ [0035]: “At 304, the method 300 includes estimating covariance matrices of target speech and noise associated with the received audio data based on a gated recurrent unit-based network.” Xu teaches that a plurality of covariance matrices is calculated based on the first set of audio data and thus the first set of features. More specifically, the first feature includes an element of the first initial covariance estimate because it is the first feature selection.) The reasons to combine are substantially similar to those of claim 1. Independent claim 11 is rejected on the same grounds under 35 U.S.C. 103 as claim 1, as claim 11 is substantially similar to claim 1, but has the following additional elements: Wu teaches: A device, comprising: a radio; (Wu ¶ [0036]: “wherein each of the N1 TSCI is associated with an antenna of the transmitter and an antenna of the receiver.” Wu teaches a device with an antenna, which is also known as a radio.)
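Similarly, the QR-decomposition feature of claim 9 can be illustrated as follows; the helper is assumed for illustration and does not appear in Wu:

```python
import numpy as np

def covariance_qr_feature(samples):
    """Form an initial covariance estimate from row-wise observations and
    return its QR decomposition: Q with orthonormal columns and R
    upper-triangular, such that Q @ R reproduces the estimate."""
    cov = np.cov(np.asarray(samples, dtype=float), rowvar=False)
    return np.linalg.qr(cov)

rng = np.random.default_rng(1)
x = rng.standard_normal((100, 4))
q, r = covariance_qr_feature(x)
# Q is orthogonal, R is upper-triangular, and Q @ R equals the estimate.
print(np.allclose(q @ q.T, np.eye(4)), np.allclose(np.triu(r), r))
```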
and a processing circuit, the processing circuit being configured to: receive, through the radio, a signal; (Wu ¶ [0048]: “The Type 1/Type 2 device may comprise at least one of: electronics, circuitry, transmitter (TX)/receivers (RX)/transceiver, RF interface, "Origin Satellite"/"Tracker Bot", unicast/multicast/broadcasting device, wireless source device…” Wu teaches a device that comprises a processing circuit and a radio. Wu ¶ [0036]: “wherein each of the N1 TSCI is associated with an antenna of the transmitter and an antenna of the receiver.” Wu teaches receiving the signal through the radio.) The reasons to combine are substantially similar to those of claim 1. Claims 12-14 are rejected on the same grounds under 35 U.S.C. 103 as claims 2-4 as they are substantially similar, respectively. Mutatis mutandis. Claims 15-16 are rejected on the same grounds under 35 U.S.C. 103 as claims 5-6 as they are substantially similar, respectively. Mutatis mutandis. Claim 17 is rejected on the same grounds under 35 U.S.C. 103 as claim 11 as they are substantially similar. Mutatis mutandis. Claims 18-20 are rejected on the same grounds under 35 U.S.C. 103 as claims 12-14 as they are substantially similar, respectively. Mutatis mutandis. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYU HYUNG HAN whose telephone number is (703) 756-5529. The examiner can normally be reached on M-F 9-5. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov, can be reached on (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Kyu Hyung Han/ Examiner Art Unit 2123 /ALEXEY SHMATOV/Supervisory Patent Examiner, Art Unit 2123

Prosecution Timeline

Jun 04, 2021
Application Filed
Dec 06, 2024
Non-Final Rejection — §101, §103
Mar 11, 2025
Interview Requested
Mar 17, 2025
Examiner Interview Summary
Mar 17, 2025
Applicant Interview (Telephonic)
Apr 18, 2025
Response Filed
Aug 02, 2025
Final Rejection — §101, §103
Oct 13, 2025
Interview Requested
Oct 23, 2025
Applicant Interview (Telephonic)
Oct 31, 2025
Examiner Interview Summary
Dec 02, 2025
Request for Continued Examination
Dec 09, 2025
Response after Non-Final Action
Feb 20, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585928
HARDWARE ARCHITECTURE FOR INTRODUCING ACTIVATION SPARSITY IN NEURAL NETWORK
2y 5m to grant Granted Mar 24, 2026
Patent 12387101
SYSTEMS AND METHODS FOR PRUNING BINARY NEURAL NETWORKS GUIDED BY WEIGHT FLIPPING FREQUENCY
2y 5m to grant Granted Aug 12, 2025


Prosecution Projections

3-4
Expected OA Rounds
43%
Grant Probability
85%
With Interview (+41.7%)
4y 6m
Median Time to Grant
High
PTA Risk
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
