DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to preliminary amendments and remarks filed on 03/31/2023. In the current amendments, the specification is amended, claims 1, 5-6, 8-18, and 21 are amended, and claims 19-20 and 22-23 have been cancelled. Claims 1-18 and 21 are pending and have been examined.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 03/31/2023 and 12/19/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character “606” has been used to designate both a method step 606 and a network in Fig. 7. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3-5, 7-9, and 11-13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 3 recites the limitation “the number of training dataset” in line 5. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, “the number of training dataset” has been interpreted as “a number of training dataset”.
Claim 5 recites the limitation “the j-th regenerated original data” in lines 1-2. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, “the j-th regenerated original data” has been interpreted as “a j-th regenerated original data”.
Claim 5 recites the limitation “j ∈ {1, …, L}” in line 2. This limitation lacks clarity because “j ∈ {1, …, L}” renders the limitation uncertain regarding what “L” is.
Claim 7 recites the limitation “the standard deviation of the noise N” in line 2. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, “the standard deviation of the noise N” has been interpreted as “a standard deviation of the noise N”.
Claim 7 recites the limitation “the standard deviation of the original data X” in line 3. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, “the standard deviation of the original data X” has been interpreted as “a standard deviation of the original data X”.
Claim 8 recites the limitation “the mutual information” in line 2. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, “the mutual information” has been interpreted as “mutual information”.
Claim 8 recites the limitation “the noisy observations Y” in line 3. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, “the noisy observations Y” has been interpreted as “the noisy input data Y” in reference to “noisy input data Y” in line 2 of claim 1.
Claim 9 recites the limitation “the mutual information” in line 2. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, “the mutual information” has been interpreted as “mutual information”.
Claim 9 recites the limitation “the noisy observations Y” in line 3. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, “the noisy observations Y” has been interpreted as “the noisy input data Y” in reference to “noisy input data Y” in line 2 of claim 1.
Claim 11 recites the limitation “the original subcarrier signals” in line 3. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, “the original subcarrier signals” has been interpreted as “original subcarrier signals”.
Claim 12 recites the limitation “the position” in line 3. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, “the position” has been interpreted as “a position”.
Claim 13 recites the limitation “the image” in line 2. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, “the image” has been interpreted as “the corrupted image” in reference to “a corrupted image” in line 2.
Claim 13 recites the limitation “the original image” in line 3. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, “the original image” has been interpreted as “an original image”.
Dependent claims 4-5 are rejected based on being directly or indirectly dependent on rejected claim 3.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-16 and 21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1,
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 1 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“learn noise N in noisy input data Y”
“regenerating original data X by subtracting the learned noise N from the noisy input data Y”
As drafted, under their broadest reasonable interpretations, cover mental processes (concepts performed in the human mind (including an observation, evaluation, judgement, opinion)) and mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations) but for the recitation of mere instructions to apply language (See MPEP 2106.05(f)). The above limitations in the context of this claim encompass learning noise N in noisy input data Y (corresponds to evaluation and judgement; in particular, a human, with the assistance of pen and paper, can learn a noise N in noisy input data Y); and regenerating original data X by subtracting the learned noise N from the noisy input data Y (corresponds to mathematical calculation).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The limitation:
“using a neural network”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
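For illustration only, and not as a characterization of the examined application's actual implementation, the recited regeneration step (subtracting the learned noise N from the noisy input data Y) amounts to elementwise subtraction; the array values below are hypothetical:

```python
import numpy as np

def regenerate_original(y: np.ndarray, n_hat: np.ndarray) -> np.ndarray:
    """Regenerate an estimate of the original data X from noisy input Y
    and an estimate n_hat of the noise N (the recited subtraction step)."""
    return y - n_hat

# Hypothetical example values: a perfect noise estimate recovers X exactly.
x = np.array([1.0, 2.0, 3.0])
n = np.array([0.1, -0.2, 0.05])
y = x + n                           # noisy observation Y = X + N
x_hat = regenerate_original(y, n)   # equals x when n_hat == n
```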
Regarding Claim 2,
Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 2 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: Please see the analysis of claim 1. The limitations of claim 2 are only additional elements to the abstract ideas of claim 1.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The limitations:
“wherein using the neural network to learn the noise N comprises inputting the noisy input data Y into an encoder of the neural network, and the learned noise N is output from a decoder of the neural network”
As drafted, are additional elements that amount to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). In addition, the recitation of additional elements in claim 1 of a generic neural network, as drafted, are reciting mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network, encoder, and decoder for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 3,
Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 3 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein parameters θ and θ’ of the neural network are optimized as follows for all i ∈ {1, ... ,M}:
[equation image: media_image1.png]
where Loss is a loss function, n is a realization vector of the noise N, y is a realization vector of the noisy input data Y, M is the number of training dataset, the parameter θ is {W, b}, W is a weight matrix for encoding, b is a bias vector for encoding, the parameter θ’ is {W', b'}, W' is a weight matrix for decoding, b' is a bias vector for decoding, gθ’ is a decoding function of the decoder of the neural network, and fθ is an encoding function of the encoder of the neural network”
As drafted, under their broadest reasonable interpretations, cover mental processes (concepts performed in the human mind (including an observation, evaluation, judgement, opinion)) and mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations) but for the recitation of mere instructions to apply language (See MPEP 2106.05(f)). The above limitations in the context of this claim encompass optimizing parameters of the neural network using the given equation (corresponds to mathematical calculations and mathematical equations).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The recitation of additional elements in claim 2 of a generic neural network, encoder, and decoder, as drafted, are reciting mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network, encoder, and decoder for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
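For illustration only, the recited optimization of the parameters θ = {W, b} and θ' = {W', b'} can be sketched as minimizing a loss between noise realizations n and gθ'(fθ(y)) over M training pairs. This sketch assumes a squared-error Loss, sigmoid activations, plain gradient descent, and synthetic data; none of these specifics are taken from the examined application:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, h = 64, 8, 4                  # training pairs, data dim, hidden dim
x = rng.normal(size=(M, d))         # original data realizations
n = 0.1 * rng.normal(size=(M, d))   # noise realizations
y = x + n                           # noisy observations

# theta = {W, b} (encoder), theta' = {W', b'} (decoder)
W, b = 0.1 * rng.normal(size=(h, d)), np.zeros(h)
Wp, bp = 0.1 * rng.normal(size=(d, h)), np.zeros(d)
S = lambda z: 1.0 / (1.0 + np.exp(-z))  # sigmoid activation
lr = 0.5

def forward(y):
    hid = S(y @ W.T + b)                # f_theta(y)
    return hid, S(hid @ Wp.T + bp)      # g_theta'(f_theta(y))

losses = []
for _ in range(200):
    hid, out = forward(y)
    err = out - n                       # dLoss/dout (up to a constant)
    losses.append(float(np.mean(err**2)))
    d_out = err * out * (1 - out)       # sigmoid derivative at the output
    d_hid = (d_out @ Wp) * hid * (1 - hid)
    Wp -= lr * d_out.T @ hid / M; bp -= lr * d_out.mean(0)
    W  -= lr * d_hid.T @ y  / M; b  -= lr * d_hid.mean(0)
```

Gradient descent here stands in for whatever optimizer the application may use; the point is only that the objective pairs the noise realization n with the network output gθ'(fθ(y)).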
Regarding Claim 4,
Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 4 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
“wherein fθ(y) = S(Wy+b), gθ’(fθ(y)) = S(W’(fθ(y))+b’), and S is a sigmoid activation function for neural networks”
As drafted, is part of the abstract idea of claim 3 of optimizing parameters of the neural network. The limitation of claim 4 further limits the limitation of claim 3 by further defining what the equations for the fθ(y) and gθ’(fθ(y)) functions comprise. The above limitation in the context of this claim encompasses optimizing parameters of the neural network using the given equations (corresponds to mathematical calculations and mathematical equations).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The recitation of additional elements in claim 3 of a generic neural network, encoder, and decoder, as drafted, are reciting mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network, encoder, and decoder for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
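For illustration only, the recited functions fθ(y) = S(Wy+b) and gθ'(fθ(y)) = S(W'(fθ(y))+b') can be sketched directly; the dimensions and parameter values below are hypothetical and not taken from the application:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # S, the sigmoid activation

def f_theta(y, W, b):
    return sigmoid(W @ y + b)        # encoder: f_theta(y) = S(Wy + b)

def g_theta_prime(h, Wp, bp):
    return sigmoid(Wp @ h + bp)      # decoder: g_theta'(h) = S(W'h + b')

rng = np.random.default_rng(0)
y = rng.normal(size=4)                                # noisy input vector
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)    # theta = {W, b}
Wp, bp = rng.normal(size=(4, 3)), rng.normal(size=4)  # theta' = {W', b'}
out = g_theta_prime(f_theta(y, W, b), Wp, bp)         # values lie in (0, 1)
```

Because S is a sigmoid, every component of the decoder output lies strictly between 0 and 1.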
Regarding Claim 5,
Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 5 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein x̄_nl^(j) is the j-th regenerated original data and is represented as follows for all j ∈ {1, …, L}:
[equation image: media_image2.png]”
As drafted, under their broadest reasonable interpretations, cover mental processes (concepts performed in the human mind (including an observation, evaluation, judgement, opinion)) and mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations) but for the recitation of mere instructions to apply language (See MPEP 2106.05(f)). The above limitations in the context of this claim encompass calculating the regenerated original data using the given equation (corresponds to mathematical calculations and mathematical equations).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The recitation of additional elements in claim 3 of a generic neural network, encoder, and decoder, as drafted, are reciting mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network, encoder, and decoder for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 6,
Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 6 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“determining whether to use a noise learning-based denoising autoencoder (nlDAE) method or a denoising autoencoder (DAE) method that learns the original data X directly”
“learn the noise N and regenerating the original data X by subtracting the learned noise N from the noisy input data Y in response to determining to use the nlDAE method”
As drafted, under their broadest reasonable interpretations, cover mental processes (concepts performed in the human mind (including an observation, evaluation, judgement, opinion)) and mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations) but for the recitation of mere instructions to apply language (See MPEP 2106.05(f)). The above limitations in the context of this claim encompass determining whether to use a nlDAE method or a DAE method that learns the original data X directly (corresponds to evaluation and judgement; in particular, a human, with the assistance of pen and paper, can determine whether to use a nlDAE method or DAE method); and in response to determining to use the nlDAE method, learning the noise N and regenerating the original data X by subtracting the learned noise N from noisy input data Y (corresponds to evaluation and judgement; in particular, a human, with the assistance of pen and paper, can, in response to determining to use the nlDAE method, learn the noise N and regenerate the original data X by subtracting the learned noise N from the noisy input data Y).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The limitation:
“using the neural network”
As drafted, is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). In addition, the recitation of additional elements in claim 1 of a generic neural network, as drafted, are reciting mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 7,
Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 7 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein determining whether to use the nlDAE method or the DAE method is based on a ratio between the standard deviation of the noise N and the standard deviation of the original data X”
As drafted, under their broadest reasonable interpretations, cover mental processes (concepts performed in the human mind (including an observation, evaluation, judgement, opinion)) and mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations) but for the recitation of mere instructions to apply language (See MPEP 2106.05(f)). The above limitations in the context of this claim encompass determining whether to use the nlDAE method or the DAE method based on a ratio between the standard deviation of the noise N and the standard deviation of the original data X (corresponds to evaluation and judgement; in particular, a human, with the assistance of pen and paper, can use a ratio between the standard deviation of the noise N and the standard deviation of the original data X to determine whether to use the nlDAE method or the DAE method).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The recitation of additional elements in claim 6 of a generic neural network, as drafted, are reciting mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
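For illustration only, a decision based on the ratio of the standard deviation of the noise N to that of the original data X can be sketched as below. The claim recites only that the determination is based on this ratio; the specific threshold (a plain comparison at 1.0) is an assumption made for this sketch, as are the sample values:

```python
import numpy as np

def choose_method(x_samples: np.ndarray, n_samples: np.ndarray) -> str:
    """Select nlDAE or DAE based on the ratio sigma_N / sigma_X
    (threshold of 1.0 is a hypothetical choice for illustration)."""
    ratio = np.std(n_samples) / np.std(x_samples)
    return "nlDAE" if ratio < 1.0 else "DAE"

# Hypothetical samples: sigma_X ~= 2.0, sigma_N ~= 0.5, so ratio ~= 0.25.
x = np.random.default_rng(1).normal(0.0, 2.0, size=1000)
n = np.random.default_rng(2).normal(0.0, 0.5, size=1000)
```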
Regarding Claim 8,
Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 8 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein determining whether to use the nlDAE method or the DAE method is based on the mutual information between the original data X and the noisy observations Y”
As drafted, under their broadest reasonable interpretations, cover mental processes (concepts performed in the human mind (including an observation, evaluation, judgement, opinion)) and mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations) but for the recitation of mere instructions to apply language (See MPEP 2106.05(f)). The above limitations in the context of this claim encompass determining whether to use the nlDAE method or the DAE method based on mutual information between the original data X and the noisy observations Y (corresponds to evaluation and judgement; in particular, a human, with the assistance of pen and paper, can use mutual information between the original data X and the noisy observations Y to determine whether to use the nlDAE method or the DAE method).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The recitation of additional elements in claim 6 of a generic neural network, as drafted, are reciting mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
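For illustration only, mutual information between the original data X (or the noise N) and the noisy observations Y has a closed form when X and N are assumed independent zero-mean Gaussians with Y = X + N: I(X;Y) = ½·ln(1 + σ_X²/σ_N²) and I(N;Y) = ½·ln(1 + σ_N²/σ_X²). The Gaussian assumption and the numeric values are hypothetical, used only to make the recited quantities concrete:

```python
import math

def mi_gaussian(sig_signal: float, sig_other: float) -> float:
    """Mutual information I(S; Y) for Y = S + O with independent
    zero-mean Gaussians S and O (illustrative closed form)."""
    return 0.5 * math.log(1.0 + sig_signal**2 / sig_other**2)

# Hypothetical standard deviations: sigma_X = 2.0, sigma_N = 0.5.
i_xy = mi_gaussian(2.0, 0.5)  # I(X; Y) = 0.5 * ln(17)
i_ny = mi_gaussian(0.5, 2.0)  # I(N; Y) = 0.5 * ln(1.0625)
```

Under these example values the observations carry far more information about X than about N, which is the kind of comparison the recited determination could draw on.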
Regarding Claim 9,
Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 9 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein determining whether to use the nlDAE method or the DAE method is based on the mutual information between the noise N and the noisy observations Y”
As drafted, under their broadest reasonable interpretations, cover mental processes (concepts performed in the human mind (including an observation, evaluation, judgement, opinion)) and mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations) but for the recitation of mere instructions to apply language (See MPEP 2106.05(f)). The above limitations in the context of this claim encompass determining whether to use the nlDAE method or the DAE method based on mutual information between the noise N and the noisy observations Y (corresponds to evaluation and judgement; in particular, a human, with the assistance of pen and paper, can use mutual information between the noise N and the noisy observations Y to determine whether to use the nlDAE method or the DAE method).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The recitation of additional elements in claim 6 of a generic neural network, as drafted, are reciting mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 10,
Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 10 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: Please see the analysis of claim 1. The limitations of claim 10 are only additional elements to the abstract ideas of claim 1.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)) or insignificant extra-solution activity (See MPEP 2106.05(g)). The limitations:
“training the neural network, wherein training the neural network comprises: inputting noisy training data into an encoder of the neural network; and outputting training noise from a decoder of the neural network”
As drafted, are additional elements that amount to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). In addition, the recitation of additional elements in claim 1 of a generic neural network, as drafted, are reciting mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network, encoder, decoder, and generic training of the neural network for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 11,
Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 11 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein the noisy input data Y are subcarrier signals of an orthogonal frequency-division multiplexing (OFDM) scheme, the regenerated original data X are the original subcarrier signals”
As drafted, are part of the abstract ideas of claim 1 of learning noise N and regenerating original data X. The limitations of claim 11 further limit the limitations of claim 1 by further defining what the noisy input data Y and the regenerated original data X comprise. The above limitations in the context of this claim encompass learning noise N in noisy input data Y comprising subcarrier signals of an orthogonal frequency-division multiplexing (OFDM) scheme (corresponds to evaluation and judgement; in particular, a human, with the assistance of pen and paper, can learn a noise N in noisy input data Y comprising subcarrier signals of an orthogonal frequency-division multiplexing (OFDM) scheme); and regenerating original data X comprising original subcarrier signals by subtracting the learned noise N from the noisy input data Y (corresponds to mathematical calculation). The limitation:
“demodulating the original subcarrier signals”
As drafted, under its broadest reasonable interpretation, covers mental processes (concepts performed in the human mind (including an observation, evaluation, judgement, opinion)) and mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations) but for the recitation of mere instructions to apply language (See MPEP 2106.05(f)). The above limitation in the context of this claim encompasses demodulating the original subcarrier signals (corresponds to evaluation and judgement; in particular, a human, with the assistance of pen and paper, can demodulate the original subcarrier signals).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The recitation in claim 1 of a generic neural network, as drafted, is mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
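For illustration only, the operation mapped above for claim 11 — subtracting a learned noise N from noisy OFDM subcarrier signals Y to regenerate X, then demodulating — can be sketched numerically. All values below are hypothetical, and the noise is assumed to have been recovered exactly; this sketch is not Applicant's claimed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical original QPSK symbols on 8 OFDM subcarriers (original data X).
bits = rng.integers(0, 2, size=(8, 2))
X = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

# Additive noise N corrupts the subcarriers: Y = X + N (noisy input data Y).
N = 0.3 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
Y = X + N

# With the noise N learned (here, assumed known exactly), the original
# subcarrier signals are regenerated by subtraction: X = Y - N.
X_hat = Y - N

# Demodulate the regenerated subcarriers back to bits (sign decisions).
bits_hat = np.stack([(X_hat.real > 0).astype(int),
                     (X_hat.imag > 0).astype(int)], axis=1)

assert np.allclose(X_hat, X)
assert np.array_equal(bits_hat, bits)
```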
Regarding Claim 12,
Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 12 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
“wherein the noisy input data Y are estimated distances between a target node and reference nodes”
As drafted, this limitation is part of the abstract idea of claim 1 of learning noise N. The limitation of claim 12 further limits the limitation of claim 1 by further defining what the noisy input data Y comprises. The above limitation in the context of this claim encompasses learning noise N in noisy input data Y comprising estimated distances between a target node and reference nodes (corresponds to evaluation and judgement; in particular, a human, with the assistance of pen and paper, can learn a noise N in noisy input data Y comprising estimated distances between a target node and reference nodes). The limitation:
“using the original data X to estimate the position of the target node”
As drafted, under its broadest reasonable interpretation, this limitation covers mental processes (concepts performed in the human mind (including an observation, evaluation, judgement, opinion)) and mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations) but for the recitation of mere instructions to apply language (See MPEP 2106.05(f)). The above limitation in the context of this claim encompasses estimating the position of the target node using the original data X (corresponds to evaluation and judgement; in particular, a human, with the assistance of pen and paper, can use the original data X to estimate the position of the target node).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The recitation in claim 1 of a generic neural network, as drafted, is mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
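For illustration only, claim 12's use of denoised distance estimates to estimate a target node's position can be sketched as standard linearized least-squares trilateration. The reference-node coordinates and target position below are hypothetical; this is not Applicant's claimed implementation.

```python
import numpy as np

# Hypothetical reference node positions and true target node position.
refs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])

# Denoised distance estimates (the original data X in the claim's mapping).
d = np.linalg.norm(refs - target, axis=1)

# Linearize the range equations |p - r_i|^2 = d_i^2 by subtracting the
# first one: 2 (r_i - r_0) . p = |r_i|^2 - |r_0|^2 - d_i^2 + d_0^2.
A = 2 * (refs[1:] - refs[0])
b = (np.sum(refs[1:] ** 2, axis=1) - np.sum(refs[0] ** 2)
     - d[1:] ** 2 + d[0] ** 2)

# Solve the overdetermined linear system for the estimated position.
p_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(p_hat, target, atol=1e-6)
```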
Regarding Claim 13,
Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 13 is directed to a method, which is directed to a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“wherein the noisy input data Y are a corrupted image, the noise N is corruptions in the image, and the original data X is the original image”
As drafted, these limitations are part of the abstract ideas of claim 1 of learning noise N and regenerating original data X. The limitations of claim 13 further limit the limitations of claim 1 by further defining what the noisy input data Y, the noise N, and the regenerated original data X comprise. The above limitations in the context of this claim encompass learning noise N comprising corruptions in an image in noisy input data Y comprising a corrupted image (corresponds to evaluation and judgement; in particular, a human, with the assistance of pen and paper, can learn a noise N comprising corruptions in an image in noisy input data Y comprising a corrupted image); and regenerating original data X comprising an original image by subtracting the learned noise N from the noisy input data Y (corresponds to mathematical calculation).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The recitation in claim 1 of a generic neural network, as drafted, is mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 14,
Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 14 is directed to an apparatus, which is directed to a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“learn noise N in noisy input data Y”
“regenerate original data X by subtracting the learned noise N from the noisy input data Y”
As drafted, under their broadest reasonable interpretations, these limitations cover mental processes (concepts performed in the human mind (including an observation, evaluation, judgement, opinion)) and mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations) but for the recitation of mere instructions to apply language (See MPEP 2106.05(f)). The above limitations in the context of this claim encompass learning noise N in noisy input data Y (corresponds to evaluation and judgement; in particular, a human, with the assistance of pen and paper, can learn a noise N in noisy input data Y); and regenerating original data X by subtracting the learned noise N from the noisy input data Y (corresponds to mathematical calculation).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The limitation:
“use a neural network”
As drafted, this limitation is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 15,
Claim 15 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 15 is directed to an apparatus, which is directed to a machine, one of the statutory categories.
Step 2A Prong One Analysis: Please see the analysis of claim 14. The limitations of claim 15 are only additional elements to the abstract ideas of claim 14.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The limitation:
“wherein the neural network comprises an encoder and a decoder”
As drafted, this limitation is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). In addition, the recitation in claim 14 of a generic neural network, as drafted, is mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network, encoder, and decoder for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 16,
Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 16 is directed to an apparatus, which is directed to a machine, one of the statutory categories.
Step 2A Prong One Analysis: The limitation:
“subtract the learned noise N from the noisy input data Y”
As drafted, under its broadest reasonable interpretation, this limitation covers mental processes (concepts performed in the human mind (including an observation, evaluation, judgement, opinion)) and mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations) but for the recitation of mere instructions to apply language (See MPEP 2106.05(f)). The above limitation in the context of this claim encompasses subtracting the learned noise N from the noisy input data Y (corresponds to mathematical calculation).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The limitation:
“a subtractor”
As drafted, this limitation is an additional element that amounts to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). In addition, the recitation in claim 14 of a generic neural network, as drafted, is mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network and subtractor for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 21,
Claim 21 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 21 is directed to an apparatus, which is directed to a machine, one of the statutory categories.
Step 2A Prong One Analysis: Please see the analysis of claim 14. The limitations of claim 21 are only additional elements to the abstract ideas of claim 14.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. In particular, the claim recites additional elements that are mere instructions to apply (See MPEP 2106.05(f)). The limitations:
“processing circuitry”
“a memory containing instructions executable by said processing circuitry”
“whereby said apparatus is operative to perform the using the neural network to learn the noise N and the regenerating the original data X”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception for the abstract ideas. See MPEP 2106.05(f). In addition, the recitation in claim 14 of a generic neural network, as drafted, is mere instructions to apply language such that it amounts to no more than mere instructions to apply the exceptions. Therefore, the additional elements do not integrate the abstract ideas into a practical application.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are “mere instructions to apply an exception” (i.e., the additional elements describe a generic neural network, processing circuitry, and memory for applying the abstract ideas). Mere instructions to apply an exception cannot provide an inventive concept. The claim is not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 6, 10, 14-18, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Migliori et al. (US 10,291,268 B1) in view of Lin et al. ("Speech Enhancement Using Forked Generative Adversarial Networks with Spectral Subtraction").
Regarding Claim 1,
Migliori et al. teaches a denoising method (Fig. 7; Col. 7, lines 56-62: "FIG. 7 is a flow chart showing steps in a process for denoising input signals according to illustrative embodiments. It should be appreciated that the steps and order of steps described and illustrated are provided as examples. Fewer, additional, or alternative steps may also be involved in the process for denoising an input signal, and/or some steps may occur in a different order" teaches a method for denoising input data) comprising:
using a neural network to learn noise N in noisy input data Y (Fig. 1; Fig. 7; Col. 7, line 63 - Col. 8, line 4: "Referring to FIG. 7, the process 700 begins at step 710 at which a series of random reference signals is received, along with the reference signals mixed with unknown noise in a transmission environment. These reference sample signals may include I/Q modulated signals 110A, and the noise may be produced by the noise source 120A. These input signals are converted into vector form. At step 720, features associated with the noise mixed with the reference signals are learned by the convolutional autoencoder 200" teaches using a convolutional autoencoder 200 (neural network) to learn features of the noise (learn noise N) from reference signals mixed with unknown noise (noisy input data Y)).
Migliori et al. does not appear to explicitly teach regenerating original data X by subtracting the learned noise N from the noisy input data Y.
However, Lin et al. teaches regenerating original data X by subtracting the learned noise N from the noisy input data Y (Fig. 1; Section 2.1, first paragraph: "The noise information that is learned by the extra decoder can be integrated into the GAN-based framework via spectral subtraction. Spectral subtraction is as such used to recover the speech signal by subtracting an estimate of the average noise spectrum from the noisy signal spectrum" teaches that the learned noise output from the decoder (part of the neural network) can be used in spectral subtraction to subtract the noise from the noisy input signal. Fig. 1; Section 2.2, third paragraph: "Spectral subtraction is one of the traditional algorithms for enhancing a single speech channel. Since the noisy signal xt = x̃t + vt is the addition of the desired signal value x̃t and the noise value vt at time t, the standard spectral subtraction is defined in the frequency domain as: X̃(jω) = X(jω) − V(jω), where X(jω), X̃(jω) and V(jω) are Fourier transforms of xt, x̃t, vt, respectively" teaches regenerating the desired signal x̃t (original data X) by subtracting the noise vt (learned noise N) from the noisy signal xt (noisy input data Y)).
Migliori et al. and Lin et al. are analogous to the claimed invention because they are directed towards a neural network model for denoising data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate regenerating original data X by subtracting the learned noise N from the noisy input data Y as taught by Lin et al. into the disclosed invention of Migliori et al.
One of ordinary skill in the art would have been motivated to make this modification "to capture both speech and noise patterns" to perform "speech signal extraction … using a spectral subtraction loss term and a margin-based loss term to further improve the quality of the enhanced speech signals" (Lin et al. Section 5).
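For illustration only, the spectral subtraction relied on from Lin et al. — subtracting an estimate of the noise spectrum from the noisy-signal spectrum and inverting back to the time domain — can be checked numerically. The sketch below assumes the noise is known exactly and uses hypothetical signal values; it is not Lin et al.'s implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical clean signal x̃_t and additive noise v_t, with x_t = x̃_t + v_t.
x_clean = np.sin(2 * np.pi * 5 * np.arange(64) / 64)
v = 0.2 * rng.standard_normal(64)
x_noisy = x_clean + v

# Frequency domain: X(jω) = X̃(jω) + V(jω), so subtracting the noise
# spectrum leaves the desired spectrum X̃(jω).
X_tilde = np.fft.fft(x_noisy) - np.fft.fft(v)

# Inverting the FFT regenerates the desired signal (original data X).
x_hat = np.fft.ifft(X_tilde).real

assert np.allclose(x_hat, x_clean)
```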
Regarding Claim 2,
Migliori et al. in view of Lin et al. teaches the method of claim 1.
In addition, Migliori et al. further teaches wherein using the neural network to learn the noise N comprises inputting the noisy input data Y into an encoder of the neural network (Fig. 1; Fig. 7; Col. 7, line 63 - Col. 8, line 4: "Referring to FIG. 7, the process 700 begins at step 710 at which a series of random reference signals is received, along with the reference signals mixed with unknown noise in a transmission environment. These reference sample signals may include I/Q modulated signals 110A, and the noise may be produced by the noise source 120A. These input signals are converted into vector form. At step 720, features associated with the noise mixed with the reference signals are learned by the convolutional autoencoder 200" teaches that reference signals mixed with unknown noise (noisy input data Y) input into the convolutional autoencoder 200 (neural network). Fig. 3; Col. 4, lines 4-23: "FIG. 3 illustrates in detail an example configuration for performing denoising using a convolutional autoencoder according to illustrative embodiments. Referring to FIG. 3, the convolutional autoencoder 200 includes at least one layer including convolutional elements 210A and pooling elements 220A … The convolutional elements 210A include a convolving encoder 212 having a bank of n filters, the filters also referred to as a “kernel”. The input data 110A (for the training phase) or 110B (for the post-training phase) is converted to a vector form, and the convolving encoder 212 convolves the input vector v with the kernel" teaches that the input data 110B (noisy input data Y) is input into an encoder of the convolutional autoencoder 200 (neural network)).
Additionally, Lin et al. further teaches the learned noise N is output from a decoder of the neural network (Fig. 1; Algorithm 1; Section 2.1, third paragraph: "In Step 4, the speech decoder Ψs(·) and the noise decoder Ψv(·) aim to generate the speech signal and the additive noise signal, respectively. … The resulting outputs are the clean speech prediction ŝ ∈ Rm, and the noise prediction v̂ ∈ Rm" teaches that the noise decoder of the neural network outputs an additive noise signal (learned noise N)).
Migliori et al. and Lin et al. are analogous to the claimed invention because they are directed towards a neural network model for denoising data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate outputting the learned noise N from a decoder of the neural network as taught by Lin et al. into the disclosed invention of Migliori et al.
One of ordinary skill in the art would have been motivated to make this modification "to capture both speech and noise patterns" to perform "speech signal extraction … using a spectral subtraction loss term and a margin-based loss term to further improve the quality of the enhanced speech signals" (Lin et al. Section 5).
Regarding Claim 6,
Migliori et al. in view of Lin et al. teaches the method of claim 1.
In addition, Lin et al. further teaches further comprising: determining whether to use a noise learning-based denoising autoencoder (nlDAE) method or a denoising autoencoder (DAE) method that learns the original data X directly (Fig. 2; Table 1; Section 4, third paragraph: "Given that S-ForkGAN and GAN-AE are auto-encoder based methods, the performance is determined for different input features … From Table 1, one can see that both GAN-AE and S-ForkGAN with LPS features outperform systems with raw audio as input. The results show that directly operating on LPS is more helpful for the ASR tasks. Note that S-ForkGAN outperforms GAN-AE with respect to these two features" teaches determining whether to use the S-ForkGAN (nlDAE) or GAN-AE (DAE) autoencoder-based method for the input features to generate denoised signals. Section, seventh paragraph: "The setup for GAN-AE is similar to S-ForkGAN. The only difference is that the generator is an auto-encoder architecture with the same configuration as the proposed method, using one decoder to generate clean speech" teaches that the GAN-AE (DAE) autoencoder-based method generates clean speech signals directly (i.e., learns the original data X directly)); and
using the neural network to learn the noise N and regenerating the original data X by subtracting the learned noise N from the noisy input data Y in response to determining to use the nlDAE method (Fig. 1; Section 2.1, first paragraph: "The proposed S-ForkGAN architecture uses a generator and a discriminator network as shown in Fig. 1 … The noise information that is learned by the extra decoder can be integrated into the GAN-based framework via spectral subtraction. Spectral subtraction is as such used to recover the speech signal by subtracting an estimate of the average noise spectrum from the noisy signal spectrum" teaches learning a noise signal as an output from the decoder (part of the neural network) that can then be used in spectral subtraction to subtract the noise from the noisy input signal as part of using the S-ForkGAN (nlDAE) method. Fig. 1; Section 2.2, third paragraph: "Spectral subtraction is one of the traditional algorithms for enhancing a single speech channel. Since the noisy signal xt = x̃t + vt is the addition of the desired signal value x̃t and the noise value vt at time t, the standard spectral subtraction is defined in the frequency domain as: X̃(jω) = X(jω) − V(jω), where X(jω), X̃(jω) and V(jω) are Fourier transforms of xt, x̃t, vt, respectively" teaches regenerating the desired signal x̃t (original data X) by subtracting the noise vt (learned noise N) from the noisy signal xt (noisy input data Y)).
Migliori et al. and Lin et al. are analogous to the claimed invention because they are directed towards a neural network model for denoising data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate determining whether to use a noise learning-based denoising autoencoder (nlDAE) method or a denoising autoencoder (DAE) method that learns the original data X directly, and using the neural network to learn the noise N and regenerating the original data X by subtracting the learned noise N from the noisy input data Y in response to determining to use the nlDAE method, as taught by Lin et al. into the disclosed invention of Migliori et al.
One of ordinary skill in the art would have been motivated to make this modification "to capture both speech and noise patterns" to perform "speech signal extraction … using a spectral subtraction loss term and a margin-based loss term to further improve the quality of the enhanced speech signals" (Lin et al. Section 5).
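For illustration only, the distinction drawn in claim 6 — a DAE that learns the original data X directly versus an nlDAE that learns the noise N and subtracts it from Y — can be sketched with linear least-squares maps standing in for the autoencoders. These stand-ins are not the claimed neural networks: for linear maps the two routes coincide exactly (shown by the final assertion), whereas trained nonlinear networks generally differ, which is what makes the choice between methods meaningful.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training data: original X, noise N, noisy input Y = X + N.
X = rng.standard_normal((200, 8))
N = 0.3 * rng.standard_normal((200, 8))
Y = X + N

# Linear least-squares "autoencoders" contrasting the two regression targets.
W_dae, *_ = np.linalg.lstsq(Y, X, rcond=None)    # DAE: learn X directly
W_nldae, *_ = np.linalg.lstsq(Y, N, rcond=None)  # nlDAE: learn the noise N

# Fresh noisy inputs to denoise.
Y_test = X[:5] + 0.3 * rng.standard_normal((5, 8))

X_dae = Y_test @ W_dae                # DAE output: predicted original data
X_nldae = Y_test - Y_test @ W_nldae   # nlDAE: subtract predicted noise from Y

# Both paths target the same original data X; for these linear stand-ins
# they coincide exactly because W_dae + W_nldae solves Y W = X + N = Y.
assert np.allclose(X_dae, X_nldae)
```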
Regarding Claim 10,
Migliori et al. in view of Lin et al. teaches the method of claim 1.
In addition, Migliori et al. further teaches further comprising training the neural network (Fig. 1; Col. 3, lines 3-12: "FIG. 1 illustrates an example configuration for training a convolutional autoencoder for performing denoising according to illustrative embodiments. As shown in FIG. 1, reference transmission signals 110A exposed to a noise source 120A representing a transmission environment having unknown noise are used to train a convolutional autoencoder 200. The reference transmission signals 110A are received by the convolutional autoencoder 200, along with noisy input signals including the reference signals 110A mixed with the noise from a noise source 120A" teaches training the convolutional autoencoder 200 (neural network)), wherein training the neural network comprises:
inputting noisy training data into an encoder of the neural network (Fig. 1; Fig. 7; Col. 7, line 63 - Col. 8, line 4: "Referring to FIG. 7, the process 700 begins at step 710 at which a series of random reference signals is received, along with the reference signals mixed with unknown noise in a transmission environment. These reference sample signals may include I/Q modulated signals 110A, and the noise may be produced by the noise source 120A. These input signals are converted into vector form. At step 720, features associated with the noise mixed with the reference signals are learned by the convolutional autoencoder 200" teaches that reference signals mixed with unknown noise (noisy input data) input into the convolutional autoencoder 200 (neural network). Fig. 3; Col. 4, lines 4-23: "FIG. 3 illustrates in detail an example configuration for performing denoising using a convolutional autoencoder according to illustrative embodiments. Referring to FIG. 3, the convolutional autoencoder 200 includes at least one layer including convolutional elements 210A and pooling elements 220A … The convolutional elements 210A include a convolving encoder 212 having a bank of n filters, the filters also referred to as a “kernel”. The input data 110A (for the training phase) or 110B (for the post-training phase) is converted to a vector form, and the convolving encoder 212 convolves the input vector v with the kernel" teaches that the input data 110A (noisy training data) is input into an encoder of the convolutional autoencoder 200 (neural network)).
Additionally, Lin et al. further teaches outputting training noise from a decoder of the neural network (Fig. 1; Algorithm 1; Section 2.1, first-third paragraphs: "The noise information that is learned by the extra decoder can be integrated into the GAN-based framework via spectral subtraction. Spectral subtraction is as such used to recover the speech signal by subtracting an estimate of the average noise spectrum from the noisy signal spectrum … In Step 4, the speech decoder Ψs(·) and the noise decoder Ψv(·) aim to generate the speech signal and the additive noise signal, respectively. … The resulting outputs are the clean speech prediction ŝ ∈ Rm, and the noise prediction v̂ ∈ Rm" teaches that the neural network is trained using learned noise information and that the noise decoder of the neural network outputs an additive noise signal (training noise)).
Migliori et al. and Lin et al. are analogous to the claimed invention because they are directed towards a neural network model for denoising data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate outputting training noise from a decoder of the neural network as taught by Lin et al. into the disclosed invention of Migliori et al.
One of ordinary skill in the art would have been motivated to make this modification "to capture both speech and noise patterns" to perform "speech signal extraction … using a spectral subtraction loss term and a margin-based loss term to further improve the quality of the enhanced speech signals" (Lin et al. Section 5).
Regarding Claim 14,
Migliori et al. teaches an apparatus (Fig. 7; Col. 7, lines 56-62: "FIG. 7 is a flow chart showing steps in a process for denoising input signals according to illustrative embodiments. It should be appreciated that the steps and order of steps described and illustrated are provided as examples. Fewer, additional, or alternative steps may also be involved in the process for denoising an input signal, and/or some steps may occur in a different order" teaches a method for denoising input data. Fig. 8; Col. 8, lines 45-47: "FIG. 8 is a block diagram of a computing device with which the denoising system may be implemented, according to illustrative embodiments" teaches a computing device (apparatus) for implementing the denoising method) adapted to:
use a neural network to learn noise N in noisy input data Y (Fig. 1; Fig. 7; Col. 7, line 63 - Col. 8, line 4: "Referring to FIG. 7, the process 700 begins at step 710 at which a series of random reference signals is received, along with the reference signals mixed with unknown noise in a transmission environment. These reference sample signals may include I/Q modulated signals 110A, and the noise may be produced by the noise source 120A. These input signals are converted into vector form. At step 720, features associated with the noise mixed with the reference signals are learned by the convolutional autoencoder 200" teaches using a convolutional autoencoder 200 (neural network) to learn features of the noise (learn noise N) from reference signals mixed with unknown noise (noisy input data Y)).
Migliori et al. does not appear to explicitly teach regenerate original data X by subtracting the learned noise N from the noisy input data Y.
However, Lin et al. teaches regenerate original data X by subtracting the learned noise N from the noisy input data Y (Fig. 1; Section 2.1, first paragraph: "The noise information that is learned by the extra decoder can be integrated into the GAN-based framework via spectral subtraction. Spectral subtraction is as such used to recover the speech signal by subtracting an estimate of the average noise spectrum from the noisy signal spectrum" teaches that the learned noise output from the decoder (part of the neural network) can be used in spectral subtraction to subtract the noise from the noisy input signal. Fig. 1; Section 2.2, third paragraph: "Spectral subtraction is one of the traditional algorithms for enhancing a single speech channel. Since the noisy signal xt = x̃t + vt is the addition of the desired signal value x̃t and the noise value vt at time t, the standard spectral subtraction is defined in the frequency domain as: X̃(jω) = X(jω) − V(jω) [media_image3.png], where X(jω), X̃(jω) and V(jω) are Fourier transforms of xt, x̃t, vt, respectively" teaches regenerating the desired signal x̃t (original data X) by subtracting the noise vt (learned noise N) from the noisy signal xt (noisy input data Y)).
Migliori et al. and Lin et al. are analogous to the claimed invention because they are directed towards a neural network model for denoising data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate regenerate original data X by subtracting the learned noise N from the noisy input data Y as taught by Lin et al. to the disclosed invention of Migliori et al.
One of ordinary skill in the art would have been motivated to make this modification "to capture both speech and noise patterns" to perform "speech signal extraction … using a spectral subtraction loss term and a margin-based loss term to further improve the quality of the enhanced speech signals" (Lin et al. Section 5).
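For illustration, the spectral subtraction operation described in the cited passage can be sketched as follows. This is a minimal numpy sketch only; the function name, the per-bin magnitude floor, and the choice to keep the noisy phase are illustrative assumptions, not code from Lin et al.

```python
import numpy as np

def spectral_subtraction(y, n_est):
    """Estimate the clean signal by subtracting the learned noise
    spectrum from the noisy signal spectrum, per frequency bin."""
    Y = np.fft.rfft(y)        # spectrum of the noisy input (X(jw) in the quote)
    V = np.fft.rfft(n_est)    # spectrum of the learned noise (V(jw))
    # Magnitude subtraction with a floor at zero; the noisy signal's
    # phase is retained, as in standard spectral subtraction.
    mag = np.maximum(np.abs(Y) - np.abs(V), 0.0)
    return np.fft.irfft(mag * np.exp(1j * np.angle(Y)), n=len(y))
```

For a toy tone corrupted by a sinusoidal interferer, subtracting the interferer's spectrum recovers the original tone essentially exactly.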
Regarding Claim 15,
Migliori et al. in view of Lin et al. teaches the apparatus of claim 14.
In addition, Lin et al. further teaches wherein the neural network comprises an encoder and a decoder (Fig. 1; Algorithm 1; Section 2.1, first-third paragraphs: "The noise information that is learned by the extra decoder can be integrated into the GAN-based framework via spectral subtraction. Spectral subtraction is as such used to recover the speech signal by subtracting an estimate of the average noise spectrum from the noisy signal spectrum … In Step 1, the LPS features are extracted as the input of encoder using an FFT. Then, in Step 2, the encoder function Φ(·) extracts a latent vector c from the received noisy speech signal ŝ … In Step 4, the speech decoder Ψs(·) and the noise decoder Ψv(·) aim to generate the speech signal and the additive noise signal, respectively. … The resulting outputs are the clean speech prediction ŝ ∈ Rm, and the noise prediction v̂ ∈ Rm" teaches that the neural network for learning the noise comprises an encoder and a decoder).
Migliori et al. and Lin et al. are analogous to the claimed invention because they are directed towards a neural network model for denoising data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate wherein the neural network comprises an encoder and a decoder as taught by Lin et al. to the disclosed invention of Migliori et al.
One of ordinary skill in the art would have been motivated to make this modification "to capture both speech and noise patterns" to perform "speech signal extraction … using a spectral subtraction loss term and a margin-based loss term to further improve the quality of the enhanced speech signals" (Lin et al. Section 5).
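The forked structure described in the quoted passage (one encoder Φ feeding a speech decoder Ψs and a noise decoder Ψv) can be sketched as a single forward pass. The layer sizes, weights, and activation below are illustrative assumptions only, not architecture details from Lin et al.

```python
import numpy as np

rng = np.random.default_rng(1)
m, h = 32, 8                              # illustrative input and latent sizes
W_enc = 0.1 * rng.normal(size=(h, m))     # encoder weights (role of Φ)
W_dec_s = 0.1 * rng.normal(size=(m, h))   # speech decoder weights (role of Ψs)
W_dec_v = 0.1 * rng.normal(size=(m, h))   # noise decoder weights (role of Ψv)

def forward(y):
    c = np.tanh(W_enc @ y)   # encoder: latent vector c from the noisy input
    s_hat = W_dec_s @ c      # clean speech prediction
    v_hat = W_dec_v @ c      # noise prediction
    return s_hat, v_hat
```

The point of the fork is that one shared latent vector c drives two decoders, so the network produces both a speech estimate and a noise estimate from the same encoding.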
Regarding Claim 16,
Migliori et al. in view of Lin et al. teaches the apparatus of claim 14.
In addition, Lin et al. further teaches wherein the apparatus comprises a subtractor configured to subtract the learned noise N from the noisy input data Y (Fig. 1; Section 2.1, first paragraph: "The noise information that is learned by the extra decoder can be integrated into the GAN-based framework via spectral subtraction. Spectral subtraction is as such used to recover the speech signal by subtracting an estimate of the average noise spectrum from the noisy signal spectrum" teaches that the learned noise output from the decoder (part of the neural network) can be used in spectral subtraction to subtract the noise from the noisy input signal. Fig. 1; Section 2.2, third paragraph: "Spectral subtraction is one of the traditional algorithms for enhancing a single speech channel. Since the noisy signal xt = x̃t + vt is the addition of the desired signal value x̃t and the noise value vt at time t, the standard spectral subtraction is defined in the frequency domain as: X̃(jω) = X(jω) − V(jω) [media_image3.png], where X(jω), X̃(jω) and V(jω) are Fourier transforms of xt, x̃t, vt, respectively" teaches regenerating the desired signal x̃t (original data X) by subtracting the noise vt (learned noise N) from the noisy signal xt (noisy input data Y)).
Migliori et al. and Lin et al. are analogous to the claimed invention because they are directed towards a neural network model for denoising data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate wherein the apparatus comprises a subtractor configured to subtract the learned noise N from the noisy input data Y as taught by Lin et al. to the disclosed invention of Migliori et al.
One of ordinary skill in the art would have been motivated to make this modification "to capture both speech and noise patterns" to perform "speech signal extraction … using a spectral subtraction loss term and a margin-based loss term to further improve the quality of the enhanced speech signals" (Lin et al. Section 5).
Regarding Claim 17,
Migliori et al. teaches a method for training a noise learning-based denoising autoencoder (nlDAE) (Fig. 7; Col. 7, line 56 - Col. 8, line 4: "FIG. 7 is a flow chart showing steps in a process for denoising input signals according to illustrative embodiments … Referring to FIG. 7, the process 700 begins at step 710 at which a series of random reference signals is received, along with the reference signals mixed with unknown noise in a transmission environment. These reference sample signals may include I/Q modulated signals 110A, and the noise may be produced by the noise source 120A. These input signals are converted into vector form. At step 720, features associated with the noise mixed with the reference signals are learned by the convolutional autoencoder 200" teaches a method for denoising input data using a convolutional autoencoder 200 to learn features of the noise (nlDAE)), the method comprising:
inputting noisy input data Y into an encoder of a neural network (Fig. 1; Fig. 7; Col. 7, line 63 - Col. 8, line 4: "Referring to FIG. 7, the process 700 begins at step 710 at which a series of random reference signals is received, along with the reference signals mixed with unknown noise in a transmission environment. These reference sample signals may include I/Q modulated signals 110A, and the noise may be produced by the noise source 120A. These input signals are converted into vector form. At step 720, features associated with the noise mixed with the reference signals are learned by the convolutional autoencoder 200" teaches that reference signals mixed with unknown noise (noisy input data Y) input into the convolutional autoencoder 200 (neural network). Fig. 3; Col. 4, lines 4-23: "FIG. 3 illustrates in detail an example configuration for performing denoising using a convolutional autoencoder according to illustrative embodiments. Referring to FIG. 3, the convolutional autoencoder 200 includes at least one layer including convolutional elements 210A and pooling elements 220A … The convolutional elements 210A include a convolving encoder 212 having a bank of n filters, the filters also referred to as a “kernel”. The input data 110A (for the training phase) or 110B (for the post-training phase) is converted to a vector form, and the convolving encoder 212 convolves the input vector v with the kernel" teaches that the input data 110B (noisy input data Y) is input into an encoder of the convolutional autoencoder 200 (neural network)).
Migliori et al. does not appear to explicitly teach outputting noise N from a decoder of the neural network.
However, Lin et al. teaches outputting noise N from a decoder of the neural network (Fig. 1; Algorithm 1; Section 2.1, third paragraph: "In Step 4, the speech decoder Ψs(·) and the noise decoder Ψv(·) aim to generate the speech signal and the additive noise signal, respectively. … The resulting outputs are the clean speech prediction ŝ ∈ Rm, and the noise prediction v̂ ∈ Rm" teaches that the noise decoder of the neural network outputs an additive noise signal (learned noise N)).
Migliori et al. and Lin et al. are analogous to the claimed invention because they are directed towards a neural network model for denoising data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate outputting noise N from a decoder of the neural network as taught by Lin et al. to the disclosed invention of Migliori et al.
One of ordinary skill in the art would have been motivated to make this modification "to capture both speech and noise patterns" to perform "speech signal extraction … using a spectral subtraction loss term and a margin-based loss term to further improve the quality of the enhanced speech signals" (Lin et al. Section 5).
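The claimed training arrangement (noisy input Y at the encoder, noise N at the decoder output, original data X regenerated by subtraction) can be illustrated with a toy linear stand-in for the trained network. The subspace data model and the closed-form least-squares fit below are assumptions for illustration only; they stand in for gradient-descent training of an actual autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: clean signals X lie in one low-dimensional subspace and the
# structured noise N in another, so an exact linear map from Y to N exists.
d, k, m = 16, 2, 500
basis_x = rng.normal(size=(d, k))
basis_n = rng.normal(size=(d, k))
X = rng.normal(size=(m, k)) @ basis_x.T
N = 0.3 * rng.normal(size=(m, k)) @ basis_n.T
Y = X + N                                  # noisy input data Y

# "Training": fit a map whose output is the noise N (closed-form
# least squares stands in for training the encoder/decoder pair).
W, *_ = np.linalg.lstsq(Y, N, rcond=None)

def denoise(y):
    n_hat = y @ W     # the model's output is the learned noise N
    return y - n_hat  # regenerate X by subtracting N from Y
```

Because the model is trained to reproduce the noise rather than the clean signal, the clean signal never has to be learned directly; it falls out of the final subtraction.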
Regarding Claim 18,
Migliori et al. teaches an apparatus (Fig. 7; Col. 7, line 56 - Col. 8, line 4: "FIG. 7 is a flow chart showing steps in a process for denoising input signals according to illustrative embodiments … Referring to FIG. 7, the process 700 begins at step 710 at which a series of random reference signals is received, along with the reference signals mixed with unknown noise in a transmission environment. These reference sample signals may include I/Q modulated signals 110A, and the noise may be produced by the noise source 120A. These input signals are converted into vector form. At step 720, features associated with the noise mixed with the reference signals are learned by the convolutional autoencoder 200" teaches a method for denoising input data using a convolutional autoencoder 200 to learn features of the noise (nlDAE). Fig. 8; Col. 8, lines 45-47: "FIG. 8 is a block diagram of a computing device with which the denoising system may be implemented, according to illustrative embodiments" teaches a computing device (apparatus) for implementing the denoising method) adapted to:
receive noisy input data Y at inputs to an encoder of a neural network (Fig. 1; Fig. 7; Col. 7, line 63 - Col. 8, line 4: "Referring to FIG. 7, the process 700 begins at step 710 at which a series of random reference signals is received, along with the reference signals mixed with unknown noise in a transmission environment. These reference sample signals may include I/Q modulated signals 110A, and the noise may be produced by the noise source 120A. These input signals are converted into vector form. At step 720, features associated with the noise mixed with the reference signals are learned by the convolutional autoencoder 200" teaches that reference signals mixed with unknown noise (noisy input data Y) input into the convolutional autoencoder 200 (neural network). Fig. 3; Col. 4, lines 4-23: "FIG. 3 illustrates in detail an example configuration for performing denoising using a convolutional autoencoder according to illustrative embodiments. Referring to FIG. 3, the convolutional autoencoder 200 includes at least one layer including convolutional elements 210A and pooling elements 220A … The convolutional elements 210A include a convolving encoder 212 having a bank of n filters, the filters also referred to as a “kernel”. The input data 110A (for the training phase) or 110B (for the post-training phase) is converted to a vector form, and the convolving encoder 212 convolves the input vector v with the kernel" teaches that the input data 110B (noisy input data Y) is input into an encoder of the convolutional autoencoder 200 (neural network)).
Migliori et al. does not appear to explicitly teach output noise N from a decoder of the neural network.
However, Lin et al. teaches output noise N from a decoder of the neural network (Fig. 1; Algorithm 1; Section 2.1, third paragraph: "In Step 4, the speech decoder Ψs(·) and the noise decoder Ψv(·) aim to generate the speech signal and the additive noise signal, respectively. … The resulting outputs are the clean speech prediction ŝ ∈ Rm, and the noise prediction v̂ ∈ Rm" teaches that the noise decoder of the neural network outputs an additive noise signal (learned noise N)).
Migliori et al. and Lin et al. are analogous to the claimed invention because they are directed towards a neural network model for denoising data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate output noise N from a decoder of the neural network as taught by Lin et al. to the disclosed invention of Migliori et al.
One of ordinary skill in the art would have been motivated to make this modification "to capture both speech and noise patterns" to perform "speech signal extraction … using a spectral subtraction loss term and a margin-based loss term to further improve the quality of the enhanced speech signals" (Lin et al. Section 5).
Regarding Claim 21,
Migliori et al. in view of Lin et al. teaches the apparatus of claim 14.
In addition, Migliori et al. further teaches the apparatus comprising: processing circuitry; and a memory containing instructions executable by said processing circuitry (Fig. 8; Col. 9, lines 5-31: "The computing device 800 includes a processor 810 that receives inputs and transmits outputs via I/O Data Ports 820 … The processor 810 communicates with the memory 830 and the hard drive via, e.g., an address/data bus … The memory 830 is representative of the overall hierarchy of memory devices containing the software and data used to implement the functionality of the device 800 … As shown in FIG. 8, the memory 830 may include several categories of software and data used in the device 800, including applications 840, a database 850, an operating system (OS) 860, etc. … The applications 840 can be stored in the memory 830 and/or in a firmware (not shown) as executable instructions, and can be executed by the processor 810. The applications 840 include various programs that implement the various features of the device 800. For example, the applications 840 may include applications to implement the functions of the convolutional autoencoder 200 (including training and post-training denoising)" teaches the computing device 800 (apparatus) comprising a processor 810 (processing circuitry) and a memory 830 comprising instructions for execution by the processor),
whereby said apparatus is operative to perform the using the neural network to learn the noise N and the regenerating the original data X (Fig. 8; Col. 9, lines 5-31: "The computing device 800 includes a processor 810 that receives inputs and transmits outputs via I/O Data Ports 820 … The processor 810 communicates with the memory 830 and the hard drive via, e.g., an address/data bus … The memory 830 is representative of the overall hierarchy of memory devices containing the software and data used to implement the functionality of the device 800 … As shown in FIG. 8, the memory 830 may include several categories of software and data used in the device 800, including applications 840, a database 850, an operating system (OS) 860, etc. … The applications 840 can be stored in the memory 830 and/or in a firmware (not shown) as executable instructions, and can be executed by the processor 810. The applications 840 include various programs that implement the various features of the device 800. For example, the applications 840 may include applications to implement the functions of the convolutional autoencoder 200 (including training and post-training denoising)" teaches the computing device 800 (apparatus) is operative to perform the functions using the convolutional autoencoder 200 (neural network) including training and denoising (e.g. to learn the noise and regenerate denoised original data)).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Migliori et al. (US 10,291,268 B1) in view of Lin et al. ("Speech Enhancement Using Forked Generative Adversarial Networks with Spectral Subtraction") and further in view of O'Shea (US 2018/0314985 A1).
Regarding Claim 11,
Migliori et al. in view of Lin et al. teaches the method of claim 1.
Migliori et al. in view of Lin et al. does not appear to explicitly teach wherein the noisy input data Y are subcarrier signals of an orthogonal frequency-division multiplexing (OFDM) scheme, the regenerated original data X are the original subcarrier signals, and the method further comprises demodulating the original subcarrier signals.
However, O'Shea teaches wherein the noisy input data Y are subcarrier signals of an orthogonal frequency-division multiplexing (OFDM) scheme (Fig. 3; Fig. 5; [0082]: "FIG. 5 illustrates an example of an RF signal and a compact representation of the RF signal that may be learned by machine-learning networks. This example illustrates a signal compression effect on a signal, including an input signal 502, compressed representation 504, and reconstructed signal 506 … the compact representation learning techniques disclosed herein may be applied to any suitable type of RF signal" teaches that the input signals (input data) to the machine learning network for learning a reconstructed (regenerated) signal can be any suitable type of RF signal (subcarrier signals). [0051]: "the network structure 200 may be implemented as a denoising autoencoder. By introducing noise into the input of the training or into intermediate layer representations, but evaluating its reconstruction of the unmodified input, denoising autoencoders can perform an additional input noise regularization effect which models additive Gaussian thermal noise that is prevalent in communications systems. In this way, the network can learn the structural components of a signal, removing certain stochastic effects such as noise from the reconstruction of the signal. This can be useful in removing or lowering the noise level present within a radio signal which may aide in the processing of the signal for other purposes" teaches that the input RF signals (subcarrier signals) for the machine learning network can have added noise (noisy input data Y). 
[0074]: "the training may begin with a fixed set of basis functions, such as commonly used RF communication basis functions including Quadrature Phase-Shift Keying (QPSK) or Gaussian Binary Frequency Shift Keying (GFSK), orthogonal frequency division multiple access (OFDM), or other fixed set of basis functions" teaches that the input RF signals (subcarrier signals) can be of an orthogonal frequency-division multiplexing (OFDM) scheme),
the regenerated original data X are the original subcarrier signals (Fig. 3; Fig. 5; [0083]: "As shown in this example, the input signal 502 is encoded by an encoder machine-learning network (e.g., encoder network 302 in FIG. 3, above) to produce an intermediate compressed signal representation 504. This compressed signal representation 504 is then processed by a decoder machine-learning network (e.g., decoder network 304 in FIG. 3, above) to produce an output signal 506, which is a reconstruction of the original signal 502" teaches producing an output signal that is a reconstruction (regenerated original data X) of the original signal (original subcarrier signals)), and
the method further comprises demodulating the original subcarrier signals (Fig. 3; Fig. 8; [0105]: "The reconstructed (decompressed) RF signal that is generated by decoder 807 may then be processed by an RF signal-processor 809, such as a radio signal demodulator" teaches demodulating the reconstructed RF signals (original subcarrier signals)).
Migliori et al., Lin et al., and O’Shea are analogous to the claimed invention because they are directed towards a neural network model for denoising data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate wherein the noisy input data Y are subcarrier signals of an orthogonal frequency-division multiplexing (OFDM) scheme, the regenerated original data X are the original subcarrier signals, and the method further comprises demodulating the original subcarrier signals as taught by O’Shea to the disclosed invention of Migliori et al. in view of Lin et al.
One of ordinary skill in the art would have been motivated to make this modification to "provide a novel capability for representing and compressing radio signals" such that "various types of radio signals may be stored more compactly and reconstructed more efficiently and effectively, providing a smaller compressed signal size and lower computational complexity for compression compared to existing techniques" (O'Shea [0024]).
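For illustration, recovering subcarrier symbols from a denoised time-domain OFDM symbol and demodulating them can be sketched as follows. The 64-subcarrier layout, the QPSK bit mapping, and the scaling convention are illustrative assumptions, not details taken from O'Shea.

```python
import numpy as np

def ofdm_demodulate(x, n_sub=64):
    """Recover per-subcarrier QPSK symbols from one (denoised) OFDM
    symbol and slice them back to bits."""
    S = np.fft.fft(x, n_sub) / np.sqrt(n_sub)  # back to the subcarrier domain
    bits_i = (S.real < 0).astype(int)          # in-phase bit per subcarrier
    bits_q = (S.imag < 0).astype(int)          # quadrature bit per subcarrier
    return np.stack([bits_i, bits_q], axis=1).reshape(-1)
```

The denoising step matters precisely because the hard slicing above is applied per subcarrier, so residual noise on any subcarrier translates directly into bit errors.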
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Migliori et al. (US 10,291,268 B1) in view of Lin et al. ("Speech Enhancement Using Forked Generative Adversarial Networks with Spectral Subtraction") and further in view of Dokmanic et al. ("Euclidean Distance Matrices: Essential theory, algorithms, and applications").
Regarding Claim 12,
Migliori et al. in view of Lin et al. teaches the method of claim 1.
Migliori et al. in view of Lin et al. does not appear to explicitly teach wherein the noisy input data Y are estimated distances between a target node and reference nodes, and the method further comprises using the original data X to estimate the position of the target node.
However, Dokmanic et al. teaches wherein the noisy input data Y are estimated distances between a target node and reference nodes (Page 1, first-second columns: "Euclidean distance matrices (EDMs) are matrices of the squared distances between points. The definition is deceivingly simple; thanks to their many useful properties, they have found applications in ... machine learning ... We often work with distances because they are convenient to measure or estimate. In wireless sensor networks, for example, the sensor nodes measure the received signal strengths of the packets sent by other nodes or the time of arrival (TOA) of pulses emitted by their neighbors [1]. Both of these proxies allow for distance estimation between pairs of nodes; thus, we can attempt to reconstruct the network topology. ... a number of tools related to EDMs, including multidimensional scaling (MDS)—the problem of finding the best point set representation of a given set of distances. More abstractly, we can study EDMs for objects such as images, which live in high dimensional vector spaces" teaches the use of EDMs for machine learning applications by using a multidimensional scaling (MDS) algorithm input distance values. Algorithm 1; Page 5, second column: "The classical MDS algorithm with the geometric centering matrix is spelled out in Algorithm 1. … Algorithm 1 can handle noisy distances too as it discards all but the d largest eigenvalues" teaches that the MDS algorithm for estimating distances between nodes (e.g. between target node and reference nodes) can accept noisy distance estimates as inputs (noisy input data Y)), and
the method further comprises using the original data X to estimate the position of the target node (Algorithm 1; Page 5, first column: "It is straightforward to verify that the reconstructed point set X̂ generates the original EDM, D = edm(X); as we have learned, X̂ and X are related by a rigid transformation. The described procedure is called the classical MDS, with a particular choice of the coordinate system: x1 is fixed at the origin" teaches that the reconstructed original Euclidean distance matrix (EDM) (original data X) is generated using the MDS with fixed positions on a coordinate system. Page 1, second column: "We often work with distances because they are convenient to measure or estimate. In wireless sensor networks, for example, the sensor nodes measure the received signal strengths of the packets sent by other nodes or the time of arrival (TOA) of pulses emitted by their neighbors [1]. Both of these proxies allow for distance estimation between pairs of nodes; thus, we can attempt to reconstruct the network topology. ... a number of tools related to EDMs, including multidimensional scaling (MDS)—the problem of finding the best point set representation of a given set of distances. More abstractly, we can study EDMs for objects such as images, which live in high dimensional vector spaces" teaches the EDM generated using a multidimensional scaling (MDS) algorithm is used to estimate distances between nodes (e.g. estimate position of target node based on reconstructed distances of original data)).
Migliori et al. and Lin et al. are analogous to the claimed invention because they are directed towards a neural network model for denoising data.
Dokmanic et al. is analogous to the claimed invention because it is directed towards denoising inputs for machine learning that are distances between nodes.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate wherein the noisy input data Y are estimated distances between a target node and reference nodes, and the method further comprises using the original data X to estimate the position of the target node as taught by Dokmanic et al. to the disclosed invention of Migliori et al. in view of Lin et al.
One of ordinary skill in the art would have been motivated to make this modification to enable "the various EDM (Euclidean distance matrices) properties [to] be used to design algorithms for completing and denoising distance data" (Dokmanic et al. Page 1, first column).
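The classical MDS procedure quoted above (Algorithm 1, using the geometric centering matrix) can be sketched directly. This is a generic sketch of the standard procedure, not code from Dokmanic et al.

```python
import numpy as np

def classical_mds(D, d=2):
    """Recover a point set from a matrix of squared pairwise distances
    (an EDM), up to a rigid transformation (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n  # geometric centering matrix
    G = -0.5 * J @ D @ J                 # Gram matrix of the centered points
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:d]        # keep the d largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

Because the recovered point set is unique only up to a rigid transformation, the target node's position is then fixed by aligning the reference nodes with their known coordinates.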
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Migliori et al. (US 10,291,268 B1) in view of Lin et al. ("Speech Enhancement Using Forked Generative Adversarial Networks with Spectral Subtraction") and further in view of Park et al. (US 2020/0034948 A1).
Regarding Claim 13,
Migliori et al. in view of Lin et al. teaches the method of claim 1.
Migliori et al. in view of Lin et al. does not appear to explicitly teach wherein the noisy input data Y are a corrupted image, the noise N is corruptions in the image, and the original data X is the original image.
However, Park et al. teaches wherein the noisy input data Y are a corrupted image, the noise N is corruptions in the image, and the original data X is the original image (Fig. 9; [0096]-[0099]: "the cascaded DL algorithm splits the MRI SR reconstruction network process into three stages: 1) construction of an image denoising autoencoder (DAE) model to subtract noise ηk from a noisy LR image input … a denoising autoencoder (DAE), illustrated in FIG. 9, is included in the MRI SR network that is configured to learn a mapping from noisy LR MR images to corresponding denoised MR images through pairs of training data … Specifically, if a noise-free image is denoted as x and a corrupted image of x as x̃, the DAE is trained to minimize the reconstruction error … since the noise-free image x is usually unavailable, an appropriate denoising algorithm is selected to denoise the corrupted image x̃ and treat the denoised image as x … the basic framework of the DAE comprises an encoder that maps a noisy input image to some hidden representation and a decoder that maps this hidden representation back to the reconstructed version of the de-noised input image" teaches a denoising autoencoder that has a noisy corrupted image (the noisy input data Y) as input, with the noise being corruptions in the image (noise N), and the reconstructed image being the original noise free image (original data X)).
Migliori et al., Lin et al., and Park et al. are analogous to the claimed invention because they are directed towards a neural network model for denoising data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate wherein the noisy input data Y are a corrupted image, the noise N is corruptions in the image, and the original data X is the original image as taught by Park et al. to the disclosed invention of Migliori et al. in view of Lin et al.
One of ordinary skill in the art would have been motivated to make this modification to "enable the production of high resolution MR images based on low resolution MR images" (Park et al. [0157]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN J HALES whose telephone number is (571)272-0878. The examiner can normally be reached M-F 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRIAN J HALES/Examiner, Art Unit 2125
/KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125