DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Specification
The disclosure is objected to because of the following informalities: para [0050], line 2, refers to “Fig. 11,” which should read “Fig. 1.”
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 8-9, and 12-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Oshikiri (US 2007/0253481 A1).
Regarding claim 1, Oshikiri teaches:
An audio signal encoding method, comprising:
receiving a current frame signal and a reconstructed previous frame signal (Fig. 15 element 205, para [0048], [0119], where the previous frame and current frame inputs are used);
generating a predicted current frame signal, based on the current frame signal and the reconstructed previous frame signal (Fig. 15 element 205, para [0048], [0055], [0119], where a prediction of the current frame is generated from the previous frame and current frame inputs using predictive coefficients); and
outputting a reconstructed residual signal, based on the current frame signal and the predicted current frame signal (Fig. 3, para [0056], where an error signal is generated by subtracting the two signals).
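For clarity of the record, the encoding steps mapped above (predict the current frame from the reconstructed previous frame, then output the difference as a residual) can be sketched as follows. This is an illustrative sketch only: the scalar predictor `a` is a hypothetical placeholder and does not represent Oshikiri's actual predictive-coefficient scheme.

```python
import numpy as np

def encode_frame(current, prev_reconstructed, a=0.5):
    """Sketch of the claimed encoding flow: generate a predicted
    current frame from the reconstructed previous frame, then form
    the residual (error) signal by subtraction (cf. Oshikiri Fig. 3).
    The single predictor coefficient `a` is hypothetical."""
    predicted = a * prev_reconstructed   # predicted current frame signal
    residual = current - predicted       # error signal = current - predicted
    return predicted, residual

cur = np.array([1.0, 2.0, 3.0])
prev = np.array([1.0, 1.0, 1.0])
pred, res = encode_frame(cur, prev, a=0.5)
```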
Regarding claim 2, Oshikiri teaches:
The audio signal encoding method of claim 1, wherein
the current frame signal comprises a time domain signal for a current frame (para [0041], where an input from a microphone is a time domain signal), and
the reconstructed previous frame signal comprises a reconstructed frequency domain signal for a previous frame (Fig. 15, para [0118]-[0119], where the previous frame signal is a spectrum, which is a frequency domain signal).
Regarding claim 8, Oshikiri teaches:
An audio signal decoding method comprising:
receiving difference information between a current frame signal and a reconstructed previous frame signal (Fig. 6, para [0067], where the predictive coefficients are received);
generating a predicted current frame signal from the reconstructed previous frame signal using the difference information (Fig. 6 element 606, para [0070], where the predictive coefficients are used to predict a current frame); and
obtaining a reconstructed current frame signal from the predicted current frame signal based on a residual signal between the current frame signal and the predicted current frame signal (Fig. 6 element 607, para [0070], where the decoded spectrum, determined using the predictive coefficients, is converted to the time domain).
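The decoding steps mapped above are the inverse of the encoding flow: regenerate the predicted current frame from the reconstructed previous frame using the received difference information, then add the residual. A minimal sketch, again using a hypothetical scalar predictor `a` rather than the reference's actual predictive coefficients:

```python
import numpy as np

def decode_frame(prev_reconstructed, residual, a=0.5):
    """Sketch of the claimed decoding flow: predict the current frame
    from the reconstructed previous frame, then add the residual to
    obtain the reconstructed current frame. `a` is hypothetical."""
    predicted = a * prev_reconstructed   # predicted current frame signal
    return predicted + residual          # reconstructed current frame signal

prev = np.array([1.0, 1.0, 1.0])
res = np.array([0.5, 1.5, 2.5])
rec = decode_frame(prev, res, a=0.5)
```

With the same predictor used at the encoder, the decoder recovers the original current frame exactly (absent quantization of the residual).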
Regarding claim 9, Oshikiri teaches:
The audio signal decoding method of claim 8, wherein the difference information comprises information on a difference between a frequency domain signal for a current frame and a reconstructed frequency domain signal for a previous frame (Fig. 3, para [0056], where an error signal is generated by subtracting the two signals corresponding to the frequency domain signals).
Regarding claim 12, Oshikiri teaches:
The audio signal decoding method of claim 9, wherein the generating of the predicted current frame signal comprises generating a predicted frequency domain signal for a current frame from the reconstructed frequency domain signal using the information (Fig. 6 element 606, para [0070], where the predictive coefficients are used to predict a spectrum of the current frame).
Regarding claim 13, Oshikiri teaches:
The audio signal decoding method of claim 12, wherein the obtaining of the reconstructed current frame signal comprises obtaining a reconstructed frequency domain signal for a current frame by synthesizing the predicted frequency domain signal with a reconstructed signal of the residual signal (Fig. 6 element 607, para [0070], where the decoded spectrum, determined using the predictive coefficients, is obtained and converted to the time domain).
Regarding claim 14, Oshikiri teaches:
An apparatus for encoding an audio signal, the apparatus comprising:
a memory configured to store instructions (para [0129], where memory is used); and
a processor electrically connected to the memory and configured to execute the instructions (para [0132], where processor is used),
wherein, when the instructions are executed by the processor, the processor is configured to control a plurality of operations (para [0129], where the instructions in memory are executed using an information processing means), and
wherein the plurality of operations comprises:
receiving a current frame signal and a reconstructed previous frame signal (Fig. 15 element 205, para [0048], [0119], where the previous frame and current frame inputs are used);
generating a predicted current frame signal, based on the current frame signal and the reconstructed previous frame signal (Fig. 15 element 205, para [0048], [0055], [0119], where a prediction of the current frame is generated from the previous frame and current frame inputs using predictive coefficients); and
outputting a reconstructed residual signal, based on the current frame signal and the predicted current frame signal (Fig. 3, para [0056], where an error signal is generated by subtracting the two signals).
Regarding claim 15, Oshikiri teaches:
The apparatus of claim 14, wherein:
the current frame signal comprises a time domain signal for a current frame (para [0041], where an input from a microphone is a time domain signal), and
the reconstructed previous frame signal comprises a reconstructed frequency domain signal for a previous frame (Fig. 15, para [0118]-[0119], where the previous frame signal is a spectrum, which is a frequency domain signal).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Oshikiri, in view of Sung et al. (US 2013/0030795 A1), hereinafter referred to as Sung.
Regarding claim 6, Oshikiri teaches:
The audio signal encoding method of claim 1, wherein the outputting of the reconstructed residual signal comprises:
calculating a residual signal by using the current frame signal and the predicted current frame signal (Fig. 3, para [0056], where an error signal is generated by subtracting the two signals);
quantizing the residual signal (para [0077], Claim 5, where vector quantization of the predictive coefficients is performed); and
Oshikiri does not teach:
outputting the reconstructed residual signal by dequantizing a quantized residual signal.
Sung teaches:
outputting the reconstructed residual signal by dequantizing a quantized residual signal (Fig. 4 elements 412, 415, para [0074], where a residual signal is passed through MDCT, quantized, then dequantized by the encoder).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Oshikiri by using the dequantization of Sung (Sung para [0074]) on the quantized signal of Oshikiri (Oshikiri para [0077]), in order to effectively compensate for a quantization error (Sung para [0006]-[0007]).
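The quantize-then-dequantize step at issue (an encoder reconstructing its own quantized residual so it can track the quantization error) can be illustrated with a simple uniform quantizer. This is a generic sketch, not the vector quantization of Oshikiri or the MDCT-domain scheme of Sung; the step size is arbitrary.

```python
import numpy as np

STEP = 0.25  # hypothetical uniform quantizer step size

def quantize(x, step=STEP):
    """Map residual samples to integer indices (lossy)."""
    return np.round(x / step).astype(int)

def dequantize(q, step=STEP):
    """Reconstruct the residual from the indices, as the encoder
    would, so the reconstructed residual (and hence the quantization
    error x - dequantize(quantize(x))) is available at the encoder."""
    return q * step

res = np.array([0.30, -0.55, 1.00])
q = quantize(res)          # quantized residual signal
res_hat = dequantize(q)    # reconstructed residual signal
```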
Regarding claim 19, Oshikiri teaches:
The apparatus of claim 14, wherein the outputting of the reconstructed residual signal comprises:
calculating a residual signal by using the current frame signal and the predicted current frame signal (Fig. 3, para [0056], where an error signal is generated by subtracting the two signals);
quantizing the residual signal (para [0077], claim 5, where vector quantization of the predictive coefficients is performed); and
Oshikiri does not teach:
outputting the reconstructed residual signal by dequantizing a quantized residual signal.
Sung teaches:
outputting the reconstructed residual signal by dequantizing a quantized residual signal (Fig. 4 elements 412, 415, para [0074], where a residual signal is passed through MDCT, quantized, then dequantized by the encoder).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Oshikiri by using the dequantization of Sung (Sung para [0074]) on the quantized signal of Oshikiri (Oshikiri para [0077]), in order to effectively compensate for a quantization error (Sung para [0006]-[0007]).
Allowable Subject Matter
Claims 3-5, 7, 10-11, 16-18, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: the closest prior art, Oshikiri and Sung, does not teach the limitations of the claims. Specifically, none of the cited prior art teaches that the generation of the predicted current frame signal includes calculating a phase and gain difference between the frequency domain signals corresponding to the current frame and the reconstructed previous frame signals, in combination with the other limitations of the claims. Hence, none of the cited prior art, either alone or in combination, teaches the combination of limitations found in these claims.
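For clarity as to the limitation indicated allowable, computing a per-bin gain and phase difference between a current-frame spectrum and a reconstructed previous-frame spectrum can be illustrated as follows. This sketch is not drawn from any cited reference; the spectra are hypothetical two-bin examples.

```python
import numpy as np

# Hypothetical complex spectra (one value per frequency bin)
cur_spec = np.array([1 + 1j, 2 + 0j])    # current frame spectrum
prev_spec = np.array([1 + 0j, 1 + 1j])   # reconstructed previous frame spectrum

# Per-bin gain difference: ratio of magnitudes
gain = np.abs(cur_spec) / np.abs(prev_spec)

# Per-bin phase difference: difference of angles (radians)
phase = np.angle(cur_spec) - np.angle(prev_spec)
```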
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 2008/0189118 A1 Fig. 3 element 360, para [0056] teaches calculating a phase difference between successive frames of an audio signal.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRYAN S BLANKENAGEL whose telephone number is (571)270-0685. The examiner can normally be reached 8:00am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil can be reached at 571-272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRYAN S BLANKENAGEL/Primary Examiner, Art Unit 2658