DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-17 of U.S. Patent No. 11,948,558 (Wang). Although the claims at issue are not identical, they are not patentably distinct from each other because each limitation of the examined claims is disclosed by the reference claims, as mapped below.
Regarding claims 1, 8 and 14, Wang discloses a speech processing method, apparatus, and medium (hereinafter referenced as a method) performed by an electronic device, the method comprising: determining a first speech feature and a first text bottleneck feature based on to-be-processed speech information; determining a first combined feature vector based on the first speech feature and the first text bottleneck feature; inputting the first combined feature vector to a trained unidirectional long short-term memory (LSTM) model; performing speech processing on the first combined feature vector to obtain speech information after noise reduction; and transmitting the obtained speech information after noise reduction to another electronic device for playing (claims 1, 7 and 12).
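For technical orientation only, the following is a minimal sketch of the pipeline recited above: concatenating the speech feature with the text bottleneck feature and passing the combined vector through a unidirectional LSTM for noise reduction. All class, function, and dimension names are hypothetical placeholders, not code from the Wang reference or the examined application.

```python
# Illustrative sketch only; all names are hypothetical placeholders and do
# not reproduce code from the Wang reference or the examined application.
import torch
import torch.nn as nn

class DenoiseLSTM(nn.Module):
    """Stand-in for the claimed trained unidirectional LSTM model."""
    def __init__(self, input_dim, hidden_dim=256, output_dim=257):
        super().__init__()
        # Unidirectional LSTM (no bidirectional flag), batch-first layout.
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h)

def denoise(speech_feature, text_bottleneck_feature, model):
    # Determine the first combined feature vector from the first speech
    # feature and the first text bottleneck feature (concatenation assumed).
    combined = torch.cat([speech_feature, text_bottleneck_feature], dim=-1)
    # Perform speech processing on the combined vector with the trained model.
    with torch.no_grad():
        enhanced = model(combined)
    # The enhanced output would then be inverse-transformed and transmitted
    # to another electronic device for playing.
    return enhanced
```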
Regarding claims 2, 9 and 15, Wang discloses a method wherein the determining the first speech feature based on the to-be-processed speech information comprises: performing framing and windowing on the to-be-processed speech information; and extracting the first speech feature from the to-be-processed speech information obtained after the framing and windowing, wherein the first speech feature includes at least one of a logarithmic power spectrum feature and a Mel-frequency cepstrum coefficient (MFCC) feature (claims 2, 8 and 13).
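By way of illustration, a minimal numpy sketch of framing, windowing, and extraction of a logarithmic power spectrum feature (one of the two recited feature types); the frame length, hop size, and window choice are assumptions of this sketch.

```python
# Minimal numpy sketch; frame length, hop size, and window are assumptions.
import numpy as np

def log_power_spectrum(signal, frame_len=512, hop=256):
    # Framing: split the to-be-processed signal into overlapping frames.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    # Windowing: apply a Hamming window to each frame.
    frames = frames * np.hamming(frame_len)
    # Logarithmic power spectrum of each windowed frame.
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log(power + 1e-10)  # small floor avoids log(0)
```

The MFCC feature, the other recited alternative, could be computed from the same framed and windowed signal, e.g. with librosa.feature.mfcc, if that library is available.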
Regarding claims 3, 10 and 16, Wang discloses a method wherein the determining the first text bottleneck feature based on the to-be-processed speech information comprises at least one of: extracting an N-dimensional filter-bank feature and an M-dimensional pitch feature from the to-be-processed speech information, wherein N and M are positive integers; splicing the N-dimensional filter-bank feature and the M-dimensional pitch feature to obtain a second speech feature; inputting the second speech feature into a trained automatic speech recognition (ASR) network; and extracting the first text bottleneck feature from a linear layer of a bottleneck of the trained ASR network (claims 1, 7 and 12).
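For illustration, a sketch of splicing the N-dimensional filter-bank feature with the M-dimensional pitch feature and reading the bottleneck feature out of the linear bottleneck layer; the dimensions N, M, and BNECK and the TinyASR stand-in are hypothetical, not values from the claims.

```python
# Illustrative only; N, M, BNECK, and the TinyASR stand-in are assumptions.
import torch
import torch.nn as nn

N, M, BNECK = 40, 3, 64  # example filter-bank, pitch, and bottleneck sizes

class TinyASR(nn.Module):
    """Toy stand-in for the trained ASR network with a linear bottleneck."""
    def __init__(self, in_dim=N + M, n_states=500):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.bottleneck = nn.Linear(128, BNECK)  # linear layer of the bottleneck
        self.out = nn.Linear(BNECK, n_states)

    def forward(self, x, return_bottleneck=False):
        b = self.bottleneck(self.hidden(x))
        if return_bottleneck:
            return b
        return self.out(b)

def first_text_bottleneck_feature(fbank, pitch, asr):
    # Splice the N-dim filter-bank and M-dim pitch features into the
    # second speech feature.
    second_speech_feature = torch.cat([fbank, pitch], dim=-1)  # (T, N + M)
    # Extract the text bottleneck feature from the bottleneck's linear layer.
    with torch.no_grad():
        return asr(second_speech_feature, return_bottleneck=True)  # (T, BNECK)
```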
Regarding claims 4, 11 and 17, Wang discloses a method wherein the ASR network is trained by: (a) aligning a text annotation included in a corpus with an audio file corresponding to the text annotation by using a Gaussian mixture model (GMM) to obtain a first text feature, the corpus being used for training the ASR network; (b) extracting an N-dimensional filter-bank feature and an M-dimensional pitch feature from the audio file; (c) splicing the N-dimensional filter-bank feature and the M-dimensional pitch feature to obtain a third speech feature; (d) inputting the third speech feature to the ASR network and training the ASR network to obtain a second text feature outputted by an output layer of the ASR network; (e) determining a value of cross entropy (CE) of the ASR network based on a value of the first text feature and a value of the second text feature; and repeatedly performing steps (a)-(e) to obtain a trained ASR network when a difference between a first value of CE of the ASR network obtained through training and a second value of CE of the ASR network obtained through training at a previous time is in a first threshold range (claims 3, 9 and 14).
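The recited stopping condition can be pictured as below: training repeats until the difference between successive cross-entropy values falls within a threshold range. The loop structure, optimizer choice, and the assumption that the network emits logits are all illustrative placeholders, not elements of the claims.

```python
# Illustrative training loop; the GMM-alignment targets, optimizer, and the
# assumption that the network emits logits are placeholders, not claim elements.
import torch
import torch.nn as nn

def train_asr(asr, loader, epsilon=1e-3, lr=1e-3, max_iters=100):
    # loader yields (third_speech_feature, first_text_feature) pairs, where
    # first_text_feature holds frame-level targets from GMM forced alignment.
    opt = torch.optim.SGD(asr.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    prev_ce = None
    for _ in range(max_iters):
        total, n = 0.0, 0
        for third_speech_feature, first_text_feature in loader:
            logits = asr(third_speech_feature)     # second text feature (logits)
            loss = ce(logits, first_text_feature)  # value of CE for this batch
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += loss.item()
            n += 1
        cur_ce = total / n
        # Stop when the difference between the CE value of this pass and the
        # CE value of the previous pass is within the first threshold range.
        if prev_ce is not None and abs(prev_ce - cur_ce) < epsilon:
            break
        prev_ce = cur_ce
    return asr
```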
Regarding claims 5 and 18, Wang discloses a method wherein the ASR network comprises a deep neural network (DNN) with four hidden layers as an input layer, a linear layer of a bottleneck, and a probability distribution softmax layer as an output layer (claims 4 and 15).
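Read on its face, the recited topology could be rendered as follows; the layer widths and the number of output states are assumptions, since the claim recites only the structure (four hidden layers, a linear bottleneck, and a softmax output).

```python
# Layer widths and output size are assumptions; the claim recites only the
# topology: four hidden layers, a linear bottleneck, and a softmax output.
import torch.nn as nn

class BottleneckASR(nn.Module):
    def __init__(self, in_dim=43, hidden=1024, bneck=64, n_states=500):
        super().__init__()
        self.hidden_layers = nn.Sequential(         # DNN with four hidden layers
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.bottleneck = nn.Linear(hidden, bneck)  # linear layer of a bottleneck
        self.classifier = nn.Linear(bneck, n_states)
        self.softmax = nn.Softmax(dim=-1)           # probability-distribution output

    def forward(self, x):
        return self.softmax(self.classifier(self.bottleneck(self.hidden_layers(x))))
```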
Regarding claims 6, 12 and 19, Wang discloses a method wherein the performing speech processing on the first combined feature vector to obtain speech information after noise reduction comprises: performing speech enhancement on the first combined feature vector by using the trained unidirectional LSTM model; performing inverse feature transformation on a processing result; and converting speech information from a frequency domain to a time domain to obtain the speech information after noise reduction (claims 5, 10 and 16).
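For illustration, a sketch of one common reading of the inverse feature transformation: undoing the log-power mapping and converting the frequency-domain frames back to a time-domain waveform with an inverse STFT. Reusing the noisy-signal phase is an assumption of this sketch, not an element of the claims.

```python
# Sketch only; reusing the noisy-signal phase is an assumption, and arrays
# follow the scipy (freq_bins, frames) convention.
import numpy as np
from scipy.signal import istft

def to_time_domain(enhanced_log_power, noisy_phase, fs=16000, frame_len=512):
    # Inverse feature transformation: undo the logarithmic power mapping.
    magnitude = np.sqrt(np.exp(enhanced_log_power))
    # Recombine magnitude with phase, then convert frequency -> time domain.
    spectrum = magnitude * np.exp(1j * noisy_phase)
    _, signal = istft(spectrum, fs=fs, nperseg=frame_len)
    return signal  # the speech information after noise reduction
```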
Regarding claims 7, 13 and 20, Wang discloses a method wherein the unidirectional LSTM model is trained by: acquiring speech with noise and speech without noise included in a noise reduction training corpus; extracting a fourth speech feature and a second text bottleneck feature from the speech with noise; extracting a fifth speech feature from the speech without noise; combining the fourth speech feature and the second text bottleneck feature to obtain a second combined feature vector; inputting the second combined feature vector to the unidirectional LSTM model; and training the unidirectional LSTM model to obtain a trained unidirectional LSTM model when a minimum mean square error between a reference value outputted by the unidirectional LSTM model and a value of the fifth speech feature is less than or equal to a second threshold (claims 6, 11 and 17).
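The recited training criterion can be sketched as follows: training concludes when the mean square error between the LSTM output and the clean-speech (fifth) feature is at or below a threshold. The optimizer, learning rate, and epoch-average reading of the error criterion are illustrative assumptions.

```python
# Illustrative loop; optimizer, learning rate, and the epoch-average reading
# of the mean-square-error criterion are assumptions, not claim elements.
import torch
import torch.nn as nn

def train_denoiser(lstm_model, pairs, threshold=1e-2, lr=1e-3, max_epochs=200):
    # pairs yields (second_combined_feature, fifth_speech_feature) tensors:
    # the noisy-speech feature spliced with the text bottleneck feature,
    # versus the clean feature extracted from the speech without noise.
    opt = torch.optim.Adam(lstm_model.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(max_epochs):
        epoch_loss, n = 0.0, 0
        for combined, clean in pairs:
            pred = lstm_model(combined)   # reference value output by the LSTM
            loss = mse(pred, clean)
            opt.zero_grad()
            loss.backward()
            opt.step()
            epoch_loss += loss.item()
            n += 1
        # Training concludes once the error is <= the second threshold.
        if epoch_loss / n <= threshold:
            break
    return lstm_model
```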
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure; it is detailed on the attached PTO-892 (Notice of References Cited).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAKIEDA R JACKSON whose telephone number is (571)272-7619. The examiner can normally be reached Monday through Friday, 6:30 am to 2:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAKIEDA R JACKSON/Primary Examiner, Art Unit 2657