Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
2. Claims 7 and 14 are objected to because of the following informalities: The claims recite “wherein the digitized speech audio comprises one or more of a directly digitized audio waveform, a spectrogram and a spectrogram”. The limitation is interpreted as “wherein the digitized speech audio comprises one or more of a directly digitized audio waveform, and a spectrogram”. Appropriate correction is required.
Claim Rejections - 35 USC § 101
3. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1: Is the claimed invention to a process, machine, manufacture or composition of matter?
The claimed invention is directed to a method (process), system (machine), and computer readable medium (manufacture) for receiving digitized speech audio processed into mel filter bank bin values; producing, via an acoustic model, a phoneme sequence based on the digitized speech audio; tokenizing the phoneme sequence into a token sequence of tokens from a pronunciation dictionary, wherein a phoneme subsequence does not match a token in the pronunciation dictionary; and adding a new token to the pronunciation dictionary, the new token having the phoneme subsequence as its pronunciation.
Step 2A, prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Under the current 35 U.S.C. 101 guidance and the broadest reasonable interpretation of the claims, the claimed steps fall within the “Mental Processes” grouping of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III.
The steps of receiving digitized speech audio; producing a phoneme sequence based on the digitized speech audio; tokenizing the phoneme sequence into a token sequence of tokens from a pronunciation dictionary; and adding a new token to the pronunciation dictionary may be practically performed in the human mind using observation, evaluation, judgment, and opinion. For example, a person can receive digital data representing an audio signal, generate phoneme sequences from the digital data, split the sequences of phonemes into smaller units called tokens using a pronunciation dictionary, and add a new token to the pronunciation dictionary using pen and paper.
Therefore, the claimed steps fall within the mental process grouping of abstract ideas.
Step 2A, prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element “a computer”. The computer is recited at a high level of generality, and it amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f). Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application, and the claims are directed to the judicial exception.
Step 2B: Does the claim recite additional elements that amount to significantly more than the abstract idea?
As to whether the claims as a whole amount to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim (Step 2B): as explained above in Step 2A, Prong 2, the “computer” is recited at a high level of generality, and even when considered in combination, these additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, and therefore do not provide an inventive concept. Accordingly, the claims are ineligible.
Dependent claims 2-7, 9-14, and 16-20 further refer to and describe the pronunciation dictionary, the phoneme subsequence, and the speech signal, and describe the processes of incrementing an occurrence count of the phoneme subsequence, identifying a slot for an entity where the phoneme subsequence fits in the semantic grammar, and updating the token sequence probabilities, each of which encompasses a mental process that may be practically performed in the human mind, as explained above in Step 2A, Prong 1. Accordingly, claims 1-20 are directed to an abstract idea and are not patent eligible.
Double Patenting
4. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-6 of U.S. Patent 12080275. Although conflicting claims 1-7 and 15-20 are not identical, they are not patentably distinct from each other because claims 1-7 and 15-20 of the instant application merely broaden the scope of the claims of the Patent by eliminating elements and their functions from the claims. It has been held that the omission of an element and its function is an obvious expedient if the remaining elements perform the same functions as before. In re Karlson, 136 USPQ 184 (CCPA 1963). Also note Ex parte Rainu, 168 USPQ 375 (Bd. App. 1969): omission of a reference element whose function is not needed would be obvious to one skilled in the art.
Claims 8-14, besides broadening the scope of the claims of the Patent, recite the limitation “searching the pronunciation dictionary for a token with a pronunciation that is within a specific edit distance of the phoneme subsequence, wherein the edit distance is inversely related to the phonetic similarity of phonemes”, which is obvious over the prior art of Li, as evidenced by the rejection below.
Current Application:

1. A computer-implemented method for automatically enhancing natural language recognition in an Automated Speech Recognition (ASR) system, the method comprising:
receiving digitized speech audio processed into mel filter bank bin values;
producing, via an acoustic model, a phoneme sequence based on the digitized speech audio;
tokenizing the phoneme sequence into a token sequence of tokens from a pronunciation dictionary, wherein a phoneme subsequence does not match a token in the pronunciation dictionary; and
adding a new token to the pronunciation dictionary, the new token having the phoneme subsequence as its pronunciation.
2. The computer-implemented method of claim 1, further comprising: incrementing an occurrence count of the phoneme subsequence across a multiplicity of speech audio segments, wherein the adding step is conditioned on the occurrence count satisfying a threshold.
3. The computer-implemented method of claim 1, wherein the pronunciation dictionary is domain specific.
4. The computer-implemented method of claim 1, further comprising: identifying, via applying a semantic grammar to the token sequence, a slot for an entity where the phoneme subsequence fits in the semantic grammar.
5. The computer-implemented method of claim 4, wherein the phoneme subsequence represents a new entity in the semantic grammar.
6. The computer-implemented method of claim 5, further comprising: updating the token sequence probabilities of a statistical language model including the new entity.
7. The computer-implemented method of claim 1, wherein the digitized speech audio comprises one or more of a directly digitized audio waveform, a spectrogram and a spectrogram.

US 12080275:

1. A computer-implemented method for automatically enhancing natural language recognition in an Automated Speech Recognition (ASR) system, the method comprising:
receiving digitized speech audio comprising one or more of a directly digitized audio waveform, a spectrogram and a spectrogram processed into mel filter bank bin values; producing, via an acoustic model, a phoneme sequence based on the digitized speech audio; generating a token sequence from the phoneme sequence via a pronunciation dictionary, wherein a token represents a word in the pronunciation dictionary;
identifying a phoneme subsequence from the phoneme sequence that does not match a token in the pronunciation dictionary;
identifying, via applying a semantic grammar to the token sequence, a slot for an entity where the phoneme subsequence fits in the semantic grammar, wherein the phoneme subsequence represents a new entity in the semantic grammar;
adding a new token to the pronunciation dictionary, the new token having the phoneme subsequence as its pronunciation; and
adding, to an entity list that is domain specific, the new entity with the phoneme subsequence as its pronunciation.
2. The computer-implemented method of claim 1 further comprising: incrementing an occurrence count of the phoneme subsequence across a multiplicity of speech audio segments, wherein the adding step is conditioned upon the occurrence count exceeding a threshold.
3. The computer-implemented method of claim 1 further comprising: updating the token sequence probabilities of a statistical language model including the new entity.
Claim Rejections - 35 USC § 103
5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5, 7, 15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Hunt (US 2004/0193408) in view of Roth (US 2006/0173683).
As per claim 1, Hunt teaches receiving digitized speech audio (Fig. 1 and [0008], receiving a digital waveform corresponding to the input speech “James Smith”) processed into mel filter bank bin values;
producing, via an acoustic model, a phoneme sequence based on the digitized speech audio ([0009], wherein a phonetic sequence, corresponding to the input speech, is output from the phonetic decoder module 4);
tokenizing the phoneme sequence into a token sequence of tokens from a pronunciation dictionary, wherein a phoneme subsequence does not match a token in the pronunciation dictionary ([0008]-[0010], Fig. 1, step 4, discrepancies between the output of the phonetic recognizer module 4 and the reference sequence obtained from a reference list of the pronunciation dictionary, wherein at least one phoneme subsequence does not match a token in the reference pronunciation dictionary; see the example of /ch ey m s n ih/, as compared to the correct phonetic reference /jh ey m z s m ih th/; see also [0034], wherein at least one of the plurality of reference phonetic sequences stored in the reference list is different from a phonetic sequence that is capable of being output by the phonetic recognizer). Hunt may not explicitly disclose adding a new token to the pronunciation dictionary, the new token having the phoneme subsequence as its pronunciation. Roth, in the same field of endeavor, teaches interpreting the token sequence according to a semantic grammar, and adding, to an entity list, a new entity with the phoneme subsequence as its pronunciation ([0023]-[0026], wherein the system uses semantic rules to interpret tokenized words and checks each selected word using an embedded look-up function to determine whether it is already present in the lexicon. If the system finds the word, it ignores the word. If it does not find the word, it adds the word to the list of words being imported. When one or more words have been selected for importation into the lexicon, the device generates the pronunciation for each word, stores the pronunciations as phonetic representations, and adds the selected text words, together with their pronunciations, to the lexicon).
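For context on the tokenization step discussed in this rejection, the mapping of a phoneme sequence to dictionary tokens can be sketched as follows. The greedy longest-match strategy, the function name, and the example pronunciations are all hypothetical and are not drawn from the claims or from Hunt or Roth; this is an illustrative sketch only.

```python
def tokenize_phonemes(phonemes, pron_dict):
    """Greedy longest-match tokenization: map runs of phonemes to tokens
    from a pronunciation dictionary; phonemes covered by no dictionary
    entry are collected as an unmatched subsequence."""
    tokens, unmatched, i = [], [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):  # try the longest run first
            key = tuple(phonemes[i:j])
            if key in pron_dict:
                tokens.append(pron_dict[key])
                i = j
                break
        else:
            unmatched.append(phonemes[i])  # no token matches this phoneme
            i += 1
    return tokens, unmatched

# Hypothetical pronunciation dictionary for the "James Smith" example.
PRON = {("jh", "ey", "m", "z"): "james", ("s", "m", "ih", "th"): "smith"}
```

Under this sketch, a phoneme subsequence that matches no dictionary token surfaces in `unmatched`, which corresponds to the claimed condition that triggers adding a new token.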
Therefore, it would have been obvious at the time the application was filed to use Roth’s semantic grammar and addition of alternative pronunciations with the system of Hunt in order to improve the speed and/or accuracy of speech recognition. As to the digitized speech audio being processed into mel filter bank bin values, the examiner notes that mel frequency bands have long been well known in the art; they are used as part of the feature extraction process in generating mel-frequency cepstral coefficients (MFCCs), a fundamental feature set for automatic speech recognition systems. Therefore, it would have been obvious at the time the application was filed for the speech recognition system of Hunt in view of Roth to process the digitized speech audio into mel filter bank bin values. This would improve computational efficiency while retaining critical information for a task like speech recognition.
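For background on the mel filter bank bin values referenced above, the following is a minimal sketch of applying triangular mel filters to a power spectrum. All function names and parameter values are hypothetical; this illustrates only the well-known technique and is not taken from Hunt, Roth, or the claims.

```python
import math

def hz_to_mel(f):
    # Common mel-scale formula: 2595 * log10(1 + f / 700)
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_bank_bins(power_spectrum, sample_rate, n_filters=6):
    """Sum a power spectrum under triangular filters spaced evenly on the
    mel scale, yielding one energy value (bin value) per filter."""
    n_bins = len(power_spectrum)
    nyquist = sample_rate / 2.0
    bin_hz = [i * nyquist / (n_bins - 1) for i in range(n_bins)]
    # n_filters + 2 evenly spaced mel points define the filter edges.
    mel_max = hz_to_mel(nyquist)
    edges = [mel_to_hz(k * mel_max / (n_filters + 1)) for k in range(n_filters + 2)]
    bins = []
    for f in range(1, n_filters + 1):
        lo, center, hi = edges[f - 1], edges[f], edges[f + 1]
        energy = 0.0
        for freq, p in zip(bin_hz, power_spectrum):
            if lo < freq < hi:
                # Triangular weight, peaking at the filter center.
                w = (freq - lo) / (center - lo) if freq <= center else (hi - freq) / (hi - center)
                energy += w * p
        bins.append(energy)
    return bins
```

The evenly spaced mel edges give narrow filters at low frequencies and wide filters at high frequencies, which is the dimensionality reduction alluded to in the rationale above.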
As per claim 3, Hunt may not explicitly disclose wherein the pronunciation dictionary is domain specific. Roth in the same field of endeavor teaches wherein the pronunciation dictionary is domain specific (Roth, [0023], [0029]). Therefore, it would have been obvious at the time the application was filed to use Roth’s domain specific entity list with the system of Hunt in order to improve the speed and/or accuracy of speech recognition.
As per claim 4, Hunt teaches identifying, via applying a grammar to the token sequence, a slot for an entity where the phoneme subsequence fits in the grammar ([0018]-[0028], [0059], interpreting the token sequence, corresponding to a named entity, according to grammars). Hunt may not explicitly disclose that the grammar is a semantic grammar. Roth, in the same field of endeavor, teaches interpreting the token sequence according to a semantic grammar, and adding, to an entity list, a new entity with the phoneme subsequence as its pronunciation ([0023]-[0026], wherein the system uses semantic rules to interpret tokenized words and checks each selected word using an embedded look-up function to determine whether it is already present in the lexicon. If the system finds the word, it ignores the word. If it does not find the word, it adds the word to the list of words being imported. When one or more words have been selected for importation into the lexicon, the device generates the pronunciation for each word, stores the pronunciations as phonetic representations, and adds the selected text words, together with their pronunciations, to the lexicon). Therefore, it would have been obvious at the time the application was filed to use Roth’s semantic grammar and addition of alternative pronunciations with the system of Hunt in order to improve the speed and/or accuracy of speech recognition.
As per claim 5, Hunt may not explicitly disclose wherein the phoneme subsequence represents a new entity in the semantic grammar. Roth in the same field of endeavor teaches wherein the phoneme subsequence represents a new entity in the semantic grammar ([0025]). Therefore, it would have been obvious at the time the application was filed to use Roth’s above feature with the system of Hunt in order to improve the speed and/or accuracy of speech recognition.
As per claim 7, Hunt teaches wherein the digitized speech audio comprises one or more of a directly digitized audio waveform, and a spectrogram (Fig. 1).
As per claims 15 and 17-19, Hunt teaches a computer readable medium ([0040]). The remaining steps are rejected under the same rationale as applied to the method steps of rejected claims 1 and 3-5.
Claims 2 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Hunt (US 2004/0193408) in view of Roth (US 2006/0173683), and further in view of Tsunoo (US 2019/0189124).
As per claims 2 and 16, Hunt in view of Roth may not explicitly disclose incrementing an occurrence count of the phoneme subsequence across a multiplicity of speech audio segments, wherein the adding step is conditioned on the occurrence count satisfying a threshold. Tsunoo, in the same field of endeavor, teaches incrementing the minimum number of phonemes to a set range in order to decrease the error rate ([0073]). As to wherein the adding step is conditioned upon the occurrence count exceeding a threshold, Tsunoo does not add the subsequent phonemes until the occurrence of phonemes is within the range between P1 and P2; in other words, the number of phonemes exceeds the predetermined number P1 ([0073] and Fig. 10).
Therefore, it would have been obvious at the time the application was filed to use Tsunoo’s features of incrementing an occurrence count and exceeding a threshold with the system of Hunt in view of Roth in order to provide a novel and improved speech recognition system capable of obtaining a more precise certainty factor for an estimated word string (Tsunoo, [0005]).
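The occurrence-count condition addressed in this rejection can be illustrated with a minimal sketch. The counter structure, the threshold value, and the function name are hypothetical and are not drawn from Tsunoo or the claims.

```python
from collections import Counter

def maybe_add_token(pron_dict, counts, phoneme_subseq, threshold=3):
    """Count sightings of an unmatched phoneme subsequence across speech
    audio segments; add it to the pronunciation dictionary as a new token
    only once its occurrence count satisfies the threshold."""
    key = tuple(phoneme_subseq)
    counts[key] += 1
    if counts[key] >= threshold and key not in pron_dict:
        # The new token's pronunciation is the subsequence itself.
        pron_dict[key] = list(phoneme_subseq)
        return True
    return False
```

Conditioning the addition on a repeated-occurrence threshold, as sketched, filters out one-off recognition errors before the dictionary is modified.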
Claims 6 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hunt (US 2004/0193408) in view of Roth (US 2006/0173683), and further in view of Quast (US 2017/0018268).
As per claims 6 and 20, Hunt in view of Roth may not explicitly disclose updating the token sequence probabilities of a statistical language model including the new entity. Quast, in the same field of endeavor, teaches updating the token sequence probabilities of a statistical language model including the new entity ([0042]). Therefore, it would have been obvious at the time the application was filed to use Quast’s feature of updating the probabilities of the statistical language model with the system of Hunt in view of Roth in order to provide improved speech recognition systems with higher performance.
Claims 8, 10, 12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Hunt (US 2004/0193408) in view of Roth (US 2006/0173683), and further in view of Li (US 2020/0118551).
As per claim 8, Hunt teaches receiving digitized speech audio (Fig. 1 and [0008], receiving a digital waveform corresponding to the input speech “James Smith”) processed into mel filter bank bin values;
producing, via an acoustic model, a phoneme sequence based on the digitized speech audio ([0009], wherein a phonetic sequence, corresponding to the input speech, is output from the phonetic decoder module 4);
tokenizing the phoneme sequence into a token sequence of tokens from a pronunciation dictionary, wherein a phoneme subsequence does not match a token in the pronunciation dictionary ([0008]-[0010], Fig. 1, step 4, discrepancies between the output of the phonetic recognizer module 4 and the reference sequence obtained from a reference list of the pronunciation dictionary, wherein at least one phoneme subsequence does not match a token in the reference pronunciation dictionary; see the example of /ch ey m s n ih/, as compared to the correct phonetic reference /jh ey m z s m ih th/; see also [0034], wherein at least one of the plurality of reference phonetic sequences stored in the reference list is different from a phonetic sequence that is capable of being output by the phonetic recognizer).
Hunt may not explicitly disclose adding a new token to the pronunciation dictionary, the new token having the phoneme subsequence as its pronunciation. Roth, in the same field of endeavor, teaches interpreting the token sequence according to a semantic grammar, and adding, to an entity list, a new entity with the phoneme subsequence as its pronunciation ([0023]-[0026], wherein the system uses semantic rules to interpret tokenized words and checks each selected word using an embedded look-up function to determine whether it is already present in the lexicon. If the system finds the word, it ignores the word. If it does not find the word, it adds the word to the list of words being imported. When one or more words have been selected for importation into the lexicon, the device generates the pronunciation for each word, stores the pronunciations as phonetic representations, and adds the selected text words, together with their pronunciations, to the lexicon). Therefore, it would have been obvious at the time the application was filed to use Roth’s semantic grammar and addition of alternative pronunciations with the system of Hunt in order to improve the speed and/or accuracy of speech recognition. As to the digitized speech audio being processed into mel filter bank bin values, the examiner notes that mel frequency bands have long been well known in the art; they are used as part of the feature extraction process in generating mel-frequency cepstral coefficients (MFCCs), a fundamental feature set for automatic speech recognition systems. Therefore, it would have been obvious at the time the application was filed for the speech recognition system of Hunt in view of Roth to process the digitized speech audio into mel filter bank bin values. This would improve computational efficiency while retaining critical information for a task like speech recognition.
Hunt in view of Roth may not explicitly disclose searching the pronunciation dictionary for a token with a pronunciation that is within a specific edit distance of the phoneme subsequence, wherein the edit distance is inversely related to the phonetic similarity of phonemes, and in response to the edit distance for a dictionary token being below a threshold, adding the phoneme subsequence as an alternate pronunciation of the dictionary token.
Li, in the same field of endeavor, teaches a speech recognition system that determines a matching degree between keywords based on an edit distance algorithm and selects the ones with an edit distance below a threshold (a matching degree higher than a threshold). Li explicitly recites that, as used herein, the term “edit distance” between a first text and a second text may refer to a minimum number of editing operations required to transform the first text into the second text. One applicable editing operation may include replacing one character with another character, inserting one character, or deleting one character, or the like. The edit distance may be inversely proportional to the similarity between the first text and the second text; that is, the smaller the edit distance is, the greater the similarity of the first text and the second text is ([0115], [0117]). Therefore, it would have been obvious at the time the application was filed to use the edit distance feature of Li with the system of Hunt in view of Roth, in order to search the pronunciation dictionary for a token with a pronunciation that is within a specific edit distance of the phoneme subsequence and, in response, add the phoneme subsequence as an alternate pronunciation of the dictionary token, as claimed. This would provide an efficient approach to handle errors and optimize recognition quality.
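The claimed relationship between edit distance and phonetic similarity can be sketched as a weighted Levenshtein distance in which substituting similar phonemes costs less. The similarity table and cost values below are hypothetical and are not drawn from Li; this is an illustrative sketch of the general technique only.

```python
# Hypothetical phoneme-similarity table (symmetric, values in [0, 1]).
SIMILARITY = {frozenset(("m", "n")): 0.8, frozenset(("s", "z")): 0.7}

def sub_cost(a, b):
    # Substitution cost is inversely related to phonetic similarity.
    if a == b:
        return 0.0
    return 1.0 - SIMILARITY.get(frozenset((a, b)), 0.0)

def phonetic_edit_distance(seq1, seq2):
    """Dynamic-programming Levenshtein distance over phoneme sequences,
    with substitutions discounted for phonetically similar phonemes."""
    m, n = len(seq1), len(seq2)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = float(i)
    for j in range(1, n + 1):
        dp[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(dp[i - 1][j] + 1.0,  # deletion
                           dp[i][j - 1] + 1.0,  # insertion
                           dp[i - 1][j - 1] + sub_cost(seq1[i - 1], seq2[j - 1]))
    return dp[m][n]
```

Under this sketch, two pronunciations differing only in similar phonemes (e.g., /m/ vs. /n/) yield a small distance, so a dictionary search with a distance threshold would treat them as candidate alternate pronunciations.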
As per claim 10, Hunt may not explicitly disclose wherein the pronunciation dictionary is domain specific. Roth in the same field of endeavor teaches wherein the pronunciation dictionary is domain specific (Roth, [0023], [0029]). Therefore, it would have been obvious at the time the application was filed to use Roth’s domain specific entity list with the system of Hunt in order to improve the speed and/or accuracy of speech recognition.
As per claim 11, Hunt in view of Roth may not explicitly disclose wherein the specific edit distance is weighted by the similarity of the pronunciation and the phoneme subsequence. Li, in the same field of endeavor, teaches a speech recognition system for determining a matching degree between keywords based on an edit distance algorithm, wherein the edit distance may be inversely proportional to the similarity between the first text and the second text ([0115], [0117]). Therefore, the edit distance is necessarily weighted by the similarity of the phonemes of the two phoneme sequences corresponding to the first text and the second text. Therefore, it would have been obvious at the time the application was filed for the system of Hunt in view of Roth and Li to weight the specific edit distance by the similarity of the pronunciation and the phoneme subsequence. This would provide an efficient approach to handle errors and optimize recognition quality.
As per claim 12, Hunt teaches identifying, via applying a grammar to the token sequence, a slot for an entity where the phoneme subsequence fits in the grammar ([0018]-[0028], [0059], interpreting the token sequence, corresponding to a named entity, according to grammars). Hunt may not explicitly disclose that the grammar is a semantic grammar, and wherein the phoneme subsequence represents a new entity in the semantic grammar. Roth, in the same field of endeavor, teaches interpreting the token sequence according to a semantic grammar; adding, to an entity list, a new entity with the phoneme subsequence as its pronunciation; and wherein the phoneme subsequence represents a new entity in the semantic grammar ([0023]-[0026], wherein the system uses semantic rules to interpret tokenized words and checks each selected word using an embedded look-up function to determine whether it is already present in the lexicon. If the system finds the word, it ignores the word. If it does not find the word, it adds the word to the list of words being imported. When one or more words have been selected for importation into the lexicon, the device generates the pronunciation for each word, stores the pronunciations as phonetic representations, and adds the selected text words, together with their pronunciations, to the lexicon). Therefore, it would have been obvious at the time the application was filed to use Roth’s semantic grammar and addition of alternative pronunciations with the system of Hunt in order to improve the speed and/or accuracy of speech recognition.
As per claim 14, Hunt teaches wherein the digitized speech audio comprises one or more of a directly digitized audio waveform, and a spectrogram (Fig. 1).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Hunt in view of Roth and Li, and further in view of Tsunoo (US 2019/0189124).
As per claim 9, Hunt in view of Roth and Li may not explicitly disclose incrementing an occurrence count of the phoneme subsequence across a multiplicity of speech audio segments, wherein the adding step is conditioned on the occurrence count satisfying a threshold. Tsunoo, in the same field of endeavor, teaches incrementing the minimum number of phonemes to a set range in order to decrease the error rate ([0073]). As to wherein the adding step is conditioned upon the occurrence count exceeding a threshold, Tsunoo does not add the subsequent phonemes until the occurrence of phonemes is within the range between P1 and P2; in other words, the number of phonemes exceeds the predetermined number P1 ([0073] and Fig. 10). Therefore, it would have been obvious at the time the application was filed to use Tsunoo’s features of incrementing an occurrence count and exceeding a threshold with the system of Hunt in view of Roth and Li in order to obtain a more precise certainty factor for an estimated word string (Tsunoo, [0005]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Hunt in view of Roth and Li, and further in view of Quast (US 2017/0018268).
As per claim 13, Hunt in view of Roth and Li may not explicitly disclose updating the token sequence probabilities of a statistical language model including the new entity. Quast, in the same field of endeavor, teaches updating the token sequence probabilities of a statistical language model including the new entity ([0042]). Therefore, it would have been obvious at the time the application was filed to use Quast’s feature of updating the probabilities of the statistical language model with the system of Hunt in view of Roth and Li in order to provide improved speech recognition systems with higher performance.
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDELALI SERROU whose telephone number is (571)272-7638. The examiner can normally be reached M-F 9 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABDELALI SERROU/Primary Examiner, Art Unit 2659