DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments/Amendments
2. With respect to the claim rejections under 35 U.S.C. §§ 102/103, Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 1-3 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over
Nakadai et al. (US 2015/0154957 A1) in view of Swvigaradoss et al. (US 2021/0375261 A1.)
With respect to Claim 1, Nakadai et al. disclose
A mobile terminal (Nakadai et al. [0058] a mobile phone) comprising:
a microphone configured to generate voice signals in response to voices of speakers (Nakadai et al. Fig. 16 element 11 the sound collecting unit, [0060] The sound collecting unit 11 records sound signals of N (where N is an integer greater than 1, for example, 8) channels and transmits the recorded sound signals of N channels to the sound signal acquiring unit 12. The sound collecting unit 11 includes N microphones 101-1 to 101-N receiving, for example, sound waves having a frequency-band component (for example, 200 Hz to 4 kHz). The sound collecting unit 11 may transmit the recorded sound signals of N channels in a wireless manner or a wired manner);
a processor configured to generate separated voice signals related to the respective voices by performing voice source separation of the voice signals based on respective voice source positions of the voices (Nakadai et al. [0227], Fig. 16 element 21 Sound source localizing unit, element 22 sound source separating unit, [0062] In case of sound signals from the plurality of speakers, the speech recognizing unit 13 distinguishes the speakers and recognizes the speech details for each distinguished speaker, [0134] The sound source localizing unit 21 estimates an azimuth of a sound source on the basis of an input signal input from the sound signal acquiring unit 12 and outputs azimuth information indicating the estimated azimuth and sound signals of N channels to the sound source separating unit 22. The azimuth estimated by the sound source localizing unit 21 is, for example, a direction in the horizontal plane with respect to the direction of a predetermined microphone out of the N microphones from the point of the center of gravity of the positions of the N microphones of the sound collecting unit 11. For example, the sound source localizing unit 21 estimates the azimuth using a generalized singular-value decomposition-multiple signal classification (GSVD-MUSIC) method, [0137] the sound source separating unit 22 may calculate a sound feature quantity for each sound signal of N channels and may separate the sound signals into the sound signals by speakers on the basis of the calculated sound feature quantity and the azimuth information input from the sound source localizing unit 21), and output translation results for the respective voices based on the separated voice signals (Nakadai et al. [0158] The language displayed in an image presented to each speaker may be based on a language selected in advance from a menu. For example, when the speaker Sp1 selects Japanese as the language from the menu, the translation unit 24 may translate the speech uttered in French by another speaker and may display the translation result in the first character presentation image 322C. Accordingly, even when another speaker utters speech in French, English, or Chinese, the conversation support apparatus 1A may display the speech pieces of other speakers in Japanese in the fourth character presentation image 352C in FIG. 18); and
wherein the processor is configured to output the translation results in which the languages of the voices of the speakers have been translated from the source languages into target languages to be translated based on the source language information and the separated voice signals (Nakadai et al. [0140] The translation unit 24 translates the speech details if necessary on the basis of the speech details, the information indicating the speakers, and the information indicating a language for each speaker which are input from the speech recognizing unit 13A, adds or replaces information indicating the translated speech details to or for the information input from the speech recognizing unit 13A, and outputs the resultant to the image processing unit 14. Specifically, an example where two speakers of the first speaker Sp1 and the second speaker Sp2 are present as the speakers, the language of the first speaker Sp1 is Japanese, the language of the second speaker Sp2 is English will be described below with reference to FIG. 14. In this case, the translation unit 24 translates the speech details so that the images 534A to 534D displayed in the second character presentation image 532 are translated from Japanese in which the first speaker Sp1 utters speech to English which is the language of the second speaker Sp2 and are then displayed. The translation unit 24 translates the speech details so that the images 524A to 524C displayed in the first character presentation image 522 are translated from English in which the second speaker Sp2 utters speech to Japanese which is the language of the first speaker Sp1 and are then displayed. See paragraph [0111-0112] and Fig. 14),
wherein the processor is configured to:
output the translation results for the respective voices in accordance with source languages represented by the read source language information (Nakadai et al. [0138] this paragraph discloses detecting a language of each speaker, [0140] this paragraph discloses displaying the translation results. See also paragraphs [0111]-[0112] and Fig. 14.)
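For illustration only, the following is a minimal sketch of azimuth-based source separation of the kind cited from Nakadai et al. above (a localizer estimates a source direction, and the multichannel signal is then combined toward that direction). The function name, array geometry, and delay-and-sum combination are assumptions made for this example; the reference itself describes GSVD-MUSIC localization and feature-based separation rather than this exact routine.

```python
import numpy as np

def separate_by_azimuth(signals, mic_xy, azimuth_deg, fs, c=343.0):
    """Delay-and-sum beam toward one estimated source azimuth (hypothetical helper).

    signals:     (N, T) array of N-channel microphone samples.
    mic_xy:      (N, 2) microphone coordinates in meters.
    azimuth_deg: source direction supplied by a localizer (e.g., a MUSIC-style method).
    Returns a single-channel estimate of the source arriving from that direction.
    """
    theta = np.deg2rad(azimuth_deg)
    direction = np.array([np.cos(theta), np.sin(theta)])
    # Far-field arrival-time offsets of each microphone relative to the array centroid.
    delays = (mic_xy - mic_xy.mean(axis=0)) @ direction / c
    n_samp = signals.shape[1]
    freqs = np.fft.rfftfreq(n_samp, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    # Compensate each channel's delay in the frequency domain, then average the channels.
    aligned = spectra * np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=n_samp)
```

In such a scheme, the localizer would supply one azimuth per detected speaker and the routine would be run once per azimuth to obtain the per-speaker separated signals that are then recognized and translated.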
Nakadai et al. fail to explicitly teach
a memory configured to store source language information representing information on source languages that are pronounced languages of the voices of the speakers, the source language information corresponding to the positions of the respective speakers,
determine the source languages corresponding to positions of the voices based on the source language information by comparing the respective voice source positions of the voices with position information included in the source language information stored in the memory,
reading the determined source language information corresponding to positions of the voices, and
However, Swvigaradoss et al. teach
a memory configured to store source language information representing information on source languages that are pronounced languages of the voices of the speakers, the source language information corresponding to the positions of the respective speakers (Swvigaradoss et al. [0062] describes storing language of a user and a corresponding location in a profile of a user),
determine the source languages corresponding to positions of the voices based on the source language information by comparing the respective voice source positions of the voices with position information included in the source language information stored in the memory (Swvigaradoss et al. [0062] describes detecting a location of the user and determining language of the user by comparing the detected location of the user and location corresponding with language stored in the profile),
reading the determined source language information corresponding to positions of the voices (Swvigaradoss et al. [0062] describes determining language of the user based on the detected location), and
Nakadai et al. and Swvigaradoss et al. are analogous art because they are from a similar field of endeavor, namely speech processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the separation of the sound sources based on the sound source positions as taught by Nakadai et al. with the teaching of detecting a location of a user as taught by Swvigaradoss et al. for the benefit of determining the language of the user (Swvigaradoss et al. [0062] describes detecting a location of the user and determining the language of the user by comparing the detected location of the user with the location corresponding to the language stored in the profile.)
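For illustration only, the following sketch shows the kind of position-to-language lookup the combination relies on: stored source language information keyed to speaker positions is compared against an estimated voice source position to select the source language. The data layout, field names, and angular tolerance are hypothetical and are not taken from either reference.

```python
# Hypothetical stored source language information: each entry pairs a speaker position
# (here an azimuth in degrees) with that speaker's source language.
stored_profiles = [
    {"position_deg": 30.0, "source_language": "ja"},
    {"position_deg": 210.0, "source_language": "en"},
]

def lookup_source_language(voice_azimuth_deg, profiles, tolerance_deg=20.0):
    """Return the stored source language whose position best matches the voice position."""
    def angular_gap(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)
    best = min(profiles, key=lambda p: angular_gap(voice_azimuth_deg, p["position_deg"]))
    if angular_gap(voice_azimuth_deg, best["position_deg"]) <= tolerance_deg:
        return best["source_language"]
    return None  # no stored position lies close enough to the detected voice position
```

The returned language would then serve as the translation source language for the separated signal associated with that position.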
With respect to Claim 2, Nakadai et al. in view of Swvigaradoss et al. teach
further comprising a display configured to visually output the translation results (Nakadai et al. [0158] The language displayed in an image presented to each speaker may be based on a language selected in advance from a menu. For example, when the speaker Sp1 selects Japanese as the language from the menu, the translation unit 24 may translate the speech uttered in French by another speaker and may display the translation result in the first character presentation image 322C. Accordingly, even when another speaker utters speech in French, English, or Chinese, the conversation support apparatus 1A may display the speech pieces of other speakers in Japanese in the fourth character presentation image 352C in FIG. 18. See paragraphs [0227-0228].)
With respect to Claim 3, Nakadai et al. in view of Swvigaradoss et al. teach
wherein the microphone comprises a plurality of microphones disposed to form an array (Nakadai et al. [0147] When a microphone array is constituted by the microphones 101-1 to 101-N of the sound collecting unit 11, a speaker may not input or select information indicating that the corresponding speaker utters speech to the conversation support apparatus 1A at the time of uttering speech. In this case, the conversation support apparatus 1A can separate the speech into speech pieces by speakers using the microphone array),
wherein the plurality of microphones are configured to generate the voice signals in response to the voices (Nakadai et al. Fig. 16 elements 11 Sound Collecting Unit with microphone array 101-1 to 101-N, element 12 Sound Signal Acquiring Unit.)
With respect to Claim 8, Nakadai et al. disclose
An operation method of a mobile terminal capable of processing voices (Nakadai et al. [0058] a mobile phone), the operation method comprising:
generating voice signals in response to voices of speakers; (Nakadai et al. Fig. 16 element 11 the sound collecting unit, [0060] The sound collecting unit 11 records sound signals of N (where N is an integer greater than 1, for example, 8) channels and transmits the recorded sound signals of N channels to the sound signal acquiring unit 12. The sound collecting unit 11 includes N microphones 101-1 to 101-N receiving, for example, sound waves having a frequency-band component (for example, 200 Hz to 4 kHz). The sound collecting unit 11 may transmit the recorded sound signals of N channels in a wireless manner or a wired manner);
performing voice source separation of the voice signals based on respective voice source positions of the voices (Nakadai et al. [0227], Fig. 16 element 21 Sound source localizing unit, element 22 sound source separating unit, [0062] In case of sound signals from the plurality of speakers, the speech recognizing unit 13 distinguishes the speakers and recognizes the speech details for each distinguished speaker, [0134] The sound source localizing unit 21 estimates an azimuth of a sound source on the basis of an input signal input from the sound signal acquiring unit 12 and outputs azimuth information indicating the estimated azimuth and sound signals of N channels to the sound source separating unit 22. The azimuth estimated by the sound source localizing unit 21 is, for example, a direction in the horizontal plane with respect to the direction of a predetermined microphone out of the N microphones from the point of the center of gravity of the positions of the N microphones of the sound collecting unit 11. For example, the sound source localizing unit 21 estimates the azimuth using a generalized singular-value decomposition-multiple signal classification (GSVD-MUSIC) method, [0137] the sound source separating unit 22 may calculate a sound feature quantity for each sound signal of N channels and may separate the sound signals into the sound signals by speakers on the basis of the calculated sound feature quantity and the azimuth information input from the sound source localizing unit 21);
generating separated voice signals related to the respective voices in accordance with the result of the voice source separation (Nakadai et al. [0227], Fig. 16 element 21 Sound source localizing unit, element 22 sound source separating unit, [0062] In case of sound signals from the plurality of speakers, the speech recognizing unit 13 distinguishes the speakers and recognizes the speech details for each distinguished speaker, [0134] The sound source localizing unit 21 estimates an azimuth of a sound source on the basis of an input signal input from the sound signal acquiring unit 12 and outputs azimuth information indicating the estimated azimuth and sound signals of N channels to the sound source separating unit 22. The azimuth estimated by the sound source localizing unit 21 is, for example, a direction in the horizontal plane with respect to the direction of a predetermined microphone out of the N microphones from the point of the center of gravity of the positions of the N microphones of the sound collecting unit 11. For example, the sound source localizing unit 21 estimates the azimuth using a generalized singular-value decomposition-multiple signal classification (GSVD-MUSIC) method, [0137] the sound source separating unit 22 may calculate a sound feature quantity for each sound signal of N channels and may separate the sound signals into the sound signals by speakers on the basis of the calculated sound feature quantity and the azimuth information input from the sound source localizing unit 21); and
outputting translation results for the respective voices based on the separated voice signals (Nakadai et al. [0140] The translation unit 24 translates the speech details if necessary on the basis of the speech details, the information indicating the speakers, and the information indicating a language for each speaker which are input from the speech recognizing unit 13A, adds or replaces information indicating the translated speech details to or for the information input from the speech recognizing unit 13A, and outputs the resultant to the image processing unit 14. Specifically, an example where two speakers of the first speaker Sp1 and the second speaker Sp2 are present as the speakers, the language of the first speaker Sp1 is Japanese, the language of the second speaker Sp2 is English will be described below with reference to FIG. 14. In this case, the translation unit 24 translates the speech details so that the images 534A to 534D displayed in the second character presentation image 532 are translated from Japanese in which the first speaker Sp1 utters speech to English which is the language of the second speaker Sp2 and are then displayed. The translation unit 24 translates the speech details so that the images 524A to 524C displayed in the first character presentation image 522 are translated from English in which the second speaker Sp2 utters speech to Japanese which is the language of the first speaker Sp1 and are then displayed. See paragraph [0111-0112] and Fig. 14),
wherein the outputting of the translation results includes:
outputting the translation results in which the languages of the voices of the speakers have been translated from the source languages into target languages that are languages to be translated based on the source language information and the separated voice signals (Nakadai et al. [0140] The translation unit 24 translates the speech details if necessary on the basis of the speech details, the information indicating the speakers, and the information indicating a language for each speaker which are input from the speech recognizing unit 13A, adds or replaces information indicating the translated speech details to or for the information input from the speech recognizing unit 13A, and outputs the resultant to the image processing unit 14. Specifically, an example where two speakers of the first speaker Sp1 and the second speaker Sp2 are present as the speakers, the language of the first speaker Sp1 is Japanese, the language of the second speaker Sp2 is English will be described below with reference to FIG. 14. In this case, the translation unit 24 translates the speech details so that the images 534A to 534D displayed in the second character presentation image 532 are translated from Japanese in which the first speaker Sp1 utters speech to English which is the language of the second speaker Sp2 and are then displayed. The translation unit 24 translates the speech details so that the images 524A to 524C displayed in the first character presentation image 522 are translated from English in which the second speaker Sp2 utters speech to Japanese which is the language of the first speaker Sp1 and are then displayed. See paragraph [0111-0112] and Fig. 14),
wherein the outputting of the translation results comprises:
outputting the translation results for the respective voices in accordance with source languages represented by the read source language information (Nakadai et al. [0138] this paragraph discloses detecting a language of each speaker, [0140] this paragraph discloses displaying the translation results. See also paragraphs [0111]-[0112] and Fig. 14.)
Nakadai et al. fail to explicitly teach
storing source language information representing information on source languages that are pronounced languages of the voices of the speakers, the source language information corresponding to the positions of each of the respective speakers; and
determining the source languages corresponding to positions of the voices based on the source language information by comparing the respective voice source positions of the voices with position information included in the source language information stored in the memory;
reading the determined source language information corresponding to positions of the voices, and
However, Swvigaradoss et al. teach
storing source language information representing information on source languages that are pronounced languages of the voices of the speakers, the source language information corresponding to the positions of each of the respective speakers (Swvigaradoss et al. [0062] describes storing language of a user and a corresponding location in a profile of a user); and
determining the source languages corresponding to positions of the voices based on the source language information by comparing the respective voice source positions of the voices with position information included in the source language information stored in the memory (Swvigaradoss et al. [0062] describes detecting a location of the user and determining language of the user by comparing the detected location of the user and location corresponding with language stored in the profile);
reading the determined source language information corresponding to positions of the voices (Swvigaradoss et al. [0062] describes determining language of the user based on the detected location), and
Nakadai et al. and Swvigaradoss et al. are analogous art because they are from a similar field of endeavor, namely speech processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the separation of the sound sources based on the sound source positions as taught by Nakadai et al. with the teaching of detecting a location of a user as taught by Swvigaradoss et al. for the benefit of determining the language of the user (Swvigaradoss et al. [0062] describes detecting a location of the user and determining the language of the user by comparing the detected location of the user with the location corresponding to the language stored in the profile.)
5. Claims 4-5 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over
Nakadai et al. (US 2015/0154957 A1) in view of Swvigaradoss et al. (US 2021/0375261 A1)
and Adsumilli (US 9,749,738 B1.)
With respect to Claim 4, Nakadai et al. in view of Swvigaradoss et al. teach all the limitations of Claim 3 upon which Claim 4 depends. Nakadai et al. in view of Swvigaradoss et al. fail to explicitly teach
wherein the processor is configured to:
judge the voice source positions of the respective voices based on a time delay among a plurality of voice signals generated from the plurality of microphones, and
generate the separated voice signals based on the judged voice source positions.
However, Adsumilli teaches
wherein the processor (Adsumilli col. 22, lines 15-29) is configured to:
judge the voice source positions of the respective voices based on a time delay among a plurality of voice signals generated from the plurality of microphones (Adsumilli col. 11, lines 11-54, gain and delays are used to estimate sound source position), and
generate the separated voice signals based on the judged voice source positions (Adsumilli col. 10, lines 1-28, “the audio source separation module 232 may receive source information about the number of expected source signals, the audio characteristics of the source signals, or the position of the audio sources” to “separate signals into estimated source signals”).
Nakadai et al., Swvigaradoss et al. and Adsumilli are analogous art because they are from a similar field of endeavor, namely speech processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the separation of the sound sources based on the sound source positions as taught by Nakadai et al. with the teaching of detecting a location of a user as taught by Swvigaradoss et al. for the benefit of determining the language of the user, and with the teaching of time delays as taught by Adsumilli for the benefit of estimating the sound source position (Adsumilli col. 11, lines 11-54, gain and delays are used to estimate sound source position.)
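For illustration only, the following sketch shows how a time delay between two microphone channels can be converted to a source direction, consistent with the delay-based position estimation cited from Adsumilli. The cross-correlation approach and the function name are assumptions made for this example and are not asserted to be the reference's implementation.

```python
import numpy as np

def estimate_azimuth_from_delay(sig_a, sig_b, mic_distance, fs, c=343.0):
    """Estimate a source direction from the time delay between two microphone channels.

    The delay is taken at the peak of the cross-correlation of the two channels and
    converted to an angle using the far-field relation delay = d * cos(theta) / c.
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)       # samples by which sig_a lags sig_b
    delay = lag / fs                               # delay in seconds
    cos_theta = np.clip(delay * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))        # angle relative to the microphone axis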
With respect to Claim 5, Nakadai et al. in view of Swvigaradoss et al. teach all the limitations of Claim 3 upon which Claim 5 depends. Nakadai et al. in view of Swvigaradoss et al. fail to explicitly teach
wherein the processor is configured to: generate voice source position information representing the voice source positions of the respective voices based on a time delay among a plurality of voice signals generated from the plurality of microphones, and match and store, in the memory, the voice source position information for the voices with the separated voice signals for the voices.
However, Adsumilli teaches
wherein the processor is configured to: generate voice source position information representing the voice source positions of the respective voices based on a time delay among a plurality of voice signals generated from the plurality of microphones (Adsumilli col. 11, lines 11-54, gain and delays are used to estimate sound source position), and match and store, in the memory, the voice source position information for the voices with the separated voice signals for the voices (Adsumilli col. 15, lines 51-57, “The set of audio source signals and their associated time-varying positions may compose a spatial audio scene” which may be provided “to other modules or devices to allow them to synthesize audio from the spatial audio scene”; sending the results to other modules or subsystems for further processing is considered “storing” since other modules or subsystems would have to hold/store the data for further processing.)
Nakadai et al., Swvigaradoss et al. and Adsumilli are analogous art because they are from a similar field of endeavor, namely speech processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the separation of the sound sources based on the sound source positions as taught by Nakadai et al. with the teaching of detecting a location of a user as taught by Swvigaradoss et al. for the benefit of determining the language of the user, and with the teaching of time delays as taught by Adsumilli for the benefit of estimating the sound source positions and separating the sound sources based on those positions (Adsumilli col. 11, lines 11-54, gain and delays are used to estimate sound source position; col. 15, lines 51-57, “The set of audio source signals and their associated time-varying positions may compose a spatial audio scene” which may be provided “to other modules or devices to allow them to synthesize audio from the spatial audio scene”.)
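For illustration only, the following sketch shows matching and storing voice source position information together with the corresponding separated signal, as recited in Claim 5; the record layout and names are hypothetical and are not drawn from the references.

```python
separated_store = []  # hypothetical in-memory store held by the terminal

def store_separated_voice(position_info, separated_signal):
    """Keep each separated voice signal matched with its estimated source position."""
    record = {
        "position": position_info,     # e.g., an azimuth in degrees or (x, y) coordinates
        "signal": separated_signal,    # the separated, single-speaker waveform
    }
    separated_store.append(record)
    return record
```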
With respect to Claim 9, Nakadai et al. in view of Swvigaradoss et al. teach all the limitations of Claim 8 upon which Claim 9 depends. Nakadai et al. in view of Swvigaradoss et al. fail to explicitly teach
wherein the generating of the separated voice signals comprises:
judging the voice source positions of the respective voices based on a time delay among a plurality of generated voice signals; and
generating the separated voice signals based on the judged voice source positions.
However, Adsumilli teaches
wherein the generating of the separated voice signals comprises:
judging the voice source positions of the respective voices based on a time delay among a plurality of generated voice signals (Adsumilli col. 11, lines 11-54, gain and delays are used to estimate sound source position); and
generating the separated voice signals based on the judged voice source positions (Adsumilli col. 10, lines 1-28, “the audio source separation module 232 may receive source information about the number of expected source signals, the audio characteristics of the source signals, or the position of the audio sources” to “separate signals into estimated source signals”).
Nakadai et al., Swvigaradoss et al. and Adsumilli are analogous art because they are from a similar field of endeavor, namely speech processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the separation of the sound sources based on the sound source positions as taught by Nakadai et al. with the teaching of detecting a location of a user as taught by Swvigaradoss et al. for the benefit of determining the language of the user, and with the teaching of time delays as taught by Adsumilli for the benefit of estimating the sound source position (Adsumilli col. 11, lines 11-54, gain and delays are used to estimate sound source position.)
With respect to Claim 10, Nakadai et al. in view of Swvigaradoss et al. and Adsumilli teach
further comprising:
generating voice source position information representing the voice source positions of the respective voices based on a time delay among a plurality of voice signals generated from a plurality of microphones (Adsumilli col. 11, lines 11-54, gain and delays are used to estimate sound source position); and
matching and storing the voice source position information for the voices with the separated voice signals for the voices (Adsumilli col. 15, lines 51-57, “The set of audio source signals and their associated time-varying positions may compose a spatial audio scene” which may be provided “to other modules or devices to allow them to synthesize audio from the spatial audio scene”; sending the results to other modules or subsystems for further processing is considered “storing” since other modules or subsystems would have to hold/store the data for further processing.)
6. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over
Nakadai et al. (US 2015/0154957 A1) in view of Swvigaradoss et al. (US 2021/0375261 A1) and Kim et al. (US 2020/0387676 A1.)
With respect to Claim 7, Nakadai et al. in view of Swvigaradoss et al. teach all the limitations of Claim 1 upon which Claim 7 depends. Nakadai et al. in view of Swvigaradoss et al. fail to explicitly teach
further comprising a communication device configured to communicate with an external device,
wherein the communication device is configured to transmit the translation results output by the processor to the external device.
However, Kim et al. teach
further comprising a communication device configured to communicate with an external device (Kim et al. [0067] Although FIG. 3 illustrates that the electronic device 200 includes the obtainer 310, an obtainer 310 according to another embodiment may also be embedded in a separate device and connected to the electronic device 200 via a wired or wireless network. FIG. 3 illustrates the obtainer 310 and the processor 320 as separate components for descriptive convenience. However, the embodiment is not limited thereto. The obtainer 310 according to an embodiment may be included in the processor 320, or some or all of the functions performed by the obtainer 310 may be conducted by the processor 320),
wherein the communication device is configured to transmit the translation results output by the processor to the external device (Kim et al. [0075] The output unit 330 may output a result of translation performed by the processor 320. The output unit 330 may inform a user of the translation result or transmit the translation result to an external device (e.g., smart phone, smart TV, smart watch, and server). For example, the output unit 330 may include a display to output a translated text or a speaker to output a speech signal converted from the translated text.)
Nakadai et al., Swvigaradoss et al. and Kim et al. are analogous art because they are from a similar field of endeavor, namely speech processing techniques and applications. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the separation of the sound sources based on the sound source positions as taught by Nakadai et al. with the teaching of detecting a location of a user as taught by Swvigaradoss et al. for the benefit of determining the language of the user, and with the teaching of the external device as taught by Kim et al. for the benefit of outputting the translation result at the external device (Kim et al. [0075] The output unit 330 may output a result of translation performed by the processor 320. The output unit 330 may inform a user of the translation result or transmit the translation result to an external device (e.g., smart phone, smart TV, smart watch, and server). For example, the output unit 330 may include a display to output a translated text or a speaker to output a speech signal converted from the translated text.)
Conclusion
7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.
a. Aue et al. (US 2015/0347399 A1.) In this reference, Aue et al. disclose a method and a system for generating, separately from the translation of the source user's speech, a further translation of the target user's speech in the source language to be transmitted to the source user.
b. Murthy et al. (US 2016/0350286 A1.) In this reference, Murthy et al. disclose a method and a system for translating different languages in the vehicle.
c. Efros et al. (US 2023/0267942 A1.) In this reference, Efros et al. disclose a method and a system for generating a translation of isolated speech.
8. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to THUYKHANH LE whose telephone number is (571)272-6429. The examiner can normally be reached Mon-Fri: 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew C. Flanders can be reached on 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THUYKHANH LE/Primary Examiner, Art Unit 2655