DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In responding to this Office action, the Examiner respectfully requests that support be shown for language added to any original claims by amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 13, 15, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chao-Suren et al. (US 20110134910 A1, hereinafter Chao-Suren).
Claim 1: Chao-Suren teaches a multi-user cross-language interactive system based on language models (title and abstract, ln 1-19, n-way and/or bi-way sessions in figs. 4, 6-7, 11 and the translation module 1106 in fig. 11, including multiple servers 1, 2, …, 6 having components for language translation and components for identifying incoming languages, para 45, 51-52), comprising:
a master smart terminal and a plurality of slave smart terminals (the computer hardware of fig. 13 used by 404 user 1 in fig. 4, or by 602 user 1 in fig. 6 and 702 in fig. 7, and the other users 2, 3, …, 6 as slave computer hardware of fig. 13 receiving broadcast audio packets from user 1, para 45, and vice versa in figs. 6-7), wherein the master smart terminal is configured for: obtaining first to be translated data from a first user (through the user interface 1322 above, either as text by the keyboard 1324 or as speech by the microphone 1332 in fig. 13, para 77, e.g., user 1 602 through a synch boundary 425 and starting to issue service requests to the sync, para 47, in fig. 6), translating the first to be translated data into at least one first data through a first LLM according to a first translation prompt (through a server at an optimal location for translation in fig. 9, para 56, and prompting by identifying whether the incoming language is different from the language profiled for the associated users, para 45, 55-56), and distributing the at least one first data to at least one corresponding slave smart terminal for output (through listening server 2, etc., in fig. 4, para 45, 56-58), wherein a language of the at least one first data corresponds to a language used by the at least one corresponding slave smart terminal (e.g., Spanish for user 2, French for user 3, etc., as slave terminal hardware in fig. 13), and the first language model is configured on the master smart terminal or a server (part of listening servers 2, 3, …, 6 and responsible for translation if the incoming language is different from the language profiled for the associated users, para 45, e.g., translation module 1106 in fig. 11, para 62, and prompting by identifying whether the incoming language is different from the language profiled for the associated users, para 45); and
the slave smart terminal is configured for: obtaining second to be translated data from a second user (e.g., user 2 initiating with "Bonjour" in French in fig. 6, in the n-way and bi-way sessions of figs. 4, 6-7, 11 and the translation module 1106 in fig. 11), translating the second to be translated data into second data through a second language model according to a second translation prompt (prompting by identifying whether the incoming language is different from the language profiled for the associated users, para 45), and transmitting the second data to the master smart terminal for output, wherein a language of the second data is a language used by the master smart terminal (e.g., user 2’s “Bonjour” in French is translated into the English “Hello” and displayed on user 1’s display 602 in fig. 6), and the second language model is configured on the slave smart terminals or the server (each of servers 1, 2, …, 6 having its own translation functions, para 46, 51-52).
However, Chao-Suren does not explicitly teach wherein the first/second language models are large language models (LLMs) or that the server is a cloud server.
Official Notice is taken that large language models (LLMs) used for language translation tasks in natural language processing, including multiple-language translation with defined inputs and outputs, and cloud servers carrying out natural language translation with LLMs are well known in the art, for the benefits of large and complex computation capacity related to language processing, including translation, and of relatively larger resources, including databases and libraries serving the language processing, in a cloud server environment accessible from local computing devices.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the first/second LLMs and the cloud server, as well known in the art, to the first/second language models and the servers in the multi-user cross-language interactive system, as taught by Chao-Suren, for the benefits discussed above.
Claim 13: Chao-Suren teaches a smart terminal based on a large language model LLM (title and abstract, ln 1-19), comprising:
an input device (text or voice as input, para 43, by a user interface 1322 having a keyboard 1324 with display 1338 and adapter 1336, and a microphone 1332 in fig. 13), a processor (1310a/1310b in fig. 13), a wireless communication component (communications adapter 1334 in fig. 13) and a memory (RAM 1314, ROM 1316, disk 1321, tape drive 1340, in fig. 13), wherein the processor is electrically connected to the input device, the wireless communication component and the memory (through a system bus 1312 in fig. 13);
wherein one or more computer programs executable on the processor are stored in the memory (program instructions stored in the medium above, para 73-74), and the one or more computer programs comprise instructions for (the instructions are executed by processor of the computer, para 73):
in response to a first configuration instruction, configuring the smart terminal as a host (User A starts a voice communication in English predefined in a participant profile configuration window in figs. 3A/3B, para 43-44, e.g., starts speaking in English in fig. 3B, para 44, and any one of SIP-based users 1, 2, …, 6 either in the n-way session in figs. 7, 9 or the bi-way session in fig. 6); when the smart terminal acts as the host (the n-way session in fig. 4), obtaining, by the input device, first to be translated data (through the user interface 1322 above, either as text by the keyboard 1324 or as speech by the microphone 1332 in fig. 13, para 77, e.g., user 1 602 through a synch boundary 425 and starting to issue service requests to the sync, para 47, in fig. 6), and transmitting, by the wireless communication component (discussed above), the first to be translated data to a server (e.g., talking server 1 402 in the VOIP network 400 in fig. 4 or the VOIP servers 600 in fig. 6), so as to: through the server, translate the first to be translated data into at least one first data using a language model on the server according to a first translation prompt (part of listening servers 2, 3, …, 6 and responsible for translation if the incoming language is different from the language profiled for the associated users, para 45, e.g., translation module 1106 in fig. 11, para 62, and prompting by identifying whether the incoming language is different from the language profiled for the associated users, para 45), and distribute the at least one first data to at least one slave smart terminal (through listening server 2, etc., in fig. 4, para 45), wherein the first to be translated data comprises a first to be translated text or a first to be translated speech from a user of the smart terminal as the host (e.g., SIP-based user 1 in English, as the talking person, and the input can be text or speech as discussed above), and a language of the at least one first data corresponds to a language used by the at least one slave smart terminal (e.g., Spanish for user 2, French for user 3, etc., as slave terminal hardware in fig. 13);
in response to a second configuration instruction, configuring the smart terminal as a slave (in the bi-way session in fig. 6); and when the smart terminal acts as the slave (the 602 user 1 can receive a translated version of the communication from the 606 user 2 in fig. 6, para 51), obtaining, by the input device, second to be translated data (e.g., the dialog “Hello” to a potential 606 user 2 in fig. 6), and transmitting the second to be translated data to the server (e.g., a server coupled to user 1 in fig. 4), so as to: through the server, translate the second to be translated data into second data using the language model according to a second translation prompt (the translation module 1106 in fig. 11, with French identified as the destination language, para 51, and completing the language translation from English to French through the VOIP network and server in fig. 6), and transmit the second data to a master smart terminal (e.g., the 606 user 2 in fig. 6), wherein the second to be translated data comprises a second to be translated text or a second to be translated speech from the user of the smart terminal as the slave (input by keyboard or by microphone as discussed above at the 604 user 1, e.g., saying “Hello” in fig. 6), and a language of the second data corresponds to a language used by the master smart terminal (e.g., the “Hello” of user 1 is translated to “Bonjour” in French with the same meaning and displayed at the 606 user 2 as the master terminal, para 51).
However, Chao-Suren does not explicitly teach wherein the language model is a large language model (LLM) or that the server is a cloud server.
Official Notice is taken that large language models (LLMs) used in natural language processing, including translation, with defined inputs and outputs, and cloud servers for natural language translation are well known in the art, for the benefits of large and complex computation capacity related to language processing, including translation, and of relatively larger resources, including databases and libraries serving the language processing, in a cloud server environment.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the LLM and the cloud server, as well known in the art, to the model and the server in the smart terminal, as taught by Chao-Suren, for the benefits discussed above.
Claim 20 recites a method and has been analyzed and rejected according to claims 1, 13 above.
Claim 2: Chao-Suren further teaches, according to claim 1 above, wherein the master smart terminal is further configured for:
determining whether all the slave smart terminals use a same language as the first user (430 user 4 is identified as using the same language as the sending user 1, para 46, and by the setup configuration of a home language for different users in fig. 3A, with “Add New” for adding more users and a preferred language of English with an alternative of French);
in response to all the slave smart terminals using the same language as the first user, distributing the first to be translated data to each of the slave smart terminals for output (the server listening to user 1 does not need to perform translation and instead directly transmits the non-translated data to user 4 because user 4’s preferred language is English, which has been identified, para 46, similar to the registered alternative language above); and
in response to a language used by at least one terminal in the slave smart terminals being different from a language used by the first user (e.g., the 404 user 1, as the first user, uses English, while the 426 user 2 is served by server 2 in Spanish in fig. 4, and the language can be identified at one of the servers 1, 2, …, 6 while receiving the incoming speech or text, para 45, 59), transmitting the first to be translated data to at least one first terminal in the slave smart terminals for output (transmitting from the 402 talking server 1 to the 406 listening server 2 coupled to the 426 user 2 in fig. 4), translating the first to be translated data into the at least one first data through the first LLM according to the first translation prompt (based on the identified difference between the sending language and the receiving language, and the discussion in claim 1 above), and distributing the at least one first data to at least one second terminal in the slave smart terminals for output (the computer owned by the 426 user 2 and coupled to the listening server 2 in fig. 4), wherein the language of the at least one first data corresponds to a language used by the at least one second terminal (e.g., Spanish in fig. 4), and wherein a language used by the at least one first terminal is the same as the language used by the first user (English used by the 402 talking server 1 with the 404 user 1 in fig. 4), and the language used by the at least one second terminal is different from the language used by the first user (user 1, English, while user 2, Spanish, in fig. 4); and
wherein the slave smart terminal is further configured for:
determining whether a language used by the second user is the same as the language used by the first user (via the listening server 2 receiving the incoming speech from user 1, or the talking server 1 receiving from user 2, in figs. 4, 6-7, para 45, 59);
in response to the language used by the second user being the same as the language used by the first user (discussed above, and directly forwarding the English-language broadcast audio packets to the 430 user 4, para 46), transmitting the second to be translated data to the master smart terminal for output (direct transmission where the destination language and the source language are not different, and the discussion above); and in response to the language used by the second user not being the same as the language used by the first user, translating the second to be translated data into the second data through the second LLM according to the second translation prompt (e.g., the 412 coupled to user 5 with the Russian language in fig. 4), and transmitting the second data to the master smart terminal for output (the translated version from the server 1 to user 1, e.g., from Russian to English as discussed above).
Claim 15 has been analyzed and rejected according to claims 13, 2 above.
Claims 3, 14 are rejected under 35 U.S.C. 103 as being unpatentable over Chao-Suren (above) in view of Huang et al. (CN 117236347 A, hereinafter Huang; the translation and original versions are attached herein, and the translation version is referred to by paragraph under Huang below).
Claim 3: Chao-Suren teaches, according to claim 1 above, the master smart terminal and the slave smart terminal (the computing system in fig. 13 and the discussion in claim 1 above) and the first/second LLMs with cloud servers (the discussion in claim 1 above), except explicitly teaching wherein the master smart terminal comprises a master smart wearable device and/or a master smart mobile terminal; wherein the slave smart terminal comprises a slave smart wearable device and/or a slave smart mobile terminal; and wherein each of the first LLM and the second LLM comprises: a generative artificial intelligence large language model GAILLM and/or a multimodal large language model MLLM.
Huang teaches an analogous field of endeavor by disclosing a multi-user cross-language interactive system (title and abstract, ln 1-12 and a system in fig. 1) wherein a master smart terminal and a slave smart terminal are disclosed (any one of the terminals 102 communicating with the server 104 and a database through a network in fig. 1, para 3-4, p.9, where a sender terminal of interactive text with a source language is the master terminal and a display terminal using a target language is the slave terminal, para 4, p.12, in a users’ interactive environment or among interactive parties, para 3, p.2, para 2, p.9, and para 2, p.12), wherein the master terminal comprises a master smart wearable device and/or a master smart mobile terminal (the terminal 102 in fig. 1, as a sender with the source language, which can be an intelligent watch, hand ring, head device, or portable wearable device in fig. 1, para 4, p.10); wherein the slave smart terminal comprises a slave smart wearable device and/or a slave smart mobile terminal (102 in fig. 1, as the receiver or party having the target language, which can be a portable wearable device, intelligent watch, hand ring, head device, etc., para 4, p.10); and wherein a first LLM and a second LLM are disclosed to comprise: a generative artificial intelligence large language model GAILLM and/or a multimodal large language model MLLM (a fine-tuned large language model used in downstream tasks such as semantic understanding, machine translation, robot question/answer, knowledge map, etc., para 3, p.11, and through artificial-intelligence model training, i.e., an AI large language model or generative artificial intelligence LLM, placed on a cloud or servers, para 2, p.10, and by cloud technology and artificial intelligence, para 4, p.10), for the benefits of accessible, large, and complex computation and natural language processing capacities (accomplished with interactive text translation and interactive text display, para 2, p.2) and improved performance (by increasing translation accuracy, para 4, p.2).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the master smart terminal and the slave smart terminal and wherein the master smart terminal comprises the master smart wearable device and/or the master smart mobile terminal and the slave smart terminal comprises the slave smart wearable device and/or the slave smart mobile terminal; and wherein each of the first LLM and the second LLM comprises: a generative artificial intelligence large language model GAILLM and/or a multimodal large language model MLLM, as taught by Huang, to the master smart terminal and the slave smart terminal and the first/second language models in servers in the multi-user cross-language interactive system, as taught by Chao-Suren, for the benefits discussed above.
Claim 14 has been analyzed and rejected according to claims 13, 3 above.
Claims 4-5, 7-12, 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Chao-Suren (above) in view of Huang (above) and Akiyama et al. (WO 2022113189 A1, hereinafter Akiyama; US 20240370669 A1 is the translation version thereof and is referred to by paragraph under Akiyama, and the original version is attached herein).
Claim 4: the combination of Chao-Suren and Huang teaches, according to claim 3 above, wherein the master smart terminal comprises the master smart wearable device and the master smart mobile terminal (Huang, including the smart watch and mobile phone in fig. 1, and the discussion in claim 3 above), the multi-user cross-language interactive system further comprises a management server (Huang, one of the server cluster of multiple servers, para 4, p.10), and the first to be translated data comprises a text or a speech from the first user (Chao-Suren, voice inputted from the microphone and the text inputted from the keyboard in fig. 13 and the discussion in claim 13 above), except explicitly teaching wherein the master smart wearable device is configured for: obtaining the first to be translated data, and transmitting the first to be translated data to the master smart mobile terminal; the master smart mobile terminal is configured for: transmitting the first to be translated data to the management server; and the management server is configured for: generating the first translation prompt; converting, by using a speech-to-text engine, the speech in the first to be translated data into a first to be translated text, wherein the speech-to-text engine is configured on the management server or a speech-to-text server; translating, through the first LLM, the first to be translated text or the text in the first to be translated data into at least one first text data according to the first translation prompt, wherein the first LLM is configured on the management server or a model server; converting, by using a text-to-speech engine, the at least one first text data into at least one first speech data, wherein the text-to-speech engine is configured on the management server or a text-to-speech server; and distributing the at least one first text data and/or the at least one first speech data as the at least one first data to the at least one corresponding slave smart terminal for output.
Akiyama teaches an analogous field of endeavor by disclosing a multi-user cross-language interactive system (title and abstract, ln 1-15 and a system in fig. 1) and the multi-user cross-language interactive system further comprises a management server (part of mobile functions, including speech signal processing implemented by a speech signal processor 202, and part of a communicator 206 in fig. 2) wherein a master smart wearable device (wearable speech input/output apparatus 2 in fig. 1) and a master smart mobile terminal are disclosed (part of mobile terminal 1 in fig. 1) and
wherein the master smart wearable device (2 in fig. 1) is configured for: obtaining the first to be translated data (via a microphone 30 in fig. 2 and step S1 in fig. 7, and step 201 in fig. 10), and transmitting the first to be translated data to the master smart mobile terminal (S2 in fig. 7, S202 in fig. 10 and via BLUETOOTH 209, para 96); the master smart mobile terminal is configured for: transmitting the first to be translated data to the management server (through system bus 213 in fig. 2); and
the management server is configured for:
generating the first translation prompt (posting a text to the translation server through the communicator 206 in fig. 2);
converting, by using a speech-to-text engine (speech recognizer 203 in fig. 2), the speech in the first to be translated data into a first to be translated text (converting the speech to text by speech recognizer 203 in fig. 2, para 89), wherein the speech-to-text engine is configured on the management server or a speech-to-text server (within the mobile device 2 in fig. 1);
translating, through the first LLM (translation server 5, at S4 in fig. 7 or S203 in fig. 10), the first to be translated text or the text in the first to be translated data into at least one first text data according to the first translation prompt, wherein the first LLM is configured on the management server or a model server (the text in language 1 from text in language 2 through step 203 in fig. 10 or S4 in fig. 7);
converting, by using a text-to-speech engine (speech synthesizer 204 in fig. 2), the at least one first text data into at least one first speech data, wherein the text-to-speech engine is configured on the management server or a text-to-speech server (through a network 4 to the translation server 5 in fig. 1); and
distributing the at least one first text data and/or the at least one first speech data as the at least one first data to the at least one corresponding slave smart terminal for output (output speech in language 2 as speech in fig. 6 or display language 1 as text), for the benefits of improving the operation of the language translation (by improving usability in an interactive manner, para 76; accuracy is improved by a trained machine learning function such as deep learning, para 92; and speech quality can be improved by using a headset, para 101).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the master smart wearable device and the master smart mobile terminal and wherein the master smart wearable device is configured for: obtaining the first to be translated data, and transmitting the first to be translated data to the master smart mobile terminal; the master smart mobile terminal is configured for: transmitting the first to be translated data to the management server; and the management server is configured for: generating the first translation prompt; converting, by using a speech-to-text engine, the speech in the first to be translated data into a first to be translated text, wherein the speech-to-text engine is configured on the management server or a speech-to-text server; translating, through the first LLM, the first to be translated text or the text in the first to be translated data into at least one first text data according to the first translation prompt, wherein the first LLM is configured on the management server or a model server; converting, by using a text-to-speech engine, the at least one first text data into at least one first speech data, wherein the text-to-speech engine is configured on the management server or the text-to-speech server; and distributing the at least one first text data and/or the at least one first speech data as the at least one first data to the at least one corresponding slave smart terminal for output, as taught by Akiyama, to the master smart wearable device and the master smart mobile terminal in the multi-user cross-language interactive system, as taught by the combination of Chao-Suren and Huang, for the benefits discussed above.
Claim 5 has been analyzed and rejected according to claim 4 above, and the combination of Chao-Suren, Huang, and Akiyama further teaches wherein, after the second data is output (Akiyama, through the speech outputter 205 in fig. 2), the master smart wearable device is further configured for: obtaining a third to be translated data, and transmitting the third to be translated data to the master smart mobile terminal (Akiyama, input speech in language 2 at S207 in fig. 10), wherein the third to be translated data comprises a speech or a text from the first user (Akiyama, e.g., speech from the wireless earphone in fig. 10); the master smart mobile terminal is further configured for: determining at least one first target language and determining at least one first target terminal from the slave smart terminals based on a conversation mode (Akiyama, through a language registration 56, etc., in fig. 5, and the details in fig. 6 to register a language as the priority of translation language 62 in fig. 6), and transmitting the third to be translated data and information of the at least one first target language and the at least one first target terminal to the management server (Akiyama, on the mobile terminal 1 in fig. 6 and serving the speech recognizer 203 and speech synthesizer 204, para 89, and the discussion in claim 4 above); and the management server is further configured for:
converting, by using the speech-to-text engine, the speech in the third to be translated data into a third to be translated text (Akiyama, converting the speech to text by speech recognizer 203 in fig. 2, para 89 and discussion in claim 4 above);
generating a third translation prompt according to the information of the at least one first target language (Akiyama, posting a text to the translation server through the communicator 206 in fig. 2);
translating, through the first LLM (Akiyama, translation server 5, at S4 in fig. 7 or S203 in fig. 10), the third to be translated text or the text in the third to be translated data into at least one third text data according to the third translation prompt (Akiyama, the text in language 1 from text in language 2 through step 203 in fig. 10 or S4 in fig. 7 and the discussion in claim 4 above); and
converting, by using the text-to-speech engine (Akiyama, speech synthesizer 204 in fig. 2), the at least one third text data into at least one third speech data (through a network 4 to the translation server 5 in fig. 1), and
distributing the at least one third text data and/or the at least one third speech data to the at least one first target terminal according to the information of the at least one first target terminal (Akiyama, output speech in language 2 as speech in fig. 6 or display language 1 in text and the discussion in claim 4 above).
Claim 7 has been analyzed and rejected according to claim 5 above (the slave smart mobile terminal and slave smart wearable device are mapped to the master smart mobile terminal and master smart wearable device of claim 5, and the translation and flow are similar to figs. 5, 10 of Akiyama’s disclosure as discussed in claims 4-5 above).
Claim 8 has been analyzed and rejected according to claim 5 above (the target language and target terminal for the slave are similar to the target language and the target terminal for the master, etc., as discussed in claims 4-5 above and through the configuration, Akiyama, figs. 5-6 and Chao-Suren’s fig. 3A).
Claim 9 has been analyzed and rejected according to claims 3-4 above.
Claim 10 has been analyzed and rejected according to claims 3-5 above.
Claim 11 has been analyzed and rejected according to claims 7, 9 above.
Claim 12: the combination of Chao-Suren, Huang, and Akiyama further teaches, according to claims 3-4 above, wherein the multi-user cross-language interactive system further comprises a management server (the discussion in claim 4 above), and the second LLM is configured on the management server (Akiyama, a combination of the translation server and the part of the mobile terminal in figs. 6, 10), the slave smart mobile terminal is further configured for:
switching an operation mode to a conference mode in response to a first switching instruction; and in the conference mode (Chao-Suren, switching between n-way and bi-directional communications at step 202 in fig. 2, and the discussion in claim 1 above), determining at least one second target language according to at least one second target terminal indicated by a selecting action of a user (Chao-Suren, through the user interface in fig. 3A), and transmitting the second to be translated data and information of the at least one second target terminal and the at least one second target language to the management server (Chao-Suren, each of the terminals is set up with a language and transmits the language identification and user ID for language translation, as discussed in claim 1 above, and Akiyama, through the setup user interface in figs. 5-6); the management server is configured for: generating the second translation prompt according to the information of the at least one second target language; and translating, through the second LLM, the second to be translated data into at least one second data corresponding to the at least one second target language according to the second translation prompt, and distributing the at least one second data to the at least one second target terminal for output according to the information of the at least one second target terminal; the slave smart mobile terminal is further configured for: switching the operation mode to a tour guide mode in response to a second switching instruction; and in the tour guide mode, transmitting the second to be translated data and language information of the first user to the management server; the management server is further configured for: generating the second translation prompt according to the language information of the first user; and translating, through the second LLM, the second to be translated data into the second data corresponding to the language of the first user according to the second translation prompt, and transmitting the second data to the master smart terminal for output (the discussion in claim 4 above, about the management server).
Claim 16 has been analyzed and rejected according to claims 13, 5, 7 above (Chao-Suren, through the user interface for adding a new user with a preferred language, etc., in fig. 3A, and Akiyama, adding the new user with language registration in figs. 5-6).
Claim 17 has been analyzed and rejected according to claims 16, 4 above.
Claim 18 has been analyzed and rejected according to claims 14, 5 above.
Claim 19 has been analyzed and rejected according to claims 14, 12 above.
Allowable Subject Matter
Claim 6 is objected to as being dependent upon rejected base claims 1, 3-5, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LESHUI ZHANG whose telephone number is (571)270-5589. The examiner can normally be reached Monday-Friday, 6:30am-4:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivian Chin can be reached at 571-272-7848. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LESHUI ZHANG/
Primary Examiner,
Art Unit 2695