DETAILED ACTION
This office action is in response to Applicant’s submission filed on 3/20/2024. Claims 1-20 are pending in the application and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 6/02/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1, the claim recites “(a) receiving a set of communication records, the set of communication records representing one or more communications between a first person and a second person,” “(b) receiving a set of analytical parameters associated with the set of communication records,” “(c) generating a plurality of segments from the communication records,” “(d) for each analytical parameter in the set of analytical parameters: determining a subset of segments semantically associated with the respective analytical parameter,” “(e) generating an evaluation of each segment of the respective subset of segments with respect to the respective analytical parameter,” “(f) generating a response to the analytical parameter based on the evaluations of the segments,” and “(g) outputting a full evaluation of the set of communication records based on the set of analytical parameters and the respective generated responses to the analytical parameters”. Limitations (a) – (g) recite mental processes that may be practically performed in the mind using pen and paper. For example, limitation (a) can be done by a person receiving chat logs between two people. Limitation (b) can be done by a person receiving a set of parameters related to the chat logs. Limitation (c) can be done by a person determining different sections of a set of chat logs. Limitation (d) can be done by a person determining that a subset of input text is semantically related to a parameter. Limitation (e) can be done by a person evaluating each section of an input text corresponding to a specific parameter. Limitation (f) can be done by a person determining a response to a parameter based on evaluations of different text sections. Limitation (g) can be done by a person evaluating a chat log based on specific parameters and responses to those parameters, and determining an output evaluation. Under its broadest reasonable interpretation when read in light of the specification, the actions of “receiving,” “generating,” “determining,” and “outputting” encompass mental processes practically performed in the human mind by observation, evaluation, and judgment using pen and paper. Accordingly, the claim recites an abstract idea (Step 2A, Prong One).
The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of “(h) using a trained large language model (‘LLM’).” Further, limitations (a) - (g) are recited as being performed by a computer. In limitations (a) - (b), the computer is used as a tool to perform the generic computer function of receiving data. In limitations (c) - (g), the computer is used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that it amounts to no more than mere instructions to apply the exception using a generic computer. Limitation (h) provides nothing more than mere instructions to implement an abstract idea on a generic computer. The large language model (LLM) recited in limitation (h) is used to perform limitations (e) – (f) without placing any limits on how the model functions. Rather, the claim recites only the outcomes and does not include any details on how the outcomes are accomplished. Additionally, limitation (h) merely indicates a field of use or technological environment in which the judicial exception is performed. This type of limitation merely confines the use of the abstract idea to a particular technological environment (LLMs) and thus fails to add an inventive concept to the claims. Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to an abstract idea (Step 2A: YES).
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the recitation of a computer to perform limitations (a) – (g) amounts to no more than mere instructions to apply the exception using a generic computer component. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept (Step 2B).
Regarding claims 13 and 19, the claims are rejected under an analysis similar to that of claim 1.
Similarly, dependent claims 2-12, 14-18, and 20 recite additional steps that are themselves abstract ideas and fail to add meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment and using a computer to perform the abstract idea.
Claims 2 and 14 read on someone using a generic computer to determine numerical representations of parameters and numerical representations of communication segments, and determining associations between parameters and segments based on the numerical representations.
Claims 3, 15, and 20 recite using a generic computer component to perform the mental process of generating a justification for a response.
Claims 4 and 16 read on a person creating a prompt using a text segment, a parameter, and an evaluation format and inputting the prompt into ChatGPT using a generic computer.
Claims 5 and 17 recite a person determining a specific answer format to include in a prompt.
Claims 6 and 18 read on a person using a generic computer to determine scores for a text segment and a parameter, and determining associations between them based on the score.
Claims 7, 9, and 11 read on someone using a generic computer to determine a response to a parameter based on a range of determined scores.
Claims 8 and 12 read on someone using a generic computer to determine justifications for the score evaluation (best and worst) of a communication segment corresponding to a parameter.
Claim 10 reads on someone using a generic computer to determine a summary of the justifications for the evaluations of each communication segment corresponding to a parameter.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 3-5, 11, 13, 15-17, and 19-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Anwade et al. (US 20250200491 A1; hereinafter referred to as Anwade).
Regarding claim 1, Anwade teaches: a method comprising: receiving a set of communication records, the set of communication records representing one or more communications between a first person and a second person ([0107] An interaction may be an interaction or a recording of an interaction via a digital channel, e.g. a text-based chat such as an online chat or a text chat using an application between an agent device of an agent and a customer device of a customer. An interaction recording can be a textual representation of an interaction such as an audio transcription, a chat transcription an email or a transcript of any other form of digital communication);
receiving a set of analytical parameters associated with the set of communication records ([0117] evaluation questions and evaluation parameters of evaluation forms such as “Number of elevated calls to a supervisor for an agent” may be extracted from an evaluation form and used in the generation of an evaluation prompt. The extraction of content present in evaluation forms may be analyzed using Gen AI-based LLMs);
generating a plurality of segments from the communication records ([0108] An interaction, e.g. an interaction recording such as a transcript of a conversation between an agent and a customer may include interaction data items. Interaction data items can be excerpts or snippets of an interaction recording, e.g. input from an agent or input from a customer in an interaction);
for each analytical parameter in the set of analytical parameters: determining a subset of segments semantically associated with the respective analytical parameter ([0117] a service may identify chunks of an interaction recording that are most relevant to an evaluation prompt created by a user by identifying a similarity of chunks, e.g. chunks that may include interaction data items, and a prompt, e.g. an evaluation prompt. This may be done by computing a similarity between a vector representation of a prompt and vectors of chunks, e.g. to identify a semantic similarity between chunks of an interaction recording and the evaluation prompt. The evaluation prompt can contain parameters.);
generating, using a trained large language model ("LLM"), an evaluation of each segment of the respective subset of segments ([0105] processor 421 of computing device 420 may be configured to create a plurality of evaluation prompts for evaluating interaction data items of one or more interactions. For example, a processor such as processor 403, 411 and/or 421 may be configured to create a plurality of evaluation prompts using machine learning, e.g. generative artificial intelligence (Gen AI) and a large language model) with respect to the respective analytical parameter ([0109] Evaluation prompts may include threshold parameters for interaction data items present in interactions and may allow comparing an interaction data item for an interaction, e.g. time for handling a customer agent interaction, with a threshold parameter for the interaction data items);
and generating, using the trained LLM, a response to the analytical parameter based on the evaluations of the segments ([0109] Generated evaluation results may include answers to evaluation prompts that have been identified in an interaction. They may further include reasons for an evaluation result, may identify interaction data items that are above a threshold, e.g. meet or exceed expectations when compared to a threshold value for an interaction data item);
and outputting a full evaluation of the set of communication records ([0196] An example output of an evaluation result 924 for an evaluation prompt created for the evaluation of an interaction data item may include: an answer to the question present in the prompt, a reason for the answer, positive feedback, negative feedback, suggestions for improvements in dealing with the question and a grade how a question was handled) based on the set of analytical parameters and the respective generated responses to the analytical parameters ([0117] For example, questions of evaluation forms that have been embedded into evaluation prompts may be used in a similarity search to assess whether interaction items within an interaction transcript share or relate to the same content of as content included in an evaluation question. In a similarity search, a service may identify chunks of an interaction recording that are most relevant to an evaluation prompt created by a user by identifying a similarity of chunks, e.g. chunks that may include interaction data items, and a prompt, e.g. an evaluation prompt).
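For illustrative purposes only, and not as a characterization of Anwade’s implementation or of the claimed invention, the following sketch shows one simple way a chat transcript could be divided into per-turn segments of the kind Anwade refers to as chunks or interaction data items ([0108]). The transcript text and the per-turn splitting rule are hypothetical.

```python
# Illustrative sketch only; not code from Anwade or the instant application.
transcript = (
    "Agent: Thanks for calling, how can I help?\n"
    "Customer: My order arrived damaged.\n"
    "Agent: I apologize; I will send a replacement today."
)

# One segment per speaker turn; a real system might instead chunk by token
# count, topic, or time window.
segments = [line.strip() for line in transcript.splitlines() if line.strip()]
print(segments)
```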
Regarding claim 3, Anwade teaches: the method of claim 1, wherein generating the response to the analytical parameter comprises generating, using the trained LLM, a justification associated with the response to the analytical parameter ([0109] Generated evaluation results may include answers to evaluation prompts that have been identified in an interaction. They may further include reasons for an evaluation result).
Regarding claim 4, Anwade teaches: the method of claim 1, further comprising: generating, for each analytical parameter and associated segment ([0108] An interaction, e.g. an interaction recording such as a transcript of a conversation between an agent and a customer may include interaction data items. Interaction data items can be excerpts or snippets of an interaction recording, e.g. input from an agent or input from a customer in an interaction) of the respective subset of segments, a prompt for the LLM, the prompt comprising the analytical parameter, the segment ([0117] implement LLM in applications may be used to generate evaluation prompts from evaluation prompt templates 602 and transcripts from interaction analytics 606C which may include interaction data items), and an indication of an evaluation format to be generated ([0117] evaluation questions and evaluation parameters of evaluation forms such as “Number of elevated calls to a supervisor for an agent” may be extracted from an evaluation form and used in the generation of an evaluation prompt);
providing the prompt to the LLM ([0117] Input to a LLM may be in form of a prompt, e.g. an evaluation prompt or a training recommendation prompt or may include input that may be derived from an interaction recording);
and wherein generating the respective evaluation of each segment is based on the prompt ([0105] A processor such as processor 403 of computing device 402 processor 411 of device 410, and/or processor 421 of computing device 420 may be configured to generate evaluation results for the interaction data items using the plurality of evaluation prompts and machine learning).
Regarding claim 5, Anwade teaches: the method of claim 4, wherein the evaluation format comprises one of (a) a “yes” or "no" answer, (b) a selection from multiple choices, (c) a numerical rating ([0109] may identify interaction data items that are below a threshold, e.g. don't meet expectations when compared to a threshold value for an interaction data item and can be summarized in a grade for an evaluation question within a certain scale, e.g. customer satisfaction for handling a call was rated 7 out of 10 (on a scale from 1 to 10, 10 being the highest and 1 being the lowest score)), or (d) a free-form answer ([0108] Evaluation prompts may include, for example, questions that are used in the evaluation of an agent such as “What is the average handling time for a call for an agent?”).
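For illustrative purposes only, the sketch below shows how an evaluation prompt of the kind mapped for claims 4 and 5 might combine an analytical parameter, a transcript segment, and an answer-format instruction before being provided to a trained LLM. The template wording and format labels are hypothetical and are not drawn from Anwade or the claims.

```python
# Illustrative sketch only; the template text and format names are hypothetical.
EVALUATION_FORMATS = {
    "yes_no": "Answer 'yes' or 'no' and give a one-sentence reason.",
    "rating": "Answer with a rating from 1 to 10 and give a one-sentence reason.",
    "free_form": "Answer in one or two sentences.",
}

def build_prompt(parameter: str, segment: str, fmt: str) -> str:
    """Compose an evaluation prompt from a parameter, a segment, and a format."""
    return (
        "You are evaluating a customer-service interaction.\n"
        f"Question: {parameter}\n"
        f"Transcript segment:\n{segment}\n"
        f"{EVALUATION_FORMATS[fmt]}"
    )

prompt = build_prompt(
    "Did the agent resolve the customer's issue?",
    "Customer: My order is late.\nAgent: I have reshipped it overnight.",
    "yes_no",
)
print(prompt)  # this string would then be submitted to the trained LLM
```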
Regarding claim 11, Anwade teaches: the method of claim 1, wherein generating, using the trained LLM, the response to the analytical parameter comprises: generating the response based on the respective evaluation for the analytical parameter ([0109] Evaluation prompts may include threshold parameters for interaction data items present in interactions and may allow comparing an interaction data item for an interaction, e.g. time for handling a customer agent interaction, with a threshold parameter for the interaction data items, e.g. average handling time of customer agent interactions among all agents of a contact center. Generated evaluation results may include answers to evaluation prompts that have been identified in an interaction) having a worst score of the respective evaluations ([0109] They may further include reasons for an evaluation result, may identify interaction data items that are above a threshold, e.g. meet or exceed expectations when compared to a threshold value for an interaction data item, may identify interaction data items that are below a threshold, e.g. don't meet expectations when compared to a threshold value for an interaction data item and can be summarized in a grade for an evaluation question within a certain scale, e.g. customer satisfaction for handling a call was rated 7 out of 10 (on a scale from 1 to 10, 10 being the highest and 1 being the lowest score). A worst score can be a score for an interaction data item that does not meet a threshold score.).
Regarding claim 13, Anwade teaches: a system comprising: a non-transitory computer-readable medium; and one or more processors communicatively connected to the non-transitory computer-readable medium, the one or more processors configured to execute processor-executable instructions stored in the non-transitory computer-readable medium to cause the one or more processors… ([0102] Embodiments of the invention may include one or more article(s) (e.g. memory 320 or storage 330) such as a computer or processor non-transitory readable medium, or a computer or processor non-transitory storage medium). The rest of the claim recites similar limitations as claim 1 and therefore is rejected similarly.
Regarding claim 15, it recites similar limitations as claim 3 and therefore is rejected similarly.
Regarding claim 16, it recites similar limitations as claim 4 and therefore is rejected similarly.
Regarding claim 17, it recites similar limitations as claim 5 and therefore is rejected similarly.
Regarding claim 19, Anwade teaches: a non-transitory computer-readable medium comprising processor-executable instructions configured to cause one or more processors… ([0102] Embodiments of the invention may include one or more article(s) (e.g. memory 320 or storage 330) such as a computer or processor non-transitory readable medium, or a computer or processor non-transitory storage medium). The rest of the claim recites similar limitations as claim 1 and therefore is rejected similarly.
Regarding claim 20, it recites similar limitations as claim 3 and therefore is rejected similarly.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 7-8, 10, 12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Anwade in view of Desgarennes et al. (US 20240289686 A1; hereinafter referred to as Desgarennes).
Regarding claim 2, Anwade teaches: the method of claim 1… generating, using the trained ML model, segment embeddings for each segment of the plurality of segments… ([0171] Vectors (e.g., ordered lists of numbers) may be generated by converting textual or other complex data, e.g. present in interaction recordings or chunks of interaction recordings, into numerical representations using techniques like word embeddings, sentence embeddings, or other feature extraction methods).
Anwade does not explicitly disclose, but Desgarennes discloses: generating, using a trained machine learning ("ML") model ([0053] the trained ML model may be utilized to process the one or more of the input, systemic context, specific context, environment guidelines, and/or the intent objectives and output one or more prompts responsive to the input), analytical parameter embeddings for each analytical parameter of the set of analytical parameters… ([0053] an embedding may be generated for the input and/or intent objective singularly and/or collectively. The one or more embeddings may be stored in a data store 112 and/or integration manager 106 for future use);
and wherein determining the subset of segments semantically associated with the respective analytical parameter is based on the respective analytical parameter embedding and the segment embeddings ([0046] an embedding may be generated singularly and/or for one or more portions of each of the input, systemic context, specific context, and/or environment guidelines based on the granularity desired within the system. The embeddings may then be utilized to identify one or more semantically associated intent objectives from a data store 112 which in this instance may be configured as an embedding object memory).
Anwade and Desgarennes are considered analogous art in the field of large language models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Anwade with the teachings of Desgarennes because doing so would allow for embeddings to be determined for input data, improving prompt generation for an LLM by using contextual information and leading to better evaluation of a communication segment (Desgarennes [0043] The specific contextual indicators which may be gathered by the application 104 and/or application 105 and/or the director service 120 in addition to the directly provided input serve the purpose of providing additional contextual information which can be utilized to refine and improve prompt generation).
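For illustrative purposes only, the following sketch shows the kind of embedding-and-similarity association mapped for claims 2 and 14: numerical representations are generated for each analytical parameter and each segment, and segments are associated with a parameter by cosine similarity. The embed() function is a stand-in for a trained embedding model, and the similarity threshold is hypothetical; nothing below is taken from Anwade, Desgarennes, or the claims.

```python
# Illustrative sketch only; embed() is a stand-in for a trained embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: a deterministic unit vector derived from the text."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def associate_segments(parameters, segments, threshold=0.3):
    """For each parameter, keep the segments whose embeddings are most similar."""
    seg_vecs = np.stack([embed(s) for s in segments])
    associations = {}
    for p in parameters:
        p_vec = embed(p)
        sims = seg_vecs @ p_vec  # cosine similarity; vectors are unit-normalized
        associations[p] = [s for s, sim in zip(segments, sims) if sim >= threshold]
    return associations

segments = [
    "Agent: How can I help?",
    "Customer: My order is late.",
    "Agent: I will escalate this to a supervisor.",
]
parameters = ["Did the agent offer to escalate the issue?"]
print(associate_segments(parameters, segments))
```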
Regarding claim 7, Anwade teaches: the method of claim 1. Anwade does not explicitly teach, but Desgarennes teaches: wherein generating, using the trained LLM, the response to the analytical parameter comprises: generating the response based on the respective evaluation for the analytical parameter having a best score of the respective evaluations ([0077] the evaluation may be performed by generating a confidence score for one or more components of the output and comparing the one or more confidence scores to a threshold value. The confidence score may be determined using evaluation metrics developed for the ML model and the output. The confidence score may be a measure of the output's responsiveness to the input and how well the output satisfies the environment guidelines based on the one or more evaluation metrics).
Anwade and Desgarennes are considered analogous art in the field of large language models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Anwade with the teachings of Desgarennes because doing so would allow for output from an LLM to be evaluated using a scoring method to determine responsiveness to an input prompt, leading to improved responses from the LLM and better evaluations for analytical parameters (Desgarennes [0077] The confidence score may be determined using evaluation metrics developed for the ML model and the output. The confidence score may be a measure of the output's responsiveness to the input and how well the output satisfies the environment guidelines based on the one or more evaluation metrics. At operation 214, the output evaluator may attempt to modify one or more aspects of the model output and/or data mask portions of the output to make the output responsive to the input and/or the guidelines as required).
Regarding claim 8, the combination of Anwade and Desgarennes teaches: the method of claim 7. Anwade further teaches: wherein generating, using the trained LLM, the response to the analytical parameter further comprises: generating, using the trained LLM, a justification based on the respective evaluation for the analytical parameter having the best score of the respective evaluations ([0109] Generated evaluation results may include answers to evaluation prompts that have been identified in an interaction. They may further include reasons for an evaluation result, may identify interaction data items that are above a threshold, e.g. meet or exceed expectations when compared to a threshold value for an interaction data item);
and wherein outputting the full evaluation comprises outputting the justification ([0140] A generated evaluation result may include items for a question such as, for example: an answer to a question, a reason for the answer, positive feedback, negative feedback, suggestions for improvements, and a grade for an assessed interaction date item).
Regarding claim 10, the combination of Anwade and Desgarennes teaches: the method of claim 7. Anwade further teaches: wherein generating, using the trained LLM ([0137] A system may use generative AI models and large language models to generate evaluation results (810) from QM evaluation forms and interaction transcripts), the response to the analytical parameter further comprises: generating, using the trained LLM, a justification for each respective evaluation for the analytical parameter ([0109] Generated evaluation results may include answers to evaluation prompts that have been identified in an interaction. They may further include reasons for an evaluation result, may identify interaction data items that are above a threshold, e.g. meet or exceed expectations when compared to a threshold value for an interaction data item, may identify interaction data items that are below a threshold, e.g. don't meet expectations when compared to a threshold value for an interaction data item and can be summarized in a grade for an evaluation question);
generating, using the trained LLM, a summary justification ([0231] Table 1 is an example summary of evaluation results for an agent 1. The evaluation results shown in Table 1 may include focus areas, e.g. CSAT and Productivity and analyzed behaviors of interactions for the focus areas) based on the generated justifications for the respective evaluation for the analytical parameters ([0196] An example output of an evaluation result 924 for an evaluation prompt created for the evaluation of an interaction data item may include: an answer to the question present in the prompt, a reason for the answer, positive feedback, negative feedback, suggestions for improvements in dealing with the question and a grade how a question was handled);
and wherein outputting the full evaluation comprises outputting the justification ([0140] A generated evaluation result may include items for a question such as, for example: an answer to a question, a reason for the answer, positive feedback, negative feedback, suggestions for improvements, and a grade for an assessed interaction date item).
Regarding claim 12, the combination of Anwade and Desgarennes teaches: the method of claim 7. Anwade further teaches: wherein generating, using the trained LLM, the response to the analytical parameter further comprises: generating, using the trained LLM, a justification based on the respective evaluation for the analytical parameter having the worst score of the respective evaluations ([0109] They may further include reasons for an evaluation result, may identify interaction data items that are above a threshold, e.g. meet or exceed expectations when compared to a threshold value for an interaction data item, may identify interaction data items that are below a threshold, e.g. don't meet expectations when compared to a threshold value for an interaction data item and can be summarized in a grade for an evaluation question within a certain scale, e.g. customer satisfaction for handling a call was rated 7 out of 10 (on a scale from 1 to 10, 10 being the highest and 1 being the lowest score). A worst score can be a score for an interaction data item that does not meet a threshold score.);
and wherein outputting the full evaluation comprises outputting the justification ([0140] A generated evaluation result may include items for a question such as, for example: an answer to a question, a reason for the answer, positive feedback, negative feedback, suggestions for improvements, and a grade for an assessed interaction date item).
Regarding claim 14, it recites similar limitations as claim 2 and therefore is rejected similarly.
Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Anwade in view of Gajek et al. (US 20240289559 A1; hereinafter referred to as Gajek).
Regarding claim 6, Anwade teaches: the method of claim 1. Anwade does not explicitly teach, but Gajek teaches: wherein determining the subset of segments semantically associated with the respective analytical parameter comprises: for each analytical parameter: for each segment: providing, to a cross-encoder for each segment, the respective analytical parameter and respective segment ([0368] FIG. 15 illustrates a cross-encoder modeling system, configured in accordance with one or more embodiments. The cross-encoder modeling system accepts as input both a query portion 1502 and a text portion 1504. The query and text portions are separated in the input by a separator 1506. The query portion can represent the analytical parameter and the text portion can represent the segment.);
and obtaining a score for the respective analytical parameter and respective segment ([0368] The query and text portions are separated in the input by a separator 1506. The cross-encoder modeling system that employs a number of layers of cross-linked neurons 1508 to produce a relevance score 1510);
and associating one or more segments with each analytical parameter based on the respective score ([0372] If it is determined that the relevance score does not exceed the designated threshold, then at 1414 the selected text portion is excluded for query analysis. If instead it is determined that the relevance score does exceed the designated threshold, then at 1416 the selected text portion is included for query analysis).
Anwade and Gajek are considered analogous art in the field of large language models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Anwade with the teachings of Gajek because doing so would allow for the use of a cross-encoder to determine relevancy between an input segment and a parameter, leading to relevancy scores that can be used to improve the accuracy of an LLM response (Gajek [0371] the designated threshold may be determined so as to select a particular number or proportion of the text portions as relevant. As another example, the designated threshold may be determined so as to select more or fewer text portions as relevant, which may involve various tradeoffs. For instance, setting a lower designated threshold may result in selecting more documents as relevant, potentially leading to improved accuracy in answering the query at the expense of relatively greater cost and compute time).
Regarding claim 18, it recites similar limitations as claim 6 and therefore is rejected similarly.
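For illustrative purposes only, the sketch below shows a cross-encoder relevance-scoring step of the general kind described in Gajek: the analytical parameter (query) and a segment (text) are read jointly, a single relevance score is produced per pair, and segments are kept only if the score clears a threshold. It assumes the sentence-transformers CrossEncoder API and a publicly available MS MARCO checkpoint; neither the library nor the threshold value is taken from Gajek.

```python
# Illustrative sketch only; assumes the sentence-transformers package and a
# public cross-encoder checkpoint, neither of which is drawn from Gajek.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

parameter = "Did the agent apologize for the delay?"
segments = [
    "Agent: I'm sorry your order was delayed.",
    "Customer: What is your return policy?",
]

# The cross-encoder reads each (query, text) pair jointly and emits one
# relevance score per pair.
scores = model.predict([(parameter, seg) for seg in segments])

THRESHOLD = 0.0  # hypothetical; tuned per application
relevant = [seg for seg, score in zip(segments, scores) if score > THRESHOLD]
print(relevant)
```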
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Anwade in view of Fallon (US 12058091 B1).
Regarding claim 9, Anwade teaches: the method of claim 1. Anwade does not explicitly teach, but Fallon teaches: wherein generating, using the trained LLM, the response to the analytical parameter comprises: generating the response based on an averaging of the respective evaluations for the analytical parameter ([col 4, lines 59-67] When the communication spaces and conversations of the identified communication group have been processed as determined at operations 235 or 250, an overall score for the identified communication group is determined at operation 255 based on the similarity scores of the communication spaces and conversations of the identified communication group. The similarity scores may be combined in any fashion (e.g., summed, averaged, etc.) to produce the overall score).
Anwade and Fallon are considered analogous art in the field of large language models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Anwade with the teachings of Fallon because doing so would allow for evaluation scores for a set of communications to be combined to determine an overall score for evaluation, leading to more accurate LLM output that is based on the average score (Fallon [col 8, lines 4-10] The communication spaces of each communication group are compared to the communication space to determine the weighted attribute scores for the communication group. The attribute scores for communication spaces and conversations of the communication group are combined (e.g., summed, averaged, etc.) to produce the overall score for the communication group as described above).
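For illustrative purposes only, the following sketch shows the type of score combination mapped for claim 9, in which per-segment evaluation scores for an analytical parameter are averaged into a single overall value. The scores themselves are hypothetical and are not taken from Anwade, Fallon, or the claims.

```python
# Illustrative sketch only; the per-segment scores below are hypothetical.
from statistics import mean

# Ratings (1-10) generated for one analytical parameter across three segments.
segment_scores = {"segment_1": 8, "segment_2": 6, "segment_3": 9}

overall = mean(segment_scores.values())
print(f"Overall score for the parameter: {overall:.1f}")  # -> 7.7
```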
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Camenares et al. (US 20220013114 A1) – discloses a method for determining the effectiveness of a meeting by analyzing the spoken content of each user of a meeting to generate a score.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nathan Tengbumroong whose telephone number is (703)756-1725. The examiner can normally be reached Monday - Friday, 11:30 am - 8:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NATHAN TENGBUMROONG/Examiner, Art Unit 2654
/HAI PHAN/Supervisory Patent Examiner, Art Unit 2654