DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
As per Claim 1 (and similarly claim 11 and 20 [where claim 20 recites “response” instead of “answer”]):
“the answer for the question” in the 2nd to last line of claim 1 is interpreted as referring to “an answer to the question” in lines 3-4 of claim 1.
As per Claims 1 and 20:
These two claims are not interpreted as substantial duplicates because “answer” in claim 1 is interpreted as referring to a natural language response while “response” in claim 20 could be other types of responses (including, for example, executable code or a task to be performed).
As per Claim 4 (and similarly claim 14):
“the question-answer combination” in line 2 of claim 4, in lines 3-4 of claim 4, in line 8 of claim 4, in line 11 of claim 4, and in the 2nd to last line of claim 4 (recited twice in the 2nd to last line of claim 4) is interpreted as referring to “a question-answer combination” in the 3rd to last line of claim 1 (not to “a recorded question-answer combination” in lines 1-2 of claim 4).
Recitations of “the question/answer of the [recorded] question-answer combination” in claim 4 are interpreted as having implied/inherent antecedent basis because “question-answer combinations” naturally include a combination of a question and an answer.
Recitations of “the recorded question” and “the recorded answer” in claim 4 are interpreted as having implied/inherent antecedent basis from “a recorded question-answer combination” in lines 1-2 of claim 4 which naturally includes “a recorded question” and “a recorded answer”.
As per Claim 5 (and similarly claims 6-7, and 15-17):
“a list of frequently asked questions and answers” in line 3 of claim 5 is interpreted as referring to a list, where the list comprises “frequently asked questions” and comprises “answers” (not where “a list of frequently asked questions” is generated and where a separate set of “answers” is also generated).
As per Claim 8 (and similarly claim 18):
“the question in the recorded question-answer combination” in lines 1-2 of claim 8 is interpreted as having implied/inherent antecedent basis from “the recorded question-answer combination” (which naturally includes a question and an answer).
As per Claim 9 (and similarly claim 19):
“the recorded question” in line 2 of claim 9 is interpreted as having implied/inherent antecedent basis from “a recorded question-answer combination” in lines 1-2 of claim 4 (where a recorded question-answer combination naturally includes a recorded question and a recorded answer).
Claim Objections
Claim 20 is objected to because of the following informalities:
Line 5 of claim 20 recites “model:” which seems like it should be --model, and:-- (see line 4 of claim 1 and line 7 of claim 11).
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 4-9 and 11-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As per Claim 11:
“mode” in line 7 of claim 11 seems like it may be a typo for “model” (see e.g. line 4 of claim 1 and see “model” in claim 12).
As per Claim 12:
“the machine learning model” in lines 2-3 of claim 12 lacks antecedent basis (claim 11 recites “machine learning mode”).
As per Claim 4 (and similarly claim 14):
Applicant’s intent for claiming “querying the database for a recorded question-answer combination that comprises the question-answer combination” in lines 1-2 of claim 4 is fairly clear (i.e. the database is queried to see if any question-answer combination recorded in the database comprises “a question-answer combination” in the 3rd to last line of claim 1). However, the plain meaning of “querying the database for a recorded question-answer combination that comprises the question-answer combination” is that the database is queried for a particular recorded question-answer combination, where the particular recorded question-answer combination necessarily includes “a question-answer combination” in the 3rd to last line of claim 1. Under this plain meaning, the first 2 “when” conditions of claim 4 will always be satisfied and the last 2 “when” conditions of claim 4 will never be satisfied (because lines 1-2 of claim 4 establish that “a recorded question-answer combination” necessarily includes “the question-answer combination” and thus necessarily includes “the question of the question-answer combination” and “the answer of the question-answer combination”). It is therefore not clear, at least, whether Applicant meant to claim the plain meaning of “a recorded question-answer combination that comprises the question-answer combination” in lines 1-2 of claim 4 (i.e. where the recorded combination necessarily includes the question-answer combination in claim 1), which would render the “when” conditions in claim 4 relatively pointless (because they would either always be satisfied or never be satisfied).
“assessing whether the answer of the recorded question-answer combination comprises the answer of the recorded question-answer combination” in lines 5-6 of claim 4 seems like it may have been intended to be --assessing whether the answer of the recorded question-answer combination comprises the answer of the question-answer combination-- (no “recorded” in line 6 of claim 4) because, as currently claimed, “the recorded question-answer combination” always comprises “the answer of the recorded question-answer combination” (because the recorded question-answer combination is the recorded question-answer combination).
The 2nd and 3rd “when…” steps in claim 4 are fairly clearly intended to be follow-up steps to “assessing whether the answer of the recorded question-answer combination comprises the answer of [the question-answer] combination” in lines 5-6 of claim 4, but the grammar of lines 5-12 of claim 4 is unusual because line 6 of claim 4 just ends with a colon (where the colon, by itself, does not grammatically establish lines 7-12 of claim 4 as follow-up steps that are based on the results of the “assessing” in line 5 of claim 4). Amending “combination:” in line 6 of claim 4 to recite --combination, and:-- should suffice to resolve this issue (similar to how line 2 of claim 4 ends in “combination, and:”, which establishes that the first and fourth “when” steps are follow-up steps for the “querying” in lines 1-2 of claim 4).
“adding the answer to the recorded question” in lines 11-12 of claim 4 is unclear, because this phrase can refer to either:
1. where “the answer” is “add[ed]… to the recorded question”, in which case “the answer” in line 11 of claim 4 is ambiguous (it can refer to either “the answer” in the “recorded question-answer combination” or to “the answer” in “the question-answer combination” or to “an answer to the question” in lines 3-4 of claim 1)
or
2. “the answer to the recorded question” (presumably referring to the answer of the recorded question-answer combination which is most intuitively a recorded answer that is an answer to the recorded question of the recorded question-answer combination) is “add[ed]” to some unspecified entity.
As per Claim 5 (and similarly claim 15):
“the questions-answer combinations” in lines 3-4 of claim 5 lacks antecedent basis (line 2 of claim 5 recites “question-answer combinations” [no “s” after “question”]). It seems like Applicant may have meant to claim “answers to the frequently asked questions”, but this is not clearly the case. As currently claimed, “a list of frequently asked questions and answers to the questions-answer combinations” refers to either: (1) a list of “frequently asked questions and answers”, i.e. “frequently asked questions” and “answers” (answers are typically not “asked”) “to the questions-answer combinations” (which is unusual because questions are not typically asked to combinations, and answers are typically answers to questions and not answers to question-answer combinations); or (2) “a list” which comprises “frequently asked questions” and “answers to the questions-answer combinations” (where, again, answers are typically answers to questions and not answers to question-answer combinations).
As per Claim 6 (and similarly claim 16):
“the automatic provision of an answer to a received question” in line 2 of claim 6 lacks antecedent basis.
As per Claim 8 (and similarly claim 18):
“the question” in line 1 of claim 8 is ambiguous (it can refer to any one of “question” in line 2 of claim 1, the question in the “question-answer combination” in the 3rd to last line of claim 1, or the question in the “recorded question-answer combination” in lines 1-2 of claim 4).
“the questions” in line 3 of claim 8 is unclear, because it can refer to any two or more of “question” in line 2 of claim 1, the question in the “question-answer combination” in the 3rd to last line of claim 1, or the question in the “recorded question-answer combination” in lines 1-2 of claim 4 (and, as discussed in the previous paragraph, “the question” in line 1 of claim 8 is ambiguous, so even assuming “the questions” refers to “the question and the question in the recorded question-answer combination” in lines 1-2 of claim 8, it is not clear which question is the “first” question in “the question and the question in the recorded question-answer combination” in lines 1-2 of claim 8).
As per Claim 9 (and similarly claim 19):
“the question” in line 2 of claim 9 is ambiguous (it can refer to any one of “question” in line 2 of claim 1, the question in the “question-answer combination” in the 3rd to last line of claim 1, or the question in the “recorded question-answer combination” in lines 1-2 of claim 4).
“the questions” in line 1 of claim 9 is unclear, because it can refer to any two or more of “question” in line 2 of claim 1, the question in the “question-answer combination” in the 3rd to last line of claim 1, or the question in the “recorded question-answer combination” in lines 1-2 of claim 4 (and, as discussed above, “the question” in line 1 of claim 8 is ambiguous, so even assuming “the questions” refers to “the question and the question in the recorded question-answer combination” in lines 1-2 of claim 8, it is not clear which question is the “first” question in “the question and the question in the recorded question-answer combination” in lines 1-2 of claim 8).
The dependent claims include the issues of their respective parent claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 recites A method of language processing (mental process, a human can mentally analyze/process language information [e.g. text or speech])
comprising: identifying a question and context for the question in an interaction recording; (mental process, a human can read a transcript of a conversation/”interaction” or listen to a recording of a conversation/”interaction” and mentally identify a question that is present in the transcript/recording and can also mentally identify any relevant utterances surrounding the question as “context for the question”)
identifying whether the context for the question comprises an answer to the question…, (mental process, a human can identify whether any of the relevant utterances contain information that answers the mentally identified question)
and: when the context for the question includes the answer to the question, generating a question-answer combination and storing the combination in a database; (mental process, if the human does identify information that answers the mentally identified question in the relevant utterances, the human can write the mentally identified question and the answer information into a document/book including questions and corresponding answers)
and when the context does not include the answer for the question, storing the question in the database (mental process, if the human does not identify information that answers the mentally identified question in the relevant utterances, the human can write the mentally identified question into the document/book including questions and corresponding answers)
Claim 11 recites …language processing… (mental process, a human can mentally analyze/process language information [e.g. text or speech])
…identify a question and context for the question in an interaction recording; (mental process, a human can read a transcript of a conversation/”interaction” or listen to a recording of a conversation/”interaction” and mentally identify a question that is present in the transcript/recording and can also mentally identify any relevant utterances surrounding the question as “context for the question”)
identify whether the context for the question comprises an answer to the question…, (mental process, a human can identify whether any of the relevant utterances contain information that answers the mentally identified question)
and: when the context for the question includes the answer to the question, generate a question-answer combination and store the combination in a database; (mental process, if the human does identify information that answers the mentally identified question in the relevant utterances, the human can write the mentally identified question and the answer information into a document/book including questions and corresponding answers)
and when the context does not include the answer for the question, store the question in the database (mental process, if the human does not identify information that answers the mentally identified question in the relevant utterances, the human can write the mentally identified question into the document/book including questions and corresponding answers)
Claim 20 recites A method of language processing of interaction recordings, (mental process, a human can mentally analyze/process language information [e.g. text or speech] in transcripts/recordings of conversations)
the method comprising: identifying a question and context for the question in a recording of an interaction; (mental process, a human can read a transcript of a conversation/”interaction” or listen to a recording of a conversation/”interaction” and mentally identify a question that is present in the transcript/recording and can also mentally identify any relevant utterances surrounding the question as “context for the question”)
identifying whether the context for the question comprises a response to the question…: (mental process, a human can identify whether any of the relevant utterances contain information that answers the mentally identified question)
when the context for the question includes the response to the question, creating a question-response pair and storing the pair in a database; (mental process, if the human does identify information that answers the mentally identified question in the relevant utterances, the human can write the mentally identified question and the answer information into a document/book including questions and corresponding answers)
and when the context does not include the response for the question, storing the question in the database (mental process, if the human does not identify information that answers the mentally identified question in the relevant utterances, the human can write the mentally identified question into the document/book including questions and corresponding answers)
This judicial exception is not integrated into a practical application because:
Claim 1 recites A method of language processing comprising: identifying a question and context for the question in an interaction recording; identifying whether the context for the question comprises an answer to the question using a machine learning model, and: when the context for the question includes the answer to the question, generating a question-answer combination and storing the combination in a database; and when the context does not include the answer for the question, storing the question in the database.
Claim 11 recites A system for language processing, the system comprising: a computing device; a memory; and a processor, the processor configured to: identify a question and context for the question in an interaction recording; identify whether the context for the question comprises an answer to the question using a machine learning mode, and: when the context for the question includes the answer to the question, generate a question-answer combination and store the combination in a database; and when the context does not include the answer for the question, store the question in the database.
Claim 20 recites A method of language processing of interaction recordings, the method comprising: identifying a question and context for the question in a recording of an interaction; identifying whether the context for the question comprises a response to the question using a machine learning model: when the context for the question includes the response to the question, creating a question-response pair and storing the pair in a database; and when the context does not include the response for the question, storing the question in the database.
The underlined portions of claims 1, 11, and 20 require no more than generic computer implementation of the mental processes discussed above, which is not sufficient to qualify as a practical application of the abstract idea or significantly more than the abstract idea. See:
MPEP 2106.07[b]: “even if an element does not integrate a judicial exception into a practical application or amount to significantly more on its own (e.g., because it is merely a generic computer component performing generic computer functions)”;
MPEP 2106.05[f]: “claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible” and “For example, an examiner could explain that implementing an abstract idea on a generic computer, does not integrate the abstract idea into a practical application in Step 2A Prong Two or add significantly more in Step 2B, similar to how the recitation of the computer in the claim in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer”;
MPEP 2106.05[a]: “Examples that the courts have indicated may not be sufficient to show an improvement in computer-functionality… iii. Mere automation of manual processes, such as using a generic computer to process an application for financing a purchase, Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044, 1055, 123 USPQ2d 1100, 1108-09 (Fed. Cir. 2017) or speeding up a loan-application process by enabling borrowers to avoid physically going to or calling each lender and filling out a loan application, LendingTree, LLC v. Zillow, Inc., 656 Fed. App'x 991, 996-97 (Fed. Cir. 2016) (non-precedential)” and “Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology”;
MPEP 2106.04[a][2] III.; and
MPEP 2106.07[a][1]: “In bracket 3, explain why the combination of additional elements fails to integrate the judicial exception into a practical application. For example, if the claim is directed to an abstract idea with additional generic computer elements, explain that the generically recited computer elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer”.
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the underlined portions of claims 1, 11, and 20 require no more than generic computer implementation of the mental processes discussed above, which is not sufficient to qualify as a practical application of the abstract idea or significantly more than the abstract idea.
As per Claim 2 (and similarly claim 12 [where claim 12 recites “wherein the processor is configured to” which is directed to generic computer implementation]):
wherein the method comprises separating the question-answer combination from the interaction recording using the machine learning model (mental process with generic computer implementation, particularly in the example of a text transcript, a human can cut out a portion of the transcript containing a question and its answer from the transcript, and using the machine learning model is directed to generic computer implementation).
As per Claim 3:
wherein the machine learning model is a large language model (mental process with generic computer implementation, claim 1, as discussed above, is directed to a series of mental processes, and claim 3 is directed to implementing one of those processes using generic computer implementation [i.e. using a machine learning model which is more particularly a large language model]).
As per Claim 4 (and similarly claim 14 [where claim 14 recites “wherein the processor is configured to” which is directed to generic computer implementation]):
querying the database for a recorded question-answer combination that comprises the question-answer combination, (mental process, a human can read the document/book to see if any question-answer pairs are the same as the question-answer pair that he/she wrote down)
and: when the recorded question-answer combination comprises the question of the question-answer combination, increasing a counter for the recorded question and assessing whether the answer of the recorded question-answer combination comprises the answer of the recorded question-answer combination: when the recorded question-answer combination comprises the answer of the question-answer combination, increasing a counter for the recorded answer; and when the recorded question-answer combination does not comprise the answer of the question-answer combination, adding the answer to the recorded question; (mental process, a human can examine the document/book to see if either of the written question or the written answer of the written question-answer pair are in question-answer pairs in the document/book, and if he/she detects the written question, he/she can increase a mental count corresponding to the written question [which is also in the document/book] by 1, and then determine if there is an answer corresponding to the written question among the question-answer pairs that have questions matching the written question, and if there is an answer corresponding to the written question, he/she can increase a mental count corresponding to the answer and if there is no answer, the human can write the written answer into the document/book)
and when the recorded question-answer combination does not comprise the question of the question-answer combination, adding the question-answer combination to the database (mental process, if the human does not find the written question-answer pair in the document/book, then the human can write the question-answer pair into the document/book).
As per Claim 5 (and similarly claim 15 [where claim 15 recites “wherein the processor is configured to” which is directed to generic computer implementation]):
periodically identifying question-answer combinations which have a counter value that is above a threshold counter value and generating a list of frequently asked questions and answers to the questions-answer combinations (mental process, a human can periodically identify any question-answer pairs in the document/book that have mental counts above a threshold value and then write down a list of question-answer pairs which have counts exceeding the threshold value as a list of frequently asked questions and corresponding answers).
As per Claim 6 (and similarly claim 16 [where claim 16 recites “wherein the processor is configured to” which is directed to generic computer implementation]):
using the list of frequently asked questions and answers in the automatic provision of an answer to a received question (mental process with generic computer implementation, a human can use the list to answer a question that another person asked, and “automatic” is directed to generic computer implementation).
As per Claim 7 (and similarly claim 17 [where claim 17 recites “wherein the processor is configured to” which is directed to generic computer implementation]):
using the list of frequently asked questions and answers in providing an agent with recommended answers to a customer query (mental process, a human can vocally or in writing recommend answers that another person can provide in response to a question that a customer has asked).
As per Claim 8:
wherein a similarity between the question and the question in the recorded question-answer combination is identified by assessing a semantic similarity between the questions (mental process, a human can understand a question in the transcript/recording and question[s] in the document/book and determine if any of the questions have the same meaning).
As per Claim 9:
wherein the semantic similarity between the questions is assessed by vectorial representation of the question and the recorded question and calculation of a cosine similarity (mental process, a human can think of or write down numbers corresponding to the meanings of different questions and then perform cosine similarity math to determine the similarity between the numbers).
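For illustration, the cosine-similarity calculation referenced in claim 9 can be sketched as follows. This is a minimal example only; the vectors below are hypothetical stand-ins for whatever vectorial representation of the questions the claims contemplate:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity = dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical vector representations of a question and a recorded question
question_vec = [0.2, 0.8, 0.1]
recorded_vec = [0.25, 0.75, 0.05]

# Value lies in [-1, 1]; values near 1 indicate semantically similar questions
similarity = cosine_similarity(question_vec, recorded_vec)
```

A similarity threshold (not recited in the claims) would typically then decide whether the two questions are treated as the same.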
As per Claim 10:
automatically updating recorded question- answer combinations by updating one or more of: a question, an answer to a question, a question-answer combination (mental process with generic computer implementation, a human can update/add-to the document/book by writing new questions and/or answers into the document/book, and “automatically” is directed to generic computer implementation)
Allowable Subject Matter
The following is a statement of reasons for the indication of allowable subject matter:
As per Claim(s) 1 (and similarly claim[s] 11 and 20, and consequently claim[s] 2-10 and 12-19 which depend on claim[s] 1 and 11), the prior art of record does not teach or suggest the combination of all limitations in claim(s) 1, including (i.e. in combination with the remaining limitations in claim[s] 1) A method of language processing comprising: identifying a question and context for the question in an interaction recording; identifying whether the context for the question comprises an answer to the question using a machine learning model, and: when the context for the question includes the answer to the question, generating a question-answer combination and storing the combination in a database; and when the context does not include the answer for the question, storing the question in the database.
J. Ajmera, S. Joshi, A. Verma and A. Mittal, "Automatic generation of question answer pairs from noisy case logs," 2014 IEEE 30th International Conference on Data Engineering, Chicago, IL, USA, 2014, pp. 436-447, teaches generating question answer pairs from noisy case logs (Title). This reference appears to form question answer pairs based on querying a knowledge repository using segments of case logs, and not identifying an answer that is present in the case log.
2022/0318230 teaches “Question-answer pair generation module 222 receives the processed input text and/or input text 210 and generates a plurality of question-answer pairs based on the processed input text and/or input text 210. In some embodiments, the processed input text comprises a plurality of text segments. Question-answer pair generation module 222 generates, for each text segment, a plurality of question-answer pairs based on the text segment” (paragraph 30) and “In some embodiments, each question-answer pair comprises a question based on the processed input text and an answer, from the processed input text, corresponding to the question. In some embodiments, question-answer pair generation module 222 generates one or more questions that do not have an answer in the processed input text. The question-answer pairs corresponding to the one or more questions includes data indicating that no answer to the question is in the processed input text, rather than including a corresponding answer” (paragraph 31). This reference describes where question-answer pairs which do not have an answer for a question are generated (which is still a pair which contains the question such that the reference can be interpreted as describing storing of the question that has no answer when there is no answer for the question in the text). This reference also seems to determine, in one embodiment, answers first, and then determine questions for the answers (paragraph 32). This reference does not appear to describe where the input text is an interaction recording (paragraph 23 seems to describe examples that are not interaction recordings) and does not appear to describe where the answer for a question is specifically found/not-found in “context for the question” (paragraphs 54 and 67 seem to describe context data associated with an answer which is “portions of the processed text 320 or the input text 210 around the answer”).
WO 2019153612 A1 teaches (see Google Translation) adding question and answer pairs derived from “agent text data including question and answer data recorded by all customers and customer service during the question and answer process” (which seem to be recordings of agent-customer interactions). This reference appears to describe “the problem statement without the corresponding answer sentence and the answer statement without the corresponding question statement are removed” which seems to indicate that questions without corresponding answers are not stored.
JP 2023026316 A teaches “A question-response pair generator 180 may be implemented to generate candidate question-response pairs based on the isolated context. The question-response pair generator 180 may generate a solution” and “generating question-response pairs based on analysis of an original text and a method for generating question-response pairs based on a natural language model for constructing question-response pair data” and “a question-response pair generator determines a solution in the context, determines a question corresponding to the solution through machine reading comprehension, and generates the candidate question-response pair”. This reference does not appear to describe where the context is for a question (the answer is found in the context and then a corresponding question for the answer is determined).
2023/0316000 teaches “The environment context may include a corpus of information associated with the interaction environment, such as information that may be answered while using the interaction environment. This information may be processed by the EQA model to determine whether or not the answer to the query is within the environment context 606” (paragraph 43). This reference does not appear to describe where the query is stored in response to determining that there is no answer in the corpus.
8769417 teaches “Additionally or alternatively, the answer module 125 may determine whether the post is responsive to the question in a similar manner as described below in reference to determining good answer criteria, discussed in reference to FIG. 4. For example, in determining whether the post is responsive to the question, the answer module 125 may determine whether the post provides an answer to the question based on context, user behavior, user votes, etc., as described below. In some embodiments, a question may only be considered unanswered if a certain amount of time has passed since the question was asked. The amount of time before a question is considered unanswered may depend on the context in which the question was asked, which may include determining the average frequency with which users post in a forum in which the question was asked”. This reference does not appear to store unanswered questions.
8983962 teaches “extracting a second question and answer data from the history data of said dialogue content as associated question and answer data to said first question and answer data in response to detecting that a third question part or a third answer part similar to the extracted second question and answer data is present in the vicinity of the first question part or the first answer part” (claim 1) and “The question and answer data editing device for editing dialog content to generate question and answer data, includes a detecting unit that detects a part of the dialog content similar to existing question and answer data stored, and a extracting unit that extracts a context in which the dialog content is made from dialog content in the proximity of the similar part detected and registers the context extracted as new question and answer data or as index information of the question and answer data” (Abstract). This reference does not appear to determine whether context for a question includes an answer to the question, and does not appear to register/store the question when the context does not include the answer for the question.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC YEN whose telephone number is (571)272-4249. The examiner can normally be reached M-F 12:00 PM - 8:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, RICHEMOND DORVIL can be reached at (571)272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
EY 2/20/2026
/ERIC YEN/Primary Examiner, Art Unit 2658