DETAILED ACTION
Introduction
This Office action is in response to applicant’s remarks filed 1/21/2026. Claims 1-20 are currently pending and have been examined. Applicant’s IDS has been considered. There is no claim to foreign priority.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 1/21/2026 have been fully considered but they are not persuasive. More specifically, applicant argues,
“However, Applicant respectfully submits that the program-rewriting function does not rewrite the user utterance. Moreover, Applicant respectfully submits that rewriting the data-flow program based on a concept in the dialogue history is not the same thing as rewriting the user utterance or a query. Additionally, Applicant respectfully submits that the code-generation machine does not analyze the user utterance to identify one or more ambiguous components of the user utterance. Accordingly, Applicant respectfully submits that Eisner at least fails to disclose or suggest the "analyzing the query ..." and the "rewriting the query ..." features recited by the claims.
Eisner also describes that the pre-defined functions include a search-history function 112 for processing an ambiguous user utterance "by resolving any ambiguities using concepts from the context-specific dialogue history 130." Id., para. [0023]. To do this, Eisner describes that the search-history function is used to search for disambiguating concepts in the context-specific dialogue history for ambiguous entities in user utterances. Id., para. [0024]. Eisner further describes that the disambiguating concepts are used to modify the data-flow programs. Id., para. [0025]. Additionally, Eisner describes that "an intelligent decision function may be configured to assess an ambiguity, e.g., an ambiguous user utterance, or an ambiguous constraint, and select a disambiguating data-flow program to respond to the ambiguity." Id., para. [0039].
However, Applicant respectfully submits that modifying a data-flow program and/or selecting a disambiguating data-flow program is not the same thing as rewriting the user utterance or a query to include one or more specific descriptors as a substitute for one or more ambiguous components in the query. Accordingly, Applicant respectfully submits that Eisner fails to disclose or suggest the features recited by the claims.
Applicant respectfully traverses these rejections because the cited art fails to disclose or make obvious all of the elements set forth in these claims. For example, Claim 1 recites, in part: analyzing the query utilizing a first machine learning model to identify one or more ambiguous components of the query;
rewriting the query to include the one or more specific descriptors as a substitute for the one or more ambiguous components.”
However, the Examiner does not concur with the applicant’s arguments. The Examiner notes that the “rewriting” limitation is broad enough to be interpreted in many ways: Claim 1 does not explicitly state how the query is rewritten, or in what form.
The Examiner notes, Eisner explicitly teaches,
receiving a query from a user (paragraphs [0016, 0020]-his user utterance and query)-this limitation is not argued.
Eisner further teaches, “analyzing the query utilizing a first machine learning model to identify one or more ambiguous components of the query (paragraph [0039]-his machine learning model and intelligent decision function assessing ambiguity in the user utterance/query).” The Examiner notes that it is the query itself that is analyzed in the cited paragraph, and further, in paragraphs [0023-0026], the ambiguities are determined within the queries. The Examiner further notes that the applicant also discusses Eisner utilizing an intelligent search-history function with respect to concepts from the context-specific dialogue history. In the cited sections, ambiguous components are detected within the utterance, utilizing NLP and an intelligent-decision machine learning model.
Therefore, the Examiner notes that it is the “rewriting the query” limitation that is the crux of the argument. The applicant’s position is that “the program-rewriting function does not rewrite the user utterance or query.” However, the Examiner does not concur. The applicant points to no claimed element that bounds or describes the form of the rewritten query. Therefore, any manner of rewriting the query is deemed plausible, so long as the rewritten query contains the specific descriptor. The Examiner notes that there are many ways to rewrite a query. For example, in the prior art as previously cited, Ni et al. (Ni, US 2019/0278857) rephrases the query by replacing words within the query with more specific descriptors, such as rewriting “What about John Doe?” as “How tall is John Doe?” This is only one interpretation of rewriting a query. A query may be rewritten into any form: it may be rewritten into another language, or it may be rewritten into the program form described by Eisner, wherein, for example, the word “Tom” is replaced by “Tom Jones”, the salient, disambiguated concept intelligently retrieved from the context-specific dialogue history, and the result is used as the “rewritten” query to determine the response. It appears the applicant has read a particular constraint into the claims that is not seen in the claim language. A query could be rewritten into a structured query language (SQL) form, a machine-readable and machine-understandable form, or any other feasible form, such as a simple rephrasing utilizing natural language grammar and syntax that outputs the rewritten query in natural language form including the desired specific descriptor. The Examiner notes that rewriting a natural language query into SQL, or another form, is an acceptable interpretation, and the rewritten query, as described above and as taught by Eisner, clearly teaches the claimed elements.
Therefore, if the applicant desires the query to be rewritten in the particular manner argued, the applicant must amend the claims to specifically recite the desired limitations, as the applicant appears to be arguing limitations or constraints not found in the claims. As currently presented, the applicant’s arguments are deemed non-persuasive.
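For purposes of illustration only, the substitution-based reading of the “rewriting” limitation discussed above may be sketched in code. The function, the descriptor mapping, and the example strings below are hypothetical and are not drawn from the claims or from the cited art:

```python
# Hypothetical sketch: rewriting a query by substituting a specific
# descriptor for an ambiguous component.  A real system would derive the
# descriptor mapping from a machine learning model and the dialogue
# history; here it is supplied directly for illustration.

def rewrite_query(query: str, descriptors: dict) -> str:
    """Replace each ambiguous component with its specific descriptor."""
    for ambiguous, specific in descriptors.items():
        query = query.replace(ambiguous, specific)
    return query

# Echoing the example above: the ambiguous entity "Tom" is replaced with
# the disambiguated concept "Tom Jones" drawn from the dialogue history.
rewritten = rewrite_query("When is Tom's meeting?", {"Tom": "Tom Jones"})
# rewritten == "When is Tom Jones's meeting?"
```

Under the interpretation set forth above, the output form of the rewritten query (natural language, SQL, or a program form) is not limited by the claim language; only the presence of the specific descriptor is required.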
Applicant’s remaining arguments, with respect to all remaining pending claims, are based on or inherit the above arguments and are deemed non-persuasive for the same reasons.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Eisner et al. (Eisner, US 2023/0367602) in view of Cho et al. (Cho, US 2023/0315766).
As per claim 1, Eisner teaches a computer-implemented method comprising:
receiving a query from a user (paragraphs [0016, 0020]-his user utterance and query);
analyzing the query utilizing a first machine learning model to identify one or more ambiguous components of the query (paragraph [0039]-his machine learning model and intelligent decision function assessing ambiguity in the user utterance/query);
determining, for each of the one or more ambiguous components of the query, a specific descriptor, utilizing the first machine learning model and a conversation history associated with the query (ibid-see above machine learning, ambiguous discussion, see also paragraphs [0023- 0026]-his ambiguities with respect to the utterance/query, and context-specific dialog history, wherein the clarifying entity is deemed the specific descriptor, from the dialogue history, which is associated with the query), wherein the conversation history includes one or more turns in a conversation between a user and a chatbot system that occur before the query is received (ibid-as defined by his dialogue, wherein the query comprising the anaphoric entity and content, uses context in a previous turn, comprising the extracted specific descriptor, to his automated assistant as the chatbot, paragraphs [0013-0016], Fig. 1E);
rewriting the query to include the one or more specific descriptors as a substitute for the one or more ambiguous components (ibid, see also paragraphs [0021, 0022, 0026, 0028, 0029]-his “rewriting” from the context-specific dialog history, replacing the ambiguity, with the context-specific history clarifying entity);
computing, utilizing an encoder model, an embedding vector for the rewritten query (paragraphs [0017-0019]-his query to encoder machine, vector space, and answering questions, using AI, wherein Figs. 1A-1C, illustrate the “rewritten” concept, item 140, 112’, and item 130, in communication with his encoder, item 104, the vector encoded information/representation used in question answering);
retrieving a subset of textual passages from a knowledge base [utilizing the embedding vector] for the rewritten query (ibid-see above question answering discussion, paragraph [0017, 0032, 0042, 0043]-his text response, from all data-flow of events, as stored);
determining, [utilizing a second machine learning model], an answer to the rewritten query (ibid-his answer, as the “assistant response”, Figs. 1A, 1E, as applied to the rewritten query, hereinafter), wherein the determining comprises taking as input the rewritten query and each of the textual passages from the subset of textual passages and extracting or generating the answer based on the rewritten query and information within the subset of textual passages (ibid, paragraphs [0042, 0043, 0019]-his answer, from stored information, as a subset of textual passages, and generated answer); and
providing the answer to the user as a response to the query (ibid-see above response discussion).
Eisner lacks explicitly teaching that which Cho teaches: retrieving a subset of textual passages from a knowledge base utilizing the embedding vector for the rewritten query (paragraphs [0043-0048]-his document, and corresponding document query/embeddings, and clusters);
determining, utilizing a second machine learning model, an answer to the rewritten query (ibid, see also paragraph [0015, 0048-0052]-his FAISS-AI and neural network, which provides an answer to the query).
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, to combine the prior art element of utilizing a rewriting process for a query, in order to provide and generate an answer, as taught by Eisner, with using a document embedding for text passages/documents and an indexing model for answering queries, as taught by Cho. All the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing); each element performs the same function as it does separately, and the combination would yield predictable results. KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). The predictable result would be an accurate and fast retrieval of answers/responses to queries (as rewritten queries, for purposes of clarifying ambiguities, with respect to the combination with Eisner) based on the ranked, clustered and indexed responses/answers (ibid-Cho, see also paragraphs [0009, 0017]-his relatively fast and accurate retrieval of a search query response, based on semantic difference in the embeddings space).
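For purposes of illustration only, the retrieval step mapped above (computing an embedding vector for the rewritten query and retrieving a subset of nearby passages) may be sketched as follows. The toy bag-of-words encoder and cosine similarity below are assumptions for illustration and are not the encoder or embedding models of Eisner or Cho:

```python
# Hypothetical sketch: embedding-based retrieval of textual passages for a
# rewritten query.  A toy bag-of-words count vector stands in for a learned
# encoder model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy encoder: bag-of-words counts as the embedding vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, passages: list, k: int = 2) -> list:
    """Return the k passages closest to the query in the embedding space."""
    q = embed(query)
    ranked = sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]
```

The retrieved subset would then be supplied, together with the rewritten query, to a second model that extracts or generates the answer.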
As per claim 8, claim 8 sets forth limitations similar to claim 1 and is thus rejected under similar reasons and rationale, wherein the system is deemed to embody the method, such that Eisner with Cho make obvious a system comprising:
one or more processors (Eisner, paragraphs [0121]-see his processor, instructions, software on storage devices, and execution discussion); and
one or more computer-readable media storing instructions which, when executed by the one or more processors, cause the system to perform operations comprising (ibid):
receiving a query from a user (ibid-see claim 1, corresponding and similar limitation);
analyzing the query utilizing a first machine learning model to identify one or more ambiguous components of the query (ibid);
determining, for each of the one or more ambiguous components of the query, a specific descriptor, utilizing the first machine learning model and a conversation history associated with the query, wherein the conversation history includes one or more turns in a conversation between a user and a chatbot system that occur before the query is received (ibid);
rewriting the query to include the one or more specific descriptors as a substitute for the one or more ambiguous components (ibid);
computing, utilizing an encoder model, an embedding vector for the rewritten query (ibid);
retrieving a subset of textual passages from a knowledge base utilizing the embedding vector for the rewritten query (ibid);
determining, utilizing a second machine learning model, an answer to the rewritten query, wherein the determining comprises taking as input the rewritten query and each of the textual passages from the subset of textual passages and extracting or generating the answer based on the rewritten query and information within the subset of textual passages; and
providing the answer to the user as a response to the query (ibid).
As per claim 15, claim 15 sets forth limitations similar to claim 1 and is thus rejected under similar reasons and rationale, wherein one or more non-transitory computer-readable media storing instructions is deemed to embody the method, such that Eisner with Cho make obvious one or more non-transitory computer-readable media storing instructions which, when executed by one or more processors, cause a system to perform operations comprising (Eisner, paragraphs [0121]-see his processor, instructions, software on storage devices, and execution discussion):
receiving a query from a user (ibid-see claim 1, corresponding and similar limitation);
analyzing the query utilizing a first machine learning model to identify one or more ambiguous components of the query (ibid);
determining, for each of the one or more ambiguous components of the query, a specific descriptor, utilizing the first machine learning model and a conversation history associated with the query, wherein the conversation history includes one or more turns in a conversation between a user and a chatbot system that occur before the query is received (ibid);
rewriting the query to include the one or more specific descriptors as a substitute for the one or more ambiguous components (ibid);
computing, utilizing an encoder model, an embedding vector for the rewritten query (ibid);
retrieving a subset of textual passages from a knowledge base utilizing the embedding vector for the rewritten query (ibid);
determining, utilizing a second machine learning model, an answer to the rewritten query, wherein the determining comprises taking as input the rewritten query and each of the textual passages from the subset of textual passages and extracting or generating the answer based on the rewritten query and information within the subset of textual passages (ibid); and
providing the answer to the user as a response to the query (ibid).
Claims 2, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Eisner et al. (Eisner, US 2023/0367602) in view of Cho et al. (Cho, US 2023/0315766), and further in view of Morrill et al. (Morrill, US 2022/0374405).
As per claims 2, 9 and 16, Eisner with Cho make obvious the computer-implemented method of claim 1, further comprising:
[converting a plurality of documents in a variety of document formats into a plurality of text documents];
dividing each of the plurality of text documents into textual passages (ibid-Cho, paragraph [0006, 0007]-his document database, comprising document queries, and document responses, as his textual passages);
encoding, utilizing the encoder model or a different encoder model, semantics of each of the textual passages (ibid-his embedding space, mapping and semantic model, for all the textual passages), wherein the encoding comprises taking as input each of the textual passages and computing an embedding vector for each of the textual passages (ibid); and
indexing and storing the textual passages in a data store to generate the knowledge base, wherein the textual passages are indexed in accordance with the embedding vectors (ibid-see also, paragraph [0015]-his indexing).
Eisner with Cho lack teaching that which Morrill teaches, converting a plurality of documents in a variety of document formats into a plurality of text documents (paragraph [0031, 0034, 0036, 0047]-his plurality of different document formats, all converted into textual documents, i.e. his JSON, PDF, or the like, and converted into text documents, and also, “partitioning the text” into segments of texts, which is also interpreted as dividing a plurality of documents into text passages).
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, to combine the prior art element of utilizing a rewriting process for a query, in order to provide and generate an answer, as taught by Eisner, with using a document embedding for text passages/documents and an indexing model for answering queries, as taught by Cho, with converting multiple document formats into text documents, as taught by Morrill. All the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing); each element performs the same function as it does separately, and the combination would yield predictable results. KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). The predictable result would be an accurate and fast retrieval of answers/responses to queries (as rewritten queries, for purposes of clarifying ambiguities, with respect to the combination with Eisner) based on the ranked, clustered and indexed responses/answers (ibid-Cho, see also paragraphs [0009, 0017]-his relatively fast and accurate retrieval of a search query response, based on semantic difference in the embeddings space), with the documents, which come in a plurality of forms, converted into a form usable by downstream applications (ibid-Morrill, abstract).
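For purposes of illustration only, the ingestion pipeline recited in claims 2, 9 and 16 (converting documents to text, dividing them into passages, and indexing the passages by embedding vectors to form the knowledge base) may be sketched as follows. The converter, the fixed passage length, and the toy word-set "embedding" are hypothetical and are not the implementations of the cited references:

```python
# Hypothetical sketch: building a knowledge base from documents of varying
# formats.  A real system would dispatch on format (JSON, PDF, etc.) and
# use a learned encoder; here each document carries a plain-text payload
# and the "embedding" is simply the passage's sorted word set.

def to_text(doc: dict) -> str:
    """Convert a document of any supported format into plain text."""
    return doc["content"]

def divide(text: str, passage_len: int = 5) -> list:
    """Divide a text document into fixed-length textual passages."""
    words = text.split()
    return [" ".join(words[i:i + passage_len])
            for i in range(0, len(words), passage_len)]

def build_knowledge_base(docs: list) -> dict:
    """Index each passage under its (toy) embedding vector."""
    kb = {}
    for doc in docs:
        for passage in divide(to_text(doc)):
            embedding = tuple(sorted(set(passage.lower().split())))  # toy
            kb[embedding] = passage
    return kb
```

At query time, the stored embedding keys would be compared against the embedding of the rewritten query to retrieve the relevant passages.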
Claims 3, 5, 6, 10, 12, 13, 17, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Eisner et al. (Eisner, US 2023/0367602) in view of Cho et al. (Cho, US 2023/0315766), as applied to claim 1, and further in view of Zhuo et al. (Zhuo, US 2021/0357441).
As per claims 3, 10 and 17, Eisner with Cho make obvious the computer-implemented method of claim 1, but lack the following limitations, which Zhuo teaches:
evaluating, utilizing a cross-encoder model, how well each of the textual passages from the subset of textual passages answer the query, wherein the evaluating comprises taking as input each of the textual passages from the subset of textual passages and computing a score for each of the textual passages from the subset of textual passages that is indicative of answerability (Zhuo, paragraph [0049-0051]-his BERT model, and corresponding query and answer candidates, cross-encoding of both, and corresponding scoring, as the textual passages comprising answers/answerability);
ranking the textual passages from the subset of textual passages based on the score computed for each of the textual passages from the subset of textual passages (ibid-his ranking of the answer candidates, based on the scoring); and
grouping some of the textual passages from the subset of textual passages into a revised subset of textual passages based on the ranking and a predetermined answerability threshold (ibid-his scoring threshold for candidates, based on ranking, and ranking list changed, as the grouping, based on fine-tuning),
wherein the determining the answer to the rewritten query comprises taking as input the rewritten query and each of the textual passages from the revised subset of textual passages and extracting or generating the answer based on the rewritten query and information within the revised subset of the textual passages (ibid, paragraphs [0060-0071]-his, rewritten queries, and corresponding response candidate having the highest score, is provided to the chatbot, from the group of ranked candidates, and then, based on the input query, the final answer is provided).
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, to combine the prior art element of utilizing a rewriting process for a query, in order to provide and generate an answer, as taught by Eisner, with using a document embedding for text passages/documents and an indexing model for answering queries, as taught by Cho, with using an evaluation of the passages for determining answerability and ranking, in order to generate a final answer, as taught by Zhuo. All the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing); each element performs the same function as it does separately, and the combination would yield predictable results. KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). The predictable result would be an accurate and fast retrieval of answers/responses to queries (as rewritten queries, for purposes of clarifying ambiguities, with respect to the combination with Eisner) based on the ranking, as evaluated for answerability, and the clustered and indexed responses/answers (ibid-Cho, see also paragraphs [0009, 0017]-his relatively fast and accurate retrieval of a search query response, based on semantic difference in the embeddings space; ibid-Zhuo).
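For purposes of illustration only, the answerability scoring, ranking, and thresholding recited in claims 3, 10 and 17 may be sketched as follows. The word-overlap score below stands in for a trained cross-encoder such as the BERT model cited from Zhuo and is an assumption for illustration:

```python
# Hypothetical sketch: scoring each passage for answerability against the
# query, ranking the passages, and keeping a revised subset above a
# predetermined threshold.

def answerability(query: str, passage: str) -> float:
    """Toy cross-encoder: fraction of query words found in the passage."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q) if q else 0.0

def revised_subset(query: str, passages: list, threshold: float = 0.5) -> list:
    """Rank passages by answerability score; keep those above the threshold."""
    scored = sorted(((answerability(query, p), p) for p in passages),
                    reverse=True)
    return [p for score, p in scored if score >= threshold]
```

The revised subset, rather than the full retrieved subset, would then be supplied with the rewritten query to the answer-generation model.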
As per claims 5, 12 and 19, Eisner with Cho make obvious the computer-implemented method of claim 1, Zhuo teaching that which the above combination lacks, wherein retrieving the subset of textual passages from the knowledge base comprises:
comparing the embedding vector for the rewritten query to embedding vectors computed for textual passages within the knowledge base (ibid-see Zhuo, embedding vector for rewritten query and vector for document, paragraphs [0060-0071, 0076]-his, rewritten queries, and corresponding response candidate textual passages, from a database of documents); and
retrieving each of the textual passages for the revised subset of textual passages in response to determining that a semantic distance between the embedding vector for each of the textual passages and the embedding vector for the rewritten query is less than a predetermined threshold amount (ibid-his semantic comparisons, “most semantically related” multi-dimensional vectors for distance comparison, and corresponding retrieved candidates, see claim 3, ranking and reranking, revised candidates, as the retrieved textual passages, the combination of references similarly motivated for combination, as seen in claim 3, Zhuo determining a response, based on candidate ranked passages, from documents, see also abstract).
As per claims 6, 13 and 20, Eisner with Cho make obvious the computer-implemented method of claim 1, but lack teaching that which Zhuo teaches, wherein retrieving the subset of textual passages from the knowledge base comprises:
performing a [k-nearest-neighbor] search to make classifications or predictions about groupings of textual passages within the knowledge base (ibid-see Zhuo, embedding vector for rewritten query and vector for document, paragraphs [0060-0071, 0076]-his search and corresponding response candidate textual passages, see his “nearest neighbor” discussion); and
retrieving each of the textual passages for the revised subset of textual passages in response to determining that the embedding vector for each of the textual passages and the embedding vector for the rewritten query are classified or predicted to pertain to a same grouping of textual passages within the knowledge base (ibid-Zhuo, see retrieved semantically similar candidates as the passages, and claim 3, “revised subset” discussion, based on classification and prediction, as ranked).
Eisner teaches, using a k-nearest-neighbor search to make classifications or predictions (paragraph [0123]-his “nearest-neighbor algorithm”).
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, to combine the prior art elements of utilizing a rewriting process for a query, in order to provide and generate an answer, and a nearest-neighbor search for clustering methods, as taught by Eisner, with using a document embedding for text passages/documents and an indexing model for answering queries, as taught by Cho, with using a selection of relevant passages, as taught by Zhuo. All the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing); each element performs the same function as it does separately, and the combination would yield predictable results. KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). The predictable result would be an accurate and fast retrieval of answers/responses to queries (as rewritten queries, for purposes of clarifying ambiguities, with respect to the combination with Eisner) based on the ranking, as evaluated for answerability, and the clustered (ibid, see Eisner's algorithm for clustering) and indexed responses/answers (ibid-Cho, see also paragraphs [0009, 0017]-his relatively fast and accurate retrieval of a search query response, based on semantic difference in the embeddings space; ibid-Zhuo).
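For purposes of illustration only, the k-nearest-neighbor classification recited in claims 6, 13 and 20 may be sketched as follows. The 2-D vectors, grouping labels, and majority-vote rule below are hypothetical and are not drawn from the cited art:

```python
# Hypothetical sketch: predicting the grouping of a query embedding from
# the labels of its k nearest passage embeddings; passages sharing that
# grouping would then be retrieved from the knowledge base.
import math
from collections import Counter

def knn_label(query_vec, labeled_vecs, k=3):
    """Majority label among the k nearest (vector, label) pairs."""
    nearest = sorted(labeled_vecs,
                     key=lambda lv: math.dist(query_vec, lv[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Retrieval then reduces to selecting every passage whose embedding is classified or predicted to share the query's grouping.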
Claims 4, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Eisner et al. (Eisner, US 2023/0367602) in view of Cho et al. (Cho, US 2023/0315766), in view of Zhuo, as applied to claim 3, and further in view of Croutwater et al. (Croutwater, US 2021/0019375).
As per claims 4, 11 and 18, Eisner with Cho with Zhuo make obvious the computer-implemented method of claim 3, but lack the following, which Croutwater teaches: routing the query or a subsequent utterance from the user in the conversation between the user and the chatbot system to one or more skills within the chatbot system based on the score computed for each of the textual passages from the revised subset of textual passages (Croutwater, paragraphs [0054, 0055, 0062]-his routing of user requests to a skill bot, based on the relevancy, as scored, the documents database comprising relevancy-scored passages).
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, to combine the prior art element of utilizing a rewriting process for a query, in order to provide and generate an answer, as taught by Eisner, with using a document embedding for text passages/documents and an indexing model for answering queries, as taught by Cho, with using an evaluation of the passages for determining answerability and ranking, in order to generate a final answer, as taught by Zhuo, with routing the query within a chatbot system to one or more skills based on the relevancy of answers found in documents, as taught by Croutwater. All the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing); each element performs the same function as it does separately, and the combination would yield predictable results. KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). The predictable result would be an accurate and fast retrieval of answers/responses to queries (as rewritten queries, for purposes of clarifying ambiguities, with respect to the combination with Eisner) based on the ranking, as evaluated for answerability, and the clustered and indexed responses/answers (ibid-Cho, see also paragraphs [0009, 0017]-his relatively fast and accurate retrieval of a search query response, based on semantic difference in the embeddings space; ibid-Zhuo), with the query routed to a chatbot based on the chatbot's skills with respect to the answerability of the text passages (ibid-Croutwater, paragraphs [0054, 0055]).
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Eisner et al. (Eisner, US 2023/0367602) in view of Cho et al. (Cho, US 2023/0315766), as applied to claim 1, and further in view of Werner et al. (Werner, US 2021/0406479).
As per claims 7 and 14, Eisner with Cho make obvious the computer-implemented method of claim 1, but lack the following, which Werner teaches:
providing the answer to the user as the response to the query in response to determining that the chatbot system cannot answer the query using another method (paragraph [0039]-his answer to a query, based on a different method not providing a correct answer);
providing the answer to the user as the response to the query in addition to another answer generated by the chatbot system using another method (paragraphs [0037, 0038]-his multiple models, providing an answer to the query); or
providing the answer to the user as the response to the query instead of another answer generated by the chatbot system using another method in response to determining that a confidence score calculated for the one or more answers exceeds a predetermined threshold (paragraph [0039]-his answer confidence score, from a model, different from another model, wherein the threshold of confidence determines the model and corresponding answer to be presented).
Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, to combine the prior art element of utilizing a rewriting process for a query, in order to provide and generate an answer, as taught by Eisner, with using a document embedding for text passages/documents and an indexing model for answering queries, as taught by Cho, with providing an answer selection based on multiple models, as taught by Werner. All the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods (computer-implemented techniques and algorithms combining processes and steps in natural language processing); each element performs the same function as it does separately, and the combination would yield predictable results. KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). The predictable result would be an accurate and fast retrieval of answers/responses to queries (as rewritten queries, for purposes of clarifying ambiguities, with respect to the combination with Eisner; ibid-Cho), wherein the answer options comprise an answer model's capability to answer, confidence in the answer, or a plurality of answers from different models (ibid-Werner, see also abstract).
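For purposes of illustration only, the three alternatives recited in claims 7 and 14 (providing the answer as a fallback, in addition to another answer, or instead of another answer upon exceeding a confidence threshold) may be sketched as follows. The selection policy and threshold value below are hypothetical and are not Werner's implementation:

```python
# Hypothetical sketch: choosing how to present the retrieval-based answer
# relative to an answer produced by another method.

def select_answer(retrieval_answer, retrieval_confidence,
                  other_answer, threshold=0.8):
    """Return the list of answer(s) to present to the user."""
    if other_answer is None:
        # The other method cannot answer: use the retrieval-based answer.
        return [retrieval_answer]
    if retrieval_confidence > threshold:
        # Confidence exceeds the threshold: use it instead of the other.
        return [retrieval_answer]
    # Otherwise present the retrieval-based answer in addition.
    return [other_answer, retrieval_answer]
```

Each branch corresponds to one of the claimed alternatives, any one of which would read on the claim as drafted in the alternative.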
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAMONT M SPOONER whose telephone number is (571)272-7613. The examiner can normally be reached 8:00 AM -5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn can be reached at (571)272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LAMONT M SPOONER/ Primary Examiner, Art Unit 2657
3/13/2026