DETAILED ACTION
This Office action is in response to the above-identified application filed on 02/18/2025. The application contains claims 1-20.
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The present application claims foreign priority to 10-2024-0024866, filed 02/21/2024, and to 10-2024-0037619, filed 03/19/2024.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 02/18/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Claim 10 recites the limitation "another question" in line 4. It is unclear whether this limitation refers to a new question or to the "another question" recited in parent claim 9, line 4. Therefore, claim 10 is indefinite and rejected under 35 U.S.C. 112(b).
Claim 10 further recites the limitation "the another question" in lines 6 and 8. It is likewise unclear whether this limitation refers to a new question or to the "another question" recited in parent claim 9, line 4. Therefore, claim 10 is indefinite and rejected under 35 U.S.C. 112(b).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless -
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 11, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Deng, Bowen & Gao, Shan (CN117313859A).
With regard to claim 1,
Deng, Bowen & Gao, Shan teaches
a question answering system (Page 1, Abstract: a question and answer method and system) comprising:
one or more processors (Page 8, line 5: a processor 420); and
a memory (Page 4, lines 42-50) storing one or more computer programs executed by the one or more processors, wherein the one or more computer programs include instructions for:
an operation of preprocessing a question of a user (Page 3, line 27: receive user questions. Page 3, lines 30-31: vectorize the user questions to generate question vectors, i.e., “preprocessing a question of a user”);
an operation of obtaining a first candidate passage set associated with the preprocessed question by retrieving a knowledge base using a first embedding model (Page 3, lines 31-33: use two different vector retrieval libraries to perform vector similarity matching between the question vector and the text vector knowledge base to return two groups of similar text paragraphs. Page 5, lines 37-38 & 41-43: language Embedding models);
an operation of obtaining a second candidate passage set associated with the preprocessed question by retrieving the knowledge base using a second embedding model (Page 3, lines 31-33: use two different vector retrieval libraries to perform vector similarity matching between the question vector and the text vector knowledge base to return two groups of similar text paragraphs. Page 5, lines 37-38 & 41-43: language Embedding models);
an operation of extracting one or more common passages from the first candidate passage set and the second candidate passage set (Page 3, lines 33-37; Page 5, lines 45-49; Page 6, lines 1-12: for the two groups of similar text paragraphs, perform text similarity matching between two paragraphs to obtain two similar text paragraphs with the highest paragraph similarity score higher than or equal to the paragraph similarity threshold; content fusion is then performed on the two similar text paragraphs to generate the final knowledge paragraph most relevant to the question vector, and a context prompt template is generated based on the final knowledge paragraph); and
an operation of generating an answer to the preprocessed question from the one or more common passages through a generative model (Page 3, lines 34-37; Page 6, lines 9-12: when the highest paragraph similarity score is higher than or equal to the paragraph similarity threshold, fuse the content of the two similar text paragraphs to generate the final knowledge paragraph most relevant to the question vector, and generate a context prompt template based on the final knowledge paragraph. Page 1, Abstract: input the context prompt template into the LLM, perform parallel reasoning, and output answers in a streaming manner).
With regard to claim 11,
Deng, Bowen & Gao, Shan teaches
a question answering method (Page 1, Abstract: a question and answer method and system) performed by at least one processor (Page 8, line 5: a processor 420), comprising:
preprocessing a question of a user (Page 3, line 27: receive user questions. Page 3, lines 30-31: vectorize the user questions to generate question vectors, i.e., “preprocessing a question of a user”);
obtaining a first candidate passage set associated with the preprocessed question by retrieving a knowledge base using a first embedding model (Page 3, lines 31-33: use two different vector retrieval libraries to perform vector similarity matching between the question vector and the text vector knowledge base to return two groups of similar text paragraphs. Page 5, lines 37-38 & 41-43: language Embedding models);
obtaining a second candidate passage set associated with the preprocessed question by retrieving the knowledge base using a second embedding model (Page 3, lines 31-33: use two different vector retrieval libraries to perform vector similarity matching between the question vector and the text vector knowledge base to return two groups of similar text paragraphs. Page 5, lines 37-38 & 41-43: language Embedding models);
extracting one or more common passages from the first candidate passage set and the second candidate passage set (Page 3, lines 33-37; Page 5, lines 45-49; Page 6, lines 1-12: for the two groups of similar text paragraphs, perform text similarity matching between two paragraphs to obtain two similar text paragraphs with the highest paragraph similarity score higher than or equal to the paragraph similarity threshold; content fusion is then performed on the two similar text paragraphs to generate the final knowledge paragraph most relevant to the question vector, and a context prompt template is generated based on the final knowledge paragraph); and
generating an answer to the preprocessed question from the one or more common passages through a generative model (Page 3, lines 34-37; Page 6, lines 9-12: when the highest paragraph similarity score is higher than or equal to the paragraph similarity threshold, fuse the content of the two similar text paragraphs to generate the final knowledge paragraph most relevant to the question vector, and generate a context prompt template based on the final knowledge paragraph. Page 1, Abstract: input the context prompt template into the LLM, perform parallel reasoning, and output answers in a streaming manner).
With regard to claim 20,
Deng, Bowen & Gao, Shan teaches
a non-transitory computer-readable recording medium storing a computer program executable by a processor (Page 8, line 5: a processor 420) of a computer to execute:
preprocessing a question of a user (Page 3, line 27: receive user questions. Page 3, lines 30-31: vectorize the user questions to generate question vectors, i.e., “preprocessing a question of a user”);
obtaining a first candidate passage set associated with the preprocessed question by retrieving a knowledge base using a first embedding model (Page 3, lines 31-33: use two different vector retrieval libraries to perform vector similarity matching between the question vector and the text vector knowledge base to return two groups of similar text paragraphs. Page 5, lines 37-38 & 41-43: language Embedding models);
obtaining a second candidate passage set associated with the preprocessed question by retrieving the knowledge base using a second embedding model (Page 3, lines 31-33: use two different vector retrieval libraries to perform vector similarity matching between the question vector and the text vector knowledge base to return two groups of similar text paragraphs. Page 5, lines 37-38 & 41-43: language Embedding models);
extracting one or more common passages from the first candidate passage set and the second candidate passage set (Page 3, lines 33-37; Page 5, lines 45-49; Page 6, lines 1-12: for the two groups of similar text paragraphs, perform text similarity matching between two paragraphs to obtain two similar text paragraphs with the highest paragraph similarity score higher than or equal to the paragraph similarity threshold; content fusion is then performed on the two similar text paragraphs to generate the final knowledge paragraph most relevant to the question vector, and a context prompt template is generated based on the final knowledge paragraph); and
generating an answer to the preprocessed question from the one or more common passages through a generative model (Page 3, lines 34-37; Page 6, lines 9-12: when the highest paragraph similarity score is higher than or equal to the paragraph similarity threshold, fuse the content of the two similar text paragraphs to generate the final knowledge paragraph most relevant to the question vector, and generate a context prompt template based on the final knowledge paragraph. Page 1, Abstract: input the context prompt template into the LLM, perform parallel reasoning, and output answers in a streaming manner).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Deng, Bowen & Gao, Shan (CN117313859A), in view of AXELROD (US 20210026918 A1).
With regard to claim 2,
As discussed with regard to claim 1 above, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering system of claim 1, wherein the first embedding model is trained using a text sample pair whose length difference is less than a reference value, and the second embedding model is trained using a text sample pair whose length difference is the reference value or more.
AXELROD teaches
the question answering system of claim 1, wherein the first embedding model is trained using a text sample pair whose length difference is less than a reference value, and the second embedding model is trained using a text sample pair whose length difference is the reference value or more ([0056]-[0057]; [0008]: discard a sentence pair with a length difference of more than 20% (or 30%, 50%, or more) and use the provided clean, parallel training data in Table 1 to train language models. [0061]: language models are embedding models. Together, these passages teach training embedding models using text sample pairs whose length difference meets a designated requirement. One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the requirement is customizable and thus can take various forms).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of AXELROD to train the first embedding model using a text sample pair whose length difference is less than a reference value, and train the second embedding model using a text sample pair whose length difference is the reference value or more. Doing so would filter a noisy corpus when no clean parallel data is available, as taught by AXELROD ([0029]).
With regard to claim 12,
As discussed with regard to claim 11 above, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering method of claim 11, wherein the first embedding model is trained using a text sample pair whose length difference is less than a reference value, and the second embedding model is trained using a text sample pair whose length difference is the reference value or more.
AXELROD teaches
the question answering method of claim 11, wherein the first embedding model is trained using a text sample pair whose length difference is less than a reference value, and the second embedding model is trained using a text sample pair whose length difference is the reference value or more ([0056]-[0057]; [0008]: discard a sentence pair with a length difference of more than 20% (or 30%, 50%, or more) and use the provided clean, parallel training data in Table 1 to train language models. [0061]: language models are embedding models. Together, these passages teach training embedding models using text sample pairs whose length difference meets a designated requirement. One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the requirement is customizable and thus can take various forms).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of AXELROD to train the first embedding model using a text sample pair whose length difference is less than a reference value, and train the second embedding model using a text sample pair whose length difference is the reference value or more. Doing so would filter a noisy corpus when no clean parallel data is available, as taught by AXELROD ([0029]).
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Deng, Bowen & Gao, Shan (CN117313859A), in view of Hackman (US 20250124060 A1).
With regard to claim 3,
As discussed with regard to claim 1 above, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering system of claim 1, wherein the operation of preprocessing the question includes:
an operation of generating a prompt for augmenting the question based on a question answering history of the user and the question; and
an operation of augmenting the question by inputting the prompt to a specific generative model.
Hackman teaches
the question answering system of claim 1, wherein the operation of preprocessing the question includes:
an operation of generating a prompt for augmenting the question based on a question answering history of the user and the question (Fig. 6; [0080]: at step 640, create a prompt for a language model. The prompt may include a representation of the user question and a representation of the one or more question-and-answer pairs. The representation of the question-and-answer pairs may include a representation as a dialogue history or the pairs specified as previous interactions with a language model or as the context for conversations with a language model); and
an operation of augmenting the question by inputting the prompt to a specific generative model (Fig. 6; [0086]: at step 650, submit the prompt to the language model).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of Hackman such that preprocessing the question includes generating a prompt for augmenting the question based on a question answering history of the user and the question, and augmenting the question by inputting the prompt to a specific generative model. Doing so would improve the accuracy of the answers generated by the LM by processing the question prompt from the user and modifying the prompt, as taught by Hackman ([0036]).
With regard to claim 13,
As discussed with regard to claim 11 above, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering method of claim 11, wherein the preprocessing of the question includes:
generating a prompt for augmenting the question based on a question answering history of the user and the question; and
augmenting the question by inputting the prompt to a specific generative model.
Hackman teaches
the question answering method of claim 11, wherein the preprocessing of the question includes:
generating a prompt for augmenting the question based on a question answering history of the user and the question (Fig. 6; [0080]: at step 640, create a prompt for a language model. The prompt may include a representation of the user question and a representation of the one or more question-and-answer pairs. The representation of the question-and-answer pairs may include a representation as a dialogue history or the pairs specified as previous interactions with a language model or as the context for conversations with a language model); and
augmenting the question by inputting the prompt to a specific generative model (Fig. 6; [0086]: at step 650, submit the prompt to the language model).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of Hackman such that preprocessing the question includes generating a prompt for augmenting the question based on a question answering history of the user and the question, and augmenting the question by inputting the prompt to a specific generative model. Doing so would improve the accuracy of the answers generated by the LM by processing the question prompt from the user and modifying the prompt, as taught by Hackman ([0036]).
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Deng, Bowen & Gao, Shan (CN117313859A), in view of Heller et al. (US 12067366 B1).
With regard to claim 4,
As discussed with regard to claim 1 above, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering system of claim 1, wherein the operation of generating the answer to the preprocessed question includes:
an operation of obtaining surrounding passages associated with a first common passage of the one or more common passages, the surrounding passages being passages located around the first common passage in a document to which the first common passage belongs; and an operation of generating the answer to the preprocessed question by including the first common passage and the surrounding passages in the same prompt.
Heller teaches
the question answering system of claim 1, wherein the operation of generating the answer to the preprocessed question includes:
an operation of obtaining surrounding passages associated with a first common passage of the one or more common passages, the surrounding passages being passages located around the first common passage in a document to which the first common passage belongs; and an operation of generating the answer to the preprocessed question by including the first common passage and the surrounding passages in the same prompt (Fig. 13; Col. 29, lines 11 and 20-21: receive a query request at 1302 and create a query expansion prompt at 1304 based on the query request and a query expansion prompt template. Col. 30, lines 23-25, 52-53: execute one or more search queries based on the query expansion response at 1310 through 1312 and return one or more search results by the one or more search queries at 1314, where a search result may include one or more documents, one or more passages selected from one or more documents, and the like. Col. 30, lines 60-64: retrieve contextual information for one or more of the search results at 1316 through 1318. For example, a search result may include only a limited amount of text, such as a few sentences, selected from a larger document. In such a situation, a context retrieval query may be used to retrieve a larger amount of text, such as two pages, surrounding a passage retrieved from a larger document. Col. 32, line 59; Col. 33, lines 4-8: create one or more synthesis prompts at 1338, where a synthesis prompt may include a portion of text, such as several paragraphs, surrounding a search result retrieved at 1316 through 1318).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of Heller to obtain surrounding passages associated with a first common passage of the one or more common passages, the surrounding passages being passages located around the first common passage in a document to which the first common passage belongs, and generate the answer to the preprocessed question by including the first common passage and the surrounding passages in the same prompt. Doing so would provide for retrieval-augmented generation by conducting a search based on a search query and then providing the search results to an artificial intelligence system to further process the search results to produce an answer based on those search results, as taught by Heller (Col. 2, lines 53-62).
With regard to claim 14,
As discussed with regard to claim 11 above, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering method of claim 11, wherein the generating of the answer to the preprocessed question includes:
obtaining surrounding passages associated with a first common passage of the one or more common passages, the surrounding passages being passages located around the first common passage in a document to which the first common passage belongs; and generating the answer to the preprocessed question by including the first common passage and the surrounding passages in the same prompt.
Heller teaches
the question answering method of claim 11, wherein the generating of the answer to the preprocessed question includes:
obtaining surrounding passages associated with a first common passage of the one or more common passages, the surrounding passages being passages located around the first common passage in a document to which the first common passage belongs; and generating the answer to the preprocessed question by including the first common passage and the surrounding passages in the same prompt (Fig. 13; Col. 29, lines 11 and 20-21: receive a query request at 1302 and create a query expansion prompt at 1304 based on the query request and a query expansion prompt template. Col. 30, lines 23-25, 52-53: execute one or more search queries based on the query expansion response at 1310 through 1312 and return one or more search results by the one or more search queries at 1314, where a search result may include one or more documents, one or more passages selected from one or more documents, and the like. Col. 30, lines 60-64: retrieve contextual information for one or more of the search results at 1316 through 1318. For example, a search result may include only a limited amount of text, such as a few sentences, selected from a larger document. In such a situation, a context retrieval query may be used to retrieve a larger amount of text, such as two pages, surrounding a passage retrieved from a larger document. Col. 32, line 59; Col. 33, lines 4-8: create one or more synthesis prompts at 1338, where a synthesis prompt may include a portion of text, such as several paragraphs, surrounding a search result retrieved at 1316 through 1318).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of Heller to obtain surrounding passages associated with a first common passage of the one or more common passages, the surrounding passages being passages located around the first common passage in a document to which the first common passage belongs, and generate the answer to the preprocessed question by including the first common passage and the surrounding passages in the same prompt. Doing so would provide for retrieval-augmented generation by conducting a search based on a search query and then providing the search results to an artificial intelligence system to further process the search results to produce an answer based on those search results, as taught by Heller (Col. 2, lines 53-62).
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Deng, Bowen & Gao, Shan (CN117313859A), in view of ANANTHANARAYANAN et al. (US 20240419698 A1).
With regard to claim 5,
As discussed with regard to claim 1 above, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering system of claim 1, wherein the one or more common passages include a first common passage and a second common passage, and the operation of generating the answer to the preprocessed question includes:
an operation of generating a first prompt based on the preprocessed question and the first common passage; an operation of generating a first candidate answer to the preprocessed question by inputting the first prompt to the generative model;
an operation of generating a second prompt based on the preprocessed question and the second common passage; and an operation of generating a second candidate answer to the preprocessed question by inputting the second prompt to the generative model.
ANANTHANARAYANAN teaches
the question answering system of claim 1, wherein the one or more common passages include a first common passage and a second common passage, and the operation of generating the answer to the preprocessed question includes:
an operation of generating a first prompt based on the preprocessed question and the first common passage; an operation of generating a first candidate answer to the preprocessed question by inputting the first prompt to the generative model (Fig. 8; [0082]: generate and provide a first prompt as input to the foundation model at 860. The first prompt includes a first string of text based on the input query and the first context profile. Determine a first relevancy score for the first response of the foundation model responsive to the first prompt at 862);
an operation of generating a second prompt based on the preprocessed question and the second common passage; and an operation of generating a second candidate answer to the preprocessed question by inputting the second prompt to the generative model (Fig. 8; [0083]: generate and provide a second prompt as input to the foundation model at 866. The second prompt includes a second string of text based on the input query and the second context profile. Determine a second relevancy score for the second response of the foundation model responsive to the second prompt at 868).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of ANANTHANARAYANAN such that generating the answer to the preprocessed question includes generating a first prompt based on the preprocessed question and the first common passage, generating a first candidate answer to the preprocessed question by inputting the first prompt to the generative model, generating a second prompt based on the preprocessed question and the second common passage, and generating a second candidate answer to the preprocessed question by inputting the second prompt to the generative model. Doing so would help to improve the relevance of the response while reducing the cost of the foundation model's response to the query, as taught by ANANTHANARAYANAN ([0014]).
With regard to claim 15,
As discussed with regard to claim 11 above, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering method of claim 11, wherein the one or more common passages include a first common passage and a second common passage, and the generating of the answer to the preprocessed question includes:
generating a first prompt based on the preprocessed question and the first common passage; generating a first candidate answer to the preprocessed question by inputting the first prompt to the generative model;
generating a second prompt based on the preprocessed question and the second common passage; and generating a second candidate answer to the preprocessed question by inputting the second prompt to the generative model.
ANANTHANARAYANAN teaches
the question answering method of claim 11, wherein the one or more common passages include a first common passage and a second common passage, and the generating of the answer to the preprocessed question includes:
generating a first prompt based on the preprocessed question and the first common passage; generating a first candidate answer to the preprocessed question by inputting the first prompt to the generative model (Fig. 8; [0082]: generate and provide a first prompt as input to the foundation model at 860. The first prompt includes a first string of text based on the input query and the first context profile. Determine a first relevancy score for the first response of the foundation model responsive to the first prompt at 862);
generating a second prompt based on the preprocessed question and the second common passage; and generating a second candidate answer to the preprocessed question by inputting the second prompt to the generative model (Fig. 8; [0083]: generate and provide a second prompt as input to the foundation model at 866. The second prompt includes a second string of text based on the input query and the second context profile. Determine a second relevancy score for the second response of the foundation model responsive to the second prompt at 868).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of ANANTHANARAYANAN to generate the answer to the preprocessed question including generating a first prompt based on the preprocessed question and the first common passage, generating a first candidate answer to the preprocessed question by inputting the first prompt to the generative model, generating a second prompt based on the preprocessed question and the second common passage, and generating a second candidate answer to the preprocessed question by inputting the second prompt to the generative model. Doing so would help to improve the relevance of the response while reducing the cost of the foundation model's response to the query as taught by ANANTHANARAYANAN ([0014]).
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Deng, Bowen & Gao, Shan (CN117313859A), in view of CUI et al. (US 20250077940 A1).
With regard to claim 6,
As discussed in claim 1, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering system of claim 1, wherein the operation of generating the answer to the preprocessed question includes:
an operation of generating a candidate answer to the preprocessed question by inputting a prompt generated based on the preprocessed question to the generative model;
an operation of generating a verification prompt for verifying the candidate answer; an operation of verifying the candidate answer by inputting the verification prompt to a specific generative model; and an operation of providing the candidate answer as the answer to the preprocessed question based on a verification result.
CUI teaches
the question answering system of claim 1, wherein the operation of generating the answer to the preprocessed question includes:
an operation of generating a candidate answer to the preprocessed question by inputting a prompt generated based on the preprocessed question to the generative model (Fig. 4; [0052]: generate an initial prompt at step 406 and transmit the initial prompt to a machine learning model at step 408. The model generates a potential answer at step 410 in response to the initial query);
an operation of generating a verification prompt for verifying the candidate answer; an operation of verifying the candidate answer by inputting the verification prompt to a specific generative model; and an operation of providing the candidate answer as the answer to the preprocessed question based on a verification result (Fig. 4; [0052]-[0053]: formulate a verification prompt at step 412, transmit the verification prompt to the model for processing at step 414, process the verification prompt and output one of two possible answers, either a “NO” response or a “YES” response, at step 416. For a “YES” response to the verification prompt, which is indicative of the potential answer being responsive to the initial prompt, output, to the user, a final answer generated from the potential answer determined to be responsive to the initial prompt).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of CUI to generate a candidate answer to the preprocessed question by inputting a prompt generated based on the preprocessed question to the generative model, generate a verification prompt for verifying the candidate answer, verify the candidate answer by inputting the verification prompt to a specific generative model, and provide the candidate answer as the answer to the preprocessed question based on a verification result. Doing so would provide a method for detecting hallucinations output from a machine learning model by outputting to the user the potential answer as a final answer upon receiving a positive response to the verification prompt as taught by CUI ([0006]).
With regard to claim 16,
As discussed in claim 11, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering method of claim 11, wherein the generating of the answer to the preprocessed question includes:
generating a candidate answer to the preprocessed question by inputting a prompt generated based on the preprocessed question to the generative model;
generating a verification prompt for verifying the candidate answer; verifying the candidate answer by inputting the verification prompt to a specific generative model; and providing the candidate answer as the answer to the preprocessed question based on a verification result.
CUI teaches
the question answering method of claim 11, wherein the generating of the answer to the preprocessed question includes:
generating a candidate answer to the preprocessed question by inputting a prompt generated based on the preprocessed question to the generative model (Fig. 4; [0052]: generate an initial prompt at step 406 and transmit the initial prompt to a machine learning model at step 408. The model generates a potential answer at step 410 in response to the initial query);
generating a verification prompt for verifying the candidate answer; verifying the candidate answer by inputting the verification prompt to a specific generative model; and providing the candidate answer as the answer to the preprocessed question based on a verification result (Fig. 4; [0052]-[0053]: formulate a verification prompt at step 412, transmit the verification prompt to the model for processing at step 414, process the verification prompt and output one of two possible answers, either a “NO” response or a “YES” response, at step 416. For a “YES” response to the verification prompt, which is indicative of the potential answer being responsive to the initial prompt, output, to the user, a final answer generated from the potential answer determined to be responsive to the initial prompt).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of CUI to generate a candidate answer to the preprocessed question by inputting a prompt generated based on the preprocessed question to the generative model, generate a verification prompt for verifying the candidate answer, verify the candidate answer by inputting the verification prompt to a specific generative model, and provide the candidate answer as the answer to the preprocessed question based on a verification result. Doing so would provide a method for detecting hallucinations output from a machine learning model by outputting to the user the potential answer as a final answer upon receiving a positive response to the verification prompt as taught by CUI ([0006]).
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Deng, Bowen & Gao, Shan (CN117313859A), in view of YANKOV et al. (US 20250076059 A1), and further in view of Wu et al. (US 20240418515 A1).
With regard to claim 7,
As discussed in claim 1, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering system of claim 1, wherein the knowledge base includes a drawing database (DB), and the one or more computer programs further include instructions for:
an operation of receiving another question related to path finding;
an operation of obtaining analysis information of a drawing associated with the another question by retrieving the drawing DB using the another question, the analysis information including location information of elements of a space represented by the drawing and path information between the elements;
an operation of generating a prompt based on the another question and the analysis information; and
an operation of deriving information related to the path finding by inputting the prompt to the generative model.
YANKOV teaches
the question answering system of claim 1, wherein the knowledge base includes a drawing database (DB) (Fig. 1; [0056]: a data store 132 that stores different collections of images. [0058]: a data store 136 that provides map-related information. [0059]: a data store 140 of roadway information), and the one or more computer programs further include instructions for:
an operation of receiving another question related to path finding (Fig. 2; [0064]: receive a new query 112. [0077]: the center series of blocks illustrates the query 112 explicitly specifies a route-finding request, in which the entities to be visited are explicitly identified);
an operation of generating a prompt based on the another question and the analysis information (Fig. 1; [0054]: the prompt-generating component 124 produces a query prompt that also expresses context information drawn from the dialogue history stored in the data store 122 in addition to the current query 112); and
an operation of deriving information related to the path finding by inputting the prompt to the generative model (Fig. 1; [0053]: a prompt-generating component 124 generates a prompt to submit to the language model 104 upon each interaction with the language model 104. [0083]: in stage F′, the control component 120 presents the output information 118, which includes an interactive map that shows at least one proposed route produced by the routing engine 138 and any supplemental image(s) retrieved by the image search engine 130).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of YANKOV to receive another question related to path finding, generate a prompt based on the another question and the analysis information, and derive information related to the path finding by inputting the prompt to the generative model. Doing so would successfully interpret complex map-related queries, without demanding that a user express the query in a predetermined manner as taught by YANKOV ([0013]).
Deng, Bowen & Gao, Shan and YANKOV do not teach
an operation of obtaining analysis information of a drawing associated with the another question by retrieving the drawing DB using the another question, the analysis information including location information of elements of a space represented by the drawing and path information between the elements;
Wu teaches
an operation of obtaining analysis information of a drawing associated with the another question by retrieving the drawing DB using the another question, the analysis information including location information of elements of a space represented by the drawing and path information between the elements ([0118]: an LLM may retrieve and/or access map data or other information determined to be necessary to generate an output. For example, additional contextual information, additional map information, additional feature information, and/or other information).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan and YANKOV to incorporate the teachings of Wu to obtain analysis information of a drawing associated with the another question by retrieving the drawing DB using the another question, the analysis information including location information of elements of a space represented by the drawing and path information between the elements. Doing so would generate more detailed routing information that is optimized based on the additional information available as taught by Wu (Abstract).
With regard to claim 17,
As discussed in claim 11, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering method of claim 11, wherein the knowledge base includes a drawing database (DB), and the question answering method further comprises:
receiving another question related to path finding;
obtaining analysis information of a drawing associated with the another question by retrieving the drawing DB using the another question, the analysis information including location information of elements of a space represented by the drawing and path information between the elements;
generating a prompt based on the another question and the analysis information; and
deriving information related to the path finding by inputting the prompt to the generative model.
YANKOV teaches
the question answering method of claim 11, wherein the knowledge base includes a drawing database (DB) (Fig. 1; [0056]: a data store 132 that stores different collections of images. [0058]: a data store 136 that provides map-related information. [0059]: a data store 140 of roadway information), and the question answering method further comprises:
receiving another question related to path finding (Fig. 2; [0064]: receive a new query 112. [0077]: the center series of blocks illustrates the query 112 explicitly specifies a route-finding request, in which the entities to be visited are explicitly identified);
generating a prompt based on the another question and the analysis information (Fig. 1; [0054]: the prompt-generating component 124 produces a query prompt that also expresses context information drawn from the dialogue history stored in the data store 122 in addition to the current query 112); and
deriving information related to the path finding by inputting the prompt to the generative model (Fig. 1; [0053]: a prompt-generating component 124 generates a prompt to submit to the language model 104 upon each interaction with the language model 104. [0083]: in stage F′, the control component 120 presents the output information 118, which includes an interactive map that shows at least one proposed route produced by the routing engine 138 and any supplemental image(s) retrieved by the image search engine 130).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of YANKOV to receive another question related to path finding, generate a prompt based on the another question and the analysis information, and derive information related to the path finding by inputting the prompt to the generative model. Doing so would successfully interpret complex map-related queries, without demanding that a user express the query in a predetermined manner as taught by YANKOV ([0013]).
Deng, Bowen & Gao, Shan and YANKOV do not teach
obtaining analysis information of a drawing associated with the another question by retrieving the drawing DB using the another question, the analysis information including location information of elements of a space represented by the drawing and path information between the elements;
Wu teaches
obtaining analysis information of a drawing associated with the another question by retrieving the drawing DB using the another question, the analysis information including location information of elements of a space represented by the drawing and path information between the elements ([0118]: an LLM may retrieve and/or access map data or other information determined to be necessary to generate an output. For example, additional contextual information, additional map information, additional feature information, and/or other information).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan and YANKOV to incorporate the teachings of Wu to obtain analysis information of a drawing associated with the another question by retrieving the drawing DB using the another question, the analysis information including location information of elements of a space represented by the drawing and path information between the elements. Doing so would generate more detailed routing information that is optimized based on the additional information available as taught by Wu (Abstract).
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Deng, Bowen & Gao, Shan (CN117313859A), in view of CHANDEL et al. (US 20250068665 A1).
With regard to claim 8,
As discussed in claim 1, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering system of claim 1, wherein the one or more computer programs further include instructions for:
an operation of receiving another question retrieving a document related to specific information; an operation of obtaining a passage associated with the another question by retrieving the knowledge base using the another question; an operation of generating a prompt based on meta information of a document to which the another question and the obtained passage belong; and an operation of deriving information of the document related to the specific information by inputting the prompt to the generative model.
CHANDEL teaches
the question answering system of claim 1, wherein the one or more computer programs further include instructions for:
an operation of receiving another question retrieving a document related to specific information; an operation of obtaining a passage associated with the another question by retrieving the knowledge base using the another question; an operation of generating a prompt based on meta information of a document to which the another question and the obtained passage belong; and an operation of deriving information of the document related to the specific information by inputting the prompt to the generative model (Fig. 2; [0024]-[0026]; Abstract: receive a query and context 214 to search for examples of code segments from the codebase that are similar to the user query. The search engine 206 searches the codebase segment table 208 to find the top-k closely-similar embeddings, where the metadata and code segments 226 associated with the top-k closely-similar embeddings are extracted and used as the examples 222A-222K. The prompt generator 210 uses the user query, the context, and the examples 222A-222K to form a prompt 216 that is transmitted to the large language model 212 to return a response 218. [0004]: a code segment may include a file of the codebase, a class of a file of the codebase, and a method of a file of the codebase).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of CHANDEL to receive another question retrieving a document related to specific information, obtain a passage associated with the another question by retrieving the knowledge base using the another question, generate a prompt based on meta information of a document to which the another question and the obtained passage belong, and derive information of the document related to the specific information by inputting the prompt to the generative model. Doing so would search for code examples to augment the model prompt to improve large language model performance on code elements of a codebase that the model has not seen before during training as taught by CHANDEL ([0002]).
With regard to claim 18,
As discussed in claim 11, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering method of claim 11, further comprising:
receiving another question retrieving a document related to specific information; obtaining a passage associated with the another question by retrieving the knowledge base using the another question; generating a prompt based on meta information of a document to which the another question and the obtained passage belong; and deriving information of the document related to the specific information by inputting the prompt to the generative model.
CHANDEL teaches
the question answering method of claim 11, further comprising:
receiving another question retrieving a document related to specific information; obtaining a passage associated with the another question by retrieving the knowledge base using the another question; generating a prompt based on meta information of a document to which the another question and the obtained passage belong; and deriving information of the document related to the specific information by inputting the prompt to the generative model (Fig. 2; [0024]-[0026]; Abstract: receive a query and context 214 to search for examples of code segments from the codebase that are similar to the user query. The search engine 206 searches the codebase segment table 208 to find the top-k closely-similar embeddings, where the metadata and code segments 226 associated with the top-k closely-similar embeddings are extracted and used as the examples 222A-222K. The prompt generator 210 uses the user query, the context, and the examples 222A-222K to form a prompt 216 that is transmitted to the large language model 212 to return a response 218. [0004]: a code segment may include a file of the codebase, a class of a file of the codebase, and a method of a file of the codebase).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of CHANDEL to receive another question retrieving a document related to specific information, obtain a passage associated with the another question by retrieving the knowledge base using the another question, generate a prompt based on meta information of a document to which the another question and the obtained passage belong, and derive information of the document related to the specific information by inputting the prompt to the generative model. Doing so would search for code examples to augment the model prompt to improve large language model performance on code elements of a codebase that the model has not seen before during training as taught by CHANDEL ([0002]).
Claims 9, 10, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Deng, Bowen & Gao, Shan (CN117313859A), in view of Raviv et al. (US 20250200034 A1).
With regard to claim 9,
As discussed in claim 1, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering system of claim 1, wherein the knowledge base includes a database (DB) supporting query statement-based retrieval and a passage DB, and the one or more computer programs include further instructions for
an operation of receiving another question requesting retrieval of specific information;
an operation of generating a prompt for converting the another question into a specific query statement based on the another question, information of the DB, and a query statement example, the query statement example including a user question sample and a query statement sample corresponding to the user question sample;
an operation of converting the another question into the specific query statement by inputting the prompt to the generative model; and
an operation of retrieving the DB using the specific query statement.
Raviv teaches
the question answering system of claim 1, wherein the knowledge base includes a database (DB) supporting query statement-based retrieval and a passage DB (Fig. 5: data store 508), and the one or more computer programs include further instructions for
an operation of receiving another question requesting retrieval of specific information (Fig. 4; [0043]: at 402, a user may input a natural language request via user interface 502);
an operation of generating a prompt for converting the another question into a specific query statement based on the another question, information of the DB, and a query statement example, the query statement example including a user question sample and a query statement sample corresponding to the user question sample (Fig. 4; Fig. 5; [0044]-[0045]: At 404, campaign reporting system 500 (using campaign LLM 506) may convert the natural language request to a suitable query. A suitable query may be understood to include any request to retrieve or manipulate information (e.g., structured or unstructured data) that is expressed in a language and format sufficient to execute on the data store being queried. In some embodiments, campaign LLM 506 is configured to generate suitable queries for any type of data store (e.g., SQL, NoSQL, HQL, etc.) that is communicatively coupled to campaign LLM 506. [0046]-[0047]: Fig. 6 shows such an example query statement generated based on the user input suitable for the data store being queried);
an operation of converting the another question into the specific query statement by inputting the prompt to the generative model (Fig. 4; [0044]-[0045]: at 404, campaign LLM 506 may convert the natural language request to a suitable query); and
an operation of retrieving the DB using the specific query statement (Fig. 4; Fig. 5; [0048]: at 406, execute the suitable query and/or queries on data store 508 and retrieve the output).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of Raviv to receive another question requesting retrieval of specific information, generate a prompt for converting the another question into a specific query statement based on the another question, information of the DB, and a query statement example, the query statement example including a user question sample and a query statement sample corresponding to the user question sample, convert the another question into the specific query statement by inputting the prompt to the generative model, and retrieve the DB using the specific query statement. Doing so would overcome the challenge for decision makers (e.g., advertising campaign managers, marketers, etc.) to mine relevant data sets in order to glean useful insights from ever-increasing amounts of data and metrics as taught by Raviv ([0001]).
With regard to claim 10,
As discussed in claim 9, Deng, Bowen & Gao, Shan and Raviv teach all the limitations therein.
Deng, Bowen & Gao, Shan further teaches
the question answering system of claim 9, wherein the one or more computer programs further include instructions for:
an operation of obtaining a passage associated with the another question by retrieving the passage DB using another question when the retrieval of the DB according to the specific query statement is unsuccessful (in addition to the indefiniteness discussed in the 112(b) rejections above, this limitation contains a contingent clause introduced by “when …” that does not need to be taught, see MPEP 2111.04 II);
an operation of generating an additional prompt based on the another question and the obtained passage; and an operation of generating an answer to the another question by inputting the additional prompt to the generative model (this is taught in a manner similar to that in the parent claim. Page 3, lines 34-37; Page 6, lines 9-12: generate a context prompt template based on the final knowledge paragraph. Page 1, Abstract: input the context prompt template into the LLM, perform parallel reasoning, and output answers in a streaming manner).
With regard to claim 19,
As discussed in claim 11, Deng, Bowen & Gao, Shan teaches all the limitations therein.
Deng, Bowen & Gao, Shan does not teach
the question answering method of claim 11, wherein the knowledge base includes a database (DB) supporting query statement-based retrieval and a passage DB, and the question answering method further comprises:
receiving another question requesting retrieval of specific information;
generating a prompt for converting the another question into a specific query statement based on the another question, information of the DB, and a query statement example, the query statement example including a user question sample and a query statement sample corresponding to the user question sample;
converting the another question into the specific query statement by inputting the prompt to the generative model; and
retrieving the DB using the specific query statement.
Raviv teaches
the question answering method of claim 11, wherein the knowledge base includes a database (DB) supporting query statement-based retrieval and a passage DB (Fig. 5: data store 508), and the question answering method further comprises:
receiving another question requesting retrieval of specific information (Fig. 4; [0043]: at 402, a user may input a natural language request via user interface 502);
generating a prompt for converting the another question into a specific query statement based on the another question, information of the DB, and a query statement example, the query statement example including a user question sample and a query statement sample corresponding to the user question sample (Fig. 4; Fig. 5; [0044]-[0045]: At 404, campaign reporting system 500 (using campaign LLM 506) may convert the natural language request to a suitable query. A suitable query may be understood to include any request to retrieve or manipulate information (e.g., structured or unstructured data) that is expressed in a language and format sufficient to execute on the data store being queried. In some embodiments, campaign LLM 506 is configured to generate suitable queries for any type of data store (e.g., SQL, NoSQL, HQL, etc.) that is communicatively coupled to campaign LLM 506. [0046]-[0047]: Fig. 6 shows such an example query statement generated based on the user input suitable for the data store being queried);
converting the another question into the specific query statement by inputting the prompt to the generative model (Fig. 4; [0044]-[0045]: at 404, campaign LLM 506 may convert the natural language request to a suitable query); and
retrieving the DB using the specific query statement (Fig. 4; Fig. 5; [0048]: at 406, execute the suitable query and/or queries on data store 508 and retrieve the output).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Deng, Bowen & Gao, Shan to incorporate the teachings of Raviv to receive another question requesting retrieval of specific information, generate a prompt for converting the another question into a specific query statement based on the another question, information of the DB, and a query statement example, the query statement example including a user question sample and a query statement sample corresponding to the user question sample, convert the another question into the specific query statement by inputting the prompt to the generative model, and retrieve the DB using the specific query statement. Doing so would overcome the challenge for decision makers (e.g., advertising campaign managers, marketers, etc.) to mine relevant data sets in order to glean useful insights from ever-increasing amounts of data and metrics as taught by Raviv ([0001]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOQIN HU whose telephone number is (571)272-1792. The examiner can normally be reached on Monday-Friday 7:00am-3:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones can be reached on (571) 272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAOQIN HU/Examiner, Art Unit 2168
/CHARLES RONES/Supervisory Patent Examiner, Art Unit 2168