DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
In response to the Office action mailed 10/1/2025, applicant has submitted an amendment, filed 12/22/2025, amending claims 1, 3-7, 9, 12-14, and 16-20, cancelling claims 2, 10, and 15, and traversing the prior art rejections. Applicant’s arguments have been fully considered, but the previous grounds of rejection are maintained for the reasons explained in the response to arguments.
Response to Arguments
Applicant’s arguments are addressed below in the order presented.
Page 8, § “I” provides a broad overview of the latest amendments.
Page 8, § “II” discusses the previous claim objections.
In view of the latest amendments, those objections are overcome.
Page 8, § “III” discusses the previous 112(b) rejections.
In view of the latest amendments, those rejections are overcome.
Page 9, § “IV” discusses the previous 35 U.S.C. 101 rejection of claims 14-20.
In view of the latest amendments, that rejection is overcome.
The remainder of page 9 provides a list of items “A”, “B” and “C” allegedly not taught by the primary reference Qiao. Then on page 10, as an initial reason with respect to item “A”, it is asserted (¶ 4, line 1): “the “prompt banks 318” in Qiao are fixed and do not change with different input queries”. This assertion is intended to invalidate the mapping of Qiao’s “prompt banks 318” to the “summary information” of claim 1.
Respectfully, the “summary information” of the instant application is also “fixed”; see the Abstract, lines 3-4: “Summary information of each file may be pre-stored …”. Something “pre-stored” in a storage that is not updated is likewise “fixed” in time.
Page 10, ¶ 4, last 3 lines: “Qiao does not disclose that the associated data is related with the query request, and if the query request is different, the corresponding associated data is also different. Therefore, “the associated data” is not disclosed by the “prompt banks 318” of Qiao”. A related argument appears on page 11, first ¶, lines 2-3: “Therefore, “one or more prompt banks” are also fixed and will not change with different input queries”.
There are three flaws here. First, according to Sp. ¶ 0043, sentence 1: “After receiving the query request and the associated data (QAP-1)”, i.e., “QAP” is an example of “associated data”; this maps identically to Qiao’s “prompt banks”, from which prompts, according to Qiao ¶ 0024, sentence 5, “can be selected wherein the prompts comprise question-answer pairs”. Second, the disclosure’s “associated data” and/or “summary information” are stored in “storage unit 602” (Sp. ¶ 0121, sentence 1: “obtain association data of the query request from the storage unit 602”); being “stored” in a “storage unit” that is not updated implies they likewise never change. Third, it is not correct that the “associated data” and/or “summary information” used in generating the “query response” must change as the “query” changes, because the “query response” also depends on the “intermediate query result”.
On page 11, ¶ 1, the last 2 sentences state: “if the query request is different, the corresponding associated data is different. Qiao does not disclose this feature and therefore the “one or more prompt banks” disclosed in Qiao does not disclose the “summary information” as recited in Claims 1, 9 and 14”.
As an initial matter, as explained above, the first statement is both factually and logically incorrect: see Sp. ¶ 0121, sentence 2: “obtain association data of the query request from the storage unit 602”, which implies the “association data” is pre-stored in a storage that does not evolve in time and is therefore fixed. Second, even if a “query” changes, a response that depends on the “summary information” (which may not change) could still change, because it also depends on the “intermediate query result”, which can change. Finally, although Qiao’s “prompt banks 318” may not change with time, they “comprise” “one or more prompt banks” (Qiao ¶ 0049, lines 1-2); i.e., if a “query” changes, a different “prompt bank” and/or a different combination of the prompt banks may be used to provide the “query response” to that particular “query”.
Page 11, 2nd ¶, lines 3+: “In contradistinction, the role of “the one or more files” as recited in the claims is to provide files corresponding to the one or more summary information for matching the intermediate query result”; “Therefore, the claimed “the one or more files” is different from that of “the one or more prompt banks”.
Respectfully, in Qiao the “prompts” are also selected to match the input query; see Qiao ¶ 0024, sentence 4: “if the input query is classified as “angry”, then a number of prompts from an “angry” prompt bank can be selected”; i.e., a “prompt bank” of the specific (matched) “angry” type is selected, and “prompts” that match the query are applied.
On page 11, the last 2 ¶’s assert: “Specifically, the data in the summary is binary data while the date [sic] in the vector data base is vector data …” and “vector database is used for storing, retrieving vector data, and can [sic] used for similarity search”.
Respectfully, anything stored in the memory or storage of a computing or cellular device is in binary format, including the “vector data”; the “vector data” is simply one class of binary data. The “vector” property of the data merely serves post-processing purposes such as similarity calculations, which are independent of the nature of the “memory”, and no such calculations are claimed. This argument would have been valid only if some claim limitation required the similarity calculation and the Office action had mapped elements used for that calculation to memory that is not “vector” formatted.
On page 12, 3rd ¶, last sentence, it is asserted: “According to Qiao, “prompts 101” is random prompts selected from “prompt banks 318”, which means “prompts 101” and “prompt bank 318” are different”.
According to Qiao ¶ 0024, sentence 4: “prompts can be selected from a prompt bank”; i.e., “prompts” are simply elements of the set defined by a “prompt bank”, and the set’s functionality is defined by its elements, the “prompts”. Note also that there are “one or more” (i.e., a plurality of) “prompt banks” (Qiao ¶ 0049, sentence 1), each of which maps to the claimed “files”. Thus “prompt banks 318” (the associated data) “can comprise” “one or more prompt banks” (the one or more files or summary information).
Page 12, ¶ 4, last 2 sentences assert: “It is understood that the summary information is completely different from question-and-answer pairs in concept and form. Therefore, the concept of “the associated data” recited in the claims is different from that of “prompts 101””.
Respectfully, this is incorrect: according to Sp. ¶ 0065, lines 2+: “the answers in the QAP may also be stored by the summary information”; i.e., the “summary information” does include QAP (“question-and-answer pairs”).
On page 13, 2nd ¶, lines 3-5, the following conclusion is drawn: “the “output 104” of Qiao is an emotion response corresponding to input query, rather than answers to questions queried by the query request”.
Please see Qiao ¶ 0029, 2nd column, lines 2+: “GPT model 103 can utilize the prompts 101 as a set of example responses for questions and generate an output 104 to input query 102”. A “response” (the “output 104”) to an “input query” maps to an answer to a question. Numerous other examples appear in Qiao ¶¶ 0046, 0050, and 0051.
The arguments presented in § C are all based on the notions that: (1) Qiao’s “output” (also called in some paragraphs “the second response portion”) somehow does not disclose the “intermediate query result” (page 13, last paragraph), and (2) the “matched one or more summary information” is not disclosed by the “prompts” of Qiao (page 14, paragraph 3).
For point (1), please refer to the explanation in the preceding paragraph. As regards point (2), see Sp. ¶ 0066: “It will be appreciated that in some embodiments, the summary information of the file may also be included in the answers in the QAP”; i.e., the “summary information” comprises QAP (question-and-answer pairs). Qiao’s “prompts”, according to ¶ 0024, line 14, likewise “comprise question-and-answer pairs”. This invalidates the conclusion above, as it shows the claimed “summary” is of the same nature as Qiao’s “prompts”.
Page 15, § “VI”, lines 2+, discusses the 103 rejections of certain dependent claims, concluding: “Wu does not teach the claimed subject matter missing in Qiao to remedy the deficiencies of Qiao”, after which the entirety of claim 1 is copied and pasted.
Respectfully, Wu was not and is not relied upon for claim 1. Since applicant has not argued the merits of these dependent claims, but asserts patentability solely through their dependence on the allegedly patentable parent claims, they stand or fall with the parent claims, and no further response to applicant’s arguments is necessary.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 3-4, 8-9, 11-12, 14, 16-17, 20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Qiao et al. (US 2024/0248920).
Regarding claim 1, Qiao et al. do teach an apparatus (Title, Abstract)
comprising: a memory for storing instructions; and one or more processors for executing the instructions (¶ 0033 sentence 2: “The memory 308 can store computer-executable instructions which, upon execution by the processor”)
to perform the following process:
receiving a query request (¶ 0029 sentence 3: “During in-context learning, GPT model 103” “can receive” (receiving) “prompts 101” “and input query 102” (a query request) “as input”; ¶ 0029 2nd column line 1: “input query 102 can comprise a question or statement for GPT model”);
determining associated data of the query request, the associated data including one or more question-answering pairs associated with the query request in a plurality of question-answering pairs, and summary information of one or more files associated with the query request in summary information of a plurality of files, wherein the plurality of question-answering pairs and the summary information of the plurality of files are stored in a vector database (¶ 0029 sentence 3: “During in-context learning, GPT model 103” “can receive” (determining) “prompts 101” (the associated data) “and input query 102” (of the query request); ¶ 0024 lines 14+: “the prompts” (associated data) “comprise” (include) “question-and-answer pairs” (one or more question-answering pairs associated with the “query 102” (query request)); ¶ 0049 sentence 1: “Prompt banks 318” (the associated data) “can comprise” (include) “one or more prompt banks” (one or more files or summary information) “corresponding to the one or more emotion classes. For example, given the emotion classes angry, disappointed, and grateful, prompt banks 318 can comprise an angry prompt bank, a disappointed prompt bank, and a grateful prompt bank”; ¶ 0033 last sentence: “the memory 308” (a vector database) “can store” (stores) “prompt banks 318” (the summary information comprising the “prompt banks”, i.e., the one or more files, each of which comprises question-answer pairs because each “prompt” “comprise[s] question-and-answer pairs” (¶ 0024)));
inputting the query request and the associated data into a large language model so that the large language model outputs an intermediate query result of the query request based on the content of the associated data (¶ 0029 sentence 3: “During in-context learning, GPT model 103” (a large language model) “can receive” (is inputted) “prompts 101” (the associated data) “and input query 102” (and a query request) “that GPT model 103 can use to generate an output 104” (to output an intermediate query result, also called a “second response portion” (¶ 0051 last sentence), because according to ¶ 0051 lines 22-24: “The selected prompts” (associated data) “and input query” (and the query request) “used by second GPT model” (inputted into the large language model) “to generate second response portion” (to output an intermediate query result))),
wherein the intermediate query result includes answers to questions queried by the query request (¶ 0029 2nd column lines 2-4: “GPT model 103 can utilize the prompts 101 as a set of example responses for questions and generate an output 104 to input query 102” (the “output 104” (intermediate query result) comprises “responses” (answers) to “questions” (input query request)); ¶ 0051 last sentence: the “final output 408” (answers to questions queried by the query request) comprises the “second response portion” (the intermediate query result) and “the first response portion”);
determining a query result of the query request based on the intermediate query result and summary information of the one or more files in the associated data (¶ 0051 last sentence: “The first GPT model 406 then generates the first response portion” “based on the filtered input query” (using the query request) “and combination component 312 combines the first response portion and the second response portion” (and the intermediate query result obtained based on the associated data of the one or more files or summary information) “at 407 to generate final output 408” (to determine a query result));
comprising:
when one or more summary information in the summary information of the one or more files matching the intermediate query result, adding the files corresponding to the one or more summary information to the intermediate query result to obtain the query result of the query request (¶ 0050 sentence 1: “In-context learning component 316 can then concatenate” (adding) “the input query” (a resulting intermediate query result) “with the selected” (matched) “prompts” (summary information files); i.e., concatenating, e.g., “input query” plus “prompt[1]” plus “prompt[2]” amounts to “input query” plus “prompt[1]” (an intermediate query result) being added to “prompt[2]” (a matched summary information file); ¶ 0051 lines 22-24: “The selected prompts” (matched summary information or files) “and input query” (and the query request) “used by second GPT model” (inputted by adding into the large language model) “to generate second response portion” (to output an intermediate query result); ¶ 0049 last sentence: “number of selected prompts can comprise between one and twenty”).
Regarding claim 3, Qiao et al. do teach the apparatus of claim 1, wherein: the at least one file comprising at least one of audio, video, pictures, tables, documents, and web pages (¶ 0044 sentence 4: “In an embodiment, the first machine learning model 302 can comprise a first generative pre-trained transformer (GPT) model. GPT models are deep learning or neural network language models that are pre-trained on a large text corpus to generate text responses to input text prompts” (i.e., the “prompts” (the files) are “input text” (documents)); ¶ 0029 lines 14+: “prompts 101” (the files) “can comprise question-and-answer pairs and input query”, wherein according to ¶ 0042 lines 12+: “the input query can comprise” “For example, an audio” (comprise audio) “input may be converted to a text”).
Regarding claim 4, Qiao et al. do teach the apparatus of claim 1, wherein: the associated data further comprising at least one historical query request preceding the query request and the query result corresponding to the at least one historical query request (FIG. 6 lines 6-7: “For details, please see the preceding question and answer” (i.e., in obtaining a proper response to a query request, attention is directed to a “preceding” (historical) “question and answer” (query request and associated data)); e.g., ¶ 0023 lines 3+: “the input query” “I cannot activate my card. I am extremely disappointed” (a historical query request); ¶ 0025 lines 5+: “the example first response portion and example second response portion above can be combined to form the final response” (the query result corresponding to the historical query request) “This is not the service we intend to provide. We are sorry to know you are experiencing trouble. Please send us a direct message with your account number” (to be used as a new query request or “prompt” (association data) in response to the historical query and its query result)).
Regarding claim 8, Qiao et al. do teach the apparatus of claim 1, wherein: the large language model comprising any one of the following models: ChatGPT, GPT-1, GPT-2, GPT-3, GPT-4, BERT, and XLNet (¶ 0029 sentence 3: “During in-context learning, GPT model 103” (the large language model is a GPT model) “can receive” (is inputted) “prompts 101” (the associated data) “and input query 102” (and a query request); ¶ 0028 sentence 2: “alternative machine learning model examples such as” “BERT” (BERT is also utilized) “can be utilized”).
Regarding claim 9, Qiao et al. do teach a method (Title, Abstract)
performed by at least one processor of an electronic device (¶ 0033 sentences 1-2: “empathetic response system 301” (an electronic device) “can comprise a processor” (a processor) “The memory 308 can store computer-executable instructions which, upon execution by the processor”)
comprising:
receiving a query request (¶ 0029 sentence 3: “During in-context learning, GPT model 103” “can receive” (receiving) “prompts 101” “and input query 102” (a query request) “as input”; ¶ 0029 2nd column line 1: “input query 102 can comprise a question or statement for GPT model”);
determining associated data of the query request, the associated data including one or more question-answering pairs associated with the query request in a plurality of question-answering pairs, and summary information of one or more files associated with the query request in summary information of a plurality of files, wherein the plurality of question-answering pairs and the summary information of the plurality of files are stored in a vector database (¶ 0029 sentence 3: “During in-context learning, GPT model 103” “can receive” (determining) “prompts 101” (the associated data) “and input query 102” (of the query request); ¶ 0024 lines 14+: “the prompts” (associated data) “comprise” (include) “question-and-answer pairs” (one or more question-answering pairs associated with the “query 102” (query request)); ¶ 0049 sentence 1: “Prompt banks 318” (the associated data) “can comprise” (include) “one or more prompt banks” (one or more files or summary information) “corresponding to the one or more emotion classes. For example, given the emotion classes angry, disappointed, and grateful, prompt banks 318 can comprise an angry prompt bank, a disappointed prompt bank, and a grateful prompt bank”; ¶ 0033 last sentence: “the memory 308” (a vector database) “can store” (stores) “prompt banks 318” (the summary information comprising the “prompt banks”, i.e., the one or more files, each of which comprises question-answer pairs because each “prompt” “comprise[s] question-and-answer pairs” (¶ 0024)));
inputting the query request and the associated data into a large language model so that the large language model outputs an intermediate query result of the query request based on the content of the associated data (¶ 0029 sentence 3: “During in-context learning, GPT model 103” (a large language model) “can receive” (is inputted) “prompts 101” (the associated data) “and input query 102” (and a query request) “that GPT model 103 can use to generate an output 104” (to output an intermediate query result, also called a “second response portion” (¶ 0051 last sentence), because according to ¶ 0051 lines 22-24: “The selected prompts” (associated data) “and input query” (and the query request) “used by second GPT model” (inputted into the large language model) “to generate second response portion” (to output an intermediate query result))),
wherein the intermediate query result includes answers to questions queried by the query request (¶ 0029 2nd column lines 2-4: “GPT model 103 can utilize the prompts 101 as a set of example responses for questions and generate an output 104 to input query 102” (the “output 104” (intermediate query result) comprises “responses” (answers) to “questions” (input query request)); ¶ 0051 last sentence: the “final output 408” (answers to questions queried by the query request) comprises the “second response portion” (the intermediate query result) and “the first response portion”);
and determining a query result of the query request based on the intermediate query result and summary information of the one or more files in the associated data (¶ 0051 last sentence: “The first GPT model 406 then generates the first response portion” “based on the filtered input query” (using the query request) “and combination component 312 combines the first response portion and the second response portion” (and the intermediate query result obtained based on the associated data of the one or more files or summary information) “at 407 to generate final output 408” (to determine a query result))
comprising:
when one or more summary information in the summary information of the one or more files matching the intermediate query result, adding the files corresponding to the one or more summary information to the intermediate query result to obtain the query result of the query request (¶ 0050 sentence 1: “In-context learning component 316 can then concatenate” (adding) “the input query” (a resulting intermediate query result) “with the selected” (matched) “prompts” (summary information files); i.e., concatenating, e.g., “input query” plus “prompt[1]” plus “prompt[2]” amounts to “input query” plus “prompt[1]” (an intermediate query result) being added to “prompt[2]” (a matched summary information file); ¶ 0051 lines 22-24: “The selected prompts” (matched summary information or files) “and input query” (and the query request) “used by second GPT model” (inputted by adding into the large language model) “to generate second response portion” (to output an intermediate query result); ¶ 0049 last sentence: “number of selected prompts can comprise between one and twenty”).
Regarding claim 11, Qiao et al. do teach the method of claim 9, wherein: the at least one file comprising at least one of audio, video, pictures, tables, documents, and web pages (¶ 0044 sentence 4: “In an embodiment, the first machine learning model 302 can comprise a first generative pre-trained transformer (GPT) model. GPT models are deep learning or neural network language models that are pre-trained on a large text corpus to generate text responses to input text prompts” (i.e., the “prompts” (the files) are “input text” (documents)); ¶ 0029 lines 14+: “prompts 101” (the files) “can comprise question-and-answer pairs and input query”, wherein according to ¶ 0042 lines 12+: “the input query can comprise” “For example, an audio” (comprise audio) “input may be converted to a text”).
Regarding claim 12, Qiao et al. do teach the method of claim 9, wherein: the associated data further comprising at least one historical query request preceding the query request and the query result corresponding to the at least one historical query request (FIG. 6 lines 6-7: “For details, please see the preceding question and answer” (i.e., in obtaining a proper response to a query request, attention is directed to a “preceding” (historical) “question and answer” (query request and associated data)); e.g., ¶ 0023 lines 3+: “the input query” “I cannot activate my card. I am extremely disappointed” (a historical query request); ¶ 0025 lines 5+: “the example first response portion and example second response portion above can be combined to form the final response” (the query result corresponding to the historical query request) “This is not the service we intend to provide. We are sorry to know you are experiencing trouble. Please send us a direct message with your account number” (to be used as a new query request or “prompt” (association data) in response to the historical query and its query result)).
Regarding claim 14, Qiao et al. do teach a non-transitory computer-storage medium having stored thereon instructions that, when executed on an electronic device (¶ 0005 sentence 1: “According to another embodiment, a computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to …”; ¶ 0033 sentences 1-2: “empathetic response system 301” (an electronic device) “can comprise a processor” (a processor) “The memory 308 can store computer-executable instructions which, upon execution by the processor”),
cause the electronic device to perform the following process:
receiving a query request (¶ 0029 sentence 3: “During in-context learning, GPT model 103” “can receive” (receiving) “prompts 101” “and input query 102” (a query request) “as input”; ¶ 0029 2nd column line 1: “input query 102 can comprise a question or statement for GPT model”);
determining associated data of the query request, the associated data including one or more question-answering pairs associated with the query request in a plurality of question-answering pairs, and summary information of one or more files associated with the query request in summary information of a plurality of files, wherein the plurality of question-answering pairs and the summary information of the plurality of files are stored in a vector database (¶ 0029 sentence 3: “During in-context learning, GPT model 103” “can receive” (determining) “prompts 101” (the associated data) “and input query 102” (of the query request); ¶ 0024 lines 14+: “the prompts” (associated data) “comprise” (include) “question-and-answer pairs” (one or more question-answering pairs associated with the “query 102” (query request)); ¶ 0049 sentence 1: “Prompt banks 318” (the associated data) “can comprise” (include) “one or more prompt banks” (one or more files or summary information) “corresponding to the one or more emotion classes. For example, given the emotion classes angry, disappointed, and grateful, prompt banks 318 can comprise an angry prompt bank, a disappointed prompt bank, and a grateful prompt bank”; ¶ 0033 last sentence: “the memory 308” (a vector database) “can store” (stores) “prompt banks 318” (the summary information comprising the “prompt banks”, i.e., the one or more files, each of which comprises question-answer pairs because each “prompt” “comprise[s] question-and-answer pairs” (¶ 0024)));
inputting the query request and the associated data into a large language model so that the large language model outputs an intermediate query result of the query request based on the content of the associated data (¶ 0029 sentence 3: “During in-context learning, GPT model 103” (a large language model) “can receive” (is inputted) “prompts 101” (the associated data) “and input query 102” (and a query request) “that GPT model 103 can use to generate an output 104” (to output an intermediate query result, also called a “second response portion” (¶ 0051 last sentence), because according to ¶ 0051 lines 22-24: “The selected prompts” (associated data) “and input query” (and the query request) “used by second GPT model” (inputted into the large language model) “to generate second response portion” (to output an intermediate query result))),
wherein the intermediate query result includes answers to questions queried by the query request (¶ 0029 2nd column lines 2-4: “GPT model 103 can utilize the prompts 101 as a set of example responses for questions and generate an output 104 to input query 102” (the “output 104” (intermediate query result) comprises “responses” (answers) to “questions” (input query request)); ¶ 0051 last sentence: the “final output 408” (answers to questions queried by the query request) comprises the “second response portion” (the intermediate query result) and “the first response portion”);
and determining a query result of the query request based on the intermediate query result and summary information of the one or more files in the associated data (¶ 0051 last sentence: “The first GPT model 406 then generates the first response portion” “based on the filtered input query” (using the query request) “and combination component 312 combines the first response portion and the second response portion” (and the intermediate query result obtained based on the associated data of the one or more files or summary information) “at 407 to generate final output 408” (to determine a query result)),
comprising:
when one or more summary information in the summary information of the one or more files matching the intermediate query result, adding the files corresponding to the one or more summary information to the intermediate query result to obtain the query result of the query request (¶ 0050 sentence 1: “In-context learning component 316 can then concatenate” (adding) “the input query” (a resulting intermediate query result) “with the selected” (matched) “prompts” (summary information files); i.e., concatenating, e.g., “input query” plus “prompt[1]” plus “prompt[2]” amounts to “input query” plus “prompt[1]” (an intermediate query result) being added to “prompt[2]” (a matched summary information file); ¶ 0051 lines 22-24: “The selected prompts” (matched summary information or files) “and input query” (and the query request) “used by second GPT model” (inputted by adding into the large language model) “to generate second response portion” (to output an intermediate query result); ¶ 0049 last sentence: “number of selected prompts can comprise between one and twenty”).
Regarding claim 16, Qiao et al. do teach the non-transitory computer-readable storage medium of claim 14, wherein: the at least one file comprising at least one of audio, video, pictures, tables, documents, and web pages (¶ 0044 sentence 4: “In an embodiment, the first machine learning model 302 can comprise a first generative pre-trained transformer (GPT) model. GPT models are deep learning or neural network language models that are pre-trained on a large text corpus to generate text responses to input text prompts” (i.e., the “prompts” (the files) are “input text” (documents)); ¶ 0029 lines 14+: “prompts 101” (the files) “can comprise question-and-answer pairs and input query”, wherein according to ¶ 0042 lines 12+: “the input query can comprise” “For example, an audio” (comprise audio) “input may be converted to a text”).
Regarding claim 17, Qiao et al. do teach the non-transitory computer-readable storage medium of claim 14, wherein: the associated data further comprising at least one historical query request preceding the query request and the query result corresponding to the at least one historical query request (FIG. 6 lines 6-7: “For details, please see the preceding question and answer” (i.e., in obtaining a proper response to a query request, attention is directed to a “preceding” (historical) “question and answer” (query request and associated data)); e.g., ¶ 0023 lines 3+: “the input query” “I cannot activate my card. I am extremely disappointed” (a historical query request); ¶ 0025 lines 5+: “the example first response portion and example second response portion above can be combined to form the final response” (the query result corresponding to the historical query request) “This is not the service we intend to provide. We are sorry to know you are experiencing trouble. Please send us a direct message with your account number” (to be used as a new query request or “prompt” (association data) in response to the historical query and its query result)).
Regarding claim 20, Qiao et al. do teach the non-transitory computer-readable storage medium of claim 14, wherein: the large language model comprising any one of the following models: ChatGPT, GPT-1, GPT-2, GPT-3, GPT-4, BERT, and XLNet (¶ 0029 sentence 3: “During in-context learning, GPT model 103” (a large language model is a “GPT[-1-4]”) “can receive” (is inputted) “prompts 101” (the associated data) “and input query 102” (and a query request); ¶ 0028 sentence 2: “alternative machine learning model examples such as” “BERT” (BERT is also used) “can be utilized”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5-7, 13, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Qiao et al. in view of Wu et al. (CN114357120).
Regarding claim 5, Qiao et al. do teach the apparatus of claim 1, wherein:
determining associated data of the query request, comprising:
determining a first number of question-answering pairs in the plurality of question-answering pairs that are associated with a vector corresponding to the query request (¶ 0024 lines 10-12: “A number of” (a first number) “random prompts can be selected from a prompt bank” “For example, if the input query” (associated with a query request) “is classified as “angry”, then a number of prompts from an “angry” prompt bank” (associated with a vector corresponding to the query request) “can be selected” (is determined) “wherein the prompts comprise question-and-answer pairs” (of question-answering pairs)),
and a second number of summary information in summary information of the plurality of files that is associated with the vector corresponding to the query request (¶ 0052 sentence before last: “It should be appreciated that use of any number” (a second number of) “of emotion classes and corresponding prompt banks” (files or summary information) “is envisioned” (are determined); e.g., Fig. 5 shows three “PROMPT BANK[s]” (files or summary information), wherein each “PROMPT BANK” (file or summary information) comprises five question-answering pairs).
Qiao et al. do not specifically disclose:
determining at least one question-answering pair in the first number of question-answering pairs that satisfies a first condition as the one or more question-answering pair,
and the summary information in the second number of summary information that satisfies a second condition as the summary information of one or more files.
Wu et al. do teach:
determining at least one question-answering pair in the first number of question-answering pairs that satisfies a first condition as the one or more question-answering pair (Abstract lines 4-9: “calculating the similarity” (satisfying a first condition) “between user query information and each question and answer pair” (and a first number of question-answering pairs) “through a BM25 algorithm, and obtaining a first candidate question and answer pair” (to ultimately determine “a final candidate question and answer pair” (the question-answering pair (Abstract lines 4-6 from the bottom)))),
and the summary information in the second number of summary information that satisfies a second condition as the summary information of one or more files (Abstract lines 9-13: “calculating the similarity” (satisfying a second condition) “between the user query information and each question and answer pair document” (in a second number of summary information of one or more files) “through a max-pass algorithm to obtain a second candidate question and answer pair” (which is also used in determining the “final candidate question and answer pair” (the question-answering pair (Abstract lines 4-6 from the bottom)))).
It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the techniques used for determining “a final candidate question and answer pair” associated with “user query information” of Wu et al. into the “query” “response” using “question-answer” “prompts” of Qiao et al., because doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Qiao et al. to “calculate the relevance ranking score between each question-and-answer tuple” “and the user query” as disclosed in Wu et al. ¶ n0124, which further enhances the method by incorporating “semantic” similarity determination as well, as disclosed in Wu et al. ¶ n0118.
Regarding claim 6, Qiao et al. do not specifically disclose the apparatus of claim 5, wherein: the first condition comprising: a matching degree between the at least one question-answering pair and the query request is greater than a first matching degree; the second condition comprising: a matching degree between the summary information and the query request is greater than a second matching degree, and/or the information density of the summary information is greater than a preset value.
Wu et al. do teach:
the first condition comprising: a matching degree between the at least one question-answering pair and the query request is greater than a first matching degree (¶ n0114: “use the BM25 algorithm” (the first condition) “calculate the similarity” (a matching degree) “between the user query Q” (between the query request) “and all existing FAQ pairs” (and question-answering pairs) “and sort them” “Select the top k FAQ pairs with the highest similarity” (which are the “highest”, i.e., higher than the similarity of any other “pair”, e.g., a first matching degree associated with the lowest-similarity pair));
the second condition comprising: a matching degree between the summary information and the query request is greater than a second matching degree, and/or the information density of the summary information is greater than a preset value (¶ n0127: “the specific calculation formula for the similarity maxpsg(Q,d)” (the scheme used for the second condition) “between the user query information Q” (a matching degree between the query request) “and each question-answer document d” (and the summary information), which is further based on “the largest relevance score” (¶ n0126 line 4), i.e., the “largest” is higher than any other “similarity” (matching degree), e.g., higher than the lowest “relevance score” (a second matching degree); ¶ n0009: “The maximum-passage algorithm” (the scheme used for the second condition) “is used to calculate the similarity” (determines a matching degree) “between the user query information” (between the query request) “and each question-answer pair” (i.e., corresponding to the “question and answer pair document” (summary information; Abstract line 11)) “The first candidate question-answer pair sequence is sorted from high to low” (greater than a “low” (a second matching degree)) “according to the obtained maxpsg similarity to obtain the second candidate question-answer pair sequence”).
For obviousness to combine Qiao et al. and Wu et al. see claim 5.
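Examiner’s note: for illustration only, the BM25-based top-k FAQ selection described by Wu et al. (¶ n0114) can be sketched as follows. The FAQ pairs, query, and parameter values below are hypothetical examples, not drawn from the reference; only the general BM25 ranking scheme reflects the cited disclosure.

```python
import math

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document against the query terms with the classic BM25 formula."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency of each query term across the collection
    df = {t: sum(1 for d in docs if t in d) for t in query_terms}
    scores = []
    for d in docs:
        s = 0.0
        for t in query_terms:
            if df[t] == 0:
                continue  # a term absent from the whole collection contributes nothing
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            tf = d.count(t)
            s += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# Hypothetical tokenized FAQ pairs and user query; select the top-k most similar pairs,
# mirroring Wu's "sort them" and "Select the top k FAQ pairs with the highest similarity".
faqs = [["how", "activate", "card"], ["reset", "password"], ["card", "activation", "failed"]]
query = ["cannot", "activate", "card"]
scores = bm25_scores(query, faqs)
top_k = sorted(range(len(faqs)), key=lambda i: scores[i], reverse=True)[:2]
```

In this sketch the first condition corresponds to membership in `top_k`: a pair is retained only if its similarity exceeds that of the pairs ranked below the cutoff.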
Regarding claim 7, Qiao et al. do teach the apparatus of claim 5, wherein: the summary information of the plurality of files is determined in the following ways:
determining the summary information of text file by extracting summary from text in text file (¶ 0029 lines 14+: “prompts 101” (the plurality of files or summary information) “can comprise question-and-answer pairs and input query”, wherein according to ¶ 0042 lines 12+: “and the input query can comprise any textual statement” (could be text files) “or other form of statement converted to text form”);
and determining the summary information of non-text file by extracting summary from text description of the non-text file (¶ 0029 lines 14+: “prompts 101” (the files) “can comprise question-and-answer pairs and input query”, wherein according to ¶ 0042 lines 12+: “the input query can comprise” “For example, an audio” (is non-text) “input may be converted to a text” (from which a text description is extracted) “statement”).
Regarding claim 13, Qiao et al. do teach the method of claim 9, wherein:
determining associated data of the query request, comprising:
determining a first number of question-answering pairs in the plurality of question-answering pairs that are associated with a vector corresponding to the query request (¶ 0024 lines 10-12: “A number of” (a first number) “random prompts can be selected from a prompt bank” “For example, if the input query” (associated with a query request) “is classified as “angry”, then a number of prompts from an “angry” prompt bank” (associated with a vector corresponding to the query request) “can be selected” (is determined) “wherein the prompts comprise question-and-answer pairs” (of question-answering pairs)),
and a second number of summary information in summary information of the plurality of files that is associated with the vector corresponding to the query request (¶ 0052 sentence before last: “It should be appreciated that use of any number” (a second number of) “of emotion classes and corresponding prompt banks” (files or summary information) “is envisioned” (are determined); e.g., Fig. 5 shows three “PROMPT BANK[s]” (files or summary information), wherein each “PROMPT BANK” (file or summary information) comprises five question-answering pairs).
Qiao et al. do not specifically disclose:
and determining at least one question-answering pair in the first number of question-answering pairs that satisfies a first condition as the one or more question-answering pair,
and the summary information in the second number of summary information that satisfies a second condition as the summary information of one or more files.
Wu et al. do teach:
determining at least one question-answering pair in the first number of question-answering pairs that satisfies a first condition as the one or more question-answering pair (Abstract lines 4-9: “calculating the similarity” (satisfying a first condition) “between user query information and each question and answer pair” (and a first number of question-answering pairs) “through a BM25 algorithm, and obtaining a first candidate question and answer pair” (to ultimately determine “a final candidate question and answer pair” (the question-answering pair (Abstract lines 4-6 from the bottom)))),
and the summary information in the second number of summary information that satisfies a second condition as the summary information of one or more files (Abstract lines 9-13: “calculating the similarity” (satisfying a second condition) “between the user query information and each question and answer pair document” (in a second number of summary information of one or more files) “through a max-pass algorithm to obtain a second candidate question and answer pair” (which is also used in determining the “final candidate question and answer pair” (the question-answering pair (Abstract lines 4-6 from the bottom)))).
It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the techniques used for determining “a final candidate question and answer pair” associated with “user query information” of Wu et al. into the “query” “response” using “question-answer” “prompts” of Qiao et al., because doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Qiao et al. to “calculate the relevance ranking score between each question-and-answer tuple” “and the user query” as disclosed in Wu et al. ¶ n0124, which further enhances the method by incorporating “semantic” similarity determination as well, as disclosed in Wu et al. ¶ n0118.
Regarding claim 18, Qiao et al. do teach the non-transitory computer-readable storage medium of claim 14, wherein:
determining associated data of the query request, comprising:
determining a first number of question-answering pairs in the plurality of question-answering pairs that are associated with a vector corresponding to the query request (¶ 0024 lines 10-12: “A number of” (a first number) “random prompts can be selected from a prompt bank” “For example, if the input query” (associated with a query request) “is classified as “angry”, then a number of prompts from an “angry” prompt bank” (associated with a vector corresponding to the query request) “can be selected” (is determined) “wherein the prompts comprise question-and-answer pairs” (of question-answering pairs)),
and a second number of summary information in summary information of the plurality of files that is associated with the vector corresponding to the query request (¶ 0052 sentence before last: “It should be appreciated that use of any number” (a second number of) “of emotion classes and corresponding prompt banks” (files or summary information) “is envisioned” (are determined); e.g., Fig. 5 shows three “PROMPT BANK[s]” (files or summary information), wherein each “PROMPT BANK” (file or summary information) comprises five question-answering pairs).
Qiao et al. do not specifically disclose:
determining at least one question-answering pair in the first number of question-answering pairs that satisfies a first condition as the one or more question-answering pair,
and the summary information in the second number of summary information that satisfies a second condition as the summary information of one or more files.
Wu et al. do teach:
determining at least one question-answering pair in the first number of question-answering pairs that satisfies a first condition as the one or more question-answering pair (Abstract lines 4-9: “calculating the similarity” (satisfying a first condition) “between user query information and each question and answer pair” (and a first number of question-answering pairs) “through a BM25 algorithm, and obtaining a first candidate question and answer pair” (to ultimately determine “a final candidate question and answer pair” (the question-answering pair (Abstract lines 4-6 from the bottom)))),
and the summary information in the second number of summary information that satisfies a second condition as the summary information of one or more files (Abstract lines 9-13: “calculating the similarity” (satisfying a second condition) “between the user query information and each question and answer pair document” (in a second number of summary information of one or more files) “through a max-pass algorithm to obtain a second candidate question and answer pair” (which is also used in determining the “final candidate question and answer pair” (the question-answering pair (Abstract lines 4-6 from the bottom)))).
It would therefore have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the techniques used for determining “a final candidate question and answer pair” associated with “user query information” of Wu et al. into the “query” “response” using “question-answer” “prompts” of Qiao et al., because doing so would enable the combined systems and their associated methods to perform in combination as they do separately, and would further enable Qiao et al. to “calculate the relevance ranking score between each question-and-answer tuple” “and the user query” as disclosed in Wu et al. ¶ n0124, which further enhances the method by incorporating “semantic” similarity determination as well, as disclosed in Wu et al. ¶ n0118.
Regarding claim 19, Qiao et al. do not specifically disclose the non-transitory computer-readable storage medium of claim 18, wherein: the first condition comprising: a matching degree between the at least one question-answering pair and the query request is greater than a first matching degree; the second condition comprising: a matching degree between the summary information and the query request is greater than a second matching degree, and/or the information density of the summary information is greater than a preset value.
Wu et al. do teach:
the first condition comprising: a matching degree between the at least one question-answering pair and the query request is greater than a first matching degree (¶ n0114: “use the BM25 algorithm” (the first condition) “calculate the similarity” (a matching degree) “between the user query Q” (between the query request) “and all existing FAQ pairs” (and question-answering pairs) “and sort them” “Select the top k FAQ pairs with the highest similarity” (which are the “highest”, i.e., higher than the similarity of any other “pair”, e.g., a first matching degree associated with the lowest-similarity pair));
the second condition comprising: a matching degree between the summary information and the query request is greater than a second matching degree, and/or the information density of the summary information is greater than a preset value (¶ n0127: “the specific calculation formula for the similarity maxpsg(Q,d)” (the scheme used for the second condition) “between the user query information Q” (a matching degree between the query request) “and each question-answer document d” (and the summary information), which is further based on “the largest relevance score” (¶ n0126 line 4), i.e., the “largest” is higher than any other “similarity” (matching degree), e.g., higher than the lowest or anything less than the “largest” “relevance score” (a second matching degree); ¶ n0009: “The maximum-passage algorithm” (the scheme used for the second condition) “is used to calculate the similarity” (determines a matching degree) “between the user query information” (between the query request) “and each question-answer pair” (i.e., corresponding to the “question and answer pair document” (summary information; Abstract line 11)) “The first candidate question-answer pair sequence is sorted from high to low” (greater than a “low” (a second matching degree)) “according to the obtained maxpsg similarity to obtain the second candidate question-answer pair sequence”).
For obviousness to combine Qiao et al. and Wu et al. see claim 18.
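Examiner’s note: for illustration only, the maximum-passage similarity of Wu et al. (¶¶ n0009, n0126-n0127), under which a document scores as high as its best passage, can be sketched as follows. The documents, passage splits, and the simple term-overlap relevance score below are hypothetical examples, not drawn from the reference; only the take-the-largest-passage-score scheme reflects the cited disclosure.

```python
def term_overlap(query_terms, passage_terms):
    """Toy relevance score: the number of terms shared by the query and the passage."""
    return len(set(query_terms) & set(passage_terms))

def maxpsg_similarity(query_terms, doc_passages):
    """Maximum-passage similarity: the document's score is its largest per-passage score."""
    return max(term_overlap(query_terms, p) for p in doc_passages)

# Hypothetical question-answer documents, each split into tokenized passages.
doc_a = [["reset", "your", "password"], ["contact", "support"]]
doc_b = [["card", "activation", "steps"], ["activate", "card", "online"]]
query = ["activate", "card"]
sims = [maxpsg_similarity(query, d) for d in (doc_a, doc_b)]
```

Sorting documents by `sims` from high to low then yields the second candidate sequence, with the second condition corresponding to placement above the lower-ranked entries.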
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARZAD KAZEMINEZHAD, whose telephone number is (571) 270-5860. The examiner can normally be reached from 10:30 am to 11:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D. Shah can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Farzad Kazeminezhad/
Art Unit 2653
February 21, 2026.