Detailed Action
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending; claims 1 and 19-20 are independent.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention. Claim 8 recites “providing, to the user, the one or more supplemental queries associated with the stored query”. It is unclear to which queries “the one or more supplemental queries” refers.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 1-20 of the instant application are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,282,501 (App. No. 18/336,618). Although the conflicting claims are not identical, they are not patentably distinct from each other because the claims of the instant application are covered by the claims recited in Patent No. 12,282,501 (App. No. 18/336,618), as shown below for the independent claims. Dependent claims not included in Patent No. 12,282,501 (App. No. 18/336,618) would be rejected over the references applied below.
Instant Application 19/172,308 vs. U.S. Patent No. 12,282,501 (App. No. 18/336,618)
1-A system for automatically generating responses to user queries, the system comprising: one or more processors; and a memory coupled to the one or more processors comprising instructions executable by the one or more processors, the processors operable when executing the instructions to:
1-A system for automatically generating responses to user queries, the system comprising: one or more processors; and a memory coupled to the one or more processors comprising instructions executable by the one or more processors, the processors operable when executing the instructions to:
receive a query from a user, wherein the query is associated with a current case;
receive a query from a user, wherein the query is associated with a current case;
determine a first set of similarity scores between the query and a plurality of stored queries stored in a data store, wherein each of the plurality of stored queries is associated with one or more precedent cases;
determine a first set of similarity scores between the query and a plurality of stored queries stored in a data store, wherein each of the plurality of stored queries is associated with one or more supplemental queries, and wherein each of the plurality of stored queries is associated with one or more precedent cases;
determine whether a first similarity score of the first set of similarity scores meets a first threshold, wherein the first similarity score is associated with a first stored query of the plurality of stored queries;
determine whether a first similarity score of the first set of similarity scores meets a first threshold, wherein the first similarity score is associated with a first stored query of the plurality of stored queries; and
determine whether a second similarity score of the first set of similarity scores meets the first threshold, wherein the second similarity score is associated with a second stored query of the plurality of stored queries; and
in accordance with a determination that the first similarity score and the second similarity score meet the first threshold:
in accordance with a determination that the first similarity score meets the first threshold:
obtain a set of case information for the current case;
obtain a set of case information for the current case, wherein the set of case information is responsive to one or more supplemental queries of the first stored query;
retrieve, from the data store, at least one set of case information for at least one precedent case associated with the first stored query;
retrieve, from the data store, at least one set of case information for at least one precedent case associated with the first stored query;
retrieve, from the data store, at least one set of case information for at least one precedent case associated with the second stored query;
determine a second set of similarity scores between the current case and the at least one precedent case based on the set of case information for the current case and the at least one set of case information for the at least one precedent case;
determine whether a second similarity score of the second set of similarity scores meets a second threshold; and
generate a response to the query based on a combination of the at least one set of case information for at least one precedent case associated with the first stored query and the at least one set of case information for at least one precedent case associated with the second stored query.
upon determining that the second similarity score meets the second threshold, generate a response to the query based on one of the at least one precedent case corresponding to the second similarity score.
19-A method for automatically generating responses to user queries, wherein the method is performed by a system comprising one or more processors, the method comprising:
A method for automatically generating responses to user queries, wherein the method is performed by a system comprising one or more processors, the method comprising:
receiving a query from a user, wherein the query is associated with a current case;
receiving a query from a user, wherein the query is associated with a current case;
determining a first set of similarity scores between the query and a plurality of stored queries stored in a data store, wherein each of the plurality of stored queries is associated with one or more supplemental queries, and wherein each of the plurality of stored queries is associated with one or more precedent cases;
determining a first set of similarity scores between the query and a plurality of stored queries stored in a data store, wherein each of the plurality of stored queries is associated with one or more supplemental queries, and wherein each of the plurality of stored queries is associated with one or more precedent cases;
determining that a first similarity score of the first set of similarity scores meets a first threshold; and
determining that a first similarity score of the first set of similarity scores meets a first threshold; and
in accordance with the determination that the first similarity score meets the first threshold:
in accordance with the determination that the first similarity score meets the first threshold:
obtaining a set of case information for the current case, wherein the set of case information is responsive to the one or more supplemental queries of the stored query;
obtaining a set of case information for the current case, wherein the set of case information is responsive to the one or more supplemental queries of the stored query;
retrieving, from the data store, at least one set of case information for at least one precedent case associated with the stored query;
retrieving, from the data store, at least one set of case information for at least one precedent case associated with the stored query;
determining a second set of similarity scores between the current case and the at least one precedent case based on the set of case information for the current case and the at least one set of case information for the at least one precedent case;
determining a second set of similarity scores between the current case and the at least one precedent case based on the set of case information for the current case and the at least one set of case information for the at least one precedent case;
determining that a second similarity score of the second set of similarity scores meets a second threshold; and
determining that a second similarity score of the second set of similarity scores meets a second threshold; and
upon determining that the second similarity score meets the second threshold, generating a response to the query based on one of the at least one precedent case corresponding to the second similarity score.
upon determining that the second similarity score meets the second threshold, generating a response to the query based on one of the at least one precedent case corresponding to the second similarity score.
20-A non-transitory computer-readable storage medium storing instructions for automatically generating responses to user queries, the instructions operable when executed by one or more processors of a system to cause the system to:
20-A non-transitory computer-readable storage medium storing instructions for automatically generating responses to user queries, the instructions operable when executed by one or more processors of a system to cause the system to:
receive a query from a user, wherein the query is associated with a current case;
receive a query from a user, wherein the query is associated with a current case;
determine a first set of similarity scores between the query and a plurality of stored queries stored in a data store, wherein each of the plurality of stored queries is associated with one or more supplemental queries, and wherein each of the plurality of stored queries is associated with one or more precedent cases;
determine a first set of similarity scores between the query and a plurality of stored queries stored in a data store, wherein each of the plurality of stored queries is associated with one or more supplemental queries, and wherein each of the plurality of stored queries is associated with one or more precedent cases;
determine whether a first similarity score of the first set of similarity scores meets a first threshold; and
determine whether a first similarity score of the first set of similarity scores meets a first threshold; and
in accordance with a determination that the first similarity score meets the first threshold:
in accordance with a determination that the first similarity score meets the first threshold:
obtain a set of case information for the current case, wherein the set of case information is responsive to the one or more supplemental queries of the stored query;
obtain a set of case information for the current case, wherein the set of case information is responsive to the one or more supplemental queries of the stored query;
retrieve, from the data store, at least one set of case information for at least one precedent case associated with the stored query;
retrieve, from the data store, at least one set of case information for at least one precedent case associated with the stored query;
determine a second set of similarity scores between the current case and the at least one precedent case based on the set of case information for the current case and the at least one set of case information for the at least one precedent case;
determine a second set of similarity scores between the current case and the at least one precedent case based on the set of case information for the current case and the at least one set of case information for the at least one precedent case;
determine whether a second similarity score of the second set of similarity scores meets a second threshold; and
determine whether a second similarity score of the second set of similarity scores meets a second threshold; and
upon determining that the second similarity score meets the second threshold, generate a response to the query based on one of the at least one precedent case corresponding to the second similarity score.
upon determining that the second similarity score meets the second threshold, generate a response to the query based on one of the at least one precedent case corresponding to the second similarity score.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 102 that forms the basis for all the rejections under this section made in this Office Action:
A person shall be entitled to a patent unless—
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2 and 9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Rozen et al., “Answering Product-Questions by Utilizing Questions from Other Contextually Similar Products” (Rozen).
Rozen teaches:
Claim 1. A system for automatically generating responses to user queries, the system comprising: one or more processors; and a memory coupled to the one or more processors comprising instructions executable by the one or more processors, the processors operable when executing the instructions to:
receive a query from a user, wherein the query is associated with a current case; (a user’s query, e.g. a target question, about a current product/case is received; based on the user query, similar queries and answers about other similar products/cases are retrieved: sec. 3.1-3.2, “Given a target record rt, and a corpus of product-question-answer records C, our first goal is to retrieve all records with a question having the same intent as of qt…we index the records in C by creating embedding vectors for their questions, using a pre-trained encoder…For retrieval…we similarly embed the question qt into vector et. We then use a fast Approximate K Nearest Neighbors (AKNN) search to retrieve K records, with the most similar questions, based on the cosine similarity between et and the embedding vectors of the questions in C. We denote the set of retrieved siblings of rt by S(rt)… The retrieved sibling records are those with the most similar questions to the target question”)
determine a first set of similarity scores between the query and a plurality of stored queries stored in a data store, wherein each of the plurality of stored queries is associated with one or more precedent cases; (see above, “to retrieve K records, with the most similar questions, based on the cosine similarity between et and the embedding vectors of the questions in C”)
determine whether a first similarity score of the first set of similarity scores meets a first threshold, wherein the first similarity score is associated with a first stored query of the plurality of stored queries; (see above, “to retrieve K records, with the most similar questions, based on the cosine similarity between et and the embedding vectors of the questions in C”)
determine whether a second similarity score of the first set of similarity scores meets the first threshold, wherein the second similarity score is associated with a second stored query of the plurality of stored queries; and (see above, the retrieved “K records, with the most similar questions” include a first and a second stored query)
in accordance with a determination that the first similarity score and the second similarity score meet the first threshold:
obtain a set of case information for the current case; (sec. 3.3, fig. 2, the product/case textual content of the target question is obtained: “The target question-product pair (qt; pt) and the twin question product pair (qj ; pj) are encoded using a transformer encoder, while the questions attend the product text. The texts of both products are coupled and also encoded, allowing the two product text attend each other. The three output vectors are then concatenated and classified using an MLP classifier”)
retrieve, from the data store, at least one set of case information for at least one precedent case associated with the first stored query; (sec. 3.3, fig. 2, the product/case textual content of a twin question-product pair is obtained: “The target question-product pair (qt; pt) and the twin question product pair (qj ; pj) are encoded using a transformer encoder, while the questions attend the product text. The texts of both products are coupled and also encoded, allowing the two product text attend each other. The three output vectors are then concatenated and classified using an MLP classifier”)
retrieve, from the data store, at least one set of case information for at least one precedent case associated with the second stored query; (see above, the same is applied to each twin question-product pair)
generate a response to the query based on a combination of the at least one set of case information for at least one precedent case associated with the first stored query and the at least one set of case information for at least one precedent case associated with the second stored query. (sec. 3, a response is generated by predicting a final answer: “we introduce the Similarity-Based Answer-prediction (SimBA) method for predicting the answer for a product question, based on the answers for other similar product questions…The CPS similarity score is used to weight the twins by considering them as voters, applying a mixture-of-experts model over their answers for the final answer prediction (Figure 1, stage 4)”; sec. 3.4)
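For illustration only, the cosine-similarity retrieval quoted above (embed the target question, score it against the stored question embeddings, and keep the K most similar that meet a threshold) can be sketched as follows. This sketch is not part of the record: the function names, the toy two-dimensional embeddings, and the exact linear scan standing in for Rozen's approximate K-nearest-neighbor (AKNN) search are all assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve_similar_queries(query_vec, stored, k=2, threshold=0.8):
    """Score the user's query embedding against each stored query embedding
    and return the top-k (query id, score) pairs meeting the threshold.
    An exact scan is used here in place of Rozen's AKNN search."""
    scored = [(qid, cosine(query_vec, vec)) for qid, vec in stored.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [(qid, score) for qid, score in scored[:k] if score >= threshold]
```

In Rozen the embeddings come from a pre-trained sentence encoder; the toy vectors here merely illustrate the scoring and thresholding recited in the claim mapping.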
Claim 2. The system of claim 1, wherein the response is generated based on an output from an artificial intelligence model that is trained on the at least one set of case information for the at least one precedent case. (a trained model is used for generating an answer: sec. 4, “We introduce two new datasets to experiment with our answer prediction approach: 1) The Amazon Product Question Similarity (Amazon-PQSim) dataset which is used to train our Q2Q model; 2) The Amazon Product Question Answers (Amazon-PQA) dataset of product related Q&As, used for training the SimBA model”; sec. 5.1, “For our Q2Q model, we apply a standard pretrained RoBERTa (Liu et al., 2019) classifier. Specifically, we use Hugging-Face base-uncased pre-trained model and fine-tune it for the classification task on our Q2Q dataset, while splitting the data into train, dev and test sets with 80%-10%-10% partition, respectively”; sec. 5.2-5.3)
Claim 9. The system of claim 1, wherein obtaining the set of case information for the current case comprises automatically retrieving data on the user or the current case from the data store. (sec. 3.3, fig. 2, the product/case textual content of the target question is obtained automatically: “The target question-product pair (qt; pt) and the twin question product pair (qj ; pj) are encoded using a transformer encoder, while the questions attend the product text. The texts of both products are coupled and also encoded, allowing the two product text attend each other. The three output vectors are then concatenated and classified using an MLP classifier”)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 10-18 are rejected under 35 U.S.C. 103 as being unpatentable over Rozen as applied to claim 1 above, in view of Tian et al., “Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback” (Tian).
Claim 10. Rozen taught the system of claim 1; Rozen did not specifically disclose, but Tian discloses, the response comprises a confidence score on an accuracy of the response with respect to the query (Tian, Abs., wherein a typical prediction system is able to provide a confidence score for a generated answer: a “trustworthy real-world prediction system should produce well-calibrated confidence scores; that is, its confidence in an answer should be indicative of the likelihood that the answer is correct, enabling deferral to an expert in cases of low-confidence predictions… the most widely used LMs are fine-tuned with reinforcement learning from human feedback (RLHF-LMs)”; p. 2, “we pay particular attention to prompts that elicit verbalized probabilities, i.e., the model expresses its confidence in token-space, as either numerical probabilities or another linguistic expression of uncertainty”)
Rozen, sec. 3.4, discloses generating a response by applying weighted confidence scores: “A mixture of experts is a widely-used method to combine the outputs of several classifiers by associating a weighted confidence score with each classifier (Jacobs et al., 1991). In our setting, experts are individual twins that lend support for or against a particular answer for a question. Each twin is weighted by its contextual similarity to the target record rt, as predicted by the CPS model”. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the applied references to disclose that the response comprises a confidence score on an accuracy of the response with respect to the query, because doing so would further provide a machine learning model that generates a measure of the accuracy of the generated result, as instructed by the user.
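The mixture-of-experts passage quoted from Rozen sec. 3.4 can be sketched, purely for illustration, as a similarity-weighted vote whose normalized winning weight serves as the kind of confidence score Tian discusses. The function name and the (answer, similarity) data layout are assumptions, not code from either reference.

```python
def predict_answer(twins):
    """Similarity-weighted vote over the answers of retrieved twin records
    (a simple mixture of experts): each twin votes for its answer with a
    weight equal to its contextual similarity to the target record.
    Returns (winning answer, normalized winning weight as a confidence)."""
    totals = {}
    for answer, similarity in twins:
        totals[answer] = totals.get(answer, 0.0) + similarity
    grand_total = sum(totals.values())
    best = max(totals, key=totals.get)
    return best, totals[best] / grand_total
```

For example, two twins answering “no” with similarities 0.9 and 0.8 outvote one twin answering “yes” with similarity 0.3, yielding the answer “no” with confidence 1.7/2.0.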
Claim 11. The system of claim 10, wherein the processors are further operable when executing the instructions to:
determine whether the confidence score is below a predetermined confidence threshold, wherein, in accordance with determining that the confidence score is below the predetermined confidence threshold, the response includes a recommendation for the user to consult with a subject matter expert. (Tian, a generated confidence score as instructed can be interpreted as a recommendation, or a text recommendation can further be included if instructed via a prompt: “trustworthy real-world prediction system should produce well-calibrated confidence scores; that is, its confidence in an answer should be indicative of the likelihood that the answer is correct, enabling deferral to an expert in cases of low-confidence predictions… the most widely used LMs are fine-tuned with reinforcement learning from human feedback (RLHF-LMs)”)
Claim 12. The system of claim 1, wherein the response comprises a rationale for the response. (Tian, a prompt as shown in p. 9 can be used to instruct the model to include its thought process: “Provide your best guess for the following question. Before giving your answer, provide a step-by-step explanation of your thought process…”)
Claim 13. The system of claim 1, wherein the processors are further operable when executing the instructions to:
receive feedback from the user about the response to the query; and update, based on the feedback, a first algorithm for determining the first set of similarity scores. (Rozen, table 3, sec. 5.1, wherein various similarity algorithms are compared: “We compare the performance of the Q2Q similarity classifier with several unsupervised baselines, namely: (a) Jaccard similarity, (b) cosine similarity over USE embedding, and (c) cosine similarity over RoBERTa embedding”; Tian, Abs., wherein “the most widely-used LMs are fine-tuned with reinforcement learning from human feedback (RLHF-LMs)” suggests using human feedback for fine-tuning the similarity algorithm)
Claim 14. The system of claim 1, wherein the processors are further operable when executing the instructions to:
prior to generating the response to the user, receive, from the user, a preliminary response to the query, and wherein the generated response comprises an agreement or disagreement on an accuracy of the preliminary response and a rationale for the agreement or disagreement. (Tian, a prompt as shown in p. 9 can be used to instruct the model to include its thought process: “Provide your best guess for the following question. Before giving your answer, provide a step-by-step explanation of your thought process…” with respect to provided question and proposed response: “Question: ${QUESTION}\nProposed Answer: ${ANSWER}\nIs the proposed answer:\n\t(A) True or\n\t(B) False?\n The proposed answer is:)”)
Claim 15. The system of claim 1, wherein the response comprises a reference to the one of the at least one precedent case. (Rozen, wherein in table 1 the predicted answer is “no”; however, similar products/cases to the current product can simply be used as output based on an instruction provided as a prompt in Tian p. 9)
Claim 16. The system of claim 1, wherein the combination of the at least one set of case information for at least one precedent case associated with the first stored query and the at least one set of case information for at least one precedent case associated with the second stored query comprises a weighted average. (Note that a mathematical calculation such as calculating “a weighted average” is an available tool to be used as needed for obtaining a desired result: Rozen, sec. 3.4, “A mixture of experts is a widely-used method to combine the outputs of several classifiers by associating a weighted confidence score with each classifier (Jacobs et al., 1991). In our setting, experts are individual twins that lend support for or against a particular answer for a question. Each twin is weighted by its contextual similarity to the target record rt, as predicted by the CPS model”)
Claim 17. The system of claim 16, wherein:
a first weight for the weighted average is assigned to the at least one set of case information for at least one precedent case associated with the first stored query based on a similarity between the at least one set of case information for at least one precedent case associated with the first stored query and the set of case information for the current case; and (Note that mathematical calculation such as calculating “a weighted average” is an available tool to be used as needed: Rozen, sec. 3.3, fig. 2, the product/case textual content of a twin question-product pair is obtained: “The target question-product pair (qt; pt) and the twin question product pair (qj ; pj) are encoded using a transformer encoder, while the questions attend the product text. The texts of both products are coupled and also encoded, allowing the two product text attend each other. The three output vectors are then concatenated and classified using an MLP classifier”; sec. 3.4, “A mixture of experts is a widely-used method to combine the outputs of several classifiers by associating a weighted confidence score with each classifier (Jacobs et al., 1991). In our setting, experts are individual twins that lend support for or against a particular answer for a question. Each twin is weighted by its contextual similarity to the target record rt, as predicted by the CPS model”)
a second weight for the weighted average is assigned to the at least one set of case information for at least one precedent case associated with the second stored query based on a similarity between the at least one set of case information for at least one precedent case associated with the second stored query and the set of case information for the current case. (Note that mathematical calculation such as calculating “a weighted average” is an available tool to be used as needed: Rozen, sec. 3.3, fig. 2, the product/case textual content of a twin question-product pair is obtained: “The target question-product pair (qt; pt) and the twin question product pair (qj ; pj) are encoded using a transformer encoder, while the questions attend the product text. The texts of both products are coupled and also encoded, allowing the two product text attend each other. The three output vectors are then concatenated and classified using an MLP classifier”; sec. 3.4, “A mixture of experts is a widely-used method to combine the outputs of several classifiers by associating a weighted confidence score with each classifier (Jacobs et al., 1991). In our setting, experts are individual twins that lend support for or against a particular answer for a question. Each twin is weighted by its contextual similarity to the target record rt, as predicted by the CPS model”)
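The similarity-derived weighting addressed in claims 16-17 amounts to an ordinary weighted average. As an illustrative sketch only (the function name and the example values are invented, not drawn from either reference), combining per-precedent values with weights proportional to each precedent's similarity to the current case:

```python
def weighted_average(values, weights):
    """Weighted average of per-precedent case-information values, where each
    weight is derived from that precedent's similarity to the current case."""
    total_weight = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_weight
```

A precedent three times as similar to the current case contributes three times the weight of the other, so weighted_average([1.0, 3.0], [3.0, 1.0]) pulls the result toward the first value.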
Claim 18. The system of claim 1, wherein generating the response comprises populating a template response based on the combination of the at least one set of case information for at least one precedent case associated with the first stored query and the at least one set of case information for at least one precedent case associated with the second stored query. (Note that a response generated by an algorithm can be generated as instructed by a prompt with predefined content, as in Tian p. 9; Rozen, in table 1 the predicted answer is “no”; however, similar products/cases to the current product can simply be used as output: “Answer prediction example based on similar questions asked about similar products. The answer for all contextually-similar products is ‘no’ therefore we predict the answer ‘no’ for the target question”)
Claims 3-7 would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, after overcoming the double patenting rejections by filing a timely terminal disclaimer as noted above.
Independent claims 19-20 would be allowable after overcoming the double patenting rejections.
Conclusion
The prior art made of record in PTO-326 and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHSEN ALMANI whose telephone number is (571)270-7722. The examiner can normally be reached on M-F, 9:00 to 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ann J. Lo can be reached on 571-272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only.
For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHSEN ALMANI/Primary Examiner, Art Unit 2159