DETAILED ACTION
This communication is responsive to the instant application filed on 09/10/2022.
Claims 1, 8, and 15 are independent claims.
Claims 1-20 are pending in this application and are presented for examination on the merits.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/10/2022 has been considered and recorded. The submission is in compliance with the provisions of 37 CFR § 1.97. See form PTO-1449 signed and attached hereto.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Guru et al., “Cross Domain Answering FAQ Chatbot” (NPL), published Mar. 04-05, 2022 (hereinafter “Guru”), and further in view of Kallepalli et al., US Pub. No. 2024/0037327 A1 (hereinafter “Kallepalli”).
Regarding claim 1, Guru teaches: a computer-based method of enhancing dialogue management systems (e.g., chatbots, see Page 1: Col. 1: Abstract and Part I. Introduction) by enriching contextual data using fact fetchers (e.g., training datasets, see Page 1: Part II. Methodology, Section A; and Page 2: Col. 2, Section D. Chatbot Model: e.g., “training input…”, wherein the training is interpreted as the fetchers), the method comprising:
automatically intercepting a received query sent to a dialogue management system (see Fig. 1 and Col. 2, 2nd paragraph via “input” either text and/or voice; and part II. Methodology, section A, e.g., “each query that the chatbot receives”);
automatically tagging language in the received query using a trained classifier and identifying an applicable associated fact fetcher (e.g., tags using code in a JSON file, see Page 1: Part II. Methodology, section A);
automatically utilizing the associated fact fetcher to identify additional contextual data (Page 2, Col. 1, Fig. 3, Section B. SERP API, and Sections D. and E., which disclose utilizing the training dataset and training models to identify the answers in a concise format under different descriptors, e.g., Answer Box, Knowledge Graph, and Organic Results, for instance).
Guru does not explicitly teach the limitations: “automatically generating an updated dialogue including the additional contextual data; and automatically running a trained language model on the updated dialogue to generate a response for the received query.”
In the same field of endeavor (i.e., data processing), Kallepalli teaches:
automatically generating an updated dialogue including the additional contextual data (Figs. 2-4, and pars. [0032-33] via the generating different clarifying suggestions for output that are relevant to the user’s inputs that implement the updated prompt/dialogue including the relevant/additional contextual data, [0040], and [0043]); and
automatically running a trained language model on the updated dialogue to generate a response for the received query (par. [0028] e.g., “the execution steps (128) can be a series of structured query language (SQL) statements. The generative pretrained transformer (110) can dynamically generate the execution steps according to the modeled syntax of paradigms (112) when the confidence (122) satisfies the threshold (126)”; par. [0036] and Figs. 2-4).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of the cited references, because the teachings of Kallepalli would have provided Guru with the above-indicated limitations, motivating a skilled artisan to run the trained/pretrained language model on the updated dialogue to generate the results/responses for the user query for efficient enhancement purposes (Kallepalli: Figs. 1-5; and pars. [0002] and [0016-18]).
Regarding claim 2, Guru and Kallepalli, in combination, teach: wherein the dialogue management system comprises a pretrained large language transformer model configured to utilize a deep neural network to generate the response to the received query (Guru: Page 1: Fig. 1 and Col. 2, at Part II. Methodology, Section A, e.g., “a Deep neural network”; and Kallepalli: par. [0025]: “machine learning platform (102) is a generative pretrained transformer (110). The generative pretrained transformer (110) is a large language model that uses a deep neural network for natural language processing (NLP)….”; and pars. [0026] and [0036]).
Regarding claim 3, Guru and Kallepalli, in combination, teach: wherein automatically tagging the language in the received query using the trained classifier (Guru: tags using code in a JSON file, see Page 1: Part II. Methodology, section A) further comprises:
automatically ranking one or more sentences in the received query to detect a class having a threshold probability of being applicable to the one or more sentences in the received query (Guru: Page 2: Part II. Methodology, Section A, e.g., “the highest probability is selection,” which implies the ranking technique, and Page 3: Part III. Results, e.g., “the threshold”; and Kallepalli: Fig. 4 via “Output Probabilities”; and par. [0027], which teaches detecting a class having a threshold probability, e.g., “the generative pretrained transformer (110) utilizes context (120) when predicting the user's underlying intent generative pretrained transformer (110) outputs a confidence (122) for mappings between the query and a predicted intent (124). A threshold (126) may be set (e.g., confidence interval of 50%, 75%, 95% etc.), where if the prediction sits outside of the confidence interval, the machine learning platform (102) dynamically generates natural language text (108), using the generative pretrained transformer (110). The output of this model is a series of clarifying questions, which is sent to the user interface as natural language text (108)”).
Regarding claim 4, Guru and Kallepalli, in combination, teach:
in response to detecting that no class has the threshold probability of being applicable to the one or more sentences in the received query, automatically using a summarizer to summarize a latter portion of the one or more received sentences (Guru: see Page 3, Part III. Results, e.g., “when the confidence probability was less than the threshold then the chatbot …. called the SERP API and it returned answers in a concise snippet…”, wherein the “snippet” is the summarizer; and Kallepalli: par. [0043]: “when the first confidence does not satisfy a threshold, the generative pretrained transformer machine learning model processes the first query and a second query to dynamically generate a second natural language text. The processing is performed to clarify a first intent of the first natural language text. For example, the threshold may specify a confidence interval, where if the confidence is outside the threshold, then an event is triggered prompting the dynamic generation of a second natural language text”, and par. [0044], wherein the second natural language text comprising one or more clarification questions inherently implies the summarizer summarizing a latter portion of the received sentences, and further in par. [0045]); and
automatically ranking the summarized latter portion of the one or more received sentences to detect a class having the threshold probability of being applicable to the one or more sentences in the received query (Guru: see again Page 3, Part III. Results; and Kallepalli: Figs. 2-3, which show the user interface outputting the queries' responses including the ranking via suggestion 1, suggestion 2, suggestion 3, etc., for instance).
Regarding claim 5, Guru and Kallepalli, in combination, teach: wherein utilizing the associated fact fetchers (Guru: e.g., training datasets, see Page 1: Part II. Methodology, Section A; and Page 2: Col. 2, Section D. Chatbot Model: e.g., “training input…”, wherein the training is interpreted as the fetchers; and Kallepalli: par. [0032]: “the prompt and completion pairs (200) serve as examples that enable the fine-tuning of the generative pretrained transformer to generate the different clarifying suggestions that are relevant to the Natural Language query received as input from the user.”, wherein the “fine-tuning of the generative pretrained transformer” is interpreted as fact fetchers) further comprises performing at least one of: invoking an API on a company-specific or third-party server, invoking customized logic, querying an external or internal database, asking a user for additional input, and performing a calculation or complex math calculation, or a combination thereof to identify the additional contextual data (Guru: Page 1: Col. 1, Abstract: e.g., “University website”, and 2nd paragraph, e.g., “the Industry”; and Kallepalli: par. [0038]: “a first natural language text is received via a user interface. An interface may be for spoken language, allowing a user to submit natural language queries.”, and par. [0043] via “the first query and a second query”; par. [0076], which teaches performing a calculation; par. [0077], e.g., “implement and/or be connected to a data repository. For example, one type of data repository is a database. A database is a collection of information configured for ease of data retrieval, modification, re-organization, and deletion. Database Management System (DBMS) is a software application that provides an interface for users to define, create, query, update, or administer databases”; and par. [0078]: “The DBMS may load the data from persistent or non-persistent storage and perform computations to respond to the query. The DBMS may return the result(s) to the user or software application”).
Regarding claim 6, Guru and Kallepalli, in combination, teach: in response to detecting a plurality of applicable classes for the received query, automatically identifying a plurality of associated fact fetchers and automatically identifying the additional contextual data using each of the plurality of associated fact fetchers (Guru: Page 1, Col. 2, last paragraph, e.g., “JSON file”, and/or “SERP API”; and Kallepalli: pars. [0052-57] via the “additional context (625)”).
Regarding claim 7, Guru and Kallepalli, in combination, teach: wherein the generated response for the received query is output to the user using a user interface (Guru: see Page 2, Section B. SERP API; and Kallepalli: Figs. 2-3; Abstract: e.g., “A first natural language text is received via a user interface…”; and pars. [0021-24] via the “user interface (106)”, and [0049], wherein the report is interpreted as the generated response).
Claims 8-20 are rejected for the reasons set forth in the analysis of claims 1-7 above, and the claims are rejected on that basis.
Prior Arts
The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action.
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)); Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jessica N. Le whose telephone number is (571)270-1009. The examiner can normally be reached M-F 9:30 am - 5:30 pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SHERIEF BADAWI can be reached at (571) 272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Jessica N Le/
Examiner, Art Unit 2169

/MD I UDDIN/
Primary Examiner, Art Unit 2169