DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/08/2025 has been entered.
Response to Arguments
Applicant’s arguments, see Remarks, filed 12/08/2025, with respect to rejection of claims 1-4, 6-13 and 15-20 under 35 U.S.C. 101 have been fully considered and are persuasive. The rejection of claims 1-4, 6-13 and 15-20 under 35 U.S.C. 101 has been withdrawn. Although the examiner does not agree with the argument that: “A person skilled in the art would understand that terms in the claims, such as ‘node tree,’ ‘jump relationships,’ and ‘recommendation list,’ describe complex data structures created and managed in computer memory, as well as automated state management and decision-making processes based on these structures”, the examiner acknowledges the applicant’s argument that “The recommended script generation mechanism in the claim (checking if the recommendation list is empty, acquiring script content from subnodes) is not mere information collection but a state-based technical mechanism. The recommendation list is a dynamic data structure associated with a specific node. Checking whether it is empty constitutes a state query. Taking different actions (populating the list or using it directly) based on its state (empty or not empty) constitutes a state-driven control flow. This mechanism optimizes the utilization of computer resources. Such state-based management is a fundamental, non-abstract technical concept in computing systems. Viewed as a whole, the claim does not merely describe an idea using highly generalized language. On the contrary, the claim describes a solution to the specific technical problem of multi-round dialogue management, implemented on a computing device by operating a specific pre-constructed node tree data structure and executing a state-based dynamic recommendation algorithm.” Regarding this applicant’s argument, the examiner agrees and thus withdraws the rejection under 35 U.S.C. 101.
Applicant's arguments filed 12/08/2025 regarding the rejection under 35 U.S.C. 103 have been fully considered but they are not persuasive. Regarding the rejection of claims 1-20 under 35 USC § 103, the applicant argues: “The node tree defined in the claims is fundamentally distinct from Kuchmann's parse tree in terms of purpose, function, and stage of use. Their structures and relationships serve entirely different objectives.
The node tree in the claims is pre-constructed and defines the service flow corresponding to a specific scenario. That is, it defines how the system can respond and how the dialogue may flow, embodying the system's internal business logic and dialogue capabilities. In contrast, Kuchmann's parse tree is generated only after a user query is received. It is an analytical result and representation of the grammatical and semantic structure of that specific user input sentence. It describes the user's input, not the system's output or internal logic. This represents a fundamental opposition between pre-defined system logic and post-hoc user input analysis. The Examiner's equating of the parse tree, which represents ‘what the user said,’ with the node tree that defines ‘how the system should respond and what to do next,’ is logically untenable.” Regarding applicant’s arguments, the examiner respectfully disagrees. The examiner contends that the applicant’s characterization of the claim language and Kuchmann’s disclosure appears to be inconsistent. The applicant relies on the newly added language of a “pre-constructed” node tree to argue that the claim language represents a service flow corresponding to a specific scenario, as opposed to Kuchmann’s parse tree which is generated after the user’s query is received. However, the examiner contends that Kuchmann’s disclosure, in p. 0037-0039 & p. 0046, provides for the matching of the parse tree, generated from the user’s input, to a pre-defined pattern database, where the pattern database defines linguistic patterns that are matched to the patterns in the parse tree. Although the parse tree in Kuchmann does represent “what the user said,” as argued by the applicant, the pattern database represents a mapping of how the system should respond and what to do next. Furthermore, the language of “defining a service flow in the scenario” is represented, for example, in Kuchmann’s p. 0061 because the disclosure provides for triggering a technical query, as the scenario, which in turn provides a template for the technical query which would correspond to a specific service flow triggered by the identified scenario. Thus, the examiner contends that Kuchmann in view of Andreas does teach the recited language of the claim.
Furthermore, regarding the rejection of claims 1-4, 6, 7, 9-13, 15, 16 and 18-20 under 35 USC § 103, the applicant argues: “The Examiner contends that Andreas's ‘the initial value is insufficient’ is equivalent to an ‘empty recommendation list’ and its extended plan is equivalent to acquiring content from subnodes. We disagree with this assertion by the Examiner. In claim 1, the ‘empty recommendation list’ is associated with a specific node. Checking whether it is empty constitutes a state query. If it is empty, content is acquired from the set of qualifying sub-nodes of that node - a pre-defined, structured source - to populate this list. In Andreas, assessing whether the initial value is insufficient is a dynamic judgment, not a state query. It judges whether the current data is sufficient to generate a response; if not, it executes a new, potentially arbitrary computational plan (extended plan) to acquire more data. Here, there is no concept of a ‘list,’ nor is there the concept of acquiring content from a fixed, hierarchical collection of ‘subnodes’; rather, it involves executing another task. The Examiner erroneously equates the action of ‘checking if the recommendation list is empty,’ which is a state query, with the action of ‘judging whether the initial value is sufficient,’ which is a condition evaluation, in Andreas. The former is an inspection of a persistent data structure, while the latter is an evaluation of the result of a computation. These are two fundamentally different computational paradigms.” Regarding applicant’s arguments, the examiner respectfully disagrees.
The examiner contends that the applicant’s characterization of Andreas’s disclosure appears to be incorrect. The examiner previously argued that: “The description of Andreas’ p. 0098-0100 provides that when the initial value is determined insufficient because the selected description template is otherwise “empty” at initiation to properly generate an extended description responsive to the conversational event, thus the description template may require additional information that is not output by the initial computer-executable plan (e.g. the “initial value”).” The examiner further argues that the disclosure of Andreas’s initial value corresponds to the condition evaluation that decides whether an extended computer-executable plan is required. The argument that this is a fundamentally different computational paradigm than the claimed subject matter is incorrect, because the claimed subject matter simply describes whether further script content needs to be obtained based on the recommendation list being empty or not. The applicant’s arguments appear to construe the claimed subject matter in a much narrower light than what is actually claimed. Namely, the applicant argues that checking whether the recommendation list is empty constitutes a state query. There is no indication in the claim that the language represents a state query other than the recitation of a node tree, which does not require a state query, in the understanding of the computational paradigm of the claimed subject matter. Regarding the applicant’s arguments concerning the technical solutions, the examiner contends that the applicant appears to construe the claim language in a narrow light.
The applicant notes that: “through the pre-constructed node tree architecture for business scenarios combined with the dynamic recommendation list mechanism, achieves a significant improvement in the flexibility of dialogue flow and the accuracy of recommendations.” The examiner contends that, although the recommendation list mechanism is dynamic in the sense that it attempts to collect scripts when the recommendations are absent, the claim does not specify at all what mechanism is employed to obtain the script content when the recommendation list is empty, even though this appears to be an important part of the dynamic mechanism of the recommendation list.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, 7, 9-13, 15, 16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kuchmann-Beauger (US PG Pub 20130262501; hereinafter “Kuchmann”) in view of Andreas (US PG Pub 20210050006).
As per claims 1, 10 and 19, Kuchmann discloses:
A multi-round dialogue processing method, electronic device and non-transitory computer readable storage medium storing computer instructions for causing a computer to perform a multi-round dialogue processing method implemented by a computing device comprising: at least one processor (Kuchmann; Fig. 7, item 705; p. 0115 - The computer system 700 includes a processor 705; also see p. 0040 - answer generator 160 may include query processor 162, visualization processor 164 and feedback handler 166); and a memory connected with the at least one processor communicatively (Kuchmann; Fig. 7, item 710 & 715; p. 0115 - The computer system 700 includes a media reader 740 to read the instructions from the computer readable storage medium 755 and store the instructions in storage 710 or in random access memory (RAM) 715… The processor 705 reads instructions from the RAM 715 and performs actions as instructed); wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a multi-round dialogue processing method (Kuchmann; Fig. 7, item 710 & 715; p. 0115 - The computer system 700 includes a media reader 740 to read the instructions from the computer readable storage medium 755 and store the instructions in storage 710 or in random access memory (RAM) 715… The processor 705 reads instructions from the RAM 715 and performs actions as instructed) comprising: acquiring a first query input by a user in a current round dialogue (Kuchmann; Fig. 2, item 210; p. 0044- The process starts at 210 with receiving a query. The query may be expressed in natural language. Questions to be answered may be expressed in free form and may include various linguistic features, mistakes, ambiguities, etc. In one embodiment, the query is a BI question posed by a user. Table 1… illustrates exemplary BI questions), and identifying an intention of the first query (Kuchmann; Fig. 2, item 220; p. 0045 - At 220, the query is parsed. 
In one embodiment, methods and techniques for analyzing queries semantically and syntactically may be implemented. For example, various modules, such as, tokenizer 132, POS tagger 134, stemmer 136, focus identification 138, entity recognition 140, semantic relationship identifier 142, and syntactic relationship identifier 144 (FIG. 1), may perform analyses of received queries. In one embodiment, business entities are identified in the query, where the business entities correspond to entities defined in a semantic layer or other ontologies representing data sources (e.g., data models 174 in FIG. 1)); determining a scenario corresponding to the identified intention from scenarios corresponding to different intentions (Kuchmann; Fig. 2, item 230; p. 0046 - At 230, as a result of parsing the query, a corresponding data structure that corresponds to the parsed query is generated. The parsed query may be represented as a tree structure such as a parse tree. In one embodiment, the parse tree is a labeled tree graph. The parsed question may represent syntactic and semantic analysis of the received question… The parse tree or labeled tree graph corresponds to a “scenario” as defined in the specification in Pg. 4 – “…each scenario may include (that is, correspond to) one node tree; that is, plural nodes are organized in a tree structure…”; see also p. 0047 - the parsed query is matched to a pattern from a number of patterns (scenarios corresponding to different intentions)), determining a node matched with the first query in a pre-constructed node tree corresponding to the scenario (Kuchmann; p. 0046-0047 - At 240, the parsed query is matched to a pattern from a number of patterns. To translate the received BI query to a machine-readable query, e.g., a technical query, the parsed query is matched to patterns such as patterns kept in pattern database 152 in FIG. 1…; see also p. 0037-0039 - Pattern matcher 150 may include pattern database 152 and pattern learner 154. 
Pattern database 152 includes a set of predefined linguistic patterns. A pattern is a set of rules for recognizing character strings that share common features… a pattern of a question may be associated with predefined templates of technical queries. Once, a question to be answered is parsed by question analyzer 130, the parsed question is matched by pattern matcher 150 to a corresponding pattern included in pattern database 152), the node matched with the first query being a node with a node trigger condition matched with the first query (Kuchmann; p. 0061 - At 250, a technical query is triggered, where the technical query is associated with the pattern that matches the received query. For example, the received query may be the exemplary question "What is the global revenue in Tokyo". This exemplary question may be matched by Pattern (1). Template (1) below represents an exemplary template of technical query that may be associated with Pattern (1) (trigger condition matched)), wherein the pre-constructed node tree includes multiple nodes organized in a tree structure with jump relationships defining a service flow in the scenario, and each node processes a dialogue logic (Kuchmann; p. 0046 - nodes of the tree may represent various tokens or business entities, for example, measures, dimensions, attributes, members of measures, and the like (tree structure). Edges of the tree represent relations between the entities (jump relationships). In one embodiment, the generated tree structure may be implemented in RDF. The RDF tree may represent entities identified in the received question and relations between these entities in the form RDF triples; see also p. 0061 - At 250, a technical query is triggered, where the technical query is associated with the pattern that matches the received query (service flow). For example, the received query may be the exemplary question "What is the global revenue in Tokyo". This exemplary question may be matched by Pattern (1). 
Template (1) below represents an exemplary template of technical query that may be associated with Pattern (1)…); determining a recommended script according to the matched node (Kuchmann; p. 0069-0071 - In one embodiment, a QA system may be integrated to a situational recommender system to enable context-aware recommendation of answers to users' questions; see also p. 0039-0040), wherein each node corresponds to a script content which is generated based on the node triggering condition of the node (Kuchmann; p. 0061 - At 250, a technical query is triggered, where the technical query is associated with the pattern that matches the received query. For example, the received query may be the exemplary question "What is the global revenue in Tokyo". This exemplary question may be matched by Pattern (1). Template (1) below represents an exemplary template of technical query that may be associated with Pattern (1) (trigger condition matched)); and acquiring a reply script corresponding to the first query (Kuchmann; p. 0069-0071 - At 270, based on the retrieved data, an answer to the received query is generated… The QA system generates answers to BI queries that are adapted to the user's context or situation (reply script); see also p. 0039-0040). 
Kuchmann, however, fails to disclose determining a to-be-identified node from the matched node, in response to that the to-be-identified node corresponds to an empty recommendation list, acquiring a script content corresponding to each sub-node of the to-be-identified node meeting requirements, forming the recommendation list with all script contents acquired, and generating the recommended script according to the recommendation list; and in response to that the recommendation list corresponding to the to-be-identified node is not empty, generating the recommended script according to the recommendation list, wherein each node corresponds to a script content which is generated based on the node triggering condition of the node; acquiring a guide script corresponding to the first query, wherein the guide script is preset; and generating a reply corresponding to the first query according to the reply script, the guide script and the recommended script. Andreas does teach determining a to-be-identified node from the matched node (Andreas; p. 0044 - For example, as shown in FIG. 2C, the matcher of Rule 1 is configured to match a node that fulfills two conditions, namely that the node (1) contains a WeatherReport that (2) resulted from a call to the weatherSearch function. The matcher is configured to assign the name [report] to the report value itself, and the names [place] and [time] to the two arguments of the call to weatherSearch. If the matcher is not applicable in the generation model's current state (e.g. if the vertex was not produced by a call to weatherSearch), then the rule is not used (i.e., other rules may be used instead)), in response to that the to-be-identified node corresponds to an empty recommendation list, acquiring a script content corresponding to each sub-node of the to-be-identified node meeting requirements, forming the recommendation list with all script contents acquired (Andreas; p. 0098-0100 - At 128, method 120 includes selecting an extended computer-executable plan based on the initial value. In some examples, the extended computer-executable plan may be selected based on determining that the initial value is insufficient for generating the extended description responsive to the conversational event (empty recommendation list, additional information required). For example, the extended computer-executable plan may be a computer-executable plan that is configured to output additional information suitable for filling in a description template. The extended computer-executable plan may be selected using any suitable techniques, for example, based on generating one or more extended computer-executable plans that are configured to output data having a datatype corresponding to the additional information… selecting a description template and/or selecting the extended computer-executable plan may be based on selecting an applicable generation rule of a plurality of generation rules, wherein the applicable generation rule defines both of the description template and/or the extended computer-executable plan (e.g., as the “body” of the generation rule) (acquiring a script content corresponding to each subnode meeting requirements)), and generating the recommended script according to the recommendation list (Andreas; p. 0104 - At 130, method 120 includes using the extended computer-executable plan to output the additional information. The additional information may be used for generating an extended description for responding to the conversational event. For example, the additional information may be used for filling in a description template as illustrated at 132); and in response to that the recommendation list corresponding to the to-be-identified node is not empty, generating the recommended script according to the recommendation list (Andreas; p. 0057 - Accordingly, returning to FIG. 1B, at 112, method 100 optionally includes filling the natural-language template of the generation rule with description(s) derived from the extended computer-executable plan. For example, further description(s) may be generated by recursively traversing the extended computer-executable plan to find suitable data-flow program fragments for computing values related to the further description(s), and applying further generation rules to the data-flow program fragments. For example, the natural-language template may indicate one or more values to be mentioned in the description. As an example, returning to FIG. 2C, rule 1 includes a template specifying an output description of the form “It is {Entity [weatherNow]} {Property [time]}, but {Entity [weatherNext]} starting {Property [timeNext]}.” Accordingly, generating a description according to rule 1 includes filling in the template with descriptions for “weatherNow,” “time,” “weatherNext,” and “timeNext.” In the notation used herein, curly braces designate values that should be described by recursively calling the generation model, e.g., by recursively matching additional generation rules to the values); acquiring a guide script corresponding to the first query, wherein the guide script is preset (Andreas; p. 0054 - For example, rule 1 is configured to describe the weather, not only by outputting a description of the current weather, but by further outputting a description of the weather in an hour if the weather is expected to change (this corresponds to a “guide script”… guide script in the specification is defined in Pg. 8 as: “…used to guide subsequent scripts”.
Thus, providing a further description of the current weather may serve as guiding subsequent exchange with the user)… Accordingly, the rule may be applied to assess whether the weather is likely to change and, if the weather is likely to change, the rule body is successfully executed so that a suitable response utterance may be generated using the natural language template of the rule (preset). Generating response utterances using rules such as rule 1 may assist the user by providing information that the user is likely interested in but may not have explicitly asked for. For example, a user may be interested in the weather so that they may select appropriate attire, and as such if the weather is expected to change the user may wish to know about the change. Accordingly, applying rule 1 may result in gathering additional information and including the new information in the response utterance, e.g., so that the user receives pertinent information without needing to ask follow-up questions (guiding subsequent scripts); see also p. 0058 – natural language template of a generation rule is a generation rule that is effectively “preset”; see also p. 0105) and generating a reply corresponding to the first query according to the reply script, the guide script and the recommended script (Andreas; p. 0058 - After filling in a natural-language template of a generation rule with description(s) derived from the extended computer-executable plan, at 114, method 100 further includes outputting a response utterance based on the filled-in natural-language template. In some examples, more than one generation rule may be applied. Consequently, more than one natural-language template may be filled in. Accordingly, outputting the response utterance may include selecting one of the filled-in natural-language templates and outputting a text string based on that template. 
Selection of a particular response utterance based on multiple different generation rules may include filtering and/or ranking response utterances in any fashion; see also p. 0105).
Therefore, it would have been obvious to one of ordinary skill in the art to modify the method, electronic device and non-transitory computer readable storage medium of Kuchmann to include acquiring a guide script corresponding to the first query and generating a reply corresponding to the first query according to the reply script, the guide script and the recommended script, as taught by Andreas, because by performing new computational steps in the course of outputting a description, the conversational computing interface may be able to take “initiative” to fulfill user requests (e.g., to satisfy an implicit desire implied by a user request) (Andreas; p. 0054).
As per claims 2, 11 and 20, Kuchmann in view of Andreas disclose: The method, electronic device and non-transitory computer readable storage medium according to claims 1, 10 and 19, upon which claims 2, 11 and 20 depend. And further, Andreas discloses wherein the determining the to-be-identified node from the matched node comprises: if a number of the matched nodes is 1, taking the matched node as a to-be-identified node (Andreas; p. 0044 - For example, as shown in FIG. 2C, the matcher of Rule 1 is configured to match a node that fulfills two conditions, namely that the node (1) contains a WeatherReport that (2) resulted from a call to the weatherSearch function. The matcher is configured to assign the name [report] to the report value itself, and the names [place] and [time] to the two arguments of the call to weatherSearch. If the matcher is not applicable in the generation model's current state (e.g. if the vertex was not produced by a call to weatherSearch), then the rule is not used (i.e., other rules may be used instead)), and if the number of the matched nodes is greater than 1, taking the last node in the matched nodes as the to-be-identified node, the last node being a node to which a process finally jumps according to a jump relationship between the nodes (Andreas; p. 0022 - For example, the generation rules may be applied by traversing a data-flow graph representing the computer-executable plan...; see also p. 0044 - A rule further includes a matcher configured to accept or reject a node in the data-flow graph, for example by testing the function at that node, the value the function returned, or recursively testing the arguments the function consumed (e.g., testing the function at an argument node, the value returned by the argument node, and/or further recursively testing further arguments of the argument node). 
A matcher is configured to determine whether a rule applies to a given piece of the dataflow graph); and generating the recommended script according to the to-be-identified node (Andreas; p. 0022 - Generation rules may be configured to produce the response utterance in any suitable fashion, for example a generation rule may be applied to add text to a response utterance that will be output in the conversation, extend the computer-executable plan with additional computations by adding new nodes to the data-flow graph, and/or recursively traverse the data-flow graph to apply more generation rules. A plurality of generation rules may be applied to obtain a set of candidate response utterances, which may be utilized in any suitable fashion (e.g., ranked to select a specific response utterance, and/or used in training data)…). Therefore, it would have been obvious to one of ordinary skill in the art to modify the method, electronic device and non-transitory computer readable storage medium of Kuchmann to include wherein the determining the recommended script according to the matched node comprises: if a number of the matched nodes is 1, taking the matched node as a to-be-identified node, and if the number of the matched nodes is greater than 1, taking the last node in the matched nodes as the to-be-identified node, the last node being a node to which a process finally jumps according to a jump relationship between the nodes; and generating the recommended script according to the to-be-identified node, as taught by Andreas, because by performing new computational steps in the course of outputting a description, the conversational computing interface may be able to take “initiative” to fulfill user requests (e.g., to satisfy an implicit desire implied by a user request) (Andreas; p. 0054).
As per claims 3 and 12, Kuchmann in view of Andreas disclose: The method and electronic device according to claims 2 and 11, further comprising: judging whether the to-be-identified node meets a recommendation condition (Kuchmann; p. 0072 - depending on the context, a user may receive automatically generated recommendations; see also p. 0096-0098 – contains examples of different recommendation conditions such as time, location and historical context). And further, Andreas teaches generating the recommended script according to the to-be-identified node under a condition that the to-be-identified node meets the recommendation condition (Andreas; p. 0044 - For example, as shown in FIG. 2C, the matcher of Rule 1 is configured to match a node that fulfills two conditions, namely that the node (1) contains a WeatherReport that (2) resulted from a call to the weatherSearch function. The matcher is configured to assign the name [report] to the report value itself, and the names [place] and [time] to the two arguments of the call to weatherSearch. If the matcher is not applicable in the generation model's current state (e.g. if the vertex was not produced by a call to weatherSearch), then the rule is not used (i.e., other rules may be used instead)). Therefore, it would have been obvious to one of ordinary skill in the art to modify the method and electronic device of Kuchmann to include generating the recommended script according to the to-be-identified node under a condition that the to-be-identified node meets the recommendation condition, as taught by Andreas, because by performing new computational steps in the course of outputting a description, the conversational computing interface may be able to take “initiative” to fulfill user requests (e.g., to satisfy an implicit desire implied by a user request) (Andreas; p. 0054).
As per claims 4 and 13, Kuchmann in view of Andreas disclose: The method and electronic device according to claims 3 and 12, wherein the to-be-identified node meets the recommendation condition in the case that the to-be-identified node comprises at least two subnodes meeting requirements; and wherein a subnode of the subnodes meets requirements in the case of: an intention corresponding to a subnode is matched with the intention corresponding to the first query; and/or an entity corresponding to a subnode is matched with an entity corresponding to the first query, the entity comprising an entity attribute and/or an entity content (Andreas; p. 0044 - For example, as shown in FIG. 2C, the matcher of Rule 1 is configured to match a node that fulfills two conditions, namely that the node (1) contains a WeatherReport that (2) resulted from a call to the weatherSearch function (intent). The matcher is configured to assign the name [report] to the report value itself, and the names [place] and [time] (entities) to the two arguments of the call to weatherSearch. If the matcher is not applicable in the generation model's current state (e.g. if the vertex was not produced by a call to weatherSearch), then the rule is not used (i.e., other rules may be used instead)). 
Therefore, it would have been obvious to one of ordinary skill in the art to modify the method and electronic device of Kuchmann to include wherein the to-be-identified node meets the recommendation condition in the case that the to-be-identified node comprises at least two subnodes meeting requirements; and wherein a subnode of the subnodes meets requirements in the case of: an intention corresponding to a subnode is matched with the intention corresponding to the first query; and/or an entity corresponding to a subnode is matched with an entity corresponding to the first query, the entity comprising an entity attribute and/or an entity content, as taught by Andreas, because by performing new computational steps in the course of outputting a description, the conversational computing interface may be able to take “initiative” to fulfill user requests (e.g., to satisfy an implicit desire implied by a user request) (Andreas; p. 0054).
As per claims 6 and 15, Kuchmann in view of Andreas disclose: The method and electronic device according to claims 1 and 10, wherein the forming the recommendation list with all script contents acquired comprises: ranking the script contents acquired according to a predetermined arrangement order of the corresponding subnodes in the subnodes of the to-be-identified node, and forming the recommendation list by the ranked script contents (Andreas; p. 0058 - Selection of a particular response utterance based on multiple different generation rules may include filtering and/or ranking response utterances in any fashion. For example, the generation model may be trained based on training data indicating exemplary user utterances and suitable response utterances. Further examples of obtaining training data indicating how to select response utterances will be described below. In some examples, ranking response utterances may be based on scoring and/or ranking the different applicable rules in each state during recursive generation, thereby inducing a ranking on responses (e.g., a ranking based on a sum, product, and/or average of scores for applicable rules at each state, a mean ranking across all states, and/or any other suitable ranking based on aggregating rankings of different applicable rules in each state during recursive generation)); and wherein the generating the recommended script according to the recommendation list comprises: generating the recommended script comprising all script content in the recommendation list, and a sequence of the script contents in the recommended script is the same as a ranking sequence of the script contents in the recommendation list (Andreas; p. 0034 - In some examples, a conversational computing interface may be configured to automatically infer a final “describe” operation for a computer-executable plan that does not otherwise include a “describe” operation, e.g., in order to automatically describe an overall result of the computer-executable plan and provide a suitable response utterance. For example, the automatic “describe” operation may be added to the computer-executable plan and parametrized by a data-flow program fragment corresponding to a terminal (e.g., most downstream) vertex in a corresponding data-flow graph (e.g., a highest-ordered vertex according to an order induced by topologically sorting the data-flow graph (ranking sequence of the script contents)). For example, FIG. 2B shows a data-flow graph corresponding to an extended data-flow program that may be automatically inferred based on the data-flow program 204 shown in FIG. 2A, namely by adding a “describe” operation parametrized by the “weatherSearch” operation).
Therefore, it would have been obvious to one of ordinary skill in the art to modify the method and electronic device of Kuchmann to include wherein the forming the recommendation list with all script contents acquired comprises: ranking the script contents acquired according to a predetermined arrangement order of the corresponding subnodes in the subnodes of the to-be-identified node, and forming the recommendation list by the ranked script contents; and wherein the generating the recommended script according to the recommendation list comprises: generating the recommended script comprising all script content in the recommendation list, and a sequence of the script contents in the recommended script is the same as a ranking sequence of the script contents in the recommendation list, as taught by Andreas, because by performing new computational steps in the course of outputting a description, the conversational computing interface may be able to take “initiative” to fulfill user requests (e.g., to satisfy an implicit desire implied by a user request) (Andreas; p. 0054).
As per claims 7 and 16, Kuchmann in view of Andreas disclose: The method and electronic device according to claims 6 and 15, further comprising: acquiring a second query input by the user in a next round dialogue after the current round dialogue, and determining subnodes matched with the second query in the subnodes meeting requirements; increasing a number of hits corresponding to the subnodes matched with the second query, an initial number of hits of the subnodes meeting requirements being 0 (Andreas; p. 0098-0100 - At 128, method 120 includes selecting an extended computer-executable plan based on the initial value. In some examples, the extended computer-executable plan may be selected based on determining that the initial value is insufficient for generating the extended description responsive to the conversational event (prompting for additional information). For example, the extended computer-executable plan may be a computer-executable plan that is configured to output additional information suitable for filling in a description template. The extended computer-executable plan may be selected using any suitable techniques, for example, based on generating one or more extended computer-executable plans that are configured to output data having a datatype corresponding to the additional information… selecting a description template and/or selecting the extended computer-executable plan may be based on selecting an applicable generation rule of a plurality of generation rules, wherein the applicable generation rule defines both of the description template and/or the extended computer-executable plan (e.g., as the “body” of the generation rule)); and re-ranking the script contents in the recommendation list in a descending order of the number of hits (Andreas; p. 0058 - Selection of a particular response utterance based on multiple different generation rules may include filtering and/or ranking response utterances in any fashion. For example, the generation model may be trained based on training data indicating exemplary user utterances and suitable response utterances. Further examples of obtaining training data indicating how to select response utterances will be described below. In some examples, ranking response utterances may be based on scoring and/or ranking the different applicable rules in each state during recursive generation, thereby inducing a ranking on responses (e.g., a ranking based on a sum, product, and/or average of scores for applicable rules at each state, a mean ranking across all states, and/or any other suitable ranking based on aggregating rankings of different applicable rules in each state during recursive generation) (once additional information is gathered, a re-ranking may be performed)). Therefore, it would have been obvious to one of ordinary skill in the art to modify the method and electronic device of Kuchmann to include acquiring a second query input by the user in a next round dialogue after the current round dialogue, and determining subnodes matched with the second query in the subnodes meeting requirements; increasing a number of hits corresponding to the subnodes matched with the second query, an initial number of hits of the subnodes meeting requirements being 0; and re-ranking the script contents in the recommendation list in a descending order of the number of hits, as taught by Andreas, because by performing new computational steps in the course of outputting a description, the conversational computing interface may be able to take “initiative” to fulfill user requests (e.g., to satisfy an implicit desire implied by a user request) (Andreas; p. 0054).

As per claims 9 and 18, Kuchmann in view of Andreas disclose: The method and electronic device according to claims 1 and 10, upon which claims 9 and 18 depend.
And further, Andreas teaches wherein the generating the reply corresponding to the first query according to the reply script, the guide script and the recommended script comprises: splicing the reply script, the guide script and the recommended script to obtain the reply corresponding to the first query (Andreas; p. 0058 - After filling in a natural-language template of a generation rule with description(s) derived from the extended computer-executable plan, at 114, method 100 further includes outputting a response utterance based on the filled-in natural-language template. In some examples, more than one generation rule may be applied. Consequently, more than one natural-language template may be filled in. Accordingly, outputting the response utterance may include selecting one of the filled-in natural-language templates and outputting a text string based on that template. Selection of a particular response utterance based on multiple different generation rules may include filtering and/or ranking response utterances in any fashion; see also p. 0105).
Therefore, it would have been obvious to one of ordinary skill in the art to modify the method and electronic device of Kuchmann to include wherein the generating the reply corresponding to the first query according to the reply script, the guide script and the recommended script comprises: splicing the reply script, the guide script and the recommended script to obtain the reply corresponding to the first query, as taught by Andreas, because by performing new computational steps in the course of outputting a description, the conversational computing interface may be able to take “initiative” to fulfill user requests (e.g., to satisfy an implicit desire implied by a user request) (Andreas; p. 0054).

Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kuchmann in view of Andreas and further in view of Terry (US PG Pub 20180373696).
As per claims 8 and 17, Kuchmann in view of Andreas disclose: The method and electronic device according to claims 1 and 10, upon which claims 8 and 17 depend. Kuchmann in view of Andreas, however, fail to disclose wherein the acquiring the guide script corresponding to the first query comprises: selecting randomly one guide script from at least one preset guide script as the guide script corresponding to the first query. Terry does teach wherein the acquiring the guide script corresponding to the first query comprises: selecting randomly one guide script from at least one preset guide script as the guide script corresponding to the first query (Terry; p. 0156 - It should be noted the response template may be a singular template, multiple templates that can be used interchangeably, or a template with variable features that may be pseudo-randomly replaced... For example, if the message asks “how are you” the response template could include “[Salutation], I [verb] [status] today.” The salutation could be randomly, or pseudo-randomly selected from the following: “Thanks for asking”, “Hi”, “Hey”, “You are so sweet” or the like. The verb could include the following: “am”, “am feeling”, “feel”, etc. The status could include: “happy”, “fine”, “great”, etc. This allows a total of at least 36 possible outputs for this question).
Therefore, it would have been obvious to one of ordinary skill in the art to modify the method and electronic device of Kuchmann and Andreas to include wherein the acquiring the guide script corresponding to the first query comprises: selecting randomly one guide script from at least one preset guide script as the guide script corresponding to the first query, as taught by Terry, in order to ensure the response is as “human sounding” as possible… because the lead may send more than one simple question over the course of a message exchange, and having a static answer may appear “robotic” to the lead over time (Terry; p. 0156).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. This prior art includes:

Mandal (US PG Pub 20190057157) discloses a method and system for providing context based adaptive response to user interactions. A primary context of the user interactions is determined based on intents and associated named entities extracted from the user interactions. Further, secondary context of the user interactions is determined by detecting enquiry intent in user responses for queries provided for the primary context of the user interactions. Information related to primary and the secondary contexts are stored as Key Context Information (KCI) and is dynamically updated during the user interactions. Finally, context based adaptive responses are generated based on the updated KCI upon determining non-enquiry intent in subsequent user responses. The method of present disclosure maintains track of the user interactions and automatically detects changes in the context of the user interactions. Thereafter, the method provides adaptive responses corresponding to each context of the user interactions, thereby improving overall user experience (Mandal; Abstract).

He (US PG Pub 20180373965) discloses a method and apparatus for generating a response based on artificial intelligence, and a storage medium. The method comprises: generating a response forest which employs a multi-way tree data structure, the multi-way tree at least comprising three layers of nodes: a root node, domain nodes and role nodes arranged in a top-bottom order, each leaf node respectively corresponding to at least one response template corresponding to information on the leaf node path; obtaining a user question, searching the response forest according to the user question to obtain a leaf node corresponding to the user question, and regarding a response template corresponding to the obtained leaf node as a candidate response template; generating a to-be-broadcast response according to the user question and the candidate response template, and broadcasting the to-be-broadcast response to the user. The solution of the present disclosure exhibits wide applicability and can improve the response broadcasting effect (He; Abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Rodrigo A Chavez whose telephone number is (571)270-0139. The examiner can normally be reached Monday - Friday 9-6 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil, can be reached at 571-272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RODRIGO A CHAVEZ/Examiner, Art Unit 2658
/RICHEMOND DORVIL/Supervisory Patent Examiner, Art Unit 2658