Prosecution Insights
Last updated: April 19, 2026
Application No. 19/078,781

CONFIGURATION METHOD AND APPARATUS OF DIALOGUE ROBOT, ELECTRONIC DEVICE, MEDIUM AND PRODUCT

Final Rejection §103
Filed
Mar 13, 2025
Examiner
SONIFRANK, RICHA MISHRA
Art Unit
2654
Tech Center
2600 — Communications
Assignee
BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round
4 (Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 3m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 66% (above average; 250 granted / 379 resolved; +4.0% vs TC avg)
Interview Lift: +24.9% on resolved cases with interview (strong)
Avg Prosecution (typical timeline): 3y 3m; 29 applications currently pending
Total Applications (career history): 408, across all art units

Statute-Specific Performance

§101: 16.6% (-23.4% vs TC avg)
§103: 56.1% (+16.1% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 379 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

This application is a continuation of PCT application PCT/CN2024/084845, which was filed on 3/29/2024.

Response to Amendment

No claims are amended. Claims 4 and 16 are cancelled. Claims 1-3, 5-15 and 17-20 are presented for examination.

Response to Arguments

Applicant's arguments filed on 11/7/2025 have been reviewed. The responses follow.

Applicant argues: "Compared with the cited references, Applicant respectfully submits that claim 1 has at least the following two distinguishing features: I. Feature (1): the configuration instruction comprises a dialogue strategy for a second dialogue scene, and the language style is determined based on the identification or category of the second user. The Examiner asserts that this feature is disclosed in both Abramson and Saradhi (see pages 4-5 of the Office action). However, the language styles of the chatbots in both references are entirely irrelevant to the second user, and neither reference discloses any technical feature of determining the language style based on the second user's identity or category."

The Examiner has relied on Saradhi to teach this concept. Specifically, Saradhi teaches: "In our project, we created a Telegram bot that automatically responds to WhatsApp messages for a particular contact. Users can provide the contact name and a chat text file to the bot. The conversation data is processed, and dialogues are extracted and put into a structured manner. The Microsoft Dialogue GPT model and transfer learning techniques are used to train a conversational chatbot that mimics the communication style of the target contact. After deployment to the Hugging Face repository, the trained model is used." Here the style of the response is based on the second user (the contact).
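The examiner's reading of Saradhi can be illustrated with a minimal sketch: one learned style per target contact, so the reply style is a function of who the chat recipient (the second user) is. All names and styles below are hypothetical and appear in neither reference.

```python
# Hypothetical per-contact style index, illustrating the examiner's
# position that Saradhi's style selection is keyed to the recipient.
STYLE_INDEX = {
    "Bunny": "casual, short replies",
    "Manager": "formal, complete sentences",
}

def reply_style_for(second_user: str) -> str:
    """Select the language style from the recipient's identity."""
    return STYLE_INDEX.get(second_user, "neutral")

def send_reply(second_user: str, text: str) -> str:
    # The style is chosen per recipient before the message is composed.
    return f"[{reply_style_for(second_user)}] {text}"
```

Under this reading, a different recipient yields a different style, which is the mapping the rejection relies on.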
Additionally, Abramson teaches that any user can pick a style; hence the chatbot style is based on the second user, because the user can select the style (Para 0053; 0015-0016).

Applicant argues: "(A) Abramson does not disclose this feature because its language style is determined by the imitated specific entity and has no connection with the second user. Abramson's core technical solution is 'creating a conversational chat bot of a specific person/entity' (see Abstract). The language style of its chatbot is determined solely by the personality traits of the 'specific entity/person,' with no correlation to the chat recipient (i.e., the second user in the present application). Specific bases are as follows. 1. Construction logic of the Personality Index: Abramson explicitly defines the Personality Index as a structured data repository built from social data of a specific entity (e.g., the entity's conversation records, voice data, and behavioral preferences) (see Par. [0014], [0032]). Its core function is to 'train the chatbot to converse and interact in the personality of the specific person' (see Par. [0015]). This style remains fixed after training. 2. Uniqueness and immutability of the language style: Abramson's chatbot only imitates the style of a single specific entity. Regardless of the chat recipient (i.e., whether the second user is a friend, colleague, or stranger), the chatbot outputs the fixed style of that specific entity (see Par. [0034]). For instance, a chatbot imitating 'Celebrity X' will use Celebrity X's language style when conversing with any user, without changing based on the chat recipient's identity or category. 3. Abramson does not disclose determining language style based on the second user: Nowhere does Abramson mention 'adjusting the language style based on the chat recipient's identity or category.' All of its technical solutions focus solely on 'how to accurately imitate a specific entity,' rather than 'how to adapt to the chat recipient.' In summary, the language style of Abramson's chatbot is determined solely by the imitated specific entity and is unrelated to the second user's identity or category. Accordingly, Abramson fails to disclose Feature (1)."

The Examiner relied on Saradhi to teach this concept. Under BRI, being able to pick a particular style to respond to a second user (a friend) would read on the limitation of the chatbot's language style being based on a second user.

Applicant argues: "(B) Saradhi does not disclose this feature because its language style is fixed to imitate the target contact, and there is no limitation on dialogue scene. In Saradhi, the language style of the bot is determined solely by the target contact, is unaffected by the chat recipient, and does not involve any configuration of a dialogue scene. Specific bases are as follows." However, the contact is a chat recipient, and hence Saradhi teaches the claimed concept.

Applicant further argues: "1. Fixity of the language style: Saradhi explicitly states that the chatbot is trained using chat records of the target contact provided by the user to learn the target contact's language style (see Abstract, Section 1.1). After training, the chatbot's language style is fixed to that of the target contact and does not change based on the chat recipient. For example, if the user uploads 'chat records of contact Bunny,' the chatbot will imitate Bunny's style after training and will reply in Bunny's style regardless of whether the subsequent chat recipient is Swajan ace or any other user (see FIGS. 3 and 4)." However, the claim requires determining, in response to detecting the second dialogue scene, a language style based on an identification or a category of the second user having the dialogue with the first user; nowhere does the claim require that the bot keep adapting to the user's style after the user has started speaking/chatting. Even the originally filed specification of the current case does not teach this concept; the specification only mentions that the bot tries to change based on the target user.

Applicant further argues: "2. No configuration instruction for dialogue scene: The user operations in Saradhi are limited to 'send the chat text file and contact name to the bot' (see Section 1.2.2), and there is no disclosure of configuring a 'dialogue scene' (such as selecting a category of the second user or specifying a particular user type). The chatbot's operating logic is simply to 'launch WhatsApp and begin keeping an eye on the specified contact's chat' (see Section 1.2.5), with no design for enabling or switching styles in connection with any dialogue scene." However, referring to Fig. 3, Saradhi teaches the potential scenario, and the model works based on uploading the scenes (under 1.1). Additionally, the Examiner also relied on Abramson to teach this concept.

Applicant argues: "3. Language style is irrelevant to the chat recipient: As shown in FIG. 4 of Saradhi, when interacting with the chat recipient Swajan ace, the chatbot consistently replies in the Bunny-imitated style, without adjusting its language style based on Swajan ace's identity. Moreover, nowhere in Saradhi is there any disclosure of adjusting the language style for different chat recipients. In summary, the language style of Saradhi's chatbot is fixed to imitate the target contact, is not associated with any dialogue scene, and is not determined based on the second user's identity or category. Accordingly, Saradhi also fails to disclose Feature (1)." However, the claim only requires the chat style of the second user.
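The claim reading the Examiner applies above can be sketched as follows: the style is determined once, in response to detecting the second dialogue scene, from the second user's identification or category, with no requirement for continued adaptation after the chat starts. The identifiers and category names below are hypothetical illustrations, not claim language.

```python
# Hypothetical sketch of the claim reading: style determination happens
# at scene detection, keyed to the second user's identity or category.
CATEGORY_STYLES = {"friend": "informal", "client": "formal"}

def on_scene_detected(second_user_id: str, second_user_category: str) -> dict:
    # The language style is fixed here, once, for the dialogue session;
    # the claim does not require it to be re-derived as the chat proceeds.
    style = CATEGORY_STYLES.get(second_user_category, "neutral")
    return {"user": second_user_id, "style": style}

session = on_scene_detected("user_42", "friend")
```

On this reading, a per-contact style lookup at configuration time already satisfies the limitation, which is the crux of the dispute with Applicant's "dynamic adaptation" argument.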
The idea that, "for example, if the user uploads 'chat records of contact Bunny,' the chatbot will imitate Bunny's style after training and will reply in Bunny's style regardless of whether the subsequent chat recipient is Swajan ace or any other user (see FIGS. 3 and 4)," is not present in the claims. Further, since the chat style of each individual user can be invoked, it can be seen that if the chat is with other users of WhatsApp it will be in their style.

Applicant argues: "II. Feature (2): For different users dialoguing with the first user, the dialogue robot is configured to use different language styles to send messages in the dialogue." However, Saradhi teaches that different contacts have their own language style (under Purpose of the Project, Section 1.1).

Applicant argues: "The Examiner submits that this feature is disclosed by both Abramson and Saradhi. However, the language styles of the chatbots in both references are fixed and unchangeable, and they cannot switch styles for different users: (A) Abramson does not disclose this feature because its language style is fixed and irrelevant to the chat recipient, and the Examiner's citation of Para. 0053 is a misinterpretation. 1. Abramson's chatbot lacks the ability to dynamically adjust the language style: As discussed above, Abramson's chatbot is trained based on the Personality Index of a specific entity. Once the imitated object is determined, the chatbot's language style is fixed (see Pars. [0015] and [0034]). For example, a chatbot imitating 'historical figure Abraham Lincoln' will consistently use Lincoln's language style when conversing with any user and cannot adopt different styles for different users. 2. Par. [0053] of Abramson cited by the Examiner merely reiterates the solution of 'using the personality index to train a chat bot to interact conversationally using the personality of the specific entity.' It does not disclose any mechanism for 'adjusting the language style for different chat recipients.' In fact, Par. [0053] further confirms that the chatbot's style is determined by the specific entity and is unrelated to the identity or category of the chat recipient." However, based on the user, the style can be adjusted, since the user can choose among many different chat styles (indexes).

Applicant further argues: "(B) Saradhi does not disclose this feature because its chatbot only has a single imitated style and cannot adapt to different chat recipients. 1. The style of Saradhi's chatbot is fixed and unique: Saradhi's chatbot is trained using chat records of a single target contact and learns only one language style, the style of that contact. As shown in FIG. 3 of Saradhi (Telegram bot user interface), the user needs only to specify one target contact (e.g., Bunny). The chatbot fixedly imitates that contact's style, with no operation or mechanism for 'configuring multiple styles.' 2. No logic for switching styles for different users: Saradhi's operating process explicitly states that, after activation, the chatbot monitors only the chat window of the 'specified contact' (see Section 1.2.5). In the example of FIGS. 3 and 4, the chatbot imitates Bunny's style and outputs Bunny's style even when the chat recipient is Swajan ace. Thus, Saradhi cannot achieve 'different styles for different users,' nor can it modify the style based on the identity or category of the chat recipient." However, Saradhi clearly teaches otherwise (Fig. 3: the user uploads a particular scene; the scene is a target user, Section 1.2.3).

Applicant concludes: "The language styles of the chatbots in Abramson and Saradhi are determined solely by the 'imitated specific entity/target contact.' They are entirely unrelated to the second user's identity or category, and they do not depend on the dialogue scenario. Once determined, the style remains fixed and unchangeable, and the chatbots cannot switch styles for different users. Accordingly, neither reference discloses the above distinguishing technical features of claim 1." However, the claim does not require the style to change. The claim simply requires that the style be based on the identity or the category of the user, and Abramson and Saradhi teach this concept as explained above. In summary, Applicant is trying to argue that the claim somehow requires the language style to be dynamic and changeable; however, that is not in the claim. A dialogue style based on the contact reads on a style based on the recipient, and it is also dynamic because it changes based on the recipient. Additionally, it is known that WhatsApp can have multiple people in the same conversation, and if the style is based on who the contacts are, the chatbot will converse in the particular style of the recipient.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5-6, 8-10, 12-15, 17-18 and 20 are rejected under 35 U.S.C.
103 as being unpatentable over Abramson (US Pub. 20180293483) in view of Vijaya ("Human Mimic Chatbot") and further in view of Agarwal (US Pub. 20240412001).

Regarding claim 1, Abramson teaches a configuration method of a dialogue robot (creating a bot, Para 0014, Fig. 3), comprising: creating a dialogue robot for a first user (creating a chatbot of a specific person, Para 0030; also Fig. 6), wherein the dialogue robot is for assisting the first user in a dialogue ("the trained chat bot/LU model may be additionally or alternatively operable to provide additional functions, such as replying to emails and social media posts, answering voice calls and providing voicemails, serving as a personal digital assistant, storing reminders or messages, etc.," Para 0034); displaying a configuration interface of the dialogue robot of the first user (personality index, in the form of questions to the user, Para 0033); and sending a configuration instruction to the dialogue robot through the configuration interface (training a bot, Fig. 3, Fig. 6), wherein the configuration instruction comprises a dialogue strategy for a specified dialogue scene (configuration is based on the personality index, Para 0032), the specified dialogue scene comprises a second dialogue scene, the second dialogue scene comprises at least one of the second user belonging to a specified category or the second user being a specified user ("wherein the personality index comprises personality information for the specific entity; and using the personality index to train a chat bot to interact conversationally using the personality of the specific entity. In some examples, the specific entity corresponds to at least one of a friend, a relative, an acquaintance, a celebrity, a fictional character and a historical figure," Para 0053; also Para 0015-0016), and the dialogue strategy for the second dialogue scene comprises: determining, in response to detecting the second dialogue scene, a language style based on an identification or a category of the second user having the dialogue with the first user, and participating in the dialogue based on the language style, wherein for different users dialoguing with the first user (responding to the second user based on the index, which includes the second user's style, Para 0053), the dialogue robot is configured to use different language styles to send the messages in the dialogue (personality index based on the specific entity, Para 0053).

Abramson does not explicitly teach participating in the dialogue between the first user and the second user and sending messages to the second user using the language style.

Vijaya, in the same field of endeavor, teaches participating in the dialogue with the second user and sending messages to the second user using the language style ("The Microsoft Dialogue GPT model and transfer learning techniques are used to train a conversational chatbot that mimics the communication style of the target contact," under 1.1 Project Implementation; "We have trained a conversational chatbot model utilising transfer learning strategies and the Microsoft Dialogue GPT model. This model is refined using the retrieved conversational data, allowing it to pick up on and imitate the contact's particular conversational style," under Purpose of the Project). It would have been obvious, having the teachings of Abramson, to further include the concept of Vijaya before the effective filing date to have a relatable chatbot, improving productivity and tailoring responses (Abstract, Vijaya).

Abramson modified by Vijaya does not explicitly teach wherein the dialogue robot is for assisting the first user in a dialogue with a second user, and the dialogue strategy for the second dialogue scene comprises: determining, in response to detecting the second dialogue scene, a language style based on an identification or a category of the second user having the dialogue with the first user, and participating in the dialogue between the first user and the second user based on the language style, wherein for different users dialoguing with the first user, the dialogue robot is configured to use different language styles to participate in the dialogue.

However, Agarwal teaches wherein the dialogue robot is for assisting the first user in a dialogue with a second user (Morgan is invoked to assist user 1, Fig. 4A, Para 0005, 0098-0099) and the dialogue strategy for the second dialogue scene comprises: determining, in response to detecting the second dialogue scene, a language style based on an identification or a category of the second user having the dialogue with the first user, and participating in the dialogue between the first user and the second user (the bot can be individualized or standard, Para 0005, 0051, Agarwal; participating based on a given context/topic, Para 0078-0079, 0082). It would have been obvious, having the concept of Abramson and Vijaya, to further include the teachings of Agarwal before the effective filing date so as to reduce workload by acting in a mediator or counselor capacity (Para 0020-0021, Agarwal).

Regarding claim 2, Abramson modified by
Agarwal as above in claim 1 teaches wherein the dialogue scene further comprises at least one of a dialogue between the first user and the second user comprising a specified dialogue topic, or the second user belonging to a specified category (dialogue, e.g., about the league, Fig. 4a-4b, Agarwal), and the dialogue strategy comprises at least one of a strategy of whether the dialogue robot participates in the dialogue between the first user and the second user or a language style of the dialogue robot (when to participate, Para 0051, Agarwal; a version of oneself (style etc.), Para 0014-0015, Abramson).

Regarding claim 3, Agarwal as above in claim 2 teaches wherein a first dialogue scene comprises at least one of the dialogue between the first user and the second user comprising the specified dialogue topic, the second user belonging to the specified category (a particular topic based on inferred conversation, arguments, etc., Para 0051), the second user being the specified user, or the dialogue state of the first user belonging to the specified state, and the dialogue strategy for the specified dialogue scene comprises: participating in the dialogue between the first user and the second user in response to detecting the first dialogue scene (Fig. 4a-4b).

Regarding claim 5, Abramson modified by Agarwal as above in claim 2 teaches wherein a third dialogue scene comprises a topic of the dialogue belonging to a specified category, and the dialogue strategy for the specified dialogue scene comprises: determining, in response to detecting the third dialogue scene, a language style based on the topic of the dialogue between the first user and the second user, and participating in the dialogue based on the language style (the bot responds based on the topic of discussion, and the bot can be standard or adversarial, Para 0051, 0005, Agarwal).

Regarding claim 6, Abramson as above in claim 1 teaches wherein the configuration interface comprises a first dialogue interface of the first user and the dialogue robot, and the sending the configuration instruction to the dialogue robot through the configuration interface comprises: sending a dialogue, which is sent from the first user, as the configuration instruction to the dialogue robot in the first dialogue interface (training a bot, Para 0053, Fig. 3, Fig. 6; wherein the configuration can be accessing media, etc., and also by form filling, in the form of questions to the user).

Regarding claim 8, Agarwal as above in claim 1 teaches wherein the sending the configuration instruction to the dialogue robot through the configuration interface comprises: determining a historical dialogue record selected by the first user through the configuration interface, wherein the historical dialogue record comprises dialogue(s) between the first user and the second user (historic conversation between users, Para 0046, Agarwal); and sending the configuration instruction comprising the historical dialogue record and a learning instruction to the dialogue robot, wherein the learning instruction is for instructing the dialogue robot to extract a dialogue strategy of the first user in a dialogue scene of the historical dialogue record (learning based on historic conversation, Para 0046). Abramson modified by Agarwal does not explicitly teach wherein the historical dialogue record comprises dialogue(s) between the first user and the second user and is authorized by the first user. In the same field of endeavor, Vijaya teaches wherein the historical dialogue record comprises dialogue(s) between the first user and the second user and is authorized by the first user (Figs. 3-4, historical chat data; wherein the user uploads the chat history of themselves and the friend (target contact)). It would have been obvious, having the teachings of Abramson and Agarwal, to further include the concept of Vijaya before the effective filing date so that the bot can learn the specific style (under Purpose of the Project).

Regarding claim 9, Abramson as above in claim 1 teaches wherein the creating the dialogue robot for the first user comprises: generating an image of the dialogue robot based on style description information input by the first user and an image from the first user authorized by the first user (image and style of the user for the bot, Para 0015, 0016, 0027); and creating the dialogue robot for the first user based on the image of the dialogue robot (image applied to the bot, Para 0016, 0027).

Regarding claim 10, Abramson as above in claim 1 teaches wherein the creating the dialogue robot for the first user comprises: generating voice information of the dialogue robot based on the voice of the first user authorized by the first user; and creating the dialogue robot for the first user based on the voice information of the dialogue robot (voice information for the bot, Para 0015-0016).

Regarding claim 12, Agarwal as above in claim 1 teaches wherein the assisting the first user in the dialogue with the second user comprises: taking the place of the first user to have a dialogue with the second user; or participating in the dialogue between the first user and the second user (Fig. 4a-4b).

Regarding claim 13, arguments analogous to claim 1 are applicable. In addition, Abramson teaches an electronic device comprising: a memory; and a processor coupled to the memory, the processor being configured to, based on instructions stored in the memory, perform a configuration method as in claim 1 (Fig. 6).

Regarding claim 14, arguments analogous to claim 2 are applicable.
Regarding claim 15, arguments analogous to claim 3 are applicable.
Regarding claim 17, arguments analogous to claim 5 are applicable.
Regarding claim 18, arguments analogous to claim 6 are applicable.
Regarding claim 20, arguments analogous to claim 1 are applicable.
In addition, Abramson teaches a non-transitory computer-readable storage medium having stored thereon a computer program (Para 0054).

2nd Rejection

Claims 1-3, 5-6, 8, 12-15, 17-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Vijaya ("Human Mimic Chatbot") in view of Agarwal (US Pub. 20240412001).

Regarding claim 1, Vijaya teaches a configuration method of a dialogue robot, comprising: creating a dialogue robot for a first user, wherein the dialogue robot is for assisting the first user in a dialogue with a second user (create a Telegram bot, under Introduction); displaying a configuration interface of the dialogue robot of the first user (e.g., installation, under 1.2 User Interaction); and sending a configuration instruction to the dialogue robot through the configuration interface, wherein the configuration instruction comprises a dialogue strategy for a specified dialogue scene, the specified dialogue scene comprises a second dialogue scene, the second dialogue scene comprises at least one of the second user belonging to a specified category or the second user being a specified user (Fig. 3: the user uploads a particular scene; the scene is a target user, Section 1.2.3), and the dialogue strategy for the second dialogue scene comprises: determining, in response to detecting the second dialogue scene, a language style based on an identification or a category of the second user having the dialogue with the first user (mimicking the style of the target contact, Section 1.1), participating in the dialogue with the second user, and sending messages to the second user using the language style, wherein for different users dialoguing with the first user, the dialogue robot is configured to use different language styles to send the messages in the dialogue (different styles based on target contacts, Sections 1.1, 1.4, 1.5).

Vijaya does not explicitly teach participating in the dialogue between the first user and the second user, and sending messages to the second user using the language style. However, Agarwal teaches participating in the dialogue between the first user and the second user, and sending messages to the second user using the language style (Morgan is invoked to assist user 1, Fig. 4A, Para 0005, 0098-0099; the bot can be individualized or standard, Para 0005, 0051, Agarwal; participating based on a given context/topic, Para 0078-0079, 0082). It would have been obvious, having the concept of Vijaya, to further include the teachings of Agarwal before the effective filing date so as to reduce workload by acting in a mediator or counselor capacity (Para 0020-0021, Agarwal).

Regarding claim 2, Vijaya modified by Agarwal as above in claim 1 teaches wherein the dialogue scene further comprises at least one of a dialogue between the first user and the second user comprising a specified dialogue topic, or the second user belonging to a specified category (dialogue, e.g., about the league, Fig. 4a-4b, Agarwal), and the dialogue strategy comprises at least one of a strategy of whether the dialogue robot participates in the dialogue between the first user and the second user or a language style of the dialogue robot (when to participate, Para 0051, Agarwal; style of the target, Section 1.1, Abstract, Vijaya).

Regarding claim 3, Agarwal as above in claim 2 teaches wherein a first dialogue scene comprises at least one of the dialogue between the first user and the second user comprising the specified dialogue topic, the second user belonging to the specified category (a particular topic based on inferred conversation, arguments, etc., Para 0051), the second user being the specified user, or the dialogue state of the first user belonging to the specified state, and the dialogue strategy for the specified dialogue scene comprises: participating in the dialogue between the first user and the second user in response to detecting the first dialogue scene (Fig. 4a-4b).

Regarding claim 5, Vijaya modified by Agarwal as above in claim 2 teaches wherein a third dialogue scene comprises a topic of the dialogue belonging to a specified category, and the dialogue strategy for the specified dialogue scene comprises: determining, in response to detecting the third dialogue scene, a language style based on the topic of the dialogue between the first user and the second user, and participating in the dialogue based on the language style (the bot responds based on the topic of discussion, and the bot can be standard or adversarial, Para 0051, 0005, Agarwal; style of the target user, Sections 1.1, 1.4, 1.5, Abstract).

Regarding claim 6, Vijaya as above in claim 1 teaches wherein the configuration interface comprises a first dialogue interface of the first user and the dialogue robot, and the sending the configuration instruction to the dialogue robot through the configuration interface comprises: sending a dialogue, which is sent from the first user, as the configuration instruction to the dialogue robot in the first dialogue interface (training the bot, Figs. 3-4).

Regarding claim 8, Vijaya as above in claim 1 teaches wherein the sending the configuration instruction to the dialogue robot through the configuration interface comprises: determining a historical dialogue record selected by the first user through the configuration interface, wherein the historical dialogue record comprises dialogue(s) between the first user and the second user and is authorized by the first user; and sending the configuration instruction comprising the historical dialogue record and a learning instruction to the dialogue robot, wherein the learning instruction is for instructing the dialogue robot to extract a dialogue strategy of the first user in a dialogue scene of the historical dialogue record (Figs. 3-4, historical chat data; wherein the user uploads the chat history of themselves and the friend (target contact)).

Regarding claim 12, Agarwal as above in claim 1 teaches wherein the assisting the first user in the dialogue with the second user comprises: taking the place of the first user to have a dialogue with the second user; or participating in the dialogue between the first user and the second user (Fig. 4a-4b).

Regarding claim 13, arguments analogous to claim 1 are applicable. In addition, Vijaya teaches an electronic device comprising: a memory; and a processor coupled to the memory, the processor being configured to, based on instructions stored in the memory, perform a configuration method as in claim 1 (Figs. 1-2).

Regarding claim 14, arguments analogous to claim 2 are applicable.
Regarding claim 15, arguments analogous to claim 3 are applicable.
Regarding claim 17, arguments analogous to claim 5 are applicable.
Regarding claim 18, arguments analogous to claim 6 are applicable.
Regarding claim 20, arguments analogous to claim 1 are applicable. In addition, Vijaya teaches a non-transitory computer-readable storage medium having stored thereon a computer program (Figs. 1-2).

Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Vijaya ("Human Mimic Chatbot") in view of Agarwal (US Pub. 20240412001) and further in view of Abramson (US Pub. 20180293483).

Regarding claim 9, Vijaya modified by Agarwal as above in claim 1 does not teach wherein the creating the dialogue robot for the first user comprises: generating an image of the dialogue robot based on style description information input by the first user and an image from the first user authorized by the first user; and creating the dialogue robot for the first user based on the image of the dialogue robot.
Abramson teaches wherein the creating the dialogue robot for the first user comprises: generating an image of the dialogue robot based on style description information input by the first user and an image from the first user authorized by the first user (image and style of the user for the bot, Para. 0015, 0016, 0027); and creating the dialogue robot for the first user based on the image of the dialogue robot (image applied to the bot, Para. 0016, 0027). It would have been obvious, having the teachings of Vijaya and Agarwal, to further include the concept of Abramson before the effective filing date to create a more realistic, human-like chat experience (Para. 0016, Abramson).

Regarding claim 10, Vijaya modified by Agarwal, as applied above in claim 1, does not teach wherein the creating the dialogue robot for the first user comprises: generating voice information of the dialogue robot based on voice of the first user authorized by the first user; and creating the dialogue robot for the first user based on the voice information of the dialogue robot. Abramson teaches wherein the creating the dialogue robot for the first user comprises: generating voice information of the dialogue robot based on voice of the first user authorized by the first user; and creating the dialogue robot for the first user based on the voice information of the dialogue robot (voice information for the bot, Para. 0015-0016). It would have been obvious, having the teachings of Vijaya and Agarwal, to further include the concept of Abramson before the effective filing date to create a more realistic, human-like chat experience (Para. 0016, Abramson).

Claims 7 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Vijaya (Human Mimic Chatbot) in view of Agarwal (US Pub: 20240412001) and further in view of Balasubramanian (US Pub: 20180107461).

Regarding claim 7, Vijaya, as applied above in claim 1, teaches wherein the configuration interface comprises at least one of an input control or a selection control, and the sending the configuration instruction to the dialogue robot through the configuration interface comprises: obtaining, in response to a submission operation on the interface, description information and input information of the control on the interface, wherein the description information comprises at least one of information for guiding input or an example of the dialogue strategy; determining the configuration instruction based on at least one of the description information or the input information; and sending the configuration instruction to the dialogue robot (Figs. 3-4, interface for bot training). Vijaya modified by Agarwal does not explicitly teach wherein the configuration interface comprises a form interface. In the same field of endeavor, Balasubramanian teaches wherein the configuration interface comprises a form interface ("FIGS. 13-18 depict example GUI screens of a workflow development system that can be used to create a bot in accordance with an embodiment. These screens may be generated, for example, by UI generator 310 of workflow designer 306 as previously described in reference to workflow development system 300 of FIG. 3. The bot being developed in this example can guide a user through the steps of filling out a form for, say, a sales contact. The bot can go through several rounds of request/response and finally create an item in a Microsoft® Sharepoint® list or send an email to someone in the company," Para. 0112). It would have been obvious, having the teachings of Vijaya and Agarwal, to further include the concept of Balasubramanian before the effective filing date to make it easy for the user to build a bot without the need for the user to have specialized computer programming skills to develop the bot (Para. 0130, Balasubramanian).

Regarding claim 19, arguments analogous to claim 7 are applicable.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Abramson in view of Vijaya (Human Mimic Chatbot), further in view of Agarwal (US Pub: 20240412001), and further in view of BMS (US Pub: 20210390144).

Regarding claim 11, Abramson modified by Vijaya and Agarwal, as applied above in claim 1, does not explicitly teach displaying a second dialogue interface between the dialogue robot and the second user; receiving a feedback from the first user on a dialogue sent by the dialogue robot in the second dialogue interface, wherein the feedback comprises affirmation, negation or modification; and adjusting the dialogue strategy of the dialogue robot based on the feedback. However, BMS teaches displaying a second dialogue interface between the dialogue robot and the second user (e.g., response from the SME, Para. 0085-0087); receiving a feedback from the first user on a dialogue sent by the dialogue robot in the second dialogue interface (receive feedback from the participant, Para. 0085-0087), wherein the feedback comprises affirmation, negation or modification; and adjusting the dialogue strategy of the dialogue robot based on the feedback (bot learns based on the feedback, Figs. 2b, 3). It would have been obvious, having the teachings of Vijaya and Agarwal, to further include the teachings of BMS before the effective filing date to improve the user experience (Para. 0019, BMS).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 20240195758 – training based on group conversation
US 20190121842 – modifying sentences based on whom the user is communicating with (Para. 0064)
Patel (US Pub: 20210173718)

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Richa Sonifrank, whose telephone number is (571) 272-5357. The examiner can normally be reached M-T, 7 AM - 5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Phan Hai, can be reached at (571) 272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Richa Sonifrank/
Primary Examiner, Art Unit 2654

Prosecution Timeline

Mar 13, 2025 - Application Filed
May 27, 2025 - Non-Final Rejection, §103
Aug 22, 2025 - Response Filed
Sep 02, 2025 - Final Rejection, §103
Nov 07, 2025 - Request for Continued Examination
Nov 15, 2025 - Response after Non-Final Action
Dec 09, 2025 - Non-Final Rejection, §103
Mar 09, 2026 - Response Filed
Mar 16, 2026 - Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602552: Machine-Learning-Based OKR Generation
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12603085: ENTITY LEVEL DATA AUGMENTATION IN CHATBOTS FOR ROBUST NAMED ENTITY RECOGNITION
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12585883: COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12585877: GROUPING AND LINKING FACTS FROM TEXT TO REMOVE AMBIGUITY USING KNOWLEDGE GRAPHS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579988: METHOD AND APPARATUS FOR CONTROLLING AUDIO FRAME LOSS CONCEALMENT
Granted Mar 17, 2026 (2y 5m to grant)
Based on the 5 most recent grants by this examiner.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 66%
With Interview: 91% (+24.9%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 379 resolved cases by this examiner. Grant probability derived from career allow rate.
