Prosecution Insights
Last updated: April 19, 2026
Application No. 18/545,153

Tailored Synthetic Personas with Parameterized Behaviors

Status: Final Rejection (§103)
Filed: Dec 19, 2023
Examiner: ZHU, RICHARD Z
Art Unit: 2654
Tech Center: 2600 — Communications
Assignee: CenturyLink Intellectual Property LLC
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 69% (498 granted / 718 resolved) — above average, +7.4% vs TC avg
Interview Lift: +15.4% (resolved cases with interview) — strong
Avg Prosecution: 3y 2m; 32 currently pending
Total Applications: 750 (career history, across all art units)
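The headline examiner figures can be cross-checked from the raw counts above (a minimal sketch using only the numbers shown; the with/without-interview cohort counts behind the +15.4% lift are not published here, so only the career rate and the implied TC baseline are derived):

```python
# Cross-check of the examiner dashboard figures from the raw career counts.
granted, resolved = 498, 718      # career totals shown above

allow_rate = granted / resolved   # career allowance rate (~0.694, shown as 69%)
print(f"career allow rate: {allow_rate:.1%}")

# The dashboard reports the examiner at +7.4 percentage points vs the
# Tech Center average, which implies the TC baseline below.
implied_tc_avg = allow_rate - 0.074
print(f"implied TC average: {implied_tc_avg:.1%}")
```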

Statute-Specific Performance

§101: 16.0% (-24.0% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 4.2% (-35.8% vs TC avg)

TC averages are estimates; based on career data from 718 resolved cases.
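As a consistency check on the statute figures (a sketch; the percentages and deltas are taken verbatim from the rows above), each rate minus its delta should recover the Tech Center baseline:

```python
# Statute-specific rates and deltas vs the TC average, from the rows above.
rows = {
    "101": (16.0, -24.0),
    "103": (54.5, +14.5),
    "102": (19.7, -20.3),
    "112": (4.2, -35.8),
}

# rate - delta recovers the implied TC baseline for each statute.
implied_baseline = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(implied_baseline)  # each row happens to imply the same 40.0% baseline
```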

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Acknowledgement

Acknowledgement is made of applicant's amendment made on 2/16/2026. Applicant's submission has been entered and made of record.

Status of the Claims

Claims 1-5 and 7-19 are pending.

Response to Applicant's Arguments

Claims 1-5 and 7-19 are patent eligible for the following reason:

Amended Claim 17 recites a system comprising: a computing system, comprising:
at least one first processor; and
a first non-transitory computer readable medium communicatively coupled to the at least one first processor, the first non-transitory computer readable medium having stored thereon computer software comprising a first set of instructions that, when executed by the at least one first processor, causes the computing system to:
cause at least one artificial intelligence ("AI")/machine learning ("ML")-driven persona to interact with a user via a user interface ("UI"), the interaction including a conversation between the at least one AI/ML-driven persona and the user;
analyze, using one of at least one AI/ML model, the conversation to identify one or more goals of the conversation related to at least one of one or more products or one or more services provided by a provider;
generate, using one of the at least one AI/ML model, one or more first conversational threads configured to achieve at least one goal of the conversation among the one or more goals of the conversation related to the at least one of the one or more products or the one or more services;
cause the at least one AI/ML-driven persona to continue the conversation with the user using the one or more first conversational threads to work toward achieving the at least one goal of the conversation;
analyze, using one of the at least one AI/ML model, the interaction to identify one or more observable characteristics of the user, the one or more observable characteristics including at least one of one or more speech patterns of the user, a language used by the user, whether the user has an accent, what accent the user has, whether English is a second language for the user, one or more non-verbal cues of the user, a demeanor of the user, a sentiment of the user, or an emotional state of the user;
access and analyze, using one of the at least one AI/ML model, stored information associated with the user to identify one or more conversation points, the stored information including at least one of information regarding a market segment within which the user is classified or societal information for a societal segment to which the user belongs; and
cause the at least one AI/ML-driven persona to adapt by modifying the interaction with the user, based at least in part on at least the identified one or more conversation points, to enhance or improve the interaction with the user.

Claim 1 recites a corresponding method.

Claim 18 recites a method, comprising:
causing, by a computing system, at least one artificial intelligence ("AI")/machine learning ("ML")-driven persona to interact with a user via a user interface ("UI"), the interaction including a conversation between the at least one AI/ML-driven persona and the user;
analyzing, by the computing system and using one of at least one AI/ML model, the conversation to identify one or more goals of the conversation;
determining, by the computing system and using one of the at least one AI/ML model, a structure of the interaction with the user, based at least in part on the identified one or more goals of the conversation;
generating, by the computing system and using one of the at least one AI/ML model, one or more first conversational threads configured to achieve at least one goal of the conversation among the one or more goals of the conversation within the determined structure of the interaction with the user;
causing, by the computing system, the at least one AI/ML-driven persona to continue the conversation with the user using the one or more first conversational threads to work toward achieving the at least one goal of the conversation;
mapping, by the computing system and using one of the at least one AI/ML model, a flow of the interaction with the user;
based on a determination that the flow of the interaction is moving away from achieving the at least one goal of the conversation, generating, by the computing system and using one of the at least one AI/ML model, one or more second conversational threads configured to steer the interaction back toward achieving the at least one goal of the conversation;
causing, by the computing system, the at least one AI/ML-driven persona to continue the conversation with the user using the one or more second conversational threads to steer the interaction back toward achieving the at least one goal of the conversation;
analyzing, by the computing system and using one of the at least one AI/ML model, the interaction to identify one or more observable characteristics of the user, the one or more observable characteristics including at least one of one or more speech patterns of the user, a language used by the user, whether the user has an accent, what accent the user has, whether English is a second language for the user, one or more non-verbal cues of the user, a demeanor of the user, a sentiment of the user, or an emotional state of the user;
accessing and analyzing, by the computing system and using one of the at least one AI/ML model, stored information associated with the user to identify one or more conversation points, the stored information including at least one of account information associated with the user, contact information associated with the user, previous interactions with the user, historical data associated with the user, demographic information about the user, personal information about the user, user-volunteered information regarding general interests of the user, information regarding a market segment within which the user is classified, or societal information for a societal segment to which the user belongs; and
causing, by the computing system, the at least one AI/ML-driven persona to adapt by modifying the interaction with the user, based at least in part on at least the identified one or more conversation points, to enhance or improve the interaction with the user.

Under Prong (2) of Step 2A, the goal is to determine whether the claim is directed to the recited exception by evaluating whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. See MPEP 2106.04(II)(A). In particular, evaluating integration into a practical application requires identifying whether there are any additional elements recited in the claim beyond the judicial exception and evaluating those additional elements, individually and in combination, to determine whether they integrate the exception into a practical application, using one or more of the considerations laid out by the Supreme Court and the Federal Circuit ("CAFC"). See MPEP 2106.04(d). According to the Supreme Court, a patent may issue for the means or method of producing a certain result, or effect, and not for the result or effect produced. Diamond v. Diehr, 450 U.S. 175, 182 n.7 (1981). Therefore, the focus is on whether the claims "focus on a specific means or method that improves the relevant technology or are instead directed to a result or effect that itself is the abstract idea and merely invoke generic processes and machinery". Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1336 (Fed. Cir. 2016). For example, in Enfish, the CAFC found it relevant to ask whether claims were directed to an improvement to computer functionality versus being directed to an abstract idea. Enfish, 822 F.3d at 1335.
To that end, the CAFC found that the claims were specifically directed to a self-referential table for a computer database. Id. at 1337. In particular, the claim language required a four-step algorithm specifically directed to a self-referential table for a computer database that improved upon prior art information search and retrieval systems by employing a flexible, self-referential table to store data. Id. at 1336-37. Therefore, the focus of the claims was on a specific asserted improvement in computer capabilities (i.e., the self-referential table for a computer database), not on economic or other tasks for which a computer was used in its ordinary capacity. Id. at 1336. See also MPEP 2106.04(d)(I) ("an improvement in the functioning of a computer or an improvement to other technology or technical field, as discussed in MPEP 2106.04(d)(1) and 2106.05(a)").

In another example, in McRO, the CAFC noted that prior art methods of generating morph weight sets with values between "0" and "1" for computer animation of facial expressions were manually determined. McRO, Inc. v. Bandai Namco Games America, Inc., 837 F.3d 1299, 1304-5 (Fed. Cir. 2016). The claimed improvement in McRO allows computers to produce "accurate and realistic lip synchronization and facial expressions in animated characters" that previously could only be produced by human animators, through the automated use of rules, rather than artists, to set the morph weights and transitions between phonemes. Id. at 1313. Specifically, the claims are directed to the incorporation of the claimed rules, not the use of the computer, which improved the existing technological process by allowing automation of further tasks in a way that goes beyond merely organizing existing information into a new form. Id. at 1314-15.
In other words, the claimed process used a combined order of specific rules that rendered information into a specific format that was then used and applied to create a sequence of synchronized, animated characters, which prevents pre-emption of all processes for achieving automated lip-synchronization of 3-D characters. Id. at 1315. Therefore, the CAFC held that the ordered combination of claimed steps, using unconventional rules that relate sub-sequences of phonemes, timing, and morph weight sets, is patent eligible. Id. at 1302-3.

In the instant application, claims 1, 17, and 18 recite an AI/ML-driven persona that interacts with a user via a user interface, the interaction including a conversation between the at least one AI/ML-driven persona and the user. The AI/ML-driven persona uses at least one AI/ML model to generate conversational threads to continue the conversation with the user, access and analyze stored information associated with the user to identify one or more conversation points, and modify the interaction with the user based on the one or more conversation points. Therefore, the claims specifically assert the application of AI/ML models to provide a user interface for interacting with the user. Much like the self-referential table for a computer database in Enfish being directed to a specifically asserted computer functionality, the combination of limitations set forth in claims 1, 17, and 18 is directed toward a specifically asserted AI/ML-model-driven user interface for interacting with the user by using an AI/ML model to generate conversational threads, identify conversation points, and modify the interaction with the user based on the one or more conversation points.
In other words, much like the incorporation of claimed rules to allow automation of computer animation of facial expressions in McRO, the combination of limitations set forth in claims 1, 17, and 18 incorporates AI/ML models to automate the generation of conversational threads, the identification of conversation points, and the modification of the interaction with the user based on the conversation points. Therefore, claims 1-5 and 7-19 are patent eligible.

In response to "the Applicant respectfully submits that the additional references fail to remedy the deficiencies of Sapugay and Gong as each fail to disclose or suggest 'accessing and analyzing, by the computing system and using one of the at least one AI/ML model, stored information associated with the user to identify one or more conversation points, the stored information including at least one information regarding a market segment within which the user is classified or societal information for a societal segment to which the user belongs' and 'causing, by the computing system, the at least one AI/ML-driven persona to adapt by modifying the interaction with the user, based at least in part on at least the identified one or more conversation points, to enhance or improve the interaction with the user,' as recited in the present claims. Thus, each of the cited references, whether taken alone or in combination, fail to disclose or suggest each of the recitations of the present claims": in view of such amendment to claims 1, 17, and 18, the rejections set forth in the non-final office action have been withdrawn. Upon further search and consideration, please see the details of a new combination of references set forth below.

Claim Rejections - 35 USC § 103

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
103 that form the basis for the rejections under this section made in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Sapugay et al. (US 2019/0294676 A1) in view of Zamora Duran et al. (US 10621978 B2).

Regarding Claims 1 and 17, Sapugay discloses a system (Fig. 1), comprising: a computing system (Figs. 1 and 4A, ¶41 and ¶53, cloud based platform 20 comprising data center 22), comprising: at least one first processor (¶51 and ¶108, processor 82 associated with client instance 42 within data center 22); and a first non-transitory computer readable medium communicatively coupled to the at least one first processor, the first non-transitory computer readable medium having stored thereon computer software comprising a first set of instructions that, when executed by the at least one first processor (¶31, "application" and "engine" refer to computer program software instructions executable by one or more processors; ¶51 and ¶108, processor 82 performing instructions stored in memory 86), causes the computing system to: cause at least one artificial intelligence ("AI")/machine learning ("ML")-driven persona to interact with a user via a user interface ("UI"), the interaction including a conversation between the at least one AI/ML-driven persona and the user (¶54, Fig.
4A, a reasoning agent / behavior engine RA/BE 102 hosting virtual agents / personas that interact with the user of client device 14D via natural language user requests 122 (“user utterances”) and agent responses 124 (“agent utterances”); per ¶39, agent automation framework applying a combination of rule based and machine learning (¶34, neural network implementation) based cognitive linguistic techniques in extracting meaning from natural language utterances); analyze, using one of at least one AI/ML model, the conversation to identify one or more goals of the conversation related to at least one of one or more products or one or more services provided by a provider (¶60, RA/BE 102 provides utterance 122 to NLU framework 104 to process the utterance 122 to derive intents / entities within the utterances to perform one or more particular predefined actions; e.g., ¶65, RA/BE 102 processes the derived intents / entities to determine suitable actions such as purchasing an item (i.e., product) or closing an account (i.e., service); i.e., the intent / goal was to purchase a product or to request a service); generate, using one of the at least one AI/ML model, one or more first conversational threads configured to achieve at least one goal of the conversation among the one or more goals of the conversation related to the at least one of the one or more products or the one or more services (¶117, determine whether a user message is associated with context information of another episode (e.g., yesterday’s context) and perform suitable actions in response to user messages by retrieving and overlaying the context information of the current episode (e.g., ¶65 and ¶125, intents / entities of the new user message such as purchasing an item or closing an account) with context of the referenced episode; ¶124, when the persona of RA/BE 102 receives a new message from the user, determine whether the new message should be treated as a continuation of a prior conversation episode or 
the beginning of a new episode); and cause the at least one AI/ML-driven persona to continue the conversation with the user using the one or more first conversational threads to work toward achieving the at least one goal of the conversation (¶125, when the new message is a continuation of a prior episode, the RA/BE 102 resumes the conversation using the context of the prior episode by overlaying the episode context information of the prior episode over current context information in order to use context information of the prior episode when responding to the new user message). Sapugay does not disclose analyze, using one of the at least one AI/ML model, the interaction to identify one or more observable characteristics of the user. Zamora Duran discloses a system for dynamically generating computer dialog (Col 6, Rows 40-47) identifying goals of the user for engaging in a dialog interaction with the system and context (Col 6, Rows 47-55, identifying perceived topics of interest to the user, identifying goals of the user for engaging in the dialog, and contextual parameters). 
The system analyzes, using at least one AI/ML model (Col 7, Rows 45-58, dialog module 103 comprises a natural language processor NLP 107 utilizing deep learning based natural language models for understanding human language as the input dialog is entered into the system), the dialog interaction to identify one or more observable characteristics of the user, the one or more observable characteristics including at least one of one or more speech patterns of the user, a language used by the user (Col 7, Rows 55-56, identification of the input language), whether the user has an accent, what accent the user has, whether English is a second language for the user (Col 11, Rows 2-4 in view of Col 7, Rows 55-56, identifying a level of English (i.e., user as a native English speaker) based on the conversational input itself), one or more non-verbal cues of the user (Col 8, Rows 46-49, NLP 107 scans recorded images and video data for contextual clues about the user's body language), a demeanor of the user (Col 8, Rows 46-49, NLP 107 scans recorded images and video data for contextual clues about the user's visual demeanor), a sentiment of the user (Col 8, Rows 4-5, NLP 107 identifies the user's sentiment), or an emotional state of the user (Col 8, Rows 46-49, NLP 107 scans recorded images and video data for contextual clues about the user's emotion); access and analyze, using one of the at least one AI/ML model, stored information associated with the user (Col 9, Rows 36-42, query databases for known or existing knowledge about the user; Col 9, Rows 52-60, the dialog system 101 learns and adapts each time a repeat user interacts with the dialog system to collect and store information about a user based on past conversations, preferences, and interactions between the user and the dialog system; e.g., Col 8, Rows 30-37, NLP 107 / deep learning natural language models identify natural language input for context such as culture and age of the user, language, and other conversational parameters) to identify one or more conversation points (Col 11, Rows 45-51 and Table 3, based on previous interactions with the dialog system 101, dialog creation module 110 selects an appropriate dictionary comprising words at an appropriate level for the user to understand based on the user's current skill set), the stored information including at least one of information regarding a market segment within which the user is classified or societal information for a societal segment to which the user belongs (Col 9, Rows 49-50, query information regarding the user's age (i.e., market segment), location, culture (i.e., societal information); see Table 2, user's culture, age, location, education, profession); and cause the at least one AI/ML-driven persona to adapt by modifying the interaction with the user, based at least in part on at least the identified one or more conversation points, to enhance or improve the interaction with the user (Col 10, Rows 55-61, based on the language information discerned from the conversational input and known information about the user, select a corresponding dictionary having an appropriate language and level of sophistication for carrying on a conversation with the user; Col 11, Rows 52-63, dialog creation module 110 selects an appropriate dictionary and an appropriate corpus of information associated with each of the perceived topics of interest identified by the NLP 107 to create a responsive dialog to the user, the corpus being selected as a function of the conversational input of the user and contextual parameters such as emotion, gestures, and body language of the user).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to identify one or more observable characteristics of the user, access and analyze stored information associated with the user to identify one or more conversation points, and adapt the AI/ML-driven persona by modifying the interaction with the user based on the conversation points to retrieve, parse, and integrate human expressions into dialogs in order to sound more human-like and emulate the dynamic ability of human expression found naturally in the interactions between humans during conversation (Zamora Duran, Col 12, Rows 52-57).

Regarding Claim 2, Sapugay discloses wherein the computing system comprises at least one of a server (¶42 and Fig. 1, server 24 / cloud based platform 20), an AI system, a ML system, an AI/ML system (¶34, "machine learning" refers to any suitable statistical form of artificial intelligence capable of being trained using machine learning techniques), a deep learning ("DL") system (¶34, ML techniques implemented using a deep neural network), a user interactive system (¶54, Fig. 4A, a reasoning agent / behavior engine RA/BE 102 hosting virtual agents / personas that interact with the user of client device 14D), a customer interface server (¶43, server instances 24 handle requests from and serve multiple customers), a cloud computing system (¶42 and Fig. 1, server 24 / cloud based platform 20), or a distributed computing system (Fig.
1), wherein the UI comprises one of a voice-only UI (¶54, interact with the user of client device 14D via natural language user requests 122 in a voice interactive system), a telephone communication UI (¶32 and ¶47, virtual agent being a telephone call agent), a video-only UI, a video with voice UI, a chat UI, a software application ("app") UI, a holographic UI, a virtual reality ("VR")-based UI, an augmented reality ("AR")-based UI, a mixed reality ("MR")-based UI, or a web-portal-based UI (¶40, web browser application acting as gateway between client devices 14 and cloud based platform 20; ¶44, servers 24 being web servers).

Regarding Claim 3, Sapugay discloses wherein the one or more goals of the conversation comprise at least one of purchasing the one or more products, ordering the one or more services (¶65, RA/BE 102 processes the derived intents / entities to determine suitable actions such as purchasing an item (i.e., product) or closing an account (i.e., service)), answering one or more questions regarding at least one product among the one or more products, answering one or more questions regarding at least one service among the one or more services, learning how to use at least one product among the one or more products, learning how to use at least one service among the one or more services, troubleshooting at least one issue associated with at least one product among the one or more products, troubleshooting at least one issue associated with at least one service among the one or more services, returning the one or more products, or ending the one or more services.
Regarding Claim 4, Sapugay discloses wherein the one or more products comprise one or more of a telephone, a modem, a router, a customer premises equipment ("CPE"), an Ethernet circuit, a network device, a server, a consumer product (¶33, merchandise entities associated with purchase intents), an electronic device, sporting goods, office equipment, a home appliance, a media recording device, a media player, a user device, clothing, footwear (¶106, "buy this shoe" and "purchase this sneaker"), a vehicle, or a building; wherein the one or more services comprise one or more of electricity utility service, water utility service, trash and recycling pickup service, telephone service (¶32, RA/BE agent being a telephone call agent), cellular phone service, satellite telephone service, digital subscriber line ("DSL") service, Internet service, Ethernet service, optical fiber Internet service, satellite Internet service, streaming media service, downloadable media service, cable television service, or satellite television service.

Regarding Claim 7, Sapugay discloses identifying and verifying, by the computing system, an identity of the user, based at least in part on one or more of caller identification ("ID") information, the account information associated with the user, a voiceprint of the user, two-factor authentication, or information provided by the user (¶43, multi-tenant cloud architecture distinguishes between and segregates data of various customers by assigning a particular identifier for each customer and using the identifier to identify and segregate data from each customer; i.e., when a customer provides data, the cloud architecture can use the customer data to assign a customer identifier in order to segregate data from each customer).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Sapugay et al. (US 2019/0294676 A1) and Zamora Duran et al. (US 10621978 B2) as applied to claim 1, in view of Walters et al. (US 2014/0244712 A1).
Regarding Claim 5, Sapugay does not teach concurrent with the interaction with the user, causing, by the computing system, the at least one AI/ML-driven persona to investigate a status of the at least one of the one or more products or the one or more services.

Walters teaches a virtual assistant computing system interacting with a user, the interaction including a conversation between the virtual assistant and the user (¶82, human users interact with a natural language interaction application / virtual assistant where the user provides an input or request to a natural language interaction engine 403, which interprets the intention of the user request and constructs one or more appropriate responses to the request; ¶84, natural language interaction engine 403 executes appropriate actions needed to react to the request, including an appropriate verbal, textual, visual, or haptic response) related to products or services provided by a provider (¶84, interact with a third party service to execute a transaction for e-commerce systems), concurrent with the interaction with the user, the virtual assistant computing system investigating a status of the at least one of the one or more products or the one or more services (¶106, virtual assistant handles intermittent connectivity and is able to continue processing requests from the user that require a connection when connectivity resumes; ¶157 and ¶163, determine that a task requires information from a device connected to the internet and perform connectivity process 903); determining, by the computing system, whether the computing system is capable of remotely addressing one or more issues with the at least one of the one or more products or the one or more services (¶106, responding to a user request to block off time in her calendar, determine that the virtual assistant is unable to connect to the user's calendar); and based on a determination that the computing system is capable of remotely addressing one or more issues with the at least one of the one
or more products or the one or more services, generating, by the computing system, a conversational message indicating that the at least one AI/ML-driven persona is able to remotely address the one or more issues with the at least one of the one or more products or the one or more services and will proceed to do so (¶106, when connectivity resumes, the virtual assistant completes the request; ¶177, present estimations of when tasks can be completed to the user by uttering a voice signifying such), and initiating, by the computing system, one or more processes to remotely address the one or more issues with the at least one of the one or more products or the one or more services, wherein determining whether the computing system is capable of remotely addressing the one or more issues with the at least one of the one or more products or the one or more services comprises remotely accessing, by the computing system, one or more systems associated with the at least one of the one or more products or the one or more services (¶179, check a status of the current task to determine if it did not complete successfully due to lost connectivity from network interface 923), and remotely running, by the computing system, one or more tests on the one or more systems, the one or more tests including a connectivity and control test (¶165, use signal strength to estimate an amount of time that connectivity may last; e.g., ¶167, connectivity process 903 determines that when an increasing signal strength trend is present, connectivity may last longer).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to investigate a status of the at least one of the one or more products or the one or more services and to determine whether the computing system is capable of remotely addressing one or more issues with the products or services in order to accurately assist in determining a priority for completing a task corresponding to the products or services (Walters, ¶157 and ¶163, determine when and under what conditions tasks may be performed).

Claims 8-10, 13, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Sapugay et al. (US 2019/0294676 A1) and Zamora Duran et al. (US 10621978 B2) as applied to claim 1, in view of Gong (US 2003/0167167 A1).

Regarding Claims 8-10 and 13, Sapugay does not teach the AI/ML-driven persona being based on a cartoon character, generating an avatar for each of the at least one AI/ML-driven persona, or each AI/ML-driven persona having a set personality.

Gong teaches a computing system implementing an AI/ML-driven persona (¶17 and ¶22, intelligent social agent implemented by social intelligence engine 300 of Fig.
3; per ¶33, the engine 300 implements adaptation engine 330 with a machine learning module 332) to interact with users (¶23) comprising analyzing, by the computing system and using one of at least one AI/ML model, interaction to identify one or more observable characteristics of the user (¶23, process user speech to provide a profile of the affective and physiological states of the user); accessing and analyzing, by the computing system and using one of the at least one AI/ML model, stored information associated with the user to identify one or more conversation points (¶30, determine user’s internal context and external context to determine, for example, urgent internal context when user command includes the term “quickly” or “now”; ¶¶34-35, determine a basic profile of the user and compare received information about the user and the context with the basic profile of the user; ¶37, access basic profile of the user from data storage device 150); and causing, by the computing system, the at least one AI/ML-driven persona to adapt by modifying the interaction with the user, based at least in part on at least one of the identified one or more observable characteristics of the user or the identified one or more conversation points, to enhance or improve the interaction with the user (¶¶38-39, dynamic adaptor module 336 receives adjusted basic profile of the user and the dynamic digest about the user from machine learning module 332 to determine the actions and behavior of the intelligent social agent; e.g., ¶40, machine learning module indicates user’s internal context is urgent, adjust the intelligent social agent to have a facial expression that looks serious and stops a non-critical function or closes unnecessary application programs to accomplish a requested urgent action as quickly as possible); wherein the at least one AI/ML-driven persona is among a plurality of AI/ML-driven personas comprising at least one of one or more personas based on a fictional literary 
character, one or more personas based on a non-fictional literary character, one or more personas based on a comic-book character, one or more personas based on a cartoon character (¶20, intelligent social agent as an animated talking head style character such as a cartoon head), one or more personas based on an anime character, one or more personas based on a manga character, one or more personas based on a television character, one or more personas based on a movie character, one or more personas based on a character from an advertisement, one or more personas based on a mascot, one or more personas based on a meme, one or more personas based on an athlete, one or more personas based on a sports personality, one or more personas based on a news personality, one or more personas based on a political personality, one or more personas based on a reality television personality, one or more personas based on a social media influencer, one or more personas based on a living celebrity, one or more personas based on a deceased celebrity, one or more personas based on a historical figure, one or more personas based on a fictionalization of a historical figure, one or more personas based on a character played by an actor or actress, one or more personas based on a bespoke character, or one or more personas simulating average humans in a geographical area within which the user is currently located or was previously residing; wherein the UI is a visual-based UI, wherein the method further comprises: generating, by the computing system, an avatar for each of the at least one AI/ML-driven persona (¶20, implement the intelligent social agent as an animated talking head style character); displaying, by the computing system and within the UI, the avatar for each of the at least one AI/ML-driven persona (Abstract, intelligent social agent is an animated computer interface agent designed to be appealing, affective, adaptive, and appropriate when interacting with the user); and 
animating, by the computing system and within the UI, the avatar in synchronization with the conversation with the user, wherein the interaction further comprises the animation of the avatar (¶41, in one example, when machine learning module 332 indicates that the user is fatigued, adjust the intelligent social agent so that the agent has a relaxed facial expression, speaks more slowly, and uses words with fewer syllables, and sentences with fewer words); wherein each AI/ML-driven persona has a set personality, the set personality including at least one of a set speech pattern, a set mannerism, a set command of one or more languages, a set accent, a set collection of non-verbal cues, or a set collection of emotional demeanors (¶21, creating the visual appearance, voice, and personality of an intelligent social agent based on personal and professional characteristics of the target user population, which manifest affect through facial (“set collection of non-verbal cues”), vocal, and linguistic expressions (“set speech patterns”) to appear affective to the target users; ¶47, agent 350 expresses emotion based on facial expressions and vocal expressions), wherein each of the interaction, the conversation, the one or more first conversational threads, and the animation of each AI/ML-driven persona is performed in a manner consistent with the set personality of said AI/ML-driven persona (¶40, machine learning module indicates user’s internal context is urgent, adjust the intelligent social agent to have a facial expression that looks serious and stops a non-critical function or closes unnecessary application programs to accomplish a requested urgent action as quickly as possible); at least one of: adapting or adjusting, by the computing system and using one of the at least one AI/ML model, a personality of one or more of the at least one AI/ML-driven persona to mold to or match a determined personality of the user, wherein the personality of the user is determined based 
on at least one of analysis of the interaction with the user, analysis of a previous interaction with the user, or known information about the user (¶40, machine learning module indicates user’s internal context is urgent, adjust the intelligent social agent to have a facial expression that looks serious and stops a non-critical function or closes unnecessary application programs to accomplish a requested urgent action as quickly as possible); or adapting or adjusting, by the computing system and using one of the at least one AI/ML model, one or more interaction characteristics of one or more of the at least one AI/ML-driven persona to match to a determined corresponding interaction characteristic of the user, the one or more interaction characteristics including at least one of speech pattern, language, accent, cultural mannerisms, cultural phraseology, general mannerisms, general phraseology, slang, jargon, or sentiment (¶42, in one example, when machine learning module 332 indicates that the user is happy or energetic, adjust the intelligent social agent so that the agent has a happy facial expression and speaks faster); and It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to analyze the interaction to identify the user’s observable characteristics, to access and analyze stored information associated with the user (i.e., the basic user profile) to identify one or more conversation points (points in the conversation indicating the urgency of the user’s request), and to adapt the AI/ML-driven persona by modifying the interaction with the user to help ensure that the behaviors and actions of the AI/ML-driven persona are appropriate for the context of the user (Gong, ¶39). 
Regarding Claim 16, Sapugay as modified by Gong discloses, during the interaction with the user, performing, by the computing system and using one of the at least one AI/ML model, at least one of continuous tone analysis or continuous sentiment analysis of the interaction with the user (Gong ¶35, machine learning module 332 receives heart rate of the user for comparison with basic profile of the user to determine a corresponding emotional state evident in the user); and after the interaction with the user, updating, by the computing system, the at least one AI/ML model based on the at least one of continuous tone analysis or continuous sentiment analysis of the interaction with the user (Gong ¶36, machine learning module 332 produces a dynamic digest about the user, the context, and the input received from the user, uses the dynamic digest to update the basic profile of the user; i.e., ¶37, updating a machine learned model of the user to adapt the agent to fit the user’s changing circumstances as the intelligent social agent interacts with the user). Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Sapugay et al. (US 2019/0294676 A1) in view of Zamora Duran et al. (US 10621978 B2) and Gong (US 2003/0167167 A1) as applied to claim 9, and further in view of Vibbert et al. (US 2016/0042735 A1). Regarding Claim 11, Sapugay as modified by Gong discloses, in response to user selection of a gamification mode, performing the following: generating, by the computing system, a visual representation of the conversation (Gong, ¶20, user may select parameters that define the appearance of the social agent; ¶23, upon selection, user may interact with the social intelligence engine with agents having behaviors and actions that are appropriate for the context of the user). The combination does not teach that the visual representation is a list of one or more goals of the conversation. 
Vibbert teaches a dialog agent system interacting with human users via speech input (¶39; ¶53, in response to voice commands, task manager 302 instructs different task dialog engines to either start, pause, or abort operation) that generates a visual representation of the conversation (¶62, inform dialog agents transmit synthesized speech output and visual output on client display presenting the user with information and acknowledge a user’s input according to conversational strategies (¶70, conversational strategies include resuming dialog) to maintain a continuous dialog between the application and the user where request dialog agent allows the user to select from one of several options displayed) comprising a list of the one or more goals of the conversation (¶125, dialog engine executes dialog agents in a dialog stack while displaying information to the user and implementing conversational strategies; i.e., generating visual output keeping the user informed of the dialog agent being executed); and in response to achieving a goal among the one or more goals of the conversation, generating, by the computing system, a visual output of one or more of dialog agents performing one or more actions comprising checking off the achieved goal, removing the achieved goal from the list, or marking the achieved goal as having been achieved (¶125, execute dialog agent located at the top of the dialog stack and upon completion, the dialog engine removes the completed dialog agent from the top of the stack such that the dialog agent located below the previously active dialog agent rises to the top of the dialog stack), the one or more actions being performed in a manner consistent with the set visualization of each of the one or more of the at least one dialog agent (¶125, dialog engine displays information to the user and implementing conversational strategies in order to present information to the user and maintaining continuous conversation allowing the user to select 
displayed options (i.e., dialog agents being executed) such as to pause or abort the agents being executed; each agent implementing respective conversational strategies). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sapugay, as modified by Gong, to generate an animation of the AI/ML-driven persona for each dialog agent in an ordered list performing one or more actions in a manner consistent with the set personality of the AI/ML-driven persona (Gong, ¶¶40-42), comprising checking off the achieved goal, removing the achieved goal from the list, or marking the achieved goal as having been achieved, in order to maintain continuous conversation with the user while presenting the user with information such as the option to abort or pause task executions (Vibbert, ¶53 and ¶62; i.e., adjusting Gong’s agents having facial expression and synthesized speech to present information to the user while maintaining continuous conversation, allowing the user the option to pause or abort respective agents as they are being executed or completed). Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Sapugay et al. (US 2019/0294676 A1) in view of Zamora Duran et al. (US 10621978 B2) and Gong (US 2003/0167167 A1) as applied to claim 9, and further in view of Cabezas et al. (US 7379872 B2). Regarding Claim 12, Sapugay as modified by Gong discloses, prior to the conversation or during a setup phase, receiving, by the computing system, a user selection of the at least one AI/ML-driven persona (Gong, ¶20, user may select parameters that define the appearance of the social agent; ¶23, upon selection, user may interact with the social intelligence engine with agents having behaviors and actions that are appropriate for the context of the user). 
The combination does not teach performing one of: receiving, by the computing system, a user selection of the at least one AI/ML-driven persona from a licensed set of AI/ML-driven personas among the plurality of AI/ML-driven personas with whom to interact; identifying, by the computing system, the user, and selecting, by the computing system, the at least one AI/ML-driven persona from the licensed set of AI/ML-driven personas to match the user for interacting with the user, based on information regarding the identified user; setting, by the computing system, a default set of AI/ML-driven personas from the licensed set of AI/ML-driven personas, wherein the default set of AI/ML-driven personas comprises the at least one AI/ML-driven persona; or randomly selecting, by the computing system, the at least one AI/ML-driven persona from the licensed set of AI/ML-driven personas for interacting with the user. Cabezas teaches a licensed set of voice characteristics / personas with whom a recipient may interact (Col 5, Rows 33-35 and Col 5, Rows 54-62, personal voice profile for celebrity or political figure with expiration allowing a person to have the personal voice profile issued for a limited use) and selecting at least one persona from the licensed set of personas to match the user for interacting with the user based on information regarding identified user (Col 1, Rows 50-55, the personal voice profile has a public key from a public key / private key pair; Col 1, Row 56 – Col 2, Row 11, when transmitting the personal voice profile, encrypt the profile using the public key that corresponds to the recipient’s private key such that when the message is received, the recipient may decrypt the encrypted message digest using the public key (i.e., recipient’s private key)). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Sapugay, as modified by Gong (Gong, ¶20, user may select parameters that define the appearance of the social agent), to select an AI/ML-driven persona from a licensed set of AI/ML-driven personas among the plurality of AI/ML-driven personas with whom to interact and to select the licensed AI/ML-driven persona for interacting with the user based on information regarding the identified user in order to use the voice characteristics of a celebrity or political figure for limited use (Cabezas, Col 5, Rows 33-35 and Rows 60-62). Claims 14-15 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Sapugay et al. (US 2019/0294676 A1) in view of Topol (US 11361163 B2) and Zamora Duran et al. (US 10621978 B2). Regarding Claims 14-15, Sapugay discloses determining, by the computing system and using one of the at least one AI/ML model, a structure of the interaction with the user (¶57, conversation model stores associations between intents and particular responses and actions, which generally define the behavior of the RA/BE 102), based at least in part on the identified one or more goals of the conversation (¶65, process received user utterance to extract intents / entities, using a conversation model 110 to determine suitable actions based on the intents / entities); and determining, by the computing system and using one of the at least one AI/ML model, one or more first parameters for the determined structure of the interaction with the user (¶65, extract entities from the received user utterance; in view of ¶33, “entity” refers to object, subject, and parameters of a corresponding intent; e.g., merchandise entities associated with purchase intents), wherein the one or more first conversational threads are generated based on the one or more first parameters (¶65, using a conversation model 110 to determine suitable 
actions based on the intents / entities). Sapugay does not disclose the one or more first parameters defining conversational guardrails for steering the interaction away from conversational tangents and toward achieving the at least one goal of the conversation. Topol discloses a chatbot computing system / machine learning driven persona (Col 3, Rows 7-10, Natural Language interface 100 / chatbot; Col 3, Rows 24-30, use supervised machine learning algorithms to determine user intent and entities) to determine a structure of interaction with user based on identified goals of a conversation (Col 3, Rows 33-38, user interacts with natural language interface 100 as if conversing with a person; e.g., to classify the intent of user utterance; Col 7, Rows 36-41, in the example of sales intent, implement supervised machine learning algorithm to generate conversational results indicating whether conversations resulted in sales), determining one or more first parameters for the determined structure of the interaction with the user (Col 5, Rows 1-15, invoke Toolset 360 to process conversational data 200 and generate conversational data 320 with intent 330 and graph embeddings of the conversation; Col 8, Rows 40-55, generate outcome predictions of an outcome for a sale of a product from conversational data 320 received in real time by comparing graph embeddings of a current conversation with similar ones of previous conversations), the one or more first parameters defining conversational guardrails for steering the interaction away from conversational tangents and toward achieving the at least one goal of the conversation (Col 5, Rows 8-12, analyze results of conversations to make predictions of outcomes of real-time conversations to steer conversations toward a desired outcome; Col 8, Rows 52-59, by determining the outcome of similar previous conversations, generate a prediction to indicate the likely outcome of the current conversation so as to use the prediction to adjust a 
course of the conversation by nudging the user in a desired direction through responses based on the prediction). In particular, this is implemented by mapping, by the computing system and using one of the at least one AI/ML model (Fig. 3, Natural Language Interface 100 implemented by Natural Language Analyzer 300), a flow of the interaction with the user (Col 5, Rows 1-15, invoke Toolset 360 to process conversational data 200 and generate conversational data 320 with intent 330 and control flow diagram with graph embeddings; per Col 7, Rows 56-57, conversational flows are represented as graph embeddings); and based on a determination that the flow of the interaction is moving away from achieving the at least one goal of the conversation (Col 5, Rows 8-12, analyze results of conversations to make predictions of outcomes of real-time conversations; Col 7, Rows 41-47 and Col 8, Rows 52-55, predict a likely outcome of the current conversation by comparing graph embeddings (i.e., conversational flow) of current conversation with graph embeddings of previous conversations; in the example of sales intent, prediction indicating that the current conversation is similar to previous conversation which resulted in a negative outcome): determining, by the computing system and using one of the at least one AI/ML model, one or more second parameters for steering the interaction back toward achieving the at least one goal of the conversation (Col 8, Rows 55-59, adjust a course of the conversation by nudging the user in a desired direction through responses based on the prediction; Col 9, Rows 24-30, interface controller 374 generates interface controls 354 based on the prediction to steer the current conversation towards the desired result); generating, by the computing system and using one of the at least one AI/ML model, one or more second conversational threads configured to steer the interaction back toward achieving the at least one goal of the conversation, based on the one 
or more second parameters (Col 9, Rows 30-33, interface controller 374 sends interface controls 354 to DM component 104 and NLG component 106 of the natural language interface 100; in view of Col 3, Rows 63-67, NLG 106 uses machine learning algorithm for best next action prediction for outputting the actual natural language that will be returned to the user); and causing, by the computing system, the at least one AI/ML-driven persona to continue the conversation with the user using the one or more second conversational threads (Col 3, Rows 66-67, NLG component 106 outputs the actual natural language that will be returned to the user). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine first parameters for the determined structure of the interaction with the user that define conversational guardrails for steering the interaction away from conversational tangents (i.e., a negative outcome) and toward achieving at least one goal of the conversation in order to control response outputs to steer the current conversation toward a positive result (Topol, Abstract). Regarding Claim 18, Sapugay discloses a method, comprising: causing, by a computing system (Figs. 1 and 4A, ¶41 and ¶53, cloud based platform 20 comprising data center 22), at least one artificial intelligence (“AI”)/machine learning (“ML”)-driven persona to interact with a user via a user interface (“UI”), the interaction including a conversation between the at least one AI/ML-driven persona and the user (¶54, Fig. 
4A, a reasoning agent / behavior engine RA/BE 102 hosting virtual agents / personas that interact with the user of client device 14D via natural language user requests 122 (“user utterances”) and agent responses 124 (“agent utterances”); per ¶39, agent automation framework applying a combination of rule based and machine learning (¶34, neural network implementation) based cognitive linguistic techniques in extracting meaning from natural language utterances); analyzing, by the computing system and using one of at least one AI/ML model, the conversation to identify one or more goals of the conversation (¶60, RA/BE 102 provides utterance 122 to NLU framework 104 to process the utterance 122 to derive intents / entities within the utterances to perform one or more particular predefined actions; e.g., ¶65, RA/BE 102 processes the derived intents / entities to determine suitable actions such as purchasing an item (i.e., product) or closing an account (i.e., service)); determining, by the computing system and using one of the at least one AI/ML model, a structure of the interaction with the user (¶57, conversation model stores associations between intents and particular responses and actions, which generally define the behavior of the RA/BE 102), based at least in part on the identified one or more goals of the conversation (¶65, process received user utterance to extract intents / entities, using a conversation model 110 to determine suitable actions based on the intents / entities); generating, by the computing system and using one of the at least one AI/ML model, one or more first conversational threads configured to achieve at least one goal of the conversation among the one or more goals of the conversation within the determined structure of the interaction with the user (¶117, determine whether a user message is associated with context information of another episode (e.g., yesterday’s context) and perform suitable actions in response to user messages by retrieving 
and overlaying the context information of the current episode (e.g., ¶65 and ¶125, intents / entities of the new user message such as purchasing an item or closing an account) with context of the referenced episode; ¶124, when the persona of RA/BE 102 receives a new message from the user, determine whether the new message should be treated as a continuation of a prior conversation episode or the beginning of a new episode); causing, by the computing system, the at least one AI/ML-driven persona to continue the conversation with the user using the one or more first conversational threads to work toward achieving the at least one goal of the conversation (¶125, when the new message is a continuation of a prior episode, the RA/BE 102 resumes the conversation using the context of the prior episode by overlaying the episode context information of the prior episode over current context information in order to use context information of the prior episode when responding to the new user message). Sapugay does not disclose mapping, by the computing system and using one of the at least one AI/ML model, a flow of the interaction with the user. Topol discloses a chatbot computing system / machine learning driven persona (Col 3, Rows 7-10, Natural Language interface 100 / chatbot; Col 3, Rows 24-30 in view of Fig. 
3, Toolset 360 (including intent classifier 362, graph embedder 366) uses supervised machine learning algorithms to determine user intent and entities) to determine a structure of interaction with user based on identified goals of a conversation (Col 3, Rows 33-38, user interacts with natural language interface 100 as if conversing with a person; e.g., to classify the intent of user utterance; Col 7, Rows 36-41, in the example of sales intent, implement supervised machine learning algorithm to generate conversational results indicating whether conversations resulted in sales) and using machine learning algorithm / ML model to map a flow of the interaction with the user (Col 5, Rows 1-15, invoke Toolset 360 to process conversational data 200 and generate conversational data 320 with intent 330 and control flow diagram with graph embeddings; per Col 7, Rows 56-57, conversational flows are represented as graph embeddings); based on a determination that the flow of the interaction is moving away from achieving the at least one goal of the conversation (Col 5, Rows 8-12, analyze results of conversations to make predictions of outcomes of real-time conversations; Col 7, Rows 41-47 and Col 8, Rows 52-55, predict a likely outcome of the current conversation by comparing graph embeddings (i.e., conversational flow) of current conversation with graph embeddings of previous conversations; in the example of sales intent, prediction indicating that the current conversation is similar to previous conversation which resulted in a negative outcome), generating, by the computing system and using one of the at least one AI/ML model (Col 7, Rows 36-40, result analyzer 370 implements a supervised machine learning algorithm), one or more second conversational threads configured to steer the interaction back toward achieving the at least one goal of the conversation (Col 8, Rows 55-59, adjust a course of the conversation by nudging the user in a desired direction through responses based 
on the prediction; Col 9, Rows 24-30, interface controller 374 generates interface controls 354 based on the prediction to steer the current conversation towards the desired result); and causing, by the computing system, the at least one AI/ML-driven persona to continue the conversation with the user using the one or more second conversational threads to steer the interaction back toward achieving the at least one goal of the conversation (Col 8, Rows 55-59, adjust a course of the conversation by nudging the user in a desired direction through responses based on the prediction; Col 9, Rows 24-30, interface controller 374 generates interface controls 354 based on the prediction to steer the current conversation towards the desired result). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to map a flow of interaction with the user to determine if it is moving away from achieving the at least one goal of the conversation (i.e., the conversation flow indicates a negative outcome) and to steer the flow of interaction toward achieving at least one goal of the conversation in order to control response outputs to steer the current conversation toward a positive result (Topol, Abstract). Sapugay does not disclose analyzing, using one of the at least one AI/ML model, the interaction to identify one or more observable characteristics of the user. Zamora Duran discloses a system for dynamically generating computer dialog (Col 6, Rows 40-47) identifying goals of the user for engaging in a dialog interaction with the system and context (Col 6, Rows 47-55, identifying perceived topics of interest to the user, identifying goals of the user for engaging in the dialog, and contextual parameters). 
The system analyzes, using at least one AI/ML model (Col 7, Rows 45-58, dialog module 103 comprises a natural language processor NLP 107 utilizing deep learning based natural language models for understanding human language as the input dialog is entered into the system), the dialog interaction to identify one or more observable characteristics of the user, the one or more observable characteristics including at least one of one or more speech patterns of the user, a language used by the user (Col 7, Rows 55-56, identification of the input language), whether the user has an accent, what accent the user has, whether English is a second language for the user (Col 11, Rows 2-4 in view of Col 7, Rows 55-56, identifying a level of English (i.e., user as a native English speaker) based on conversational input itself), one or more non-verbal cues of the user (Col 8, Rows 46-49, NLP 107 scan recorded images and video data for contextual clues about user’s body language), a demeanor of the user (Col 8, Rows 46-49, NLP 107 scan recorded images and video data for contextual clues about user’s visual demeanor), a sentiment of the user (Col 8, Rows 4-5, NLP 107 identifies user’s sentiment), or an emotional state of the user (Col 8, Rows 46-49, NLP 107 scan recorded images and video data for contextual clues about user’s emotion); access and analyze, using one of the at least one AI/ML model, stored information associated with the user (Col 9, Rows 36-42, query databases for known or existing knowledge about the user; Col 9, Rows 52-60, the dialog system 101 learns and adapts each time a repeat user interacts with the dialog system to collect and store information about a user based on past conversations, preferences, and interactions between the user and the dialog system; e.g., Col 8, Rows 30-37, NLP 107 / deep learning natural language models identify natural language input for context such as culture and age of the user, language and other conversational parameters) to 
identify one or more conversation points (Col 11, Rows 45-51 and Table 3, based on previous interactions with the dialog system 101, dialog creation module 110 selects an appropriate dictionary comprising words at an appropriate level for the user to understand based on user’s current skill set), the stored information including at least one of account information associated with the user, contact information associated with the user, previous interactions with the user (Col 9, Rows 52-56, collect and store information about a user based on past conversations), historical data associated with the user (Col 9, Rows 50-51, access user information such as viewing history), demographic information about the user (Col 9, Row 49, user’s age, location, culture; see Table 2), personal information about the user (Table 2, previously learned information such as topics of interest, topics to avoid), user-volunteered information regarding general interests of the user (Col 9, Rows 44-46, user may fill in user profile 135 themselves), information regarding a market segment within which the user is classified, or societal information for a societal segment to which the user belongs (Col 9, Rows 49-50, query information regarding user’s age (i.e., market segment), location, culture (i.e., societal information); see Table 2, user’s culture, age, location, education, profession); and cause the at least one AI/ML-driven persona to adapt by modifying the interaction with the user, based at least in part on at least the identified one or more conversation points, to enhance or improve the interaction with the user (Col 10, Rows 55-61, based on the language information discerned from the conversational input and known information about the user, select a corresponding dictionary having an appropriate language and level of sophistication for carrying on a conversation with the user; Col 11, Rows 52-63, dialog creation module 110 selects an appropriate dictionary and an appropriate 
corpus of information associated with each of the perceived topics of interest identified by the NLP 107 to create a responsive dialog to the user, the corpus was selected as a function of the conversational input of the user and contextual parameters such as emotion, gestures, body language of the user). It would’ve been obvious to one ordinarily skilled in the art before the effective filing date of the invention to identify one or more observable characteristics of the user, access and analyze stored information associated with the user to identify one or more conversation points, and adapt the AI/ML driven persona by modifying the interaction with the user based on the conversation points to retrieve, parse, and integrate human expressions into dialogs in order to sound more human like and emulate the dynamic ability of human expression found naturally in the interactions between humans during conversation (Zamora Duran, Col 12, Rows 52-57). Regarding Claim 19, Sapugay as modified by Topol discloses determining, by the computing system and using one of the at least one AI/ML model (Topol, Col 3, Rows 7-10, Natural Language interface 100 / chatbot implemented by Natural Language Analyzer 300 of Fig. 
3; Col 3, Rows 24-30, use supervised machine learning algorithms to determine user intent and entities), one or more first parameters for the determined structure of the interaction with the user (Topol, Col 5, Rows 1-15, invoke Toolset 360 to process conversational data 200 and generate conversational data 320 with intent 330 and graph embeddings of the conversation; Col 8, Rows 40-55, generate outcome predictions of an outcome for a sales of a product (i.e., intent) from conversational data 320 received in real time by comparing graph embeddings of a current conversation with similar ones of previous conversations), the one or more first parameters defining conversational guardrails for steering the interaction away from conversational tangents and toward achieving the at least one goal of the conversation (Topol, Col 5, Rows 8-12, analyze results of conversations to make predictions of outcomes of real-time conversations to steer conversations toward a desired outcome; Col 8, Rows 52-59, by determining the outcome of similar previous conversations, generate a prediction to indicate the likely outcome of the current conversation so as to use the prediction to adjust a course of the conversation by nudging the user in a desired direction through responses based on the prediction), wherein the one or more first conversational threads are generated based on the one or more first parameters (Topol, Col 8, Rows 40-45, generate predictions 352 of an outcome of a conversation as conversation data 320 (i.e., intents) from the conversation is received in real time; i.e., whether current conversation will lead to a sale of a product); and determining, by the computing system and using one of the at least one AI/ML model, one or more second parameters for steering the interaction back toward achieving the at least one goal of the conversation, wherein the one or more second conversational threads are generated based on the one or more second parameters (Topol, Col 8, Rows 
55-59, adjust a course of the conversation by nudging the user in a desired direction through responses based on the prediction; Col 9, Rows 24-30, interface controller 374 generates interface controls 354 based on the prediction to steer the current conversation towards the desired result). Conclusion Applicant's amendment necessitated the new grounds of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to examiner Richard Z. Zhu whose telephone number is 571-270-1587 or examiner’s supervisor Hai Phan whose telephone number is 571-272-6338. Examiner Richard Zhu can normally be reached on M-Th, 0730:1700. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. 
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RICHARD Z ZHU/
Primary Examiner, Art Unit 2654
03/20/2026
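The adaptation loop the rejection maps onto Sapugay (observe characteristics from the conversational input, combine them with stored profile information, then select a dictionary at an appropriate level for the user) can be illustrated with a minimal sketch. The names (`UserProfile`, `observe`, `select_dictionary`) and the heuristics are hypothetical assumptions for illustration, not the cited references' actual implementation:

```python
# Hypothetical sketch of the dictionary-selection adaptation step described in
# the rejection: observable characteristics + stored user info -> vocabulary tier.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Stand-in for the stored information the OA cites (demographics,
    # history, previously learned preferences).
    native_english: bool = True
    reading_level: str = "adult"              # e.g. "child", "adult", "expert"
    topics_of_interest: list = field(default_factory=list)

def observe(utterance: str) -> dict:
    """Stand-in for NLP 107: derive observable characteristics from input."""
    return {
        "language": "en",
        # Crude proxy for the user's current skill level.
        "short_sentences": len(utterance.split()) < 6,
    }

def select_dictionary(profile: UserProfile, observed: dict) -> str:
    """Pick a vocabulary tier, mirroring the cited dictionary-selection step."""
    if not profile.native_english or observed["short_sentences"]:
        return "simplified"
    return "standard" if profile.reading_level == "adult" else "technical"

profile = UserProfile(native_english=False, topics_of_interest=["sports"])
observed = observe("I want help")
print(select_dictionary(profile, observed))   # "simplified" for a non-native speaker
```

The point of the sketch is only the control flow the examiner relies on: observed characteristics and stored profile data jointly drive the response-generation parameters, which is the combination asserted to be obvious over Sapugay in view of Zamora Duran.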

Prosecution Timeline

Dec 19, 2023
Application Filed
Nov 26, 2025
Non-Final Rejection — §103
Feb 16, 2026
Response Filed
Mar 21, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592228
SPEECH INTERACTION METHOD, AND APPARATUS, COMPUTER READABLE STORAGE MEDIUM, AND ELECTRONIC DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12592222
APPARATUSES, COMPUTER PROGRAM PRODUCTS, AND COMPUTER-IMPLEMENTED METHODS FOR ADAPTING SPEECH RECOGNITION CONFIDENCE SCORES BASED ON EXPECTED RESPONSE
2y 5m to grant Granted Mar 31, 2026
Patent 12586574
ELECTRONIC DEVICE FOR PROCESSING UTTERANCE, OPERATING METHOD THEREOF, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12579978
NETWORKED DEVICES, SYSTEMS, & METHODS FOR INTELLIGENTLY DEACTIVATING WAKE-WORD ENGINES
2y 5m to grant Granted Mar 17, 2026
Patent 12572739
GENERATING MACHINE INTERPRETABLE DECOMPOSABLE MODELS FROM REQUIREMENTS TEXT
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
85%
With Interview (+15.4%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 718 resolved cases by this examiner. Grant probability derived from career allow rate.
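The displayed projections appear to be arithmetically consistent with the career data shown above (498 allowances out of 718 resolved cases, plus a 15.4-point interview lift). A quick sanity check, with illustrative variable names:

```python
# Sanity-check the dashboard figures against the stated career data.
granted, resolved = 498, 718
interview_lift_pts = 15.4

allow_rate = granted / resolved * 100          # career allow rate, in percent
print(round(allow_rate))                       # 69  -> "Grant Probability"
print(round(allow_rate + interview_lift_pts))  # 85  -> "With Interview"
```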
