Prosecution Insights
Last updated: April 19, 2026
Application No. 18/062,747

VIRTUAL INTELLIGENT COMPOSITE PERSONA IN THE METAVERSE

Status: Non-Final OA (§103)
Filed: Dec 07, 2022
Examiner: WARNER, PHILIP N
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 36% (At Risk)
OA Rounds: 1-2
To Grant: 3y 7m
With Interview: 65%

Examiner Intelligence

Career Allow Rate: 36% (39 granted / 107 resolved; -15.6% vs TC avg)
Interview Lift: +28.6% among resolved cases with interview (a strong lift)
Avg Prosecution (typical timeline): 3y 7m
Currently Pending: 28
Total Applications: 135 (career history, across all art units)

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 53.8% (+13.8% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)

Tech Center averages are estimates; based on career data from 107 resolved cases.
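As a quick sanity check, the headline percentages above are mutually consistent; the snippet below re-derives them from the report's own counts (39 granted of 107 resolved, the -15.6% TC-average delta, and the +28.6% interview lift; no new data is introduced):

```python
# Consistency check for the dashboard figures above; every input is one of
# the report's own numbers, nothing here is new data.
granted, resolved = 39, 107

allow_rate = granted / resolved        # career allow rate
tc_avg = allow_rate + 0.156            # implied TC average (allow rate is -15.6% vs TC avg)
with_interview = allow_rate + 0.286    # allow rate after the +28.6% interview lift

print(f"career allow rate:  {allow_rate:.1%}")      # ~36.4%, shown as 36%
print(f"implied TC average: {tc_avg:.1%}")          # ~52.0%
print(f"with interview:     {with_interview:.1%}")  # ~65.0%, matching "With Interview: 65%"
```

The rounded outputs (36%, 65%) match the stat cards, which suggests the dashboard's "With Interview" probability is simply the career allow rate plus the interview lift.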

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following NON-FINAL Office Action is in response to Applicant’s communication filed 12/07/2022 regarding Application 18/062,747. The following is the first action on the merits.

Status of Claim(s)

Claim(s) 1-20 is/are currently pending and are rejected as follows.

Statutory Subject Matter with Regards to 35 U.S.C. 101

Claim(s) 1-20 have been analyzed under the Alice/Mayo framework and determined to be statutory with regards to 35 U.S.C. 101 for the following reasons. First, under Step 1 of the Alice/Mayo framework it must be considered whether the claims are directed to one or more of the statutory classes of invention. In the instant case, Claim(s) 1-12 are directed towards a method, Claim(s) 13-18 are directed towards an apparatus, and Claim(s) 19-20 are directed towards a product. Accordingly, these claims fall within the four statutory categories of invention and will be further analyzed under Step 2 of the Alice/Mayo framework. Under Step 2A, Prong One, it was considered whether the claims recite any abstract ideas. The independent claims 1, 14, and 19 were all deemed not to recite any abstract ideas. Therefore the claims as currently presented are deemed statutory subject matter in view of 101. However, any changes made to Applicant’s claims to overcome any applied rejections below do not prevent a rejection under 101 should it be deemed appropriate under subsequent analysis in view of those changes.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-5, 8-14, and 16-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chawla (US 2017/0185919 A1) in view of Abrams (US 2018/0165596 A1).

Claim(s) 1 and 13 – Chawla discloses the following limitations:

A memory and (Chawla: Paragraph 26, “The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.”)

A processor (Chawla: Paragraph 26, “The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.”)

receiving end user information of end users to produce synthesized user research data; (Chawla: Paragraph 36, “Cognitive systems achieve these abilities by combining various aspects of artificial intelligence, natural language processing, dynamic learning, and hypothesis generation to render vast quantities of intelligible data to assist humans in making better decisions. As such, cognitive systems can be characterized as having the ability to interact naturally with people to extend what either humans, or machines, could do on their own. Furthermore, they are typically able to process natural language, multi-structured data, and experience much in the same way as humans.
Moreover, they are also typically able to learn a knowledge domain based upon the best available data and get better, and more immersive, over time.”; Paragraph 50, “As used herein, ambient signals 220 broadly refer to input signals, or other data streams, that may contain data providing additional insight or context to the curated data 222 and learned knowledge 224 received by the CILS 118. For example, ambient signals may allow the CILS 118 to understand that a user is currently using their mobile device, at location ‘x’, at time ‘y’, doing activity ‘z’. To further the example, there is a difference between the user using their mobile device while they are on an airplane versus using their mobile device after landing at an airport and walking between one terminal and another. To extend the example even further, ambient signals may add additional context, such as the user is in the middle of a three leg trip and has two hours before their next flight. Further, they may be in terminal A1, but their next flight is out of C1, it is lunchtime, and they want to know the best place to eat. Given the available time the user has, their current location, restaurants that are proximate to their predicted route, and other factors such as food preferences, the CILS 118 can perform various cognitive operations and provide a recommendation for where the user can eat.”; Paragraph 51, “In various embodiments, the curated data 222 may include structured, unstructured, social, public, private, streaming, device or other types of data described in greater detail herein. In certain embodiments, the learned knowledge 224 is based upon past observations and feedback from the presentation of prior cognitive insight streams and recommendations. 
In various embodiments, the learned knowledge 224 is provided via a feedback loop that provides the learned knowledge 224 in the form of a learning stream of data.”; Paragraph 53, “In various embodiments, the information contained in, and referenced by, a cognitive graph 226 is derived from many sources (e.g., public, private, social, device), such as curated data 222. In certain of these embodiments, the cognitive graph 226 assists in the identification and organization of information associated with how people, places and things are related to one other. In various embodiments, the cognitive graph 226 enables automated agents, described in greater detail herein, to access the Web more intelligently, enumerate inferences through utilization of curated, structured data 222, and provide answers to questions by serving as a computational knowledge engine.”; Paragraph 62, “In these and other embodiments, the cognitive applications 304 possess situational and temporal awareness based upon ambient signals from users and data, which facilitates understanding the user's intent, content, context and meaning to drive goal-driven dialogs and outcomes. Further, they are designed to gain knowledge over time from a wide variety of structured, non-structured, and device data sources, continuously interpreting and autonomously reprogramming themselves to better understand a given domain. As such, they are well-suited to support human decision making, by proactively providing trusted advice, offers and recommendations while respecting user privacy and permissions.”; Paragraph 122, “FIG. 7 is a simplified block diagram of a plurality of cognitive platforms implemented in accordance with an embodiment of the invention within a hybrid cloud infrastructure. In this embodiment, the hybrid cloud infrastructure 740 includes a cognitive cloud management 342 component, a hosted 704 cognitive cloud environment, and a private 706 network environment. As shown in FIG. 
7, the hosted 704 cognitive cloud environment includes a hosted 710 cognitive platform, such as the cognitive platform 310 shown in FIGS. 3, 4a, and 4b. In various embodiments, the hosted 704 cognitive cloud environment may also include a hosted 718 universal knowledge repository and one or more repositories of curated public data 714 and licensed data 716. Likewise, the hosted 710 cognitive platform may also include a hosted 712 analytics infrastructure, such as the cloud analytics infrastructure 344 shown in FIGS. 3 and 4c.”)

storing the synthesized user research data in a knowledge corpus; (Chawla: Paragraph 107, “In various embodiments, the crawl framework 452 is implemented to support various crawlers 454 familiar to skilled practitioners of the art. In certain embodiments, the crawlers 454 are custom configured for various target domains. For example, different crawlers 454 may be used for various travel forums, travel blogs, travel news and other travel sites. In various embodiments, data collected by the crawlers 454 is provided by the crawl framework 452 to the repository of crawl data 460. In these embodiments, the collected crawl data is processed and then stored in a normalized form in the repository of crawl data 460. The normalized data is then provided to SQL/NoSQL database 417 agent, which in turn provides it to the dataset engine 322. In one embodiment, the crawl database 460 is a NoSQL database, such as Mongo®.”; Paragraph 142, “In certain embodiments, the data may be multi-structured data. In these embodiments, the multi-structured data may include unstructured data (e.g., a document), semi-structured data (e.g., a social media post), and structured data (e.g., a string, an integer, etc.), such as data stored in a relational database management system (RDBMS). In various embodiments, the data may be public, private, or a combination thereof.
In certain embodiments the data may be provided by a device, stored in a data lake, a data warehouse, or some combination thereof.”; Paragraph 218, “The submitted 1042 input data is then processed by the graph query engine 326 to generate a graph query 1044, as described in greater detail herein. The resulting graph query 1044 is then used to query the application cognitive graph 1082, which results in the generation of one or more composite cognitive insights, likewise described in greater detail herein. In certain embodiments, the graph query 1044 uses knowledge elements stored in the universal knowledge repository 1080 when querying the application cognitive graph 1082 to generate the one or more composite cognitive insights.”)

segmenting the end users into one or more end user segments based on end user roles or end user categories; (Chawla: Paragraph 52, “As likewise used herein, a cognitive graph 226 refers to a representation of expert knowledge, associated with individuals and groups over a period of time, to depict relationships between people, places, and things using words, ideas, audio and images. As such, it is a machine-readable formalism for knowledge representation that provides a common framework allowing data and knowledge to be shared and reused across user, application, organization, and community boundaries.”; Paragraph 125, “In certain embodiments, the knowledge elements within a universal knowledge repository may also include statements, assertions, beliefs, perceptions, preferences, sentiments, attitudes or opinions associated with a person or a group. As an example, user ‘A’ may prefer the pizza served by a first restaurant, while user ‘B’ may prefer the pizza served by a second restaurant. Furthermore, both user ‘A’ and ‘B’ are firmly of the opinion that the first and second restaurants respectively serve the very best pizza available.
In this example, the respective preferences and opinions of users ‘A’ and ‘B’ regarding the first and second restaurant may be included in the universal knowledge repository 880 as they are not contradictory. Instead, they are simply knowledge elements respectively associated with the two users and can be used in various embodiments for the generation of various cognitive insights, as described in greater detail herein.”; Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation. As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”)

generating, for each end user segment, a persona, each representing a composite of end users in a respective end user segment; (Chawla: Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation.
As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”; Paragraph 186, “In various embodiments, provision of the composite cognitive insights results in the CILS receiving feedback 958 data from various individual users and other sources, such as cognitive business processes and applications 948. In one embodiment, the feedback 958 data is used to revise or modify the cognitive persona. In another embodiment, the feedback 958 data is used to create a new cognitive persona. In yet another embodiment, the feedback 958 data is used to create one or more associated cognitive personas, which inherit a common set of attributes from a source cognitive persona. In one embodiment, the feedback 958 data is used to create a new cognitive persona that combines attributes from two or more source cognitive personas. In another embodiment, the feedback 958 data is used to create a cognitive profile, described in greater detail herein, based upon the cognitive persona. 
Those of skill in the art will realize that many such embodiments are possible and the foregoing is not intended to limit the spirit, scope or intent of the invention.”; Paragraph 189, “In various embodiments, a cognitive persona or cognitive profile is defined by a first set of nodes in a weighted cognitive graph. In these embodiments, the cognitive persona or cognitive profile is further defined by a set of attributes that are respectively associated with a set of corresponding nodes in the weighted cognitive graph. In various embodiments, an attribute weight is used to represent a relevance value between two attributes. For example, a higher numeric value (e.g., ‘5.0’) associated with an attribute weight may indicate a higher degree of relevance between two attributes, while a lower numeric value (e.g., ‘0.5’) may indicate a lower degree of relevance.”; Paragraph 219, “In various embodiments, the graph query 1044 results in the selection of a cognitive persona, described in greater detail herein, from a repository of cognitive personas ‘1’ through ‘n’ 1072, according to a set of contextual information associated with a user. In certain embodiments, the universal knowledge repository 1080 includes the repository of personas ‘1’ through ‘n’ 1072. In various embodiments, individual nodes within cognitive personas stored in the repository of personas ‘1’ through ‘n’ 1072 are linked 1054 to corresponding nodes in the universal knowledge repository 1080. 
In certain embodiments, nodes within the universal knowledge repository 1080 are likewise linked 1054 to nodes within the cognitive application graph 1082.”)

training each persona with training data from the synthesized user research data of the end users in the respective end user segment; (Chawla: Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation. As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”; Paragraph 186, “In various embodiments, provision of the composite cognitive insights results in the CILS receiving feedback 958 data from various individual users and other sources, such as cognitive business processes and applications 948. In one embodiment, the feedback 958 data is used to revise or modify the cognitive persona. In another embodiment, the feedback 958 data is used to create a new cognitive persona.
In yet another embodiment, the feedback 958 data is used to create one or more associated cognitive personas, which inherit a common set of attributes from a source cognitive persona. In one embodiment, the feedback 958 data is used to create a new cognitive persona that combines attributes from two or more source cognitive personas. In another embodiment, the feedback 958 data is used to create a cognitive profile, described in greater detail herein, based upon the cognitive persona. Those of skill in the art will realize that many such embodiments are possible and the foregoing is not intended to limit the spirit, scope or intent of the invention.”; Paragraph 189, “In various embodiments, a cognitive persona or cognitive profile is defined by a first set of nodes in a weighted cognitive graph. In these embodiments, the cognitive persona or cognitive profile is further defined by a set of attributes that are respectively associated with a set of corresponding nodes in the weighted cognitive graph. In various embodiments, an attribute weight is used to represent a relevance value between two attributes. For example, a higher numeric value (e.g., ‘5.0’) associated with an attribute weight may indicate a higher degree of relevance between two attributes, while a lower numeric value (e.g., ‘0.5’) may indicate a lower degree of relevance.”; Paragraph 219, “In various embodiments, the graph query 1044 results in the selection of a cognitive persona, described in greater detail herein, from a repository of cognitive personas ‘1’ through ‘n’ 1072, according to a set of contextual information associated with a user. In certain embodiments, the universal knowledge repository 1080 includes the repository of personas ‘1’ through ‘n’ 1072. In various embodiments, individual nodes within cognitive personas stored in the repository of personas ‘1’ through ‘n’ 1072 are linked 1054 to corresponding nodes in the universal knowledge repository 1080. 
In certain embodiments, nodes within the universal knowledge repository 1080 are likewise linked 1054 to nodes within the cognitive application graph 1082.”)

Chawla does not explicitly disclose the following limitations. However, in the analogous art of persona generation and interaction, Abrams discloses the following:

generating a dialog engine for each persona based on the training; (Abrams: Paragraph 23, “To provide a convincing illusion that users are interacting with a “real” character, the memory 118 includes, without limitation, the character engine 140. In operation the character engine 140 combines sensing algorithms, advanced thinking and learning algorithms, and expressing algorithms in a flexible and adaptive manner. Upon receiving data via the user platform 120, the character engine 140 determines a current context that includes a user intent and an assessment domain. To generate character responses, the character engine 140 selects and applies inference algorithm(s) and personality engine(s) based on the current context and data received from the data sources 150. Finally, the character engine 140 tailors the character responses to the capabilities of the user platform 120.”; Paragraph 35, “For instance, in some embodiments, the user intent engine 230 may include a speech analysis algorithm that examines the speech prosody of speech included in the recognition data 225 to detect any upward inflections at the end of sentences. If the speech analysis algorithm detects an upward inflection at the end of a sentence, then the speech analysis algorithm determines that the user intent 235 is “asking a question.” In other embodiments, the user intent engine 230 may include a gesture analysis algorithm that analyzes the recognition data 225 to determine whether the user intent 235 is “looking for emotional support.””; Paragraph 39, “In some embodiments, the domain parser 245 includes a single generalized content algorithm.
In other embodiments, the domain parser 245 includes multiple specialized analytic processors (not shown) that act upon streaming input data (e.g., the recognition data 225 and/or the user intent 235) in real time. Each of the analytic processors acts as a “tuned resonator” that searches for specific features that map to a particular context. For instance, suppose that the character engine 120 represents a helper droid that is interacting with a user. The domain parser 245 could execute any number of specialized analytic algorithms that search for features unique to the current context of the interaction between the helper droid and the user.”)

connecting an avatar to each persona; and (Abrams: Paragraph 23, “To provide a convincing illusion that users are interacting with a “real” character, the memory 118 includes, without limitation, the character engine 140. In operation the character engine 140 combines sensing algorithms, advanced thinking and learning algorithms, and expressing algorithms in a flexible and adaptive manner. Upon receiving data via the user platform 120, the character engine 140 determines a current context that includes a user intent and an assessment domain. To generate character responses, the character engine 140 selects and applies inference algorithm(s) and personality engine(s) based on the current context and data received from the data sources 150. Finally, the character engine 140 tailors the character responses to the capabilities of the user platform 120.”; Paragraph 35, “For instance, in some embodiments, the user intent engine 230 may include a speech analysis algorithm that examines the speech prosody of speech included in the recognition data 225 to detect any upward inflections at the end of sentences.
If the speech analysis algorithm detects an upward inflection at the end of a sentence, then the speech analysis algorithm determines that the user intent 235 is “asking a question.” In other embodiments, the user intent engine 230 may include a gesture analysis algorithm that analyzes the recognition data 225 to determine whether the user intent 235 is “looking for emotional support.””; Paragraph 39, “In some embodiments, the domain parser 245 includes a single generalized content algorithm. In other embodiments, the domain parser 245 includes multiple specialized analytic processors (not shown) that act upon streaming input data (e.g., the recognition data 225 and/or the user intent 235) in real time. Each of the analytic processors acts as a “tuned resonator” that searches for specific features that map to a particular context. For instance, suppose that the character engine 120 represents a helper droid that is interacting with a user. The domain parser 245 could execute any number of specialized analytic algorithms that search for features unique to the current context of the interaction between the helper droid and the user.”)

making each avatar accessible for dialog with metaverse users in the Metaverse, wherein the Metaverse is a 3D virtual reality environment. (Abrams: Paragraph 29, “In operation, the input platform abstraction infrastructure 210 receives input data from any number and types of the user platforms 120. For instance, in some embodiments, the input platform abstraction infrastructure 210 may receive text, voice, accelerometer, and video data from the smartphone 122. In other embodiments, the input platform abstraction infrastructure 210 may receive discrete button, voice, and motion data from the robot 132 and/or the toy 138.
In yet other embodiments, the input platform abstraction infrastructure 210 may receive control inputs from the game console 126 or a smart television; audio from a telephone or kiosk microphone; and/or voice and imagery from augmented and virtual Reality (AR/VR) systems.”)

Chawla discloses a method for generating a virtual persona. Abrams discloses a method for training and creating an avatar for interaction based on a persona. At the time of Applicant’s filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Chawla with the teachings of Abrams in order to improve the adaptability and realism of generated personas as disclosed by Abrams (Abrams: Paragraph 6, “Consequently, service providers are often unable to automate interactions for services that rely on establishing and nurturing “real” relationships between users and characters.”).

Claim(s) 2 – Chawla in view of Abrams disclose the limitations of claim 1. Chawla further discloses the following:

adding the persona interaction data to the training data in the knowledge corpus; and subsequently (Chawla: Paragraph 165, “As used herein, a supervised learning 818 machine learning algorithm broadly refers to a machine learning approach for inferring a function from labeled training data. The training data typically consists of a set of training examples, with each example consisting of an input object (e.g., a vector) and a desired output value (e.g., a supervisory signal). In various embodiments, a supervised learning algorithm is implemented to analyze the training data and produce an inferred function, which can be used for mapping new examples.”; Paragraph 166, “As used herein, an unsupervised learning 820 machine learning algorithm broadly refers to a machine learning approach for finding non-obvious or hidden structures within a set of unlabeled data.
In various embodiments, the unsupervised learning 820 machine learning algorithm is not given a set of training examples. Instead, it attempts to summarize and explain key features of the data it processes. Examples of unsupervised learning approaches include clustering (e.g., k-means, mixture models, hierarchical clustering, etc.) and latent variable models (e.g., expectation-maximization algorithms, method of moments, blind signal separation techniques, etc.).”; Paragraph 273, “The resulting feedback and the cognitive profile currently in use is then used to perform additional cognitive learning operations, which results in the generation of either a new, or refined, cognitive profile. The new, or refined, cognitive profile and the provided input data are used in combination with associated cognitive insight attributes to generate a second set of cognitive profile generation suggestions, which are then provided to the user within the sub-window 1308 of the UI window 1302 shown in FIG. 13. In these embodiments, the cognitive profile generation operations described in the descriptive text associated with FIG. 13 are repeated to generate a new or refined cognitive profile. In certain of these embodiments, the resulting refined cognitive profile is used to generate additional cognitive insights, which in turn are displayed in display field 1404.”)

training another persona using the persona interaction data in the training data of the knowledge corpus. (Chawla: Paragraph 165, “As used herein, a supervised learning 818 machine learning algorithm broadly refers to a machine learning approach for inferring a function from labeled training data. The training data typically consists of a set of training examples, with each example consisting of an input object (e.g., a vector) and a desired output value (e.g., a supervisory signal).
In various embodiments, a supervised learning algorithm is implemented to analyze the training data and produce an inferred function, which can be used for mapping new examples.”; Paragraph 166, “As used herein, an unsupervised learning 820 machine learning algorithm broadly refers to a machine learning approach for finding non-obvious or hidden structures within a set of unlabeled data. In various embodiments, the unsupervised learning 820 machine learning algorithm is not given a set of training examples. Instead, it attempts to summarize and explain key features of the data it processes. Examples of unsupervised learning approaches include clustering (e.g., k-means, mixture models, hierarchical clustering, etc.) and latent variable models (e.g., expectation-maximization algorithms, method of moments, blind signal separation techniques, etc.).”; Paragraph 273, “The resulting feedback and the cognitive profile currently in use is then used to perform additional cognitive learning operations, which results in the generation of either a new, or refined, cognitive profile. The new, or refined, cognitive profile and the provided input data are used in combination with associated cognitive insight attributes to generate a second set of cognitive profile generation suggestions, which are then provided to the user within the sub-window 1308 of the UI window 1302 shown in FIG. 13. In these embodiments, the cognitive profile generation operations described in the descriptive text associated with FIG. 13 are repeated to generate a new or refined cognitive profile. 
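For orientation, the two learning styles quoted from Chawla can be sketched in plain Python: Paragraph 165's supervised learning infers a labeling function from labeled examples, while Paragraph 166's unsupervised learning (e.g., k-means clustering) finds hidden structure in unlabeled data. The toy points, function names, and distance metric below are illustrative assumptions, not anything from Chawla or the application.

```python
import math
import random

def fit_1nn(training_examples):
    """Supervised learning: infer a function from labeled (input, label) pairs."""
    def predict(point):
        nearest = min(training_examples, key=lambda ex: math.dist(point, ex[0]))
        return nearest[1]
    return predict

def kmeans(points, k, iterations=10, seed=0):
    """Unsupervised learning: find k hidden cluster centers in unlabeled points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # Assign every point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[i].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [
            tuple(sum(cs) / len(cs) for cs in zip(*cluster)) if cluster else centroids[i]
            for i, cluster in enumerate(clusters)
        ]
    return centroids
```

Here `fit_1nn` stands in for the inferred function used "for mapping new examples," and `kmeans` for the clustering approach Paragraph 166 names.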
In certain of these embodiments, the resulting refined cognitive profile is used to generate additional cognitive insights, which in turn are displayed in display field 1404.”) Chawla does not explicitly disclose the following limitations; however, in analogous art of persona generation and interaction, Abrams discloses the following: receiving persona interaction data based on an interacting persona that interacts with a metaverse user; (Abrams: Paragraph 37, “The domain parser 240 may determine the assessment domain 245 based on any mode of interaction, at any level of granularity, and in any technically feasible fashion. For instance, in some embodiments, the domain parser 240 may first determine the mode of interaction (e.g., whether speech is to be understood, images are to be recognized, text is to be understood, and/or gestures are to be recognized). The domain parser 240 may then analyze the content of the recognition data 225 to determine the assessment domain 245 for the mode of interaction. For example, if speech is to be understood, then the domain parser 240 could perform analysis operations on the recognition data 225 to determine whether the assessment domain 245 is “causal conversation,” “storytelling,” or “homework,” to name a few.”; Paragraph 71, “The inference engine 250 and/or the knowledge subsystem 260 may process the data received from the data sources 150 in any technically feasible fashion. For example, some of the data sources 150 provide structured data (e.g., weather feeds), while other data sources 150 provide unstructured data (e.g., movie scripts). Notably, in some embodiments, the inference engine 250 and/or the knowledge subsystem 260 may receive data via crowdsourcing. For example, the inference engine 250 and/or the knowledge subsystem 260 could receive data from a gamification platform that engages large numbers of users to manually perform entity extraction and relationship mark-up among named entities.
The structured data received by the inference engine 250 and/or the knowledge subsystem 260 could then be included in the training data 262 and used to train any number of the inference algorithms 270.”; Paragraph 80, “For example, if the user platform 120 implements a chat application, then the inference engine 250 could transmit data associated with each new conversation to the knowledge subsystem 260. The data could include state variables such as user specific information, the user intent 235, the assessment domain 245, events that occur during the conversations, inferences and character responses 285 generated by the inference engine 150, and so forth. The knowledge subsystem 260 could evaluate the data, include relevant data in the knowledge database 266, and include data that is suitable for machine learning in the training data 262.”) Chawla discloses a method for generating a virtual persona. Abrams discloses a method for training and creating an avatar for interaction based on a persona. At the time of Applicant’s claimed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Chawla with the teachings of Abrams in order to improve the adaptability and realism of generated personas as disclosed by Abrams (Abrams: Paragraph 6, “Consequently, service providers are often unable to automate interactions for services that rely on establishing and nurturing “real” relationships between users and characters.”) Claim(s) 3 – Chawla in view of Abrams disclose the limitations of claim 1. Chawla further discloses the following: wherein the knowledge corpus comprises personal data, environment data, and workflow data (Chawla: Paragraph 54, “In certain embodiments, the cognitive graph 226 not only elicits and maps expert knowledge by deriving associations from data, it also renders higher level insights and accounts for knowledge creation through collaborative knowledge modeling.
In various embodiments, the cognitive graph 226 is a machine-readable, declarative memory system that stores and learns both episodic memory (e.g., specific personal experiences associated with an individual or entity), and semantic memory, which stores factual information (e.g., geo location of an airport or restaurant).”; Paragraph 187, “As used herein, a cognitive profile refers to a set of data associated with a user, whether anonymous or not. In various embodiments, a cognitive profile may be associated with a particular user (i.e., may be a profile of one). In various embodiments, a cognitive profile refers to an instance of a cognitive persona that references personal data associated with a user. In various embodiments, the personal data may include the user's name, address, Social Security Number (SSN), age, gender, marital status, occupation, employer, income, education, skills, knowledge, interests, preferences, likes and dislikes, goals and plans, and so forth. In certain embodiments, the personal data may include data associated with the user's interaction with a CILS and related composite cognitive insights that are generated and provided to the user. In various embodiments, the user's interaction with a CILS may be provided to the CILS as feedback 958 data. In various embodiments, the personal data may include one or more of a purchase history of the particular user, CRM data associated with the particular user and social media data associated with the particular user. The cognitive profile associated with the particular user ensures that the cognitive profile is specific to that individual and cognitive recommendations, insights and suggestions are generated based on the specific attributes of that user's profile. In certain embodiments, the weight of importance attributed to each attribute can vary. 
E.g., while two users may both like red and green, in certain situations, one user may like red but love green whereas another user may love green but like red.”) Claim(s) 4 – Chawla in view of Abrams disclose the limitations of claim 1. Chawla further discloses the following limitations: wherein the end user information comprises end user answers from end users in response to end user questions; (Chawla: Paragraph 47, “As used herein, temporal/spatial reasoning 214 broadly refers to reasoning based upon qualitative abstractions of temporal and spatial aspects of common sense knowledge, described in greater detail herein. For example, it is not uncommon for a predetermined set of data to change over time. Likewise, other attributes, such as its associated metadata, may likewise change over time. As a result, these changes may affect the context of the data. To further the example, the context of asking someone what they believe they should be doing at 3:00 in the afternoon during the workday while they are at work may be quite different than asking the same user the same question at 3:00 on a Sunday afternoon when they are at home. In certain embodiments, various temporal/spatial reasoning 214 processes are implemented by the CILS 118 to determine the context of queries, and associated data, which are in turn used to generate cognitive insights.”; Paragraph 53, “In various embodiments, the information contained in, and referenced by, a cognitive graph 226 is derived from many sources (e.g., public, private, social, device), such as curated data 222. In certain of these embodiments, the cognitive graph 226 assists in the identification and organization of information associated with how people, places and things are related to one other.
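The weighted cognitive profile described in Chawla's Paragraph 187, including the like-versus-love weighting example just quoted, might be modeled as a per-user mapping from attributes to weights of importance; the class shape, attribute names, and weight values here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CognitiveProfile:
    """A profile of one: a specific user's personal data plus weighted preferences."""
    user_id: str
    preferences: dict = field(default_factory=dict)  # attribute -> weight of importance

    def ranked_preferences(self):
        """Attributes sorted by weight, strongest first, so 'love' outranks 'like'."""
        return sorted(self.preferences, key=self.preferences.get, reverse=True)

# Two users may both like red and green yet weight each color differently.
user_a = CognitiveProfile("user-a", {"red": 1.0, "green": 2.0})  # likes red, loves green
user_b = CognitiveProfile("user-b", {"red": 2.0, "green": 1.0})  # loves red, likes green
```

Recommendations generated from `user_a`'s profile would then favor green over red, and the reverse for `user_b`.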
In various embodiments, the cognitive graph 226 enables automated agents, described in greater detail herein, to access the Web more intelligently, enumerate inferences through utilization of curated, structured data 222, and provide answers to questions by serving as a computational knowledge engine.”; Paragraph 67, “In various embodiments, the graph query engine 326 is implemented to receive and process queries such that they can be bridged into a cognitive graph, as described in greater detail herein, through the use of a bridging agent. In certain embodiments, the graph query engine 326 performs various natural language processing (NLP), familiar to skilled practitioners of the art, to process the queries. In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In various embodiments, two or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may be implemented to operate collaboratively to generate a cognitive insight or recommendation. In certain embodiments, one or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may operate autonomously to generate a cognitive insight or recommendation.”; Paragraph 91, “To further differentiate the distinction between the translate 427 component and the bridging 428 component, the translate 427 component relates to a general domain translation of a question. In contrast, the bridging 428 component allows the question to be asked in the context of a specific domain (e.g., healthcare, travel, etc.), given what is known about the data. 
In certain embodiments, the bridging 428 component is implemented to process what is known about the translated query, in the context of the user, to provide an answer that is relevant to a specific domain.”; Paragraph 93, “In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a target cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In these and other embodiments, the insight/learning engine 330 is implemented to perform insight/learning operations, described in greater detail herein. In various embodiments, the insight/learning engine 330 may include a discover/visibility 430 component, a predict 431 component, a rank/recommend 432 component, and one or more insight 433 agents.”; Paragraph 124, “As used herein, a universal knowledge repository broadly refers to a collection of knowledge elements that can be used in various embodiments to generate one or more cognitive insights described in greater detail herein. In various embodiments, these knowledge elements may include facts (e.g., milk is a dairy product), information (e.g., an answer to a question), descriptions (e.g., the color of an automobile), skills (e.g., the ability to install plumbing fixtures), and other classes of knowledge familiar to those of skill in the art. In these embodiments, the knowledge elements may be explicit or implicit. 
As an example, the fact that water freezes at zero degrees centigrade would be an explicit knowledge element, while the fact that an automobile mechanic knows how to repair an automobile would be an implicit knowledge element.”) the method further comprising: adding end user questions and end user responses to the synthesized user research data in the knowledge corpus; (Chawla: Paragraph 47, “As used herein, temporal/spatial reasoning 214 broadly refers to reasoning based upon qualitative abstractions of temporal and spatial aspects of common sense knowledge, described in greater detail herein. For example, it is not uncommon for a predetermined set of data to change over time. Likewise, other attributes, such as its associated metadata, may likewise change over time. As a result, these changes may affect the context of the data. To further the example, the context of asking someone what they believe they should be doing at 3:00 in the afternoon during the workday while they are at work may be quite different than asking the same user the same question at 3:00 on a Sunday afternoon when they are at home. In certain embodiments, various temporal/spatial reasoning 214 processes are implemented by the CILS 118 to determine the context of queries, and associated data, which are in turn used to generate cognitive insights.”; Paragraph 53, “In various embodiments, the information contained in, and referenced by, a cognitive graph 226 is derived from many sources (e.g., public, private, social, device), such as curated data 222. In certain of these embodiments, the cognitive graph 226 assists in the identification and organization of information associated with how people, places and things are related to one other. 
In various embodiments, the cognitive graph 226 enables automated agents, described in greater detail herein, to access the Web more intelligently, enumerate inferences through utilization of curated, structured data 222, and provide answers to questions by serving as a computational knowledge engine.”; Paragraph 67, “In various embodiments, the graph query engine 326 is implemented to receive and process queries such that they can be bridged into a cognitive graph, as described in greater detail herein, through the use of a bridging agent. In certain embodiments, the graph query engine 326 performs various natural language processing (NLP), familiar to skilled practitioners of the art, to process the queries. In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In various embodiments, two or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may be implemented to operate collaboratively to generate a cognitive insight or recommendation. In certain embodiments, one or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may operate autonomously to generate a cognitive insight or recommendation.”; Paragraph 91, “To further differentiate the distinction between the translate 427 component and the bridging 428 component, the translate 427 component relates to a general domain translation of a question. In contrast, the bridging 428 component allows the question to be asked in the context of a specific domain (e.g., healthcare, travel, etc.), given what is known about the data. 
In certain embodiments, the bridging 428 component is implemented to process what is known about the translated query, in the context of the user, to provide an answer that is relevant to a specific domain.”; Paragraph 93, “In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a target cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In these and other embodiments, the insight/learning engine 330 is implemented to perform insight/learning operations, described in greater detail herein. In various embodiments, the insight/learning engine 330 may include a discover/visibility 430 component, a predict 431 component, a rank/recommend 432 component, and one or more insight 433 agents.”; Paragraph 124, “As used herein, a universal knowledge repository broadly refers to a collection of knowledge elements that can be used in various embodiments to generate one or more cognitive insights described in greater detail herein. In various embodiments, these knowledge elements may include facts (e.g., milk is a dairy product), information (e.g., an answer to a question), descriptions (e.g., the color of an automobile), skills (e.g., the ability to install plumbing fixtures), and other classes of knowledge familiar to those of skill in the art. In these embodiments, the knowledge elements may be explicit or implicit. As an example, the fact that water freezes at zero degrees centigrade would be an explicit knowledge element, while the fact that an automobile mechanic knows how to repair an automobile would be an implicit knowledge element.”) and creating a question and answer component as a part of the dialog engine that is usable by an interacting persona. 
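The question and answer component recited above, drawing on explicit knowledge elements of the kind Chawla's Paragraph 124 lists (facts such as "milk is a dairy product"), could be reduced to a toy lookup like the following; the fact table and keyword-matching rule are hypothetical.

```python
# Toy question-and-answer component over a small repository of explicit
# knowledge elements; the stored facts and the matching rule are hypothetical.
KNOWLEDGE_ELEMENTS = {
    ("milk",): "Milk is a dairy product.",
    ("water", "freeze"): "Water freezes at zero degrees centigrade.",
}

def answer(question):
    """Return the stored fact whose keywords all appear in the question, if any."""
    text = question.lower()
    for keywords, fact in KNOWLEDGE_ELEMENTS.items():
        if all(k in text for k in keywords):
            return fact
    return "I don't know yet."
```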
(Chawla: Paragraph 47, “As used herein, temporal/spatial reasoning 214 broadly refers to reasoning based upon qualitative abstractions of temporal and spatial aspects of common sense knowledge, described in greater detail herein. For example, it is not uncommon for a predetermined set of data to change over time. Likewise, other attributes, such as its associated metadata, may likewise change over time. As a result, these changes may affect the context of the data. To further the example, the context of asking someone what they believe they should be doing at 3:00 in the afternoon during the workday while they are at work may be quite different than asking the same user the same question at 3:00 on a Sunday afternoon when they are at home. In certain embodiments, various temporal/spatial reasoning 214 processes are implemented by the CILS 118 to determine the context of queries, and associated data, which are in turn used to generate cognitive insights.”; Paragraph 53, “In various embodiments, the information contained in, and referenced by, a cognitive graph 226 is derived from many sources (e.g., public, private, social, device), such as curated data 222. In certain of these embodiments, the cognitive graph 226 assists in the identification and organization of information associated with how people, places and things are related to one other. In various embodiments, the cognitive graph 226 enables automated agents, described in greater detail herein, to access the Web more intelligently, enumerate inferences through utilization of curated, structured data 222, and provide answers to questions by serving as a computational knowledge engine.”; Paragraph 67, “In various embodiments, the graph query engine 326 is implemented to receive and process queries such that they can be bridged into a cognitive graph, as described in greater detail herein, through the use of a bridging agent. 
In certain embodiments, the graph query engine 326 performs various natural language processing (NLP), familiar to skilled practitioners of the art, to process the queries. In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In various embodiments, two or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may be implemented to operate collaboratively to generate a cognitive insight or recommendation. In certain embodiments, one or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may operate autonomously to generate a cognitive insight or recommendation.”; Paragraph 91, “To further differentiate the distinction between the translate 427 component and the bridging 428 component, the translate 427 component relates to a general domain translation of a question. In contrast, the bridging 428 component allows the question to be asked in the context of a specific domain (e.g., healthcare, travel, etc.), given what is known about the data. In certain embodiments, the bridging 428 component is implemented to process what is known about the translated query, in the context of the user, to provide an answer that is relevant to a specific domain.”; Paragraph 93, “In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a target cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. 
In these and other embodiments, the insight/learning engine 330 is implemented to perform insight/learning operations, described in greater detail herein. In various embodiments, the insight/learning engine 330 may include a discover/visibility 430 component, a predict 431 component, a rank/recommend 432 component, and one or more insight 433 agents.”; Paragraph 124, “As used herein, a universal knowledge repository broadly refers to a collection of knowledge elements that can be used in various embodiments to generate one or more cognitive insights described in greater detail herein. In various embodiments, these knowledge elements may include facts (e.g., milk is a dairy product), information (e.g., an answer to a question), descriptions (e.g., the color of an automobile), skills (e.g., the ability to install plumbing fixtures), and other classes of knowledge familiar to those of skill in the art. In these embodiments, the knowledge elements may be explicit or implicit. As an example, the fact that water freezes at zero degrees centigrade would be an explicit knowledge element, while the fact that an automobile mechanic knows how to repair an automobile would be an implicit knowledge element.”; Paragraph 140, “As used herein, a cognitive learning category broadly refers to a source of information used by a CILS to perform cognitive learning operations. In various embodiments, the cognitive learning categories 810 may include a data-based 812 cognitive learning category and an interaction-based 814 cognitive learning category. As used herein, a data-based 812 cognitive learning category broadly refers to the use of data as a source of information in the performance of a cognitive learning operation by a CILS.”; Paragraph 151, “For example, an online shopper may select a first pair of shoes that are available in a white, black and brown. The user then elects to view a larger photo of the first pair of shoes, first in white, then in black, but not brown. 
To continue the example, the user then selects a second pair of shoes that are likewise available in white, black and brown. As before, the user elects to view a larger photo of the second pair of shoes, first in white, then in black, but once again, not brown. In this example, the user's online interaction indicates an explicit like for white and black shoes and an explicit dislike for brown shoes.”; Paragraph 173, “In various embodiments, the cognitive insight is delivered to a device, an application, a service, a process, a user, or a combination thereof. In certain embodiments, the resulting interaction information is likewise received by a CILS from a device, an application, a service, a process, a user, or a combination thereof. In various embodiments, the resulting interaction information is provided in the form of feedback data to the CILS. In these embodiments, the method by which the cognitive learning process, and its associated cognitive learning steps, is implemented is a matter of design choice. Skilled practitioners of the art will recognize that many such embodiments are possible and the foregoing is not intended to limit the spirit, scope or intent of the invention.”) Claim(s) 5 – Chawla in view of Abrams disclose the limitations of claim 1. Chawla further discloses the following: wherein the end user information comprises publicly available existing data about the end users or their contexts. (Chawla: Paragraph 211, “In various embodiments, cognitive learning operations may be performed in various phases of a cognitive learning process. In this embodiment, these phases include a source 1034 phase, a learn 1036 phase, an interpret/infer 1038 phase, and an act 1040 phase. In the source 1034 phase, a predetermined instantiation of a cognitive platform 1010 sources social data 1012, public data 1014, device data 1016, and proprietary data 1018 from various sources as described in greater detail herein.
In various embodiments, an example of a cognitive platform 1010 instantiation is the cognitive platform 310 shown in FIGS. 3, 4a, and 4b. In this embodiment, the instantiation of a cognitive platform 1010 includes a source 1006 component, a process 1008 component, a deliver 1030 component, a cleanse 1020 component, an enrich 1022 component, a filter/transform 1024 component, and a repair/reject 1026 component. Likewise, as shown in FIG. 10a, the process 1008 component includes a repository of models 1028, described in greater detail herein.”; Paragraph 214, “In various embodiments, the process 1008 component is implemented to generate various models, described in greater detail herein, which are stored in the repository of models 1028. The process 1008 component is likewise implemented in various embodiments to use the sourced data to generate one or more cognitive graphs, such as an application cognitive graph 1082, as likewise described in greater detail herein. In various embodiments, the process 1008 component is implemented to gain an understanding of the data sourced from the sources of social data 1012, public data 1014, device data 1016, and proprietary data 1018, which assist in the automated generation of the application cognitive graph 1082.”) Claim(s) 8 – Chawla in view of Abrams disclose the limitations of claim 1. Chawla does not explicitly disclose the following limitations; however, in analogous art of persona generation and interaction, Abrams discloses the following: wherein the dialog engine facilitates questions, comments, conversations, and random injections. (Abrams: Paragraph 37, “The domain parser 240 may determine the assessment domain 245 based on any mode of interaction, at any level of granularity, and in any technically feasible fashion.
For instance, in some embodiments, the domain parser 240 may first determine the mode of interaction (e.g., whether speech is to be understood, images are to be recognized, text is to be understood, and/or gestures are to be recognized). The domain parser 240 may then analyze the content of the recognition data 225 to determine the assessment domain 245 for the mode of interaction. For example, if speech is to be understood, then the domain parser 240 could perform analysis operations on the recognition data 225 to determine whether the assessment domain 245 is “causal conversation,” “storytelling,” or “homework,” to name a few.”; Paragraph 56, “Because the inference engine 250 selects and applies the inference algorithms 270 that are optimized with respect to the current context of the interaction, the inference engine 250 may generate realistic inferences for a wide range of interactions via the user platforms 120. Notably, advanced inference algorithms 270 (e.g., TOM systems, etc.) enable the inference engine 250 to take initiative and proactively engage with users. Accordingly, the interaction paradigms implemented in the inference engine 250 are substantially more sophisticated than the question and answer interaction paradigms implemented in many conventional AI systems (e.g., the perception of sympathy, empathy, emotion, etc.). Further, because the number and type of inference algorithms 270 may be modified as time passes, the inference engine 250 may be adapted to exploit advances in technology.”; Paragraph 106, “Advantageously, by implementing the character engine to automate user interactions, service providers provide a convincing illusion that users are interacting with a “real” character instead of a machine. In particular, unlike conventional question and answer based AI systems, the character engine proactively engages with users. For example, the character engine can reply to a question with another question.
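A dialog engine of the kind claim 8 recites, which Abrams contrasts with pure question-and-answer systems (taking initiative, replying to a question with another question, injecting prompts at random), might be sketched as follows; the canned answers, fallback question, and injection probability are assumptions for illustration.

```python
import random

class DialogEngine:
    """Facilitates questions, comments, conversations, and random injections."""

    def __init__(self, answers, injections, inject_prob=0.3, seed=None):
        self.answers = answers        # known utterance -> canned reply
        self.injections = injections  # proactive prompts the persona may raise
        self.inject_prob = inject_prob
        self.rng = random.Random(seed)

    def respond(self, utterance):
        # Answer if known; otherwise reply to the question with another question.
        reply = self.answers.get(utterance.strip().lower(), "Tell me more about that?")
        # Random injection: proactively engage rather than only answering.
        if self.injections and self.rng.random() < self.inject_prob:
            reply += " " + self.rng.choice(self.injections)
        return reply
```

The fallback question mirrors Abrams's point that a character engine "can reply to a question with another question" rather than only answering.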
Because the character engine is modular, the character engine may be configured to implement and select between a wide variety of algorithms that are applicable to different characters, contexts, and user platforms. Further, the algorithms implemented in the character engine may be updated as technology advances. By continuously updating the knowledge database, training data, and local history, the character engine develops a relationship with an individual user that evolves over time. In addition, since the character engine dynamically tailors the responses to different user platforms, the character engine generates the illusion of continuous and consistent relationships that transcend the user platforms.”) Chawla discloses a method for generating a virtual persona. Abrams discloses a method for training and creating an avatar for interaction based on a persona. At the time of Applicant’s claimed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Chawla with the teachings of Abrams in order to improve the adaptability and realism of generated personas as disclosed by Abrams (Abrams: Paragraph 6, “Consequently, service providers are often unable to automate interactions for services that rely on establishing and nurturing “real” relationships between users and characters.”) Claim(s) 9 – Chawla in view of Abrams disclose the limitations of claim 1. Chawla further discloses the following: using machine learning and natural language processing to update the knowledge corpus; (Chawla: Paragraph 95, “In certain embodiments, the insight/learning engine 330 may include additional components. For example the additional components may include classification algorithms, clustering algorithms, and so forth. Skilled practitioners of the art will realize that many such additional components are possible and that the foregoing is not intended to limit the spirit, scope or intent of the invention.
In various embodiments, the insights agents 433 are implemented to create a visual data story, highlighting user-specific insights, relationships and recommendations. As a result, it can share, operationalize, or track business insights in various embodiments. In various embodiments, the learning agent 434 work in the background to continually update the cognitive graph, as described in greater detail herein, from each unique interaction with data and users.”; Paragraph 106, “In various embodiments, the repository of cognitive graphs 457 is implemented to store cognitive graphs generated, accessed, and updated by the cognitive engine 320 in the process of generating cognitive insights. In various embodiments, the repository of cognitive graphs 457 may include one or more repositories of curated data 458, described in greater detail herein. In certain embodiments, the repositories of curated data 458 includes data that has been curated by one or more users, machine operations, or a combination of the two, by performing various sourcing, filtering, and enriching operations described in greater detail herein. In these and other embodiments, the curated data 458 is ingested by the cognitive platform 310 and then processed, as likewise described in greater detail herein, to generate cognitive insights. In various embodiments, the repository of models 459 is implemented to store models that are generated, accessed, and updated by the cognitive engine 320 in the process of generating cognitive insights. As used herein, models broadly refer to machine learning models. In certain embodiments, the models include one or more statistical models,”) identifying common patterns and themes; and (Chawla: Paragraph 47, “As used herein, temporal/spatial reasoning 214 broadly refers to reasoning based upon qualitative abstractions of temporal and spatial aspects of common sense knowledge, described in greater detail herein. 
For example, it is not uncommon for a predetermined set of data to change over time. Likewise, other attributes, such as its associated metadata, may likewise change over time. As a result, these changes may affect the context of the data. To further the example, the context of asking someone what they believe they should be doing at 3:00 in the afternoon during the workday while they are at work may be quite different than asking the same user the same question at 3:00 on a Sunday afternoon when they are at home. In certain embodiments, various temporal/spatial reasoning 214 processes are implemented by the CILS 118 to determine the context of queries, and associated data, which are in turn used to generate cognitive insights.”; Paragraph 48, “As likewise used herein, entity resolution 216 broadly refers to the process of finding elements in a set of data that refer to the same entity across different data sources (e.g., structured, non-structured, streams, devices, etc.), where the target entity does not share a common identifier. In various embodiments, the entity resolution 216 process is implemented by the CILS 118 to identify significant nouns, adjectives, phrases or sentence elements that represent various predetermined entities within one or more domains. From the foregoing, it will be appreciated that the implementation of one or more of the semantic analysis 202, goal optimization 204, collaborative filtering 206, common sense reasoning 208, natural language processing 210, summarization 212, temporal/spatial reasoning 214, and entity resolution 216 processes by the CILS 118 can facilitate the generation of a semantic, cognitive model.”; Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation. 
As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”) using the common patterns and themes in the generating of the persona. (Chawla: Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation. As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) 
as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”; Paragraph 186, “In various embodiments, provision of the composite cognitive insights results in the CILS receiving feedback 958 data from various individual users and other sources, such as cognitive business processes and applications 948. In one embodiment, the feedback 958 data is used to revise or modify the cognitive persona. In another embodiment, the feedback 958 data is used to create a new cognitive persona. In yet another embodiment, the feedback 958 data is used to create one or more associated cognitive personas, which inherit a common set of attributes from a source cognitive persona. In one embodiment, the feedback 958 data is used to create a new cognitive persona that combines attributes from two or more source cognitive personas. In another embodiment, the feedback 958 data is used to create a cognitive profile, described in greater detail herein, based upon the cognitive persona. Those of skill in the art will realize that many such embodiments are possible and the foregoing is not intended to limit the spirit, scope or intent of the invention.”)

Claim(s) 10 – Chawla in view of Abrams discloses the limitations of claims 1 and 9. Chawla further discloses the following: further comprising using a rating system for the common patterns and themes to determine a particular attribute of the persona. (Chawla: Paragraph 46, “Summarization 212, as used herein, broadly refers to processing a set of information, organizing and ranking it, and then generating a corresponding summary. As an example, a news article may be processed to identify its primary topic and associated observations, which are then extracted, ranked, and then presented to the user.
As another example, page ranking operations may be performed on the same news article to identify individual sentences, rank them, order them, and determine which of the sentences are most impactful in describing the article and its content. As yet another example, a structured data record, such as a patient's electronic medical record (EMR), may be processed using the summarization 212 process to generate sentences and phrases that describes the content of the EMR. In various embodiments, various summarization 212 processes are implemented by the CILS 118 to generate summarizations of content streams, which are in turn used to generate cognitive insights.”; Paragraph 94, “In various embodiments, the discover/visibility 430 component is implemented to provide detailed information related to a predetermined topic, such as a subject or an event, along with associated historical information. In certain embodiments, the predict 431 component is implemented to perform predictive operations to provide insight into what may next occur for a predetermined topic. In various embodiments, the rank/recommend 432 component is implemented to perform ranking and recommendation operations to provide a user prioritized recommendations associated with a provided cognitive insight.”; Paragraph 252, “As an example, composite cognitive insights provided by a particular insight agent related to a first subject may not be relevant or particularly useful to a user of the cognitive business processes and applications 304. As a result, the user provides feedback 1062 to that effect, which in turn is stored in the appropriate session graph that is associated with the user and stored in a repository of session graphs ‘1’ through ‘n’ 1052. Accordingly, subsequent insights provided by the insight agent related the first subject may be ranked lower, or not provided, within a cognitive insight summary 1048 provided to the user. 
Conversely, the same insight agent may provide excellent insights related to a second subject, resulting in positive feedback 1062 being received from the user. The positive feedback 1062 is likewise stored in the appropriate session graph that is associated with the user and stored in a repository of session graphs ‘1’ through ‘n’ 1052. As a result, subsequent insights provided by the insight agent related to the second subject may be ranked higher within a cognitive insight summary 1048 provided to the user.”)

Claim(s) 11 and 17 – Chawla in view of Abrams discloses the limitations of claims 1 and 13. Chawla further discloses the following: wherein the segmenting of the end users comprises using a classifier that is a cluster algorithm that uses a distance calculator. (Chawla: Paragraph 95, “In certain embodiments, the insight/learning engine 330 may include additional components. For example the additional components may include classification algorithms, clustering algorithms, and so forth. Skilled practitioners of the art will realize that many such additional components are possible and that the foregoing is not intended to limit the spirit, scope or intent of the invention. In various embodiments, the insights agents 433 are implemented to create a visual data story, highlighting user-specific insights, relationships and recommendations. As a result, it can share, operationalize, or track business insights in various embodiments. In various embodiments, the learning agent 434 work in the background to continually update the cognitive graph, as described in greater detail herein, from each unique interaction with data and users.”; Paragraph 113, “In various embodiments, the compute cluster management 476 sub-component is implemented to manage various computing resources as a compute cluster.
One such example of such a compute cluster management 476 sub-component is Mesos/Nimbus, a cluster management platform that manages distributed hardware resources into a single pool of resources that can be used by application frameworks to efficiently manage workload distribution for both batch jobs and long-running services. In various embodiments, the distributed object storage 478 sub-component is implemented to manage the physical storage and retrieval of distributed objects (e.g., binary file, image, text, etc.) In a cloud environment. Examples of a distributed object storage 478 sub-component include Amazon S3®, available from Amazon.com of Seattle, Wash., and Swift, an open source, scalable and redundant storage system.”; Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation. As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”; Paragraph 189, “In various embodiments, a cognitive persona or cognitive profile is defined by a first set of nodes in a weighted cognitive graph. 
In these embodiments, the cognitive persona or cognitive profile is further defined by a set of attributes that are respectively associated with a set of corresponding nodes in the weighted cognitive graph. In various embodiments, an attribute weight is used to represent a relevance value between two attributes. For example, a higher numeric value (e.g., ‘5.0’) associated with an attribute weight may indicate a higher degree of relevance between two attributes, while a lower numeric value (e.g., ‘0.5’) may indicate a lower degree of relevance.”; Paragraph 190, “In various embodiments, the numeric value associated with attribute weights may change as a result of the performance of composite cognitive insight and feedback 958 operations described in greater detail herein. In one embodiment, the changed numeric values associated with the attribute weights may be used to modify an existing cognitive persona or cognitive profile. In another embodiment, the changed numeric values associated with the attribute weights may be used to generate a new cognitive persona or cognitive profile. In certain embodiments, various ecosystem services 942 are implemented to manage various aspects of the CILS infrastructure, such as interaction with external services. The method by which these various aspects are managed is a matter of design choice.”)

Claim(s) 12 and 18 – Chawla in view of Abrams discloses the limitations of claims 1 and 13. Chawla does not explicitly disclose the following limitations; however, in the analogous art of persona generation and interaction, Abrams discloses the following: plugging in the persona to an ongoing workflow; (Abrams: Paragraph 23, “To provide a convincing illusion that users are interacting with a “real” character, the memory 118 includes, without limitation, the character engine 140.
In operation the character engine 140 combines sensing algorithms, advanced thinking and learning algorithms, and expressing algorithms in a flexible and adaptive manner. Upon receiving data via the user platform 120, the character engine 140 determines a current context that includes a user intent and an assessment domain. To generate character responses, the character engine 140 selects and applies inference algorithm(s) and personality engine(s) based on the current context and data received from the data sources 150. Finally, the character engine 140 tailors the character responses to the capabilities of the user platform 120”; Paragraph 38, “Together, the user intent 235 and the assessment domain 245 provide a current context for the interaction with the user. Notably, the user intent 235, the mode(s) of interaction, and/or the assessment domain 245 may change as time passes. For example, at one particular time, the user intent engine 230 and the domain parser 240 could determine that the user intent 235 is “asking a question” and the assessment domain 245 is “sports.” At a subsequent time, the user intent engine 230 and the domain parser 240 could determine that the the user intent 235 is “asking a question” and the assessment domain 245 is “cooking.””; Paragraph 47, “The domain parser 245 transmits the recognition data 225, the user intent 235, and the assessment domain 245 to the inference engine 250. The inference engine 250 then establishes a current context based on the user intent 235 and the assessment domain 245. The inference engine 250 may also refine the current context, the user intent 235, and/or the assessment domain 245 in any technically feasible fashion and based on any data. 
For example, the inference engine 250 may derive a current context from the assessment domain 245 and the user intent 235 and then perform additional assessment operations on the recognition data 225 to refine the current context.”) checking the workflow against a set of workflow parameters stored in a workflow data portion of the knowledge corpus; and (Abrams: Paragraph 57, “For example, one hard wired personality engine representing a chatbot could have a finite set of scripted responses based on a finite set of inputs. Another hard wired personality engine for the chatbot could include a finite set of classifiers that parse ambiguous inputs. By contrast, the personality engine 280 for the chatbot includes the parameterizable model 320, and is not constrained to a finite set of chat inputs and outputs.”; Paragraph 82, “In another example, suppose that the user holds a rock in front of a camera for identification, the inference engine 250 incorrectly identifies the rock as a brick, and the user then states that the object is actually a rock. As part of the offline learning 265, a convolution neural network used for the image recognition could be updated with the new training data 262 that includes the correct identification of the rock. Consequently, the image identification accuracy of the convolution neural network would improve.”; Paragraph 88, “The personality engine 280(1) is based on a “personality color wheel.” As shown, the personality engine 280(1) includes, without limitation, the parameterizable model 320 and a multi-dimensional vector of scalars (e.g., coefficients, weights, etc.) that define the personality 350. The parameterizable model 320 includes eight personality dimensions 320: optimism, love, submission, awe, disapproval, remorse, contempt, and aggressiveness. Although not shown, the colors and shades vary across the parameterizable model 320. The personality 350 is defined by a multi-dimensional vector that includes eight vectors (v1-v8). 
Each vector corresponds to one of the personality dimensions 320 and includes any number of scalar values.”; Paragraph 94, “At step 408, the domain parser 240 analyzes the recognition data 225 to determine the assessment domain 245. The domain parser 240 may determine the assessment domain 245 at any level of granularity and any in any technically feasible fashion. For example, the domain parser 240 could analyze the content of the recognition data 225 to determine that the assessment domain 245 is “causal conversation,” “storytelling,” or “homework.” In some embodiments, as part of determining the assessment domain 245, the domain parser 240 may refine the user intent 235. For example, the domain parser 240 may refine the user intent 235 from “asking a question,” to “asking a question about sports.” Together, the user intent 235 and the assessment domain 245 provide a current context for the interaction with the user. The domain parser 240 transmits the recognition data 225, the user intent 235, and the assessment domain 245 to the inference engine 250.”) calling out identified anomalies and changes to workflow patterns. (Abrams: Paragraph 100, “FIG. 5 is a flow diagram of method steps for incorporating data into a model of a character, according to various embodiments of the present invention. Although the method steps are described with reference to the systems of FIGS. 1-3, persons skilled in the art will understand that any system configured to implement the method steps, in any order, falls within the scope of the present invention. As persons skilled in the art will recognize, each of the method steps may be performed in a batch mode while the inference engine 250 is not interacting with users or a run-time mode while the inference engine 250 is interacting with users.”; Paragraph 105, “In sum, the disclosed techniques may be implemented to automate interactions with users. 
In general, a character engine provides modular sensing, thinking and learning, and expressing functionality. More specifically, an input platform abstraction infrastructure and a sensor processing engine implement sensing algorithms that process user data received from any number and type of user platforms (e.g., dolls, teleconferencing, avatar, etc.) to generate recognition data. A user intent engine, a domain parser, and an inference engine implement analysis and machine learning algorithms that generate inferences that are consistent with the current context (e.g., user intent, assessment domain, etc.). A personality engine and an output platform abstraction infrastructure implement algorithms that express character responses based on the inferences and tuned to the character and the particular user platform. Further, to evolve and refine the character, the character engine continuously updates a knowledge database, training data used to train the machine learning algorithms, and individual user histories based on interactions with users and external data sources.”; Paragraph 106, “Advantageously, by implementing the character engine to automate user interactions, service providers provide a convincing illusion that users are interacting with a “real” character instead of a machine. In particular, unlike conventional question and answer based AI systems, the character engine proactively engages with users. For example, the character engine can reply to a question with another question. Because the character engine is modular, the character engine may be configured to implement and select between a wide variety of algorithms that are applicable to different characters, contexts, and user platforms. Further, the algorithms implemented in the character engine may be updated as technology advances. 
By continuously updating the knowledge database, training data, and local history, the character engine develops a relationship with an individual user that evolves over time. In addition, since the character engine dynamically tailors the responses to different user platforms, the character engine generates the illusion of continuous and consistent relationships that transcend the user platforms.”)

Chawla discloses a method for generating a virtual persona. Abrams discloses a method for training and creating an avatar for interaction based on a persona. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the methods of Chawla with the teachings of Abrams in order to improve the adaptability and realism of generated personas, as disclosed by Abrams (Abrams: Paragraph 6, “Consequently, service providers are often unable to automate interactions for services that rely on establishing and nurturing “real” relationships between users and characters.”).

Claim(s) 14 – Chawla in view of Abrams discloses the limitations of claim 13. Chawla further discloses the following: adding the persona interaction data to the training data in the knowledge corpus; and subsequently (Chawla: Paragraph 165, “As used herein, a supervised learning 818 machine learning algorithm broadly refers to a machine learning approach for inferring a function from labeled training data. The training data typically consists of a set of training examples, with each example consisting of an input object (e.g., a vector) and a desired output value(e.g., a supervisory signal).
In various embodiments, a supervised learning algorithm is implemented to analyze the training data and produce an inferred function, which can be used for mapping new examples.”; Paragraph 166, “As used herein, an unsupervised learning 820 machine learning algorithm broadly refers to a machine learning approach for finding non-obvious or hidden structures within a set of unlabeled data. In various embodiments, the unsupervised learning 820 machine learning algorithm is not given a set of training examples. Instead, it attempts to summarize and explain key features of the data it processes. Examples of unsupervised learning approaches include clustering (e.g., k-means, mixture models, hierarchical clustering, etc.) and latent variable models (e.g., expectation-maximization algorithms, method of moments, blind signal separation techniques, etc.).”; Paragraph 273, “The resulting feedback and the cognitive profile currently in use is then used to perform additional cognitive learning operations, which results in the generation of either a new, or refined, cognitive profile. The new, or refined, cognitive profile and the provided input data are used in combination with associated cognitive insight attributes to generate a second set of cognitive profile generation suggestions, which are then provided to the user within the sub-window 1308 of the UI window 1302 shown in FIG. 13. In these embodiments, the cognitive profile generation operations described in the descriptive text associated with FIG. 13 are repeated to generate a new or refined cognitive profile. In certain of these embodiments, the resulting refined cognitive profile is used to generate additional cognitive insights, which in turn are displayed in display field 1404.”) training another persona using the persona interaction data in the training data of the knowledge corpus. 
(Chawla: Paragraph 165, “As used herein, a supervised learning 818 machine learning algorithm broadly refers to a machine learning approach for inferring a function from labeled training data. The training data typically consists of a set of training examples, with each example consisting of an input object (e.g., a vector) and a desired output value(e.g., a supervisory signal). In various embodiments, a supervised learning algorithm is implemented to analyze the training data and produce an inferred function, which can be used for mapping new examples.”; Paragraph 166, “As used herein, an unsupervised learning 820 machine learning algorithm broadly refers to a machine learning approach for finding non-obvious or hidden structures within a set of unlabeled data. In various embodiments, the unsupervised learning 820 machine learning algorithm is not given a set of training examples. Instead, it attempts to summarize and explain key features of the data it processes. Examples of unsupervised learning approaches include clustering (e.g., k-means, mixture models, hierarchical clustering, etc.) and latent variable models (e.g., expectation-maximization algorithms, method of moments, blind signal separation techniques, etc.).”; Paragraph 273, “The resulting feedback and the cognitive profile currently in use is then used to perform additional cognitive learning operations, which results in the generation of either a new, or refined, cognitive profile. The new, or refined, cognitive profile and the provided input data are used in combination with associated cognitive insight attributes to generate a second set of cognitive profile generation suggestions, which are then provided to the user within the sub-window 1308 of the UI window 1302 shown in FIG. 13. In these embodiments, the cognitive profile generation operations described in the descriptive text associated with FIG. 13 are repeated to generate a new or refined cognitive profile. 
In certain of these embodiments, the resulting refined cognitive profile is used to generate additional cognitive insights, which in turn are displayed in display field 1404.”) wherein the end user information comprises end user answers from end users in response to end user questions; (Chawla: Paragraph 47, “As used herein, temporal/spatial reasoning 214 broadly refers to reasoning based upon qualitative abstractions of temporal and spatial aspects of common sense knowledge, described in greater detail herein. For example, it is not uncommon for a predetermined set of data to change over time. Likewise, other attributes, such as its associated metadata, may likewise change over time. As a result, these changes may affect the context of the data. To further the example, the context of asking someone what they believe they should be doing at 3:00 in the afternoon during the workday while they are at work may be quite different than asking the same user the same question at 3:00 on a Sunday afternoon when they are at home. In certain embodiments, various temporal/spatial reasoning 214 processes are implemented by the CILS 118 to determine the context of queries, and associated data, which are in turn used to generate cognitive insights.”; Paragraph 53, “In various embodiments, the information contained in, and referenced by, a cognitive graph 226 is derived from many sources (e.g., public, private, social, device), such as curated data 222. In certain of these embodiments, the cognitive graph 226 assists in the identification and organization of information associated with how people, places and things are related to one other. 
In various embodiments, the cognitive graph 226 enables automated agents, described in greater detail herein, to access the Web more intelligently, enumerate inferences through utilization of curated, structured data 222, and provide answers to questions by serving as a computational knowledge engine.”; Paragraph 67, “In various embodiments, the graph query engine 326 is implemented to receive and process queries such that they can be bridged into a cognitive graph, as described in greater detail herein, through the use of a bridging agent. In certain embodiments, the graph query engine 326 performs various natural language processing (NLP), familiar to skilled practitioners of the art, to process the queries. In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In various embodiments, two or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may be implemented to operate collaboratively to generate a cognitive insight or recommendation. In certain embodiments, one or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may operate autonomously to generate a cognitive insight or recommendation.”; Paragraph 91, “To further differentiate the distinction between the translate 427 component and the bridging 428 component, the translate 427 component relates to a general domain translation of a question. In contrast, the bridging 428 component allows the question to be asked in the context of a specific domain (e.g., healthcare, travel, etc.), given what is known about the data. 
In certain embodiments, the bridging 428 component is implemented to process what is known about the translated query, in the context of the user, to provide an answer that is relevant to a specific domain.”; Paragraph 93, “In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a target cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In these and other embodiments, the insight/learning engine 330 is implemented to perform insight/learning operations, described in greater detail herein. In various embodiments, the insight/learning engine 330 may include a discover/visibility 430 component, a predict 431 component, a rank/recommend 432 component, and one or more insight 433 agents.”; Paragraph 124, “As used herein, a universal knowledge repository broadly refers to a collection of knowledge elements that can be used in various embodiments to generate one or more cognitive insights described in greater detail herein. In various embodiments, these knowledge elements may include facts (e.g., milk is a dairy product), information (e.g., an answer to a question), descriptions (e.g., the color of an automobile), skills (e.g., the ability to install plumbing fixtures), and other classes of knowledge familiar to those of skill in the art. In these embodiments, the knowledge elements may be explicit or implicit. 
As an example, the fact that water freezes at zero degrees centigrade would be an explicit knowledge element, while the fact that an automobile mechanic knows how to repair an automobile would be an implicit knowledge element.”) the method further comprising: adding end user questions and end user responses to the synthesized user research data in the knowledge corpus; (Chawla: Paragraph 47, “As used herein, temporal/spatial reasoning 214 broadly refers to reasoning based upon qualitative abstractions of temporal and spatial aspects of common sense knowledge, described in greater detail herein. For example, it is not uncommon for a predetermined set of data to change over time. Likewise, other attributes, such as its associated metadata, may likewise change over time. As a result, these changes may affect the context of the data. To further the example, the context of asking someone what they believe they should be doing at 3:00 in the afternoon during the workday while they are at work may be quite different than asking the same user the same question at 3:00 on a Sunday afternoon when they are at home. In certain embodiments, various temporal/spatial reasoning 214 processes are implemented by the CILS 118 to determine the context of queries, and associated data, which are in turn used to generate cognitive insights.”; Paragraph 53, “In various embodiments, the information contained in, and referenced by, a cognitive graph 226 is derived from many sources (e.g., public, private, social, device), such as curated data 222. In certain of these embodiments, the cognitive graph 226 assists in the identification and organization of information associated with how people, places and things are related to one other. 
In various embodiments, the cognitive graph 226 enables automated agents, described in greater detail herein, to access the Web more intelligently, enumerate inferences through utilization of curated, structured data 222, and provide answers to questions by serving as a computational knowledge engine.”; Paragraph 67, “In various embodiments, the graph query engine 326 is implemented to receive and process queries such that they can be bridged into a cognitive graph, as described in greater detail herein, through the use of a bridging agent. In certain embodiments, the graph query engine 326 performs various natural language processing (NLP), familiar to skilled practitioners of the art, to process the queries. In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In various embodiments, two or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may be implemented to operate collaboratively to generate a cognitive insight or recommendation. In certain embodiments, one or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may operate autonomously to generate a cognitive insight or recommendation.”; Paragraph 91, “To further differentiate the distinction between the translate 427 component and the bridging 428 component, the translate 427 component relates to a general domain translation of a question. In contrast, the bridging 428 component allows the question to be asked in the context of a specific domain (e.g., healthcare, travel, etc.), given what is known about the data. 
In certain embodiments, the bridging 428 component is implemented to process what is known about the translated query, in the context of the user, to provide an answer that is relevant to a specific domain.”; Paragraph 93, “In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a target cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In these and other embodiments, the insight/learning engine 330 is implemented to perform insight/learning operations, described in greater detail herein. In various embodiments, the insight/learning engine 330 may include a discover/visibility 430 component, a predict 431 component, a rank/recommend 432 component, and one or more insight 433 agents.”; Paragraph 124, “As used herein, a universal knowledge repository broadly refers to a collection of knowledge elements that can be used in various embodiments to generate one or more cognitive insights described in greater detail herein. In various embodiments, these knowledge elements may include facts (e.g., milk is a dairy product), information (e.g., an answer to a question), descriptions (e.g., the color of an automobile), skills (e.g., the ability to install plumbing fixtures), and other classes of knowledge familiar to those of skill in the art. In these embodiments, the knowledge elements may be explicit or implicit. As an example, the fact that water freezes at zero degrees centigrade would be an explicit knowledge element, while the fact that an automobile mechanic knows how to repair an automobile would be an implicit knowledge element.”) and creating a question and answer component as a part of the dialog engine that is usable by an interacting persona. 
(Chawla: Paragraph 47, “As used herein, temporal/spatial reasoning 214 broadly refers to reasoning based upon qualitative abstractions of temporal and spatial aspects of common sense knowledge, described in greater detail herein. For example, it is not uncommon for a predetermined set of data to change over time. Likewise, other attributes, such as its associated metadata, may likewise change over time. As a result, these changes may affect the context of the data. To further the example, the context of asking someone what they believe they should be doing at 3:00 in the afternoon during the workday while they are at work may be quite different than asking the same user the same question at 3:00 on a Sunday afternoon when they are at home. In certain embodiments, various temporal/spatial reasoning 214 processes are implemented by the CILS 118 to determine the context of queries, and associated data, which are in turn used to generate cognitive insights.”; Paragraph 53, “In various embodiments, the information contained in, and referenced by, a cognitive graph 226 is derived from many sources (e.g., public, private, social, device), such as curated data 222. In certain of these embodiments, the cognitive graph 226 assists in the identification and organization of information associated with how people, places and things are related to one other. In various embodiments, the cognitive graph 226 enables automated agents, described in greater detail herein, to access the Web more intelligently, enumerate inferences through utilization of curated, structured data 222, and provide answers to questions by serving as a computational knowledge engine.”; Paragraph 67, “In various embodiments, the graph query engine 326 is implemented to receive and process queries such that they can be bridged into a cognitive graph, as described in greater detail herein, through the use of a bridging agent. 
In certain embodiments, the graph query engine 326 performs various natural language processing (NLP), familiar to skilled practitioners of the art, to process the queries. In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In various embodiments, two or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may be implemented to operate collaboratively to generate a cognitive insight or recommendation. In certain embodiments, one or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may operate autonomously to generate a cognitive insight or recommendation.”; Paragraph 91, “To further differentiate the distinction between the translate 427 component and the bridging 428 component, the translate 427 component relates to a general domain translation of a question. In contrast, the bridging 428 component allows the question to be asked in the context of a specific domain (e.g., healthcare, travel, etc.), given what is known about the data. In certain embodiments, the bridging 428 component is implemented to process what is known about the translated query, in the context of the user, to provide an answer that is relevant to a specific domain.”; Paragraph 93, “In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a target cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. 
In these and other embodiments, the insight/learning engine 330 is implemented to perform insight/learning operations, described in greater detail herein. In various embodiments, the insight/learning engine 330 may include a discover/visibility 430 component, a predict 431 component, a rank/recommend 432 component, and one or more insight 433 agents.”; Paragraph 124, “As used herein, a universal knowledge repository broadly refers to a collection of knowledge elements that can be used in various embodiments to generate one or more cognitive insights described in greater detail herein. In various embodiments, these knowledge elements may include facts (e.g., milk is a dairy product), information (e.g., an answer to a question), descriptions (e.g., the color of an automobile), skills (e.g., the ability to install plumbing fixtures), and other classes of knowledge familiar to those of skill in the art. In these embodiments, the knowledge elements may be explicit or implicit. As an example, the fact that water freezes at zero degrees centigrade would be an explicit knowledge element, while the fact that an automobile mechanic knows how to repair an automobile would be an implicit knowledge element.”; Paragraph 140, “As used herein, a cognitive learning category broadly refers to a source of information used by a CILS to perform cognitive learning operations. In various embodiments, the cognitive learning categories 810 may include a data-based 812 cognitive learning category and an interaction-based 814 cognitive learning category. As used herein, a data-based 812 cognitive learning category broadly refers to the use of data as a source of information in the performance of a cognitive learning operation by a CILS.”; Paragraph 151, “For example, an online shopper may select a first pair of shoes that are available in a white, black and brown. The user then elects to view a larger photo of the first pair of shoes, first in white, then in black, but not brown. 
To continue the example, the user then selects a second pair of shoes that are likewise available in white, black and brown. As before, the user elects to view a larger photo of the second pair of shoes, first in white, then in black, but once again, not brown. In this example, the user's online interaction indicates an explicit like for white and black shoes and an explicit dislike for brown shoes.”; Paragraph 173, “In various embodiments, the cognitive insight is delivered to a device, an application, a service, a process, a user, or a combination thereof. In certain embodiments, the resulting interaction information is likewise received by a CILS from a device, an application, a service, a process, a user, or a combination thereof. In various embodiments, the resulting interaction information is provided in the form of feedback data to the CILS. In these embodiments, the method by which the cognitive learning process, and its associated cognitive learning steps, is implemented is a matter of design choice. Skilled practitioners of the art will recognize that many such embodiments are possible and the foregoing is not intended to limit the spirit, scope or intent of the invention.”) Chawla does not explicitly disclose the following limitations; however, in the analogous art of persona generation and interaction, Abrams discloses the following: receiving persona interaction data based on an interacting persona that interacts with a metaverse user; (Abrams: Paragraph 37, “The domain parser 240 may determine the assessment domain 245 based on any mode of interaction, at any level of granularity, and in any technically feasible fashion. For instance, in some embodiments, the domain parser 240 may first determine the mode of interaction (e.g., whether speech is to be understood, images are to be recognized, text is to be understood, and/or gestures are to be recognized). 
The domain parser 240 may then analyze the content of the recognition data 225 to determine the assessment domain 245 for the mode of interaction. For example, if speech is to be understood, then the domain parser 240 could perform analysis operations on the recognition data 225 to determine whether the assessment domain 245 is “causal conversation,” “storytelling,” or “homework,” to name a few.”; Paragraph 71, “The inference engine 250 and/or the knowledge subsystem 260 may process the data received from the data sources 150 in any technically feasible fashion. For example, some of the data sources 150 provide structured data (e.g., weather feeds), while other data sources 150 provide unstructured data (e.g., movie scripts). Notably, in some embodiments, the inference engine 250 and/or the knowledge subsystem 260 may receive data via crowdsourcing. For example, the inference engine 250 and/or the knowledge subsystem 260 could receive data from a gamification platform that engages large numbers of users to manually perform entity extraction and relationship mark-up among named entities. The structured data received by the inference engine 250 and/or the knowledge subsystem 260 could then be included in the training data 262 and used to train any number of the inference algorithms 270.”; Paragraph 80, “For example, if the user platform 120 implements a chat application, then the inference engine 250 could transmit data associated with each new conversation to the knowledge subsystem 260. The data could include state variables such as user specific information, the user intent 235, the assessment domain 245, events that occur during the conversations, inferences and character responses 285 generated by the inference engine 150, and so forth. 
The knowledge subsystem 260 could evaluate the data, include relevant data in the knowledge database 266, and include data that is suitable for machine learning in the training data 262.”) Chawla discloses a method for generating a virtual persona. Abrams discloses a method for training and creating an avatar for interaction based on a persona. Before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to combine the methods of Chawla with the teachings of Abrams in order to improve the adaptability and realism of generated personas, as disclosed by Abrams (Abrams: Paragraph 6, “Consequently, service providers are often unable to automate interactions for services that rely on establishing and nurturing “real” relationships between users and characters.”). Claim(s) 16 – Chawla in view of Abrams discloses the limitations of claim 13. Chawla further discloses the following: using machine learning and natural language processing to update the knowledge corpus; (Chawla: Paragraph 95, “In certain embodiments, the insight/learning engine 330 may include additional components. For example the additional components may include classification algorithms, clustering algorithms, and so forth. Skilled practitioners of the art will realize that many such additional components are possible and that the foregoing is not intended to limit the spirit, scope or intent of the invention. In various embodiments, the insights agents 433 are implemented to create a visual data story, highlighting user-specific insights, relationships and recommendations. As a result, it can share, operationalize, or track business insights in various embodiments. 
In various embodiments, the learning agent 434 work in the background to continually update the cognitive graph, as described in greater detail herein, from each unique interaction with data and users.”; Paragraph 106, “In various embodiments, the repository of cognitive graphs 457 is implemented to store cognitive graphs generated, accessed, and updated by the cognitive engine 320 in the process of generating cognitive insights. In various embodiments, the repository of cognitive graphs 457 may include one or more repositories of curated data 458, described in greater detail herein. In certain embodiments, the repositories of curated data 458 includes data that has been curated by one or more users, machine operations, or a combination of the two, by performing various sourcing, filtering, and enriching operations described in greater detail herein. In these and other embodiments, the curated data 458 is ingested by the cognitive platform 310 and then processed, as likewise described in greater detail herein, to generate cognitive insights. In various embodiments, the repository of models 459 is implemented to store models that are generated, accessed, and updated by the cognitive engine 320 in the process of generating cognitive insights. As used herein, models broadly refer to machine learning models. In certain embodiments, the models include one or more statistical models,”) identifying common patterns and themes; and (Chawla: Paragraph 47, “As used herein, temporal/spatial reasoning 214 broadly refers to reasoning based upon qualitative abstractions of temporal and spatial aspects of common sense knowledge, described in greater detail herein. For example, it is not uncommon for a predetermined set of data to change over time. Likewise, other attributes, such as its associated metadata, may likewise change over time. As a result, these changes may affect the context of the data. 
To further the example, the context of asking someone what they believe they should be doing at 3:00 in the afternoon during the workday while they are at work may be quite different than asking the same user the same question at 3:00 on a Sunday afternoon when they are at home. In certain embodiments, various temporal/spatial reasoning 214 processes are implemented by the CILS 118 to determine the context of queries, and associated data, which are in turn used to generate cognitive insights.”; Paragraph 48, “As likewise used herein, entity resolution 216 broadly refers to the process of finding elements in a set of data that refer to the same entity across different data sources (e.g., structured, non-structured, streams, devices, etc.), where the target entity does not share a common identifier. In various embodiments, the entity resolution 216 process is implemented by the CILS 118 to identify significant nouns, adjectives, phrases or sentence elements that represent various predetermined entities within one or more domains. From the foregoing, it will be appreciated that the implementation of one or more of the semantic analysis 202, goal optimization 204, collaborative filtering 206, common sense reasoning 208, natural language processing 210, summarization 212, temporal/spatial reasoning 214, and entity resolution 216 processes by the CILS 118 can facilitate the generation of a semantic, cognitive model.”; Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation. As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. 
In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”) using the common patterns and themes in the generating of the persona. (Chawla: Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation. As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) 
as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”; Paragraph 186, “In various embodiments, provision of the composite cognitive insights results in the CILS receiving feedback 958 data from various individual users and other sources, such as cognitive business processes and applications 948. In one embodiment, the feedback 958 data is used to revise or modify the cognitive persona. In another embodiment, the feedback 958 data is used to create a new cognitive persona. In yet another embodiment, the feedback 958 data is used to create one or more associated cognitive personas, which inherit a common set of attributes from a source cognitive persona. In one embodiment, the feedback 958 data is used to create a new cognitive persona that combines attributes from two or more source cognitive personas. In another embodiment, the feedback 958 data is used to create a cognitive profile, described in greater detail herein, based upon the cognitive persona. Those of skill in the art will realize that many such embodiments are possible and the foregoing is not intended to limit the spirit, scope or intent of the invention.”) further comprising using a rating system for the common patterns and themes to determine a particular attribute of the persona. (Chawla: Paragraph 46, “Summarization 212, as used herein, broadly refers to processing a set of information, organizing and ranking it, and then generating a corresponding summary. As an example, a news article may be processed to identify its primary topic and associated observations, which are then extracted, ranked, and then presented to the user. As another example, page ranking operations may be performed on the same news article to identify individual sentences, rank them, order them, and determine which of the sentences are most impactful in describing the article and its content. 
As yet another example, a structured data record, such as a patient's electronic medical record (EMR), may be processed using the summarization 212 process to generate sentences and phrases that describes the content of the EMR. In various embodiments, various summarization 212 processes are implemented by the CILS 118 to generate summarizations of content streams, which are in turn used to generate cognitive insights.”; Paragraph 94, “In various embodiments, the discover/visibility 430 component is implemented to provide detailed information related to a predetermined topic, such as a subject or an event, along with associated historical information. In certain embodiments, the predict 431 component is implemented to perform predictive operations to provide insight into what may next occur for a predetermined topic. In various embodiments, the rank/recommend 432 component is implemented to perform ranking and recommendation operations to provide a user prioritized recommendations associated with a provided cognitive insight.”; Paragraph 252, “As an example, composite cognitive insights provided by a particular insight agent related to a first subject may not be relevant or particularly useful to a user of the cognitive business processes and applications 304. As a result, the user provides feedback 1062 to that effect, which in turn is stored in the appropriate session graph that is associated with the user and stored in a repository of session graphs ‘1’ through ‘n’ 1052. Accordingly, subsequent insights provided by the insight agent related the first subject may be ranked lower, or not provided, within a cognitive insight summary 1048 provided to the user. Conversely, the same insight agent may provide excellent insights related to a second subject, resulting in positive feedback 1062 being received from the user. 
The positive feedback 1062 is likewise stored in the appropriate session graph that is associated with the user and stored in a repository of session graphs ‘1’ through ‘n’ 1052. As a result, subsequent insights provided by the insight agent related to the second subject may be ranked higher within a cognitive insight summary 1048 provided to the user.”) Claim(s) 6-7, 15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chawla (US 2017/0185919 A1) in view of Abrams (US 2018/0165596 A1) and Nichols (US 7,660,778 B1). Claim(s) 6-7 and 15 – Chawla in view of Abrams discloses the limitations of claims 1 and 13. Chawla does not explicitly disclose the following limitations; however, in the analogous art of persona generation and interaction, Abrams discloses the following: each avatar is accessible within a metaverse environment; (Abrams: Paragraph 29, “In operation, the input platform abstraction infrastructure 210 receives input data from any number and types of the user platforms 120. For instance, in some embodiments, the input platform abstraction infrastructure 210 may receive text, voice, accelerometer, and video data from the smartphone 122. In other embodiments, the input platform abstraction infrastructure 210 may receive discrete button, voice, and motion data from the robot 132 and/or the toy 138. 
In yet other embodiments, the input platform abstraction infrastructure 210 may receive control inputs from the game console 126 or a smart television; audio from a telephone or kiosk microphone; and/or voice and imagery from augmented and virtual Reality (AR/VR) systems.”) Chawla in view of Abrams does not explicitly disclose the following; however, in the analogous art of virtual reality interactions, Nichols discloses the following: the metaverse environment is a virtual enterprise design thinking workshop (VEDTW); (Nichols: Column 4 lines 1-14, “The simulation model executes the business function that the student is learning and is therefore the center point of the application. An activity `layer` allows the user to visually guide the simulation by passing inputs into the simulation engine and receiving an output from the simulation model. For example, if the student was working on an income statement activity, the net sales and cost of goods sold calculations are passed as inputs to the simulation model and the net income value is calculated and retrieved as an output. As calculations are passed to and retrieved from the simulation model, they are also passed to the Intelligent Coaching Agent (ICA). The ICA analyzes the Inputs and Outputs to the simulation model and generates feedback based on a set of rules. This feedback is received and displayed through the Visual Basic Architecture.”; Column 4 line 60 – Column 5 line 4, “Business simulation in accordance with a preferred embodiment delivers training curricula in an optimal manner. This is because such applications provide effective training that mirrors a student's actual work environment. The application of skills "on the job" facilitates increased retention and higher overall job performance. While the results of such training applications are impressive, business simulations are very complex to design and build correctly. 
These simulations are characterized by a very open-ended environment, where students can go through the application along any number of paths, depending on their learning style and prior experiences/knowledge.”; Column 10 lines 24 – 65, “We have clearly defined why a combined component/framework approach is the best solution for delivering high-quality BusSim solutions at a lower cost. Given that there are a number of third party frameworks already on the market that provide delivery capability for a wide variety of platforms, the TEL project is focused on defining and developing a set of components that provide unique services for the development and delivery of BusSim solutions. These components along with a set of design and test workbenches are the tools used by instructional designers to support activities in the four phases of BusSim development. We call this suite of tools the Business Simulation Toolset. Following is a description of each of the components and workbenches of the toolset. A Component can be thought of as a black box that encapsulates the behavior and data necessary to support a related set of services. It exposes these services to the outside world through published interfaces. The published interface of a component allows you to understand what it does through the services it offers, but not how it does it. The complexity of its implementation is hidden from the user. The following are the key components of the BusSim Toolset. Domain Component--provides services for modeling the state of a simulation. Profiling Component--provides services for rule-based evaluating the state of a simulation. Transformation Component--provides services for manipulating the state of a simulation. Remediation Component--provides services for the rule-based delivering of feedback to the student The Domain Model component is the central component of the suite that facilitates communication of context data across the application and the other components. 
It is a modeling tool that can use industry-standard database such as Informix, Oracle, or Sybase to store its data. A domain model is a representation of the objects in a simulation. The objects are such pseudo tangible things as a lever the student can pull, a form or notepad the student fills out, a character the student interacts with in a simulated meeting, etc. They can also be abstract objects such as the ROI for a particular investment, the number of times the student asked a particular question, etc. These objects are called entities. Some example entities include: Vehicles, operators and incidents in an insurance domain; Journal entries, cash flow statements and balance sheets in a financial accounting domain and Consumers and purchases in a marketing domain.”) the metaverse users are VEDTW students; (Nichols: Column 4 lines 1-14, “The simulation model executes the business function that the student is learning and is therefore the center point of the application. An activity `layer` allows the user to visually guide the simulation by passing inputs into the simulation engine and receiving an output from the simulation model. For example, if the student was working on an income statement activity, the net sales and cost of goods sold calculations are passed as inputs to the simulation model and the net income value is calculated and retrieved as an output. As calculations are passed to and retrieved from the simulation model, they are also passed to the Intelligent Coaching Agent (ICA). The ICA analyzes the Inputs and Outputs to the simulation model and generates feedback based on a set of rules. This feedback is received and displayed through the Visual Basic Architecture.”; Column 4 line 60 – Column 5 line 4, “Business simulation in accordance with a preferred embodiment delivers training curricula in an optimal manner. This is because such applications provide effective training that mirrors a student's actual work environment. 
The application of skills "on the job" facilitates increased retention and higher overall job performance. While the results of such training applications are impressive, business simulations are very complex to design and build correctly. These simulations are characterized by a very open-ended environment, where students can go through the application along any number of paths, depending on their learning style and prior experiences/knowledge.”; Column 10 lines 24 – 65, “We have clearly defined why a combined component/framework approach is the best solution for delivering high-quality BusSim solutions at a lower cost. Given that there are a number of third party frameworks already on the market that provide delivery capability for a wide variety of platforms, the TEL project is focused on defining and developing a set of components that provide unique services for the development and delivery of BusSim solutions. These components along with a set of design and test workbenches are the tools used by instructional designers to support activities in the four phases of BusSim development. We call this suite of tools the Business Simulation Toolset. Following is a description of each of the components and workbenches of the toolset. A Component can be thought of as a black box that encapsulates the behavior and data necessary to support a related set of services. It exposes these services to the outside world through published interfaces. The published interface of a component allows you to understand what it does through the services it offers, but not how it does it. The complexity of its implementation is hidden from the user. The following are the key components of the BusSim Toolset. Domain Component--provides services for modeling the state of a simulation. Profiling Component--provides services for rule-based evaluating the state of a simulation. Transformation Component--provides services for manipulating the state of a simulation. 
Remediation Component--provides services for the rule-based delivering of feedback to the student The Domain Model component is the central component of the suite that facilitates communication of context data across the application and the other components. It is a modeling tool that can use industry-standard database such as Informix, Oracle, or Sybase to store its data. A domain model is a representation of the objects in a simulation. The objects are such pseudo tangible things as a lever the student can pull, a form or notepad the student fills out, a character the student interacts with in a simulated meeting, etc. They can also be abstract objects such as the ROI for a particular investment, the number of times the student asked a particular question, etc. These objects are called entities. Some example entities include: Vehicles, operators and incidents in an insurance domain; Journal entries, cash flow statements and balance sheets in a financial accounting domain and Consumers and purchases in a marketing domain.”) and the dialog engine is structured around the VEDTW; (Nichols: Column 6 line 56 – Column 7 line 9, “During the build phase, the application development team uses the detailed designs to code the application. Coding tasks include the interfaces and widgets that the student interacts with. The interfaces can be made up of buttons, grids, check boxes, or any other screen controls that allow the student to view and manipulate his deliverables. The developer must also code logic that analyzes the student's work and provides feedback interactions. These interactions may take the form of text and/or multimedia feedback from simulated team members, conversations with simulated team members, or direct manipulations of the student's work by simulated team members. In parallel with these coding efforts, graphics, videos, and audio are being created for use in the application. Managing the development of these assets have their own complications. 
Risks in the build phase include misinterpretation of the designs. If the developer does not accurately understand the designer's intentions, the application will not function as desired. Also, coding these applications requires very skilled developers because the logic that analyzes the student's work and composes feedback is very complex.”; Column 10 lines 24 – 65, “We have clearly defined why a combined component/framework approach is the best solution for delivering high-quality BusSim solutions at a lower cost. Given that there are a number of third party frameworks already on the market that provide delivery capability for a wide variety of platforms, the TEL project is focused on defining and developing a set of components that provide unique services for the development and delivery of BusSim solutions. These components along with a set of design and test workbenches are the tools used by instructional designers to support activities in the four phases of BusSim development. We call this suite of tools the Business Simulation Toolset. Following is a description of each of the components and workbenches of the toolset. A Component can be thought of as a black box that encapsulates the behavior and data necessary to support a related set of services. It exposes these services to the outside world through published interfaces. The published interface of a component allows you to understand what it does through the services it offers, but not how it does it. The complexity of its implementation is hidden from the user. The following are the key components of the BusSim Toolset. Domain Component--provides services for modeling the state of a simulation. Profiling Component--provides services for rule-based evaluating the state of a simulation. Transformation Component--provides services for manipulating the state of a simulation. 
Remediation Component--provides services for the rule-based delivering of feedback to the student The Domain Model component is the central component of the suite that facilitates communication of context data across the application and the other components. It is a modeling tool that can use industry-standard database such as Informix, Oracle, or Sybase to store its data. A domain model is a representation of the objects in a simulation. The objects are such pseudo tangible things as a lever the student can pull, a form or notepad the student fills out, a character the student interacts with in a simulated meeting, etc. They can also be abstract objects such as the ROI for a particular investment, the number of times the student asked a particular question, etc. These objects are called entities. Some example entities include: Vehicles, operators and incidents in an insurance domain; Journal entries, cash flow statements and balance sheets in a financial accounting domain and Consumers and purchases in a marketing domain.”; Column 21 line 21 – Column 22 line 4, “Execution Scenario: Student Interaction--FIG. 18 illustrates a suite to support a student interaction in accordance with a preferred embodiment. In this task the student is trying to journalize invoices. He sees a chart of accounts, an invoice, and the journal entry for each invoice. He journalizes a transaction by dragging and dropping an account from the chart of accounts onto the `Debits` or the `Credits` line of the journal entry and entering the dollar amount of the debit or credit. He does this for each transaction. As the student interacts with the interface, all actions are reported to and recorded in the Domain Model. The Domain Model has a meta-model describing a transaction, its data, and what information a journal entry contains. The actions of the student populates the entities in the domain model with the appropriate information. 
When the student is ready, he submits the work to a simulated team member for review. This submission triggers the Analysis-Interpretation cycle. The Transformation Component is invoked and performs additional calculations on the data in the Domain Model, perhaps determining that Debits and Credits are unbalanced for a given journal entry. The Profiling Component can then perform rule-based pattern matching on the Domain Model, examining both the student actions and results of any Transformation Component analysis. Some of the profiles fire as they identify the mistakes and correct answers the student has given. Any profiles that fire activate topics in the Remediation Component. After the Profiling Component completes, the Remediation Component is invoked. The remediation algorithm searches the active topics in the tree of concepts to determine the best set of topics to deliver. This set may contain text, video, audio, URLs, even actions that manipulate the Domain Model. It is then assembled into prose-like paragraphs of text and media and presented to the student. The text feedback helps the student localize his journalization errors and understand why they are wrong and what is needed to correct the mistakes. The student is presented with the opportunity to view a video war story about the tax and legal consequences that arise from incorrect journalization. He is also presented with links to the reference materials that describe the fundamentals of journalization. The Analysis-Interpretation cycle ends when any coach items that result in updates to the Domain Model have been posted and the interface is redrawn to represent the new domain data. In this case, the designer chose to highlight with a red check the transactions that the student journalized incorrectly.”) the processor is further configured to: apply generated insights during situational analysis while EDT activities are in progress. 
(Nichols: Column 6 line 56 – Column 7 line 9, “During the build phase, the application development team uses the detailed designs to code the application. Coding tasks include the interfaces and widgets that the student interacts with. The interfaces can be made up of buttons, grids, check boxes, or any other screen controls that allow the student to view and manipulate his deliverables. The developer must also code logic that analyzes the student's work and provides feedback interactions. These interactions may take the form of text and/or multimedia feedback from simulated team members, conversations with simulated team members, or direct manipulations of the student's work by simulated team members. In parallel with these coding efforts, graphics, videos, and audio are being created for use in the application. Managing the development of these assets have their own complications. Risks in the build phase include misinterpretation of the designs. If the developer does not accurately understand the designer's intentions, the application will not function as desired. Also, coding these applications requires very skilled developers because the logic that analyzes the student's work and composes feedback is very complex.”; Column 10 lines 24 – 65, “We have clearly defined why a combined component/framework approach is the best solution for delivering high-quality BusSim solutions at a lower cost. Given that there are a number of third party frameworks already on the market that provide delivery capability for a wide variety of platforms, the TEL project is focused on defining and developing a set of components that provide unique services for the development and delivery of BusSim solutions. These components along with a set of design and test workbenches are the tools used by instructional designers to support activities in the four phases of BusSim development. We call this suite of tools the Business Simulation Toolset. 
Following is a description of each of the components and workbenches of the toolset. A Component can be thought of as a black box that encapsulates the behavior and data necessary to support a related set of services. It exposes these services to the outside world through published interfaces. The published interface of a component allows you to understand what it does through the services it offers, but not how it does it. The complexity of its implementation is hidden from the user. The following are the key components of the BusSim Toolset. Domain Component--provides services for modeling the state of a simulation. Profiling Component--provides services for rule-based evaluating the state of a simulation. Transformation Component--provides services for manipulating the state of a simulation. Remediation Component--provides services for the rule-based delivering of feedback to the student The Domain Model component is the central component of the suite that facilitates communication of context data across the application and the other components. It is a modeling tool that can use industry-standard database such as Informix, Oracle, or Sybase to store its data. A domain model is a representation of the objects in a simulation. The objects are such pseudo tangible things as a lever the student can pull, a form or notepad the student fills out, a character the student interacts with in a simulated meeting, etc. They can also be abstract objects such as the ROI for a particular investment, the number of times the student asked a particular question, etc. These objects are called entities. Some example entities include: Vehicles, operators and incidents in an insurance domain; Journal entries, cash flow statements and balance sheets in a financial accounting domain and Consumers and purchases in a marketing domain.”; Column 21 line 21 – Column 22 line 4, “Execution Scenario: Student Interaction--FIG. 
18 illustrates a suite to support a student interaction in accordance with a preferred embodiment. In this task the student is trying to journalize invoices. He sees a chart of accounts, an invoice, and the journal entry for each invoice. He journalizes a transaction by dragging and dropping an account from the chart of accounts onto the `Debits` or the `Credits` line of the journal entry and entering the dollar amount of the debit or credit. He does this for each transaction. As the student interacts with the interface, all actions are reported to and recorded in the Domain Model. The Domain Model has a meta-model describing a transaction, its data, and what information a journal entry contains. The actions of the student populates the entities in the domain model with the appropriate information. When the student is ready, he submits the work to a simulated team member for review. This submission triggers the Analysis-Interpretation cycle. The Transformation Component is invoked and performs additional calculations on the data in the Domain Model, perhaps determining that Debits and Credits are unbalanced for a given journal entry. The Profiling Component can then perform rule-based pattern matching on the Domain Model, examining both the student actions and results of any Transformation Component analysis. Some of the profiles fire as they identify the mistakes and correct answers the student has given. Any profiles that fire activate topics in the Remediation Component. After the Profiling Component completes, the Remediation Component is invoked. The remediation algorithm searches the active topics in the tree of concepts to determine the best set of topics to deliver. This set may contain text, video, audio, URLs, even actions that manipulate the Domain Model. It is then assembled into prose-like paragraphs of text and media and presented to the student. 
The text feedback helps the student localize his journalization errors and understand why they are wrong and what is needed to correct the mistakes. The student is presented with the opportunity to view a video war story about the tax and legal consequences that arise from incorrect journalization. He is also presented with links to the reference materials that describe the fundamentals of journalization. The Analysis-Interpretation cycle ends when any coach items that result in updates to the Domain Model have been posted and the interface is redrawn to represent the new domain data. In this case, the designer chose to highlight with a red check the transactions that the student journalized incorrectly.”) Chawla discloses a method for generating a virtual persona. Abrams discloses a method for training and creating an avatar for interaction based on a persona. Nichols discloses a method for using a simulated environment for training and teaching interactions with simulated personas. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the methods of Chawla with the teachings of Abrams in order to improve the adaptability and realism of generated personas as disclosed by Abrams (Abrams: Paragraph 6, “Consequently, service providers are often unable to automate interactions for services that rely on establishing and nurturing “real” relationships between users and characters.”). It would have been further obvious to one of ordinary skill in the art to combine the methods of Chawla and Abrams with the teachings of Nichols in order to utilize the virtual reality and personas of Chawla and Abrams to improve the teaching and development of skills as taught by Nichols (Nichols: Column 1 line 62 – Column 2 line 14, “The system utilizes an artificial intelligence engine driving individualized and dynamic feedback with synchronized video and graphics used to simulate real-world environment and interactions. 
Multiple "correct" answers are integrated into the learning system to allow individualized learning experiences in which navigation through the system is at a pace controlled by the learner.”) Claim(s) 19 – Chawla discloses the following: one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions comprising program instructions to: (Chawla: Paragraph 26, “The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. 
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.”) receiving end user information of end users to produce synthesized user research data; (Chawla: Paragraph 36, “Cognitive systems achieve these abilities by combining various aspects of artificial intelligence, natural language processing, dynamic learning, and hypothesis generation to render vast quantities of intelligible data to assist humans in making better decisions. As such, cognitive systems can be characterized as having the ability to interact naturally with people to extend what either humans, or machines, could do on their own. Furthermore, they are typically able to process natural language, multi-structured data, and experience much in the same way as humans. Moreover, they are also typically able to learn a knowledge domain based upon the best available data and get better, and more immersive, over time.”; Paragraph 50, “As used herein, ambient signals 220 broadly refer to input signals, or other data streams, that may contain data providing additional insight or context to the curated data 222 and learned knowledge 224 received by the CILS 118. For example, ambient signals may allow the CILS 118 to understand that a user is currently using their mobile device, at location ‘x’, at time ‘y’, doing activity ‘z’. To further the example, there is a difference between the user using their mobile device while they are on an airplane versus using their mobile device after landing at an airport and walking between one terminal and another. 
To extend the example even further, ambient signals may add additional context, such as the user is in the middle of a three leg trip and has two hours before their next flight. Further, they may be in terminal A1, but their next flight is out of C1, it is lunchtime, and they want to know the best place to eat. Given the available time the user has, their current location, restaurants that are proximate to their predicted route, and other factors such as food preferences, the CILS 118 can perform various cognitive operations and provide a recommendation for where the user can eat.”; Paragraph 51, “In various embodiments, the curated data 222 may include structured, unstructured, social, public, private, streaming, device or other types of data described in greater detail herein. In certain embodiments, the learned knowledge 224 is based upon past observations and feedback from the presentation of prior cognitive insight streams and recommendations. In various embodiments, the learned knowledge 224 is provided via a feedback loop that provides the learned knowledge 224 in the form of a learning stream of data.”; Paragraph 53, “In various embodiments, the information contained in, and referenced by, a cognitive graph 226 is derived from many sources (e.g., public, private, social, device), such as curated data 222. In certain of these embodiments, the cognitive graph 226 assists in the identification and organization of information associated with how people, places and things are related to one other. 
In various embodiments, the cognitive graph 226 enables automated agents, described in greater detail herein, to access the Web more intelligently, enumerate inferences through utilization of curated, structured data 222, and provide answers to questions by serving as a computational knowledge engine.”; Paragraph 62, “In these and other embodiments, the cognitive applications 304 possess situational and temporal awareness based upon ambient signals from users and data, which facilitates understanding the user's intent, content, context and meaning to drive goal-driven dialogs and outcomes. Further, they are designed to gain knowledge over time from a wide variety of structured, non-structured, and device data sources, continuously interpreting and autonomously reprogramming themselves to better understand a given domain. As such, they are well-suited to support human decision making, by proactively providing trusted advice, offers and recommendations while respecting user privacy and permissions.”; Paragraph 122, “FIG. 7 is a simplified block diagram of a plurality of cognitive platforms implemented in accordance with an embodiment of the invention within a hybrid cloud infrastructure. In this embodiment, the hybrid cloud infrastructure 740 includes a cognitive cloud management 342 component, a hosted 704 cognitive cloud environment, and a private 706 network environment. As shown in FIG. 7, the hosted 704 cognitive cloud environment includes a hosted 710 cognitive platform, such as the cognitive platform 310 shown in FIGS. 3, 4a, and 4b. In various embodiments, the hosted 704 cognitive cloud environment may also include a hosted 718 universal knowledge repository and one or more repositories of curated public data 714 and licensed data 716. Likewise, the hosted 710 cognitive platform may also include a hosted 712 analytics infrastructure, such as the cloud analytics infrastructure 344 shown in FIGS. 
3 and 4c.”) storing the synthesized user research data in a knowledge corpus; (Chawla: Paragraph 107, “In various embodiments, the crawl framework 452 is implemented to support various crawlers 454 familiar to skilled practitioners of the art. In certain embodiments, the crawlers 454 are custom configured for various target domains. For example, different crawlers 454 may be used for various travel forums, travel blogs, travel news and other travel sites. In various embodiments, data collected by the crawlers 454 is provided by the crawl framework 452 to the repository of crawl data 460. In these embodiments, the collected crawl data is processed and then stored in a normalized form in the repository of crawl data 460. The normalized data is then provided to SQL/NoSQL database 417 agent, which in turn provides it to the dataset engine 322. In one embodiment, the crawl database 460 is a NoSQL database, such as Mongo®.”; Paragraph 142, “In certain embodiments, the data may be multi-structured data. In these embodiments, the multi-structured data may include unstructured data (e.g., a document), semi-structured data (e.g., a social media post), and structured data (e.g., a string, an integer, etc.), such as data stored in a relational database management system (RDBMS). In various embodiments, the data may be public, private, or a combination thereof. In certain embodiments the data may be provided by a device, stored in a data lake, a data warehouse, or some combination thereof.”; Paragraph 218, “The submitted 1042 input data is then processed by the graph query engine 326 to generate a graph query 1044, as described in greater detail herein. The resulting graph query 1044 is then used to query the application cognitive graph 1082, which results in the generation of one or more composite cognitive insights, likewise described in greater detail herein. 
In certain embodiments, the graph query 1044 uses knowledge elements stored in the universal knowledge repository 1080 when querying the application cognitive graph 1082 to generate the one or more composite cognitive insights.”) segmenting the end users into one or more end user segments based on end user roles or end user categories; (Chawla: Paragraph 52, “As likewise used herein, a cognitive graph 226 refers to a representation of expert knowledge, associated with individuals and groups over a period of time, to depict relationships between people, places, and things using words, ideas, audio and images. As such, it is a machine-readable formalism for knowledge representation that provides a common framework allowing data and knowledge to be shared and reused across user, application, organization, and community boundaries.”; Paragraph 125, “In certain embodiments, the knowledge elements within a universal knowledge repository may also include statements, assertions, beliefs, perceptions, preferences, sentiments, attitudes or opinions associated with a person or a group. As an example, user ‘A’ may prefer the pizza served by a first restaurant, while user ‘B’ may prefer the pizza served by a second restaurant. Furthermore, both user ‘A’ and ‘B’ are firmly of the opinion that the first and second restaurants respectively serve the very best pizza available. In this example, the respective preferences and opinions of users ‘A’ and ‘B’ regarding the first and second restaurant may be included in the universal knowledge repository 880 as they are not contradictory. 
Instead, they are simply knowledge elements respectively associated with the two users and can be used in various embodiments for the generation of various cognitive insights, as described in greater detail herein.”; Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation. As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”) generating, for each end user segment, a persona, each representing a composite of end users in a respective end user segment; (Chawla: Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation. As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. 
In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”; Paragraph 186, “In various embodiments, provision of the composite cognitive insights results in the CILS receiving feedback 958 data from various individual users and other sources, such as cognitive business processes and applications 948. In one embodiment, the feedback 958 data is used to revise or modify the cognitive persona. In another embodiment, the feedback 958 data is used to create a new cognitive persona. In yet another embodiment, the feedback 958 data is used to create one or more associated cognitive personas, which inherit a common set of attributes from a source cognitive persona. In one embodiment, the feedback 958 data is used to create a new cognitive persona that combines attributes from two or more source cognitive personas. In another embodiment, the feedback 958 data is used to create a cognitive profile, described in greater detail herein, based upon the cognitive persona. Those of skill in the art will realize that many such embodiments are possible and the foregoing is not intended to limit the spirit, scope or intent of the invention.”; Paragraph 189, “In various embodiments, a cognitive persona or cognitive profile is defined by a first set of nodes in a weighted cognitive graph. 
In these embodiments, the cognitive persona or cognitive profile is further defined by a set of attributes that are respectively associated with a set of corresponding nodes in the weighted cognitive graph. In various embodiments, an attribute weight is used to represent a relevance value between two attributes. For example, a higher numeric value (e.g., ‘5.0’) associated with an attribute weight may indicate a higher degree of relevance between two attributes, while a lower numeric value (e.g., ‘0.5’) may indicate a lower degree of relevance.”; Paragraph 219, “In various embodiments, the graph query 1044 results in the selection of a cognitive persona, described in greater detail herein, from a repository of cognitive personas ‘1’ through ‘n’ 1072, according to a set of contextual information associated with a user. In certain embodiments, the universal knowledge repository 1080 includes the repository of personas ‘1’ through ‘n’ 1072. In various embodiments, individual nodes within cognitive personas stored in the repository of personas ‘1’ through ‘n’ 1072 are linked 1054 to corresponding nodes in the universal knowledge repository 1080. In certain embodiments, nodes within the universal knowledge repository 1080 are likewise linked 1054 to nodes within the cognitive application graph 1082.”) training each persona with training data from the synthesized user research data of the end users in the respective end user segment; (Chawla: Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation. As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. 
In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”; Paragraph 186, “In various embodiments, provision of the composite cognitive insights results in the CILS receiving feedback 958 data from various individual users and other sources, such as cognitive business processes and applications 948. In one embodiment, the feedback 958 data is used to revise or modify the cognitive persona. In another embodiment, the feedback 958 data is used to create a new cognitive persona. In yet another embodiment, the feedback 958 data is used to create one or more associated cognitive personas, which inherit a common set of attributes from a source cognitive persona. In one embodiment, the feedback 958 data is used to create a new cognitive persona that combines attributes from two or more source cognitive personas. In another embodiment, the feedback 958 data is used to create a cognitive profile, described in greater detail herein, based upon the cognitive persona. Those of skill in the art will realize that many such embodiments are possible and the foregoing is not intended to limit the spirit, scope or intent of the invention.”; Paragraph 189, “In various embodiments, a cognitive persona or cognitive profile is defined by a first set of nodes in a weighted cognitive graph. 
In these embodiments, the cognitive persona or cognitive profile is further defined by a set of attributes that are respectively associated with a set of corresponding nodes in the weighted cognitive graph. In various embodiments, an attribute weight is used to represent a relevance value between two attributes. For example, a higher numeric value (e.g., ‘5.0’) associated with an attribute weight may indicate a higher degree of relevance between two attributes, while a lower numeric value (e.g., ‘0.5’) may indicate a lower degree of relevance.”; Paragraph 219, “In various embodiments, the graph query 1044 results in the selection of a cognitive persona, described in greater detail herein, from a repository of cognitive personas ‘1’ through ‘n’ 1072, according to a set of contextual information associated with a user. In certain embodiments, the universal knowledge repository 1080 includes the repository of personas ‘1’ through ‘n’ 1072. In various embodiments, individual nodes within cognitive personas stored in the repository of personas ‘1’ through ‘n’ 1072 are linked 1054 to corresponding nodes in the universal knowledge repository 1080. In certain embodiments, nodes within the universal knowledge repository 1080 are likewise linked 1054 to nodes within the cognitive application graph 1082.”) adding the persona interaction data to the training data in the knowledge corpus; and (Chawla: Paragraph 165, “As used herein, a supervised learning 818 machine learning algorithm broadly refers to a machine learning approach for inferring a function from labeled training data. The training data typically consists of a set of training examples, with each example consisting of an input object (e.g., a vector) and a desired output value (e.g., a supervisory signal). 
In various embodiments, a supervised learning algorithm is implemented to analyze the training data and produce an inferred function, which can be used for mapping new examples.”; Paragraph 166, “As used herein, an unsupervised learning 820 machine learning algorithm broadly refers to a machine learning approach for finding non-obvious or hidden structures within a set of unlabeled data. In various embodiments, the unsupervised learning 820 machine learning algorithm is not given a set of training examples. Instead, it attempts to summarize and explain key features of the data it processes. Examples of unsupervised learning approaches include clustering (e.g., k-means, mixture models, hierarchical clustering, etc.) and latent variable models (e.g., expectation-maximization algorithms, method of moments, blind signal separation techniques, etc.).”; Paragraph 273, “The resulting feedback and the cognitive profile currently in use is then used to perform additional cognitive learning operations, which results in the generation of either a new, or refined, cognitive profile. The new, or refined, cognitive profile and the provided input data are used in combination with associated cognitive insight attributes to generate a second set of cognitive profile generation suggestions, which are then provided to the user within the sub-window 1308 of the UI window 1302 shown in FIG. 13. In these embodiments, the cognitive profile generation operations described in the descriptive text associated with FIG. 13 are repeated to generate a new or refined cognitive profile. In certain of these embodiments, the resulting refined cognitive profile is used to generate additional cognitive insights, which in turn are displayed in display field 1404.”) subsequently training another persona using the persona interaction data in the training data of the knowledge corpus. 
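The train-then-retrain loop recited in this limitation, read against Chawla's supervised-learning description (Paragraph 165), can be sketched as follows. This is a minimal illustrative sketch only; every name (`KnowledgeCorpus`, `train_persona`, the attribute and segment tokens) is a hypothetical stand-in and appears nowhere in Chawla or the claims:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class KnowledgeCorpus:
    # Labeled training examples: (input attributes, desired output),
    # mirroring Paragraph 165's input-object/supervisory-signal pairs.
    examples: list = field(default_factory=list)

    def add(self, attributes, label):
        self.examples.append((attributes, label))

def train_persona(corpus):
    """Infer a trivial attribute-to-label mapping by per-attribute label
    voting -- a toy stand-in for Paragraph 165's 'inferred function'."""
    votes = {}
    for attributes, label in corpus.examples:
        for attr in attributes:
            votes.setdefault(attr, Counter())[label] += 1
    def persona(attributes):
        tally = Counter()
        for attr in attributes:
            tally.update(votes.get(attr, Counter()))
        return tally.most_common(1)[0][0] if tally else None
    return persona

# 1. Train a first persona on synthesized user-research data.
corpus = KnowledgeCorpus()
corpus.add({"urban", "age_25_34"}, "segment_a")
corpus.add({"rural", "age_55_plus"}, "segment_b")
persona_1 = train_persona(corpus)

# 2. Add persona interaction data back into the knowledge corpus...
corpus.add({"urban", "age_25_34", "clicked_offer"}, "segment_a")

# 3. ...and subsequently train another persona on the enlarged corpus.
persona_2 = train_persona(corpus)
```

Because each call to `train_persona` snapshots the corpus at training time, the first persona is unaffected by interaction data added afterward, while the second persona is trained on the augmented corpus.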
(Chawla: Paragraph 165, “As used herein, a supervised learning 818 machine learning algorithm broadly refers to a machine learning approach for inferring a function from labeled training data. The training data typically consists of a set of training examples, with each example consisting of an input object (e.g., a vector) and a desired output value (e.g., a supervisory signal). In various embodiments, a supervised learning algorithm is implemented to analyze the training data and produce an inferred function, which can be used for mapping new examples.”; Paragraph 166, “As used herein, an unsupervised learning 820 machine learning algorithm broadly refers to a machine learning approach for finding non-obvious or hidden structures within a set of unlabeled data. In various embodiments, the unsupervised learning 820 machine learning algorithm is not given a set of training examples. Instead, it attempts to summarize and explain key features of the data it processes. Examples of unsupervised learning approaches include clustering (e.g., k-means, mixture models, hierarchical clustering, etc.) and latent variable models (e.g., expectation-maximization algorithms, method of moments, blind signal separation techniques, etc.).”; Paragraph 273, “The resulting feedback and the cognitive profile currently in use is then used to perform additional cognitive learning operations, which results in the generation of either a new, or refined, cognitive profile. The new, or refined, cognitive profile and the provided input data are used in combination with associated cognitive insight attributes to generate a second set of cognitive profile generation suggestions, which are then provided to the user within the sub-window 1308 of the UI window 1302 shown in FIG. 13. In these embodiments, the cognitive profile generation operations described in the descriptive text associated with FIG. 13 are repeated to generate a new or refined cognitive profile. 
In certain of these embodiments, the resulting refined cognitive profile is used to generate additional cognitive insights, which in turn are displayed in display field 1404.”) wherein the knowledge corpus comprises personal data, environment data, and workflow data (Chawla: Paragraph 54, “In certain embodiments, the cognitive graph 226 not only elicits and maps expert knowledge by deriving associations from data, it also renders higher level insights and accounts for knowledge creation through collaborative knowledge modeling. In various embodiments, the cognitive graph 226 is a machine-readable, declarative memory system that stores and learns both episodic memory (e.g., specific personal experiences associated with an individual or entity), and semantic memory, which stores factual information (e.g., geo location of an airport or restaurant).”; Paragraph 187, “As used herein, a cognitive profile refers to a set of data associated with a user, whether anonymous or not. In various embodiments, a cognitive profile may be associated with a particular user (i.e., may be a profile of one). In various embodiments, a cognitive profile refers to an instance of a cognitive persona that references personal data associated with a user. In various embodiments, the personal data may include the user's name, address, Social Security Number (SSN), age, gender, marital status, occupation, employer, income, education, skills, knowledge, interests, preferences, likes and dislikes, goals and plans, and so forth. In certain embodiments, the personal data may include data associated with the user's interaction with a CILS and related composite cognitive insights that are generated and provided to the user. In various embodiments, the user's interaction with a CILS may be provided to the CILS as feedback 958 data. 
In various embodiments, the personal data may include one or more of a purchase history of the particular user, CRM data associated with the particular user and social media data associated with the particular user. The cognitive profile associated with the particular user ensures that the cognitive profile is specific to that individual and cognitive recommendations, insights and suggestions are generated based on the specific attributes of that user's profile. In certain embodiments, the weight of importance attributed to each attribute can vary. E.g., while two users may both like red and green, in certain situations, one user may like red but love green whereas another user may love green but like red.”) wherein the end user information comprises end user answers from end users in response to end user questions; (Chawla: Paragraph 47, “As used herein, temporal/spatial reasoning 214 broadly refers to reasoning based upon qualitative abstractions of temporal and spatial aspects of common sense knowledge, described in greater detail herein. For example, it is not uncommon for a predetermined set of data to change over time. Likewise, other attributes, such as its associated metadata, may likewise change over time. As a result, these changes may affect the context of the data. To further the example, the context of asking someone what they believe they should be doing at 3:00 in the afternoon during the workday while they are at work may be quite different than asking the same user the same question at 3:00 on a Sunday afternoon when they are at home. 
In certain embodiments, various temporal/spatial reasoning 214 processes are implemented by the CILS 118 to determine the context of queries, and associated data, which are in turn used to generate cognitive insights.”; Paragraph 53, “In various embodiments, the information contained in, and referenced by, a cognitive graph 226 is derived from many sources (e.g., public, private, social, device), such as curated data 222. In certain of these embodiments, the cognitive graph 226 assists in the identification and organization of information associated with how people, places and things are related to one other. In various embodiments, the cognitive graph 226 enables automated agents, described in greater detail herein, to access the Web more intelligently, enumerate inferences through utilization of curated, structured data 222, and provide answers to questions by serving as a computational knowledge engine.”; Paragraph 67, “In various embodiments, the graph query engine 326 is implemented to receive and process queries such that they can be bridged into a cognitive graph, as described in greater detail herein, through the use of a bridging agent. In certain embodiments, the graph query engine 326 performs various natural language processing (NLP), familiar to skilled practitioners of the art, to process the queries. In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In various embodiments, two or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may be implemented to operate collaboratively to generate a cognitive insight or recommendation. 
In certain embodiments, one or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may operate autonomously to generate a cognitive insight or recommendation.”; Paragraph 91, “To further differentiate the distinction between the translate 427 component and the bridging 428 component, the translate 427 component relates to a general domain translation of a question. In contrast, the bridging 428 component allows the question to be asked in the context of a specific domain (e.g., healthcare, travel, etc.), given what is known about the data. In certain embodiments, the bridging 428 component is implemented to process what is known about the translated query, in the context of the user, to provide an answer that is relevant to a specific domain.”; Paragraph 93, “In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a target cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In these and other embodiments, the insight/learning engine 330 is implemented to perform insight/learning operations, described in greater detail herein. In various embodiments, the insight/learning engine 330 may include a discover/visibility 430 component, a predict 431 component, a rank/recommend 432 component, and one or more insight 433 agents.”; Paragraph 124, “As used herein, a universal knowledge repository broadly refers to a collection of knowledge elements that can be used in various embodiments to generate one or more cognitive insights described in greater detail herein. 
In various embodiments, these knowledge elements may include facts (e.g., milk is a dairy product), information (e.g., an answer to a question), descriptions (e.g., the color of an automobile), skills (e.g., the ability to install plumbing fixtures), and other classes of knowledge familiar to those of skill in the art. In these embodiments, the knowledge elements may be explicit or implicit. As an example, the fact that water freezes at zero degrees centigrade would be an explicit knowledge element, while the fact that an automobile mechanic knows how to repair an automobile would be an implicit knowledge element.”) the method further comprising: adding end user questions and end user responses to the synthesized user research data in the knowledge corpus; (Chawla: Paragraph 47, “As used herein, temporal/spatial reasoning 214 broadly refers to reasoning based upon qualitative abstractions of temporal and spatial aspects of common sense knowledge, described in greater detail herein. For example, it is not uncommon for a predetermined set of data to change over time. Likewise, other attributes, such as its associated metadata, may likewise change over time. As a result, these changes may affect the context of the data. To further the example, the context of asking someone what they believe they should be doing at 3:00 in the afternoon during the workday while they are at work may be quite different than asking the same user the same question at 3:00 on a Sunday afternoon when they are at home. In certain embodiments, various temporal/spatial reasoning 214 processes are implemented by the CILS 118 to determine the context of queries, and associated data, which are in turn used to generate cognitive insights.”; Paragraph 53, “In various embodiments, the information contained in, and referenced by, a cognitive graph 226 is derived from many sources (e.g., public, private, social, device), such as curated data 222. 
In certain of these embodiments, the cognitive graph 226 assists in the identification and organization of information associated with how people, places and things are related to one other. In various embodiments, the cognitive graph 226 enables automated agents, described in greater detail herein, to access the Web more intelligently, enumerate inferences through utilization of curated, structured data 222, and provide answers to questions by serving as a computational knowledge engine.”; Paragraph 67, “In various embodiments, the graph query engine 326 is implemented to receive and process queries such that they can be bridged into a cognitive graph, as described in greater detail herein, through the use of a bridging agent. In certain embodiments, the graph query engine 326 performs various natural language processing (NLP), familiar to skilled practitioners of the art, to process the queries. In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In various embodiments, two or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may be implemented to operate collaboratively to generate a cognitive insight or recommendation. In certain embodiments, one or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may operate autonomously to generate a cognitive insight or recommendation.”; Paragraph 91, “To further differentiate the distinction between the translate 427 component and the bridging 428 component, the translate 427 component relates to a general domain translation of a question. 
In contrast, the bridging 428 component allows the question to be asked in the context of a specific domain (e.g., healthcare, travel, etc.), given what is known about the data. In certain embodiments, the bridging 428 component is implemented to process what is known about the translated query, in the context of the user, to provide an answer that is relevant to a specific domain.”; Paragraph 93, “In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a target cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In these and other embodiments, the insight/learning engine 330 is implemented to perform insight/learning operations, described in greater detail herein. In various embodiments, the insight/learning engine 330 may include a discover/visibility 430 component, a predict 431 component, a rank/recommend 432 component, and one or more insight 433 agents.”; Paragraph 124, “As used herein, a universal knowledge repository broadly refers to a collection of knowledge elements that can be used in various embodiments to generate one or more cognitive insights described in greater detail herein. In various embodiments, these knowledge elements may include facts (e.g., milk is a dairy product), information (e.g., an answer to a question), descriptions (e.g., the color of an automobile), skills (e.g., the ability to install plumbing fixtures), and other classes of knowledge familiar to those of skill in the art. In these embodiments, the knowledge elements may be explicit or implicit. 
As an example, the fact that water freezes at zero degrees centigrade would be an explicit knowledge element, while the fact that an automobile mechanic knows how to repair an automobile would be an implicit knowledge element.”) and creating a question and answer component as a part of the dialog engine that is usable by an interacting persona. (Chawla: Paragraph 47, “As used herein, temporal/spatial reasoning 214 broadly refers to reasoning based upon qualitative abstractions of temporal and spatial aspects of common sense knowledge, described in greater detail herein. For example, it is not uncommon for a predetermined set of data to change over time. Likewise, other attributes, such as its associated metadata, may likewise change over time. As a result, these changes may affect the context of the data. To further the example, the context of asking someone what they believe they should be doing at 3:00 in the afternoon during the workday while they are at work may be quite different than asking the same user the same question at 3:00 on a Sunday afternoon when they are at home. In certain embodiments, various temporal/spatial reasoning 214 processes are implemented by the CILS 118 to determine the context of queries, and associated data, which are in turn used to generate cognitive insights.”; Paragraph 53, “In various embodiments, the information contained in, and referenced by, a cognitive graph 226 is derived from many sources (e.g., public, private, social, device), such as curated data 222. In certain of these embodiments, the cognitive graph 226 assists in the identification and organization of information associated with how people, places and things are related to one other. 
In various embodiments, the cognitive graph 226 enables automated agents, described in greater detail herein, to access the Web more intelligently, enumerate inferences through utilization of curated, structured data 222, and provide answers to questions by serving as a computational knowledge engine.”; Paragraph 67, “In various embodiments, the graph query engine 326 is implemented to receive and process queries such that they can be bridged into a cognitive graph, as described in greater detail herein, through the use of a bridging agent. In certain embodiments, the graph query engine 326 performs various natural language processing (NLP), familiar to skilled practitioners of the art, to process the queries. In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In various embodiments, two or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may be implemented to operate collaboratively to generate a cognitive insight or recommendation. In certain embodiments, one or more of the dataset engine 322, the graph query engine 326, and the insight/learning engine 330 may operate autonomously to generate a cognitive insight or recommendation.”; Paragraph 91, “To further differentiate the distinction between the translate 427 component and the bridging 428 component, the translate 427 component relates to a general domain translation of a question. In contrast, the bridging 428 component allows the question to be asked in the context of a specific domain (e.g., healthcare, travel, etc.), given what is known about the data. 
In certain embodiments, the bridging 428 component is implemented to process what is known about the translated query, in the context of the user, to provide an answer that is relevant to a specific domain.”; Paragraph 93, “In various embodiments, the insight/learning engine 330 is implemented to encapsulate a predetermined algorithm, which is then applied to a target cognitive graph to generate a result, such as a cognitive insight or a recommendation. In certain embodiments, one or more such algorithms may contribute to answering a specific question and provide additional cognitive insights or recommendations. In these and other embodiments, the insight/learning engine 330 is implemented to perform insight/learning operations, described in greater detail herein. In various embodiments, the insight/learning engine 330 may include a discover/visibility 430 component, a predict 431 component, a rank/recommend 432 component, and one or more insight 433 agents.”; Paragraph 124, “As used herein, a universal knowledge repository broadly refers to a collection of knowledge elements that can be used in various embodiments to generate one or more cognitive insights described in greater detail herein. In various embodiments, these knowledge elements may include facts (e.g., milk is a dairy product), information (e.g., an answer to a question), descriptions (e.g., the color of an automobile), skills (e.g., the ability to install plumbing fixtures), and other classes of knowledge familiar to those of skill in the art. In these embodiments, the knowledge elements may be explicit or implicit. As an example, the fact that water freezes at zero degrees centigrade would be an explicit knowledge element, while the fact that an automobile mechanic knows how to repair an automobile would be an implicit knowledge element.”; Paragraph 140, “As used herein, a cognitive learning category broadly refers to a source of information used by a CILS to perform cognitive learning operations. 
In various embodiments, the cognitive learning categories 810 may include a data-based 812 cognitive learning category and an interaction-based 814 cognitive learning category. As used herein, a data-based 812 cognitive learning category broadly refers to the use of data as a source of information in the performance of a cognitive learning operation by a CILS.”; Paragraph 151, “For example, an online shopper may select a first pair of shoes that are available in a white, black and brown. The user then elects to view a larger photo of the first pair of shoes, first in white, then in black, but not brown. To continue the example, the user then selects a second pair of shoes that are likewise available in white, black and brown. As before, the user elects to view a larger photo of the second pair of shoes, first in white, then in black, but once again, not brown. In this example, the user's online interaction indicates an explicit like for white and black shoes and an explicit dislike for brown shoes.”; Paragraph 173, “In various embodiments, the cognitive insight is delivered to a device, an application, a service, a process, a user, or a combination thereof. In certain embodiments, the resulting interaction information is likewise received by a CILS from a device, an application, a service, a process, a user, or a combination thereof. In various embodiments, the resulting interaction information is provided in the form of feedback data to the CILS. In these embodiments, the method by which the cognitive learning process, and its associated cognitive learning steps, is implemented is a matter of design choice. 
Skilled practitioners of the art will recognize that many such embodiments are possible and the foregoing is not intended to limit the spirit, scope or intent of the invention.”) using machine learning and natural language processing to update the knowledge corpus; (Chawla: Paragraph 95, “In certain embodiments, the insight/learning engine 330 may include additional components. For example the additional components may include classification algorithms, clustering algorithms, and so forth. Skilled practitioners of the art will realize that many such additional components are possible and that the foregoing is not intended to limit the spirit, scope or intent of the invention. In various embodiments, the insights agents 433 are implemented to create a visual data story, highlighting user-specific insights, relationships and recommendations. As a result, it can share, operationalize, or track business insights in various embodiments. In various embodiments, the learning agent 434 work in the background to continually update the cognitive graph, as described in greater detail herein, from each unique interaction with data and users.”; Paragraph 106, “In various embodiments, the repository of cognitive graphs 457 is implemented to store cognitive graphs generated, accessed, and updated by the cognitive engine 320 in the process of generating cognitive insights. In various embodiments, the repository of cognitive graphs 457 may include one or more repositories of curated data 458, described in greater detail herein. In certain embodiments, the repositories of curated data 458 includes data that has been curated by one or more users, machine operations, or a combination of the two, by performing various sourcing, filtering, and enriching operations described in greater detail herein. 
In these and other embodiments, the curated data 458 is ingested by the cognitive platform 310 and then processed, as likewise described in greater detail herein, to generate cognitive insights. In various embodiments, the repository of models 459 is implemented to store models that are generated, accessed, and updated by the cognitive engine 320 in the process of generating cognitive insights. As used herein, models broadly refer to machine learning models. In certain embodiments, the models include one or more statistical models,”) identifying common patterns and themes; and (Chawla: Paragraph 47, “As used herein, temporal/spatial reasoning 214 broadly refers to reasoning based upon qualitative abstractions of temporal and spatial aspects of common sense knowledge, described in greater detail herein. For example, it is not uncommon for a predetermined set of data to change over time. Likewise, other attributes, such as its associated metadata, may likewise change over time. As a result, these changes may affect the context of the data. To further the example, the context of asking someone what they believe they should be doing at 3:00 in the afternoon during the workday while they are at work may be quite different than asking the same user the same question at 3:00 on a Sunday afternoon when they are at home. In certain embodiments, various temporal/spatial reasoning 214 processes are implemented by the CILS 118 to determine the context of queries, and associated data, which are in turn used to generate cognitive insights.”; Paragraph 48, “As likewise used herein, entity resolution 216 broadly refers to the process of finding elements in a set of data that refer to the same entity across different data sources (e.g., structured, non-structured, streams, devices, etc.), where the target entity does not share a common identifier. 
In various embodiments, the entity resolution 216 process is implemented by the CILS 118 to identify significant nouns, adjectives, phrases or sentence elements that represent various predetermined entities within one or more domains. From the foregoing, it will be appreciated that the implementation of one or more of the semantic analysis 202, goal optimization 204, collaborative filtering 206, common sense reasoning 208, natural language processing 210, summarization 212, temporal/spatial reasoning 214, and entity resolution 216 processes by the CILS 118 can facilitate the generation of a semantic, cognitive model.”; Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation. As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”) using the common patterns and themes in the generating of the persona. 
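The pattern-identification step recited in this limitation, read against Chawla's archetype persona built from a "common set of attributes" (Paragraph 182), can be sketched as follows. This is a hypothetical illustration only; the function names, the majority threshold, and the attribute tokens are editorial stand-ins, not anything disclosed in Chawla or the claims:

```python
from collections import Counter

def common_attributes(user_records, threshold=0.5):
    """Return attributes shared by more than `threshold` of the users --
    a toy stand-in for 'identifying common patterns and themes'."""
    counts = Counter(attr for record in user_records for attr in set(record))
    n = len(user_records)
    return {attr for attr, c in counts.items() if c / n > threshold}

def generate_persona(user_records):
    # A persona as an archetype user model: the common attribute set
    # across a hypothesized group of users (cf. Chawla Paragraph 182).
    return {"archetype_attributes": common_attributes(user_records)}

# Hypothetical end-user segment: each record is one user's attribute set.
segment = [
    {"urban", "age_25_34", "mobile_first"},
    {"urban", "age_25_34", "desktop"},
    {"urban", "age_35_44", "mobile_first"},
]
persona = generate_persona(segment)
```

Here attributes held by a majority of the segment ("urban", "age_25_34", "mobile_first") survive into the generated persona, while minority attributes are dropped.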
(Chawla: Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation. As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”; Paragraph 186, “In various embodiments, provision of the composite cognitive insights results in the CILS receiving feedback 958 data from various individual users and other sources, such as cognitive business processes and applications 948. In one embodiment, the feedback 958 data is used to revise or modify the cognitive persona. In another embodiment, the feedback 958 data is used to create a new cognitive persona. In yet another embodiment, the feedback 958 data is used to create one or more associated cognitive personas, which inherit a common set of attributes from a source cognitive persona. In one embodiment, the feedback 958 data is used to create a new cognitive persona that combines attributes from two or more source cognitive personas. 
In another embodiment, the feedback 958 data is used to create a cognitive profile, described in greater detail herein, based upon the cognitive persona. Those of skill in the art will realize that many such embodiments are possible and the foregoing is not intended to limit the spirit, scope or intent of the invention.”) Chawla does not explicitly disclose the following limitations; however, in the analogous art of persona generation and interaction, Abrams discloses the following: generating a dialog engine for each persona based on the training; (Abrams: Paragraph 23, “To provide a convincing illusion that users are interacting with a “real” character, the memory 118 includes, without limitation, the character engine 140. In operation the character engine 140 combines sensing algorithms, advanced thinking and learning algorithms, and expressing algorithms in a flexible and adaptive manner. Upon receiving data via the user platform 120, the character engine 140 determines a current context that includes a user intent and an assessment domain. To generate character responses, the character engine 140 selects and applies inference algorithm(s) and personality engine(s) based on the current context and data received from the data sources 150. Finally, the character engine 140 tailors the character responses to the capabilities of the user platform 120.”; Paragraph 35, “For instance, in some embodiments, the user intent engine 230 may include a speech analysis algorithm that examines the speech prosody of speech included in the recognition data 225 to detect any upward inflections at the end of sentences.
If the speech analysis algorithm detects an upward inflection at the end of a sentence, then the speech analysis algorithm determines that the user intent 235 is “asking a question.” In other embodiments, the user intent engine 230 may include a gesture analysis algorithm that analyzes the recognition data 225 to determine whether the user intent 235 is “looking for emotional support.””; Paragraph 39, “In some embodiments, the domain parser 245 includes a single generalized content algorithm. In other embodiments, the domain parser 245 includes multiple specialized analytic processors (not shown) that act upon streaming input data (e.g., the recognition data 225 and/or the user intent 235) in real time. Each of the analytic processors acts as a “tuned resonator” that searches for specific features that map to a particular context. For instance, suppose that the character engine 120 represents a helper droid that is interacting with a user. The domain parser 245 could execute any number of specialized analytic algorithms that search for features unique to the current context of the interaction between the helper droid and the user.”) connecting an avatar to each persona; and (Abrams: Paragraph 23, “To provide a convincing illusion that users are interacting with a “real” character, the memory 118 includes, without limitation, the character engine 140. In operation the character engine 140 combines sensing algorithms, advanced thinking and learning algorithms, and expressing algorithms in a flexible and adaptive manner. Upon receiving data via the user platform 120, the character engine 140 determines a current context that includes a user intent and an assessment domain. To generate character responses, the character engine 140 selects and applies inference algorithm(s) and personality engine(s) based on the current context and data received from the data sources 150. 
Finally, the character engine 140 tailors the character responses to the capabilities of the user platform 120.”; Paragraph 35, “For instance, in some embodiments, the user intent engine 230 may include a speech analysis algorithm that examines the speech prosody of speech included in the recognition data 225 to detect any upward inflections at the end of sentences. If the speech analysis algorithm detects an upward inflection at the end of a sentence, then the speech analysis algorithm determines that the user intent 235 is “asking a question.” In other embodiments, the user intent engine 230 may include a gesture analysis algorithm that analyzes the recognition data 225 to determine whether the user intent 235 is “looking for emotional support.””; Paragraph 39, “In some embodiments, the domain parser 245 includes a single generalized content algorithm. In other embodiments, the domain parser 245 includes multiple specialized analytic processors (not shown) that act upon streaming input data (e.g., the recognition data 225 and/or the user intent 235) in real time. Each of the analytic processors acts as a “tuned resonator” that searches for specific features that map to a particular context. For instance, suppose that the character engine 120 represents a helper droid that is interacting with a user. The domain parser 245 could execute any number of specialized analytic algorithms that search for features unique to the current context of the interaction between the helper droid and the user.”) making each avatar accessible for dialog with metaverse users in the Metaverse, wherein the Metaverse is a 3D virtual reality environment. (Abrams: Paragraph 29, “In operation, the input platform abstraction infrastructure 210 receives input data from any number and types of the user platforms 120. For instance, in some embodiments, the input platform abstraction infrastructure 210 may receive text, voice, accelerometer, and video data from the smartphone 122. 
In other embodiments, the input platform abstraction infrastructure 210 may receive discrete button, voice, and motion data from the robot 132 and/or the toy 138. In yet other embodiments, the input platform abstraction infrastructure 210 may receive control inputs from the game console 126 or a smart television; audio from a telephone or kiosk microphone; and/or voice and imagery from augmented and virtual Reality (AR/VR) systems.”) receiving persona interaction data based on an interacting persona that interacts with a metaverse user; (Abrams: Paragraph 37, “The domain parser 240 may determine the assessment domain 245 based on any mode of interaction, at any level of granularity, and in any technically feasible fashion. For instance, in some embodiments, the domain parser 240 may first determine the mode of interaction (e.g., whether speech is to be understood, images are to be recognized, text is to be understood, and/or gestures are to be recognized). The domain parser 240 may then analyze the content of the recognition data 225 to determine the assessment domain 245 for the mode of interaction. For example, if speech is to be understood, then the domain parser 240 could perform analysis operations on the recognition data 225 to determine whether the assessment domain 245 is “causal conversation,” “storytelling,” or “homework,” to name a few.”; Paragraph 71, “The inference engine 250 and/or the knowledge subsystem 260 may process the data received from the data sources 150 in any technically feasible fashion. For example, some of the data sources 150 provide structured data (e.g., weather feeds), while other data sources 150 provide unstructured data (e.g., movie scripts). Notably, in some embodiments, the inference engine 250 and/or the knowledge subsystem 260 may receive data via crowdsourcing. 
For example, the inference engine 250 and/or the knowledge subsystem 260 could receive data from a gamification platform that engages large numbers of users to manually perform entity extraction and relationship mark-up among named entities. The structured data received by the inference engine 250 and/or the knowledge subsystem 260 could then be included in the training data 262 and used to train any number of the inference algorithms 270.”; Paragraph 80, “For example, if the user platform 120 implements a chat application, then the inference engine 250 could transmit data associated with each new conversation to the knowledge subsystem 260. The data could include state variables such as user specific information, the user intent 235, the assessment domain 245, events that occur during the conversations, inferences and character responses 285 generated by the inference engine 150, and so forth. The knowledge subsystem 260 could evaluate the data, include relevant data in the knowledge database 266, and include data that is suitable for machine learning in the training data 262.”) wherein the dialog engine facilitates questions, comments, conversations, and random injections. (Abrams: Paragraph 37, “The domain parser 240 may determine the assessment domain 245 based on any mode of interaction, at any level of granularity, and in any technically feasible fashion. For instance, in some embodiments, the domain parser 240 may first determine the mode of interaction (e.g., whether speech is to be understood, images are to be recognized, text is to be understood, and/or gestures are to be recognized). The domain parser 240 may then analyze the content of the recognition data 225 to determine the assessment domain 245 for the mode of interaction. 
For example, if speech is to be understood, then the domain parser 240 could perform analysis operations on the recognition data 225 to determine whether the assessment domain 245 is “causal conversation,” “storytelling,” or “homework,” to name a few.”; Paragraph 56, “Because the inference engine 250 selects and applies the inference algorithms 270 that are optimized with respect to the current context of the interaction, the inference engine 250 may generate realistic inferences for a wide range of interactions via the user platforms 120. Notably, advanced inference algorithms 270 (TOM systems, etc.) enable the inference engine 250 to take initiative and proactively engage with users. Accordingly, the interaction paradigms implemented in the inference engine 250 are substantially more sophisticated than the question and answer interaction paradigms implemented in many conventional AI systems (e.g., the perception of sympathy, empathy, emotion, etc.). Further, because the number and type of inference algorithms 270 may be modified as time passes, the inference engine 250 may be adapted to exploit advances in technology.”; Paragraph 106, “Advantageously, by implementing the character engine to automate user interactions, service providers provide a convincing illusion that users are interacting with a “real” character instead of a machine. In particular, unlike conventional question and answer based AI systems, the character engine proactively engages with users. For example, the character engine can reply to a question with another question. Because the character engine is modular, the character engine may be configured to implement and select between a wide variety of algorithms that are applicable to different characters, contexts, and user platforms. Further, the algorithms implemented in the character engine may be updated as technology advances.
By continuously updating the knowledge database, training data, and local history, the character engine develops a relationship with an individual user that evolves over time. In addition, since the character engine dynamically tailors the responses to different user platforms, the character engine generates the illusion of continuous and consistent relationships that transcend the user platforms.”) each avatar is accessible within a metaverse environment; (Abrams: Paragraph 29, “In operation, the input platform abstraction infrastructure 210 receives input data from any number and types of the user platforms 120. For instance, in some embodiments, the input platform abstraction infrastructure 210 may receive text, voice, accelerometer, and video data from the smartphone 122. In other embodiments, the input platform abstraction infrastructure 210 may receive discrete button, voice, and motion data from the robot 132 and/or the toy 138. In yet other embodiments, the input platform abstraction infrastructure 210 may receive control inputs from the game console 126 or a smart television; audio from a telephone or kiosk microphone; and/or voice and imagery from augmented and virtual Reality (AR/VR) systems.”) Chawla in view of Abrams does not explicitly disclose the following; however, in the analogous art of virtual reality interactions, Nichols discloses the following: the metaverse environment is a virtual enterprise design thinking workshop (VEDTW); (Nichols: Column 4 lines 1-14, “The simulation model executes the business function that the student is learning and is therefore the center point of the application. An activity `layer` allows the user to visually guide the simulation by passing inputs into the simulation engine and receiving an output from the simulation model.
For example, if the student was working on an income statement activity, the net sales and cost of goods sold calculations are passed as inputs to the simulation model and the net income value is calculated and retrieved as an output. As calculations are passed to and retrieved from the simulation model, they are also passed to the Intelligent Coaching Agent (ICA). The ICA analyzes the Inputs and Outputs to the simulation model and generates feedback based on a set of rules. This feedback is received and displayed through the Visual Basic Architecture.”; Column 4 line 60 – Column 5 line 4, “Business simulation in accordance with a preferred embodiment delivers training curricula in an optimal manner. This is because such applications provide effective training that mirrors a student's actual work environment. The application of skills "on the job" facilitates increased retention and higher overall job performance. While the results of such training applications are impressive, business simulations are very complex to design and build correctly. These simulations are characterized by a very open-ended environment, where students can go through the application along any number of paths, depending on their learning style and prior experiences/knowledge.”; Column 10 lines 24 – 65, “We have clearly defined why a combined component/framework approach is the best solution for delivering high-quality BusSim solutions at a lower cost. Given that there are a number of third party frameworks already on the market that provide delivery capability for a wide variety of platforms, the TEL project is focused on defining and developing a set of components that provide unique services for the development and delivery of BusSim solutions. These components along with a set of design and test workbenches are the tools used by instructional designers to support activities in the four phases of BusSim development. We call this suite of tools the Business Simulation Toolset. 
Following is a description of each of the components and workbenches of the toolset. A Component can be thought of as a black box that encapsulates the behavior and data necessary to support a related set of services. It exposes these services to the outside world through published interfaces. The published interface of a component allows you to understand what it does through the services it offers, but not how it does it. The complexity of its implementation is hidden from the user. The following are the key components of the BusSim Toolset. Domain Component--provides services for modeling the state of a simulation. Profiling Component--provides services for rule-based evaluating the state of a simulation. Transformation Component--provides services for manipulating the state of a simulation. Remediation Component--provides services for the rule-based delivering of feedback to the student The Domain Model component is the central component of the suite that facilitates communication of context data across the application and the other components. It is a modeling tool that can use industry-standard database such as Informix, Oracle, or Sybase to store its data. A domain model is a representation of the objects in a simulation. The objects are such pseudo tangible things as a lever the student can pull, a form or notepad the student fills out, a character the student interacts with in a simulated meeting, etc. They can also be abstract objects such as the ROI for a particular investment, the number of times the student asked a particular question, etc. These objects are called entities. 
Some example entities include: Vehicles, operators and incidents in an insurance domain; Journal entries, cash flow statements and balance sheets in a financial accounting domain and Consumers and purchases in a marketing domain.”) the metaverse users are VEDTW students; (Nichols: Column 4 lines 1-14, “The simulation model executes the business function that the student is learning and is therefore the center point of the application. An activity `layer` allows the user to visually guide the simulation by passing inputs into the simulation engine and receiving an output from the simulation model. For example, if the student was working on an income statement activity, the net sales and cost of goods sold calculations are passed as inputs to the simulation model and the net income value is calculated and retrieved as an output. As calculations are passed to and retrieved from the simulation model, they are also passed to the Intelligent Coaching Agent (ICA). The ICA analyzes the Inputs and Outputs to the simulation model and generates feedback based on a set of rules. This feedback is received and displayed through the Visual Basic Architecture.”; Column 4 line 60 – Column 5 line 4, “Business simulation in accordance with a preferred embodiment delivers training curricula in an optimal manner. This is because such applications provide effective training that mirrors a student's actual work environment. The application of skills "on the job" facilitates increased retention and higher overall job performance. While the results of such training applications are impressive, business simulations are very complex to design and build correctly. 
These simulations are characterized by a very open-ended environment, where students can go through the application along any number of paths, depending on their learning style and prior experiences/knowledge.”; Column 10 lines 24 – 65, “We have clearly defined why a combined component/framework approach is the best solution for delivering high-quality BusSim solutions at a lower cost. Given that there are a number of third party frameworks already on the market that provide delivery capability for a wide variety of platforms, the TEL project is focused on defining and developing a set of components that provide unique services for the development and delivery of BusSim solutions. These components along with a set of design and test workbenches are the tools used by instructional designers to support activities in the four phases of BusSim development. We call this suite of tools the Business Simulation Toolset. Following is a description of each of the components and workbenches of the toolset. A Component can be thought of as a black box that encapsulates the behavior and data necessary to support a related set of services. It exposes these services to the outside world through published interfaces. The published interface of a component allows you to understand what it does through the services it offers, but not how it does it. The complexity of its implementation is hidden from the user. The following are the key components of the BusSim Toolset. Domain Component--provides services for modeling the state of a simulation. Profiling Component--provides services for rule-based evaluating the state of a simulation. Transformation Component--provides services for manipulating the state of a simulation. Remediation Component--provides services for the rule-based delivering of feedback to the student The Domain Model component is the central component of the suite that facilitates communication of context data across the application and the other components. 
It is a modeling tool that can use industry-standard database such as Informix, Oracle, or Sybase to store its data. A domain model is a representation of the objects in a simulation. The objects are such pseudo tangible things as a lever the student can pull, a form or notepad the student fills out, a character the student interacts with in a simulated meeting, etc. They can also be abstract objects such as the ROI for a particular investment, the number of times the student asked a particular question, etc. These objects are called entities. Some example entities include: Vehicles, operators and incidents in an insurance domain; Journal entries, cash flow statements and balance sheets in a financial accounting domain and Consumers and purchases in a marketing domain.”) apply generated insights during situational analysis while EDT activities are in progress. (Nichols: Column 6 line 56 – Column 7 line 9, “During the build phase, the application development team uses the detailed designs to code the application. Coding tasks include the interfaces and widgets that the student interacts with. The interfaces can be made up of buttons, grids, check boxes, or any other screen controls that allow the student to view and manipulate his deliverables. The developer must also code logic that analyzes the student's work and provides feedback interactions. These interactions may take the form of text and/or multimedia feedback from simulated team members, conversations with simulated team members, or direct manipulations of the student's work by simulated team members. In parallel with these coding efforts, graphics, videos, and audio are being created for use in the application. Managing the development of these assets have their own complications. Risks in the build phase include misinterpretation of the designs. If the developer does not accurately understand the designer's intentions, the application will not function as desired. 
Also, coding these applications requires very skilled developers because the logic that analyzes the student's work and composes feedback is very complex.”; Column 10 lines 24 – 65, “We have clearly defined why a combined component/framework approach is the best solution for delivering high-quality BusSim solutions at a lower cost. Given that there are a number of third party frameworks already on the market that provide delivery capability for a wide variety of platforms, the TEL project is focused on defining and developing a set of components that provide unique services for the development and delivery of BusSim solutions. These components along with a set of design and test workbenches are the tools used by instructional designers to support activities in the four phases of BusSim development. We call this suite of tools the Business Simulation Toolset. Following is a description of each of the components and workbenches of the toolset. A Component can be thought of as a black box that encapsulates the behavior and data necessary to support a related set of services. It exposes these services to the outside world through published interfaces. The published interface of a component allows you to understand what it does through the services it offers, but not how it does it. The complexity of its implementation is hidden from the user. The following are the key components of the BusSim Toolset. Domain Component--provides services for modeling the state of a simulation. Profiling Component--provides services for rule-based evaluating the state of a simulation. Transformation Component--provides services for manipulating the state of a simulation. Remediation Component--provides services for the rule-based delivering of feedback to the student The Domain Model component is the central component of the suite that facilitates communication of context data across the application and the other components. 
It is a modeling tool that can use industry-standard database such as Informix, Oracle, or Sybase to store its data. A domain model is a representation of the objects in a simulation. The objects are such pseudo tangible things as a lever the student can pull, a form or notepad the student fills out, a character the student interacts with in a simulated meeting, etc. They can also be abstract objects such as the ROI for a particular investment, the number of times the student asked a particular question, etc. These objects are called entities. Some example entities include: Vehicles, operators and incidents in an insurance domain; Journal entries, cash flow statements and balance sheets in a financial accounting domain and Consumers and purchases in a marketing domain.”; Column 21 line 21 – Column 22 line 4, “Execution Scenario: Student Interaction--FIG. 18 illustrates a suite to support a student interaction in accordance with a preferred embodiment. In this task the student is trying to journalize invoices. He sees a chart of accounts, an invoice, and the journal entry for each invoice. He journalizes a transaction by dragging and dropping an account from the chart of accounts onto the `Debits` or the `Credits` line of the journal entry and entering the dollar amount of the debit or credit. He does this for each transaction. As the student interacts with the interface, all actions are reported to and recorded in the Domain Model. The Domain Model has a meta-model describing a transaction, its data, and what information a journal entry contains. The actions of the student populates the entities in the domain model with the appropriate information. When the student is ready, he submits the work to a simulated team member for review. This submission triggers the Analysis-Interpretation cycle. 
The Transformation Component is invoked and performs additional calculations on the data in the Domain Model, perhaps determining that Debits and Credits are unbalanced for a given journal entry. The Profiling Component can then perform rule-based pattern matching on the Domain Model, examining both the student actions and results of any Transformation Component analysis. Some of the profiles fire as they identify the mistakes and correct answers the student has given. Any profiles that fire activate topics in the Remediation Component. After the Profiling Component completes, the Remediation Component is invoked. The remediation algorithm searches the active topics in the tree of concepts to determine the best set of topics to deliver. This set may contain text, video, audio, URLs, even actions that manipulate the Domain Model. It is then assembled into prose-like paragraphs of text and media and presented to the student. The text feedback helps the student localize his journalization errors and understand why they are wrong and what is needed to correct the mistakes. The student is presented with the opportunity to view a video war story about the tax and legal consequences that arise from incorrect journalization. He is also presented with links to the reference materials that describe the fundamentals of journalization. The Analysis-Interpretation cycle ends when any coach items that result in updates to the Domain Model have been posted and the interface is redrawn to represent the new domain data. In this case, the designer chose to highlight with a red check the transactions that the student journalized incorrectly.”) Chawla discloses a method for generating a virtual persona. Abrams discloses a method for training and creating an avatar for interaction based on a persona. Nichols discloses a method for using a simulated environment for training and teaching interactions with simulated personas. 
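Purely as an editor's illustration of how the three mapped teachings fit together (every name and rule below is a hypothetical sketch, not code from Chawla, Abrams, or Nichols), the claimed pipeline, deriving a persona from common attributes, attaching a dialog engine, and exposing it through an avatar, might look like:

```python
from collections import Counter

def build_persona(users):
    """Chawla-style archetype sketch: for each attribute, keep the value
    most common across the hypothesized group of users."""
    persona = {}
    for attr in {key for user in users for key in user}:
        values = [user[attr] for user in users if attr in user]
        persona[attr] = Counter(values).most_common(1)[0][0]
    return persona

class DialogEngine:
    """Abrams-style sketch of a persona-conditioned dialog engine: infers
    a crude user intent and returns a persona-tagged response."""
    def __init__(self, persona):
        self.persona = persona

    def respond(self, utterance):
        # Stand-in for the prosody analysis quoted above: a trailing '?'
        # plays the role of an upward inflection ("asking a question").
        intent = "question" if utterance.strip().endswith("?") else "statement"
        tag = self.persona.get("occupation", "persona")
        if intent == "question":
            return f"[{tag}] Good question - here is one way to look at it."
        return f"[{tag}] Understood."

class Avatar:
    """Connects an avatar to a persona's dialog engine so that users of a
    simulated workshop environment can converse with it."""
    def __init__(self, persona):
        self.engine = DialogEngine(persona)

    def chat(self, utterance):
        return self.engine.respond(utterance)

# Hypothetical end-user records; the persona inherits the common attributes.
students_data = [
    {"age_bracket": "25-34", "occupation": "designer"},
    {"age_bracket": "25-34", "occupation": "designer"},
    {"age_bracket": "35-44", "occupation": "engineer"},
]
mentor = Avatar(build_persona(students_data))
print(mentor.chat("How should we scope the workshop?"))
```

This is orientation only; the cited references describe far richer mechanisms (weighted cognitive graphs, inference algorithm selection, rule-based remediation) than this toy routing.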
Before the effective filing date of Applicant’s claimed invention, one of ordinary skill in the art would have found it obvious to combine the methods of Chawla with the teachings of Abrams in order to improve the adaptability and realism of generated personas as disclosed by Abrams (Abrams: Paragraph 6, “Consequently, service providers are often unable to automate interactions for services that rely on establishing and nurturing “real” relationships between users and characters.”). It would have been further obvious to one of ordinary skill in the art to combine the methods of Chawla and Abrams with the teachings of Nichols in order to utilize the virtual reality and personas of Chawla and Abrams to improve the teaching and development of skills as taught by Nichols (Nichols: Column 1 line 62 – Column 2 line 14, “The system utilizes an artificial intelligence engine driving individualized and dynamic feedback with synchronized video and graphics used to simulate real-world environment and interactions. Multiple "correct" answers are integrated into the learning system to allow individualized learning experiences in which navigation through the system is at a pace controlled by the learner.”). Claim(s) 20 – Chawla in view of Abrams and Nichols disclose the limitations of claim 19. Chawla further discloses the following: use a rating system for the common patterns and themes to determine a particular attribute of the persona; (Chawla: Paragraph 46, “Summarization 212, as used herein, broadly refers to processing a set of information, organizing and ranking it, and then generating a corresponding summary. As an example, a news article may be processed to identify its primary topic and associated observations, which are then extracted, ranked, and then presented to the user.
As another example, page ranking operations may be performed on the same news article to identify individual sentences, rank them, order them, and determine which of the sentences are most impactful in describing the article and its content. As yet another example, a structured data record, such as a patient's electronic medical record (EMR), may be processed using the summarization 212 process to generate sentences and phrases that describes the content of the EMR. In various embodiments, various summarization 212 processes are implemented by the CILS 118 to generate summarizations of content streams, which are in turn used to generate cognitive insights.”; Paragraph 94, “In various embodiments, the discover/visibility 430 component is implemented to provide detailed information related to a predetermined topic, such as a subject or an event, along with associated historical information. In certain embodiments, the predict 431 component is implemented to perform predictive operations to provide insight into what may next occur for a predetermined topic. In various embodiments, the rank/recommend 432 component is implemented to perform ranking and recommendation operations to provide a user prioritized recommendations associated with a provided cognitive insight.”; Paragraph 252, “As an example, composite cognitive insights provided by a particular insight agent related to a first subject may not be relevant or particularly useful to a user of the cognitive business processes and applications 304. As a result, the user provides feedback 1062 to that effect, which in turn is stored in the appropriate session graph that is associated with the user and stored in a repository of session graphs ‘1’ through ‘n’ 1052. Accordingly, subsequent insights provided by the insight agent related the first subject may be ranked lower, or not provided, within a cognitive insight summary 1048 provided to the user. 
Conversely, the same insight agent may provide excellent insights related to a second subject, resulting in positive feedback 1062 being received from the user. The positive feedback 1062 is likewise stored in the appropriate session graph that is associated with the user and stored in a repository of session graphs ‘1’ through ‘n’ 1052. As a result, subsequent insights provided by the insight agent related to the second subject may be ranked higher within a cognitive insight summary 1048 provided to the user.”) the segmenting of the end users comprises using a classifier that is a cluster algorithm that uses a distance calculator. (Chawla: Paragraph 95, “In certain embodiments, the insight/learning engine 330 may include additional components. For example the additional components may include classification algorithms, clustering algorithms, and so forth. Skilled practitioners of the art will realize that many such additional components are possible and that the foregoing is not intended to limit the spirit, scope or intent of the invention. In various embodiments, the insights agents 433 are implemented to create a visual data story, highlighting user-specific insights, relationships and recommendations. As a result, it can share, operationalize, or track business insights in various embodiments. In various embodiments, the learning agent 434 work in the background to continually update the cognitive graph, as described in greater detail herein, from each unique interaction with data and users.”; Paragraph 113, “In various embodiments, the compute cluster management 476 sub-component is implemented to manage various computing resources as a compute cluster. 
One such example of such a compute cluster management 476 sub-component is Mesos/Nimbus, a cluster management platform that manages distributed hardware resources into a single pool of resources that can be used by application frameworks to efficiently manage workload distribution for both batch jobs and long-running services. In various embodiments, the distributed object storage 478 sub-component is implemented to manage the physical storage and retrieval of distributed objects (e.g., binary file, image, text, etc.) In a cloud environment. Examples of a distributed object storage 478 sub-component include Amazon S3®, available from Amazon.com of Seattle, Wash., and Swift, an open source, scalable and redundant storage system.”; Paragraph 182, “In various embodiments, the profile services 940 include services related to the provision and management of cognitive personas and cognitive profiles used by a CILS when performing a cognitive learning operation. As used herein, a cognitive persona broadly refers to an archetype user model that represents a common set of attributes associated with a hypothesized group of users. In various embodiments, the common set of attributes may be described through the use of demographic, geographic, psychographic, behavioristic, and other information. As an example, the demographic information may include age brackets (e.g., 25 to 34 years old), gender, marital status (e.g., single, married, divorced, etc.), family size, income brackets, occupational classifications, educational achievement, and so forth. Likewise, the geographic information may include the cognitive persona's typical living and working locations (e.g., rural, semi-rural, suburban, urban, etc.) as well as characteristics associated with individual locations (e.g., parochial, cosmopolitan, population density, etc.).”; Paragraph 189, “In various embodiments, a cognitive persona or cognitive profile is defined by a first set of nodes in a weighted cognitive graph. 
In these embodiments, the cognitive persona or cognitive profile is further defined by a set of attributes that are respectively associated with a set of corresponding nodes in the weighted cognitive graph. In various embodiments, an attribute weight is used to represent a relevance value between two attributes. For example, a higher numeric value (e.g., ‘5.0’) associated with an attribute weight may indicate a higher degree of relevance between two attributes, while a lower numeric value (e.g., ‘0.5’) may indicate a lower degree of relevance.”; Paragraph 190, “In various embodiments, the numeric value associated with attribute weights may change as a result of the performance of composite cognitive insight and feedback 958 operations described in greater detail herein. In one embodiment, the changed numeric values associated with the attribute weights may be used to modify an existing cognitive persona or cognitive profile. In another embodiment, the changed numeric values associated with the attribute weights may be used to generate a new cognitive persona or cognitive profile. In certain embodiments, various ecosystem services 942 are implemented to manage various aspects of the CILS infrastructure, such as interaction with external services. The method by which these various aspects are managed is a matter of design choice.”) wherein the end user information comprises publicly available existing data about the end users or their contexts. (Chawla: Paragraph 211, “In various embodiments, cognitive learning operations may be performed in various phases of a cognitive learning process. In this embodiment, these phases include a source 1034 phase, a learn 1036 phase, an interpret/infer 1038 phase, and an act 1040 phase. In the source 1034 phase, a predetermined instantiation of a cognitive platform 1010 sources social data 1012, public data 1014, device data 1016, and proprietary data 1018 from various sources as described in greater detail herein. 
In various embodiments, an example of a cognitive platform 1010 instantiation is the cognitive platform 310 shown in FIGS. 3, 4a, and 4b. In this embodiment, the instantiation of a cognitive platform 1010 includes a source 1006 component, a process 1008 component, a deliver 1030 component, a cleanse 1020 component, an enrich 1022 component, a filter/transform 1024 component, and a repair/reject 1026 component. Likewise, as shown in FIG. 10a, the process 1008 component includes a repository of models 1028, described in greater detail herein.”; Paragraph 214, “In various embodiments, the process 1008 component is implemented to generate various models, described in greater detail herein, which are stored in the repository of models 1028. The process 1008 component is likewise implemented in various embodiments to use the sourced data to generate one or more cognitive graphs, such as an application cognitive graph 1082, as likewise described in greater detail herein. In various embodiments, the process 1008 component is implemented to gain an understanding of the data sourced from the sources of social data 1012, public data 1014, device data 1016, and proprietary data 1018, which assist in the automated generation of the application cognitive graph 1082.”) Chawla does not explicitly disclose the following limitations; however, in analogous art of persona generation and interaction, Abrams discloses the following: plugging in the persona to an ongoing workflow; (Abrams: Paragraph 23, “To provide a convincing illusion that users are interacting with a “real” character, the memory 118 includes, without limitation, the character engine 140. In operation the character engine 140 combines sensing algorithms, advanced thinking and learning algorithms, and expressing algorithms in a flexible and adaptive manner. Upon receiving data via the user platform 120, the character engine 140 determines a current context that includes a user intent and an assessment domain.
To generate character responses, the character engine 140 selects and applies inference algorithm(s) and personality engine(s) based on the current context and data received from the data sources 150. Finally, the character engine 140 tailors the character responses to the capabilities of the user platform 120”; Paragraph 38, “Together, the user intent 235 and the assessment domain 245 provide a current context for the interaction with the user. Notably, the user intent 235, the mode(s) of interaction, and/or the assessment domain 245 may change as time passes. For example, at one particular time, the user intent engine 230 and the domain parser 240 could determine that the user intent 235 is “asking a question” and the assessment domain 245 is “sports.” At a subsequent time, the user intent engine 230 and the domain parser 240 could determine that the the user intent 235 is “asking a question” and the assessment domain 245 is “cooking.””; Paragraph 47, “The domain parser 245 transmits the recognition data 225, the user intent 235, and the assessment domain 245 to the inference engine 250. The inference engine 250 then establishes a current context based on the user intent 235 and the assessment domain 245. The inference engine 250 may also refine the current context, the user intent 235, and/or the assessment domain 245 in any technically feasible fashion and based on any data. For example, the inference engine 250 may derive a current context from the assessment domain 245 and the user intent 235 and then perform additional assessment operations on the recognition data 225 to refine the current context.”) checking the workflow against a set of workflow parameters stored in a workflow data portion of the knowledge corpus; and (Abrams: Paragraph 57, “For example, one hard wired personality engine representing a chatbot could have a finite set of scripted responses based on a finite set of inputs. 
Another hard wired personality engine for the chatbot could include a finite set of classifiers that parse ambiguous inputs. By contrast, the personality engine 280 for the chatbot includes the parameterizable model 320, and is not constrained to a finite set of chat inputs and outputs.”; Paragraph 82, “In another example, suppose that the user holds a rock in front of a camera for identification, the inference engine 250 incorrectly identifies the rock as a brick, and the user then states that the object is actually a rock. As part of the offline learning 265, a convolution neural network used for the image recognition could be updated with the new training data 262 that includes the correct identification of the rock. Consequently, the image identification accuracy of the convolution neural network would improve.”; Paragraph 88, “The personality engine 280(1) is based on a “personality color wheel.” As shown, the personality engine 280(1) includes, without limitation, the parameterizable model 320 and a multi-dimensional vector of scalars (e.g., coefficients, weights, etc.) that define the personality 350. The parameterizable model 320 includes eight personality dimensions 320: optimism, love, submission, awe, disapproval, remorse, contempt, and aggressiveness. Although not shown, the colors and shades vary across the parameterizable model 320. The personality 350 is defined by a multi-dimensional vector that includes eight vectors (v1-v8). Each vector corresponds to one of the personality dimensions 320 and includes any number of scalar values.”; Paragraph 94, “At step 408, the domain parser 240 analyzes the recognition data 225 to determine the assessment domain 245. The domain parser 240 may determine the assessment domain 245 at any level of granularity and any in any technically feasible fashion. 
For example, the domain parser 240 could analyze the content of the recognition data 225 to determine that the assessment domain 245 is “causal conversation,” “storytelling,” or “homework.” In some embodiments, as part of determining the assessment domain 245, the domain parser 240 may refine the user intent 235. For example, the domain parser 240 may refine the user intent 235 from “asking a question,” to “asking a question about sports.” Together, the user intent 235 and the assessment domain 245 provide a current context for the interaction with the user. The domain parser 240 transmits the recognition data 225, the user intent 235, and the assessment domain 245 to the inference engine 250.”) calling out identified anomalies and changes to workflow patterns. (Abrams: Paragraph 100, “FIG. 5 is a flow diagram of method steps for incorporating data into a model of a character, according to various embodiments of the present invention. Although the method steps are described with reference to the systems of FIGS. 1-3, persons skilled in the art will understand that any system configured to implement the method steps, in any order, falls within the scope of the present invention. As persons skilled in the art will recognize, each of the method steps may be performed in a batch mode while the inference engine 250 is not interacting with users or a run-time mode while the inference engine 250 is interacting with users.”; Paragraph 105, “In sum, the disclosed techniques may be implemented to automate interactions with users. In general, a character engine provides modular sensing, thinking and learning, and expressing functionality. More specifically, an input platform abstraction infrastructure and a sensor processing engine implement sensing algorithms that process user data received from any number and type of user platforms (e.g., dolls, teleconferencing, avatar, etc.) to generate recognition data. 
A user intent engine, a domain parser, and an inference engine implement analysis and machine learning algorithms that generate inferences that are consistent with the current context (e.g., user intent, assessment domain, etc.). A personality engine and an output platform abstraction infrastructure implement algorithms that express character responses based on the inferences and tuned to the character and the particular user platform. Further, to evolve and refine the character, the character engine continuously updates a knowledge database, training data used to train the machine learning algorithms, and individual user histories based on interactions with users and external data sources.”; Paragraph 106, “Advantageously, by implementing the character engine to automate user interactions, service providers provide a convincing illusion that users are interacting with a “real” character instead of a machine. In particular, unlike conventional question and answer based AI systems, the character engine proactively engages with users. For example, the character engine can reply to a question with another question. Because the character engine is modular, the character engine may be configured to implement and select between a wide variety of algorithms that are applicable to different characters, contexts, and user platforms. Further, the algorithms implemented in the character engine may be updated as technology advances. By continuously updating the knowledge database, training data, and local history, the character engine develops a relationship with an individual user that evolves over time. 
In addition, since the character engine dynamically tailors the responses to different user platforms, the character engine generates the illusion of continuous and consistent relationships that transcend the user platforms.”) Chawla in view of Abrams does not explicitly disclose the following; however, in analogous art of virtual reality interactions, Nichols discloses the following: apply generated insights during situational analysis while EDT activities are in progress. (Nichols: Column 6 line 56 – Column 7 line 9, “During the build phase, the application development team uses the detailed designs to code the application. Coding tasks include the interfaces and widgets that the student interacts with. The interfaces can be made up of buttons, grids, check boxes, or any other screen controls that allow the student to view and manipulate his deliverables. The developer must also code logic that analyzes the student's work and provides feedback interactions. These interactions may take the form of text and/or multimedia feedback from simulated team members, conversations with simulated team members, or direct manipulations of the student's work by simulated team members. In parallel with these coding efforts, graphics, videos, and audio are being created for use in the application. Managing the development of these assets have their own complications. Risks in the build phase include misinterpretation of the designs. If the developer does not accurately understand the designer's intentions, the application will not function as desired. Also, coding these applications requires very skilled developers because the logic that analyzes the student's work and composes feedback is very complex.”; Column 10 lines 24 – 65, “We have clearly defined why a combined component/framework approach is the best solution for delivering high-quality BusSim solutions at a lower cost.
Given that there are a number of third party frameworks already on the market that provide delivery capability for a wide variety of platforms, the TEL project is focused on defining and developing a set of components that provide unique services for the development and delivery of BusSim solutions. These components along with a set of design and test workbenches are the tools used by instructional designers to support activities in the four phases of BusSim development. We call this suite of tools the Business Simulation Toolset. Following is a description of each of the components and workbenches of the toolset. A Component can be thought of as a black box that encapsulates the behavior and data necessary to support a related set of services. It exposes these services to the outside world through published interfaces. The published interface of a component allows you to understand what it does through the services it offers, but not how it does it. The complexity of its implementation is hidden from the user. The following are the key components of the BusSim Toolset. Domain Component--provides services for modeling the state of a simulation. Profiling Component--provides services for rule-based evaluating the state of a simulation. Transformation Component--provides services for manipulating the state of a simulation. Remediation Component--provides services for the rule-based delivering of feedback to the student The Domain Model component is the central component of the suite that facilitates communication of context data across the application and the other components. It is a modeling tool that can use industry-standard database such as Informix, Oracle, or Sybase to store its data. A domain model is a representation of the objects in a simulation. The objects are such pseudo tangible things as a lever the student can pull, a form or notepad the student fills out, a character the student interacts with in a simulated meeting, etc. 
They can also be abstract objects such as the ROI for a particular investment, the number of times the student asked a particular question, etc. These objects are called entities. Some example entities include: Vehicles, operators and incidents in an insurance domain; Journal entries, cash flow statements and balance sheets in a financial accounting domain and Consumers and purchases in a marketing domain.”; Column 21 line 21 – Column 22 line 4, “Execution Scenario: Student Interaction--FIG. 18 illustrates a suite to support a student interaction in accordance with a preferred embodiment. In this task the student is trying to journalize invoices. He sees a chart of accounts, an invoice, and the journal entry for each invoice. He journalizes a transaction by dragging and dropping an account from the chart of accounts onto the `Debits` or the `Credits` line of the journal entry and entering the dollar amount of the debit or credit. He does this for each transaction. As the student interacts with the interface, all actions are reported to and recorded in the Domain Model. The Domain Model has a meta-model describing a transaction, its data, and what information a journal entry contains. The actions of the student populates the entities in the domain model with the appropriate information. When the student is ready, he submits the work to a simulated team member for review. This submission triggers the Analysis-Interpretation cycle. The Transformation Component is invoked and performs additional calculations on the data in the Domain Model, perhaps determining that Debits and Credits are unbalanced for a given journal entry. The Profiling Component can then perform rule-based pattern matching on the Domain Model, examining both the student actions and results of any Transformation Component analysis. Some of the profiles fire as they identify the mistakes and correct answers the student has given. Any profiles that fire activate topics in the Remediation Component. 
After the Profiling Component completes, the Remediation Component is invoked. The remediation algorithm searches the active topics in the tree of concepts to determine the best set of topics to deliver. This set may contain text, video, audio, URLs, even actions that manipulate the Domain Model. It is then assembled into prose-like paragraphs of text and media and presented to the student. The text feedback helps the student localize his journalization errors and understand why they are wrong and what is needed to correct the mistakes. The student is presented with the opportunity to view a video war story about the tax and legal consequences that arise from incorrect journalization. He is also presented with links to the reference materials that describe the fundamentals of journalization. The Analysis-Interpretation cycle ends when any coach items that result in updates to the Domain Model have been posted and the interface is redrawn to represent the new domain data. In this case, the designer chose to highlight with a red check the transactions that the student journalized incorrectly.”)

Chawla discloses a method for generating a virtual persona. Abrams discloses a method for training and creating an avatar for interaction based on a persona. Nichols discloses a method for using a simulated environment for training and teaching interactions with simulated personas. At the time Applicant's invention was filed, one of ordinary skill in the art would have deemed it obvious to combine the methods of Chawla with the teachings of Abrams in order to improve the adaptability and realism of generated personas as disclosed by Abrams (Abrams: Paragraph 6, “Consequently, service providers are often unable to automate interactions for services that rely on establishing and nurturing “real” relationships between users and characters.”).
It would have been further obvious to one of ordinary skill in the art to combine the methods of Chawla and Abrams with the teachings of Nichols in order to utilize the virtual reality and personas of Chawla and Abrams to improve the teaching and development of skills as taught by Nichols (Nichols: Column 1 line 62 – Column 2 line 14, “The system utilizes an artificial intelligence engine driving individualized and dynamic feedback with synchronized video and graphics used to simulate real-world environment and interactions. Multiple "correct" answers are integrated into the learning system to allow individualized learning experiences in which navigation through the system is at a pace controlled by the learner.”).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Karmi (US 2023/0410378 A1) discloses a method for user persona management.
Hodge (US 2009/0299932 A1) discloses a method for providing a virtual persona.
Martin (US 2022/0253717 A1) discloses a method for bringing inanimate characters to life.
Bae (US 2023/0154093 A1) discloses a method for analyzing personality or aptitude based on metaverse and artificial intelligence.
Le Chavalier (US 2023/0162420 A1) discloses a method for provision of multimedia avatars.
Van Luchene (US 2010/0197409 A1) discloses a method for managing relationships between characters in a game or virtual environment.
Brignull (US 2009/0187461 A1) discloses a method for market segmentation analysis in virtual universes.
Karadijan (US 2015/0221230 A1) discloses a simulation training system.
Murugaiah (US 2022/0230143 A1) discloses a method for training an avatar to assist a user in career advancement.
Williams (US 2023/0179955 A1) discloses a method for dynamic and adaptive systems for managing behaviors.
Chen (US 2009/0313274 A1) discloses a method for persona management.
Parasker (US 2021/0142238 A1) discloses a method for delivering information technology products and services.
Goff (US 2012/0308982 A1) discloses a method for a virtual social lab.
Tormasov (US 2022/0358344 A1) discloses a method for generating a user behavioral avatar for a social media platform.
Smith (US 10,861,344 B2) discloses a method for a personalized learning system and method for automated generation of structured learning.
Harlow (US 11,158,204 B2) discloses a method for customizing learning interactions based on a user model.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip N Warner whose telephone number is (571)270-7407. The examiner can normally be reached Monday-Friday 7am-4:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jerry O'Connor, can be reached at 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Philip N Warner/
Examiner, Art Unit 3624

/Jerry O'Connor/
Supervisory Patent Examiner, Group Art Unit 3624

Prosecution Timeline

Dec 07, 2022
Application Filed
Oct 19, 2023
Response after Non-Final Action
Jan 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596974
MULTI-LAYER ABRASIVE TOOLS FOR CONCRETE SURFACE PROCESSING
2y 5m to grant Granted Apr 07, 2026
Patent 12596984
INFORMATION GENERATION APPARATUS, INFORMATION GENERATION METHOD AND PROGRAM
2y 5m to grant Granted Apr 07, 2026
Patent 12579490
GENERATING SUGGESTIONS WITHIN A DATA INTEGRATION SYSTEM
2y 5m to grant Granted Mar 17, 2026
Patent 12567011
BATTERY LEDGER MANAGEMENT SYSTEM AND METHOD OF BATTERY LEDGER MANAGEMENT
2y 5m to grant Granted Mar 03, 2026
Patent 12493819
UTILIZING MACHINE LEARNING MODELS TO GENERATE INITIATIVE PLANS
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
36%
Grant Probability
65%
With Interview (+28.6%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 107 resolved cases by this examiner. Grant probability derived from career allow rate.
