DETAILED ACTION
This action is responsive to the Application filed on 10/12/2023. Claims 1-20 are pending in the case. Claims 1, 12, and 19 are independent claims.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Verma (US 12190056 B1) in view of Ciminelli et al. (US 20230409298 A1, hereinafter Ciminelli).
As to claim 1, Verma discloses a service provider system comprising:
a non-transitory memory ("Software applications executed by the smart glasses may be stored within the non-transitory memory and/or other storage medium. Software applications may provide instructions to the processor that enable the apparatus to perform various functions. The instructions may include any of the smart glasses methods and processes described herein," Verma column 18 lines 32-37); and
one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the service provider system to perform operations ("Software applications executed by the smart glasses may be stored within the non-transitory memory and/or other storage medium. Software applications may provide instructions to the processor that enable the apparatus to perform various functions. The instructions may include any of the smart glasses methods and processes described herein," Verma column 18 lines 32-37) comprising:
determining user data associated with user features for a user, wherein the user is engaged in a use of a service of the service provider system ("A party may identify a user, an agent, or a customer, for example, in a bank setting. Identifying each party is critical and there may be multi-party identification processes. For simplicity, however, consider a two-party identification process. For example, the first party may be a banking service agent, and the second party may be a customer. Speech recognition software may be used to create a banking service agent voice profile, which may be used to identify the first party ID in a conversation thread. Any speech other than the first party ID may then be considered for a second party ID," Verma column 7 lines 9-18);
determining agent data associated with agent features for an agent assisting the user with a user experience (UX) for the service ("A party may identify a user, an agent, or a customer, for example, in a bank setting. Identifying each party is critical and there may be multi-party identification processes. For simplicity, however, consider a two-party identification process. For example, the first party may be a banking service agent, and the second party may be a customer. Speech recognition software may be used to create a banking service agent voice profile, which may be used to identify the first party ID in a conversation thread. Any speech other than the first party ID may then be considered for a second party ID," Verma column 7 lines 9-18);
determining a service state of at least one of the service being provided to the user or another service available to the user at a current time ("In addition, each UI form may have a sharing confidence index (e.g., a grouping of pending UI form text fields). Some conversational context blocks are not so clear and may activate multiple UI form text fields together in listening mode, e.g.: First party (agent): “May I know your account no. or phone no., please?” Second party (customer): “It is 989-034-7889.”," Verma column 7 lines 19-26, determining what questions the agent has asked and what questions the user has answered (i.e., determining a service state of the banking service being provided to the user));
computing, using a simulation engine based on the user data, the agent data, and the service state, a plurality of scenarios for dynamic user interfaces (UIs) displayable to at least one of the user or the agent ("In this case, the UI form may have two UI form text fields: (1) account no. and (2) phone no. These two UI form text fields may partially match a conversational block. Therefore, a context parsing rule in both fields may suggest looking for a 10-digit numeric value in the second party's response. Because the smart glasses UI form, via a conversation tracking application, may find that the response partially matches two UI form text fields, the smart glasses UI form may split a confidence index between these two AI supported UI form text fields (e.g., 50-50 for each UI form text field, though it may vary from case-to-case, e.g., 20-80/30-70/40-60, etc.)," Verma column 7 lines 27-38; "The AI powered contextual form 404 may include customer ID text field 406 (with 0% confidence index), account no. text field 408 (with 10% confidence index), email ID text field 410 (with 99% confidence index), and phone no. text field 412 (with 50% confidence index). A 99% confidence index may be considered a valid entry value. The UI form and its fields may toggle from one state to another based on the context of the conversation and keep their respective values updated. Confidence Index 402 measures how confident the system is about the currently identified value for a given text field," Verma column 16 lines 54-64; "In addition, a field state 414 may be displayed for each UI form text field. The field state 414 may include, for example, a color-coded or pattern-coded system for identifying conversational context. For example, one color or pattern may represent smart glasses are “actively listening to the conversation,” one color or pattern may represent smart glasses are “inactive, waiting for context,” one color or pattern may represent smart glasses when “context parsing completed, value assigned,” and another color or pattern may represent when smart glasses “found potentially relevant context, but confidence is low.”," Verma column 16 lines 65-67 and column 17 lines 1-8, determining a variety of confidence scenarios to fill out and determining color code scenarios for the UI);
generating, using the simulation engine and based on UI presentation parameters for presenting different dynamic UIs, wherein the UI presentation parameters comprise different communication channels, available UI content, and UI fields for the content, and wherein the first dynamic UI provides UI content for the UX with the service ("When the sensor detects the voice of the agent, the microprocessor may be operable to execute a conversation tracking application. The conversation tracking application may be configured to determine one or more conversations between the agent and the customer. The conversation tracking application may extract data directed to a data entry field within the UI. Further, the conversation tracking application may detect data while listening to a conversation, and, in response to the detection, identify a data segment from the data within the data entry field," Verma column 3 lines 39-48, modified UI is presented in the context of a conversation communication channel); and
outputting the first dynamic UI to at least one of a first computing device of the user or a second computing device of the agent during the use of the service ("A smart glasses interface may display a potential UI form text field value in a side box and the user or agent may be prompted to obtain verification of UI form text field value. In some embodiments, the user or agent engages in UI form text field value verification steps," Verma column 10 lines 63-67, displaying the GUI to both the agent and the user).
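For illustration only, the shared confidence index described in the Verma passages cited above (a parsed conversational value that partially matches several UI form text fields has its confidence split among them, e.g., 50-50 or 20-80, with a 99% index treated as a valid entry) can be sketched as follows. The field names, weights, and helper function are hypothetical and are not drawn from Verma's disclosure.

    # Illustrative sketch (not Verma's implementation) of a shared
    # confidence index split across partially matching UI form fields.
    from dataclasses import dataclass

    VALID_THRESHOLD = 0.99  # per Verma, a 99% index is a valid entry value

    @dataclass
    class FormField:
        name: str
        value: str = ""
        confidence: float = 0.0
        state: str = "inactive, waiting for context"

    def split_confidence(fields, value, weights=None):
        # Default to an even split (e.g., 50-50 for two fields); Verma
        # notes the split may vary case-to-case (20-80, 30-70, 40-60).
        weights = weights or [1.0 / len(fields)] * len(fields)
        for f, w in zip(fields, weights):
            f.value, f.confidence = value, w
            f.state = ("context parsing completed, value assigned"
                       if w >= VALID_THRESHOLD else
                       "found potentially relevant context, but confidence is low")

    account = FormField("account no.")
    phone = FormField("phone no.")
    # "It is 989-034-7889" matches both ten-digit fields: split 50-50.
    split_confidence([account, phone], "989-034-7889")
    print(account.confidence, phone.confidence)  # 0.5 0.5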
However, Verma does not appear to explicitly disclose computing code for a first dynamic UI based on the plurality of scenarios.
Ciminelli teaches computing code for a first dynamic UI based on the plurality of scenarios ("In some examples, step 510 may comprise analyzing a textual input (such as textual data 102, the textual input received by step 508, the first textual input received by step 706, the textual input received by step 802, etc.) and/or a sketch (such as sketch data 104, the sketch received by step 508, the sketch received by step 706, the sketch received by step 802, etc.) to determine at least one change to a selected portion of a design of a user interface (for example, to the portion of the design of the user interface of step 508, to selected portion 110, to the entire design of the user interface, etc.). In one example, the at least one change may include at least one of adding an element to the user interface or removing an element from the user interface. Other non-limiting examples of such changes may include changing a size of an element of the user interface, changing a color of at least part of an element of the user interface, changing a font, changing a layout of elements of the user interface, changing a position of an element of the user interface, changing distance between two elements of the user interface, changing appearance of an element of the user interface, changing a level of details associated with at least part of the user interface, changing a timing of an event associated with an element of the user interface, and so forth. In one example, step 510 may use a machine learning model to analyze the textual input and/or the sketch and/or additional information to determine the at least one change to the portion of the design of the user interface," Ciminelli paragraph 0093; "In some examples, step 512 may comprise implementing at least one change (such as the at least one change determined by step 512) to generate a modified design of a user interface (such as the user interface of step 502). In one example, implanting the at least one change may include encoding the modified design of the user interface and/or design elements of the modified design of the user interface and/or a layout of the modified design of the user interface in digital files (for example in an HTML format and/or a CSS format and/or a source code in a style sheet language and/or media files)," Ciminelli paragraph 0113, generating code when updating a UI’s colors or content).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Verma to generate and modify code when updating the UI, as taught by Ciminelli. One would have been motivated to make such a combination so that the UI modification could work with any service's UI application, thereby allowing the finished product to operate in a wider range of scenarios.
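As a non-limiting illustration of the encoding contemplated by Ciminelli's paragraph 0113 (writing a modified UI design out as HTML and CSS files), a minimal sketch follows. The design dictionary, templates, and file names are assumptions made for this sketch only.

    # Illustrative sketch of encoding a modified UI design into digital
    # files (HTML and CSS), of the kind Ciminelli para. 0113 describes.
    from pathlib import Path

    def encode_design(design: dict, out_dir: str = "ui_build") -> None:
        out = Path(out_dir)
        out.mkdir(exist_ok=True)
        inputs = "\n".join(
            '  <label>{0}<input name="{1}"></label>'.format(f["label"], f["name"])
            for f in design["fields"])
        # Encode the layout as an HTML file and the styling as a CSS file.
        (out / "form.html").write_text(
            '<form class="{0}">\n{1}\n</form>\n'.format(design["theme"], inputs))
        (out / "form.css").write_text(
            '.{0} input {{ border-color: {1}; }}\n'.format(design["theme"],
                                                           design["accent"]))

    encode_design({"theme": "contextual-form",
                   "accent": "#2a7ae2",
                   "fields": [{"name": "account_no", "label": "Account no."},
                              {"name": "phone_no", "label": "Phone no."}]})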
As to claim 2, Verma as modified by Ciminelli discloses the service provider system of claim 1, wherein the generating the first dynamic UI comprises generating additional computing code for a plurality of dynamic UIs including the first dynamic UI based on the plurality of scenarios and the UI presentation parameters, each of the plurality of dynamic UIs for a different UX with the service, and wherein the outputting the first dynamic UI comprises outputting the plurality of dynamic UIs further using the additional computing code ("In some examples, step 510 may comprise analyzing a textual input (such as textual data 102, the textual input received by step 508, the first textual input received by step 706, the textual input received by step 802, etc.) and/or a sketch (such as sketch data 104, the sketch received by step 508, the sketch received by step 706, the sketch received by step 802, etc.) to determine at least one change to a selected portion of a design of a user interface (for example, to the portion of the design of the user interface of step 508, to selected portion 110, to the entire design of the user interface, etc.). In one example, the at least one change may include at least one of adding an element to the user interface or removing an element from the user interface. Other non-limiting examples of such changes may include changing a size of an element of the user interface, changing a color of at least part of an element of the user interface, changing a font, changing a layout of elements of the user interface, changing a position of an element of the user interface, changing distance between two elements of the user interface, changing appearance of an element of the user interface, changing a level of details associated with at least part of the user interface, changing a timing of an event associated with an element of the user interface, and so forth. In one example, step 510 may use a machine learning model to analyze the textual input and/or the sketch and/or additional information to determine the at least one change to the portion of the design of the user interface," Ciminelli paragraph 0093; "In some examples, step 512 may comprise implementing at least one change (such as the at least one change determined by step 512) to generate a modified design of a user interface (such as the user interface of step 502). In one example, implanting the at least one change may include encoding the modified design of the user interface and/or design elements of the modified design of the user interface and/or a layout of the modified design of the user interface in digital files (for example in an HTML format and/or a CSS format and/or a source code in a style sheet language and/or media files)," Ciminelli paragraph 0113, using modified code to display the modified UI).
As to claim 3, Verma as modified by Ciminelli discloses the service provider system of claim 2, wherein the additional computing code for each of the dynamic UIs is generated with corresponding UI content displayable in the plurality of dynamic UIs, and wherein the corresponding UI content includes UI fields with UI presentable data for the UI fields having one or more arrangements of the UI fields in corresponding ones of the plurality of dynamic UIs ("In this case, the UI form may have two UI form text fields: (1) account no. and (2) phone no. These two UI form text fields may partially match a conversational block. Therefore, a context parsing rule in both fields may suggest looking for a 10-digit numeric value in the second party's response. Because the smart glasses UI form, via a conversation tracking application, may find that the response partially matches two UI form text fields, the smart glasses UI form may split a confidence index between these two AI supported UI form text fields (e.g., 50-50 for each UI form text field, though it may vary from case-to-case, e.g., 20-80/30-70/40-60, etc.)," Verma column 7 lines 27-38; "The AI powered contextual form 404 may include customer ID text field 406 (with 0% confidence index), account no. text field 408 (with 10% confidence index), email ID text field 410 (with 99% confidence index), and phone no. text field 412 (with 50% confidence index). A 99% confidence index may be considered a valid entry value. The UI form and its fields may toggle from one state to another based on the context of the conversation and keep their respective values updated. Confidence Index 402 measures how confident the system is about the currently identified value for a given text field," Verma column 16 lines 54-64; "In addition, a field state 414 may be displayed for each UI form text field. The field state 414 may include, for example, a color-coded or pattern-coded system for identifying conversational context. For example, one color or pattern may represent smart glasses are “actively listening to the conversation,” one color or pattern may represent smart glasses are “inactive, waiting for context,” one color or pattern may represent smart glasses when “context parsing completed, value assigned,” and another color or pattern may represent when smart glasses “found potentially relevant context, but confidence is low.”," Verma column 16 lines 65-67 and column 17 lines 1-8, determining content for the fields and determining color code information for the UI).
As to claim 4, Verma as modified by Ciminelli discloses the service provider system of claim 2, wherein the operations further comprise:
receiving a selection of the first dynamic UI from the plurality of dynamic UIs ("In this case, the UI form may have two UI form text fields: (1) account no. and (2) phone no. These two UI form text fields may partially match a conversational block. Therefore, a context parsing rule in both fields may suggest looking for a 10-digit numeric value in the second party's response. Because the smart glasses UI form, via a conversation tracking application, may find that the response partially matches two UI form text fields, the smart glasses UI form may split a confidence index between these two AI supported UI form text fields (e.g., 50-50 for each UI form text field, though it may vary from case-to-case, e.g., 20-80/30-70/40-60, etc.)," Verma column 7 lines 27-38; "The AI powered contextual form 404 may include customer ID text field 406 (with 0% confidence index), account no. text field 408 (with 10% confidence index), email ID text field 410 (with 99% confidence index), and phone no. text field 412 (with 50% confidence index). A 99% confidence index may be considered a valid entry value. The UI form and its fields may toggle from one state to another based on the context of the conversation and keep their respective values updated. Confidence Index 402 measures how confident the system is about the currently identified value for a given text field," Verma column 16 lines 54-64; "In addition, a field state 414 may be displayed for each UI form text field. The field state 414 may include, for example, a color-coded or pattern-coded system for identifying conversational context. For example, one color or pattern may represent smart glasses are “actively listening to the conversation,” one color or pattern may represent smart glasses are “inactive, waiting for context,” one color or pattern may represent smart glasses when “context parsing completed, value assigned,” and another color or pattern may represent when smart glasses “found potentially relevant context, but confidence is low.”," Verma column 16 lines 65-67 and column 17 lines 1-8, determining a variety of confidence scenarios to fill out and determining color code scenarios for the UI and then displaying the highest ranked UI (i.e., receiving a selection of the UI from the plurality of UIs));
rendering the first dynamic UI on the computing device of the user ("A smart glasses interface may display a potential UI form text field value in a side box and the user or agent may be prompted to obtain verification of UI form text field value. In some embodiments, the user or agent engages in UI form text field value verification steps," Verma column 10 lines 63-67, displaying the GUI to both the agent and the user); and
allowing at least one of the user or the agent to further configure the first dynamic UI with additional options for the UI content for the UX ("In an embodiment, AI Mode 418 may be toggled ON or OFF. AI Mode ON/OFF 418 is used to toggle the UI form between an AI enabled (AI Mode ON) and a regular mode (AI Mode OFF). Further, Show Context 416 made be toggled ON or OFF, as well. Show context ON 416 may help a first party (agent) know what is understood from a conversation," Verma column 17 lines 22-28, UI can be customized by toggling options).
As to claim 5, Verma as modified by Ciminelli discloses the service provider system of claim 2, wherein each of the plurality of dynamic UIs is presentable using at least one of the different communication channels, and wherein the different communication channels include at least one of a conversational UI channel, a multimedia UI channel, a web-based chat channel, an instant messaging channel, a text-to-speech UI channel, an interactive voice response channel, or an email channel ("When the sensor detects the voice of the agent, the microprocessor may be operable to execute a conversation tracking application. The conversation tracking application may be configured to determine one or more conversations between the agent and the customer. The conversation tracking application may extract data directed to a data entry field within the UI. Further, the conversation tracking application may detect data while listening to a conversation, and, in response to the detection, identify a data segment from the data within the data entry field," Verma column 3 lines 39-48, UI is presented in the context of a conversation communication channel).
As to claim 6, Verma as modified by Ciminelli discloses the service provider system of claim 2, wherein the first dynamic UI is output to one of the first computing device or the second computing device based on a corresponding one of the user or the agent receiving the first dynamic UI, and wherein the operations further comprise:
outputting a second dynamic UI of the plurality of dynamic UIs instead of the first dynamic UI to another one of the first computing device or the second computing device based on a corresponding one of the user or the agent receiving the second dynamic UI ("In this case, the UI form may have two UI form text fields: (1) account no. and (2) phone no. These two UI form text fields may partially match a conversational block. Therefore, a context parsing rule in both fields may suggest looking for a 10-digit numeric value in the second party's response. Because the smart glasses UI form, via a conversation tracking application, may find that the response partially matches two UI form text fields, the smart glasses UI form may split a confidence index between these two AI supported UI form text fields (e.g., 50-50 for each UI form text field, though it may vary from case-to-case, e.g., 20-80/30-70/40-60, etc.)," Verma column 7 lines 27-38; "The AI powered contextual form 404 may include customer ID text field 406 (with 0% confidence index), account no. text field 408 (with 10% confidence index), email ID text field 410 (with 99% confidence index), and phone no. text field 412 (with 50% confidence index). A 99% confidence index may be considered a valid entry value. The UI form and its fields may toggle from one state to another based on the context of the conversation and keep their respective values updated. Confidence Index 402 measures how confident the system is about the currently identified value for a given text field," Verma column 16 lines 54-64; "In addition, a field state 414 may be displayed for each UI form text field. The field state 414 may include, for example, a color-coded or pattern-coded system for identifying conversational context. For example, one color or pattern may represent smart glasses are “actively listening to the conversation,” one color or pattern may represent smart glasses are “inactive, waiting for context,” one color or pattern may represent smart glasses when “context parsing completed, value assigned,” and another color or pattern may represent when smart glasses “found potentially relevant context, but confidence is low.”," Verma column 16 lines 65-67 and column 17 lines 1-8, as the user answers, the first UI is replaced with a second UI with different form fills and color coding).
As to claim 7, Verma as modified by Ciminelli discloses the service provider system of claim 1, wherein prior to the determining the service state, the operations further comprise:
determining customer journey information for a customer journey of the user with at least one of the service or the agent over a time period, wherein the customer journey information comprises at least one of a current conversational state between the user and the agent or past events of the user with the service, and wherein the determining the service state is based on the customer journey information ("In addition, each UI form may have a sharing confidence index (e.g., a grouping of pending UI form text fields). Some conversational context blocks are not so clear and may activate multiple UI form text fields together in listening mode, e.g.: First party (agent): “May I know your account no. or phone no., please?” Second party (customer): “It is 989-034-7889.”," Verma column 7 lines 19-26, determining what questions the agent has asked and what questions the user has answered (i.e., historical information used to determine a service state of the banking service being provided to the user)).
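By way of illustration, the claimed service state as read onto Verma's conversation tracking (which questions the agent has asked and which the user has answered) could be modeled as in the sketch below. The event format and field names are hypothetical, not taken from Verma.

    # Illustrative sketch: deriving a service state from tracked
    # conversation events (agent questions asked, customer answers given).
    REQUIRED_FIELDS = ("account no.", "phone no.", "email ID")

    def service_state(events):
        # events: (speaker, field, utterance) tuples from a conversation
        # tracker; this tuple format is an assumption for the sketch.
        asked = {f for speaker, f, _ in events if speaker == "agent"}
        answered = {f for speaker, f, _ in events if speaker == "customer"}
        return {"pending": [f for f in REQUIRED_FIELDS
                            if f in asked and f not in answered],
                "complete": sorted(asked & answered)}

    state = service_state([
        ("agent", "account no.", "May I know your account no., please?"),
        ("customer", "account no.", "It is 989-034-7889."),
        ("agent", "phone no.", "And your phone no.?"),
    ])
    print(state)  # {'pending': ['phone no.'], 'complete': ['account no.']}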
As to claim 8, Verma as modified by Ciminelli discloses the service provider system of claim 1, wherein the user features comprise real-time time series events including at least one of transaction declines, transaction disputes, account limitations, or user utterances, and wherein the user features further comprise unique user characteristics including at least one of medical conditions, disabilities, a residency, or a language ("In addition, each UI form may have a sharing confidence index (e.g., a grouping of pending UI form text fields). Some conversational context blocks are not so clear and may activate multiple UI form text fields together in listening mode, e.g.: First party (agent): “May I know your account no. or phone no., please?” Second party (customer): “It is 989-034-7889.”," Verma column 7 lines 19-26, determining the things the user has answered (i.e., user utterances and user language characteristics) to change the UI).
As to claim 9, Verma as modified by Ciminelli discloses the service provider system of claim 1, wherein the agent features comprise real-time agent availability information including an agent skill level, hours of operation, an agent sentiment, or agent utterances, and wherein the agent features further comprise unique agent characteristics including medical conditions, disabilities, a residency, or a language ("In addition, each UI form may have a sharing confidence index (e.g., a grouping of pending UI form text fields). Some conversational context blocks are not so clear and may activate multiple UI form text fields together in listening mode, e.g.: First party (agent): “May I know your account no. or phone no., please?” Second party (customer): “It is 989-034-7889.”," Verma column 7 lines 19-26, determining what questions the agent has asked (i.e., agent utterances and agent language characteristics) to change the UI).
As to claim 10, Verma as modified by Ciminelli discloses the service provider system of claim 1, wherein the simulation engine comprises a large language model (LLM) trained to predict permutations of the plurality of scenarios from combinations of input variables associated with the user data, the agent data, and the service state ("This disclosure provides smart glasses UI form text fields supported by advanced technologies including, but not limited to, AI, voice-to-text, natural language processing, context parsing, and speech-to-profile mapping. AI includes, but is not limited to, all forms of AI including large language models (“LLMs”), including, but not limited to ChatGPT, Bard, and the like. Voice-to-text conversion is the ability to take a voice and convert the audio to digital text images. Natural language processing is the ability to take natural language from a conversation and understand meaning and context from the natural language in the conversation. Context parsing is the ability to listen to a conversation and parse context and meaning out of the audio data obtained in a conversation," Verma column 3 lines 6-19; "A method is provided for an AI-based procedure to autofill and autocorrect UI form text fields based on contextual analyses of real-time conversations between a customer and an agent. In addition, the method may use AR display devices to autofill UI forms based on real-time conversations," Verma column 3 lines 22-27).
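To make the claim 10 mapping concrete, permutations of UX scenarios over communication channels, available UI content, and UI field layouts could be enumerated and scored as sketched below. The variables are hypothetical, and score_with_llm is a stub standing in for the trained LLM; none of this is drawn from Verma's code.

    # Illustrative sketch: enumerating permutations of UX scenarios from
    # combinations of input variables, with a stub in place of the LLM.
    from itertools import product

    CHANNELS = ("conversational UI", "web chat", "email")
    UI_CONTENT = ("account recovery", "dispute form")
    FIELD_LAYOUTS = ("single-column", "two-column")

    def score_with_llm(scenario, user_data, agent_data, service_state):
        # Stub: a trained LLM would rank how well this permutation fits
        # the user data, agent data, and current service state.
        return (hash((scenario, service_state)) % 100) / 100.0

    def rank_scenarios(user_data, agent_data, service_state):
        scenarios = product(CHANNELS, UI_CONTENT, FIELD_LAYOUTS)
        return sorted(scenarios, reverse=True,
                      key=lambda s: score_with_llm(s, user_data, agent_data,
                                                   service_state))

    best = rank_scenarios({"language": "en"}, {"skill": "disputes"},
                          "awaiting phone no.")[0]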
As to claim 11, Verma as modified by Ciminelli discloses the service provider system of claim 10, wherein the generating the computing code comprises:
generating, using the LLM, first computing code for dynamic content in the first dynamic UI based on a highest scored one of the plurality of scenarios, wherein the highest scored one of the plurality of scenarios comprises one of the permutations of the combinations from the different communication channels, the available UI content, and the UI fields specific to the user data, the agent data, and the service state ("In this case, the UI form may have two UI form text fields: (1) account no. and (2) phone no. These two UI form text fields may partially match a conversational block. Therefore, a context parsing rule in both fields may suggest looking for a 10-digit numeric value in the second party's response. Because the smart glasses UI form, via a conversation tracking application, may find that the response partially matches two UI form text fields, the smart glasses UI form may split a confidence index between these two AI supported UI form text fields (e.g., 50-50 for each UI form text field, though it may vary from case-to-case, e.g., 20-80/30-70/40-60, etc.)," Verma column 7 lines 27-38; "The AI powered contextual form 404 may include customer ID text field 406 (with 0% confidence index), account no. text field 408 (with 10% confidence index), email ID text field 410 (with 99% confidence index), and phone no. text field 412 (with 50% confidence index). A 99% confidence index may be considered a valid entry value. The UI form and its fields may toggle from one state to another based on the context of the conversation and keep their respective values updated. Confidence Index 402 measures how confident the system is about the currently identified value for a given text field," Verma column 16 lines 54-64; "In addition, a field state 414 may be displayed for each UI form text field. The field state 414 may include, for example, a color-coded or pattern-coded system for identifying conversational context. For example, one color or pattern may represent smart glasses are “actively listening to the conversation,” one color or pattern may represent smart glasses are “inactive, waiting for context,” one color or pattern may represent smart glasses when “context parsing completed, value assigned,” and another color or pattern may represent when smart glasses “found potentially relevant context, but confidence is low.”," Verma column 16 lines 65-67 and column 17 lines 1-8, determining a variety of confidence scenarios to fill out and determining color code scenarios for the UI); and
generating, using the LLM, second computing code for dynamic UI controls of the first dynamic UI based on the highest scored one of the plurality of scenarios ("In this case, the UI form may have two UI form text fields: (1) account no. and (2) phone no. These two UI form text fields may partially match a conversational block. Therefore, a context parsing rule in both fields may suggest looking for a 10-digit numeric value in the second party's response. Because the smart glasses UI form, via a conversation tracking application, may find that the response partially matches two UI form text fields, the smart glasses UI form may split a confidence index between these two AI supported UI form text fields (e.g., 50-50 for each UI form text field, though it may vary from case-to-case, e.g., 20-80/30-70/40-60, etc.)," Verma column 7 lines 27-38; "The AI powered contextual form 404 may include customer ID text field 406 (with 0% confidence index), account no. text field 408 (with 10% confidence index), email ID text field 410 (with 99% confidence index), and phone no. text field 412 (with 50% confidence index). A 99% confidence index may be considered a valid entry value. The UI form and its fields may toggle from one state to another based on the context of the conversation and keep their respective values updated. Confidence Index 402 measures how confident the system is about the currently identified value for a given text field," Verma column 16 lines 54-64; "In addition, a field state 414 may be displayed for each UI form text field. The field state 414 may include, for example, a color-coded or pattern-coded system for identifying conversational context. For example, one color or pattern may represent smart glasses are “actively listening to the conversation,” one color or pattern may represent smart glasses are “inactive, waiting for context,” one color or pattern may represent smart glasses when “context parsing completed, value assigned,” and another color or pattern may represent when smart glasses “found potentially relevant context, but confidence is low.”," Verma column 16 lines 65-67 and column 17 lines 1-8, determining a variety of confidence scenarios to fill out and determining color code scenarios for the UI; "In some examples, step 510 may comprise analyzing a textual input (such as textual data 102, the textual input received by step 508, the first textual input received by step 706, the textual input received by step 802, etc.) and/or a sketch (such as sketch data 104, the sketch received by step 508, the sketch received by step 706, the sketch received by step 802, etc.) to determine at least one change to a selected portion of a design of a user interface (for example, to the portion of the design of the user interface of step 508, to selected portion 110, to the entire design of the user interface, etc.). In one example, the at least one change may include at least one of adding an element to the user interface or removing an element from the user interface. Other non-limiting examples of such changes may include changing a size of an element of the user interface, changing a color of at least part of an element of the user interface, changing a font, changing a layout of elements of the user interface, changing a position of an element of the user interface, changing distance between two elements of the user interface, changing appearance of an element of the user interface, changing a level of details associated with at least part of the user interface, changing a timing of an event associated with an element of the user interface, and so forth. In one example, step 510 may use a machine learning model to analyze the textual input and/or the sketch and/or additional information to determine the at least one change to the portion of the design of the user interface," Ciminelli paragraph 0093; "In some examples, step 512 may comprise implementing at least one change (such as the at least one change determined by step 512) to generate a modified design of a user interface (such as the user interface of step 502). In one example, implanting the at least one change may include encoding the modified design of the user interface and/or design elements of the modified design of the user interface and/or a layout of the modified design of the user interface in digital files (for example in an HTML format and/or a CSS format and/or a source code in a style sheet language and/or media files)," Ciminelli paragraph 0113), and
wherein the computing code for the first dynamic UI includes the first computing code and the second computing code ("In some examples, step 510 may comprise analyzing a textual input (such as textual data 102, the textual input received by step 508, the first textual input received by step 706, the textual input received by step 802, etc.) and/or a sketch (such as sketch data 104, the sketch received by step 508, the sketch received by step 706, the sketch received by step 802, etc.) to determine at least one change to a selected portion of a design of a user interface (for example, to the portion of the design of the user interface of step 508, to selected portion 110, to the entire design of the user interface, etc.). In one example, the at least one change may include at least one of adding an element to the user interface or removing an element from the user interface. Other non-limiting examples of such changes may include changing a size of an element of the user interface, changing a color of at least part of an element of the user interface, changing a font, changing a layout of elements of the user interface, changing a position of an element of the user interface, changing distance between two elements of the user interface, changing appearance of an element of the user interface, changing a level of details associated with at least part of the user interface, changing a timing of an event associated with an element of the user interface, and so forth. In one example, step 510 may use a machine learning model to analyze the textual input and/or the sketch and/or additional information to determine the at least one change to the portion of the design of the user interface," Ciminelli paragraph 0093; "In some examples, step 512 may comprise implementing at least one change (such as the at least one change determined by step 512) to generate a modified design of a user interface (such as the user interface of step 502). In one example, implanting the at least one change may include encoding the modified design of the user interface and/or design elements of the modified design of the user interface and/or a layout of the modified design of the user interface in digital files (for example in an HTML format and/or a CSS format and/or a source code in a style sheet language and/or media files)," Ciminelli paragraph 0113).
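As an illustration of the claim 11 structure (first computing code for dynamic content, second computing code for dynamic UI controls, combined into the code for the first dynamic UI), consider the sketch below. The generate_* helpers stand in for LLM calls and are hypothetical, not drawn from either reference.

    # Illustrative sketch of claim 11: first code for dynamic content,
    # second code for dynamic UI controls, combined into one UI.
    def generate_content_code(scenario):
        channel, content, layout = scenario  # e.g., the highest scored scenario
        return '<section class="{0}"><h1>{1}</h1></section>'.format(layout, content)

    def generate_controls_code(scenario):
        # Controls echo Verma's toggles (AI Mode, Show Context).
        return '<nav><button>AI Mode</button><button>Show Context</button></nav>'

    def generate_dynamic_ui(scenario):
        first = generate_content_code(scenario)    # first computing code
        second = generate_controls_code(scenario)  # second computing code
        return first + "\n" + second               # code for the first dynamic UI

    print(generate_dynamic_ui(("web chat", "dispute form", "two-column")))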
As to claim 12, Verma discloses a method comprising:
identifying that a user is engaged in a use of a service of the service provider system with an agent assisting the user with the service, wherein the service has a corresponding service state during a user experience (UX) of the service by the user ("In addition, each UI form may have a sharing confidence index (e.g., a grouping of pending UI form text fields). Some conversational context blocks are not so clear and may activate multiple UI form text fields together in listening mode, e.g.: First party (agent): “May I know your account no. or phone no., please?” Second party (customer): “It is 989-034-7889.”," Verma column 7 lines 19-26, determining what questions the agent has asked and what questions the user has answered (i.e., determining a service state of the banking service being provided to the user));
determining user and agent data for the user and the agent, wherein the user and agent data is associated with real-time user and agent features and historical user and agent characteristics ("A party may identify a user, an agent, or a customer, for example, in a bank setting. Identifying each party is critical and there may be multi-party identification processes. For simplicity, however, consider a two-party identification process. For example, the first party may be a banking service agent, and the second party may be a customer. Speech recognition software may be used to create a banking service agent voice profile, which may be used to identify the first party ID in a conversation thread. Any speech other than the first party ID may then be considered for a second party ID," Verma column 7 lines 9-18; "In addition, each UI form may have a sharing confidence index (e.g., a grouping of pending UI form text fields). Some conversational context blocks are not so clear and may activate multiple UI form text fields together in listening mode, e.g.: First party (agent): “May I know your account no. or phone no., please?” Second party (customer): “It is 989-034-7889.”," Verma column 7 lines 19-26, determining what questions the agent has asked and what questions the user has answered (i.e., historical information));
computing, using a simulation engine based on the user and agent data and the service state, a UX scenario for a dynamic user interface (UI) displayable to at least one of the user or the agent ("In this case, the UI form may have two UI form text fields: (1) account no. and (2) phone no. These two UI form text fields may partially match a conversational block. Therefore, a context parsing rule in both fields may suggest looking for a 10-digit numeric value in the second party's response. Because the smart glasses UI form, via a conversation tracking application, may find that the response partially matches two UI form text fields, the smart glasses UI form may split a confidence index between these two AI supported UI form text fields (e.g., 50-50 for each UI form text field, though it may vary from case-to-case, e.g., 20-80/30-70/40-60, etc.)," Verma column 7 lines 27-38; "The AI powered contextual form 404 may include customer ID text field 406 (with 0% confidence index), account no. text field 408 (with 10% confidence index), email ID text field 410 (with 99% confidence index), and phone no. text field 412 (with 50% confidence index). A 99% confidence index may be considered a valid entry value. The UI form and its fields may toggle from one state to another based on the context of the conversation and keep their respective values updated. Confidence Index 402 measures how confident the system is about the currently identified value for a given text field," Verma column 16 lines 54-64; "In addition, a field state 414 may be displayed for each UI form text field. The field state 414 may include, for example, a color-coded or pattern-coded system for identifying conversational context. For example, one color or pattern may represent smart glasses are “actively listening to the conversation,” one color or pattern may represent smart glasses are “inactive, waiting for context,” one color or pattern may represent smart glasses when “context parsing completed, value assigned,” and another color or pattern may represent when smart glasses “found potentially relevant context, but confidence is low.”," Verma column 16 lines 65-67 and column 17 lines 1-8, determining a variety of confidence scenarios to fill out and determining color code scenarios for the UI);
generating, using the simulation engine based on the UX scenario, the dynamic UI and dynamic content displayable via the dynamic UI ("When the sensor detects the voice of the agent, the microprocessor may be operable to execute a conversation tracking application. The conversation tracking application may be configured to determine one or more conversations between the agent and the customer. The conversation tracking application may extract data directed to a data entry field within the UI. Further, the conversation tracking application may detect data while listening to a conversation, and, in response to the detection, identify a data segment from the data within the data entry field," Verma column 3 lines 39-48, modified UI is presented in the context of a conversation communication channel); and
causing the dynamic UI to be displayed on at least one of a first computing device of the user or a second computing device of the agent during the UX of the service by the user ("A smart glasses interface may display a potential UI form text field value in a side box and the user or agent may be prompted to obtain verification of UI form text field value. In some embodiments, the user or agent engages in UI form text field value verification steps," Verma column 10 lines 63-67, displaying the GUI to both the agent and the user).
However, Verma does not appear to explicitly disclose computing code for the dynamic UI and dynamic content displayable via the dynamic UI.
Ciminelli teaches computing code for the dynamic UI and dynamic content displayable via the dynamic UI ("In some examples, step 510 may comprise analyzing a textual input (such as textual data 102, the textual input received by step 508, the first textual input received by step 706, the textual input received by step 802, etc.) and/or a sketch (such as sketch data 104, the sketch received by step 508, the sketch received by step 706, the sketch received by step 802, etc.) to determine at least one change to a selected portion of a design of a user interface (for example, to the portion of the design of the user interface of step 508, to selected portion 110, to the entire design of the user interface, etc.). In one example, the at least one change may include at least one of adding an element to the user interface or removing an element from the user interface. Other non-limiting examples of such changes may include changing a size of an element of the user interface, changing a color of at least part of an element of the user interface, changing a font, changing a layout of elements of the user interface, changing a position of an element of the user interface, changing distance between two elements of the user interface, changing appearance of an element of the user interface, changing a level of details associated with at least part of the user interface, changing a timing of an event associated with an element of the user interface, and so forth. In one example, step 510 may use a machine learning model to analyze the textual input and/or the sketch and/or additional information to determine the at least one change to the portion of the design of the user interface," Ciminelli paragraph 0093; "In some examples, step 512 may comprise implementing at least one change (such as the at least one change determined by step 512) to generate a modified design of a user interface (such as the user interface of step 502). In one example, implanting the at least one change may include encoding the modified design of the user interface and/or design elements of the modified design of the user interface and/or a layout of the modified design of the user interface in digital files (for example in an HTML format and/or a CSS format and/or a source code in a style sheet language and/or media files)," Ciminelli paragraph 0113, generating code when updating a UI’s colors or content).
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Verma to generate and modify code when updating the UI, as taught by Ciminelli. One would have been motivated to make such a combination so that the UI modification could work with any service's UI application, thereby allowing the finished product to operate in a wider range of scenarios.
As to claim 13, Verma as modified by Ciminelli discloses the method of claim 12, wherein the computing code comprises at least one of data containers or data objects enabling rendering of the dynamic UI with at least one UI control usable to configure the dynamic UI when displayed by the at least one of the first computing device or the second computing device ("In an embodiment, AI Mode 418 may be toggled ON or OFF. AI Mode ON/OFF 418 is used to toggle the UI form between an AI enabled (AI Mode ON) and a regular mode (AI Mode OFF). Further, Show Context 416 made be toggled ON or OFF, as well. Show context ON 416 may help a first party (agent) know what is understood from a conversation," Verma column 17 lines 22-28, UI can be customized by toggling options).
As to claim 14, Verma as modified by Ciminelli discloses the method of claim 13, further comprising:
receiving a configuration of the dynamic UI based on input for the at least one UI control; and configuring a display of the dynamic UI from the configuration ("In an embodiment, AI Mode 418 may be toggled ON or OFF. AI Mode ON/OFF 418 is used to toggle the UI form between an AI enabled (AI Mode ON) and a regular mode (AI Mode OFF). Further, Show Context 416 made be toggled ON or OFF, as well. Show context ON 416 may help a first party (agent) know what is understood from a conversation," Verma column 17 lines 22-28, UI can be customized by toggling options).
As to claim 15, Verma as modified by Ciminelli discloses the method of claim 12, further comprising:
causing at least one other dynamic UI to be displayed with the dynamic UI for at least one of different content in the at least one other dynamic UI or at least one different communication channel usable with the at least one other dynamic UI (“Further, contextual clues and text may appear on the smart glasses UI form 504. For example, the agent wearing smart glasses 502 may receive a prompt that says, “Get Details,” 506. Based on this prompt, a context clue or text: “Context: Let me get your details,” 508 may appear on the smart glasses UI,” Verma column 17 lines 39-44, displaying context clues or text to the agent (i.e., an other dynamic UI)).
As to claim 16, Verma as modified by Ciminelli discloses the method of claim 12, further comprising:
receiving a navigation to the dynamic UI; and continuing the UX of the service on the dynamic UI based on the navigation (“In this case, the UI form may have two UI form text fields: (1) account no. and (2) phone no. These two UI form text fields may partially match a conversational block. Therefore, a context parsing rule in both fields may suggest looking for a 10-digit numeric value in the second party's response. Because the smart glasses UI form, via a conversation tracking application, may find that the response partially matches two UI form text fields, the smart glasses UI form may split a confidence index between these two AI supported UI form text fields (e.g., 50-50 for each UI form text field, though it may vary from case-to-case, e.g., 20-80/30-70/40-60, etc.). Later in the conversation data feed, the smart glasses UI form may identify an absolute value for the phone no. UI form text field, e.g.: First party (agent): “Where do you want us to update the status?” Second party (customer): “You can update me on my primary phone no. 343-809-5454.”,” Verma column 7 lines 27-45, navigating in the UI to fill out different fields based on the conversation).
As to claim 17, Verma as modified by Ciminelli discloses the method of claim 12, wherein the computing code for the dynamic UI is further generated based on a selected communication channel of a plurality of communication channels for display of the dynamic UI on the at least one of the first computing device or the second computing device ("When the sensor detects the voice of the agent, the microprocessor may be operable to execute a conversation tracking application. The conversation tracking application may be configured to determine one or more conversations between the agent and the customer. The conversation tracking application may extract data directed to a data entry field within the UI. Further, the conversation tracking application may detect data while listening to a conversation, and, in response to the detection, identify a data segment from the data within the data entry field," Verma column 3 lines 39-48, modified UI is presented in the context of a conversation communication channel).
As to claim 18, Verma as modified by Ciminelli discloses the method of claim 12, wherein the user and agent data comprises at least one of real-time time series events for the use of the service, real-time agent availability information of the agent, or user and agent characteristics for the user and the agent ("In addition, each UI form may have a sharing confidence index (e.g., a grouping of pending UI form text fields). Some conversational context blocks are not so clear and may activate multiple UI form text fields together in listening mode, e.g.: First party (agent): “May I know your account no. or phone no., please?” Second party (customer): “It is 989-034-7889.”," Verma column 7 lines 19-26, determining the things the user has answered (i.e., user and agent utterances and user and agent language characteristics) to change the UI).
As to claim 19, Verma discloses a non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising:
receiving an indication that a customer is being assisted by an agent during a user experience (UX) of a computing service by the customer ("A party may identify a user, an agent, or a customer, for example, in a bank setting. Identifying each party is critical and there may be multi-party identification processes. For simplicity, however, consider a two-party identification process. For example, the first party may be a banking service agent, and the second party may be a customer. Speech recognition software may be used to create a banking service agent voice profile, which may be used to identify the first party ID in a conversation thread. Any speech other than the first party ID may then be considered for a second party ID," Verma column 7 lines 9-18);
determining, based on the receiving the indication, customer data for the customer and agent data for the agent, wherein the customer data and the agent data are associated with real-time information and historical characteristics of the customer and the agent ("A party may identify a user, an agent, or a customer, for example, in a bank setting. Identifying each party is critical and there may be multi-party identification processes. For simplicity, however, consider a two-party identification process. For example, the first party may be a banking service agent, and the second party may be a customer. Speech recognition software may be used to create a banking service agent voice profile, which may be used to identify the first party ID in a conversation thread. Any speech other than the first party ID may then be considered for a second party ID," Verma column 7 lines 9-18; "In addition, each UI form may have a sharing confidence index (e.g., a grouping of pending UI form text fields). Some conversational context blocks are not so clear and may activate multiple UI form text fields together in listening mode, e.g.: First party (agent): “May I know your account no. or phone no., please?” Second party (customer): “It is 989-034-7889.”," Verma column 7 lines 19-26, determining what questions the agent has asked and what questions the user has answered (i.e., historical information));
identifying a plurality of service states of the computing service and a plurality of corresponding services available to the customer during the UX ("In addition, each UI form may have a sharing confidence index (e.g., a grouping of pending UI form text fields). Some conversational context blocks are not so clear and may activate multiple UI form text fields together in listening mode, e.g.: First party (agent): “May I know your account no. or phone no., please?” Second party (customer): “It is 989-034-7889.”," Verma column 7 lines 19-26, determining what questions the agent has asked and what questions the user has answered (i.e., determining a service state of the banking service being provided to the user));
determining a plurality of scenarios for assisting the customer with the UX of the computing service based on the customer data, the agent data, and the plurality of service states ("In this case, the UI form may have two UI form text fields: (1) account no. and (2) phone no. These two UI form text fields may partially match a conversational block. Therefore, a context parsing rule in both fields may suggest looking for a 10-digit numeric value in the second party's response. Because the smart glasses UI form, via a conversation tracking application, may find that the response partially matches two UI form text fields, the smart glasses UI form may split a confidence index between these two AI supported UI form text fields (e.g., 50-50 for each UI form text field, though it may vary from case-to-case, e.g., 20-80/30-70/40-60, etc.)," Verma column 7 lines 27-38; "The AI powered contextual form 404 may include customer ID text field 406 (with 0% confidence index), account no. text field 408 (with 10% confidence index), email ID text field 410 (with 99% confidence index), and phone no. text field 412 (with 50% confidence index). A 99% confidence index may be considered a valid entry value. The UI form and its fields may toggle from one state to another based on the context of the conversation and keep their respective values updated. Confidence Index 402 measures how confident the system is about the currently identified value for a given text field," Verma column 16 lines 54-64; "In addition, a field state 414 may be displayed for each UI form text field. The field state 414 may include, for example, a color-coded or pattern-coded system for identifying conversational context. For example, one color or pattern may represent smart glasses are “actively listening to the conversation,” one color or pattern may represent smart glasses are “inactive, waiting for context,” one color or pattern may represent smart glasses when “context parsing completed, value assigned,” and another color or pattern may represent when smart glasses “found potentially relevant context, but confidence is low.”," Verma column 16 lines 65-67 and column 17 lines 1-8, determining a variety of confidence scenarios to fill out and determining color code scenarios for the UI);
a plurality of dynamic user interfaces (UIs) each having dynamic content based on the plurality of scenarios ("In this case, the UI form may have two UI form text fields: (1) account no. and (2) phone no. These two UI form text fields may partially match a conversational block. Therefore, a context parsing rule in both fields may suggest looking for a 10-digit numeric value in the second party's response. Because the smart glasses UI form, via a conversation tracking application, may find that the response partially matches two UI form text fields, the smart glasses UI form may split a confidence index between these two AI supported UI form text fields (e.g., 50-50 for each UI form text field, though it may vary from case-to-case, e.g., 20-80/30-70/40-60, etc.)," Verma column 7 lines 27-38; "The AI powered contextual form 404 may include customer ID text field 406 (with 0% confidence index), account no. text field 408 (with 10% confidence index), email ID text field 410 (with 99% confidence index), and phone no. text field 412 (with 50% confidence index). A 99% confidence index may be considered a valid entry value. The UI form and its fields may toggle from one state to another based on the context of the conversation and keep their respective values updated. Confidence Index 402 measures how confident the system is about the currently identified value for a given text field," Verma column 16 lines 54-64; "In addition, a field state 414 may be displayed for each UI form text field. The field state 414 may include, for example, a color-coded or pattern-coded system for identifying conversational context. For example, one color or pattern may represent smart glasses are “actively listening to the conversation,” one color or pattern may represent smart glasses are “inactive, waiting for context,” one color or pattern may represent smart glasses when “context parsing completed, value assigned,” and another color or pattern may represent when smart glasses “found potentially relevant context, but confidence is low.”," Verma column 16 lines 65-67 and column 17 lines 1-8, determining a variety of confidence scenarios to fill out and determining color code scenarios for the UI);
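The color-coded or pattern-coded field states quoted above map naturally onto a small enumeration; the particular colors and thresholds below are hypothetical, since Verma leaves the coding scheme open:

```python
# Sketch of the four field states Verma describes, paired with
# hypothetical example colors (the reference does not fix the colors).
from enum import Enum

class FieldState(Enum):
    ACTIVELY_LISTENING = "green"       # "actively listening to the conversation"
    INACTIVE_WAITING = "gray"          # "inactive, waiting for context"
    VALUE_ASSIGNED = "blue"            # "context parsing completed, value assigned"
    LOW_CONFIDENCE_CONTEXT = "yellow"  # "found potentially relevant context, but confidence is low"

def state_for(confidence: float, listening: bool) -> FieldState:
    """Toggle a field's state from its current confidence index (rough sketch)."""
    if confidence >= 99.0:
        return FieldState.VALUE_ASSIGNED  # e.g. the 99% email ID field
    if 0.0 < confidence < 99.0:
        return FieldState.LOW_CONFIDENCE_CONTEXT
    return FieldState.ACTIVELY_LISTENING if listening else FieldState.INACTIVE_WAITING

print(state_for(99.0, listening=False).value)  # blue
print(state_for(50.0, listening=True).value)   # yellow
```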
providing options to select from and configure the plurality of dynamic UIs to at least one of the customer or the agent during the UX using the computing code ("A smart glasses interface may display a potential UI form text field value in a side box and the user or agent may be prompted to obtain verification of UI form text field value. In some embodiments, the user or agent engages in UI form text field value verification steps," Verma column 10 lines 63-67, displaying the GUI to both the agent and the user; "In an embodiment, AI Mode 418 may be toggled ON or OFF. AI Mode ON/OFF 418 is used to toggle the UI form between an AI enabled (AI Mode ON) and a regular mode (AI Mode OFF). Further, Show Context 416 made [sic] be toggled ON or OFF, as well. Show context ON 416 may help a first party (agent) know what is understood from a conversation," Verma column 17 lines 22-28, the UI can be customized by toggling options).
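The toggling options cited above (AI Mode 418 and Show Context 416) can be summarized as a small settings object; this is an illustrative sketch only, not the reference's implementation:

```python
# Sketch of the user-configurable toggles Verma describes: AI Mode ON/OFF
# switches the form between AI-enabled and regular modes, and Show Context
# ON lets the agent see what was understood from the conversation.
from dataclasses import dataclass

@dataclass
class FormSettings:
    ai_mode: bool = True        # AI Mode ON/OFF 418
    show_context: bool = False  # Show Context 416

    def toggle_ai_mode(self) -> None:
        self.ai_mode = not self.ai_mode

    def toggle_show_context(self) -> None:
        self.show_context = not self.show_context

settings = FormSettings()
settings.toggle_show_context()  # agent turns on the context display
print(settings)  # FormSettings(ai_mode=True, show_context=True)
```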
However, Verma does not appear to explicitly disclose determining computing code for a plurality of dynamic user interfaces (UIs).
Ciminelli teaches determining computing code for a plurality of dynamic user interfaces (UIs) ("In some examples, step 510 may comprise analyzing a textual input (such as textual data 102, the textual input received by step 508, the first textual input received by step 706, the textual input received by step 802, etc.) and/or a sketch (such as sketch data 104, the sketch received by step 508, the sketch received by step 706, the sketch received by step 802, etc.) to determine at least one change to a selected portion of a design of a user interface (for example, to the portion of the design of the user interface of step 508, to selected portion 110, to the entire design of the user interface, etc.). In one example, the at least one change may include at least one of adding an element to the user interface or removing an element from the user interface. Other non-limiting examples of such changes may include changing a size of an element of the user interface, changing a color of at least part of an element of the user interface, changing a font, changing a layout of elements of the user interface, changing a position of an element of the user interface, changing distance between two elements of the user interface, changing appearance of an element of the user interface, changing a level of details associated with at least part of the user interface, changing a timing of an event associated with an element of the user interface, and so forth. In one example, step 510 may use a machine learning model to analyze the textual input and/or the sketch and/or additional information to determine the at least one change to the portion of the design of the user interface," Ciminelli paragraph 0093; "In some examples, step 512 may comprise implementing at least one change (such as the at least one change determined by step 512) to generate a modified design of a user interface (such as the user interface of step 502). In one example, implanting [sic] the at least one change may include encoding the modified design of the user interface and/or design elements of the modified design of the user interface and/or a layout of the modified design of the user interface in digital files (for example in an HTML format and/or a CSS format and/or a source code in a style sheet language and/or media files)," Ciminelli paragraph 0113, generating code when updating a UI's colors or content).
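Ciminelli's implementing step, encoding a determined design change in HTML and/or CSS files, can be pictured with a short sketch; the UIChange structure and the CSS emitter below are illustrative assumptions, and the machine learning model that derives the change from a textual input or sketch is not reproduced:

```python
# Sketch of implementing a determined UI change by emitting CSS, in the
# spirit of Ciminelli paragraph 0113. The UIChange structure and the way
# a change is applied are hypothetical.
from dataclasses import dataclass

@dataclass
class UIChange:
    element_id: str
    property: str  # e.g. "color", "font-size"
    value: str     # e.g. "#0055aa", "16px"

def apply_changes(changes: list[UIChange]) -> str:
    """Encode the modified design of the user interface as a CSS style sheet."""
    rules = [f"#{c.element_id} {{ {c.property}: {c.value}; }}" for c in changes]
    return "\n".join(rules)

# E.g. "changing a color of at least part of an element of the user interface":
print(apply_changes([UIChange("submit-button", "color", "#0055aa")]))
# #submit-button { color: #0055aa; }
```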
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the non-transitory machine-readable medium of Verma to generate and modify computing code when updating the UI, as taught by Ciminelli. One would have been motivated to make such a combination so that the UI modifications could be applied to any service's UI application, thereby allowing the combined system to operate in a wider range of scenarios.
As to claim 20, Verma as modified by Ciminelli discloses the non-transitory machine-readable medium of claim 19, wherein the determining the plurality of scenarios and the determining the computing code are performed by a simulation engine comprising at least one machine learning (ML) model, and wherein the operations further comprise:
receiving a selection of one of the plurality of dynamic UIs with a configuration of the one of the plurality of dynamic UIs and corresponding content in the one of the plurality of dynamic UIs ("In an embodiment, AI Mode 418 may be toggled ON or OFF. AI Mode ON/OFF 418 is used to toggle the UI form between an AI enabled (AI Mode ON) and a regular mode (AI Mode OFF). Further, Show Context 416 made [sic] be toggled ON or OFF, as well. Show context ON 416 may help a first party (agent) know what is understood from a conversation," Verma column 17 lines 22-28, the UI can be customized by toggling options); and
rendering the one of the plurality of dynamic UIs on at least one device of at least one of the customer or the agent based on the selection and the configuration ("A smart glasses interface may display a potential UI form text field value in a side box and the user or agent may be prompted to obtain verification of UI form text field value. In some embodiments, the user or agent engages in UI form text field value verification steps," Verma column 10 lines 63-67).
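For completeness, the selection-and-rendering operations mapped above for claim 20 might be sketched as follows; the payload shapes and the render target are illustrative assumptions only:

```python
# Sketch of receiving a selection of one dynamic UI plus its configuration
# and rendering it on the customer's and/or agent's devices. The payload
# shape and render target are illustrative assumptions.

def render_selected_ui(dynamic_uis: dict[str, str], selection: str,
                       configuration: dict[str, bool], devices: list[str]) -> None:
    """Render the selected dynamic UI, applying the chosen configuration."""
    markup = dynamic_uis[selection]
    for device in devices:
        # A real system would push the configured UI to each device.
        print(f"rendering {selection!r} on {device} with config {configuration}: {markup}")

dynamic_uis = {"contextual_form": "<form>...</form>"}
render_selected_ui(dynamic_uis, "contextual_form",
                   {"ai_mode": True, "show_context": True},
                   ["agent smart glasses", "customer device"])
```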
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 20240386213 A1 to Ghoche et al. discloses a system and method for autonomous customer support chatbot agent with natural language workflow policies where a conversation between an agent and a customer is used to generate a graphical user interface;
US 20240378393 A1 to El Hattami et al. discloses virtual agent generation based on historical conversation data where a virtual agent GUI is modified based on past conversation data between a customer and an agent; and
US 20230410801 A1 to Mishra discloses targeted generative AI from merged communication transcripts.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL SAMWEL, whose telephone number is (313) 446-6549. The examiner can normally be reached Monday through Thursday, 8:00 AM-6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kieu Vu, can be reached at (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL SAMWEL/ Primary Examiner, Art Unit 2171