DETAILED ACTION
Introduction
This Final Office Action is in response to amendments and remarks filed on February 2, 2026, for the application with serial number 18/782,652.
Claims 1, 8, and 15 are amended.
Claims 1-20 are pending.
Response to Remarks/Amendments
35 USC §101 Rejections
The Applicant traverses the rejection of the present claims as being directed to an ineligible abstract idea, contending that the claims are subject matter eligible due to similarities with the claims at issue in McRO. See Remarks p. 11. In response, the Examiner points out that the claims in McRO were found to be eligible because they recited a method that was rooted in lip-syncing video technology. In contrast, the present claims recite steps for determining an attitude marker for an interaction that could be performed mentally or on paper by a human being. The claims are not rooted in computer technology or any other technology. Contrary to the Applicant’s assertions, the claims recite only generic computer components. The memory contained in a conventional computer could be said to constitute a “database,” as all computer systems require memory to function.
The Applicant further contends that the present claims are subject matter eligible because the claimed process is not a mental process. See Remarks p. 12. In response, the Examiner points to the rejection, below, which does not conclude that the claims fall within the category of mental processes. Therefore, the Applicant’s arguments in this regard are moot.
The Applicant further submits that the claimed hardware is not generic computer hardware. See Remarks p. 12. The Examiner respectfully disagrees, and reiterates the assertions provided in the paragraphs, above. The hardware recited in the claims is generic, and the “interconnected system” could merely be a conventional computer network. The claims merely recite the use of generic computer hardware to implement the abstract idea of determining an attitude marker for an interaction.
The rejection for lack of subject matter eligibility is updated and maintained.
35 USC §102/103 Rejections
Amendments to the claims changed the scope of the claims, necessitating further consideration of the prior art. The independent claims remain anticipated by the Sella reference, as indicated in the rejection, below.
The rejection of the dependent claims stands or falls with the rejection of the independent claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The Manual of Patent Examining Procedure (MPEP) provides detailed rules for determining subject matter eligibility for claims in §2106. Those rules provide a basis for the analysis and finding of ineligibility that follows.
Claims 1-20 are rejected under 35 U.S.C. 101. The claimed invention is directed to non-statutory subject matter because the claimed invention recites a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Although claims 1-20 are all directed to one of the four statutory categories of invention, the claims are directed to determining an attitude marker related to an interaction (as evidenced by exemplary independent claim 1; “determine the attitude marker corresponding to the sentiment query based on a result of the preliminary analysis received from the interaction analysis framework”), an abstract idea. Certain methods of organizing human activity are ineligible abstract ideas, including managing personal behavior or relationships or interactions between people. See MPEP §2106.04(a). The limitations of exemplary claim 1 include: “perform[ ] natural language processing;” “receive a sentiment query;” “cause conducting of a preliminary analysis;” “determine [an] attitude marker corresponding to the sentiment query;” and “cause transmission of the attitude marker.” These steps, when considered alone and in combination, are steps for managing personal behavior that are part of the abstract idea of determining an attitude marker related to an interaction. The dependent claims further recite steps for managing personal behavior that are part of the same abstract idea. These claim elements, when considered alone and in combination, are considered to be abstract ideas because they are directed to a method of organizing human activity that includes analyzing messages and customer interactions to determine customer sentiment.
Under step 2A of the subject matter eligibility analysis, a claim that recites a judicial exception must be evaluated to determine whether the claim provides a practical application of the judicial exception. The additional elements of the independent claims amount to generic computer hardware that does not provide a practical application (a computing system with a computing device having a processor, an end device, user devices, and databases in independent claims 1 and 8; and a computer readable medium with a user device and an end device in independent claim 15). See MPEP §2106.04(d)(I). The claims do not recite an improvement to another technology or technical field, nor do they recite an improvement to the functioning of the computer itself. See MPEP §2106.05(a). The claims require no more than a generic computer (a computing device with a processor, an end device, and user devices in independent claims 1 and 8; and a computer readable medium with a user device and an end device that executes instructions by a processor in a computing system in independent claim 15) to implement the abstract idea, which does not amount to significantly more than an abstract idea. See MPEP §2106.05(f). Because the claims only recite use of a generic computer, they do not apply the judicial exception with a particular machine. See MPEP §2106.05(b). For these reasons, the claims do not provide a practical application of the abstract idea, nor do they amount to significantly more than an abstract idea under step 2B of the subject matter eligibility analysis. Using a generic computer to implement an abstract idea does not provide an inventive concept. Therefore, the claims recite ineligible subject matter under 35 USC §101.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5-10, 12-15, and 17-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 20230129482 A1 to Sella et al. (hereinafter ‘SELLA’).
Claim 1 (Currently Amended)
SELLA discloses a computing system (see abstract and ¶[0030] and [0104] and Fig. 1; a variety of computing devices and network systems. A communication server associated with a platform) comprising:
a computing system comprising a configuration of interconnected components, the interconnected components of the computing system comprising a computing device having an insight analysis framework (see ¶[0083]; various components of connection management system 600 (e.g., message assessment engine 615 and/or an interaction management engine 625) can query message data store 620 to retrieve query-responsive messages, message metrics and/or message statistics), a first database (see ¶[0101]; updated database records), and a second database (see ¶[0093]; maintain data about previous communication exchanges, including topics associated with a client identifier. See again ¶[0101]; update database records);
the first database comprising engagement data and service analytics data (see ¶[0125]-[0126]; a database of messages);
the second database comprising user utility analytics data (see ¶[0093]; maintain data about previous communication exchanges, including topics associated with a client identifier);
the computing device comprising: a processor (see ¶[0043]; a network device with a processor);
and the insight analysis framework that is operably coupled to the processor, the insight analysis framework being an advanced learning model (see ¶[0129]; artificial intelligence or machine learning techniques may be applied to analyze messaging staff data, so as to generate insights and identify patterns, trends, and correlations between such factors regarding agent time, task, skill, and resulting customer satisfaction and other positive sentiment levels) capable of performing natural language processing (NLP) (see ¶[0133] and [0140]; analysis may use natural language understanding (NLU) algorithms and insights to identify specific intents/issues), wherein the insight analysis framework is to:
receive a sentiment query (see again ¶[0083]; various components of connection management system 600 (e.g., message assessment engine 615 and/or an interaction management engine 625) can query message data store 620 to retrieve query-responsive messages, message metrics and/or message statistics) for a service platform to determine an attitude marker associated with the service platform (see ¶[0036]; a transmitted message can include information about network device 105 (e.g., IP address, device type, and/or operating system), information about an associated user 110 (e.g., language spoken, duration of having interacted with client, skill level, sentiment, and/or topic preferences), a received communication, code (e.g., a clickable hyperlink) for generating and transmitting a communication to the network device 105, and/or an instruction to generate and transmit a communication to network device 105), wherein the attitude marker is to be determined based on insights regarding an attitude response towards the service platform received from a plurality of user devices currently connected or have previously connected with the service platform (see ¶[0102]; a dynamic sentiment parameter can be generated to represent a sentiment of messages, conversations, entities, agents, and so on. For example, in cases where the dynamic sentiment parameter indicates that the user is frustrated with the bot);
cause conducting of a preliminary analysis in response to receiving the sentiment query, wherein the preliminary analysis is conducted by an interaction analysis framework on the engagement data for the plurality of user devices, wherein the engagement data comprises a text-based interaction history between the service platform and one or more of the plurality of user devices (see ¶[0004]-[0005] and [0040]; messaging interactions such as email or text. See also ¶[0119]; a recent history of messages);
determine the attitude marker corresponding to the sentiment query (see ¶[0108] and [0129]; positive or negative sentiment in text. Identify patterns, trends, and correlations in agent time, task skill, customer satisfaction and sentiment levels) based on a result of the preliminary analysis received from the interaction analysis framework, the service analytics data for one or more of the plurality of user devices obtained from a first database, and the user utility analytics data for one or more of the plurality of user devices obtained from a second database, wherein the service analytics data comprises data derived from service-level interactions with the service platform (see again ¶[0004]-[0005] and [0040]; messaging interactions such as email or text. See also ¶[0119]; a recent history of messages); and the user utility analytics data comprises usage metrics associated with the service platform (see ¶[0044], [0093], and [0129]; a software agent or application may be installed on and/or executable on a depicted device, system, or server. In one instance, the software agent or application is configured such that various depicted elements can act in complementary manners. For example, a software agent on a device can be configured to collect and transmit data about device usage to a separate connection management system, and a software application on the separate connection management system can be configured to receive and process the data, such as an establishment time, a usage frequency, a date of last use, any channel constraints and/or supported types of communication, user or agent preferences or constraints (e.g., related to terminal-device selection, response latency, terminal-device consistency, agent expertise, and/or communication-type preference or constraint), and/or user or agent characteristics (e.g., age, language(s) spoken or preferred, geographical location, interests, and so on)); and
generate a transmission signal to cause transmission of the attitude marker to an end device identified to receive the attitude marker, the transmission signal comprising identification information associated with the end device (see ¶[0036] and [0081]; transmit a message to a selected terminal device that includes sentiment. Tag a message with sentiment. See also ¶[0033]; a destination address and identifier of a client).
Claim 2 (Original)
SELLA discloses the computing device as set forth in claim 1.
SELLA further discloses wherein the attitude marker is one or more of a final numeric score, a trend of the attitude response over a period of time, a root cause analysis of the attitude response, and an action plan to improve the attitude response (see ¶[0102] and [0109]; the message parameter can be a numerical value that indicates the high intensity of the negative polarity (e.g., a message parameter of 20 on a scale of 0-100, with lower numbers indicating a negative polarity and higher numbers indicating a positive polarity). Where the dynamic sentiment parameter indicates that the user is frustrated with the bot, the system can automatically switch the bot with a user device so that a live agent can communicate with the user. See also ¶[0026]; dynamic forecasting may be based on historical data regarding skills and results, as well as data science to identify patterns, trends, and corrections with granularity, as well as to make predictions).
Claim 3 (Original)
SELLA discloses the computing device as set forth in claim 1.
SELLA further discloses wherein the sentiment query comprises one of determining a trend of the attitude response over a period of time, determining a final numeric score, conducting a root cause analysis of the attitude response, determining an action plan to improve the attitude response (see ¶[0102] and [0109]; the message parameter can be a numerical value that indicates the high intensity of the negative polarity (e.g., a message parameter of 20 on a scale of 0-100, with lower numbers indicating a negative polarity and higher numbers indicating a positive polarity). Where the dynamic sentiment parameter indicates that the user is frustrated with the bot, the system can automatically switch the bot with a user device so that a live agent can communicate with the user. See also ¶[0026]; dynamic forecasting may be based on historical data regarding skills and results, as well as data science to identify patterns, trends, and corrections with granularity, as well as to make predictions),
wherein determining the trend of the attitude response, comprises identifying changes in trends of attitude response towards the service platform over the period of time (see ¶[0120] and [0129]; as network device 805 begins to use a different communication channel more frequently, communication server 820 can identify this changing trend and initiate communication sessions using the most used or most frequently used communication channel),
determining the final numeric score is based on an average of numeric scores over the period of time, and the final numeric score is indicative of attitude response (Examiner Note: the claim language only requires one of “determining a trend of the attitude response over a period of time, determining a final numeric score, conducting a root cause analysis of the attitude response, determining an action plan to improve the attitude response;” so a final numeric score is not required),
conducting the root cause analysis of the attitude response comprises determining a reason behind sentiment trends based on the preliminary analysis received from the interaction analysis framework, the service analytics data, and the user utility analytics data (Examiner Note: the claim language only requires one of “determining a trend of the attitude response over a period of time, determining a final numeric score, conducting a root cause analysis of the attitude response, determining an action plan to improve the attitude response;” root cause analysis is not required), and
determining an action plan comprises utilizing a determined trend of the attitude response, the final numeric score and the root cause of the attitude response to determine actions to improve attitude response towards the service platform (see ¶[0102] and [0109]; the message parameter can be a numerical value that indicates the high intensity of the negative polarity (e.g., a message parameter of 20 on a scale of 0-100, with lower numbers indicating a negative polarity and higher numbers indicating a positive polarity). Where the dynamic sentiment parameter indicates that the user is frustrated with the bot, the system can automatically switch the bot with a user device so that a live agent can communicate with the user).
Claim 5 (Original)
SELLA discloses the computing device as set forth in claim 1.
SELLA further discloses wherein the insight analysis framework is to process a text-based data and an object-based data to determine the attitude marker, the text-based data comprising user support interaction data, chat logs, email exchanges, notes and summaries taken during meetings with users and data from phone calls and data from in-person interactions (see ¶[0031]; The agent 120 can be an individual, such as a messaging or other type of messaging support agent tasked with providing support or information to the user 110 regarding the website or online service. See also ¶[0004]; voice or chat), and the object-based data comprising user support interaction data, chat logs, email exchanges, notes and summaries taken during meetings with users and data from phone calls and data from in-person interactions (see ¶[0031]; The agent 120 can be an individual, such as a messaging or other type of messaging support agent tasked with providing support or information to the user 110 regarding the website or online service. See also ¶[0004]; voice or chat. See also ¶[0093]; usage frequency, times, resolution stage, topics).
Claim 6 (Original)
SELLA discloses the computing device as set forth in claim 1.
SELLA further discloses wherein the service analytics data comprises at least one of information extracted from support ticket interactions (see ¶[0031]; support or information to a user of an online service), problem descriptions (see ¶[0081]; a topic can include a technical issue), resolutions (see ¶[0026]; resolutions), and response times (see ¶[0041] and [0098]; predicted response time and target response times).
Claim 7 (Original)
SELLA discloses the computing device as set forth in claim 1.
SELLA further discloses wherein the user utility analytics data comprises any of preferred feature data (see ¶[0036] and [0102]; topic preferences, preference information), duration of usage data (see ¶[0004]; duration as handle-time), metrics from work management, data derived from customer relationship management (CRM) interfaces, ticket resolution rate, frequency of user interactions (see ¶[0093]; usage frequency), and a speed of resolving tickets.
Claim 8 (Currently Amended)
SELLA discloses a method for insight analysis, the method comprising:
maintaining a computing system (see abstract and ¶[0030] and [0104] and Fig. 1; a variety of computing devices and network systems. A communication server associated with a platform) comprising a configuration of interconnected components, the interconnected components of the computing system comprising a computing device (see again ¶[0083]; various components of connection management system 600 (e.g., message assessment engine 615 and/or an interaction management engine 625) can query message data store 620 to retrieve query-responsive messages, message metrics and/or message statistics), a first database (see ¶[0101]; updated database records), and a second database (see ¶[0093]; maintain data about previous communication exchanges, including topics associated with a client identifier. See again ¶[0101]; update database records),
wherein the first database comprising engagement data and service analytics data (see ¶[0125]-[0126]; a database of messages),
wherein the second database comprising user utility analytics data (see ¶[0093]; maintain data about previous communication exchanges, including topics associated with a client identifier); and
wherein the computing device comprises an insight analysis framework (see ¶[0129]; artificial intelligence or machine learning techniques may be applied to analyze messaging staff data, so as to generate insights and identify patterns, trends, and correlations between such factors regarding agent time, task, skill, and resulting customer satisfaction and other positive sentiment levels);
receiving, at the computing device having the insight analysis framework, a sentiment query (see ¶[0083]; various components of connection management system 600 (e.g., message assessment engine 615 and/or an interaction management engine 625) can query message data store 620 to retrieve query-responsive messages, message metrics and/or message statistics) for a service platform to determine an attitude marker associated with the service platform (see ¶[0036]; a transmitted message can include information about network device 105 (e.g., IP address, device type, and/or operating system), information about an associated user 110 (e.g., language spoken, duration of having interacted with client, skill level, sentiment, and/or topic preferences), a received communication, code (e.g., a clickable hyperlink) for generating and transmitting a communication to the network device 105, and/or an instruction to generate and transmit a communication to network device 105), wherein the attitude marker is to be determined based on insights regarding an attitude response towards the service platform received from a plurality of user devices currently connected or have previously connected with the service platform (see ¶[0102]; a dynamic sentiment parameter can be generated to represent a sentiment of messages, conversations, entities, agents, and so on. For example, in cases where the dynamic sentiment parameter indicates that the user is frustrated with the bot);
causing to conduct a preliminary analysis in response to receiving the sentiment query, wherein the preliminary analysis is conducted by an interaction analysis framework on the engagement data for the plurality of user devices wherein the engagement data comprises a text-based interaction history between the service platform and one or more of the plurality of user devices (see ¶[0004]-[0005] and [0040]; messaging interactions such as email or text. See also ¶[0119]; a recent history of messages);
determining the attitude marker corresponding to the sentiment query (see ¶[0108] and [0129]; positive or negative sentiment in text. Identify patterns, trends, and correlations in agent time, task skill, customer satisfaction and sentiment levels) based on a result of the preliminary analysis received from the interaction analysis framework, the service analytics data for one or more of the plurality of user devices obtained from a first database, and the user utility analytics data for one or more of the plurality of user devices obtained from a second database, wherein the service analytics data comprises data derived from service-level interactions with the service platform (see again ¶[0004]-[0005] and [0040]; messaging interactions such as email or text. See also ¶[0119]; a recent history of messages), and the user utility analytics data comprises usage metrics associated with the service platform (see ¶[0044], [0093], and [0129]; a software agent or application may be installed on and/or executable on a depicted device, system, or server. In one instance, the software agent or application is configured such that various depicted elements can act in complementary manners. For example, a software agent on a device can be configured to collect and transmit data about device usage to a separate connection management system, and a software application on the separate connection management system can be configured to receive and process the data, such as an establishment time, a usage frequency, a date of last use, any channel constraints and/or supported types of communication, user or agent preferences or constraints (e.g., related to terminal-device selection, response latency, terminal-device consistency, agent expertise, and/or communication-type preference or constraint), and/or user or agent characteristics (e.g., age, language(s) spoken or preferred, geographical location, interests, and so on)); and
generating a transmission signal to cause transmission of the attitude marker to an end device identified to receive the attitude marker, the transmission signal comprising identification information associated with the end device (see ¶[0036] and [0081]; transmit a message to a selected terminal device that includes sentiment. Tag a message with sentiment. See also ¶[0033]; a destination address and identifier of a client).
Claim 9 (Original)
SELLA discloses the method as set forth in claim 8.
SELLA further discloses wherein the attitude marker is one or more of a final numeric score, a trend of the attitude response over a period of time, a root cause analysis of the attitude response, and an action plan to improve the attitude response (see ¶[0102] and [0109]; the message parameter can be a numerical value that indicates the high intensity of the negative polarity (e.g., a message parameter of 20 on a scale of 0-100, with lower numbers indicating a negative polarity and higher numbers indicating a positive polarity). Where the dynamic sentiment parameter indicates that the user is frustrated with the bot, the system can automatically switch the bot with a user device so that a live agent can communicate with the user. See also ¶[0026]; dynamic forecasting may be based on historical data regarding skills and results, as well as data science to identify patterns, trends, and corrections with granularity, as well as to make predictions).
Claim 10 (Original)
SELLA discloses the method as set forth in claim 8.
SELLA further discloses wherein the sentiment query comprises one of determining a trend of the attitude response over a period of time, determining a final numeric score, conducting a root cause analysis of the attitude response, determining an action plan to improve the attitude response (see ¶[0102] and [0109]; the message parameter can be a numerical value that indicates the high intensity of the negative polarity (e.g., a message parameter of 20 on a scale of 0-100, with lower numbers indicating a negative polarity and higher numbers indicating a positive polarity). Where the dynamic sentiment parameter indicates that the user is frustrated with the bot, the system can automatically switch the bot with a user device so that a live agent can communicate with the user. See also ¶[0026]; dynamic forecasting may be based on historical data regarding skills and results, as well as data science to identify patterns, trends, and corrections with granularity, as well as to make predictions),
wherein determining the trend of the attitude response, comprises identifying changes in trends of attitude response towards the service platform for the period of time (see ¶[0120] and [0129]; as network device 805 begins to use a different communication channel more frequently, communication server 820 can identify this changing trend and initiate communication sessions using the most used or most frequently used communication channel),
determining the final numeric score is based on an average of numeric scores over the period of time, and the final numeric score is indicative of attitude response (Examiner Note: the claim language only requires one of “determining a trend of the attitude response over a period of time, determining a final numeric score, conducting a root cause analysis of the attitude response, determining an action plan to improve the attitude response;” so a final numeric score is not required),
conducting the root cause analysis of the attitude response comprises determining a reason behind sentiment trends based on the preliminary analysis received from the interaction analysis framework, the service analytics data, and the user utility analytics data (Examiner Note: the claim language only requires one of “determining a trend of the attitude response over a period of time, determining a final numeric score, conducting a root cause analysis of the attitude response, determining an action plan to improve the attitude response;” root cause analysis is not required), and
determining an action plan comprises utilizing a determined trend of the attitude response, the final numeric score and the root cause of the attitude response to determine actions to improve attitude response towards the service platform (see ¶[0102] and [0109]; the message parameter can be a numerical value that indicates the high intensity of the negative polarity (e.g., a message parameter of 20 on a scale of 0-100, with lower numbers indicating a negative polarity and higher numbers indicating a positive polarity). Where the dynamic sentiment parameter indicates that the user is frustrated with the bot, the system can automatically switch the bot with a user device so that a live agent can communicate with the user).
Claim 12 (Original)
SELLA discloses the method as set forth in claim 8.
SELLA further discloses wherein the insight analysis framework is to process a text-based data and an object-based data to determine the attitude marker, wherein the text-based data comprising user support interaction data, chat logs, email exchanges, notes and summaries taken during meetings with users and data from phone calls and data from in-person interactions (see ¶[0031]; The agent 120 can be an individual, such as a messaging or other type of messaging support agent tasked with providing support or information to the user 110 regarding the website or online service. See also ¶[0004]; voice or chat), and the object-based data comprising user support interaction data, chat logs, email exchanges, notes and summaries taken during meetings with users and data from phone calls and data from in-person interactions (see ¶[0031]; The agent 120 can be an individual, such as a messaging or other type of messaging support agent tasked with providing support or information to the user 110 regarding the website or online service. See also ¶[0004]; voice or chat. See also ¶[0093]; usage frequency, times, resolution stage, topics).
Claim 13 (Original)
SELLA discloses the method as set forth in claim 8.
SELLA further discloses wherein the service analytics data comprises at least one of information extracted from support ticket interactions (see ¶[0031]; support or information to a user of an online service), problem descriptions (see ¶[0081]; a topic can include a technical issue), resolutions (see ¶[0026]; resolutions), and response times (see ¶[0041] and [0098]; predicted response time and target response times).
Claim 14 (Original)
SELLA discloses the method as set forth in claim 8.
SELLA further discloses wherein the user utility analytics data comprises any of preferred feature data (see ¶[0036] and [0102]; topic preferences, preference information), duration of usage data (see ¶[0004]; duration as handle-time), metrics from work management, data derived from customer relationship management (CRM) interfaces, ticket resolution rate, frequency of user interactions (see ¶[0093]; usage frequency), and a speed of resolving tickets.
Claim 15 (Currently Amended)
SELLA discloses a non-transitory computer-readable storage medium storing instructions for insight analysis (see ¶[0174]; non-transitory computer-readable storage media refer to any medium or media that participate in providing instructions to a central processing unit (CPU) for execution), the instructions being executable by a processor (see again ¶[0174]; non-transitory computer-readable storage media refer to any medium or media that participate in providing instructions to a central processing unit (CPU) for execution) to:
maintain a computing system comprising a configuration of interconnected components, the interconnected components of the computing system comprising a computing device (see abstract and ¶[0030] and [0104] and Fig. 1; a variety of computing devices and network systems. A communication server associated with a platform. See also ¶[0083]; various components of connection management system 600 (e.g., message assessment engine 615 and/or an interaction management engine 625) can query message data store 620 to retrieve query-responsive messages, message metrics and/or message statistics), a first database (see ¶[0101]; updated database records), and a second database (see ¶[0093]; maintain data about previous communication exchanges, including topics associated with a client identifier. See again ¶[0101]; update database records),
wherein the first database comprising engagement data and service analytics data (see ¶[0125]-[0126]; a database of messages), wherein the second database comprising user utility analytics data (see ¶[0093]; maintain data about previous communication exchanges, including topics associated with a client identifier), and wherein the computing device comprises an insight analysis framework (see again ¶[0083]; various components of connection management system 600 (e.g., message assessment engine 615 and/or an interaction management engine 625) can query message data store 620 to retrieve query-responsive messages, message metrics and/or message statistics);
receive, at the computing device having the insight analysis framework (see ¶[0129]; artificial intelligence or machine learning techniques may be applied to analyze messaging staff data, so as to generate insights and identify patterns, trends, and correlations between such factors regarding agent time, task, skill, and resulting customer satisfaction and other positive sentiment levels), a sentiment query (see ¶[0083]; various components of connection management system 600 (e.g., message assessment engine 615 and/or an interaction management engine 625) can query message data store 620 to retrieve query-responsive messages, message metrics and/or message statistics) for a service platform to determine an attitude marker associated with the service platform (see ¶[0036]; a transmitted message can include information about network device 105 (e.g., IP address, device type, and/or operating system), information about an associated user 110 (e.g., language spoken, duration of having interacted with client, skill level, sentiment, and/or topic preferences), a received communication, code (e.g., a clickable hyperlink) for generating and transmitting a communication to the network device 105, and/or an instruction to generate and transmit a communication to network device 105), wherein the attitude marker is to be determined based on insights regarding an attitude response towards the service platform received from a plurality of user devices currently connected or have previously connected with the service platform (see ¶[0102]; a dynamic sentiment parameter can be generated to represent a sentiment of messages, conversations, entities, agents, and so on. For example, in cases where the dynamic sentiment parameter indicates that the user is frustrated with the bot);
cause to conduct a preliminary analysis in response to receiving the sentiment query, wherein the preliminary analysis is conducted by an interaction analysis framework on the engagement data for the plurality of user devices, wherein the engagement data comprises a text-based interaction history between the service platform and one or more of the plurality of user devices (see ¶[0004]-[0005] and [0040]; messaging interactions such as email or text. See also ¶[0119]; a recent history of messages);
determine the attitude marker corresponding to the sentiment query (see ¶[0108] and [0129]; positive or negative sentiment in text. Identify patterns, trends, and correlations in agent time, task skill, customer satisfaction and sentiment levels) based on a result of the preliminary analysis received from the interaction analysis framework, the service analytics data for one or more of the plurality of user devices obtained from a first database, and the user utility analytics data for one or more of the plurality of user devices obtained from a second database, wherein the service analytics data comprises data derived from service-level interactions with the service platform (see again ¶[0004]-[0005] and [0040]; messaging interactions such as email or text. See also ¶[0119]; a recent history of messages), and the user utility analytics data comprises usage metrics associated with the service platform (see ¶[0044], [0093] and [0129]; A software agent or application may be installed on and/or executable on a depicted device, system or server. In one instance, the software agent or application is configured such that various depicted elements can act in complementary manners. For example, a software agent on a device can be configured to collect and transmit data about device usage to a separate connection management system, and a software application on the separate connection management system can be configured to receive and process the data. An establishment time, a usage frequency, a date of last use, any channel constraints and/or supported types of communication, user or agent preferences or constraints (e.g., related to terminal-device selection, response latency, terminal-device consistency, agent expertise, and/or communication-type preference or constraint), and/or user or agent characteristics (e.g., age, language(s) spoken or preferred, geographical location, interests, and so on)); and
generate a transmission signal to cause transmission of the attitude marker to an end device identified to receive the attitude marker, the transmission signal comprising identification information associated with the end device (see ¶[0036] and [0081]; transmit a message to a selected terminal device that includes sentiment. Tag a message with sentiment. See also ¶[0033]; a destination address and identifier of a client).
Claim 17 (Original)
SELLA discloses the non-transitory computer-readable storage medium as set forth in claim 15.
SELLA further discloses wherein the instructions are executable by a processor to: determine one of a trend of the attitude response over a period of time and a final numeric score; conduct a root cause analysis of the attitude response; and determine an action plan to improve the attitude response (see ¶[0102] and [0109]; the message parameter can be a numerical value that indicates the high intensity of the negative polarity (e.g., a message parameter of 20 on a scale of 0-100, with lower numbers indicating a negative polarity and higher numbers indicating a positive polarity). Where the dynamic sentiment parameter indicates that the user is frustrated with the bot, the system can automatically switch the bot with a user device so that a live agent can communicate with the user. See also ¶[0026]; dynamic forecasting may be based on historical data regarding skills and results, as well as data science to identify patterns, trends, and correlations with granularity, as well as to make predictions.),
wherein determining the trend of the attitude response, comprises identifying changes in trends of attitude response towards the service platform for the period of time (see ¶[0120] and [0129]; as network device 805 begins to use a different communication channel more frequently, communication server 820 can identify this changing trend and initiate communication sessions using the most used or most frequently used communication channel),
determining the final numeric score is based on an average of numeric scores over the period of time, and the final numeric score is indicative of attitude response (Examiner Note: the claim language only requires one of “determining a trend of the attitude response over a period of time, determining a final numeric score, conducting a root cause analysis of the attitude response, determining an action plan to improve the attitude response;” so a final numeric score is not required),
conducting the root cause analysis of the attitude response comprises determining a reason behind sentiment trends based on the preliminary analysis received from the interaction analysis framework, the service analytics data, and the user utility analytics data (Examiner Note: the claim language only requires one of “determining a trend of the attitude response over a period of time, determining a final numeric score, conducting a root cause analysis of the attitude response, determining an action plan to improve the attitude response;” root cause analysis is not required), and
determining an action plan comprises utilizing a determined trend of the attitude response, the final numeric score and the root cause of the attitude response to determine actions to improve attitude response towards the service platform (see ¶[0102] and [0109]; the message parameter can be a numerical value that indicates the high intensity of the negative polarity (e.g., a message parameter of 20 on a scale of 0-100, with lower numbers indicating a negative polarity and higher numbers indicating a positive polarity). Where the dynamic sentiment parameter indicates that the user is frustrated with the bot, the system can automatically switch the bot with a user device so that a live agent can communicate with the user.).
Claim 18 (Original)
SELLA discloses the non-transitory computer-readable storage medium as set forth in claim 15.
SELLA further discloses wherein the attitude marker is one or more of a final numeric score, a trend of the attitude response over a period of time, a root cause analysis of the attitude response, and an action plan to improve the attitude response (see ¶[0102] and [0109]; the message parameter can be a numerical value that indicates the high intensity of the negative polarity (e.g., a message parameter of 20 on a scale of 0-100, with lower numbers indicating a negative polarity and higher numbers indicating a positive polarity). Where the dynamic sentiment parameter indicates that the user is frustrated with the bot, the system can automatically switch the bot with a user device so that a live agent can communicate with the user. See also ¶[0026]; dynamic forecasting may be based on historical data regarding skills and results, as well as data science to identify patterns, trends, and correlations with granularity, as well as to make predictions.).
Claim 19 (Original)
SELLA discloses the non-transitory computer-readable storage medium as set forth in claim 15.
SELLA further discloses wherein the insight analysis framework is to process a text-based data and an object-based data to determine the attitude marker, wherein the text-based data comprising user support interaction data, chat logs, email exchanges, notes and summaries taken during meetings with users and data from phone calls and data from in-person interactions (see ¶[0031]; The agent 120 can be an individual, such as a messaging or other type of messaging support agent tasked with providing support or information to the user 110 regarding the website or online service. See also ¶[0004]; voice or chat), and the object-based data comprising user support interaction data, chat logs, email exchanges, notes and summaries taken during meetings with users and data from phone calls and data from in-person interactions (see ¶[0031]; The agent 120 can be an individual, such as a messaging or other type of messaging support agent tasked with providing support or information to the user 110 regarding the website or online service. See also ¶[0004]; voice or chat. See also ¶[0093]; usage frequency, times, resolution stage, topics).
Claim 20 (Original)
SELLA discloses the non-transitory computer-readable storage medium as set forth in claim 15.
SELLA further discloses wherein the service analytics data comprises any of information extracted from support ticket interactions (see ¶[0031]; support or information to a user of an online service), problem descriptions (see ¶[0081]; a topic can include a technical issue), resolutions (see ¶[0026]; resolutions), and response times (see ¶[0041] and [0098]; predicted response time and target response times); and
wherein the user utility analytics data comprises any of preferred feature data (see ¶[0036] and [0102]; topic preferences, preference information), duration of usage data (see ¶[0004]; duration as handle-time), metrics from work management, data derived from customer relationship management (CRM) interfaces, ticket resolution rate, frequency of user interactions (see ¶[0093]; usage frequency), and a speed of resolving tickets.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4, 11, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over US 20230129482 A1 to Sella et al. as applied to claims 1, 8, and 15 above, and further in view of US 20240232539 A1 to Venkateshwaran et al. (hereinafter ‘VENKATESHWARAN’).
Claim 4 (Original)
SELLA discloses the computing device as set forth in claim 1.
SELLA does not specifically disclose, but VENKATESHWARAN discloses, wherein the insight analysis framework is to cause the interaction analysis framework to conduct the preliminary analysis on the engagement data, wherein the interaction analysis framework is a large-language model (see ¶[0041]; extract insights from text documents using large language models such as BERT. See also ¶[0138]; the model is trained on customer data).
SELLA discloses dynamic analytics that includes conducting semantic analysis using natural language understanding (see ¶[0133]). VENKATESHWARAN discloses insight extraction using semantic search that includes large language modeling. It would have been obvious to employ the large language model as taught by VENKATESHWARAN in the system executing the method of SELLA with the motivation to perform semantic analysis on text using natural language processing.
Claim 11 (Original)
SELLA discloses the method as set forth in claim 8.
SELLA does not specifically disclose, but VENKATESHWARAN discloses, wherein the insight analysis framework is to cause the interaction analysis framework to conduct the preliminary analysis on the engagement data, wherein the interaction analysis framework is a large-language model (see ¶[0041]; extract insights from text documents using large language models such as BERT. See also ¶[0138]; the model is trained on customer data).
SELLA discloses dynamic analytics that includes conducting semantic analysis using natural language understanding (see ¶[0133]). VENKATESHWARAN discloses insight extraction using semantic search that includes large language modeling. It would have been obvious to employ the large language model as taught by VENKATESHWARAN in the system executing the method of SELLA with the motivation to perform semantic analysis on text using natural language processing.
Claim 16 (Original)
SELLA discloses the non-transitory computer-readable storage medium as set forth in claim 15.
SELLA does not specifically disclose, but VENKATESHWARAN discloses, wherein the instructions are executable by a processor to cause the interaction analysis framework to conduct the preliminary analysis on the engagement data, wherein the interaction analysis framework is a large-language model (see ¶[0041]; extract insights from text documents using large language models such as BERT. See also ¶[0138]; the model is trained on customer data).
SELLA discloses dynamic analytics that includes conducting semantic analysis using natural language understanding (see ¶[0133]). VENKATESHWARAN discloses insight extraction using semantic search that includes large language modeling. It would have been obvious to employ the large language model as taught by VENKATESHWARAN in the system executing the method of SELLA with the motivation to perform semantic analysis on text using natural language processing.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD N SCHEUNEMANN whose telephone number is (571)270-7947. The examiner can normally be reached M-F 9am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson can be reached at 571-270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RICHARD N SCHEUNEMANN/Primary Examiner, Art Unit 3624