DETAILED ACTION
Acknowledgement
This final office action is in response to the amendment filed on 10/13/2025.
Status of Claims
Claims 2, 5, 12, and 17 have been canceled.
Claims 1, 3, 6, 11, 13, 16, and 18 have been amended.
Claims 21-24 have been added.
Claims 1, 3-4, 6-11, 13-16, and 18-24 are now pending.
Response to Arguments
Applicant's arguments filed on 10/13/2025 regarding the 35 U.S.C. 101 and 103 rejections of the claims have been fully considered. The Applicant argues the following:
(1) As per the 101 rejection, the Applicant argues, in summary, that (i) the claims are not directed to an abstract idea. The amended claims are not directed to methods of organizing human activity or to mental processes, but to systems and methods that increase first call resolution and improve customer satisfaction and that include machine learning and vector generation; (ii) the claims as a whole integrate the alleged abstract idea into a practical application because the recited system, steps, and non-transitory computer-readable media solve technological problems in contact center systems. The claims improve first call resolution by automating root cause analysis, reduce repeat interactions using predictive modeling, enhance agent performance via KPI feedback and gamification, and transform raw interaction data into actionable insights using RNNs and hierarchical machine learning; and (iii) the claim features amount to significantly more than the abstract idea. The combination of elements of "transforming the history of the customer with the contact center...into a single vector; providing the single vector to a source classification model; and automatically determining, by the source classification model, that a source of the repeat interaction is a customer-related factor,..." is not well-understood, routine, or conventional activity in the field of classifying and resolving repeat interactions.
The Examiner respectfully disagrees with all arguments. As per argument (i), the Examiner maintains the position that amended claims 1, 11, and 16 are directed to the abstract grouping of Mental Processes based on the abstract limitations listed in Step 2A(1). These abstract limitations describe a process of receiving and analyzing customer, agent, and contact center data via modelling to determine causes of repeat customer interactions, which can be practically performed in the human mind via observation, evaluation, and judgment with pen and paper (e.g. root cause analysis). Using models to analyze data and transforming data into vectors (vectorization, i.e., converting data into numerical arrays) can practically be done manually and mentally with pen and paper. Mental Processes include claims directed to collecting information, analyzing it, and displaying certain results of the collection and analysis, even if they are claimed as being performed on a computer. Dependent claims 3-4 and 22-24 further describe the response to the cause of the repeat customer interaction, which includes “assigning training to the first agent and reconnecting the customer to the first agent”. These are steps that manage and direct the agent’s personal behavior and also the interaction between a customer and an agent. Certain Methods of Organizing Human Activity encompasses managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. Per MPEP 2106.04(a), a claim recites a judicial exception when the judicial exception is “set forth” or “described” in the claim.
As per arguments (ii) and (iii), the Examiner maintains the position that the additional elements recited in the claims and listed in Steps 2A(2) and 2B do not integrate the abstract idea into a practical application nor provide significantly more, because the additional elements do not improve the functioning of a computer, improve another technology or technical field, or provide a technical solution to a technical problem. These additional elements are viewed as mere instructions to implement an abstract idea on a computer and merely indicate a field of use or technological environment in which to apply the abstract idea. Applying an abstract idea on a computer and/or generally linking the use of the abstract idea to a particular technological environment does not integrate a judicial exception into a practical application or provide an inventive concept (see MPEP 2106.05(f) and (h)). The improvements argued by the Applicant are in first call resolution and repeat interactions, which are considered abstract and non-technical. Per MPEP 2106.05(a), an improvement in the abstract concept itself (e.g. first call resolution, repeat interaction, agent performance, data transformation) is not an improvement in technology.
The features argued by the Applicant to be significantly more than the abstract idea are indeed the abstract idea itself. As per MPEP 2106.05, an inventive concept cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself. An "inventive concept" is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception. For these reasons, the 35 U.S.C. 101 rejection is maintained.
(2) As per the 103 rejections, the Applicant argues, in summary, that (i) Pham fails to teach vectorization of the history of the customer with the contact center… (previous claims 5, 12, and 17); (ii) Pham fails to classify the source of the repeat interaction as either a customer-related factor, an agent-related factor, or a contact center-related factor (previous claim 2); and (iii) Revanur does not remedy the deficiencies of Pham and does not rank reasons for a repeat interaction.
The Examiner respectfully disagrees with all arguments. The Examiner submits that based on the broadest reasonable interpretation of the claims, Pham in combination with Revanur teach all of the limitations of amended claims 1, 11, and 16 as shown in the updated mappings below.
As per argument (i), Pham teaches producing a unit vector (i.e. single vector) of content data in paragraph [0151]. The content data includes agent and contact center information [0110].
As per argument (ii), Pham teaches identifying repeat interactions in paragraphs [0022] and [0045] and identifying interaction driver identifiers from repeat interactions in Fig. 8 and paragraphs [0144] and [0237]. Drivers/reasons such as "forgot password," "report fraud," "the weather," "children," and "covid-19" reflect customer-related factors. The Fig. 8 identifiers reflect customer- and contact center-related factors.
As per argument (iii), Pham alone teaches vectorization of content data and classifying the source of repeat interactions; therefore, the teaching of Revanur is not required for these limitations. However, for the claim 1 limitation of “automatically ranking, by a reason ranking model, based on the determined source of the repeat interaction, one or more reasons for the repeat interaction”, the Examiner submits that Pham in combination with Revanur teaches this limitation as shown below in the claim 1 rejection. Pham teaches ranking reasons for repeat interactions. Pham does not teach ranking by a ranking model; however, Revanur teaches ranking contact center queries by a ranking model. In this instance, Revanur’s teaching remedies the deficiency of Pham’s teaching. Therefore, the 35 U.S.C. 103 rejection is maintained for independent claims 1, 11, and 16.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-4, 6-11, 13-16, and 18-24 are rejected under 35 U.S.C. 101 because the claimed invention, “Effortless Customer Contact and Increased First Call Resolution System and Methods”, is directed to abstract ideas, specifically Mental Processes and Certain Methods of Organizing Human Activity, without significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, individually or in combination, provide mere instructions to implement the abstract idea on a computer.
Step 1: Claims 1, 3-4, 6-11, 13-16, and 18-24 are directed to a statutory category, namely a machine (claims 1, 3-4, 6-10, and 21-22), a process (claims 11, 13-15, and 23), and a manufacture (claims 16, 18-20, and 24).
Step 2A (1): Independent claims 1, 11, and 16 are directed to an abstract idea of Mental Processes, based on the following claim limitations: “receiving,…, a repeat interaction from a customer after a first interaction with a first agent; determining, from the repeat interaction, a history of the customer with the contact center, historical statistics of the first agent, skill statistics of the first agent, and contact center information on the first interaction; transforming the history of the customer with the contact center, the historical statistics of the first agent, the skill statistics of the first agent, and the contact center information on the first interaction into a single vector; providing the single vector to a source classification model; automatically determining, by the source classification model, that a source of the repeat interaction is a customer-related factor, an agent-related factor, or a contact center-related factor; automatically ranking, by a reason ranking model, based on the determined source of the repeat interaction, one or more reasons for the repeat interaction; and performing an action during the repeat interaction that corresponds to the one or more reasons for the repeat interaction to improve customer satisfaction.” These claim limitations describe a process of receiving and analyzing customer, agent, and contact center data via modelling to determine causes of repeat customer interactions, which can be practically performed in the human mind via observation, evaluation, and judgment with pen and paper (e.g. root cause analysis). Dependent claims 3-4, 6-10, 13-15, and 18-21 further describe the process of the data analysis (e.g. transforming data to vectors, determining probability, ranking), modelling (e.g. training models, evaluating/verifying accuracy), identification of the cause (e.g. customer, agent, or contact center), and the response to the cause (e.g. agent training, reconnecting customer to agent, etc.) 
with the limitations of “wherein the source of the repeat interaction is determined to be an agent-related factor, and the operations further comprise…modifying a repeat interaction key performance indicator (KPI) of the first agent, or both (claim 3); determining the repeated interaction was initiated within the reconnection buffer period (claim 4); wherein transforming the history of the customer with the contact center, the historical statistics of the first agent, the skill statistics of the first agent, and the contact center information on the first interaction into a source classification model into a single vector comprises: concatenating and vectorizing the history of the customer with the contact center, the historical statistics of the first agent, and the skill statistics of the first agent to produce a sequence of concatenated vectors; providing the sequence of concatenated vectors… to produce a vector; vectorizing the contact center information on the first interaction; and concatenating the vector to the vectorized contact center information to produce the single vector (claims 6, 13, and 18); wherein automatically determining a source of the repeat interaction comprises: outputting, by the source classification model, a probability for each source of a plurality of sources, wherein the probability indicates a likelihood that each source is a cause of the repeat interaction; determining which source has a probability greater than a threshold probability; and determining that each source having a probability greater than the threshold probability is a source of the repeat interaction (claims 7, 14, and 19); wherein the operations further comprise training the source classification model and the reason ranking model (claims 8, 15, and 20); wherein training the source classification model and the reasons ranking model comprises evaluating an accuracy of the source classification model and the reason ranking model until the accuracy reaches a threshold 
value (claim 9); wherein the operations further comprise periodically verifying an accuracy of the source classification model and the reason ranking model (claim 10); wherein the historical statistics of the first agent comprise, for a relevant agent skill, a percentage of incomplete communication, a percentage of lacking skills or proficiency, and a percentage of incorrect information on previous interactions (claim 21); wherein the performed action comprises… modifying a repeat interaction key performance indicator (KPI) of the first agent…(claims 22-24). These are steps that a human person can perform during a root cause analysis procedure, a model training and validation procedure, and an agent/contact center performance evaluation procedure. Creating vectors (vectorization) involves converting data into numerical arrays, and model training involves fitting a particular model to a dataset, which can practically be done manually and mentally with pen and paper. Dependent claims 3-4 and 22-24 further describe the response to the cause of the repeat customer interaction with the limitations of “…the operations further comprise assigning training to the first agent…(claim 3); wherein the operations further comprise: opening a reconnection buffer period after the first interaction;… and reconnecting the customer to the first agent (claim 4); wherein the performed action comprises assigning training to the first agent,…, or reconnecting the customer to the first agent (claims 22-24)”, which are considered Certain Methods of Organizing Human Activity, as the assigning and reconnecting steps manage and direct the agent’s personal behavior and the interaction between a customer and an agent. 
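For illustration only — this sketch is not part of the claims or the prosecution record, and the source names, function name, and the 0.5 threshold value are all hypothetical — the probability-threshold determination recited in claims 7, 14, and 19 reduces to a simple comparison of each source probability against a threshold, consistent with the Examiner's characterization of the step as an evaluation practically performable with pen and paper:

```python
def determine_sources(source_probabilities, threshold=0.5):
    """Return every candidate source whose probability exceeds the threshold.

    Illustrative only: the source labels and the 0.5 threshold are
    hypothetical, not taken from the claims or the specification.
    """
    return [source for source, probability in source_probabilities.items()
            if probability > threshold]

# Hypothetical output of a source classification model for one repeat interaction.
probabilities = {"customer": 0.72, "agent": 0.15, "contact center": 0.61}
sources = determine_sources(probabilities)
```

Under these hypothetical values, both "customer" and "contact center" exceed the threshold, so both would be determined to be sources of the repeat interaction, as the claims contemplate.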
Therefore, these limitations, under the broadest reasonable interpretation, fall within the abstract groupings of Mental Processes, which include concepts performed in the human mind such as observations, evaluations, judgments, and opinions, and Certain Methods of Organizing Human Activity, which encompasses managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. Mental Processes include claims directed to collecting information, analyzing it, and displaying certain results of the collection and analysis, even if they are claimed as being performed on a computer. The courts have found that claims requiring a generic computer or nominally reciting a generic computer may still recite a mental process even though the claim limitations are not performed entirely in the human mind. Certain Methods of Organizing Human Activity can encompass the activity of a single person (e.g. a person following a set of instructions), activity that involves multiple people (e.g. a commercial interaction), and certain activity between a person and a computer (e.g. a method of anonymous loan shopping). Therefore, claims 1, 3-4, 6-11, 13-16, and 18-24 recite an abstract idea.
Step 2A (2): This judicial exception is not integrated into a practical application. In particular, claims 1, 6, 11, 13, 16, and 18 recite additional elements of “a system comprising a processor and a non-transitory computer readable medium operably coupled thereto, the computer readable medium comprising a plurality of instructions stored in association therewith that are accessible to, and executable by, the processor, to perform operations; a contact center (claims 1, 11, and 16); a recurrent neural network (claims 6, 13, and 18); and a non-transitory computer-readable medium having stored thereon computer-readable instructions executable by a processor to perform operations (claim 16)”. These additional elements do not integrate the abstract idea into a practical application because the claims do not recite (a) an improvement to another technology or technical field, (b) an improvement to the functioning of the computer itself, (c) implementation of the abstract idea with or by use of a particular machine, (d) a particular transformation or reduction of an article, or (e) application of the judicial exception in some other meaningful way beyond generally linking the use of an abstract idea to a particular technological environment. These additional elements, evaluated individually and in combination, are viewed as computing devices that are used to perform the abstract process of receiving and analyzing customer, agent, and contact center data via modelling to determine causes of repeat customer interactions and perform an action during the repeat interaction to improve customer satisfaction. Limitations that recite mere instructions to implement an abstract idea on a computer, or that merely use a computer as a tool to perform an abstract idea, are not indicative of integration into a practical application (see MPEP 2106.05(f)). Also, limitations that amount to merely indicating a field of use or technological environment (e.g. 
contact center) in which to apply a judicial exception do not amount to significantly more than the exception itself and cannot integrate a judicial exception into a practical application (see MPEP 2106.05(h)). Therefore, claims 1, 3-4, 6-11, 13-16, and 18-24 do not include, individually or in combination, additional elements that integrate the judicial exception into a practical application, and thus the claims are not patent eligible.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Claims 1, 6, 11, 13, 16, and 18 recite additional elements of “a system comprising a processor and a non-transitory computer readable medium operably coupled thereto, the computer readable medium comprising a plurality of instructions stored in association therewith that are accessible to, and executable by, the processor, to perform operations; a contact center (claims 1, 11, and 16); a recurrent neural network (claims 6, 13, and 18); and a non-transitory computer-readable medium having stored thereon computer-readable instructions executable by a processor to perform operations (claim 16)”. These additional elements are viewed as mere instructions to implement an abstract idea on a computer and merely indicate a field of use or technological environment in which to apply a judicial exception. Applying an abstract idea on a computer does not integrate a judicial exception into a practical application or provide an inventive concept (see MPEP 2106.05(f)). Therefore, claims 1, 3-4, 6-11, 13-16, and 18-24 do not include, individually or in combination, additional elements that are sufficient to amount to significantly more than the judicial exception and thus are not patent eligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 6-11, 13-16, 18-20, and 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over Pham et al. (US 2024/0168918 A1) in view of Revanur et al. (US 2020/0137231 A1).
As per claims 1, 11, and 16 (Currently Amended), Pham teaches a classification and resolution system comprising: a processor and a non-transitory computer readable medium operably coupled thereto, the non-transitory computer readable medium comprising a plurality of instructions stored in association therewith that are accessible to, and executable by, the processor, to perform operations which comprise (Pham e.g. The invention relates to systems and methods that automatically classify, segment, filter, and display alphanumeric content data generated during a user-provider interaction through the use of artificial intelligence and natural language processing technology [0001]. The results of the analysis in turn allow for identification of system and service problems and the implementation of system enhancements (Abstract). Fig. 1 hardware system 100 configuration according to one embodiment generally includes a user 110 that benefits through use of services and products offered by a provider through an enterprise system 200 [0085]. The storage device 124 includes at least one of a non-transitory storage medium for long-term, intermediate-term, and short-term storage of computer-readable instructions 126 for execution by the processor 120 [0087].):
Pham teaches receiving, by a contact center, a repeat interaction from a customer after a first interaction with a first agent; (Pham e.g. The embodiments discussed in this specification are described with reference to systems and methods utilized in a call center environment where provider personnel are engaging in shared experiences and performing customer service activities [0084]. The provider system can be configured to generate content data manually or to obtain content data from a third party source, such as a cloud storage service or remote database [0106]. Content data is generated from a transcript of a written or verbal interactive exchange between conversation participants or "content sources." Examples of content data include, but are not limited to, an exchange of instant chat messages between two or more participants or recorded audio data generated during a telephone call (e.g., a consumer support request or help desk call), or a video conference [0082]. Provider-user interactions generally commence when a user initiates contact with a provider by telephone or written electronic communication (e.g., email, SMS text message, an instant chat message, or a social media message) [0108]. In one particular use-case, the system can display data from user-provider interactions involving a user that has initiated multiple interactions with the provider, for the same or similar reasons, within a limited time frame. For instance, identifying a user who made multiple calls to technical support in a six week time period [0020]. In this manner, the provider is able to identify subjects or interaction drivers that required multiple contacts between a user and a provider and, therefore, could represent problems or issues that are difficult to address [0020].)
Pham teaches determining, from the repeat interaction, a history of the customer with the contact center, historical statistics of the first agent, skill statistics of the first agent, and contact center information on the first interaction; (Pham e.g. To further analyze instances of multiple shared experiences, the content data files stored to an interaction database can include a user identifier. The provider network computing device performs the operation of identifying interaction database records having matching user identifiers-i.e., two interactions involving the same customer or end user [0021]. The system generates a repeat driver set that is made up of those interaction database records within the repeat interaction set that have matching interaction driver identifiers. In other words, the final data set includes shared experience records where a user called multiple times (i.e., matching user identifiers) for the same reason (i.e., matching interaction driver identifiers) [0022]. In other cases, the subject classification analysis can utilize data from prior shared experiences involving the same user. If the user has contacted a provider multiple times to request an electronic transfer, that data may factor into the potential subject identifiers or interaction driver identifiers [0045]. The system can include an interaction database having interaction database records that include content data files and interaction driver identifiers generated from prior shared experiences between a provider and a plurality of end users [0045]. The network computing device retrieves the interaction database records from the interaction database, and utilizes the interaction database records to generate interaction driver identifiers [0045]. The content data is stored to a database on the provider system or to a remote storage location. 
The content data is stored as content data files that include the substance of an exchange of communications along with content metadata [0206]. The content data is stored to a relational database that maintains the content data in a manner that permits the content data files to be associated with certain information, such as one or more subject identifiers and content metadata [0109]. Content metadata can include, for example: (i) sequencing data representing the date and time when the content data was created or otherwise representing an order or sequence in which a shared experience reflected in the content data occurred relative to other shared experiences; (ii) subject identifier data that characterizes the subjects or topics addressed within the content data (e.g., "technical support" or "new product launch demonstration"); (iii) interaction driver identifier data, which can be a subset or subcategory of subject identifier data, and that identifies the reasons why a shared experience was initiated (i.e., the reason a customer initiated the interaction can be, and typically is, a subject or topic addressed within the content data); (iv) weighting data representing the relative importance of subject identifiers through, for example, an analysis of the frequency of communication elements contributing to the subject identifier; (v) content source identifier data that identifies one or more participants to the interaction, which can include a name, an affiliated employer or business, or a job title or role and can further comprise agent identifier data or user identifier data that identifies an agent or customer by name or identification number; (vi) provider identifier data that identifies the owner of the content data; (vii) user source data, such as a telephone number, email address, or user device IP Address; (viii) sentiment data, including sentiment identifiers; (ix) polarity data indicating the relative positive or negative degree of sentiment occurring during 
a shared experience; (x) resolution data indicating whether a particular user issue was resolved or not, and if so, how the issue was resolved (e.g., the issue is a user forgot a password, and the resolution was a password reset); (xi) an agent identifier indicating the provider agent that participated in the shared experience; or (xii) other types of data useful for provider service to a user or processing content data [0110]. Agent attribute data can include, without limitation: ...(ii) an agent identifier,...(iv) agent service line identifier data indicating a provider department, branch, or division to which an agent is assigned; (v) an agent role designation (e.g., junior agent, senior agent, supervisor, etc.); (vi) agent experience data indicating the duration of professional experience an agent has in one or more relevant roles or in working for a provider (e.g., 2 years' experience in new account creation or 5 years and 2 months working for the provider overall); and (vii) agent training data indicating particular certifications, products, or services that an agent is trained to handle (e.g., an agent is qualified to provide technical support for a provider mobile application, or the agent is qualified to offer advice concerning a particular product or service) [0124].)
Pham teaches transforming the history of the customer with the contact center, the historical statistics of the first agent, the skill statistics of the first agent, and the contact center information on the first interaction into a single vector; providing the single vector to a source classification model; (Pham e.g. The operations include passing to a software application service called a content driver software service (for example), content data files [0004]. The content data files can be alphanumeric text transcriptions of audio discussions between a user and a provider agent or virtual agent or records of written communication between a user and a provider (e.g., emails, SMS text messages, instant chat messages, or messages sent over a social media platform) [0004]. The content driver software service executes a subject classification analysis using the concentrated content data [0008]. The content driver software service receives a plurality of interaction database records and content parameter data that includes sequencing identifiers [0010]. The content driver software service can be implemented with a neural network that executes the subject classification analysis [0014]. The subject classification analysis can be implemented using supervised learning techniques that require training the neural networks with labeled, known training data. The neural network can be a recurrent neural network having a long short-term memory neural network architecture [0016]. In addition to content data, the subject classification analysis can process end user data, content metadata, system configuration data, navigation data, and other useful data and information [0214]. In another embodiment, the content data files are generated by recording telephonic communications between a user and an agent and converting the recorded telephonic communications to alphanumeric content data [0023]. 
Captured audio data is stored to the provider system and transcribed into alphanumeric text data using a speech-to-text software application and stored as content data files comprising content data [0108]. The content driver software service processes the content data using natural language processing technology that is implemented by one or more artificial intelligence software applications and systems [0127]. The content data is first pre-processed using a reduction analysis to create reduced content data [0135]. Following a reduction analysis, the reduced content data is vectorized to map the alphanumeric text into a vector form [0138]. One approach to vectorizing content data includes applying "bag-of-words" modeling [0138]. The content data is, thus, turned into a bag-of-words that includes integer values and the number of times the integers occur in content data. The bag-of-words is turned into a unit vector (i.e. single vector), where all the occurrences are normalized to the overall length [0151]. A similar analysis can be performed on vectors created through other processing, such as K-means clustering or techniques that generate vectors where each word in the vector is replaced with a probability that the word represents a subject identifier or request driver data [0151]. The content driver software service can also use term frequency-inverse document frequency software processing techniques to vectorize the content data and generate weighting data that weights words or particular subjects [0156].)
Pham teaches automatically determining, by the source classification model, that a source of the repeat interaction is a customer-related factor, an agent-related factor, or a contact center-related factor; (Pham e.g. The provider network computing device executes a subject identification analysis using the concentrated content data (or using the content data files if no concentration analysis is used) to generate an interaction driver identifier or subject identifier for the content data files [0042]. The subject classification analysis can utilize data from prior shared experiences involving the same user. If the user has contacted a provider multiple times to request an electronic transfer, that data may factor into the potential subject identifiers or interaction driver identifiers [0045]. The subject classification analysis relies on NPL and artificial intelligence technology to identify subjects or topics within the content data. The subject analysis also determines interaction driver identifiers for each of the content data files [0008]. The interaction identifiers are a subcategory of subject identifiers that focus on characterizing the reasons an end user initiated a shared experience (e.g., to purchase a new service, seek technical support, or ask for assistance in rendering a service) [0008]. The network computing device retrieves the interaction database records from the interaction database, and utilizes the interaction database records to generate interaction driver identifiers [0045]. In one particular use-case, the system can display data from user-provider interactions involving a user that has initiated multiple interactions with the provider, for the same or similar reasons, within a limited time frame. For instance, identifying a user who made multiple calls to technical support in a six-week time period [0020]. 
The provider is able to identify subjects or interaction drivers that required multiple contacts between a user and a provider and, therefore, could represent problems or issues that are difficult to address [0020]. The content data is analyzed using natural language processing techniques that are implemented by artificial intelligence technology. The resulting outputs can include, without limitation: (i) the identities of conversation participants or "content sources;" (ii) a list of subjects addressed within the content data and that identify the reasons or "driver" for why a customer initiated a shared experience; (iii) weighting data showing the relative importance or engagement associated with certain subjects; and (iv) frequency data defining the proportion of shared experiences that relate to a particular subject identifier or driver for a support request [0083]. The subject classification analysis determines one or more subject identifiers that reflect subjects or topics addressed in the content data. The subject identifiers can be interaction driver identifiers, which are the reasons why a user initiated a shared experience. Unlike conventional systems, the present system is capable of efficiently and accurately identifying multiple subject or interaction driver identifiers [0208]. The system captures and analyzes information automatically and does not depend on provider agents or other personnel to take the time to select an input option [0212]. FIG. 8 is a first example of an Interaction Graphical User Interface according to one embodiment that displays aspects of analyzed content data [0068]. The provider network computing device can return subject identifiers (or interaction driver identifiers) along with subject proportion data and/or subject weighting data. 
The proportion data or subject weighting data is displayed on the Interaction GUI by displaying each subject identifier with a relative size according to the subject proportion data or weighting data, as illustrated in FIGS. 8 and 10 [0237]. For instance, a provider may find that the interaction driver identifiers indicate that customer support calls concerning a provider mobile software application jump from one time period to the next for users attempting to execute an electronic transfer. This in turn indicates that the provider's mobile application electronic transfer function is not operating correctly. The provider has an opportunity to investigate and resolve any problems relating to the electronic transfers [0024]. The provider can adjust resource availability for users, such as increasing staffing of agents that are prepared to render assistance with the mobile software application, or requiring agent training on the mobile software application electronic transfer feature [0025]. The content data generated from an ongoing user-provider interaction is monitored and analyzed in near real-time. FIG. 11 illustrates a process for ongoing monitoring that begins with determining a user identity and capturing ongoing content data [0219]. A provider can establish certain alert conditions based on pre-determined thresholds, such as generating an interaction alert if sentiment polarity data drops below a given polarity threshold (i.e. customer-related) [0219]. The interaction alert can include agent attribute data relating to the agent engaged in the ongoing shared experience, such as data on the agent training and experience (i.e. agent related) [0221]. The primary agent might use the agent attribute data to determine that an agent lacks certain training or experience that would be beneficial in applying toward a shared experience, such as training on a particular product, technical feature, or service. 
The primary agent can then communicate relevant information to the agent engaged in the shared experience in an attempt to close any experience, training, or knowledge gaps [0221].)
Pham teaches automatically ranking,…, based on the determined source of the repeat interaction, one or more reasons for the repeat interaction; and (Pham e.g. The invention relates to systems and methods that automatically classify, segment, filter, and display alphanumeric content data generated during a user-provider interaction through the use of artificial intelligence and natural language processing technology [0001]. The content driver software service can be implemented with a neural network that executes the subject classification analysis. The neural network performs operations that implement a Kmeans clustering analysis to execute the subject classification analysis [0014]. A subject is then represented by a specified number of words or phrases having the highest probabilities (i.e., the words with the five highest probabilities), or the subject is represented by text data having probabilities above a pre-determined subject probability threshold [0145]. The clustering analysis yields a group of words or communication elements associated with each cluster, which can be referred to as subject vectors. Subjects may each include one or more subject vectors where each subject vector includes one or more identified communication elements (i.e., keywords, phrases, symbols, etc.) within the content data as well as a frequency of the one or more communication elements within the content data [0147]. The content driver software service can be configured to perform an additional concentration analysis following the clustering analysis that selects a pre-defined number of communication elements from each cluster to generate a descriptor set, such as the five or ten words having the highest weights in terms of frequency of appearance (or in terms of the probability that the words or phrases represent the true subject when neural networking architecture is used). 
In one embodiment, the descriptor sets were analyzed to determine if the reasons driving a customer support request were identified by the descriptor set subject identifiers [0147]. The software model was evaluated according to three categories, including a "good match" where the support request reason(s) were identified by the top words in the subject vector (i.e., the words with the highest weight or frequency), a "moderate" match where the support request reason(s) were identified by the second tier of words in the subject vector (i.e., words six to ten), and a "poor" match where, for instance, the top words in a subject vector did not match or identify the reasons the support request was initiated [0148]. The subject classification analysis can specifically identify one or more interaction driver identifiers that are the reason why a user initiated a shared experience or support service request [0144]. An interaction driver identifier can be determined by, for example, first determining the subject identifiers having the highest weight quantifiers (e.g., frequencies or probabilities) and comparing such subject identifiers against a database of known interaction driver identifiers. To illustrate, the subject identifiers from a shared experience having the five (5) highest frequencies or probabilities might include "forgot password," "report fraud," "the weather," "children," and "covid-19." The provider system compares the top five subject identifiers against a list of known interaction driver identifiers that includes "forgot password" and "report fraud" as a known support driver but not "weather," "children," and "covid-19." In that instance, the provider system identifies the two support drivers as being "forgot password" and "report fraud." [0144].)
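For illustration only, the driver-identification procedure described in the cited [0144] passage, ranking subject identifiers by weight and comparing the top candidates against a database of known interaction drivers, can be sketched as follows. This is an editor's hypothetical sketch; the weights shown are illustrative, while the example identifiers are taken from [0144].

```python
def rank_interaction_drivers(subject_weights, known_drivers, top_n=5):
    # Rank subject identifiers by weight quantifier (frequency or probability),
    # take the top-N, and keep only those found in the known-driver database.
    ranked = sorted(subject_weights, key=subject_weights.get, reverse=True)[:top_n]
    return [s for s in ranked if s in known_drivers]

weights = {"forgot password": 0.31, "report fraud": 0.24, "the weather": 0.12,
           "children": 0.08, "covid-19": 0.05}
drivers = rank_interaction_drivers(weights, {"forgot password", "report fraud"})
# drivers retains weight order: ["forgot password", "report fraud"]
```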
Pham teaches performing an action during the repeat interaction that corresponds to the one or more reasons for the repeat interaction to improve customer satisfaction. (Pham e.g. The systems disclosed herein can also be used for real-time monitoring of shared experiences. That is, the provider system captures content data from an ongoing interaction between a user and a provider and determines the subjects being addressed during the interaction, the interaction drivers, and the sentiment identifiers [0038]. The data resulting from the above-described analyses can be used to identify problems with a provider's system and to implement potential enhancements [0024]. For instance, a provider may find that the interaction driver identifiers indicate that customer support calls concerning a provider mobile software application jump from one time period to the next for users attempting to execute an electronic transfer. This in turn indicates that the provider's mobile application electronic transfer function is not operating correctly. The provider has an opportunity to investigate and resolve any problems relating to the electronic transfers [0024]. The content data generated from an ongoing user-provider interaction is monitored and analyzed in near real-time. FIG. 11 illustrates a process for ongoing monitoring that begins with determining a user identity and capturing ongoing content data [0219]. A provider can establish certain alert conditions based on pre-determined thresholds, such as generating an interaction alert if sentiment polarity data drops below a given polarity threshold [0219]. For example, if the sentiment falls below or above a specified threshold, such as the sentiment becoming too negative or the end user or agent becoming too frustrated, an alert can be routed to an agent computing device. The agent computing device receiving the alert can be a primary agent, such as an agent supervisor or manager, that has the ability to assist [0038]. 
The alert routed to the agent computing device can include data about the customer or the ongoing interaction, such as subject identifiers, sentiment identifiers, agent data (e.g., name, experience level of the agent), and customer data (e.g., name of the customer, the types of products or accounts held with the provider, if any, or location of the customer) [0039]. This could in turn allow a provider primary agent to intervene or redirect a support request involving a particular product of concern that is reflected in the subject identifiers or involving a product for which the provider wants to increase sales [0222].)
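For illustration only, the alert-condition logic described in the cited [0219] and [0221] passages, generating an interaction alert carrying agent attribute data when sentiment polarity drops below a pre-determined threshold, can be sketched as follows. This is an editor's hypothetical sketch; the function name, dictionary keys, and threshold value are illustrative.

```python
def maybe_alert(sentiment_polarity, agent_attributes, polarity_threshold=-0.5):
    # Generate an interaction alert when sentiment polarity drops below the
    # pre-determined polarity threshold; the alert carries agent attribute
    # data so a primary agent can spot training or experience gaps.
    if sentiment_polarity < polarity_threshold:
        return {"type": "interaction alert",
                "polarity": sentiment_polarity,
                "agent": agent_attributes}
    return None

alert = maybe_alert(-0.8, {"name": "A. Agent", "experience_years": 1})
```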
Pham does not explicitly teach ranking by a ranking model, however, Revanur teaches using a trained ranking model to rank categories of contact center queries (Revanur e.g. Revanur teaches a computer system that routes contact center interactions. Interactions between contact center agents and contact center queries that are received at a contact center are monitored (Abstract). A ranking model is trained according to the categories of the contact center queries and the interaction scores of each handled query using machine learning (Abstract). A ranking model is trained according to the categories of the contact center queries, one or more selected business outcomes, and the interaction scores of each agent for each handled query using machine learning [0010]. The ranking model is tested according to various metrics in order to gauge the performance of the ranking model [0010]. Testing module 120 may test the efficacy of a ranking model in order to ensure that the ranking model properly ranks agents according to desired business outcomes [0020]. The machine learning ranking model is trained and tested at operation 330 by generating interaction scores [0044]. Model generating module 115 may train the ranking model using conventional or other machine learning techniques. In some embodiments, the ranking model is trained using a machine learning approach based on matrix factorization, restricted Boltzmann machines, and/or singular value decomposition [0044]. Once model generating module 115 trains the ranking model, testing module 120 may test the model [0044].)
The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Pham’s classification analysis model with Revanur’s ranking model in order to analyze and route interactions more effectively (Revanur e.g. [0053]).
As per claim 3 (Currently Amended), Pham in view of Revanur teach the classification and resolution system of claim 1, Pham teaches wherein the source of the repeat interaction is determined to be an agent-related factor, and the operations further comprise assigning training to the first agent, modifying a repeat interaction key performance indicator (KPI) of the first agent, or both. (Pham e.g. For instance, a provider may find that the interaction driver identifiers indicate that customer support calls concerning a provider mobile software application jump from one time period to the next for users attempting to execute an electronic transfer. This in turn indicates that the provider's mobile application electronic transfer function is not operating correctly. The provider has an opportunity to investigate and resolve any problems relating to the electronic transfers [0024]. A provider can also modify its interactive voice response ("IVR") software, which outputs automated options for selection by a user, to include an option for users requiring assistance with the provider mobile application electronic transfer function. Thus, the IVR software is modified to reflect the interaction driver identifiers [0025]. The provider can adjust resource availability for users, such as increasing staffing of agents that are prepared to render assistance with the mobile software application, or requiring agent training on the mobile software application electronic transfer feature [0025].)
As per claims 6, 13, and 18 (Currently Amended), Pham in view of Revanur teach the classification and resolution system of claim 1, the method of claim 11, and the non-transitory computer-readable medium of claim 16, wherein transforming the history of the customer with the contact center, the historical statistics of the first agent, the skill statistics of the first agent, and the contact center information on the first interaction into a single vector comprises:
Pham teaches concatenating and vectorizing the history of the customer with the contact center, the historical statistics of the first agent, and the skill statistics of the first agent to produce a sequence of concatenated vectors; (Pham e.g. The present invention provides systems and methods that automate the process of characterizing a user-provider interaction by converting the interaction to an alphanumeric content data format and using artificial intelligence and natural language processing ("NPL") technology to generate subject matter identifiers and sentiment identifiers that characterize the interaction [0003]. The content data files can further include sequencing data, such as times, dates, or other information evidencing the sequence in which a provider-user interaction reflected in the content data files occurred [0005]. The content driver software service receives a plurality of interaction database records and content parameter data that includes sequencing identifiers. The sequencing identifiers each represent a sequence range, such as a time period over which subject identifier data, interaction driver identifiers, or sentiment identifiers are determined and displayed [0010]. The system can include an interaction database having interaction database records that include content data files and interaction driver identifiers generated from prior shared experiences between a provider and a plurality of end users [0045]. The content data is first pre-processed using a reduction analysis to create reduced content data [0135]. Following a reduction analysis, the reduced content data is vectorized to map the alphanumeric text into a vector form [0138]. One approach to vectorizing content data includes applying "bag-of-words" modeling [0138]. 
The content data is stored to a relational database that maintains the content data in a manner that permits the content data files to be associated with certain information, such as one or more subject identifiers and content metadata [0109]. Content metadata can include, for example: (i) sequencing data representing the date and time when the content data was created or otherwise representing an order or sequence in which a shared experience reflected in the content data occurred relative to other shared experiences;…, etc. [0110].)
Pham teaches providing the sequence of concatenated vectors to a recurrent neural network to produce a vector; vectorizing the contact center information on the first interaction; and (Pham e.g. The present invention provides systems and methods that automate the process of characterizing a user-provider interaction by converting the interaction to an alphanumeric content data format and using artificial intelligence and natural language processing ("NPL") technology to generate subject matter identifiers and sentiment identifiers that characterize the interaction [0003]. The subject classification analysis can be implemented using supervised learning techniques that require training the neural networks with labeled, known training data. The neural network can be a recurrent neural network having a long short-term memory neural network architecture [0016]. FIG. 4 is a diagram of a Recurrent Neural Network RNN, according to at least one embodiment, utilized in machine learning [0064]. An RNN may allow for analysis of sequences of inputs rather than only considering the current input data set [0184]. To implement natural language processing technology, suitable neural network architectures can include, without limitation:...(iv) recurrent neural networks; (v) Long Short-Term Memory ("LSTM") network architecture; (vi) Bidirectional Long Short-Term Memory network architecture, which is an improvement upon LSTM by analyzing word, or communication element, sequences in forward and backward directions; [0197]. The provider system then performs a subject classification analysis by processing the content data using NPL and artificial intelligence software processing techniques that are implemented using neural networks [0208]. The subject classification analysis can be implemented by neural networks that execute unsupervised learning software processing techniques that do not require substantial volumes of known and labeled training data [0213].)
Pham teaches concatenating the vector to the vectorized contact center information to produce the single vector. (Pham e.g. The content driver software service can be implemented with a neural network that executes the subject classification analysis [0014]. The subject classification analysis can be implemented using supervised learning techniques that require training the neural networks with labeled, known training data. The neural network can be a recurrent neural network having a long short-term memory neural network architecture [0016]. FIG. 4 is a diagram of a Recurrent Neural Network RNN, according to at least one embodiment, utilized in machine learning [0064]. The training set content data is then fed to the content driver software service neural networks to identify subjects, content sources, or sentiments and the corresponding probabilities [0169]. An RNN may allow for analysis of sequences of inputs rather than only considering the current input data set [0184]. The provider system then performs a subject classification analysis by processing the content data using NPL and artificial intelligence software processing techniques that are implemented using neural networks [0208]. The subject classification analysis determines one or more subject identifiers that reflect subjects or topics addressed in the content data. The subject identifiers can be interaction driver identifiers, which are the reasons why a user initiated a shared experience. Unlike conventional systems, the present system is capable of efficiently and accurately identifying multiple subject or interaction driver identifiers [0208].)
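For illustration only, the claimed sequence-to-single-vector pipeline (a recurrent network summarizing the sequence of concatenated vectors, with the result concatenated to the vectorized contact center information) can be sketched with a toy single-unit recurrence as follows. This is an editor's hypothetical sketch, not the implementation of either reference; real systems would use a full RNN/LSTM with learned weight matrices, and all names and values here are illustrative.

```python
import math

def summarize_history(sequence_of_vectors, contact_center_vec, w_in=0.5, w_rec=0.3):
    # Toy single-unit recurrent step: each concatenated vector (customer
    # history, agent historical statistics, agent skill statistics) updates
    # a hidden state that summarizes the sequence.
    h = 0.0
    for vec in sequence_of_vectors:
        h = math.tanh(w_in * sum(vec) / len(vec) + w_rec * h)
    # Concatenate the recurrent output with the vectorized contact center
    # information to produce the single input vector.
    return [h] + list(contact_center_vec)

history = [[0.2, 0.4], [0.1, 0.3]]          # per-interaction concatenated vectors
single_vector = summarize_history(history, [0.7, 0.1])
```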
As per claims 7, 14, and 19 (Original), Pham in view of Revanur teach the classification and resolution system of claim 1, the method of claim 11, and the non-transitory computer-readable medium of claim 16, Pham teaches wherein automatically determining a source of the repeat interaction comprises: outputting, by the source classification model, a probability for each source of a plurality of sources, wherein the probability indicates a likelihood that each source is a cause of the repeat interaction; determining which source has a probability greater than a threshold probability; and determining that each source having a probability greater than the threshold probability is a source of the repeat interaction. (Pham e.g. An interaction driver identifier can be determined by, for example, first determining the subject identifiers having the highest weight quantifiers (e.g., frequencies or probabilities) and comparing such subject identifiers against a database of known interaction driver identifiers [0144]. To illustrate, the subject identifiers from a shared experience having the five (5) highest frequencies or probabilities might include "forgot password," "report fraud," "the weather," "children," and "covid-19." [0144]. The provider system compares the top five subject identifiers against a list of known interaction driver identifiers that includes "forgot password" and "report fraud" as a known support driver but not "weather," "children," and "covid-19." In that instance, the provider system identifies the two support drivers as being "forgot password" and "report fraud." [0144]. In one embodiment, the subject classification analysis is performed on the content data using a Latent Dirichlet Allocation analysis to identify subject data that includes one or more subject identifiers (e.g., topics addressed in the underlying content data) [0145]. 
Performing the LDA analysis on the reduced content data may include transforming the content data into an array of text data representing key words or phrases that represent a subject (e.g., a bag-of-words array) and determining the one or more subjects through analysis of the array. Each cell in the array can represent the probability that given text data relates to a subject [0145]. A subject is then represented by a specified number of words or phrases having the highest probabilities (i.e., the words with the five highest probabilities), or the subject is represented by text data having probabilities above a pre-determined subject probability threshold [0145].)
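For illustration only, the claimed threshold step, outputting a probability per candidate source and treating every source above the threshold as a cause of the repeat interaction, can be sketched as follows. This is an editor's hypothetical sketch; the probabilities and threshold value are illustrative.

```python
def sources_above_threshold(source_probs, threshold=0.4):
    # Every candidate source whose model-output probability exceeds the
    # threshold probability is treated as a source of the repeat interaction.
    return [s for s, p in source_probs.items() if p > threshold]

probs = {"customer-related": 0.15, "agent-related": 0.62,
         "contact-center-related": 0.48}
causes = sources_above_threshold(probs)
```

Note that, as claimed, more than one source can exceed the threshold, so the determination is multi-label rather than a single argmax.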
As per claims 8, 15, and 20 (Original), Pham in view of Revanur teach the classification and resolution system of claim 1, the method of claim 11, and the non-transitory computer-readable medium of claim 16, wherein the operations and method further comprise training the source classification model and the reason ranking model.
Pham teaches training the source classification model (Pham e.g. Disclosed are systems and methods that automate the process of analyzing interactive content data using artificial intelligence and natural language processing technology to generate subject matter identifiers and sentiment identifiers that characterize the interaction represented by the content data (Abstract). The content driver software service can be implemented with a neural network that executes the subject classification analysis. The neural network performs operations that implement a Kmeans clustering analysis to execute the subject classification analysis [0014]. There are various types of neural network architectures that can be used to implement a clustering analysis, and in particular, implement a Kmeans clustering analysis [0015]. The subject classification analysis can be implemented using supervised learning techniques that require training the neural networks with labeled, known training data. The neural network can be a recurrent neural network having a long short-term memory neural network architecture [0016]. With regard to neural network training, the system can implement supervised learning by performing a labeling analysis on a training set of content data files to generate annotated content data files [0017]. That is, the content data files are labeled to ascertain known subjects, interaction drivers, sentiments, or sentiment polarity, or other information [0017]. The training subject, interaction driver, sentiment, or polarity classification identifiers are compared against the annotated training set content data files to generate an error rate. The weights of the neural network node formulas (i.e., network parameters) of the neural network are adjusted so as to reduce the error rate [0017]. In this manner, the neural network is trained to optimize the parameters that implement the subject classification, sentiment, or other analyses [0017]. FIG. 
6 is a flow chart representing a method of model development and deployment by machine learning ([0066] and [0199]). The content data is analyzed using natural language processing techniques that are implemented by artificial intelligence technology. The resulting outputs can include, without limitation: (i) the identities of conversation participants or "content sources;" (ii) a list of subjects addressed within the content data and that identify the reasons or "driver" for why a customer initiated a shared experience; (iii) weighting data showing the relative importance or engagement associated with certain subjects; and (iv) frequency data defining the proportion of shared experiences that relate to a particular subject identifier or driver for a support request [0083].)
Pham does not explicitly teach, however, Revanur teaches training the ranking model (Revanur e.g. Revanur teaches a computer system that routes contact center interactions. Interactions between contact center agents and contact center queries that are received at a contact center are monitored (Abstract). A ranking model is trained according to the categories of the contact center queries and the interaction scores of each handled query using machine learning (Abstract). A ranking model is trained according to the categories of the contact center queries, one or more selected business outcomes, and the interaction scores of each agent for each handled query using machine learning [0010]. The ranking model is tested according to various metrics in order to gauge the performance of the ranking model [0010]. Testing module 120 may test the efficacy of a ranking model in order to ensure that the ranking model properly ranks agents according to desired business outcomes [0020]. The machine learning ranking model is trained and tested at operation 330 by generating interaction scores [0044]. Model generating module 115 may train the ranking model using conventional or other machine learning techniques. In some embodiments, the ranking model is trained using a machine learning approach based on matrix factorization, restricted Boltzmann machines, and/or singular value decomposition [0044]. Once model generating module 115 trains the ranking model, testing module 120 may test the model [0044].)
The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Pham’s classification analysis model with Revanur’s ranking model in order to analyze and route interactions more effectively (Revanur e.g. [0053]).
As per claim 9 (Original), Pham in view of Revanur teach the classification and resolution system of claim 8, wherein training the source classification model and the reason ranking model comprises evaluating an accuracy of the source classification model and the reason ranking model until the accuracy reaches a threshold value.
Pham teaches wherein training the source classification model comprises evaluating an accuracy of the source classification model until the accuracy reaches a threshold value (Pham e.g. The training subject, interaction driver, sentiment, or polarity classification identifiers are compared against the annotated training set content data files to generate an error rate. The weights of the neural network node formulas (i.e., network parameters) of the neural network are adjusted so as to reduce the error rate [0017]. In this manner, the neural network is trained to optimize the parameters that implement the subject classification, sentiment, or other analyses [0017]. In yet other embodiments, the subject classification analysis can be implemented using supervised learning techniques [0035]. Supervised learning software systems are trained using content data that is well-labeled or "tagged." Supervised learning software systems often require extensive and iterative optimization cycles to adjust the input-output mapping until they converge to an expected and well-accepted level of performance, such as an acceptable threshold error rate between a calculated probability and a desired threshold probability [0129]. The training set content data is then fed to the content driver software service neural networks to identify subjects, content sources, or sentiments and the corresponding probabilities [0169]. For example, the analysis might identify that particular text represents a question with a 35% probability. If the annotations indicate the text is, in fact, a question, an error rate can be taken to be 65% or the difference between the calculated probability and the known certainty. Then parameters to the neural network are adjusted (i.e., constants and formulas that implement the nodes and connections between nodes), to increase the probability from 35% to ensure the neural network produces more accurate results, thereby reducing the error rate. 
The process is run iteratively on different sets of training set content data to continue to increase the accuracy of the neural network [0169].)
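For illustration only, the iterative train-evaluate-adjust loop described in the cited [0017] and [0169] passages, adjusting parameters until accuracy reaches a threshold, can be sketched as follows. This is an editor's hypothetical sketch; the callbacks, starting accuracy, and the toy adjustment rule (each step closing half the remaining error gap) are illustrative stand-ins for real gradient-based weight updates.

```python
def train_until_threshold(evaluate, adjust, target_accuracy=0.95, max_iters=100):
    # Evaluate accuracy against annotated training data; adjust parameters
    # and re-evaluate until the accuracy threshold is met (or iterations
    # run out), mirroring the iterative optimization cycles described.
    accuracy = evaluate()
    for _ in range(max_iters):
        if accuracy >= target_accuracy:
            break
        adjust()
        accuracy = evaluate()
    return accuracy

# Toy stand-in for weight adjustment: each step closes half the error gap.
state = {"acc": 0.35}
final = train_until_threshold(
    evaluate=lambda: state["acc"],
    adjust=lambda: state.update(acc=state["acc"] + 0.5 * (1.0 - state["acc"])),
)
```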
Pham does not explicitly teach, however, Revanur teaches wherein training the ranking model comprises evaluating accuracy of the ranking model until the accuracy reaches a threshold value (Revanur e.g. A ranking model is trained according to the categories of the contact center queries and the interaction scores of each handled query using machine learning (Abstract). The ranking model is tested according to various metrics to ensure that the ranking model ranks the agents according to one or more selected business outcomes (Abstract). A ranking model is trained according to the categories of the contact center queries, one or more selected business outcomes, and the interaction scores of each agent for each handled query using machine learning [0010]. The ranking model is tested according to various metrics in order to gauge the performance of the ranking model [0010]. Testing module 120 may test the efficacy of a ranking model in order to ensure that the ranking model properly ranks agents according to desired business outcomes [0020]. FIG. 4 is a flow chart depicting a method 400 of routing queries in accordance with an example embodiment [0046]. The ranking model is updated at operation 460. Feedback may be collected upon resolution of an interaction in order to update the ranking model [0053]. In some embodiments, the ranking model is continually updated by training the model with fresh feedback data. In order to determine whether an updated ranking model should replace a currently-deployed model, the two models may be compared using AB testing; if a new model reliably performs better than a deployed model, the new model may be deployed [0053]. As the ranking model is continually updated, additional latent information may be discovered for each agent, thus enabling successive ranking models to route interactions more effectively [0053].)
The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Pham’s classification analysis model with Revanur’s ranking model in order to analyze and route interactions more effectively (Revanur e.g. [0053]).
As per claim 10 (Original), Pham in view of Revanur teach the classification and resolution system of claim 1, wherein the operations further comprise periodically verifying an accuracy of the source classification model and the reason ranking model.
Pham teaches periodically verifying an accuracy of the source classification model (Pham e.g. FIG. 6 is a flow chart representing a method of model development and deployment by machine learning ([0066] and [0199]). Step 606 can include data validation to confirm that the statistics of the ingested data are as expected, such as that data values are within expected numerical ranges, that data sets are within any expected or required categories, and that data comply with any needed distributions such as within those categories [0202]. In step 610, training test data such as a target variable value is inserted into an iterative training and testing loop. For example, features in the training test data are used to train the model based on weights and iterative calculations in which the target variable may be incorrectly predicted in an early iteration as determined by comparison in step 614, where the model is tested. Subsequent iterations of the model training, in step 612, may be conducted with updated weights in the calculations [0203].).
Pham does not explicitly teach this limitation; however, Revanur teaches periodically verifying an accuracy of the ranking model (Revanur e.g. The ranking model is tested according to various metrics in order to gauge the performance of the ranking model [0010]. Testing module 120 may test the efficacy of a ranking model in order to ensure that the ranking model properly ranks agents according to desired business outcomes [0020]. FIG. 4 is a flow chart depicting a method 400 of routing queries in accordance with an example embodiment [0046]. The ranking model is updated at operation 460. Feedback may be collected upon resolution of an interaction in order to update the ranking model [0053]. In some embodiments, the ranking model is continually updated by training the model with fresh feedback data. In order to determine whether an updated ranking model should replace a currently-deployed model, the two models may be compared using AB testing; if a new model reliably performs better than a deployed model, the new model may be deployed [0053]. As the ranking model is continually updated, additional latent information may be discovered for each agent, thus enabling successive ranking models to route interactions more effectively [0053].)
The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine Pham’s classification analysis model with Revanur’s ranking model in order to analyze and route interactions more effectively (Revanur e.g. [0053]).
As per claims 22, 23, and 24 (New), Pham in view of Revanur teach the classification and resolution system of claim 1, the method of claim 11, and the non-transitory computer-readable medium of claim 16. Pham teaches wherein the performed action comprises assigning training to the first agent, modifying a repeat interaction key performance indicator (KPI) of the first agent, or reconnecting the customer to the first agent (Pham e.g. For instance, a provider may find that the interaction driver identifiers indicate that customer support calls concerning a provider mobile software application jump from one time period to the next for users attempting to execute an electronic transfer. This in turn indicates that the provider's mobile application electronic transfer function is not operating correctly. The provider has an opportunity to investigate and resolve any problems relating to the electronic transfers [0024]. A provider can also modify its interactive voice response ("IVR") software, which outputs automated options for selection by a user, to include an option for users requiring assistance with the provider mobile application electronic transfer function. Thus, the IVR software is modified to reflect the interaction driver identifiers [0025]. The provider can adjust resource availability for users, such as increasing staffing of agents that are prepared to render assistance with the mobile software application, or requiring agent training on the mobile software application electronic transfer feature [0025].).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Pham et al. (US 2024/0168918 A1) in view of Revanur et al. (US 2020/0137231 A1) and in further view of Deole et al. (US 2021/0144254 A1).
As per claim 4 (Original), Pham in view of Revanur teach the classification and resolution system of claim 1. Neither Pham nor Revanur explicitly teaches this limitation; however, Deole teaches wherein the operations further comprise: opening a reconnection buffer period after the first interaction; determining the repeated interaction was initiated within the reconnection buffer period; and reconnecting the customer to the first agent. (Deole e.g. The technology disclosed herein enables a call to be immediately reconnected to the same agent of a contact center after the agent has been disconnected while the caller remains connected to the contact center (Abstract). An agent of a contact center may be disconnected from a caller for any number of reasons, however, the technology described below enables the immediate reconnection of a caller to the same agent to which the caller was communicating when the caller remains connected to a contact center system [0020]. FIG. 1 illustrates implementation 100 for immediately reconnecting a call to an agent in a contact center [0021]. Upon determining that a non-recoverable error has occurred reconnect system 101 generates identification information that identifies the communication session (202). The identification information is unique to the communication session at least during a predefined time period for which reconnection may occur (e.g., for at least five minutes during which a caller is likely to stay on for reconnection) [0025]. The identification information may be generated from information about the communication session, such as the identity of the caller, the identity of the agent, an identifier for caller client system 102, an identifier for agent client system 103, a time of day for the communication session, or any other type of relevant information [0025]. FIG. 4 illustrates operational scenario 400 to immediately reconnect a call to a same agent in a contact center [0033]. 
Once WebRTC server 303 recognizes that a non-recoverable error occurred to disconnect agent client system 304 from WebRTC server 303, WebRTC server 303 transfers disconnect notification 404 at step 6 to reconnect system 301 so that reconnect system 301 can begin the process of immediately reconnecting the agent to the WebRTC call [0036]. Reconnect system 301 places call 401 in a queue at step 8 to wait for the agent to be reconnected. The queue is a priority queue that gives call 401 priority for being connected to the agent when the agent is reconnected. The priority queue essentially ensures that no other call gets routed to the agent before call 401 when the agent logs back into contact center 311's systems [0036].)
The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify Pham in view of Revanur’s provider/contact center system to include a process of reconnecting/rerouting repeat callers to the same agent as taught by Deole in order to enable a caller/customer to continue the communication session (i.e. provide continuity of service) (Deole e.g. [0030]).
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Pham et al. (US 2024/0168918 A1) in view of Revanur et al. (US 2020/0137231 A1) and in further view of Conway et al. (US 10,129,402 B1).
As per claim 21 (New), Pham in view of Revanur teach the classification and resolution system of claim 1. Neither Pham nor Revanur explicitly teaches this limitation; however, Conway teaches wherein the historical statistics of the first agent comprise, for a relevant agent skill, a percentage of incomplete communication, a percentage of lacking skills or proficiency, and a percentage of incorrect information on previous interactions (Conway e.g. A method for analyzing caller interaction events that includes receiving, by a processor, a caller interaction event between an agent and a caller, extracting, by a processor, caller event data from the caller interaction event, analyzing, by a processor, the caller event data, and generating, by a processor, a report displaying one or more selected categories of the caller event data (Abstract). Generally, a customer is in contact with a customer service representative ("CSR") or call center agent who is responsible for answering the customer's inquiries and/or directing the customer to the appropriate individual, department, information source, or service as required to satisfy the customer's needs (col. 1 lines 40-46). The caller event information can also include additional information concerning each call, such as statistical data relating to the caller interaction event (e.g., time, date and length of call, caller identification, agent identification, hold times, transfers, etc.), and a recording of the caller interaction event (col. 7 lines 11-16). This event data is comprised of a call assessment data corresponding to at least one identifying indicia (e.g., a CSR name, a CSR center identifier, a customer, a customer type, a call type, etc.) and at least one predetermined time interval (col. 23 lines 54-58). In the embodiment shown in FIG. 15, the system 1 includes a PROFILES tab 410, a REVIEW tab 412 and a METRICS tab 414. A variety of the other tabs with additional information can also be made available (col. 24 lines 56-60). 
The REVIEW tab 412 also includes a visual link to call center or CSR agent folders 422. This includes a list of calls divided by call center or CSR agents (col. 25 lines 36-38). The user could choose an agent from a drop down menu or list of available agents. This returns all calls from the selected agent in the date range specified (col. 25 lines 51-53). The user can also generate a number of CALL CENTER or CSR AGENT REPORTS. These include the following summary reports: corporate summary by location; CSR agent performance; and non-analyzed calls (col. 27 lines 34-37). A CORPORATE SUMMARY BY LOCATION REPORT 502 is shown in FIG. 26 (col. 27 lines 40-41). The CORPORATE SUMMARY BY LOCATION REPORT 502 includes a location column 504 (this identifies the call center location that received the call), a number of calls column 506 (total number of calls received by the associated call center location during the specified reporting interval), an average duration column 508 (total analyzed talk time for all calls analyzed for the associated CSR agent divided by the total number of calls analyzed for the agent), a greater than 150% duration column 510 (percentage of calls for a CSR agent that exceed 150% of the average duration for all calls), a greater than 90 second hold column 512 (percentage of calls for a CSR agent where the CSR places the caller on hold for greater than 90 seconds), a greater than 30 second silence column 514 (percentage of calls for a CSR agent where there is a period of continuous silence within a call greater than 30 seconds), a call transfer column 516 (percentage of calls for a CSR agent that result in the caller being transferred), an inappropriate response column 518 (percentage of calls where the CSR agent exhibits inappropriate behavior or language), an appropriate response column 520 (percentage of calls where the CSR agent exhibits appropriate behavior or language that result in the dissipation of caller distress; these calls can be found in the 
upset caller/issue resolved folder), a no authentication column 522 (percentage of calls where the CSR agent does not authenticate the caller's identity to prevent fraud), and a score column 524 (a composite score that represents overall call center performance for all calls in the associated call center location) (cols. 27-28 lines 45-6). The values 526 in the score column 524 are based on the weighted criteria shown in FIG. 27. All weighted values are subtracted from a starting point of 100 except for "appropriate response," which is an additive value (col. 28 lines 7-10). A CSR PERFORMANCE REPORT 528 is shown in FIG. 28. This is a detail level report that identifies analysis results by CSR for the specified time interval. This Report 528 contains a composite score that ranks relative CSR performance for each call type across event filter criteria (col. 28 lines 11-15). FIG. 31 shows a TEAM BY AGENT REPORT 534. This is a summary level report that identifies analysis results by team and agent for the specified time interval. These Reports 534 contain a composite performance score that ranks relative CSR performance across event filter criteria by agent (col. 28 lines 26-31). The Examiner submits that % call transfers could reflect incomplete/unresolved communications, the score could reflect proficiency, and % of no authentication/inappropriate response could reflect incorrect information.)
The Examiner submits that before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify Pham in view of Revanur’s provider/contact center system’s historical statistics of agents to include a percentage of incomplete communication, a percentage of lacking skills or proficiency, and a percentage of incorrect information on previous interactions as taught by Conway in order to monitor the performance of the call center agents to identify possible training needs (Conway e.g. col. 1 lines 62-64).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ayanna Minor whose telephone number is (571)272-3605. The examiner can normally be reached M-F, 9 am-5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor can be reached at 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.M./Examiner, Art Unit 3624
/Jerry O'Connor/Supervisory Patent Examiner, Group Art Unit 3624