DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-20 are pending in the instant patent application.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Regarding Claims 1-10, they are directed to a method; however, the claims recite a judicial exception without significantly more. Specifically, Claims 1-10 are directed to the abstract idea of identifying predictive data features for target zones.
Performing the Step 2A Prong 1 analysis with specific reference to independent Claim 1, claim 1 recites generating, and based on data retrieved from a plurality of data sources, a training data set comprising: process data associated with performance of instances of a process; and outcome scores associated with the instances of the process; and determining target zones associated with the predictive data features, wherein the target zones indicate values of the predictive data features that are associated with a target range of the outcome scores.
These claim limitations fall within the Mental Processes grouping of abstract ideas because they are concepts that can be practically performed in the human mind and/or with pen and paper. Furthermore, the courts have found that claims requiring a generic computer, or nominally reciting a generic computer, may still recite a mental process even though the claim limitations are not performed entirely in the human mind.
Accordingly, the claim recites an abstract idea, and dependent claims 2-5 and 7-10 further recite the abstract idea.
Regarding the Step 2A Prong 2 analysis, the judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of a computing system and of training, by the computing system, and based on the training data set, a machine learning model to identify predictive data features, indicated by the process data, that are predictive of the outcome scores. These additional elements amount to no more than generic computing devices performing generic functions and do not integrate the judicial exception into a practical application.
With respect to Step 2B, the claims do not include additional elements amounting to significantly more than the abstract idea. Claims 1 and 6 include various elements that are not directed to the abstract idea under Step 2A. These elements include the computing system; the training, by the computing system, and based on the training data set, of a machine learning model to identify predictive data features, indicated by the process data, that are predictive of the outcome scores; and the generic computing elements described in the Applicant's specification in at least Paras. 0108-0120. These elements do not amount to significantly more than the abstract idea because they represent a generic computer performing generic functions.
Therefore, Claims 1 and 6, alone or in combination, are not drawn to eligible subject matter as they are directed to abstract ideas without significantly more.
Regarding Claims 11-15, they are directed to a system; however, the claims recite a judicial exception without significantly more. Specifically, Claims 11-15 are directed to the abstract idea of identifying predictive data features for target zones.
Performing the Step 2A Prong 1 analysis while referring specifically to independent Claim 11, claim 11 recites generate a training data set that comprises: process data associated with performance of instances of a process; and outcome scores associated with the instances of the process; and determine target zones associated with the predictive data features, wherein the target zones indicate values of the predictive data features that are associated with a target range of the outcome scores.
These claim limitations fall within the Mental Processes grouping of abstract ideas because they are concepts that can be practically performed in the human mind and/or with pen and paper. Furthermore, the courts have found that claims requiring a generic computer, or nominally reciting a generic computer, may still recite a mental process even though the claim limitations are not performed entirely in the human mind.
Accordingly, the claim recites an abstract idea and dependent claims 12-13 and 15 further recite the abstract idea.
Regarding the Step 2A Prong 2 analysis, the judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of one or more processors, memory, and training a machine learning model, based on the training data set, to identify predictive data features, indicated by the process data, that are predictive of the outcome scores. These additional elements amount to no more than generic computing devices performing generic functions and do not integrate the judicial exception into a practical application.
With respect to Step 2B, the claims do not include additional elements amounting to significantly more than the abstract idea. Claims 11 and 15 include various elements that are not directed to the abstract idea under Step 2A. These elements include the one or more processors; the memory; the training of a machine learning model, based on the training data set, to identify predictive data features, indicated by the process data, that are predictive of the outcome scores; and the generic computing elements described in the Applicant's specification in at least Paras. 0108-0120. These elements do not amount to significantly more than the abstract idea because they represent a generic computer performing generic functions.
Therefore, Claims 11 and 15, alone or in combination, are not drawn to eligible subject matter as they are directed to abstract ideas without significantly more.
Regarding Claims 16-20, they are directed to a system; however, the claims recite a judicial exception without significantly more. Specifically, Claims 16-20 are directed to the abstract idea of identifying predictive data features for target zones.
Performing the Step 2A Prong 1 analysis while referring specifically to independent Claim 16, claim 16 recites generate a training data set that comprises: first process data associated with performance of historical instances of a process; and outcome scores associated with the historical instances of the process; determine target zones associated with the predictive data features, wherein the target zones indicate values of the predictive data features that are associated with a target range of the outcome scores; identify instances of the predictive data features within second process data associated with performance of second instances of the process; determine whether the instances of the predictive data features are associated with second values that are within the target zones; and generate insight output based on determining whether the instances of the predictive data features are associated with the second values that are within the target zones.
These claim limitations fall within the Mental Processes grouping of abstract ideas because they are concepts that can be practically performed in the human mind and/or with pen and paper. Furthermore, the courts have found that claims requiring a generic computer, or nominally reciting a generic computer, may still recite a mental process even though the claim limitations are not performed entirely in the human mind.
Accordingly, the claim recites an abstract idea and dependent claims 17-20 further recite the abstract idea.
Regarding the Step 2A Prong 2 analysis, the judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of one or more processors, a computing system, and training a machine learning model, based on the training data set, to identify predictive data features, indicated by the first process data, that are predictive of the outcome scores. These additional elements amount to no more than generic computing devices performing generic functions and do not integrate the judicial exception into a practical application.
With respect to Step 2B, the claims do not include additional elements amounting to significantly more than the abstract idea. Claim 16 includes various elements that are not directed to the abstract idea under Step 2A. These elements include the one or more processors; the computing system; the training of a machine learning model, based on the training data set, to identify predictive data features, indicated by the first process data, that are predictive of the outcome scores; and the generic computing elements described in the Applicant's specification in at least Paras. 0108-0120. These elements do not amount to significantly more than the abstract idea because they represent a generic computer performing generic functions.
Therefore, Claim 16 is not drawn to eligible subject matter as it is directed to abstract ideas without significantly more.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 9, and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Kennedy (US 2020/0311611 A1) in view of Hanson et al. (US 8,311,856 B1), and further in view of O'Hara et al. (US 2022/0083905 A1).
Regarding Claim 1, Kennedy teaches the limitations of Claim 1 which state
generating, by a computing system, and based on data retrieved from a plurality of data sources, a training data set comprising (Kennedy: Para 0030 via Machine learning tool 34 may also include one more ensemble learning algorithms, which may utilize two or more sets of features to train two or more machine learning models):
training, by the computing system, and based on the training data set, a machine learning model to identify predictive data features, indicated by the process data, that are predictive of the outcome scores (Kennedy: Para 0029, 0043 via Machine learning tool 34 may be initialized using a labeled data set (referred to as “training”). As used herein, a “labeled” data set is a set of data elements with associated labels (e.g. as metadata) identifying a value of the target variable for each data element. For example, if the target is a binary value indicating whether a data element belongs to a certain group, the labeled data set includes data identifying which data elements belong to the group. The labeled data set may be stored in memory 24 and may have a number of features associated therewith. As will be explained further below, feature definition tool 32 may automatically generate additional features for the labeled data set stored in memory 24 to create a pool of candidate features from the pool. Such features may be referred to hereinafter as “synthetic features”. Feature definition tool 32, may automatically select the most predictive features, e.g. the features most strongly correlated to the labels, and provide those selected features to machine learning tool 34. The selected features include zero or more base features and zero or more synthetic features. If the selected features include synthetic features, the machine learning tool 34 may identify relationships between the synthetic features of the labeled data set and the target variable of the tool. Once trained, machine learning tool 34 may take as input a new data element and output a predicted outcome of the target variable…Feature scoring module 53 is configured to determine feature scores for candidate features. The feature score of a feature may be representative of the degree of correlation between that feature and the target of machine learning tool 34. 
The degree of correlation between a feature and the target may be reflective of how predictive the feature is of the target, when used by machine learning tool 34. For example, if machine learning tool 34 is to identify data elements belonging to several categories, the score of a given candidate feature may give an approximate indication of how strongly predictive the candidate feature is of whether the data element belongs to a particular category).
However, Kennedy does not explicitly disclose the limitations of Claim 1 which state process data associated with performance of instances of a process; and outcome scores associated with the instances of the process.
Hanson though, with the teachings of Kennedy, teaches of
process data associated with performance of instances of a process (Hanson: Col 10 lines 33-44 via a performance module 337 compiles vehicle repair shop performance data, or Key Performance Indicator (KPI) data, that calculates a score and ranks the vehicle repair shop relative to other vehicle repair shops in the market. Finally, in a step 224, the VICMA 330 gathers vehicle repair shop metrics by routing information through the claim processing translation system 322. The KPI data may be compiled for individual claim transactions. When all data fields are captured for a given claim, the claim file may be added to a vehicle repair shop file that includes claim statistics for all claims the vehicle repair shop 108 has processed with the insurance company); and
outcome scores associated with the instances of the process (Hanson: Col 10 lines 33-50 via a performance module 337 compiles vehicle repair shop performance data, or Key Performance Indicator (KPI) data, that calculates a score and ranks the vehicle repair shop relative to other vehicle repair shops in the market. Finally, in a step 224, the VICMA 330 gathers vehicle repair shop metrics by routing information through the claim processing translation system 322. The KPI data may be compiled for individual claim transactions. When all data fields are captured for a given claim, the claim file may be added to a vehicle repair shop file that includes claim statistics for all claims the vehicle repair shop 108 has processed with the insurance company…The vehicle repair shop file may include scores for customer service, repair quality (pass ratio), or cycle time. It may also include estimate metrics that measure the vehicle repair shop's ability to estimate total repair cost, average part amount for estimate, and average hours per estimate (with a breakdown of refinish, repair, and replace)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kennedy with the teachings of Hanson in order to have process data associated with performance of instances of a process, and outcome scores associated with the instances of the process. The motivation for doing so would have been to incorporate Hanson's teachings on compiling performance metrics. Furthermore, in addition to the references being in the same CPC class, the teachings, suggestions, and motivations in the prior art would have led one of ordinary skill to modify the prior art references or combine their teachings to arrive at the claimed invention.
Furthermore, Kennedy does not explicitly disclose the limitation of Claim 1 which states determining, by the computing system, target zones associated with the predictive data features, wherein the target zones indicate values of the predictive data features that are associated with a target range of the outcome scores.
O’Hara though, with the teachings of Kennedy/Hanson, teaches of
determining, by the computing system, target zones associated with the predictive data features, wherein the target zones indicate values of the predictive data features that are associated with a target range of the outcome scores (O’Hara: Para 0041, 0049 via Returning to process 200, a model region is assigned to each of the input training records at S220 based on the trained predictive model, the input training records and corresponding predicted values. Assignment of the model regions at S220 first requires determination of the model regions. Next, at S230, a classification model is trained based on the assigned model regions to predict a model region. An example implementation of S220 and S230 will be described below with respect to FIGS. 4-10…In some embodiments, each of the plurality of bins is associated with an exclusive range of target values. At S420, all feature contribution records associated with a target value falling within a range associated with a bin are assigned to that bin…).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kennedy/Hanson with the teachings of O'Hara in order to have determining, by the computing system, target zones associated with the predictive data features, wherein the target zones indicate values of the predictive data features that are associated with a target range of the outcome scores. The motivation for doing so would have been to incorporate O'Hara's teachings on training a machine learning model based on sets of training data, each of which is associated with a target output. Furthermore, in addition to the references being in the same CPC class, the teachings, suggestions, and motivations in the prior art would have led one of ordinary skill to modify the prior art references or combine their teachings to arrive at the claimed invention.
Regarding Claim 2, the combination of Kennedy/Hanson/O’Hara teaches the limitations of Claim 2 which state
wherein the process data includes one or more of: operational data associated with the instances of the process, customer data associated with customers associated with the instances of the process, or worker data associated with workers that performed the instances of the process (Hanson: Col 6 lines 22-39 via The insurance company may offer assignments to the vehicle repair shop for either repairs or estimates as part of the first notice of loss (FNOL) process. After the vehicle repair shop has been offered the assignment and submitted the estimate, the vehicle repair shop typically completes the corresponding repairs upon approval by the insurance company and absent any special circumstances. The present invention may provide the vehicle repair shop with assignment data needed to prepare a repair estimate or repair the vehicle. Assignment data may include, but not be limited to, customer name, contact information, insurance claim number, assignment date, loss date, loss type, loss type detail, loss description, current vehicle location, location where vehicle may be sent, deductible amount, vehicle type, year/make/model, vehicle identification number (VIN), license plate number, towing company information, damage information, prior damage information, and vehicle safety status (drivable/non-drivable)).
Regarding Claim 9, the combination of Kennedy/Hanson/O’Hara teaches the limitations of Claim 9 which state
the training of the machine learning model identifies a combination of the predictive data features that is predictive of the outcome scores (Kennedy: Para 0029, 0043 via Machine learning tool 34 may be initialized using a labeled data set (referred to as “training”). As used herein, a “labeled” data set is a set of data elements with associated labels (e.g. as metadata) identifying a value of the target variable for each data element. For example, if the target is a binary value indicating whether a data element belongs to a certain group, the labeled data set includes data identifying which data elements belong to the group. The labeled data set may be stored in memory 24 and may have a number of features associated therewith. As will be explained further below, feature definition tool 32 may automatically generate additional features for the labeled data set stored in memory 24 to create a pool of candidate features from the pool. Such features may be referred to hereinafter as “synthetic features”. Feature definition tool 32, may automatically select the most predictive features, e.g. the features most strongly correlated to the labels, and provide those selected features to machine learning tool 34. The selected features include zero or more base features and zero or more synthetic features. If the selected features include synthetic features, the machine learning tool 34 may identify relationships between the synthetic features of the labeled data set and the target variable of the tool. Once trained, machine learning tool 34 may take as input a new data element and output a predicted outcome of the target variable…Feature scoring module 53 is configured to determine feature scores for candidate features. The feature score of a feature may be representative of the degree of correlation between that feature and the target of machine learning tool 34. 
The degree of correlation between a feature and the target may be reflective of how predictive the feature is of the target, when used by machine learning tool 34. For example, if machine learning tool 34 is to identify data elements belonging to several categories, the score of a given candidate feature may give an approximate indication of how strongly predictive the candidate feature is of whether the data element belongs to a particular category);
and the target zones are associated with combinations of values, associated with the combination of the predictive data features, that are associated with the target range of the outcome scores (O’Hara: Para 0041, 0049 via a model region is assigned to each of the input training records at S220 based on the trained predictive model, the input training records and corresponding predicted values. Assignment of the model regions at S220 first requires determination of the model regions. Next, at S230, a classification model is trained based on the assigned model regions to predict a model region. An example implementation of S220 and S230 will be described below with respect to FIGS. 4-10…each of the plurality of bins is associated with an exclusive range of target values. At S420, all feature contribution records associated with a target value falling within a range associated with a bin are assigned to that bin).
Regarding Claims 11-12, they are analogous to Claims 1-2 and are rejected for the same reasons. (Kennedy: Para 0007).
Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kennedy (US 2020/0311611 A1) in view of Hanson et al. (US 8,311,856 B1), in view of O'Hara et al. (US 2022/0083905 A1), in view of Gvildys et al. (US 2021/0174288 A1), and further in view of Wu et al. (US 2020/0151746 A1).
Regarding Claim 3, while Kennedy/Hanson/O’Hara teaches the limitations of Claim 2, it does not explicitly disclose the limitations of Claim 3 which state obtaining the operational data, the customer data, the worker data, and the outcome scores from the plurality of data sources; identifying data elements of the operational data, the customer data, the worker data, and the outcome scores that are associated with same instances of the process; and linking the data elements, in the training data set, that are associated with the same instances.
Gvildys though, with the teachings of Kennedy/Hanson/O’Hara teaches of
obtaining the operational data, the customer data, the worker data, and the outcome scores from the plurality of data sources (Gvildys: Para 0032, 0035 via the data for training the model is provided by a performance monitoring module 116. Functionality of the performance monitoring module is described in detail in U.S. Pat. No. 8,589,215, the content of which is incorporated herein by reference. In general terms, the performance monitoring module 116 monitors agent performance in meeting certain contact center metrics, and determines objective performance measurements based on the monitoring. Such objective performance measurements may include, for example, a number of interactions that have been transferred to another agent per month, customer survey scores, number of repeat calls per month, and the like. In addition to objective performance measurements, the contact center may also consider certain subjective factors that may be important to the contact center, such as for example, enthusiasm, selling skills, teamwork, and the like. Scores for the subjective factors may be given, for example, by a supervisor who may evaluate the subjective factors after analyzing one or more interactions of the agent…The model may be, for example, a statistical model that is trained based on training data provided to the model. In one embodiment, the training data includes input features taking the form of attributes 202a-202c (collectively referenced as 202) of agents when handling, for example, a simulated call. Such attributes may include, without limitation, emotional feature scores 202a, adherence scores 202b, and clarity scores 202c. The input features are mapped/correlated to particular target values. In one embodiment, the target values are agent performance scores 204 provided by the performance monitoring module 116);
identifying data elements of the operational data, the customer data, the worker data, and the outcome scores that are associated with same instances of the process (Gvildys: Para 0035 via The model may be, for example, a statistical model that is trained based on training data provided to the model. In one embodiment, the training data includes input features taking the form of attributes 202a-202c (collectively referenced as 202) of agents when handling, for example, a simulated call. Such attributes may include, without limitation, emotional feature scores 202a, adherence scores 202b, and clarity scores 202c. The input features are mapped/correlated to particular target values. In one embodiment, the target values are agent performance scores 204 provided by the performance monitoring module 116); and
linking the data elements, in the training data set, that are associated with the same instances (Gvildys: Para 0035 via The model may be, for example, a statistical model that is trained based on training data provided to the model. In one embodiment, the training data includes input features taking the form of attributes 202a-202c (collectively referenced as 202) of agents when handling, for example, a simulated call. Such attributes may include, without limitation, emotional feature scores 202a, adherence scores 202b, and clarity scores 202c. The input features are mapped/correlated to particular target values. In one embodiment, the target values are agent performance scores 204 provided by the performance monitoring module 116).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kennedy/Hanson/O'Hara with the teachings of Gvildys in order to have obtaining the operational data, the customer data, the worker data, and the outcome scores from the plurality of data sources; identifying data elements of the operational data, the customer data, the worker data, and the outcome scores that are associated with the same instances of the process; and linking the data elements, in the training data set, that are associated with the same instances. The motivation for doing so would have been to incorporate Gvildys's teachings on predicting the performance of candidate contact center agents. Furthermore, in addition to the references being in the same CPC class, the teachings, suggestions, and motivations in the prior art would have led one of ordinary skill to modify the prior art references or combine their teachings to arrive at the claimed invention.
In addition, Kennedy/Hanson/O’Hara does not explicitly disclose the limitation of converting the operational data, the customer data, the worker data, and the outcome scores to a common data format.
Wu though, with the teachings of Kennedy/Hanson/O’Hara/Gvildys, teaches of
converting the operational data, the customer data, the worker data, and the outcome scores to a common data format (Wu: Para A pre-processing pipeline process can be utilized to generate features from raw data. Specifically, feature engineering can be performed on raw user event data. Raw user event data can be obtained from data store 202 (e.g., raw user event data stored in data store 202). Engineered features can generally be defined as user features that are processed such that the features can be input into the model. For instance, engineered features are the features that are used to train the model. While numeric variables can often be directly input into a model as a feature, categorical variables typically need to be converted in a way that they can input into the model. In this way, categorical variables such as user behavior and attributes can be standardized such that the model can process them accordingly).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kennedy/Hanson/O'Hara/Gvildys with the teachings of Wu in order to have converting the operational data, the customer data, the worker data, and the outcome scores to a common data format. The motivation for doing so would have been to incorporate Wu's teachings on collecting and analyzing data and converting data of various formats into a format that a model can process. Furthermore, in addition to the references being in the same CPC class, the teachings, suggestions, and motivations in the prior art would have led one of ordinary skill to modify the prior art references or combine their teachings to arrive at the claimed invention.
Claims 4-5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kennedy (US 2020/0311611 A1) in view of Hanson et al. (US 8,311,856 B1), in view of O'Hara et al. (US 2022/0083905 A1), and further in view of Kannan (US 8,396,741 B2).
Regarding Claim 4, while the combination of Kennedy/Hanson/O’Hara teaches the limitations of Claim 2, it does not explicitly disclose the limitation of Claim 4 which states wherein the worker data comprises worker satisfaction scores based on answers to worker surveys provided by the workers.
Kannan though, with the teachings of Kennedy/Hanson/O’Hara, teaches of
wherein the worker data comprises worker satisfaction scores based on answers to worker surveys provided by the workers (Kannan: Col 9 lines 40-45, Col 15 lines 1-5 via In concert with the steps of mining a transcription 314 and assigning a score, the process 310 also gives follow-up surveys to customers, agents, or both to extract additional information about the interaction. Types of surveys include voice surveys, email surveys, text message surveys, browser-based online surveys, etc. The surveys ask for both structured data and instructed data… FIG. 10 illustrates a workflow for integrating customer surveys and agent surveys to increase customer satisfaction. The results of the agent surveys and customer surveys identify the attributes that drive customer dissatisfaction, and which attributes that have the highest affinity to each other).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kennedy/Hanson/O’Hara with the teachings of Kannan in order to have the worker data comprise worker satisfaction scores based on answers to worker surveys provided by the workers. The motivation for this modification is to incorporate the teachings of mining customer-agent interactions. Furthermore, in addition to being in the same CPC class, the teachings, suggestions, and motivations in this prior art would have led one of ordinary skill to modify or combine the prior art references to arrive at the claimed invention.
Regarding Claim 5, while the combination of Kennedy/Hanson/O’Hara teaches the limitations of Claim 1, it does not explicitly disclose the limitation of Claim 5 which states wherein the outcome scores comprise customer satisfaction scores indicating subjective satisfaction levels of customers associated with the instances of the process.
Kannan, in combination with Kennedy/Hanson/O’Hara, teaches
wherein the outcome scores comprise customer satisfaction scores indicating subjective satisfaction levels of customers associated with the instances of the process (Kannan: Col 5 lines 14-25, Col 9 lines 40-46 via Likewise, the data fusion engine 100 gathers information from one or more survey modules 23. The survey module 23 stores survey results 24, net experience scores, customer satisfaction scores and ratings, agent performance scores, etc., and verbatim survey data 29. In some embodiments of the invention, surveys are given to both customers and agents. According to these embodiments, a comparison between the customer survey and the agent survey reveals useful insights. For example, a customer may report a negative interaction experience because the agent was unable to give the customer a particular requested service. However, the company employing the agent may restrict the agent from giving customers the requested service. Therefore, the agent can self-report that they performed well in light of a customer asking for a service that they were unauthorized to provide… In concert with the steps of mining a transcription 314 and assigning a score, the process 310 also gives follow-up surveys to customers, agents, or both to extract additional information about the interaction. Types of surveys include voice surveys, email surveys, text message surveys, browser-based online surveys, etc.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kennedy/Hanson/O’Hara with the teachings of Kannan in order to have the outcome scores comprise customer satisfaction scores indicating subjective satisfaction levels of customers associated with the instances of the process. The motivation for this modification is to incorporate the teachings of mining customer-agent interactions. Furthermore, in addition to being in the same CPC class, the teachings, suggestions, and motivations in this prior art would have led one of ordinary skill to modify or combine the prior art references to arrive at the claimed invention.
Regarding Claim 13, it is substantially similar to Claim 5 and is rejected for the same reasons.
Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kennedy (US 2020/0311611 A1) in view of Hanson et al. (US 8,311,856 B1), in view of O'Hara et al. (US 2022/0083905 A1), and further in view of Wu et al. (US 2020/0151746 A1).
Regarding Claim 6, while the combination of Kennedy/Hanson/O’Hara teaches the limitations of Claim 1, it does not explicitly disclose the limitations of Claim 6 which state: identifying, by the computing system, instances of the predictive data features within second process data associated with second instances of the process; determining, by the computing system, whether the instances of the predictive data features are associated with second values that are within the target zones; and generating, by the computing system, insight output based on determining whether the instances of the predictive data features are associated with the second values that are within the target zones.
Wu, in combination with Kennedy/Hanson/O’Hara, teaches
identifying, by the computing system, instances of the predictive data features within second process data associated with second instances of the process (Wu: Para 0029, 0055 via As described herein, the KPI analytics system generates actionable KPI-driven customer segments. At a high-level, to generate KPI-driven customer segments, the KPI analytics system builds a propensity model for a KPI of interest to generate predicted outcomes for customers that reflect the likelihood that each customer will reach/perform a particular outcome related to the KPI of interest. The propensity model can be generated using historical user behavior data and/or user attributes correlated with known outcomes. Combining user-level behavior features (e.g., product use frequency, product use recency, product variety, etc.) and user attributes (e.g., age of subscription, country, skill level, etc.) along with known outcomes can be used to train the propensity model (e.g., using machine learning). When applied to existing customers, the propensity model generates a predicted outcome for each customer indicative of a likelihood of the outcome of interest for which the model was trained… The propensity model can be updated periodically based on accuracy. For instance, the trained propensity model can be used to determine the probability of an outcome for an existing customer. After the designated timeframe for the determined probability has passed, an actual outcome for the existing customer is determined and the predicted outcome is compared with the actual outcome. When the predicted outcome does not match the actual outcome, the model can be updated with additional training data (e.g., when a predefined threshold level of outcomes are not correct));
determining, by the computing system, whether the instances of the predictive data features are associated with second values that are within the target zones (Wu: Para 0055 via The propensity model can be updated periodically based on accuracy. For instance, the trained propensity model can be used to determine the probability of an outcome for an existing customer. After the designated timeframe for the determined probability has passed, an actual outcome for the existing customer is determined and the predicted outcome is compared with the actual outcome. When the predicted outcome does not match the actual outcome, the model can be updated with additional training data (e.g., when a predefined threshold level of outcomes are not correct));
and generating, by the computing system, insight output based on determining whether the instances of the predictive data features are associated with the second values that are within the target zones (Wu: Para 0055 via The propensity model can be updated periodically based on accuracy. For instance, the trained propensity model can be used to determine the probability of an outcome for an existing customer. After the designated timeframe for the determined probability has passed, an actual outcome for the existing customer is determined and the predicted outcome is compared with the actual outcome. When the predicted outcome does not match the actual outcome, the model can be updated with additional training data (e.g., when a predefined threshold level of outcomes are not correct)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kennedy/Hanson/O’Hara with the teachings of Wu in order to identify, by the computing system, instances of the predictive data features within second process data associated with second instances of the process; determine, by the computing system, whether the instances of the predictive data features are associated with second values that are within the target zones; and generate, by the computing system, insight output based on determining whether the instances of the predictive data features are associated with the second values that are within the target zones. The motivation for this modification is to incorporate the teachings of collecting and analyzing data and converting data of various formats into a format that a model can process. Furthermore, in addition to being in the same CPC class, the teachings, suggestions, and motivations in this prior art would have led one of ordinary skill to modify or combine the prior art references to arrive at the claimed invention.
Regarding Claim 14, it is analogous to Claim 6 and is rejected for the same reasons.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kennedy (US 2020/0311611 A1) in view of Hanson et al. (US 8,311,856 B1), in view of O'Hara et al. (US 2022/0083905 A1), in view of Wu et al. (US 2020/0151746 A1), and further in view of Lee (US 2016/0171414 A1).
Regarding Claim 7, while the combination of Kennedy/Hanson/O’Hara/Wu teaches the limitations of Claim 6, it does not explicitly disclose the limitations of Claim 7 which state wherein the insight output identifies one or more particular instances of the process, of the second instances of the process, that are associated with third values that are outside the target zones.
Lee, in combination with Kennedy/Hanson/O’Hara/Wu, teaches
wherein the insight output identifies one or more particular instances of the process, of the second instances of the process, that are associated with third values that are outside the target zones (Lee: Para 0026, 0048 via At a high level, this disclosure is drawn to a method for generating an intelligent energy KPI system based on modular engineering. The disclosure provides an example of a systematic way to structure a large industrial complex hierarchically, a method to monitor overall energy performance using a few KPIs from the highest level, to transform raw process data to operational intelligence at the lowest level, and to integrate the interconnected information throughout all hierarchical levels. For example, operational intelligence can be obtained through analysis of all relevant historical and current process data. The intelligent energy KPI system can determine proper KPI targets to reflect current plant operations and monitor/detect any energy KPI violations…the system can include a representation of a hierarchical structure or portion of a hierarchical structure to assist a user. For example, any equipment KPI violation can be highlighted by orange or red status color, whereas good energy performance can be indicated by green or yellow status color).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kennedy/Hanson/O’Hara/Wu with the teachings of Lee in order to have the insight output identify one or more particular instances of the process, of the second instances of the process, that are associated with third values that are outside the target zones. The motivation for this modification is to incorporate the teachings of detecting and analyzing KPI violations. Furthermore, in addition to being in the same CPC class, the teachings, suggestions, and motivations in this prior art would have led one of ordinary skill to modify or combine the prior art references to arrive at the claimed invention.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Kennedy (US 2020/0311611 A1) in view of Hanson et al. (US 8,311,856 B1), in view of O'Hara et al. (US 2022/0083905 A1), in view of Wu et al. (US 2020/0151746 A1), in view of Lee (US 2016/0171414 A1), and further in view of Balakrishnan et al. (US 2018/0032939 A1).
Regarding Claim 8, while the combination of Kennedy/Hanson/O’Hara/Wu teaches the limitations of Claim 6, it does not explicitly disclose the limitations of Claim 8 which state that the second instances of the process are current instances of the process, and the insight output identifies one or more particular instances of the process, from among the current instances of the process, that are associated with instances of the second values that are currently inside the target zones.
Lee, in combination with Kennedy/Hanson/O’Hara/Wu, teaches
the second instances of the process are current instances of the process (Lee: Para 0026, 0048 via At a high level, this disclosure is drawn to a method for generating an intelligent energy KPI system based on modular engineering. The disclosure provides an example of a systematic way to structure a large industrial complex hierarchically, a method to monitor overall energy performance using a few KPIs from the highest level, to transform raw process data to operational intelligence at the lowest level, and to integrate the interconnected information throughout all hierarchical levels. For example, operational intelligence can be obtained through analysis of all relevant historical and current process data. The intelligent energy KPI system can determine proper KPI targets to reflect current plant operations and monitor/detect any energy KPI violations…the system can include a representation of a hierarchical structure or portion of a hierarchical structure to assist a user. For example, any equipment KPI violation can be highlighted by orange or red status color, whereas good energy performance can be indicated by green or yellow status color), and
the insight output identifies one or more particular instances of the process, from among the current instances of the process, that are associated with instances of the second values that are currently inside the target zones (Lee: Para 0026, 0048 via At a high level, this disclosure is drawn to a method for generating an intelligent energy KPI system based on modular engineering. The disclosure provides an example of a systematic way to structure a large industrial complex hierarchically, a method to monitor overall energy performance using a few KPIs from the highest level, to transform raw process data to operational intelligence at the lowest level, and to integrate the interconnected information throughout all hierarchical levels. For example, operational intelligence can be obtained through analysis of all relevant historical and current process data. The intelligent energy KPI system can determine proper KPI targets to reflect current plant operations and monitor/detect any energy KPI violations…the system can include a representation of a hierarchical structure or portion of a hierarchical structure to assist a user. For example, any equipment KPI violation can be highlighted by orange or red status color, whereas good energy performance can be indicated by green or yellow status color).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kennedy/Hanson/O’Hara/Wu with the teachings of Lee in order to have the second instances of the process be current instances of the process, and the insight output identify one or more particular instances of the process, from among the current instances of the process, that are associated with instances of the second values that are currently inside the target zones. The motivation for this modification is to incorporate the teachings of detecting and analyzing KPI violations. Furthermore, in addition to being in the same CPC class, the teachings, suggestions, and motivations in this prior art would have led one of ordinary skill to modify or combine the prior art references to arrive at the claimed invention.
Furthermore, Kennedy/Hanson/O’Hara/Wu does not explicitly disclose the limitation of Claim 8 which states that the insight output identifies one or more particular instances of the process, from among the current instances of the process, that are associated with instances of the second values that are projected to move outside the target zones within a future period of time.
Balakrishnan, in combination with Kennedy/Hanson/O’Hara/Wu/Lee, teaches
insight output identifies one or more particular instances of the process, from among the current instances of the process, that are associated with instances of the second values that are projected to move outside the target zones within a future period of time (Balakrishnan: Para 0041 via data analytics are used to determine customer satisfaction by using measurable metrics. Metrics include, but are not limited to, meal rate, meal item consumption, amount of leftover meal, amount of leftover meal item, customer reaction upon initial consumption, and customer reaction upon consumption of one or more meal items. Metrics may also involve a time taken to eat or consume each course, time taken to flag a server, customer looking for a server, time taken to find the server, tips received by each server, and various mood evolution analytics for assessing human emotional behavior. The human emotional behavior may include heart rate and skin temperature of the customers, in addition to facial expressions, gestures, mood, etc. The metrics allow for the prediction of variations indicating different satisfaction levels for customers/individuals visiting a food or eating facility, such as a restaurant. Each of the metrics may be associated with a threshold set by the restaurant. If the variations exceed one or more thresholds, then the restaurant may dynamically refine one or more variables/parameters in real-time).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kennedy/Hanson/O’Hara/Wu/Lee with the teachings of Balakrishnan in order to have the insight output identify one or more particular instances of the process, from among the current instances of the process, that are associated with instances of the second values that are projected to move outside the target zones within a future period of time. The motivation for this modification is to incorporate the teachings of predicting variations in measurable metrics. Furthermore, in addition to being in the same CPC class, the teachings, suggestions, and motivations in this prior art would have led one of ordinary skill to modify or combine the prior art references to arrive at the claimed invention.
Claims 10 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kennedy (US 2020/0311611 A1) in view of Hanson et al. (US 8,311,856 B1), in view of O'Hara et al. (US 2022/0083905 A1), and further in view of Gvildys et al. (US 2021/0174288 A1).
Regarding Claim 10, while the combination of Kennedy/Hanson/O’Hara teaches the limitations of Claim 9, it does not explicitly disclose the limitations of Claim 10 which state that the process data includes worker data associated with workers that performed the instances of the process, and the combination of the predictive data features includes at least one predictive data feature associated with the worker data.
Gvildys, in combination with Kennedy/Hanson/O’Hara, teaches
the process data includes worker data associated with workers that performed the instances of the process (Gvildys: Para 0040 via the scorer module 114 gathers performance scores of the selected agents from the data storage device), and
the combination of the predictive data features includes at least one predictive data feature associated with the worker data (Gvildys: Para 0050 via the hiring model 200 takes the scores of the various attributes 202 detected for the candidate agent, and generates a predicted performance score 204 for the candidate agent 102. In one embodiment, the predicted performance score is for predicting performance of the candidate agent in meeting particular metrics of the contact center. Such metrics may relate to, for example, call transfers, repeat calls, number of interactions handled, enthusiasm, teamwork, and the like).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kennedy/Hanson/O’Hara with the teachings of Gvildys in order to have the process data include worker data associated with workers that performed the instances of the process, and the combination of the predictive data features include at least one predictive data feature associated with the worker data. The motivation for this modification is to incorporate the teachings of predicting performance of candidate contact center agents. Furthermore, in addition to being in the same CPC class, the teachings, suggestions, and motivations in this prior art would have led one of ordinary skill to modify or combine the prior art references to arrive at the claimed invention.
Regarding Claim 15, while the combination of Kennedy/Hanson/O’Hara teaches the limitations of Claim 11, it does not explicitly disclose the limitations of Claim 15 which state that the training data set is generated by: obtaining one or more data types and the outcome scores from a plurality of disparate data sources; identifying data elements, within the one or more data types and the outcome scores, that are associated with same instances of the process; and linking the data elements, in the training data set, that are associated with the same instances of the process.
Gvildys, in combination with Kennedy/Hanson/O’Hara, teaches
obtaining one or more data types and the outcome scores from a plurality of disparate data sources (Gvildys: Para 0032, 0035 via the data for training the model is provided by a performance monitoring module 116. Functionality of the performance monitoring module is described in detail in U.S. Pat. No. 8,589,215, the content of which is incorporated herein by reference. In general terms, the performance monitoring module 116 monitors agent performance in meeting certain contact center metrics, and determines objective performance measurements based on the monitoring. Such objective performance measurements may include, for example, a number of interactions that have been transferred to another agent per month, customer survey scores, number of repeat calls per month, and the like. In addition to objective performance measurements, the contact center may also consider certain subjective factors that may be important to the contact center, such as for example, enthusiasm, selling skills, teamwork, and the like. Scores for the subjective factors may be given, for example, by a supervisor who may evaluate the subjective factors after analyzing one or more interactions of the agent…The model may be, for example, a statistical model that is trained based on training data provided to the model. In one embodiment, the training data includes input features taking the form of attributes 202a-202c (collectively referenced as 202) of agents when handling, for example, a simulated call. Such attributes may include, without limitation, emotional feature scores 202a, adherence scores 202b, and clarity scores 202c. The input features are mapped/correlated to particular target values. In one embodiment, the target values are agent performance scores 204 provided by the performance monitoring module 116);
identifying data elements, within the one or more data types and the outcome scores, that are associated with same instances of the process (Gvildys: Para 0035 via The model may be, for example, a statistical model that is trained based on training data provided to the model. In one embodiment, the training data includes input features taking the form of attributes 202a-202c (collectively referenced as 202) of agents when handling, for example, a simulated call. Such attributes may include, without limitation, emotional feature scores 202a, adherence scores 202b, and clarity scores 202c. The input features are mapped/correlated to particular target values. In one embodiment, the target values are agent performance scores 204 provided by the performance monitoring module 116); and
linking the data elements, in the training data set, that are associated with the same instances of the process (Gvildys: Para 0035 via The model may be, for example, a statistical model that is trained based on training data provided to the model. In one embodiment, the training data includes input features taking the form of attributes 202a-202c (collectively referenced as 202) of agents when handling, for example, a simulated call. Such attributes may include, without limitation, emotional feature scores 202a, adherence scores 202b, and clarity scores 202c. The input features are mapped/correlated to particular target values. In one embodiment, the target values are agent performance scores 204 provided by the performance monitoring module 116).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kennedy/Hanson/O’Hara with the teachings of Gvildys in order to have the training data set generated by: obtaining one or more data types and the outcome scores from a plurality of disparate data sources; identifying data elements, within the one or more data types and the outcome scores, that are associated with same instances of the process; and linking the data elements, in the training data set, that are associated with the same instances of the process. The motivation for this modification is to incorporate the teachings of predicting performance of candidate contact center agents. Furthermore, in addition to being in the same CPC class, the teachings, suggestions, and motivations in this prior art would have led one of ordinary skill to modify or combine the prior art references to arrive at the claimed invention.
Claims 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Kannan et al. (US 8,396,741 B2) in view of Kennedy (US 2020/0311611 A1), in view of O'Hara et al. (US 2022/0083905 A1), and further in view of Wu et al. (US 2020/0151746 A1).
Regarding Claim 16, Kannan teaches the limitations which state
generate a training data set that comprises: first process data associated with performance of historical instances of a process; and outcome scores associated with the historical instances of the process (Kannan: Col 4 lines 54-61 via In some embodiments of the invention, the agent profile repository 11 stores agent performance data including call satisfaction (CSAT) scores, agent performance scores (APS), and productivity metrics. Although specific types of stored data are set forth explicitly, it will be readily apparent to those with ordinary skill in the art, having the benefit of this disclosure, that the agent profile repository 11 can store a wide variety of other agent history, demographics, and the like).
However, Kannan does not explicitly disclose the limitation of Claim 16 which states: train a machine learning model, based on the training data set, to identify predictive data features, indicated by the first process data, that are predictive of the outcome scores.
Kennedy, in combination with Kannan, teaches
train a machine learning model, based on the training data set, to identify predictive data features, indicated by the first process data, that are predictive of the outcome scores (Kennedy: Para 0029, 0043 via Machine learning tool 34 may be initialized using a labeled data set (referred to as “training”). As used herein, a “labeled” data set is a set of data elements with associated labels (e.g. as metadata) identifying a value of the target variable for each data element. For example, if the target is a binary value indicating whether a data element belongs to a certain group, the labeled data set includes data identifying which data elements belong to the group. The labeled data set may be stored in memory 24 and may have a number of features associated therewith. As will be explained further below, feature definition tool 32 may automatically generate additional features for the labeled data set stored in memory 24 to create a pool of candidate features from the pool. Such features may be referred to hereinafter as “synthetic features”. Feature definition tool 32, may automatically select the most predictive features, e.g. the features most strongly correlated to the labels, and provide those selected features to machine learning tool 34. The selected features include zero or more base features and zero or more synthetic features. If the selected features include synthetic features, the machine learning tool 34 may identify relationships between the synthetic features of the labeled data set and the target variable of the tool. Once trained, machine learning tool 34 may take as input a new data element and output a predicted outcome of the target variable…Feature scoring module 53 is configured to determine feature scores for candidate features. The feature score of a feature may be representative of the degree of correlation between that feature and the target of machine learning tool 34. The degree of correlation between a feature and the target may be reflective of how predictive the feature is of the target, when used by machine learning tool 34. For example, if machine learning tool 34 is to identify data elements belonging to several categories, the score of a given candidate feature may give an approximate indication of how strongly predictive the candidate feature is of whether the data element belongs to a particular category).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kannan with the teachings of Kennedy in order to train a machine learning model, based on the training data set, to identify predictive data features, indicated by the first process data, that are predictive of the outcome scores. The motivation for this modification is to incorporate the teachings of automatic generation of features for training models using machine learning. Furthermore, in addition to being in the same CPC class, the teachings, suggestions, and motivations in this prior art would have led one of ordinary skill to modify or combine the prior art references to arrive at the claimed invention.
Furthermore, Kannan does not explicitly disclose the limitation of Claim 16 which states: determine target zones associated with the predictive data features, wherein the target zones indicate values of the predictive data features that are associated with a target range of the outcome scores.
O’Hara, in combination with Kannan/Kennedy, teaches
determine target zones associated with the predictive data features, wherein the target zones indicate values of the predictive data features that are associated with a target range of the outcome scores (O’Hara: Para 0041, 0049 via Returning to process 200, a model region is assigned to each of the input training records at S220 based on the trained predictive model, the input training records and corresponding predicted values. Assignment of the model regions at S220 first requires determination of the model regions. Next, at S230, a classification model is trained based on the assigned model regions to predict a model region. An example implementation of S220 and S230 will be described below with respect to FIGS. 4-10…In some embodiments, each of the plurality of bins is associated with an exclusive range of target values. At S420, all feature contribution records associated with a target value falling within a range associated with a bin are assigned to that bin…).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kannan/Kennedy with the teachings of O’Hara in order to determine target zones associated with the predictive data features, wherein the target zones indicate values of the predictive data features that are associated with a target range of the outcome scores. The motivation for this modification is to incorporate the teachings of training a machine learning model based on sets of training data, each of which is associated with a target output. Furthermore, in addition to being in the same CPC class, the teachings, suggestions, and motivations in this prior art would have led one of ordinary skill to modify the prior art references or combine their teachings to arrive at the claimed invention.
Furthermore, Kannan does not explicitly disclose the limitations of Claim 16 which state: identify instances of the predictive data features within second process data associated with performance of second instances of the process; determine whether the instances of the predictive data features are associated with second values that are within the target zones; and generate insight output based on determining whether the instances of the predictive data features are associated with the second values that are within the target zones.
Wu, however, in combination with the teachings of Kannan/Kennedy/O’Hara, teaches:
identify instances of the predictive data features within second process data associated with performance of second instances of the process; determine whether the instances of the predictive data features are associated with second values that are within the target zones; and generate insight output based on determining whether the instances of the predictive data features are associated with the second values that are within the target zones (Wu: Para 0029, 0055 via As described herein, the KPI analytics system generates actionable KPI-driven customer segments. At a high-level, to generate KPI-driven customer segments, the KPI analytics system builds a propensity model for a KPI of interest to generate predicted outcomes for customers that reflect the likelihood that each customer will reach/perform a particular outcome related to the KPI of interest. The propensity model can be generated using historical user behavior data and/or user attributes correlated with known outcomes. Combining user-level behavior features (e.g., product use frequency, product use recency, product variety, etc.) and user attributes (e.g., age of subscription, country, skill level, etc.) along with known outcomes can be used to train the propensity model (e.g., using machine learning). When applied to existing customers, the propensity model generates a predicted outcome for each customer indicative of a likelihood of the outcome of interest for which the model was trained… The propensity model can be updated periodically based on accuracy. For instance, the trained propensity model can be used to determine the probability of an outcome for an existing customer. After the designated timeframe for the determined probability has passed, an actual outcome for the existing customer is determined and the predicted outcome is compared with the actual outcome. 
When the predicted outcome does not match the actual outcome, the model can be updated with additional training data (e.g., when a predefined threshold level of outcomes are not correct)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kannan/Kennedy/O’Hara with the teachings of Wu in order to identify instances of the predictive data features within second process data associated with performance of second instances of the process; determine whether the instances of the predictive data features are associated with second values that are within the target zones; and generate insight output based on determining whether the instances of the predictive data features are associated with the second values that are within the target zones. The motivation for this modification is to incorporate the teachings of collecting and analyzing data and converting data of various formats into a format that a model can process. Furthermore, in addition to being in the same CPC class, the teachings, suggestions, and motivations in this prior art would have led one of ordinary skill to modify the prior art references or combine their teachings to arrive at the claimed invention.
Regarding Claim 17, Kannan/Kennedy/O’Hara/Wu teaches the limitations of Claim 17 which state
wherein the first process data includes worker data associated with workers that performed the historical instances of the process (Kannan: Col 4 lines 54-61 via In some embodiments of the invention, the agent profile repository 11 stores agent performance data including call satisfaction (CSAT) scores, agent performance scores (APS), and productivity metrics. Although specific types of stored data are set forth explicitly, it will be readily apparent to those with ordinary skill in the art, having the benefit of this disclosure, that the agent profile repository 11 can store a wide variety of other agent history, demographics, and the like).
Regarding Claim 18, Kannan/Kennedy/O’Hara/Wu teaches the limitations of Claim 18 which state
wherein the first process data further includes at least one of: operational data associated with the performance of the historical instances of the process, or customer data associated with customers associated with the historical instances of the process (Kannan: Col 4 lines 54-61 via In some embodiments of the invention, the agent profile repository 11 stores agent performance data including call satisfaction (CSAT) scores, agent performance scores (APS), and productivity metrics. Although specific types of stored data are set forth explicitly, it will be readily apparent to those with ordinary skill in the art, having the benefit of this disclosure, that the agent profile repository 11 can store a wide variety of other agent history, demographics, and the like).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Kannan et al. (US 8,396,741 B2) in view of Kennedy (US 2020/0311611 A1), in view of O'Hara et al. (US 2022/0083905 A1), in view of Wu et al. (US 2020/0151746 A1), and further in view of Gvildys et al. (US 2021/0174288 A1).
Regarding Claim 19, while the combination of Kannan/Kennedy/O’Hara/Wu teaches the limitations of Claim 16, it does not explicitly disclose the limitations of Claim 19 which state wherein the training data set is generated by: obtaining one or more data types and the outcome scores from a plurality of disparate data sources; identifying data elements, within the one or more data types and the outcome scores, that are associated with same instances of the process; and linking the data elements, in the training data set, that are associated with the same instances of the process.
Gvildys, however, in combination with the teachings of Kannan/Kennedy/O’Hara/Wu, teaches:
wherein the training data set is generated by: obtaining one or more data types and the outcome scores from a plurality of disparate data sources; identifying data elements, within the one or more data types and the outcome scores, that are associated with same instances of the process; and linking the data elements, in the training data set, that are associated with the same instances of the process (Gvildys: Para 0032, 0035 via the data for training the model is provided by a performance monitoring module 116. Functionality of the performance monitoring module is described in detail in U.S. Pat. No. 8,589,215, the content of which is incorporated herein by reference. In general terms, the performance monitoring module 116 monitors agent performance in meeting certain contact center metrics, and determines objective performance measurements based on the monitoring. Such objective performance measurements may include, for example, a number of interactions that have been transferred to another agent per month, customer survey scores, number of repeat calls per month, and the like. In addition to objective performance measurements, the contact center may also consider certain subjective factors that may be important to the contact center, such as for example, enthusiasm, selling skills, teamwork, and the like. Scores for the subjective factors may be given, for example, by a supervisor who may evaluate the subjective factors after analyzing one or more interactions of the agent…The model may be, for example, a statistical model that is trained based on training data provided to the model. In one embodiment, the training data includes input features taking the form of attributes 202a-202c (collectively referenced as 202) of agents when handling, for example, a simulated call. Such attributes may include, without limitation, emotional feature scores 202a, adherence scores 202b, and clarity scores 202c. 
The input features are mapped/correlated to particular target values. In one embodiment, the target values are agent performance scores 204 provided by the performance monitoring module 116).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kannan/Kennedy/O’Hara/Wu with the teachings of Gvildys in order to generate the training data set by: obtaining one or more data types and the outcome scores from a plurality of disparate data sources; identifying data elements, within the one or more data types and the outcome scores, that are associated with same instances of the process; and linking the data elements, in the training data set, that are associated with the same instances of the process. The motivation for this modification is to incorporate the teachings of predicting performance of candidate contact center agents. Furthermore, in addition to being in the same CPC class, the teachings, suggestions, and motivations in this prior art would have led one of ordinary skill to modify the prior art references or combine their teachings to arrive at the claimed invention.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Kannan et al. (US 8,396,741 B2) in view of Kennedy (US 2020/0311611 A1), in view of O'Hara et al. (US 2022/0083905 A1), in view of Wu et al. (US 2020/0151746 A1), and further in view of Lee (US 2016/0171414 A1).
Regarding Claim 20, while the combination of Kannan/Kennedy/O’Hara/Wu teaches the limitations of Claim 16, it does not explicitly disclose the limitations of Claim 20 which state wherein the second instances of the process comprise current instances of the process.
Lee, however, in combination with the teachings of Kannan/Kennedy/O’Hara/Wu, teaches:
wherein the second instances of the process comprise current instances of the process (Lee: Para 0026, 0048 via At a high level, this disclosure is drawn to a method for generating an intelligent energy KPI system based on modular engineering. The disclosure provides an example of a systematic way to structure a large industrial complex hierarchically, a method to monitor overall energy performance using a few KPIs from the highest level, to transform raw process data to operational intelligence at the lowest level, and to integrate the interconnected information throughout all hierarchical levels. For example, operational intelligence can be obtained through analysis of all relevant historical and current process data. The intelligent energy KPI system can determine proper KPI targets to reflect current plant operations and monitor/detect any energy KPI violations…the system can include a representation of a hierarchical structure or portion of a hierarchical structure to assist a user. For example, any equipment KPI violation can be highlighted by orange or red status color, whereas good energy performance can be indicated by green or yellow status color).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kannan/Kennedy/O’Hara/Wu with the teachings of Lee in order to provide that the second instances of the process comprise current instances of the process. The motivation for this modification is to incorporate the teachings of detecting and analyzing KPI violations. Furthermore, in addition to being in the same CPC class, the teachings, suggestions, and motivations in this prior art would have led one of ordinary skill to modify the prior art references or combine their teachings to arrive at the claimed invention.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TYRONE E SINGLETARY whose telephone number is (571)272-1684. The examiner can normally be reached from 9:00 to 5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Beth Boswell can be reached at 571-272-6737. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.E.S./Examiner, Art Unit 3625
/BETH V BOSWELL/Supervisory Patent Examiner, Art Unit 3625