Prosecution Insights
Last updated: April 19, 2026
Application No. 18/092,345

COMPUTERIZED-METHOD AND COMPUTERIZED-SYSTEM FOR GENERATING A COACHING SESSION HAVING COACHING CONTENT RELATED TO CUSTOMER EXPERIENCE, IN A WEB APPLICATION FOR MANAGING COACHING SESSIONS

Non-Final OA: §101, §103
Filed: Jan 02, 2023
Examiner: BOSWELL, BETH V
Art Unit: 3625
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Nice Ltd.
OA Round: 5 (Non-Final)
Grant Probability: 8% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 5y 0m
Grant Probability with Interview: 7%

Examiner Intelligence

Career Allow Rate: 8% (9 granted / 112 resolved; -44.0% vs TC avg). Grants only 8% of cases.
Interview Lift: minimal, about -0.7% in resolved cases with interview.
Avg Prosecution: 5y 0m typical timeline; 14 applications currently pending.
Total Applications: 126 across all art units (career history).

Statute-Specific Performance

§101: 34.4% (-5.6% vs TC avg)
§103: 38.4% (-1.6% vs TC avg)
§102: 10.6% (-29.4% vs TC avg)
§112: 11.5% (-28.5% vs TC avg)
Based on career data from 112 resolved cases; TC average shown for comparison is an estimate.

Office Action

Grounds of Rejection: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Status of the Claims

Claims 2, 6, 9 and 10 are cancelled. Claims 1, 3 and 11 are amended. Claims 1, 3-5, 7-8 and 11 are pending. Claims 1, 3-5, 7-8 and 11 are rejected. This is a Non-Final Office Action in response to a Request for Continued Examination, amendments and remarks filed on October 10, 2025.

Response to Arguments

Applicant's arguments filed October 10, 2025 have been fully considered but are not persuasive.

Regarding the §101 rejections, the examiner does not find the arguments persuasive, and the rejections are maintained for the following reasons. The applicant asserts that the additional claim elements provide meaningful limitations that transform the abstract idea into a practical application, and further asserts that the result of the following process steps integrates the abstract idea into a practical application: "automatically scheduling the generated coaching session with the second one or more groups of coaching content" and "operating the schedule the generated coaching session for the agent". The examiner respectfully disagrees with the argument that these process steps are not an abstract idea. These process steps recite the abstract idea of certain methods of organizing human activity (see MPEP § 2106.04(a)(2), subsection II), specifically managing interactions between people and business relations. The recited process steps fall under the responsibilities of the person managing the call center: it is common in most industries for managers to determine training and coaching opportunities for their employees based on their performance.
Further, for the analysis under Step 2A Prong 2, the applicant contends that "determining second one or more groups of coaching content to maximize the CFRS and presenting the second one or more groups via the GUI and an increase in calculated CFRS to reach a maximum CFRS to improve CX" is applied so as to effect a change in the system that renders it patentable. The examiner respectfully disagrees, because determining second one or more groups is a mental process that merely requires a computer (see MPEP 2106.04(a)(2)(III)) and thus cannot integrate the invention into a practical application. Specifically, these claim elements cannot be considered additional elements that integrate the abstract idea into a practical application, because they themselves are the abstract idea, as stated above. Thus, these claim elements are directed to an abstract idea and do not integrate it into a practical application. Accordingly, the §101 rejections are maintained, including the rejections of the dependent claims (the remaining non-canceled claims, Claims 3-5, 7-8 and 11); please see below for the complete rejections of the claims as amended.

Regarding the §103 rejections, the examiner does not find the arguments persuasive, and the rejections are maintained for the following reasons. The applicant asserts that "Tamblyn doesn't teach or suggest automatic scheduling of the generated coaching session with the coaching content with a number of associated interactions". In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
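Computationally, the disputed "determining second one or more groups of coaching content to maximize the CFRS" step reduces to an argmax over candidate content groupings. A minimal illustrative sketch, with hypothetical function names and toy scores not drawn from the application:

```python
def pick_groups(candidate_groups, cfrs_of):
    # Choose the candidate grouping of coaching content whose
    # (hypothetical) CFRS evaluation is highest - a plain argmax.
    return max(candidate_groups, key=cfrs_of)

# Toy candidate groupings and assumed CFRS values for illustration.
candidates = [("focus_area",), ("focus_area", "behavior"), ("behavior",)]
scores = {("focus_area",): 0.4, ("focus_area", "behavior"): 0.9, ("behavior",): 0.6}
best = pick_groups(candidates, scores.get)
```

The sketch shows why the examiner treats the step as performable in the mind: selecting the highest-scoring option is evaluation, regardless of the scoring formula behind it.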
Additionally, the examiner does not find this argument persuasive because Miller teaches automatic scheduling of coaching sessions; the examiner directs the applicant to ¶0116, where Miller teaches that additional training sessions are automatically scheduled for agents when the system identifies problems with the agent's interaction with the customer. With regard to net promoter score, Miller references the use of customer feedback measured with NPS or survey data to extract interaction features (see ¶0087). For these reasons, the examiner finds the argument unpersuasive.

The applicant asserts that "Tamblyn doesn't teach or suggest a dynamically generated Customer Feedback Relevancy Score (CFRS)". The examiner interpreted the claim limitation "calculating a CFRS and presenting the calculated CFRS via the GUI" as using algorithms to derive the CFRS. Neither the claims nor the specification recites the word "dynamically" with regard to calculating a score; the applicant is thus arguing limitations which are not claimed, see MPEP 2145(VI). Therefore, the examiner interpreted the claim to simply calculate the CFRS using a mathematical formula and referenced ¶0059 of Tamblyn to teach this claim limitation. Additionally, Tamblyn does teach real-time analysis by the analytics module, which may be analyzed to determine the quality of coaching (see Tamblyn ¶0140).

Further, the applicant asserts that Tamblyn "does not disclose any structured method for calculating relevancy scores or optimizing coaching sessions based on customer feedback variance" as well as "doesn't integrate into a coaching session generation framework". The examiner interpreted Claim 1 (a)(ii) as determining coaching content to maximize the CFRS.
The examiner respectfully disagrees and finds the applicant's argument unpersuasive because the Miller reference teaches that coaching sessions are automatically assigned based on one or more automatically computed agent performance evaluation scores for a more rapid correction of agent behavior, see ¶0066. Further, the applicant asserts that "The automated scheduling, and GUI-based interaction distinguishes the current application as a coaching management system, which is neither taught or suggested by Tamblyn nor Miller." The examiner respectfully disagrees because automated scheduling is taught by Miller (see ¶0116 and ¶0121) and a GUI is taught by Tamblyn (see ¶0148). Accordingly, the §103 rejections are maintained, including the rejections of the dependent claims (the remaining non-canceled claims, Claims 3-5, 7-8 and 11); please see below for the complete rejections of the claims as amended.

In response to arguments regarding any dependent claims that have not been individually addressed, all rejections of these dependent claims are maintained due to the applicant's failure to distinctly and specifically point out the supposed errors in the examiner's prior Office Action (37 CFR 1.111). The examiner notes that the applicant argues only that the dependent claims should be allowable because the independent claims are unobvious and patentable over the prior art.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-5, 7-8 and 11 are rejected under 35 U.S.C. 101 because the claimed invention recites a judicial exception, in this case an abstract idea. A detailed three-pronged analysis to establish subject matter eligibility (MPEP 2106(III)) is listed below.
Claim 1 – Independent Claim

Step 1 – Statutory Categories – Claim 1 is classified as a method, which is a process.

Step 2A – Judicial Exceptions

At Step 2A Prong One, the claim is evaluated to determine whether it recites a judicial exception (see MPEP 2106.04(a)(2), specifically subsections (II) and (III)). In this case, the claim recites abstract ideas, since the limitations listed below are identified as certain methods of organizing human activity, specifically activities that would fall under the core responsibilities of a call center administrator:

- "managing coaching sessions"
- "…accepts coaching content having one or more interrelated groups to create a coaching session, based on received user selection of coaching content having first one or more groups"
- "calculating a CFRS and presenting the calculated CFRS"
- "determining second one or more groups of coaching content"
- "extracting parameters"
- "counting occurrences of each category"
- "detecting a preconfigured number of categories"
- "sorting interactions"
- "extracting the preconfigured number of associated interactions"
- "automatically scheduling the generated coaching session with the second one or more groups of coaching content"
- "adding a preconfigured number of associated interactions based on interactions Net promoter score (NPS)"
- "operating the schedule for the agent"

These limitations recite certain methods of organizing human activity. Their scope incorporates general tasks that a call center manager would perform on a regular basis, such as coaching, determining groups of coaching content, scheduling coaching content and adding interactions based on NPS. Such tasks generally encompass teaching and following instructions, which is defined as management of personal behavior or relationships or interactions between people (including teaching, and following rules or instructions), as outlined in MPEP § 2106.04(a)(2), subsection II.
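Taken together, the quoted limitations describe a short data-processing sequence: extract parameters from feedback, count category occurrences, sort interactions by NPS, and extract a preconfigured number of low-NPS interactions. A minimal illustrative sketch of that sequence (all names and data are hypothetical, not from the specification):

```python
from collections import Counter

def build_coaching_session(feedbacks, threshold, n_interactions):
    # "extracting parameters" - each feedback carries a category,
    # an NPS value and an associated interaction identifier.
    params = [(f["category"], f["nps"], f["interaction_id"]) for f in feedbacks]

    # "counting occurrences of each category"
    counts = Counter(cat for cat, _, _ in params)

    # "sorting interactions ... based on NPS score" (ascending)
    by_nps = sorted(params, key=lambda p: p[1])

    # "extracting the preconfigured number of associated interactions
    # which have an NPS score below a preconfigured threshold"
    picked = [p[2] for p in by_nps if p[1] < threshold][:n_interactions]
    return counts, picked

feedbacks = [
    {"category": "billing", "nps": 2, "interaction_id": "i1"},
    {"category": "billing", "nps": 9, "interaction_id": "i2"},
    {"category": "support", "nps": 4, "interaction_id": "i3"},
]
counts, picked = build_coaching_session(feedbacks, threshold=7, n_interactions=2)
```

The sketch illustrates the examiner's point that each step, in isolation, is generic data handling of the kind a manager could perform manually.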
At Step 2A Prong Two, the additional elements consist of the following:

- "a web application for managing coaching sessions"
- "a graphical User Interface (GUI)"
- "a Customer Feedback Relevancy Score (CFRS) module"
- "presenting the second one or more groups via the GUI and an increase in calculated CFRS"
- "retrieving customer-feedback related data"
- "retrieving mapping between categories and the first one or more groups"
- "operating a k-Nearest neighbors algorithm"

The recited additional elements (the web application, GUI and CFRS module) fail to add significantly more beyond generally linking computers and functions, whether individually or in combination. First, the utilization of a web application to manage coaching sessions fails to integrate the abstract idea of managing coaching sessions into a practical application, because the web application does no more than employ generic computer functions, see MPEP 2106.05(f). Second, "a graphical user interface (GUI)" fails to integrate the abstract ideas of presenting customer experience data groups and the increase in CFRS into a practical application, because the GUI does no more than employ a generic computer function to display this information. Third, the utilization of a Customer Feedback Relevancy Score (CFRS) module fails to integrate the abstract idea of calculating and maximizing the CFRS into a practical application, because it does no more than employ generic computer functions. Alternatively, the CFRS module additional element amounts to no more than a recitation of the words "apply it", or mere instructions to apply the abstract idea of "calculating a CFRS and presenting the calculated CFRS", as outlined in MPEP § 2106.05(f). In conclusion, the additional elements fail to impose a meaningful limit on the judicial exception and do not integrate it into a practical application.
Step 2B

This part of the eligibility analysis evaluates whether the claim, as a whole, amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. The analysis under Step 2A, Prong Two is carried through to Step 2B. Under Step 2B of the patent eligibility analysis, the combination of additional elements is evaluated to determine whether it amounts to something "significantly more" than the recited abstract idea (i.e., an inventive concept). The independent claim does not amount to significantly more than the judicial exception, given that mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Claim 1 is not patent eligible.

Dependent Claims

At Step 2A Prong 1, the dependent claims (the remaining non-canceled claims, Claims 3-5, 7-8 and 11) are directed to an abstract idea, since the limitations recite certain methods of organizing human activity (see MPEP 2106.04(a)) and merely use a computer as a tool to perform an abstract idea, augmenting existing coaching processes with generic computing elements. These claims recite nothing significantly more than extra-solution activity that would warrant eligibility.

Claim 3 recites a mathematical concept, specifically mathematical calculations to normalize data prior to calculating the CFRS. The examiner finds that the combination of two abstract ideas does not render the idea non-abstract, see MPEP 2106.04(I), discussing Recognicorp, LLC v. Nintendo Co., Ltd., 855 F.3d 1322, 1327 (stating that combining "one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract").
The additional elements of the claim fail to recite details of how a solution to a problem is accomplished; the claim invokes computers or other machinery merely as a tool to perform an existing process; and the judicial exception is applied only at a high level of generality. See MPEP 2106.05(f). Claim 3 is rejected as abstract and not reflecting a practical application.

Claim 4 recites a mathematical concept using an algorithm. The examiner again finds that the combination of two abstract ideas does not render the idea non-abstract, see MPEP 2106.04(I), discussing Recognicorp, LLC v. Nintendo Co., Ltd., 855 F.3d 1322, 1327. The additional elements of the claim fail to recite details of how a solution to a problem is accomplished; the claim invokes computers merely as a tool to perform an existing process; and the judicial exception is applied only at a high level of generality. See MPEP 2106.05(f). Claim 4 is rejected as abstract and not reflecting a practical application.

Claims 5, 7, 8 and 11 each recite a mental process, specifically a concept that could be performed in the human mind (evaluation). For each of these claims, the additional elements fail to recite details of how a solution to a problem is accomplished; the claim invokes computers merely as a tool to perform an existing process; and the judicial exception is applied only at a high level of generality. See MPEP 2106.05(f). Claims 5, 7, 8 and 11 are rejected as abstract and not reflecting a practical application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-5, 7-8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Tamblyn et al., US Pub. No. US 2015/0348163 A1 ("Tamblyn"), and Miller et al., US Pub. No. US 2018/0091654 A1 ("Miller").

Regarding Independent Claim 1, Tamblyn teaches the following limitations:

…in a system that is running the web application: Tamblyn teaches ("The data communication links 140, 142 may be wired and/or wireless links traversing a data communication network such as, for example, a local area network, private wide area network, and/or public wide area network such as the Internet.")(¶0056)

for managing coaching sessions: Tamblyn teaches ("According to one embodiment, CX data gathered across various tenants may be used to detect trends and/or themes across different customer types and/or external events. In this regard, the analysis module may determine that a particular type of CX data and/or external event has high or low statistical correlation to certain outcomes. Such determination may be made after observing, either automatically or semi-automatically, how closely the CX data and/or external event tracks with certain outcomes over time and over various tenants. The analysis module may make recommendations based on the deduced correlations.
The recommendations may relate to contact center routing, scheduling of agents, cross-sell/upsell efforts, escalation options, coaching scripts, and/or the like. A feedback loop back to the hub server 100 with actual outcome data allows the server to learn and improve based on the actual outcomes from the recommendations.")(¶0063)

said web application for managing coaching sessions accepts coaching content having one or more interrelated groups to create a coaching session: Tamblyn teaches ("… the analysis module may determine that a particular type of CX data and/or external event has high or low statistical correlation to certain outcomes. Such determination may be made after observing, either automatically or semi-automatically, how closely the CX data and/or external event tracks with certain outcomes over time and over various tenants. The analysis module may make recommendations based on the deduced correlations. The recommendations may relate to contact center routing, scheduling of agents, cross-sell/upsell efforts, escalation options, coaching scripts, and/or the like.")(¶0063)

receiving user selection of coaching content having first one or more groups, via a graphical User Interface (GUI), for an agent, operating a Customer Feedback Relevancy Score (CFRS) module, said CFRS module: Tamblyn teaches ("According to one embodiment, a regression decision tree is modeled for each KPI 200. After a decision tree is learned based on current values, changes may be made to one or more input variables 202 to observe how the values affect a particular KPI. Such values may be displayed on a graphical user interface, such as, for example, the dashboard described above.
According to one embodiment, a manager viewing the results of the simulation may transmit a command, via the end user device, to the appropriate contact center system 120, to effectuate the change.")(¶0106)

calculating a CFRS: Tamblyn teaches ("The hub server 100 is configured with one or more modules including, for example, an analytics module 204 configured to analyze and aggregate the data into, for example, statistical tables and/or objects (referred to as a CX object) stored in a data storage device. The data storage device may take the form of a hard drive or disk array conventional in the art. According to one embodiment, different objects may be created and published for different KPIs that drive contact center objectives. For example, an object may be created for one or more contact center statistics (e.g., average call length, average queue length, escalations, etc.), customer experience and sentiment, business value and classification, upsell/cross-sell attempts, and the like.")(¶0059) Under the broadest reasonable interpretation, the term "CFRS" in the claim limitations is interpreted as equivalent to the term "CX objects" in the Tamblyn teaching above.

and presenting the calculated CFRS via the GUI: Tamblyn teaches ("According to one embodiment, a regression decision tree is modeled for each KPI 200. After a decision tree is learned based on current values, changes may be made to one or more input variables 202 to observe how the values affect a particular KPI. Such values may be displayed on a graphical user interface, such as, for example, the dashboard described above.
According to one embodiment, a manager viewing the results of the simulation may transmit a command, via the end user device, to the appropriate contact center system 120, to effectuate the change.")(¶0106)

wherein the CFRS indicates relevancy of user selection of coaching content having the first one or more groups to customer experience (CX): Tamblyn teaches ("According to one embodiment, CX data gathered across various tenants may be used to detect trends and/or themes across different customer types and/or external events. In this regard, the analysis module may determine that a particular type of CX data and/or external event has high or low statistical correlation to certain outcomes. Such determination may be made after observing, either automatically or semi-automatically, how closely the CX data and/or external event tracks with certain outcomes over time and over various tenants. The analysis module may make recommendations based on the deduced correlations. The recommendations may relate to contact center routing, scheduling of agents, cross-sell/upsell efforts, escalation options, coaching scripts, and/or the like. A feedback loop back to the hub server 100 with actual outcome data allows the server to learn and improve based on the actual outcomes from the recommendations.")(¶0063)

determining second one or more groups of coaching content to maximize the CFRS and presenting the second one or more groups via the GUI and an increase in calculated CFRS to reach a maximum CFRS to improve CX: Tamblyn teaches ("According to one embodiment, CX data gathered across various tenants may be used to detect trends and/or themes across different customer types and/or external events. In this regard, the analysis module may determine that a particular type of CX data and/or external event has high or low statistical correlation to certain outcomes.
Such determination may be made after observing, either automatically or semi-automatically, how closely the CX data and/or external event tracks with certain outcomes over time and over various tenants. The analysis module may make recommendations based on the deduced correlations. The recommendations may relate to contact center routing, scheduling of agents, cross-sell/upsell efforts, escalation options, coaching scripts, and/or the like. A feedback loop back to the hub server 100 with actual outcome data allows the server to learn and improve based on the actual outcomes from the recommendations.")(¶0063) Also refer to ("According to one embodiment, a desired/optimal KPI may also be set by the manager via the graphical user interface, and the simulation run to identify various permutations of the input variables 202 that is predicted to achieve the desired KPI. The various permutations of the input variables may then be displayed on the graphical user interface for selection by the manager as to the particular permutation that is desired to be implemented.")(¶0107)

operating the schedule for the agent: Tamblyn teaches ("6. Hub managers can monitor the dashboard and “push” actions back down to a tenant's contact center system. Push information could cause reactions to routing, cross-selling, queue announcements and agent coaching.")(¶0075)

wherein the first one or more groups and the second one or more groups include: focus area, behavior and knowledge base artifacts: Tamblyn teaches ("… According to one embodiment, different objects may be created and published for different KPIs that drive contact center objectives. For example, an object may be created for one or more contact center statistics (e.g.
average call length, average queue length, escalations, etc.), customer experience and sentiment, business value and classification, upsell/cross-sell attempts, and the like.")(¶0059)

retrieving customer-feedback related data during a preconfigured period for the agent from a feedback-management component: Tamblyn teaches ("Various types of data including company data, promotions or campaign run by the contact center, customer experience data (including sentiment, interaction results, interaction lengths, customer satisfaction, etc.), contact/retail center statistics data (e.g. average handle time, resolution rate, wait time, average hold duration, etc.), sales data, interaction data, tenant workforce data (agent/employee sick days, schedules, etc.), and/or other contact/retail center factors, all of which are collectively referred to as CX data, are provided by the contact center systems 120 and retail store systems 122 on behalf of the various tenants to the hub server 100 over the data communication links 140, 142.")(¶0058)

retrieving mapping between categories and the first one or more groups: Tamblyn teaches ("Various types of data including company data, promotions or campaign run by the contact center, customer experience data (including sentiment, interaction results, interaction lengths, customer satisfaction, etc.), contact/retail center statistics data (e.g. average handle time, resolution rate, wait time, average hold duration, etc.), sales data, interaction data, tenant workforce data (agent/employee sick days, schedules, etc.), and/or other contact/retail center factors, all of which are collectively referred to as CX data")(¶0058)

counting occurrences of each category to determine a median number for each category: Tamblyn teaches ("Performance Status Indicators—For each KPI and for each company, indicate how the current KPI value compares to similar companies in terms of size, industry, and other normalization factors.
Visual indicators may be used to indicate performance: 1) green to indicate above average performance; 2) yellow to indicate average performance; and 3) red to indicate below average performance.")(¶0053)

extracting the preconfigured number of associated interactions which have an NPS score below a preconfigured threshold and having lowest NPS score to be added to the generated coaching session: Tamblyn teaches ("… According to one embodiment, different objects may be created and published for different KPIs that drive contact center objectives. For example, an object may be created for one or more contact center statistics (e.g. average call length, average queue length, escalations, etc.), customer experience and sentiment, business value and classification, upsell/cross-sell attempts, and the like.")(¶0059)

extracting parameters from the customer-feedback related data for each feedback, wherein the parameters include at least one of: feedback comment, associated Net Promoter Score (NPS), assigned categories, customer and agent details and associated interaction identifier: Tamblyn teaches ("According to one embodiment, CX data gathered across various tenants may be used to detect trends and/or themes across different customer types and/or external events. In this regard, the analysis module may determine that a particular type of CX data and/or external event has high or low statistical correlation to certain outcomes. Such determination may be made after observing, either automatically or semi-automatically, how closely the CX data and/or external event tracks with certain outcomes over time and over various tenants. The analysis module may make recommendations based on the deduced correlations. The recommendations may relate to contact center routing, scheduling of agents, cross-sell/upsell efforts, escalation options, coaching scripts, and/or the like.
A feedback loop back to the hub server 100 with actual outcome data allows the server to learn and improve based on the actual outcomes from the recommendations.")(¶0063)

However, Tamblyn does not fully teach, but Miller fully teaches, the following limitations:

automatically scheduling the generated coaching session: Miller teaches ("As noted above, the particular training that is automatically scheduled for the agent may be selected based on particular aspects of score. …")(¶0121)

with the second one or more groups of coaching content and adding a preconfigured number of associated interactions based on interactions Net promoter score (NPS): Miller teaches ("The metadata of the interaction includes portions of the interaction that are not readily user-modifiable. These may include, for example, customer feedback regarding the interaction (e.g., net promoter score), and the like.")(¶0072) Also refer to ("… features based on the metadata about the interaction. Some of these metadata may include the number of transfers of the interaction between agents, customer feedback (e.g., a net promoter score (NPS) or survey data), the time of day of the interaction, and conversation length…")(¶0087)

operating a k-Nearest neighbors algorithm to calculate a CFRS for each category based on the extracted parameters based on the retrieved mapping: Miller teaches ("The training data may then be used to train, validate, and test the one or more prediction models. Each prediction model may be used to predict an answer or a score for a corresponding one of the questions of the evaluation form (e.g., for a particular score Yi of the evaluation form).
In various embodiments of the present invention, each the prediction models may be a model such as a linear regression model, a multiple regression model, a k-nearest neighbors regression, a random forest tree, a support vector machine, or a neural network, which may be selected based on applicability to the particular portion of the evaluation to be predicted and the characteristics of the interaction features supplied to the prediction model.”)(¶0094)

detecting a preconfigured number of categories having highest variance by using an isolation forest algorithm, to yield impacted categories;

Miller teaches (“[0094]… Each prediction model may be used to predict an answer or a score for a corresponding one of the questions of the evaluation form (e.g., for a particular score Yi of the evaluation form). In various embodiments of the present invention, each the prediction models may be a model such as a linear regression model, a multiple regression model, a k-nearest neighbors regression, a random forest tree, a support vector machine, or a neural network, which may be selected based on applicability to the particular portion of the evaluation to be predicted and the characteristics of the interaction features supplied to the prediction model. The particular choice of learning algorithm may depend on the organization and the type of data collected from the interactions, such as which features have a greater influence on the human computed evaluation scores, as set by business policies of the organization. [0095] For example, often, when using text attributes or features derived from the text of the transcription of the interaction, the features may not be linearly separable.
In such circumstances, in one embodiment of the present invention, the prediction model is a random forest tree with regression, which may perform well in circumstances of complex (e.g., linearly inseparable) relationships between the interaction features.”) (¶0094-95)

Under the broadest reasonable interpretation, the term “isolation forest” in the claim limitations is interpreted to be equivalent to the term “random forest” in the Miller teaching above.

sorting interactions associated with the impacted categories based on NPS score;

Miller teaches (“[0072] The metadata of the interaction includes portions of the interaction that are not readily user-modifiable. These may include, for example, identifiers of the agent and customer involved in the interaction (e.g., phone numbers, email addresses, assigned customer numbers or agent numbers, etc.), timestamps of various messages, the number of messages sent in the interaction (e.g., the number of messages sent in a text chat or the number of email messages exchanged), the length (e.g., in minutes) of an audio and/or video interaction, customer feedback regarding the interaction (e.g., net promoter score), and the like.”) (¶0072) Also refer to (“[0087] … features based on the metadata about the interaction. Some of these metadata may include the number of transfers of the interaction between agents, customer feedback (e.g., a net promoter score (NPS) or survey data), the time of day of the interaction, and conversation length…”)(¶0087)

Therefore, it would have been obvious to someone of ordinary skill in the art to apply the known technique of Miller to utilize net promoter scores to evaluate customer feedback with the known methods of Tamblyn to conduct real-time analytics to optimize performance.

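For orientation, the limitation group quoted above reduces to a short ranking pipeline: flag the categories with the most volatile NPS, then pull the worst-scoring interactions from those categories into the coaching session. The sketch below is illustrative only and comes from neither Miller nor Tamblyn; it substitutes a plain per-category variance ranking for the claimed isolation-forest step, and all record fields, thresholds, and counts are hypothetical.

```python
from statistics import pvariance

# Hypothetical feedback records: (category, NPS score, interaction id).
feedback = [
    ("billing", 2, "i1"), ("billing", 9, "i2"), ("billing", 1, "i3"),
    ("shipping", 7, "i4"), ("shipping", 8, "i5"),
    ("support", 3, "i6"), ("support", 10, "i7"),
]

# Group NPS scores by category.
by_cat = {}
for cat, nps, _ in feedback:
    by_cat.setdefault(cat, []).append(nps)

# "Detecting a preconfigured number of categories having highest variance",
# approximated here with a plain variance ranking in place of an isolation forest.
n_impacted = 2
impacted = sorted(by_cat, key=lambda c: pvariance(by_cat[c]), reverse=True)[:n_impacted]

# "Sorting interactions associated with the impacted categories based on NPS score"
# and extracting the lowest-NPS interactions below a preconfigured threshold.
threshold, n_extract = 6, 2
candidates = sorted(
    (rec for rec in feedback if rec[0] in impacted and rec[1] < threshold),
    key=lambda rec: rec[1],
)
coaching_interactions = [rec[2] for rec in candidates[:n_extract]]
print(impacted, coaching_interactions)  # ['billing', 'support'] ['i3', 'i1']
```

Whether such a variance ranking is actually "equivalent" to an isolation forest is precisely what the examiner's BRI position above asserts and the applicant may dispute.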
Since Miller discloses how to incorporate net promoter scoring (NPS), isolation forest and k-Nearest Neighbors algorithms into the determination of customer feedback, one of ordinary skill in the art would have been motivated to combine the teaching of Miller with the real-time analytics of customer experience data components teachings of Tamblyn to achieve optimal outcomes in meeting the expectations of customers of a contact center.

Regarding Claim 3, Tamblyn and Miller teach all the limitations in the claims above and Tamblyn further teaches the following limitation:

calculating the CFRS by normalizing the customer-feedback related data by NPS to yield customer-feedback related data having normalized NPS

Tamblyn teaches (“Various types of data including company data, promotions or campaign run by the contact center, customer experience data (including sentiment, interaction results, interaction lengths, customer satisfaction, etc.), contact/retail center statistics data (e.g. average handle time, resolution rate, wait time, average hold duration, etc.), sales data, interaction data, tenant workforce data (agent/employee sick days, schedules, etc.), and/or other contact/retail center factors, all of which are collectively referred to as CX data”)(¶0058) See also (“[0068] … The values may be normalized based on normalization factors, such as, for example, company size, vertical industry, and the like. …”)(¶0068)

However, the Tamblyn teaching does not fully teach associated Net promoter score (NPS). Miller does disclose associated Net promoter score (NPS) as it relates to customer-feedback related data.

Miller teaches (“[0087] Some aspects of embodiments of the present invention are directed to automatically extracting interaction features based on the metadata about the interaction.
Some of these metadata may include the number of transfers of the interaction between agents, customer feedback (e.g., a net promoter score (NPS) or survey data), the time of day of the interaction, and conversation length (e.g., the number of chat messages sent, the number of emails sent, the total amount of text sent between the customer and agent, the duration of the text chat session or the audio or video conference).”) (¶0087)

Therefore, it would have been obvious to someone of ordinary skill in the art to apply the known technique of Miller to utilize net promoter scores to evaluate customer feedback with the known methods of Tamblyn to conduct real-time analytics to optimize performance. Since Miller discloses how to incorporate net promoter scoring (NPS) into the determination of customer feedback, one of ordinary skill in the art would have been motivated to combine the teaching of Miller with the real-time analytics components teachings of Tamblyn to achieve optimal outcomes in meeting the expectations of customers of a contact center.

Regarding Claim 4, Tamblyn and Miller teach all the limitations in the claims above and Tamblyn further teaches the following limitation:

the customer-feedback related data by NPS is normalized based on formula I:

X̄[:,i] = (X[:,i] − min(X[:,i])) / (max(X[:,i]) − min(X[:,i]))   (I)

whereby X̄[:,i] is a normalized NPS score, X[:,i] is the NPS score of the current feedback instance, min(X[:,i]) is the minimum NPS score within all the feedback instances for the respective agent, and max(X[:,i]) is the maximum NPS score within all the feedback instances for an agent

Tamblyn teaches (“[0058] Various types of data including company data, promotions or campaign run by the contact center, customer experience data (including sentiment, interaction results, interaction lengths, customer satisfaction, etc.), contact/retail center statistics data (e.g. average handle time, resolution rate, wait time, average hold duration, etc.), sales data, interaction data, tenant workforce data (agent/employee sick days, schedules, etc.), and/or other contact/retail center factors, all of which are collectively referred to as CX data”) (¶0058)

However, the Tamblyn teaching does not fully teach associated Net promoter score (NPS). Miller does disclose associated Net promoter score (NPS) as it relates to customer-feedback related data.

Miller teaches (“[0087] Some aspects of embodiments of the present invention are directed to automatically extracting interaction features based on the metadata about the interaction. Some of these metadata may include the number of transfers of the interaction between agents, customer feedback (e.g., a net promoter score (NPS) or survey data), the time of day of the interaction, and conversation length (e.g., the number of chat messages sent, the number of emails sent, the total amount of text sent between the customer and agent, the duration of the text chat session or the audio or video conference).”) (¶0087)

Therefore, it would have been obvious to someone of ordinary skill in the art to apply the known technique of Miller to utilize net promoter scores to evaluate customer feedback with the known methods of Tamblyn to conduct real-time analytics to optimize performance. Since Miller discloses how to incorporate net promoter scoring (NPS) into the determination of customer feedback, one of ordinary skill in the art would have been motivated to combine the teaching of Miller with the real-time analytics components teachings of Tamblyn to achieve optimal outcomes in meeting the expectations of customers of a contact center.

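Formula I, as recited, is the standard per-agent min-max normalization. A minimal sketch, assuming that reading (the helper name and sample scores are ours, not the applicant's):

```python
def normalize_nps(scores):
    """Min-max normalize one agent's NPS scores to [0, 1], per formula I."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        # Degenerate case: every feedback instance has the same score.
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

print(normalize_nps([2, 9, 1]))  # [0.125, 1.0, 0.0]
```

Note that the normalization is per agent, so the same raw NPS can map to different normalized values for different agents.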
Regarding Claim 5, Tamblyn and Miller teach all the limitations in the claims above and Tamblyn further teaches the following limitation:

computerized-method further comprising identifying impacted categories in the customer-feedback related data having normalized NPS by counting feedback comments in each category

Tamblyn teaches (“Various types of data including company data, promotions or campaign run by the contact center, customer experience data (including sentiment, interaction results, interaction lengths, customer satisfaction, etc.), contact/retail center statistics data (e.g. average handle time, resolution rate, wait time, average hold duration, etc.), sales data, interaction data, tenant workforce data (agent/employee sick days, schedules, etc.), and/or other contact/retail center factors, all of which are collectively referred to as CX data”)(¶0058)

However, the Tamblyn teaching does not fully teach associated Net promoter score (NPS). Miller does disclose associated Net promoter score (NPS) as it relates to customer-feedback related data.

Miller teaches (“[0087] Some aspects of embodiments of the present invention are directed to automatically extracting interaction features based on the metadata about the interaction. Some of these metadata may include the number of transfers of the interaction between agents, customer feedback (e.g., a net promoter score (NPS) or survey data), the time of day of the interaction, and conversation length (e.g., the number of chat messages sent, the number of emails sent, the total amount of text sent between the customer and agent, the duration of the text chat session or the audio or video conference).”) (¶0087)

Therefore, it would have been obvious to someone of ordinary skill in the art to apply the known technique of Miller to utilize net promoter scores to evaluate customer feedback with the known methods of Tamblyn to conduct real-time analytics to optimize performance.

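Claim 5's counting step is plain tallying. As a sketch of that limitation, not of either reference (category labels and the cutoff of two are hypothetical):

```python
from collections import Counter

# Hypothetical category labels assigned to an agent's feedback comments.
categories = ["billing", "support", "billing", "shipping", "billing", "support"]

# "Identifying impacted categories ... by counting feedback comments in each
# category": tally the comments and flag the most frequent categories as impacted.
counts = Counter(categories)
impacted = [cat for cat, _ in counts.most_common(2)]
print(impacted)  # ['billing', 'support']
```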
Since Miller discloses how to incorporate net promoter scoring (NPS) into the determination of customer feedback, one of ordinary skill in the art would have been motivated to combine the teaching of Miller with the real-time analytics components teachings of Tamblyn to achieve optimal outcomes in meeting the expectations of customers of a contact center.

Regarding Claim 7, Tamblyn and Miller teach all the limitations in the claims above and Tamblyn further teaches the following limitation:

web application for managing coaching sessions is a cloud-based application which is implemented as a workflow and having distributed component units and wherein the web application is interacting with a service to coordinate work across the distributed component units

Tamblyn teaches (“… According to one embodiment, the data is available either through a robust dashboard and/or through API's which may be used by the tenant to manipulate different decisions that may affect customer experience, such as, for example, interaction routing, work-force management (WFM) scheduling, cross-sell opportunities, music on hold, escalation options, and/or the like.”)(¶0046)

Regarding Claim 8, Tamblyn and Miller teach all the limitations in the claims above and Tamblyn further teaches the following limitation:

the retrieved mapping between categories and one or more groups is preconfigured by a user of the web application for managing coaching sessions

Tamblyn teaches (“… high-level KPI's that may be the outcome of many factors such as, for example, staffing and training levels, specifics and frequencies of sales efforts, routing strategies, contact center policies, and many more. In a typical contact center, trial and error are generally used where certain factors under the control of the contact center are set to particular values, and outcome of the KPIs are measured over time.
Thus, there is generally no way of knowing how a change in one of these factors will affect a KPI without actually going through the change and observing the outcome. For instance, adding more agents might increase CSAT (e.g., by reducing wait time and adding more adequate agents to answer callers), but it is generally not known in advance as to how much CSAT will increase.”)(¶0009)

Also refer to (“According to one embodiment, a desired/optimal KPI may also be set by the manager via the graphical user interface, and the simulation run to identify various permutations of the input variables 202 that is predicted to achieve the desired KPI. The various permutations of the input variables may then be displayed on the graphical user interface for selection by the manager as to the particular permutation that is desired to be implemented.”)(¶0107)

Also refer to (“… agent at the contact center also allows different types of analysis, including for example, analysis of representations being made by specific agents, identification of agents generating complaints, and the like. Training items may be recommended by the analytics module 204 for staff at the contact center and/or retail store in response to such analysis. …”)(¶0146)

Regarding Claim 11, Tamblyn and Miller teach all the limitations in the claims above and Tamblyn further teaches the following limitation:

user selection of the determined second one or more groups presenting the maximum CFRS and a notification that maximum CFRS achieved

Tamblyn teaches (“According to one embodiment, each object is defined by one or more parameters and/or attributes, and associated with methods that may be queried by a subscribing client to access the aggregate data. For example, one method may be invoked to pull statistical data collected by the object across various tenants.
Specific parameters provided to the method may qualify the statistical data to be pulled, such as, for example, data relating to particular time periods, particular types of business, particular contact center sizes, and the like. Another method may be invoked by a subscribing client to push data to the object to be aggregated into the statistical data collected by the object. For example, occurrence of a sales event may trigger push of sales data to an object related to upsell attempts. Once generated, the objects may be used to display statistical data, generate benchmark values, make recommendations based on business rules, and the like. The recommendations may be for improving performance of the contact center based on what is learned.”)(¶0060)

Also refer to (“According to one embodiment, a desired/optimal KPI may also be set by the manager via the graphical user interface, and the simulation run to identify various permutations of the input variables 202 that is predicted to achieve the desired KPI. The various permutations of the input variables may then be displayed on the graphical user interface for selection by the manager as to the particular permutation that is desired to be implemented.”)(¶0107)

Under the broadest reasonable interpretation, the term “group” in the claim limitations is interpreted to be equivalent to the term “objects” in the Tamblyn teaching above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAHUL SHARMA whose telephone number is (571) 272-3058. The examiner can normally be reached Monday thru Friday, 8-5 CT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Uber can be reached at (571) 270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RAHUL SHARMA/
Examiner, Art Unit 3626

/Michael Young/
Examiner, Art Unit 3626

Prosecution Timeline

Jan 02, 2023
Application Filed
Feb 08, 2025
Non-Final Rejection — §101, §103
Mar 05, 2025
Response Filed
Mar 26, 2025
Final Rejection — §101, §103
Jun 24, 2025
Request for Continued Examination
Jun 30, 2025
Response after Non-Final Action
Jul 09, 2025
Final Rejection — §101, §103
Oct 10, 2025
Request for Continued Examination
Oct 18, 2025
Response after Non-Final Action
Dec 05, 2025
Non-Final Rejection — §101, §103
Jan 22, 2026
Response Filed
Mar 27, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591823
Decentralized Dynamic Policy Learning and Implementation System
2y 5m to grant Granted Mar 31, 2026
Patent 12567075
METHOD OF SCORING AND VALUING DATA FOR EXCHANGE
2y 5m to grant Granted Mar 03, 2026
Patent 12541745
SYSTEMS AND METHODS FOR RECYCLING CONSUMER ELECTRONIC DEVICES
2y 5m to grant Granted Feb 03, 2026
Patent 12482033
SYSTEMS AND METHODS FOR INTEGRATED MARKETING
2y 5m to grant Granted Nov 25, 2025
Patent 8301482
DETERMINING STRATEGIES FOR INCREASING LOYALTY OF A POPULATION TO AN ENTITY
2y 5m to grant Granted Oct 30, 2012
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
8%
Grant Probability
7%
With Interview (-0.7%)
5y 0m
Median Time to Grant
High
PTA Risk
Based on 112 resolved cases by this examiner. Grant probability derived from career allow rate.
