Prosecution Insights
Last updated: April 19, 2026
Application No. 18/626,893

SYSTEM AND METHOD FOR DISTRIBUTING INTERACTION DATA TO AGENTS

Non-Final OA: §101, §103, §112
Filed: Apr 04, 2024
Examiner: GARCIA-GUERRA, DARLENE
Art Unit: 3625
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Nice Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 23% (At Risk)
OA Rounds: 1-2
To Grant: 4y 6m
With Interview: 57%

Examiner Intelligence

Career Allow Rate: 23% (119 granted / 523 resolved), -29.2% vs TC avg
Interview Lift: strong, +34.1% (with vs without interview, among resolved cases with an interview)
Typical Timeline: 4y 6m average prosecution; 53 applications currently pending
Career History: 576 total applications across all art units
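The headline figures above are internally consistent; as a quick sanity check (assuming, as the panel layout suggests, that the interview lift is additive percentage points on top of the base allowance rate):

```python
# Sanity-check the dashboard's headline numbers from the raw career counts.
# Assumption: "Interview Lift" is additive percentage points on the base rate.

granted = 119    # career grants (from the panel)
resolved = 523   # career resolved cases

base_allow_rate = granted / resolved * 100        # career allow rate, in %
interview_lift = 34.1                             # percentage points (from the panel)
with_interview = base_allow_rate + interview_lift

print(f"Career allow rate: {base_allow_rate:.1f}%")  # ~22.8%, displayed as 23%
print(f"With interview:    {with_interview:.1f}%")   # ~56.9%, displayed as 57%
```

The rounded results match the 23% and 57% figures shown in the summary, which supports reading the lift as additive rather than multiplicative.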

Statute-Specific Performance

§101: 36.6% (-3.4% vs TC avg)
§103: 42.3% (+2.3% vs TC avg)
§102: 2.6% (-37.4% vs TC avg)
§112: 16.2% (-23.8% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 523 resolved cases
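Each per-statute delta implicitly encodes the Tech Center average it is measured against (examiner rate minus delta). A minimal sketch, assuming the deltas are plain percentage-point differences:

```python
# Recover the implied Tech Center average behind each statute's delta.
# Assumption: delta = examiner_rate - tc_avg, in percentage points.

stats = {
    "§101": (36.6, -3.4),
    "§103": (42.3, +2.3),
    "§102": (2.6, -37.4),
    "§112": (16.2, -23.8),
}

for statute, (examiner_rate, delta) in stats.items():
    tc_avg = examiner_rate - delta
    print(f"{statute}: examiner {examiner_rate:.1f}%, implied TC avg {tc_avg:.1f}%")
```

Under this assumption all four statutes imply the same ~40.0% baseline, suggesting the chart's "Tech Center average estimate" is a single overall figure rather than a per-statute one.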

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice to Applicant

The following is a NON-FINAL Office action upon examination of application number 18/626,893, filed on 04/04/2024. Claims 1-20 are pending in this application and have been examined on the merits discussed below. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

3. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

4. Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

5. Claims 1 and 11 each recite the limitation “identifying one or more interaction events” and subsequently recite the phrase “said identified interaction events.” These phrases render the scope of claims 1 and 11 ambiguous because the claims shift from “one or more interaction events,” which permits a single event, to the plural “said identified interaction events.” The phrase “said identified interaction events” is ambiguous because the earlier limitation allows for identification of either a single interaction event or multiple interaction events.
The subsequent use of the plural form “identified interaction events” implies a plurality, thereby creating uncertainty as to whether the limitation encompasses the case where only a single interaction event is identified. Furthermore, there is a lack of antecedent basis for the limitation “said identified interaction events” in the claims, which renders claims 1 and 11 indefinite. Appropriate correction is required.

6. Claim 7 recites “A method according to claim 6, wherein said interaction capacity is identified based on the evaluation of agent data items.” The phrase “the evaluation of agent data items” lacks antecedent basis. While claim 6 recites “evaluating, using machine learning, whether said agent has capacity to receive a new interaction request,” claim 6 does not introduce an evaluation of agent data items, thereby rendering the claim indefinite. Appropriate correction is required.

7. Claim 17 recites “A system according to claim 16, wherein said interaction capacity is identified based on the evaluation of agent data items.” The phrase “the evaluation of agent data items” lacks antecedent basis. While claim 16 recites “evaluation, using machine learning, whether said agent has capacity to receive a new interaction request,” claim 16 does not introduce an evaluation of agent data items, thereby rendering the claim indefinite. Appropriate correction is required.

8. Claims 2-10 depend from claim 1 and fail to cure the §112(b) deficiency noted above, and are therefore rendered indefinite by dependency.

9. Claims 12-19 depend from claim 11 and fail to cure the §112(b) deficiency noted above, and are therefore rendered indefinite by dependency.

Claim Rejections - 35 USC § 101

10. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
11. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The eligibility analysis supporting these findings is provided below, in accordance with MPEP 2106.

With respect to Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is first noted that the method (claims 1-10), device (claims 11-19), and method (claim 20) are directed to at least one potentially eligible category of subject matter (i.e., process, machine, and process, respectively). Thus, Step 1 of the Subject Matter Eligibility test is satisfied for claims 1-20.

With respect to Step 2A Prong One, the claims recite an abstract idea that falls into the “Certain Methods of Organizing Human Activity” grouping set forth in MPEP 2106, because they recite steps for distributing interaction data to agents, which encompasses activity for managing personal behavior or relationships or interactions (e.g., following rules or instructions). The claims also recite steps that can be performed in the human mind (including observation, evaluation, judgment, and opinion), and therefore fall under the “Mental Processes” grouping as well.

With respect to independent claim 1, the limitations reciting the abstract idea are indicated in bold below:

identifying one or more interaction events from interaction metadata items located in one or more interactions assigned to an agent;

generating a prediction prompt for estimating one or more future interaction events for said one or more interactions based on said identified interaction events; and

applying said prediction prompt to a machine learning model to estimate said one or more future interaction events for said one or more interactions.

These steps organize human activity by managing interactions between people following rules or instructions, and may also be accomplished mentally, such as by human observation, perhaps with the aid of pen and paper.
The claim recites limitations that fall under the “Certain Methods of Organizing Human Activity” grouping because the limitations focus on collecting, analyzing, and distributing interaction data among agents, activities that involve managing human interactions and assignment, which are abstract processes related to human task coordination. The claim also falls under the “Mental Processes” grouping because the limitations of identifying interaction events, generating a prediction prompt, and estimating future interaction events can be performed in the human mind with the aid of pen and paper. Therefore, because the limitations above set forth activities falling within the “Certain Methods of Organizing Human Activity” and “Mental Processes” abstract idea groupings described in MPEP 2106, the additional elements recited in the claims are further evaluated, individually and in combination, under Step 2A Prong Two and Step 2B below.

Independent claims 11 and 20 recite similar limitations to those discussed above and are therefore found to recite the same or substantially the same abstract idea as claim 1.

With respect to Step 2A Prong Two, the judicial exception is not integrated into a practical application. For the independent claims, the additional elements are: a machine learning model (claim 1); a computing device, a memory, a processor, and a machine learning model (claim 11); and a machine learning model (claim 20). These additional elements have been evaluated but fail to integrate the abstract idea into a practical application because they amount to using generic computing elements or computer-executable instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), and merely serve to link the use of the judicial exception to a particular technological environment. See MPEP 2106.05(f) and 2106.05(h).
In addition, these limitations fail to provide an improvement to the functioning of a computer or to any other technology or technical field; fail to apply the exception with a particular machine; fail to apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; fail to effect a transformation of a particular article to a different state or thing; and fail to apply or use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.

Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted to determine whether any claim element, or combination of elements, amounts to significantly more than the judicial exception.

With respect to Step 2B of the eligibility inquiry, it has been determined that the claims do not include additional elements sufficient to amount to significantly more than the judicial exception. For the independent claims, the additional elements are: a machine learning model (claim 1); a computing device, a memory, a processor, and a machine learning model (claim 11); and a machine learning model (claim 20). These elements have been considered individually and in combination but fail to add significantly more to the claims because they amount to using generic computing elements or instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), and merely serve to link the use of the judicial exception to a particular technological environment, which does not amount to significantly more than the abstract idea itself.
Notably, Applicant’s Specification suggests that virtually any type of computing device under the sun can be used to implement the claimed invention (Specification at paragraph [0062]). Accordingly, the generic computer involvement in performing the claim steps merely serves to generally link the use of the judicial exception to a particular technological environment, which does not add significantly more to the claim. See, e.g., Alice Corp., 134 S. Ct. 2347, 110 USPQ2d 1976.

Even if the machine learning were evaluated as an element beyond software or code for a generic computer to execute, it is noted that the claimed use of machine learning is recited at a high level of generality, and such elements amount to well-understood, routine, and conventional activity in the art, which fails to add significantly more to the claims. See, e.g., Magdon-Ismail et al., US 2009/0055270 (paragraph 39: “Both local and central engines may incorporate analysis techniques, such as artificial intelligence, machine learning and other techniques, which are well known in the art”). See also Anders et al., US 2020/0020015 (paragraph 101: “inferences may be performed by any combination of means known in the art, such as by pattern-matching, text analytics, semantic analytics, statistical methods, artificial intelligence, Bayesian analysis, machine learning, or keyword searching”).

In addition, when taken as an ordered combination, the combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application; their collective functions merely provide generic computer implementation.
Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a practical application of the abstract idea or that, as an ordered combination, amount to significantly more than the abstract idea itself.

Dependent claims 2-10 and 12-19 recite the same abstract idea as the independent claims. When evaluated under Step 2A Prong One, they are found to merely recite details that narrow the same abstract idea, accompanied by the same generic computing elements or software addressed above in the discussion of the independent claims, which is not sufficient to amount to a practical application or to add significantly more. In particular, dependent claims 2-9 recite:

“wherein said one or more future interaction events comprise interaction termination,”

“wherein said one or more future interaction events comprise initiating a new interaction,”

“wherein estimating said one or more future interaction events comprises determining a latency in responses of said agent to one or more interactions,”

“wherein estimating said one or more future interaction events comprises sequential initiation and termination of said one or more interactions, thereby maintaining a concurrent assignment of interaction requests to said agent,”

“comprising identifying an interaction capacity of said agent from said interaction metadata items; and evaluating whether said agent has capacity to receive a new interaction request,”

“wherein said interaction capacity is identified based on the evaluation of agent data items,”

“wherein evaluating said interaction capacity of said agent comprises comparing an interaction latency of an agent to a threshold value,”

“wherein said agent is available for receiving said new interaction request when said interaction latency is below said threshold value and wherein said agent is unavailable for receiving said new interaction request when said latency is above said threshold value,” and

“wherein when said agent is unavailable for receiving an interaction request, identifying another agent for receiving said interaction request.”

However, these limitations cover activity for managing personal behavior or relationships or interactions (e.g., following rules or instructions), which is part of the same abstract idea addressed for the independent claims and falls within the “Certain Methods of Organizing Human Activity” grouping; they also recite steps that may be accomplished mentally, such as by human observation, perhaps with the aid of pen and paper.

Dependent claims 6 and 16 recite the additional element of machine learning. However, when evaluated under Step 2A Prong Two and Step 2B, this additional element does not amount to a practical application or significantly more, since it merely requires generic computing devices (or computer-implemented instructions/code), which, as noted in the discussion of the independent claims above, is not enough to render the claims eligible.

The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claims) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or any other technology; their collective functions merely provide generic computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to a practical application or significantly more than the abstract idea itself. For more information, see MPEP 2106.

Claim Rejections - 35 USC § 103

12. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

13. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

14. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

15. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

16.
Claims 1, 3-4, 6-7, 11, 13-14, 16-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Traba et al., Pub. No. US 2021/0377392 A1 [hereinafter Traba], in view of Low et al., Pub. No. US 2025/0165678 A1 [hereinafter Low].

As per claim 1, Traba teaches a method of distributing interaction data to agents, the method (paragraph 0002: “methods, apparatus, and systems for routing customer communications…”) comprising:

identifying one or more interaction events from interaction metadata items located in one or more interactions assigned to an agent (paragraph 0100, discussing that once the customer is identified, a predictive calculator accesses a profile of the identified customer to determine customer data. For example, the predictive calculator may request customer data for the ANI (automatic number identification) associated with the identified customer from the behavior database to determine customer data including customer attributes and customer interaction history. Examples of customer attributes include personality style or type, communication patterns, preferred mode of communication, or a combination thereof. Examples of customer interaction history include interaction sentiment history, distress history, interaction outcomes, or a combination thereof. In one embodiment, the behavior database returns the customer data, including interaction history and communication preferences, to the predictive calculator; paragraph 0107, discussing that building the predictive models includes collecting interaction data, customer data, and agent data. Interaction data can include unstructured and structured data from a plurality of different communication channels utilized by an agent to interact with a customer.
For example, interaction data may include a transcription of a previous telephone call or video chat between a customer and an agent, the text of an email exchange between the customer and agent, a written essay or other text unilaterally submitted by a customer, an applicant's enrollment application, or a pre-recorded video clip submitted by a customer. Further, structured telephony data such as call length, call origination, hold time, interaction outcome data, and similar data associated with customer interactions may also be collected. The customer data includes biographical and identification information, and the agent data collected can include training level, personality type, and other data. In some embodiments, the input data collected and/or identified may be derived from customer interactions occurring within the contact center and stored in the database, however, in other embodiments, the data may be imported from external sources, such as one or more third-party databases operated by data collection companies; paragraph 0115, discussing that the predictive model may then be utilized to determine the identified outcome or the likelihood of the identified outcome occurring in association with the current interaction. This interaction may be a telephone call, video chat, instant message conversation, email exchange, or other communication session as described herein. The interaction can be real-time, near real-time (i.e., within 5 minutes, preferably within 2 minutes, and more preferably within 1 minute of capture), previously captured, or a combination thereof. 
In certain preferred embodiments, it is real-time);

estimating one or more future interaction events for said one or more interactions based on said identified interaction events (paragraph 0094, discussing a predictive calculator that leverages agent performance, ACD skill, and customer data to predict interaction outcome metric values for every available agent if the interaction were routed to that agent. Each model is specific to a customer metric and the predictive calculator uses each model to make a metric-specific prediction; paragraph 0106, discussing a predictive model for AHT (average handle time) that outputs a predicted AHT for the customer and each available agent, a predictive model for CSAT (customer satisfaction) that outputs a predicted CSAT for the customer and each available agent, a predictive model for FCR that outputs the predicted likelihood that the communication will be resolved in the first communication for the customer and each available agent, and a predictive model for RR that predicts the likelihood that the customer will stay with the company for the customer and each available agent… For example, if there were three agents available and the optimization customer metric was handle time, the predictive calculator would select the handle time model and the model would output the AHT for each of the three agents; paragraph 0112, discussing that once an outcome to be predicted is identified, a predictive model operable to predict the identified outcome or the likelihood of the identified outcome occurring is built using the input data as standardized… As an example, the model may indicate that whether a customer will cancel his or her service is correlated to the customer's personality, the number of distress events during a call, the agent's experience, and the customer's tenure, and assign a coefficient to each of the four variables); and

a model to estimate said one or more future interaction events for said one or more interactions
(paragraph 0114, discussing that after a predictive model has identified variables relevant to the identified outcome, a benchmark data set is selected for each identified variable. Specifically, to accurately apply the predictive model to incoming customer interactions, data values related to the relevant variables collected during the incoming customer interactions are standardized before being fed into the model. As discussed above, benchmark data sets define the particular data against which a data value is compared for the generation of its z-score. In other words, selecting a different benchmark data set may generate a different z-score, which, in turn, may result in a different outcome prediction. Thus, selection of benchmark data sets may be utilized to customize prediction results. For example, it may be desired to determine the likelihood of a customer purchasing a product in view of customer interactions recorded in the past six months, rather than all customer interactions ever recorded. To achieve such a prediction result, the benchmark data sets selected would include data associated with customer interactions occurring in the past six months. For example, if the number of distress events per call is deemed relevant to predict an outcome, the number of distress events during a current call may be compared against a benchmark data set that only includes calls recorded in the past six months. Additionally, benchmark data sets may be based on other criteria besides time periods. In one example embodiment, a benchmark data set associated with agent tenure may be selected that includes agent tenure data for different subsets of agents, for example, agents located within a specific contact center or region of the country…; paragraph 0115, discussing that the predictive model may then be utilized to determine the identified outcome or the likelihood of the identified outcome occurring in association with the current interaction). 
While Traba describes estimating one or more future interaction events for said one or more interactions based on said identified interaction events, and a model to estimate said one or more future interaction events for said one or more interactions, Traba does not explicitly teach generating a prediction prompt for estimating one or more future interaction events for said one or more interactions based on said identified interaction events, or applying said prediction prompt to a machine learning model to estimate said one or more future interaction events for said one or more interactions. However, Low, in the analogous art of predictive modeling systems, teaches these concepts.

Low teaches: generating a prediction prompt for estimating one or more future interaction events for said one or more interactions based on said identified interaction events (paragraph 0142, discussing that synthetic user memories of N different users may be built using user interactivity data logged during an experiment performed on N different real human users, e.g., an A/B feature test, or an A/B feature test that is not yet completed. N different human users may be separated into multiple groups exposed to different features. Using the processes described in FIGS. 10-11, it is possible to simulate and create N different synthetic user memories for the N different real human users based on user interactivity data logged for these N different real human users. Subsets of log entries in the memory log may be used in a prompt, e.g., a prompt chain, to obtain generated responses from a model.
The generated responses may include predicted actions based on the contextual information presented in the prompt [i.e., prediction prompt]; paragraph 0148, discussing that in some cases, in addition to a request to generate a predicted action, the request may include one or additional instruction(s) to prompt the model to form higher-level abstract memories and reasoning about the predicted action and/or past user interactivity data. The higher-level abstract memories and/or reasoning may give explainable insights about the predicted actions and/or behavior of users. For example, the first request may further include a first instruction to generate a first reasoning for the first predicted action. The second request may further include a second instruction to generate a second reasoning for the second predicted action. In another example, the first request may further include a third instruction to identify one or more first natural language log entries in the first subset that led to the first predicted action. 
The second request may further include a fourth instruction to identify one or more second natural language log entries in the second subset that led to the second predicted action; paragraph 0221, discussing a method, including transforming user data into training data, the user data including data collected from a content streaming platform, and user communication data, and the training data including prompts and responses to the prompts; updating parameters of a model using the training data; inputting a first prompt to the model, the first prompt including a first description of a first persona, a context description, and a question; receiving a first response from the model in response to the first prompt; inputting a second prompt to the model, the second prompt including a second description of a second persona different from the first persona, the context description, and the question; receiving a second response from the model in response to the second prompt; and analyzing the first response and the second response); and applying said prediction prompt to a machine learning model to estimate said one or more future interaction events for said one or more interactions (paragraph 0036, discussing that the one or more models may include machine learning models, which can learn through supervised learning or unsupervised learning. With supervised learning, machine learning models can learn from training data and find patterns or insights from the training data. With unsupervised learning, machine learning models can find patterns or insights directly from the input data; paragraph 0149, discussing that in some cases, to ensure the model generates responses consistently, the model may be prompted to summarize the factors or considerations the synthetic user may take into account when performing a given action. The model may output a set of factors or considerations in a response. 
Then, the model may be prompted to generate a predicted action [i.e., estimate said one or more future interaction events] for the synthetic user and a reasoning behind the predicted action in view of the factors or considerations that the model produced in the earlier response. The model may generate a response having a predicted action that would be consistent with the factors or considerations that the model produced in the earlier response; paragraph 0203, discussing inputting, into a model, a first subset of the first natural language log entries in the first synthetic user memory log and a first request to generate a first response representing a first predicted action of the first user based on the first subset of the first natural language log entries). Traba is directed towards a system and method for predictive behavioral routing. Low is directed towards a system and method for prediction of user actions. Therefore they are deemed to be analogous as they both are directed towards prediction systems. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Traba with Low. The references are analogous art because they are both directed to solutions for predictive modeling and interaction routing, which falls within Applicant’s field of endeavor (a system and method for distributing interaction data to agents). Modifying Traba to include Low’s features for generating a prediction prompt for estimating one or more future interaction events for said one or more interactions based on said identified interaction events, and applying said prediction prompt to a machine learning model to estimate said one or more future interaction events for said one or more interactions, in the manner claimed, would serve the motivation of better understanding the behavior of various users and evaluating whether a prompt accurately captures the user and the user's behaviors, and whether the model would generate responses that represent an accurate prediction of the user's actions (Low, paragraphs 0120, 0121). The combination is further obvious because the claimed invention is merely a combination of old elements in which each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

As per claim 3, the Traba-Low combination teaches a method according to claim 1. Traba further teaches wherein said one or more future interaction events comprise initiating a new interaction (paragraph 0111, discussing that an outcome associated with a customer interaction is identified as a target of a predictive model.
In more detail, for a contact center it may be desirable to predict an actual outcome or the likelihood of some specific outcome occurring in association with a current customer interaction, be it a telephone-based interaction, web-based interaction, or other type of electronic-assisted interaction. For example, it may be useful for a company to predict a customer satisfaction score, an average handling time, or amount of sales during a customer interaction, taking into account the activities, outcomes, and experiences from prior interactions. Further examples of outcomes associated with a customer include whether a customer will terminate his or her account, whether the customer will purchase a product, whether a customer will pay an outstanding bill, whether a customer is a fraudster, and whether a customer will initiate additional subsequent interaction sessions regarding the same issue, or a combination thereof. This is a non-exhaustive list and additional and/or different outcomes related to a customer or customer interaction may be identified).

As per claim 4, the Traba-Low combination teaches a method according to claim 1. Traba further teaches wherein estimating said one or more future interaction events comprises determining a latency in responses of said agent to one or more interactions (paragraphs 0120-0123, discussing that a customer communication is received from a customer, and there are three available agents to handle the communication. The predictive calculator uses the predictive handling time model to output handle times for each of the three available agents. For example, the predictive handle time model may output: Agent 1, Predicted Handle Time=80 seconds, Agent 2, Predicted Handle Time=100 seconds, Agent 3, Predicted Handle Time=120 seconds).

As per claim 6, the Traba-Low combination teaches a method according to claim 1.
Traba further teaches comprising identifying an interaction capacity of said agent from said interaction metadata items (paragraph 0101, discussing that the predictive calculator identifies available agents for handling the customer communication. In several embodiments, predictive calculator determines the available agents by reviewing the occupancy level of agents, e.g., by obtaining agent data from the contact center. ACD (Automatic Call Distributor) dynamically monitors occupancy level of the agents to determine availability and addresses the real-time performance metrics of the agent. This real-time (or near-real time) dynamic data is typically used to select a destination for the customer communication); and evaluating, using machine learning, whether said agent has capacity to receive a new interaction request (paragraph 0018, discussing using dynamic metric optimization to leverage caller and agent information along with advanced analytics to determine, in real-time, the best customer metric(s) to optimize the routing of a customer to a suitable, available agent. Based on data available from interaction analytics (IA)…, the optimization customer metric for the distribution method is determined in real time by artificial intelligence...For example, a specific customer communication may be routed to optimize average handle time (AHT), first call resolution (FCR), customer satisfaction (CSAT), revenue retention (RR), and/or sales. Dynamic metric optimization enables the ACD to have multi-metric improvement capabilities within a given agent skill; paragraph 0101, discussing that the predictive calculator identifies available agents for handling the customer communication. In several embodiments, predictive calculator determines the available agents by reviewing the occupancy level of agents, e.g., by obtaining agent data from the contact center. 
ACD dynamically monitors occupancy level of the agents to determine availability and addresses the real-time performance metrics of the agent. This real-time (or near-real time) dynamic data is typically used to select a destination for the customer communication).

As per claim 7, the Traba-Low combination teaches a method according to claim 6. Traba further teaches wherein said interaction capacity is identified based on the evaluation of agent data items (paragraph 0102, discussing that once the available agents are identified, the predictive calculator accesses a profile of each available agent to determine agent data. For example, predictive calculator may request agent data from behavior database to determine agent performance history. Agent performance history includes one or more of: agent effectiveness, revenue generating proficiency, customer satisfaction level, speed, efficiency, experience, cross-sell ability, personal satisfaction, proficiency at closing a transaction, and occupancy, or any combination thereof. Other data that can additionally or alternatively be used in the embodiment above or various alternative embodiments to determine agent performance include the transaction or task type, the time-of-day, the result, a self-rating of the servicing agent respecting the agent's proficiency in handling the customer, the rating of the customer of the agent's proficiency in handling the customer, the rating of another party, such as the agent's supervisor or another observer, or how the customer was serviced; paragraph 0109, discussing that after input data, including multi-channel interaction data, customer data, and agent data, has been collected and/or identified, the input data is preferably standardized...As an example, the multi-channel interaction data may include information about the number of distress events occurring during telephone calls between customers and agents; paragraph 0111).
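Read together, the pieces of Traba cited for claims 4, 6, and 7 describe a two-step flow: screen agents for availability from occupancy data, then compare per-agent predicted handle times. A minimal sketch, assuming the router simply picks the available agent with the lowest predicted handle time (the function name, dictionary shapes, and selection rule are assumptions, not Traba's code):

```python
# Minimal sketch, assuming (1) availability screening from occupancy data
# (Traba para. 0101) and (2) selection of the available agent with the lowest
# predicted handle time (paras. 0120-0123). Names are illustrative assumptions.

def route_communication(predicted_handle_times, availability):
    """Return the available agent with the smallest predicted handle time."""
    candidates = {
        agent: seconds
        for agent, seconds in predicted_handle_times.items()
        if availability.get(agent, False)
    }
    if not candidates:
        return None  # no agent currently has capacity; caller would queue
    return min(candidates, key=candidates.get)

# With the example values from the office action's summary of Traba
# (80 s, 100 s, and 120 s predicted handle times), agent 1 would be chosen.
best = route_communication(
    {"agent_1": 80, "agent_2": 100, "agent_3": 120},
    {"agent_1": True, "agent_2": True, "agent_3": True},
)
```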
Claim 11 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 1, as discussed above. Further, as per claim 11 the Traba-Low combination teaches a system for distributing interaction data to agents, the system comprising: a computing device (paragraph 0037: “In the illustrated embodiment, the contact center control system 142 is an information handling system such as a computer, server, workstation, mainframe computer, or other suitable computing device. In other embodiments, the control system 142 may be a plurality of communicatively coupled computing devices coordinated to provide the above functionality for the contact center 100. The control system 142 includes a processor 144 that is communicatively coupled to a system memory 146, a mass storage device 148, and a communication module 150.”; paragraph 0145); a memory (paragraph 0037: “The control system 142 includes a processor 144 that is communicatively coupled to a system memory 146, a mass storage device 148, and a communication module 150….The system memory 146 provides the processor 144 with non-transitory, computer-readable storage to facilitate execution of computer instructions by the processor. Examples of system memory may include random access memory (RAM) devices such as dynamic RAM (DRAM), synchronous DRAM (SDRAM), solid state memory devices, and/or a variety of other memory devices known in the art.”; paragraph 0144); and a processor (paragraph 0037: “The control system 142 includes a processor 144 that is communicatively coupled to a system memory 146, a mass storage device 148, and a communication module 150. 
The processor 144 can be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the control system 142, a semiconductor-based microprocessor (in the form of a microchip or chip set), a macroprocessor, a collection of communicatively coupled processors, or any device for executing software instructions.”; paragraph 0145).

Claim 13 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 3, as discussed above.

Claim 14 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 4, as discussed above.

Claim 16 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 6, as discussed above.

Claim 17 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 7, as discussed above.

Claim 20 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 1, as discussed above. Further, as per claim 20, the Traba-Low combination teaches a method of predicting interaction events (paragraph 0002: “The present disclosure relates to methods, apparatus, and systems for routing customer communications, and more particularly to determining how to optimize routing across different customer metrics based on employee, customer, and interaction information.”; paragraphs 0018, 0120).

17. Claims 2, 8-10, 12, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Traba in view of Low, in further view of McGann et al., Pub. No.: US 2017/0111509 A1, [hereinafter McGann].

As per claim 2, the Traba-Low combination teaches a method according to claim 1.
Traba further teaches wherein said one or more future interaction events comprise termination (paragraph 0106, discussing a predictive model for AHT (average handle time) that outputs a predicted AHT for the customer and each available agent, a predictive model for CSAT (Customer Satisfaction) that outputs a predicted CSAT for the customer and each available agent, a predictive model for FCR that outputs the predicted likelihood that the communication will be resolved in the first communication for the customer and each available agent, a predictive model for RR that predicts the likelihood that the customer will stay with the company for the customer and each available agent...For example, if there were three agents available and the optimization customer metric was handle time, the predictive calculator would select the handle time model and the model would output the AHT for each of the three agents; paragraph 0111, discussing that an outcome associated with a customer interaction is identified as a target of a predictive model. In more detail, for a contact center it may be desirable to predict an actual outcome or the likelihood of some specific outcome occurring in association with a current customer interaction, be it a telephone-based interaction, web-based interaction, or other type of electronic-assisted interaction. For example, it may be useful for a company to predict a customer satisfaction score, an average handling time, or amount of sales during a customer interaction, taking into account the activities, outcomes, and experiences from prior interactions. Further examples of outcomes associated with a customer include whether a customer will terminate his or her account, whether the customer will purchase a product, whether a customer will pay an outstanding bill, whether a customer is a fraudster, and whether a customer will initiate additional subsequent interaction sessions regarding the same issue, or a combination thereof. 
This is a non-exhaustive list and additional and/or different outcomes related to a customer or customer interaction may be identified; paragraph 0112, discussing that once an outcome to be predicted is identified, a predictive model operable to predict the identified outcome or the likelihood of the identified outcome occurring is built using the input data as standardized. Specifically, in one embodiment, the standardized input data is fed into predictive analytics software that creates a binary logistic regression model based on the input data. The regression model identifies the variables within the input data that correlate to the identified outcome in the context of a customer interaction. Further, a regression coefficient may be assigned to each identified variable to establish the contribution of the variable to the predicted outcome. As an example, the model may indicate that whether a customer will cancel his or her service is correlated to the customer's personality, the number of distress events during a call, the agent's experience, and the customer's tenure, and assign a coefficient to each of the four variables. As will be discussed in detail below, data points associated with each of these four factors may be collected during a current customer interaction, aggregated at the customer level as needed, and multiplied by their respective coefficients to generate a prediction score indicative of the likelihood that a customer will cancel his or her service).

The Traba-Low combination does not explicitly teach wherein said one or more future interaction events comprise interaction termination. However, McGann in the analogous art of interaction routing systems teaches this concept. McGann teaches: wherein said one or more future interaction events comprise interaction termination (paragraph 0137, discussing a module that proceeds to calculate a predicted wait time associated with each of the candidate agents.
According to one embodiment, the predicted wait time is based on the agent's current status and the threshold customer patience for the identified interaction intent type. The agent's current status may include information on whether the agent is available or not to handle the interaction. If the agent is not currently available, the current status may include information on the interaction that he is currently handling, such as interaction type, intent identified for the interaction, handling time, and the like. According to one embodiment, the wait time is set to be 0 if the agent is currently available, and set to be −1 if the agent is currently busy and not expected to be available until after a time that the caller is predicted to abandon the interaction. For other cases, the wait time is a function of the predicted availability of the agent).

The Traba-Low combination describes features related to prediction of user actions and routing. McGann is directed towards optimized routing of interactions to contact center agents based on machine learning. Therefore, they are deemed to be analogous as they both are directed towards prediction systems.
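The wait-time rule quoted from McGann paragraph 0137 is concrete enough to sketch directly: 0 if the agent is available now, -1 if the agent will not free up before the caller is predicted to abandon, and otherwise the forecast time until availability. The function and parameter names below are assumptions for illustration:

```python
# Sketch of the predicted-wait-time rule quoted from McGann para. 0137.
# Function and parameter names are illustrative assumptions.

def predicted_wait_time(agent_available, predicted_free_in, caller_patience):
    """Predicted seconds a caller waits for this agent, per the quoted rule.

    agent_available   -- the agent can take the interaction now
    predicted_free_in -- forecast seconds until the agent frees up
    caller_patience   -- seconds before the caller is predicted to abandon
    """
    if agent_available:
        return 0
    if predicted_free_in > caller_patience:
        return -1  # sentinel: agent frees up only after predicted abandonment
    return predicted_free_in
```

With the 180-second patience number from McGann's worked example, an agent forecast to free up in 60 seconds yields a wait time of 60, while one forecast to free up in 300 seconds yields the -1 sentinel.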
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the Traba-Low combination with McGann because the references are analogous art because they are both directed to solutions for predictive modeling and interaction routing, which falls within applicant’s field of endeavor (system and method for distributing interaction data to agents), and because modifying the Traba-Low combination to include McGann’s feature wherein said one or more future interaction events comprise interaction termination, in the manner claimed, would serve the motivation of better meeting real-time needs or desires of the contact center and allowing more efficient use of resources of the contact center (McGann, paragraph 0034); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

As per claim 8, the Traba-Low combination teaches a method according to claim 6. While Traba describes determining an agent predicted handle time (paragraphs 0120-0123), the Traba-Low combination does not explicitly teach wherein evaluating said interaction capacity of said agent comprises comparing an interaction latency of an agent to a threshold value. However, McGann in the analogous art of interaction routing systems teaches this concept.
McGann teaches: wherein evaluating said interaction capacity of said agent comprises comparing an interaction latency of an agent to a threshold value (paragraph 0059, discussing that it may be desirable to route the interaction to the second-best agent if the second-best agent has been idle for a maximum amount of time, or if the occupancy of the optimal agent is higher than a threshold value; paragraph 0125, discussing that in maximizing the total expected reward for multiple interactions in the queue, the alternate reward maximization module leverages information on customer patience and forecast agent availability if certain agents are not currently available to receive an interaction assignment. Based on the information, the alternate reward maximization module determines whether it should hold off routing an interaction to get a more optimal agent assignment…For example, information on the impact of caller wait time on the final NPS score may be used to calculate the customer patience number of a call intention type. In this regard, a wait time threshold may be identified for each intention type after which the NPS score drops. For instance, assume that for one of the intention types, it is observed that its Average Handling Time is 619 seconds and average caller Wait Time is 40 seconds (with 70% of the calls answered in less than 1 second), and the NPS score drops sharply only after 190 seconds. The customer “patience number” for this intention type may be set as 180 seconds. This example illustrates that customers are prepared to wait (for some time) for the right agent rather than settle for a lesser skilled agent.
Therefore, given reliable short term forecasts of agent availability, and an estimate of customer's patience or tolerance level for waiting before abandoning or negatively impacting outcomes, that time flexibility may be exploited to do a more optimal interaction-agent match; paragraph 0137, discussing that the module proceeds to calculate a predicted wait time associated with each of the candidate agents. According to one embodiment, the predicted wait time is based on the agent's current status and the threshold customer patience fo…
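The binary logistic regression scoring that the claim 2 discussion quotes from Traba paragraph 0112, in which each identified variable is multiplied by its regression coefficient and the result becomes a prediction score, can be sketched as follows. The feature names and coefficient values are invented for illustration; Traba names the correlated variables but not their weights.

```python
import math

# Illustrative logistic-regression scoring per Traba para. 0112: weighted sum
# of identified variables, squashed to a probability-like prediction score.
# Feature names and coefficient values below are invented assumptions.

def prediction_score(features, coefficients, intercept=0.0):
    """Sigmoid of (intercept + sum of feature value x regression coefficient)."""
    z = intercept + sum(
        features[name] * weight for name, weight in coefficients.items()
    )
    return 1.0 / (1.0 + math.exp(-z))

# E.g., a score for "customer will cancel service", with invented weights:
score = prediction_score(
    {"distress_events": 3, "agent_experience_yrs": 1, "customer_tenure_yrs": 0.5},
    {"distress_events": 0.8, "agent_experience_yrs": -0.4, "customer_tenure_yrs": -0.3},
)
# `score` lies in (0, 1); higher values indicate a higher predicted
# likelihood of the outcome.
```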

Prosecution Timeline

Apr 04, 2024
Application Filed
Nov 13, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602305
CUSTOMER JOURNEY PREDICTION AND RECOMMENDATION SYSTEMS AND METHODS
2y 5m to grant · Granted Apr 14, 2026
Patent 12591927
SYSTEMS AND METHODS FOR DETERMINING A GRAPHICAL USER INTERFACE FOR GOAL DEVELOPMENT
2y 5m to grant · Granted Mar 31, 2026
Patent 12591845
METHOD AND ARRANGEMENT FOR CARRYING OUT CONSTRUCTION MEASURES
2y 5m to grant · Granted Mar 31, 2026
Patent 12572876
SYSTEM AND METHOD FOR OBTAINING AUDIT EVIDENCE
2y 5m to grant · Granted Mar 10, 2026
Patent 12572866
STORE MANAGEMENT SYSTEM AND STORE MANAGEMENT METHOD
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 23%
With Interview (+34.1%): 57%
Median Time to Grant: 4y 6m
PTA Risk: Low
Based on 523 resolved cases by this examiner. Grant probability derived from career allow rate.
