Prosecution Insights
Last updated: April 18, 2026
Application No. 18/477,701

AUTOMATED QUALITY METRIC MODELS BASED ON CUSTOMER DATA

Final Rejection: §101, §103
Filed: Sep 29, 2023
Examiner: CHOY, PAN G
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Verint Americas Inc.
OA Round: 4 (Final)
Grant Probability: 24% (At Risk)
Projected OA Rounds: 5-6
Est. Time to Grant: 4y 11m
Grant Probability with Interview: 59%

Examiner Intelligence

Career Allow Rate: 24% (109 granted / 452 resolved; -27.9% vs TC avg)
Interview Lift: +35.0% (among resolved cases with interview)
Avg Prosecution: 4y 11m (40 applications currently pending)
Total Applications: 492 across all art units
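The headline figures above follow directly from the raw counts; a minimal sketch of the arithmetic, assuming the interview lift is simply the with-interview allowance rate minus the career allow rate:

```python
# Reproducing the examiner statistics shown above from the raw counts.
# The lift definition is an assumption, not documented by the source tool.
granted = 109
resolved = 452

career_allow_rate = granted / resolved   # fraction of resolved cases granted
with_interview_rate = 0.59               # allowance rate when an interview was held
interview_lift = with_interview_rate - career_allow_rate

print(f"Career allow rate: {career_allow_rate:.1%}")  # ≈ 24.1%
print(f"Interview lift:    {interview_lift:+.1%}")    # ≈ +34.9%
```

The small gap between the computed +34.9% and the displayed +35.0% is consistent with rounding in the dashboard.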

Statute-Specific Performance

§101: 33.9% (-6.1% vs TC avg)
§103: 41.5% (+1.5% vs TC avg)
§102: 3.8% (-36.2% vs TC avg)
§112: 18.7% (-21.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 452 resolved cases
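As a consistency check, each statute's Tech Center average estimate can be recovered from the examiner's rate and its stated delta; a minimal sketch, assuming each delta is the examiner's rate minus the Tech Center average:

```python
# Recovering the implied Tech Center averages from the figures shown above.
# (examiner rate %, delta vs TC avg %) per statute.
stats = {
    "101": (33.9, -6.1),
    "103": (41.5, +1.5),
    "102": (3.8, -36.2),
    "112": (18.7, -21.3),
}

implied = {statute: rate - delta for statute, (rate, delta) in stats.items()}
for statute, tc_avg in implied.items():
    print(f"§{statute}: implied TC avg {tc_avg:.1f}%")
```

All four deltas point back to the same ~40% Tech Center average estimate.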

Office Action

Rejections under §101 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Introduction

The following is a final Office Action in response to Applicant's communications received on February 19, 2026. Claims 1, 8 and 15 have been amended; claims 7, 14 and 20-22 have been canceled; and claim 23 has been added. Claims 1-6, 8-13, 15-19 and 23 are currently pending. Claims 1, 8 and 15 are independent.

Response to Amendments

Applicant's amendments necessitated the new ground(s) of rejection in this Office Action. Applicant's amendments to claims 1, 8 and 15 are NOT sufficient to overcome the 35 U.S.C. § 101 rejection set forth in the previous Office Action. The 35 U.S.C. § 101 rejection of claims 1-6, 8-13, 15-19 and 23 is therefore maintained.

Response to Arguments

Applicant's arguments filed on February 19, 2026 have been fully considered but are not persuasive. In the Remarks at page 11, Applicant argues with respect to the 35 U.S.C. § 101 rejection that the features of "removing the metadata from the interaction data…" and "adding text representations of the removed metadata…" by an adding component of the computing device integrate the alleged abstract idea into a practical application that automatically generates quality metrics for agent interactions. The Examiner respectfully disagrees.
In order for a claim to integrate the exception into a practical application, the additional claimed elements must, for example, improve the functioning of a computer or any other technology or technical field (see MPEP § 2106.05(a)), apply the judicial exception with a particular machine (see MPEP § 2106.05(b)), effect a transformation or reduction of a particular article to a different state or thing (see MPEP § 2106.05(c)), or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment (see MPEP § 2106.05(e)). See 2019 Revised Guidance.

Here, claim 1 recites an additional element of "a computing device" for performing the steps. This additional element is recited at a high level of generality and is merely invoked as a tool to perform generic computer functions, including receiving, storing, displaying and transmitting information over a network. None of the claim elements reflects an improvement to the functioning of the computing device or to another technology or technical field. In this regard, the courts have held that automating manual and mental processes on generic computers does not make an abstract idea patent eligible. See Credit Acceptance Corp. v. Westlake Servs., 859 F.3d 1044, 1055 (Fed. Cir. 2017) ("[A]utomation of manual processes using generic computers does not constitute a patentable improvement in computer technology."). Even if the additional element allows the computing device to perform the steps more quickly and efficiently, the improvement is not to the functioning of the computing device itself; the focus of the claims is not on such an improvement in computers as tools, but on certain independently abstract ideas that use computers as tools. See FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 1095 (Fed. Cir. 2016).
Therefore, using generic computer components to implement an abstract idea does not integrate the abstract idea into a practical application.

In the Remarks at page 13, Applicant argues that the cited paragraphs of Tapuhi at most teach that interaction data includes "…reasons for the interaction, disposition data, time on hold, handle time, etc.…" (see Tapuhi, paragraph [0056]), but that Tapuhi does not teach time stamps, let alone adding time stamps into the text transcript as claimed, and that there is no teaching or suggestion of adding time stamps to text transcripts anywhere in Tapuhi. The Examiner respectfully disagrees. Tapuhi discloses "the quality monitoring system generates a report or list of coaching sessions that were previously automatically generated for a particular agent, a timestamp corresponding to the date on which the coaching session was generated, and the reason or reasons for triggering the coaching session" (see ¶ 131). Therefore, under the broadest reasonable interpretation to one of ordinary skill in the art, Tapuhi teaches the limitation in the form claimed by Applicant.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-6, 8-13, 15-19 and 23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Under Step 1 of the subject matter eligibility analysis, it is determined whether the claim is directed to one of the four statutory categories of invention, i.e., a process, machine, manufacture, or composition of matter. In this case, claims 1-6 and 23 are directed to a method for quality management, which falls within the statutory category of a process.
Claims 8-13 are directed to a system comprising a computing device and a computer-readable medium, which falls within the statutory category of a machine. Claims 15-19 are directed to a non-transitory computer-readable medium with computer-executable instructions stored thereon, which falls within the statutory category of a product.

Under Step 2A of the subject matter eligibility analysis, it is determined whether the claim at issue is directed to a judicial exception (i.e., an abstract idea, a law of nature, or a natural phenomenon). Under this step, a two-prong inquiry is performed: first, determine whether the claim recites a judicial exception (an abstract idea enumerated in the 2019 Guidance); second, determine whether the claim recites additional elements that integrate the exception into a practical application of the exception. See 2019 Revised Patent Subject Matter Eligibility Guidance (2019 Guidance), 84 Fed. Reg. 50, 54-55 (January 7, 2019).

Under Prong One, it is determined whether the claim recites a judicial exception (an abstract idea enumerated in the 2019 Guidance, a law of nature, or a natural phenomenon).
Taking the method as representative, the claims recite the limitations of "receiving an evaluation including at least one question, removing the metadata from the interaction data, adding text representations of the removed metadata, including time stamps corresponding to the indicators of the times during the interaction…into the text transcript, generating a quality metric for the at least one question for the current interaction, associating the quality metric for the at least one question, providing the evaluation including the associated quality metric, receiving interaction data representing previous interactions with one or more agents, using a first portion of the interaction data…training the first large language model, using a second portion of interaction data…generating performance indicators for the first classifier, generating the quality metric for the question, determining that the generated performance indicators for the first large language model, generating the quality metric for the question…, generating performance indicators…, selecting one of the first large language model, the second large language model, and the first classifier, based on the generated performance indicators, selecting one of the first large language model, the second large language model, and the first classifier; generating a quality metric for the at least one question for the current interaction using the selected one of the first large language model, the second large language model, and the first classifier". None of the limitations recites technological implementation details for any of these steps; instead, each recites only results desired by any and all possible means. The limitations, as drafted, are directed to methods that allow a user to monitor and evaluate agent performance and manage interactions between agent and customer, which fall within the certain methods of organizing human activity grouping. See 2019 Guidance, 84 Fed. Reg. at 52.
The mere recitation of a computing device does not take the claims out of the certain methods of organizing human activity grouping. Accordingly, the claims recite an abstract idea, and the analysis proceeds to Prong Two.

Under Prong Two, it is determined whether the claim recites additional elements that integrate the exception into a practical application of the exception. Beyond the abstract idea, claim 1 recites the additional elements of "a computing device" for performing the steps and "a large language model". The specification describes these additional elements at a high level of generality; they are merely invoked as tools to perform generic computer functions, including receiving interaction data over a network. For example, "The components of the AQM engine 180 may be implemented together or separately using one or more general purpose computing devices such as the computer 500 shown in Fig. 5." (See ¶ 123). Thus, merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions does not automatically overcome an eligibility rejection. Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2358-59, 110 USPQ2d 1976, 1983-84 (2014); see also Bancorp Servs., L.L.C. v. Sun Life Assurance Co. of Canada (U.S.), 687 F.3d 1266, 1278 (Fed. Cir. 2012) (a computer "employed only for its most basic function… does not impose meaningful limits on the scope of those claims"). As to the learning/training per se, such an argument overlooks the entire education system. Reciting machine learning merely places such learning in a computer context, offering no technological implementation details beyond the conceptual idea of using machine learning. Simply implementing the abstract idea on a generic computer does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Further, nothing in the claims reflects an improvement to the functioning of a computer itself or to another technology, effects a transformation or reduction of a particular article to a different state or thing, or applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception. Therefore, the additional elements do not integrate the judicial exception into a practical application. The claims are directed to an abstract idea, and the analysis proceeds to Step 2B.

Step 2B of Alice is "a search for an 'inventive concept'—i.e., an element or combination of elements that is 'sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the [ineligible concept] itself.'" Id. (alteration in original) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 132 S. Ct. 1289, 1294 (2012)). As described under Prong Two above, nothing in the claims integrates the abstract idea into a practical application, and the same analysis applies here in Step 2B. Beyond the abstract idea, claim 1 recites the additional elements of "a computing device" for performing the steps and "a large language model". The specification describes these additional elements at a high level of generality; they are merely invoked as tools to perform generic computer functions, including receiving interaction data over a network. For example, "The components of the AQM engine 180 may be implemented together or separately using one or more general purpose computing devices such as the computer 500 shown in Fig. 5." (See ¶ 123). Taking the claim elements separately and as an ordered combination, the computing device, at best, performs generic computer functions, including receiving, manipulating, and transmitting information over a network.
However, generic computers performing generic computer functions have been recognized by the courts as merely well-understood, routine, and conventional. See MPEP 2106.05(d)(II) (receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)). Thus, simply implementing the abstract idea on a generic computer performing generic computer functions does not amount to significantly more than the abstract idea. (MPEP 2106.05(a)-(c), (e)-(f) & (h)).

For the foregoing reasons, claims 1-6 and 23 cover subject matter that is judicially excepted from patent eligibility under § 101 as discussed above, and claims 8-13 and 15-19, which parallel claims 1-6 and 23, similarly cover subject matter that is judicially excepted from patent eligibility under § 101. Therefore, the claims as a whole, viewed individually and as an ordered combination, do not provide meaningful limitations transforming the abstract idea into a patent-eligible application such that the claims amount to significantly more than the abstract idea itself. The claims are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-13, 15-19 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Tapuhi et al. (US 2018/0096617, hereinafter Tapuhi) in view of Chu et al. (WO 2024/177735, hereinafter Chu), and further in view of Davies et al. (US 2018/0196845, hereinafter Davies).

Regarding claim 1, Tapuhi discloses a method for automating quality metrics comprising: receiving an evaluation including at least one question of a plurality of questions and interaction data representing a current interaction with an agent by a receiving component of a computing device (see Abstract; ¶ 5, ¶ 9, ¶ 15, ¶ 86, ¶ 122), wherein the interaction data comprises a text transcript of the interaction and metadata (see ¶ 56, ¶ 63, ¶ 161-163); generating a quality metric for the at least one question for the current interaction using a first large language model or a first classifier and the interaction data representing the current interaction by a generating component of the computing device (see ¶ 75-76, ¶ 98, ¶ 105, ¶ 113, ¶ 122, ¶ 169); associating the quality metric for the at least one question with the current interaction by an associating component of the computing device (see ¶ 12, ¶ 16, ¶ 89, ¶ 169); and providing the evaluation including the associated quality metric for the at least one question to the agent by the computing device (see ¶ 75-76, ¶ 93, ¶
121-122). Tapuhi discloses training a neural network to compute an overall evaluation score for an interaction based on automatically answered questions (see ¶ 28). Tapuhi does not explicitly disclose using a first large language model or a first classifier; however, Chu, in an analogous art of ranking recommenders, discloses a first large language model or a first classifier (see Abstract; Fig. 15, # 1502-1506; ¶ 13, ¶ 79, ¶ 136). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefit of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. Since the combination of each element merely would have performed the same function as it did separately, one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Tapuhi discloses "storing one or more databases relating to agent data, customer data and interaction data (e.g., details of each interaction with a customer, reason for the interaction, disposition data, time on hold, handle time, etc.)" (see ¶ 56); "detecting any of the phrases within an interaction (e.g., speech to text transcript of a voice interaction or chat session) containing the phrases as relating to the associated topic. Topics can be grouped into meta topics and meta topics may be grouped with other meta topics and/or topics from semantic hierarchy or taxonomy of meta topics and topics" (see ¶ 63); and "the quality monitoring system also stores the identified portions include the locations of the identified portions within the interactions (e.g., start and end timestamps)" (see ¶ 70).
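The hold-time and timestamp metadata quoted above is the kind of data the claimed "removing the metadata…" and "adding text representations of the removed metadata…" steps operate on: structured annotations are stripped from the interaction record and re-injected into the transcript as plain text, so a single annotated document results. A minimal sketch of that transformation, with all names, data shapes, and the marker format being illustrative assumptions rather than anything from the application:

```python
# Illustrative sketch: remove structured metadata from interaction data and
# splice text representations of it (time-stamped hold markers and detected
# sentiments) into the text transcript, in timestamp order.

def annotate_transcript(interaction: dict) -> str:
    """Return the transcript with the removed metadata rendered inline as text."""
    metadata = interaction.pop("metadata")  # "removing the metadata"
    events = []
    for start, end in metadata.get("holds", []):
        events.append((start, f"[HOLD {start}s-{end}s]"))
    for ts, sentiment in metadata.get("sentiments", []):
        events.append((ts, f"[SENTIMENT {sentiment} @ {ts}s]"))

    # "adding text representations ... into the text transcript"
    lines = list(interaction["transcript"]) + events  # (timestamp, text) pairs
    return "\n".join(text for _, text in sorted(lines, key=lambda x: x[0]))


interaction = {
    "transcript": [(0, "Agent: How can I help?"), (40, "Customer: Still waiting.")],
    "metadata": {"holds": [(5, 35)], "sentiments": [(40, "negative")]},
}
print(annotate_transcript(interaction))
```

Because the sort is stable, a marker sharing a timestamp with a transcript line lands after that line, keeping the spoken text readable.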
Tapuhi and Chu do not explicitly disclose the following limitations; however, Davies, in an analogous art of providing interaction data, discloses removing the metadata from the interaction data by an adding component of the computing device (see Fig. 8, # 820-825; ¶ 25, ¶ 28, ¶ 30, ¶ 51), wherein the removed metadata comprises indicators of times during the current interaction when the agent was put on hold and one or more sentiments detected during the current interaction (see ¶ 39-40, ¶ 44-45, ¶ 60); and adding text representations of the removed metadata, including time stamps corresponding to the indicators of the times during the interaction when the agent was put on hold and the one or more sentiments, into the text transcript by the adding component of the computing device (see ¶ 25-26, ¶ 30, ¶ 42-44, ¶ 48, ¶ 58-60). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi in view of Chu to include the teaching of Davies in order to gain the commonly understood benefit of such adaptation, such as providing time-specific data locations and enabling better decision making. Since the combination of each element merely would have performed the same function as it did separately, one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 2, Tapuhi discloses the method of claim 1, further comprising: receiving interaction data representing previous interactions with one or more agents by the computing device, wherein each interaction is associated with a question of the plurality of questions and a quality metric (see ¶ 15, ¶ 110, ¶ 145, ¶ 162); and using a first portion of the interaction data representing the previous interactions (see Fig. 5, # 506; ¶ 65-68, ¶ 87, ¶ 149).
Tapuhi discloses training a neural network to compute an overall evaluation score for the interaction (see ¶ 28). Tapuhi does not explicitly disclose the following limitations; however, Chu discloses training the first large language model or the first classifier by the computing device (see Abstract; Fig. 15, # 1502-1506; ¶ 9, ¶ 13, ¶ 79, ¶ 136). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefit of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. Since the combination of each element merely would have performed the same function as it did separately, one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 3, Tapuhi discloses the method of claim 2, further comprising: using a second portion of the interaction data representing the previous interactions, generating performance indicators for the first classifier and the first large language model (see ¶ 15, ¶ 110, ¶ 145, ¶ 162); and generating the quality metric for the question for the current interaction using one of the first large language model or the first classifier based on the performance indicators (see ¶ 75-76, ¶ 98, ¶ 105, ¶ 113, ¶ 122). Tapuhi does not explicitly disclose a first large language model or a first classifier; however, Chu, in an analogous art of ranking recommenders, discloses the first large language model or the first classifier (see Abstract; Fig. 15, # 1502-1506; ¶ 13, ¶ 79, ¶ 136).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefit of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. Since the combination of each element merely would have performed the same function as it did separately, one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 4, Tapuhi discloses the method of claim 3, further comprising: using the second portion of the interaction data, generating performance indicators for a second large language model or a second classifier (see ¶ 15, ¶ 110, ¶ 145, ¶ 162), wherein the second large language model is not trained using the interaction data and the second classifier is not trained using the first interaction data (see ¶ 148-149, ¶ 163); and generating the quality metric for the question for the current interaction using one of the first large language model, the second large language model, the first classifier, or the second classifier based on the performance indicators (see ¶ 75-76, ¶ 98, ¶ 105, ¶ 113, ¶ 122). Tapuhi does not explicitly disclose a first large language model, a second large language model, a first classifier, and a second classifier; however, Chu, in an analogous art of ranking recommenders, discloses a first large language model, a second large language model, a first classifier, and a second classifier (see ¶ 9-13, ¶ 78-80).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefit of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. Since the combination of each element merely would have performed the same function as it did separately, one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 5, Tapuhi discloses the method of claim 4, further comprising: determining that the generated performance indicators for the first large language model, the second large language model, the first classifier, and the second classifier all fall below a threshold (see ¶ 75, ¶ 94, ¶ 110, ¶ 135); and in response to the determination, generating the quality metric for the question for the current interaction using some combination of the first large language model, the second large language model, the first classifier and the second classifier based on the performance indicators (see ¶ 75-76, ¶ 93, ¶ 113, ¶ 122, ¶ 137-138). Tapuhi does not explicitly disclose a first large language model, a second large language model, a first classifier, and a second classifier; however, Chu discloses a first large language model, a second large language model, a first classifier, and a second classifier (see ¶ 9-13, ¶ 78-80). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefit of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency.
Since the combination of each element merely would have performed the same function as it did separately, one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 6, Tapuhi does not explicitly disclose the following limitations; however, Chu discloses the method of claim 5, wherein the first classifier comprises a neural network classifier or XGBoost (see ¶ 78-80, ¶ 124). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefit of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. Since the combination of each element merely would have performed the same function as it did separately, one of ordinary skill in the art would have recognized that the results of the combination were predictable.
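Claims 3-6, as mapped above, describe a model-selection scheme: score each candidate (trained LLM, untrained LLM, classifiers) via performance indicators computed on a held-out second portion of the interaction data, pick the best performer, and fall back to some combination of the models when every indicator falls below a threshold. A minimal sketch of that selection logic; the candidate names, scoring interface, and the unweighted-average combination are illustrative assumptions, not from the application:

```python
# Sketch of the claimed selection logic: choose the candidate with the best
# performance indicator, or combine all candidates when every indicator is
# below the threshold (the claim 5 fallback).

def select_scorer(candidates: dict, threshold: float = 0.7):
    """candidates: name -> (quality-metric function, performance indicator)."""
    if all(perf < threshold for _, perf in candidates.values()):
        # Every indicator below threshold: use a combination of the models
        # (here, an unweighted average of their quality-metric outputs).
        models = [model for model, _ in candidates.values()]
        return lambda interaction: sum(m(interaction) for m in models) / len(models)
    best = max(candidates, key=lambda name: candidates[name][1])
    return candidates[best][0]


candidates = {
    "llm_finetuned":  (lambda x: 0.9, 0.81),  # trained on the first portion
    "llm_pretrained": (lambda x: 0.6, 0.64),  # not trained on interaction data
    "classifier":     (lambda x: 0.7, 0.58),  # e.g., an XGBoost-style model
}
scorer = select_scorer(candidates)
print(scorer("transcript..."))  # prints 0.9: llm_finetuned has the best indicator
```

Lowering the threshold above 0.81 in this example would trigger the fallback, so the returned scorer would average all three models instead.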
Regarding claim 8, Tapuhi discloses a system for automating quality metrics comprising: a computing device (see ¶ 171); and a computer-readable medium with computer-executable instructions stored thereon that when executed by the computing device cause the computing device (see ¶ 19, ¶ 172) to: receive an evaluation including at least one question of a plurality of questions and interaction data representing a current interaction with an agent (see Abstract; ¶ 5, ¶ 15, ¶ 86, ¶ 122), wherein the interaction data comprises a text transcript of the interaction and metadata (see ¶ 56, ¶ 63, ¶ 161-163); generate a quality metric for the at least one question for the current interaction using a first large language model or a first classifier and the interaction data representing the current interaction (see ¶ 75-76, ¶ 98, ¶ 105, ¶ 113, ¶ 122); associate the quality metric for the at least one question with the current interaction (see ¶ 12, ¶ 16, ¶ 89, ¶ 169); and provide the evaluation including the associated quality metric for the at least one question to the agent by the computing device (see ¶ 75-76, ¶ 93, ¶ 121-122).

Tapuhi discloses training a neural network to compute an overall evaluation score for an interaction based on automatically answered questions (see ¶ 28). Tapuhi does not explicitly disclose using a first large language model or a first classifier; however, Chu, in an analogous art of ranking recommenders, discloses a first large language model or a first classifier (see Abstract; Fig. 15, # 1502-1506; ¶ 13, ¶ 79, ¶ 136). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefit of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency.
Since the combination of each element merely would have performed the same function as it did separately, one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Tapuhi discloses "storing one or more databases relating to agent data, customer data and interaction data (e.g., details of each interaction with a customer, reason for the interaction, disposition data, time on hold, handle time, etc.)" (see ¶ 56); "detecting any of the phrases within an interaction (e.g., speech to text transcript of a voice interaction or chat session) containing the phrases as relating to the associated topic. Topics can be grouped into meta topics and meta topics may be grouped with other meta topics and/or topics from semantic hierarchy or taxonomy of meta topics and topics" (see ¶ 63); and "the quality monitoring system also stores the identified portions include the locations of the identified portions within the interactions (e.g., start and end timestamps)" (see ¶ 70).

Tapuhi and Chu do not explicitly disclose the following limitations; however, Davies, in an analogous art of providing interaction data, discloses remove the metadata from the interaction data by an adding component of the computing device (see Fig. 8, # 820-825; ¶ 25, ¶ 28, ¶ 30, ¶ 51), wherein the removed metadata comprises indicators of times during the current interaction when the agent was put on hold and one or more sentiments detected during the current interaction (see ¶ 39-40, ¶ 44-45, ¶ 60); and add text representations of the removed metadata, including time stamps corresponding to the indicators of the times during the interaction when the agent was put on hold and the one or more sentiments, into the text transcript by the adding component of the computing device (see ¶ 25-26, ¶ 30, ¶ 42-44, ¶ 48, ¶ 58-60).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi in view of Chu to include the teaching of Davies in order to gain the commonly understood benefit of such adaptation, such as providing time-specific data locations and enabling better decision making. Since the combination of each element merely would have performed the same function as it did separately, one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 9, Tapuhi discloses the system of claim 8, further comprising computer-executable instructions that when executed by the computing device cause the computing device to: receive interaction data representing previous interactions with one or more agents by the computing device, wherein each interaction is associated with a question of the plurality of questions and a quality metric (see ¶ 15, ¶ 110, ¶ 145, ¶ 162); and use a first portion of the interaction data representing the previous interactions (see Fig. 5, # 506; ¶ 65-68, ¶ 87, ¶ 149). Tapuhi discloses training a neural network to compute an overall evaluation score for the interaction (see ¶ 28). Tapuhi does not explicitly disclose the following limitations; however, Chu discloses training the first large language model or the first classifier (see Abstract; Fig. 15, # 1502-1506; ¶ 9, ¶ 13, ¶ 79, ¶ 136). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefit of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency.
The combination of each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 10, Tapuhi discloses the system of claim 9, further comprising computer-executable instructions that when executed by the computing device cause the computing device to: using a second portion of the interaction data representing the previous interactions, generate performance indicators for the first classifier and the first large language model (see ¶ 15, ¶ 110, ¶ 145, ¶ 162); and generate the quality metric for the question for the current interaction using one of the first large language model or the first classifier based on the performance indicators (see ¶ 75-76, ¶ 98, ¶ 105, ¶ 113, ¶ 122). Tapuhi does not explicitly disclose using a first large language model or a first classifier; however, Chu, in an analogous art for a ranking recommender, discloses the first large language model or the first classifier (see Abstract; Fig. 15, # 1502-1506; ¶ 13, ¶ 79, ¶ 136). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefits of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. The combination of each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Regarding claim 11, Tapuhi discloses the system of claim 10, further comprising computer-executable instructions that when executed by the computing device cause the computing device to: using the second portion of the interaction data, generate performance indicators for a second large language model or a second classifier (see ¶ 15, ¶ 110, ¶ 145, ¶ 162), wherein the second large language model is not trained using the interaction data and the second classifier is not trained using the first interaction data (see ¶ 148-149, ¶ 163); and generate the quality metric for the question for the current interaction using one of the first large language model, the second large language model, the first classifier, or the second classifier based on the performance indicators (see ¶ 75-76, ¶ 98, ¶ 105, ¶ 113, ¶ 122). Tapuhi does not explicitly disclose a first large language model, a second large language model, a first classifier, and a second classifier; however, Chu, in an analogous art for a ranking recommender, discloses a first large language model, a second large language model, a first classifier, and a second classifier (see ¶ 9-13, ¶ 78-80). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefits of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. The combination of each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Regarding claim 12, Tapuhi discloses the system of claim 11, further comprising computer-executable instructions that when executed by the computing device cause the computing device to: determine that the generated performance indicators for the first large language model, the second large language model, the first classifier, and the second classifier all fall below a threshold (see ¶ 75, ¶ 94, ¶ 110, ¶ 135); and, in response to the determination, generate the quality metric for the question for the current interaction using some combination of the first large language model, the second large language model, the first classifier, and the second classifier based on the performance indicators (see ¶ 75-76, ¶ 93, ¶ 113, ¶ 122, ¶ 137-138). Tapuhi does not explicitly disclose a first large language model, a second large language model, a first classifier, and a second classifier; however, Chu discloses a first large language model, a second large language model, a first classifier, and a second classifier (see ¶ 9-13, ¶ 78-80). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefits of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. The combination of each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 13, Tapuhi does not explicitly disclose the following limitations; however, Chu discloses the system of claim 12, wherein the first classifier comprises a neural network classifier or XGBoost (see ¶ 78-80, ¶ 124).
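For illustration, the claim 11/12 logic (select the best-performing model, or fall back to a combination when every performance indicator falls below a threshold) can be sketched as follows. The threshold value, the weighting scheme, and all names are assumptions of the sketch, not taken from the claims or the cited references:

```python
def pick_scorer(indicators: dict[str, float], threshold: float = 0.8):
    """Select one model by its performance indicator, or, when all
    indicators fall below the threshold, return a combination of them."""
    if all(v < threshold for v in indicators.values()):
        # Claim-12-style fallback: combine the models; here, an
        # indicator-weighted average of their per-question quality metrics.
        def ensemble(scores: dict[str, float]) -> float:
            total = sum(indicators.values())
            return sum(scores[m] * indicators[m] for m in indicators) / total
        return ("ensemble", ensemble)
    # Otherwise use the single model with the best performance indicator.
    best = max(indicators, key=indicators.get)
    return ("single", best)


# At least one indicator clears the threshold: a single model is selected.
mode, choice = pick_scorer({"llm_1": 0.91, "llm_2": 0.74, "clf_1": 0.66})
```

The single-vs-ensemble decision is the structural point: the performance indicators computed on the held-out second portion of the interaction data both rank the candidate models and gate the fallback.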
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefits of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. The combination of each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 15, Tapuhi discloses a non-transitory computer-readable medium with computer-executable instructions stored thereon that when executed by a computing device cause the computing device (see ¶ 19, ¶ 170) to: receive an evaluation including at least one question of a plurality of questions and interaction data representing a current interaction with an agent (see Abstract; ¶ 5, ¶ 15, ¶ 86, ¶ 122), wherein the interaction data comprises a text transcript of the interaction and metadata (see ¶ 56, ¶ 63, ¶ 161-163), and wherein the metadata identifies one or more events that occurred during the current interaction (see ¶ 44, ¶ 63, ¶ 72); generate a quality metric for the at least one question for the current interaction using a first large language model or a first classifier and the interaction data representing the current interaction (see ¶ 75-76, ¶ 98, ¶ 105, ¶ 113, ¶ 122); associate the quality metric for the at least one question with the current interaction (see ¶ 12, ¶ 16, ¶ 89, ¶ 169); and provide the evaluation including the associated quality metric for the at least one question to the agent by the computing device (see ¶ 75-76, ¶ 93, ¶ 121-122). Tapuhi discloses training a neural network to compute an overall evaluation score for an interaction based on automatically answered questions (see ¶ 28).
Tapuhi does not explicitly disclose using a first large language model or a first classifier; however, Chu, in an analogous art for a ranking recommender, discloses a first large language model or a first classifier (see Abstract; Fig. 15, # 1502-1506; ¶ 13, ¶ 79, ¶ 136). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefits of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. The combination of each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Tapuhi discloses "storing one or more databases relating to agent data, customer data and interaction data (e.g., details of each interaction with a customer, reason for the interaction, disposition data, time on hold, handle time, etc.)" (see ¶ 56); "detecting any of the phrases within an interaction (e.g., speech to text transcript of a voice interaction or chat session) containing the phrases as relating to the associated topic. Topics can be grouped into meta topics and meta topics may be grouped with other meta topics and/or topics from a semantic hierarchy or taxonomy of meta topics and topics" (see ¶ 63); and "the quality monitoring system also stores the identified portions, including the locations of the identified portions within the interactions (e.g., start and end timestamps)" (see ¶ 70).

Tapuhi and Chu do not explicitly disclose removing the metadata from the interaction data or adding text representations of the removed metadata; however, Davies, in an analogous art for providing interaction data, discloses removing the metadata from the interaction data by an adding component of the computing device (see Fig. 8, # 820-825; ¶ 25, ¶ 28, ¶ 30, ¶ 51), wherein the removed metadata comprises indicators of times during the current interaction when the agent was put on hold and one or more sentiments detected during the current interaction (see ¶ 39-40, ¶ 44-45, ¶ 60); and adding text representations of the removed metadata, including time stamps corresponding to the indicators of the times during the interaction when the agent was put on hold and the one or more sentiments, into the text transcript by the adding component of the computing device (see ¶ 25-26, ¶ 30, ¶ 42-44, ¶ 48, ¶ 58-60). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi in view of Chu to include the teaching of Davies in order to gain the commonly understood benefits of such adaptation, such as providing time-specific data locations and enabling better decision making. The combination of each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 16, Tapuhi discloses the computer-readable medium of claim 15, further comprising computer-executable instructions that when executed by the computing device cause the computing device to: receive interaction data representing previous interactions with one or more agents by the computing device, wherein each interaction is associated with a question of the plurality of questions and a quality metric (see ¶ 15, ¶ 110, ¶ 145, ¶ 162); and, using a first portion of the interaction data representing the previous interactions, train the first large language model or the first classifier (see Fig. 5, # 506; ¶ 65-68, ¶ 87, ¶ 149). Tapuhi discloses training a neural network to compute an overall evaluation score for the interaction (see ¶ 28).
Tapuhi does not explicitly disclose the following limitations; however, Chu discloses training the first large language model or the first classifier by the computing device (see Abstract; Fig. 15, # 1502-1506; ¶ 9, ¶ 13, ¶ 79, ¶ 136). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefits of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. The combination of each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 17, Tapuhi discloses the computer-readable medium of claim 16, further comprising computer-executable instructions that when executed by the computing device cause the computing device to: using a second portion of the interaction data representing the previous interactions, generate performance indicators for the first classifier and the first large language model (see ¶ 15, ¶ 110, ¶ 145, ¶ 162); and generate the quality metric for the question for the current interaction using one of the first large language model or the first classifier based on the performance indicators (see ¶ 75-76, ¶ 98, ¶ 105, ¶ 113, ¶ 122). Tapuhi does not explicitly disclose using a first large language model or a first classifier; however, Chu, in an analogous art for a ranking recommender, discloses the first large language model or the first classifier (see Abstract; Fig. 15, # 1502-1506; ¶ 13, ¶ 79, ¶ 136).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefits of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. The combination of each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 18, Tapuhi discloses the computer-readable medium of claim 17, further comprising computer-executable instructions that when executed by the computing device cause the computing device to: using the second portion of the interaction data, generate performance indicators for a second large language model or a second classifier (see ¶ 15, ¶ 110, ¶ 145, ¶ 162), wherein the second large language model is not trained using the interaction data and the second classifier is not trained using the first interaction data (see ¶ 148-149, ¶ 163); and generate the quality metric for the question for the current interaction using one of the first large language model, the second large language model, the first classifier, or the second classifier based on the performance indicators (see ¶ 75-76, ¶ 98, ¶ 105, ¶ 113, ¶ 122). Tapuhi does not explicitly disclose a first large language model, a second large language model, a first classifier, and a second classifier; however, Chu, in an analogous art for a ranking recommender, discloses a first large language model, a second large language model, a first classifier, and a second classifier (see ¶ 9-13, ¶ 78-80).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefits of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. The combination of each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 19, Tapuhi discloses the computer-readable medium of claim 18, further comprising computer-executable instructions that when executed by the computing device cause the computing device to: determine that the generated performance indicators for the first large language model, the second large language model, the first classifier, and the second classifier all fall below a threshold (see ¶ 75, ¶ 94, ¶ 110, ¶ 135); and, in response to the determination, generate the quality metric for the question for the current interaction using some combination of the first large language model, the first classifier, the second large language model, and the second classifier based on the performance indicators (see ¶ 75-76, ¶ 93, ¶ 113, ¶ 122, ¶ 137-138). Tapuhi does not explicitly disclose a first large language model, a second large language model, a first classifier, and a second classifier; however, Chu discloses a first large language model, a second large language model, a first classifier, and a second classifier (see ¶ 9-13, ¶ 78-80).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefits of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. The combination of each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 23, Tapuhi discloses the method of claim 1, wherein the first large language model and the first classifier were trained using historical interaction data, and further comprising: generating performance indicators for a second large language model, the first large language model, and the first classifier, wherein the second large language model was not trained using the historical interaction data (see ¶ 43, ¶ 74-76); generating performance indicators for each of the first large language model, the second large language model, and the first classifier (see ¶ 93-95, ¶ 100); based on the generated performance indicators, selecting one of the first large language model, the second large language model, and the first classifier (see ¶ 112, ¶ 125); and generating a quality metric for the at least one question for the current interaction using the selected one of the first large language model, the second large language model, and the first classifier (see ¶ 75-76, ¶ 95, ¶ 105, ¶ 119-122, ¶ 163). Tapuhi does not explicitly disclose a first large language model, a second large language model, and a first classifier; however, Chu discloses a first large language model, a second large language model, and a first classifier (see ¶ 9-13, ¶ 78-79, ¶ 88, ¶ 136).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Tapuhi to include the teaching of Chu in order to gain the commonly understood benefits of such adaptation, such as providing a more optimal solution with a specified model, in turn improving operational efficiency. The combination of each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hoeg (US 2014/0249813) discloses a method for interfaces allowing limited edits to data corresponding to transcripts in response to commands received via a user interface module. Hasan et al. (US 2023/0342557) discloses a system for training a virtual agent comprising storing conversations between the virtual agent and a user in logs. Lev-tov et al. (WO 2018/064199) discloses a method for automatic quality management that includes receiving agent interactions with customers, evaluating a plurality of answers corresponding to the questions, and identifying one or more underperforming quality metrics in accordance with comparisons of the aggregated quality metrics with threshold values. Licato et al. (US 2024/0403710) discloses a method that includes prompting a first trained large language model to generate a plurality of arguments and determining a ranking of the plurality of arguments using a second trained large language model. Fan et al. (CN 111382573) discloses a method for answer quality evaluation that comprises extracting answer characteristics for a question and using a quality metric and a correlation metric to determine a quality score for the answer.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAN CHOY whose telephone number is (571) 270-7038. The examiner can normally be reached on a 5/4/9 compressed work schedule. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jerry O'Connor, can be reached on 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PAN G CHOY/Primary Examiner, Art Unit 3624

Prosecution Timeline

Sep 29, 2023
Application Filed
Apr 19, 2025
Non-Final Rejection — §101, §103
Jul 22, 2025
Response Filed
Aug 08, 2025
Final Rejection — §101, §103
Oct 27, 2025
Request for Continued Examination
Oct 31, 2025
Response after Non-Final Action
Nov 15, 2025
Non-Final Rejection — §101, §103
Nov 25, 2025
Interview Requested
Dec 10, 2025
Applicant Interview (Telephonic)
Dec 10, 2025
Examiner Interview Summary
Feb 19, 2026
Response Filed
Apr 07, 2026
Final Rejection — §101, §103
Apr 16, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12548101
TRANSPORTATION OPERATOR COLLABORATION FOR ENHANCED USER EXPERIENCE AND OPERATIONAL EFFICIENCY
2y 5m to grant Granted Feb 10, 2026
Patent 12511600
SYSTEMS AND METHODS FOR SIMULATION FORECASTING INCLUDING DYNAMIC REALIGNMENT
2y 5m to grant Granted Dec 30, 2025
Patent 12505462
ACTIONABLE KPI-DRIVEN SEGMENTATION
2y 5m to grant Granted Dec 23, 2025
Patent 12450522
METHOD AND SYSTEM FOR ANALYZING PURCHASES OF SERVICE AND SUPPLIER MANAGEMENT
2y 5m to grant Granted Oct 21, 2025
Patent 12367439
Swarm Based Orchard Management
2y 5m to grant Granted Jul 22, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
24%
Grant Probability
59%
With Interview (+35.0%)
4y 11m
Median Time to Grant
High
PTA Risk
Based on 452 resolved cases by this examiner. Grant probability derived from career allow rate.
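The with-interview figure above appears to combine the base grant probability and the interview lift additively, in percentage points. A one-line sanity check (variable names are illustrative, not from the dashboard):

```python
base_grant = 0.24      # examiner's career allow rate
interview_lift = 0.35  # reported interview lift, in percentage points
with_interview = base_grant + interview_lift
print(f"{with_interview:.0%}")  # matches the 59% shown above
```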
