Prosecution Insights
Last updated: April 19, 2026
Application No. 18/652,452

SYSTEM AND METHOD FOR SUGGESTING ANSWERS ON AGENT PERFORMANCE EVALUATION FORMS USING GENERATIVE ARTIFICIAL INTELLIGENCE

Non-Final OA: §101, §103, §112
Filed: May 01, 2024
Examiner: PATEL, SHREYANS A
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Nice Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 89% (Favorable)
OA Rounds: 1-2
To Grant: 2y 3m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 89% — above average (359 granted / 403 resolved; +27.1% vs TC avg)
Interview Lift: +7.4% (moderate), comparing resolved cases with vs. without an interview
Avg Prosecution: 2y 3m typical timeline; 46 applications currently pending
Career History: 449 total applications across all art units

Statute-Specific Performance

§101: 21.3% (-18.7% vs TC avg)
§103: 36.0% (-4.0% vs TC avg)
§102: 22.6% (-17.4% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 403 resolved cases.

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 20 recites the limitation "The non-transitory computer-readable medium of claim 18." Claim 18, however, is a method claim. There is insufficient antecedent basis for this limitation in the claim. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1, 10 and 19 are ineligible because they are directed to an abstract idea—namely, automating human performance evaluation and form filling (a method of organizing human activity and a mental process) using generic computer components.
At its core, the claim merely recites a system that selects an agent–customer interaction in a CRM, retrieves an evaluation form, obtains a transcript, sends that data to a generative AI/LLM to get suggested answers, writes those answers back into the form, and displays the updated evaluation. The “processor,” “non-transitory computer-readable medium,” “CRM interface,” “generative AI service,” and “request object” are all conventional computing elements performing routine data gathering, formatting, sending to a black-box service, receiving a response, and displaying results; they do not improve the functioning of the computer or the LLM itself, nor do they recite any unconventional technical implementation. As a result, the additional elements do not amount to “significantly more” than the abstract idea, and the claims are rejected under 35 U.S.C. 101 as being directed to an abstract idea without an inventive concept. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claims are (i) mere instructions to implement the idea on a computer, and/or (ii) recitations of generic computer structure that serves to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the pertinent industry. Viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a patent-eligible application such that the claims amount to significantly more than the abstract idea itself. The claims are therefore rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. There is further no improvement to the computing device.
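As a purely editorial illustration of the data flow the examiner characterizes here, the selected interaction, fetched form, and transcript could be bundled into a single request object and the LLM's suggestions written back into the form. Every function and field name below is hypothetical; this is a sketch of the claimed flow, not the applicant's or any vendor's implementation.

```python
# Illustrative sketch only; all names are invented, not drawn from the application.

def build_request_object(questions, transcript):
    """Bundle the questions, their answer options, and the transcript into one prompt."""
    lines = ["Answer each evaluation question using only the listed options.",
             "Transcript: " + transcript]
    for i, q in enumerate(questions, 1):
        lines.append(f"Q{i}: {q['text']} Options: {', '.join(q['options'])}")
    return {"prompt": "\n".join(lines)}

def suggest_answers(interaction, form, call_llm):
    """Claimed flow: transcript -> request object -> LLM -> updated evaluation form."""
    request = build_request_object(form["questions"], interaction["transcript"])
    suggestions = call_llm(request)            # generative AI service round trip
    for q, answer in zip(form["questions"], suggestions):
        q["suggested_answer"] = answer         # write suggestions back into the form
    return form                                # updated evaluation, ready for display
```

Under the examiner's reading, each of these steps is generic data handling; the sketch simply makes that characterization concrete.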
Dependent claims 2-9, 11-18 and 20 further recite an abstract idea performable by a human and do not amount to significantly more than the abstract idea, as they do not provide steps beyond what is conventionally known in question-and-answer management systems.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5-6, 10, 14-15 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Tapuhi et al. (US Patent 10,902,737) in view of Tai et al. ("An Examination…of LLM to Aid Analysis of Textual Data"; Jan 2024; pgs. 1-14).
Regarding claims 1, 10 and 19, Tapuhi teaches a performance evaluation system configured to intelligently suggest answers to questions on performance evaluations using a generative artificial intelligence (AI) service, the performance evaluation system comprising: a processor and a non-transitory computer readable medium operably coupled thereto, the computer readable medium comprising a plurality of instructions stored in association therewith that are accessible to, and executable by, the processor, to perform intelligent suggestion operations ([Fig. 12B] processor; non-transitory CRM; receiving an evaluation form and an interaction, and computing the overall evaluation score), which comprise:

receiving a selection of an interaction between a user and an agent of a customer relationship management (CRM) system, wherein the selection designates a performance evaluation that evaluates the agent for the interaction ([Fig. 2] step 202; an interaction for evaluation is identified along with an evaluation form to use to evaluate the interaction);

fetching evaluation form data for the performance evaluation, wherein the evaluation form data includes one or more questions with one or more answer options to each of the one or more questions ([Fig. 3] [col. 8 line 8 to col. 9 line 5] step 310; an evaluation form includes one or more questions that relate to an agent’s performance; the form developer also sets the data type of the answers, e.g., whether the answers are yes/no type);

determining a transcript of the interaction between the user and the agent, wherein the transcript comprises text data available or converted from the interaction ([col. 5 lines 1-16] [col. 7 lines 16-31] recorded calls may be processed by speech recognition module 44 to generate recognized text; topics are detected within a speech-to-text transcript of a voice interaction or the transcript of a text-based chat session);

responsive to receiving the one or more suggested answers, updating the performance evaluation to include the one or more suggested answers to the one or more questions ([Fig. 10] automatic evaluation: a quality monitoring system is capable of automatically filling in answers to at least some portions of the evaluation form based on an automatic analysis of the interaction; the scores and answers are stored for later output after answering each question); and

outputting the updated performance evaluation in an interface of the CRM system for an evaluation process that utilizes the performance evaluation ([Figs. 5, 9A, 10] evaluation scores are stored for output and used for training/coaching; the customized training session is presented to the agent via the agent device).

Tapuhi teaches building inputs for an automatic filling engine, not an LLM. The difference between the prior art and the claimed invention is that Tapuhi does not explicitly teach generating, for the generative AI service comprising at least one large language model (LLM), a request object for one or more suggested answers to the one or more questions based on the one or more answer options and the transcript, wherein the request object prompts the at least one LLM of the generative AI service to respond to the one or more questions with the one or more suggested answers based at least on the one or more answer options and the transcript.
Tai teaches generating, for the generative AI service comprising at least one large language model (LLM), a request object for one or more suggested answers to the one or more questions based on the one or more answer options and the transcript, wherein the request object prompts the at least one LLM of the generative AI service to respond to the one or more questions with the one or more suggested answers based at least on the one or more answer options and the transcript ([Literature Review] [LLM and Data Processing] [LLMs and Empirical Research] [Results] a codebook and sample text were inputted into the LLM; the LLM determines whether the codes are present in the provided sample text and is asked for evidence to support the coding; the codes (questions) and sample text (transcript) are transmitted as a prompt to the LLM, which returns an answer per code). Tai further teaches requesting the one or more suggested answers from the generative AI service based on the request object ([Literature Review] [LLM and Data Processing] [LLMs and Empirical Research] [Results] the sample text and codebook were inputted into an LLM; each prompt submission asks the LLM to determine whether the codes are present, as a request for answers on the codes).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Tapuhi with the teachings of Tai by modifying the system and method for automatic quality evaluation of interactions as taught by Tapuhi to include generating, for the generative AI service comprising at least one large language model (LLM), a request object for one or more suggested answers to the one or more questions based on the one or more answer options and the transcript, wherein the request object prompts the at least one LLM of the generative AI service to respond to the one or more questions with the one or more suggested answers based at least on the one or more answer options
and the transcript, and requesting the one or more suggested answers from the generative AI service based on the request object, as taught by Tai, for the benefit of providing a systematic and reliable platform for code identification and offering a means of avoiding analysis misalignment (Tai [Abstract]).

Regarding claims 5 and 14, Tapuhi further teaches the performance evaluation system of claim 1, wherein, prior to generating the request object, the intelligent suggestion operations further comprise: filtering the one or more questions for questions marked for autosuggested answers from the generative AI service ([Fig. 7] [col. 21 lines 13-27] filtering one or more questions marked for autosuggested answers is identical to selecting only Q&A for automatic answering; generating a list of Q&A and using those questions as input to automatic evaluation/prediction effectively performs that filtering; prediction model).

Regarding claims 6 and 15, Tapuhi further teaches the performance evaluation system of claim 1, wherein the generating the request object is further based on additional relevant data associated with at least one of answer guidance, answer policies, or documentation associated with a company entity corresponding to the CRM system ([col. 9 lines 54-60] [Fig. 2] for each question q of the evaluation form (operation 204), in operation 206 the quality monitoring system identifies one or more portions of the interaction that are relevant to the question q).

Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Tapuhi et al. (US Patent 10,902,737) in view of Tai et al. ("An Examination…of LLM to Aid Analysis of Textual Data"; Jan 2024; pgs. 1-14) and further in view of Warren (US 2004/0036722). Regarding claims 7 and 16, Tapuhi and Tai teach all the limitations of claim 1.
The difference between the prior art and the claimed invention is that neither Tapuhi nor Tai explicitly teaches wherein the generating the request object is further based on one of user-defined prompts comprising user input for a question prompt to the generative AI service, or an autogenerated prompt if one of the user-defined prompts is not present for a corresponding question.

Warren teaches wherein the generating the request object is further based on one of user-defined prompts comprising user input for a question prompt to the generative AI service, or an autogenerated prompt if one of the user-defined prompts is not present for a corresponding question ([Summary of the Invention] [0025-0026] the system receives user input defining a user-defined prompt; the user-defined prompt replaces the pre-configured prompt within the next text box; the system determines whether a user-defined prompt has been previously defined for the text box; if a user-defined prompt has been previously defined, the system displays the text box with the user-defined prompt; if not previously defined, the system displays the text box with a pre-configured prompt displayed within the text box).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Tapuhi and Tai with the teachings of Warren by modifying the system and method for automatic quality evaluation of interactions as taught by Tapuhi to include wherein the generating the request object is further based on one of user-defined prompts comprising user input for a question prompt to the generative AI service, or an autogenerated prompt if one of the user-defined prompts is not present for a corresponding question, as taught by Warren, for the benefit of incorporating helpful instructional capabilities into software that can be effectively targeted to particular matters that confront users of all skill levels (Warren [0018]).
Claims 8-9 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Tapuhi et al. (US Patent 10,902,737) in view of Tai et al. ("An Examination…of LLM to Aid Analysis of Textual Data"; Jan 2024; pgs. 1-14) and further in view of Hewitt (US 2024/0037344).

Regarding claims 8 and 17, Tapuhi further teaches the performance evaluation system of claim 1, wherein the intelligent suggestion operations further comprise: displaying the one or more suggested answers in association with a corresponding one of the one or more questions ([Fig. 12A] display device for displaying suggested Q&A). The difference between the prior art and the claimed invention is that neither Tapuhi nor Tai explicitly teaches providing an option to verify or modify the one or more suggested answers; and, responsive to completing each of the one or more questions, publishing the updated performance evaluation having the one or more questions completed based, at least in part, on the one or more suggested answers.

Hewitt teaches providing an option to verify or modify the one or more suggested answers ([Fig. 2] [0026-0027] provides the capability to accept and edit the suggested responses); and, responsive to completing each of the one or more questions, publishing the updated performance evaluation having the one or more questions completed based, at least in part, on the one or more suggested answers ([Figs. 2 & 4] [0026-0027] [0033-0035] accepting one of the suggested responses will replace the current response; the newly accepted response will then be used by the conversational AI when responding to user inputs).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Tapuhi and Tai with the teachings of Hewitt by modifying the system and method for automatic quality evaluation of interactions as taught by Tapuhi to include providing an option to verify or modify the one or more suggested answers; and, responsive to completing each of the one or more questions, publishing the updated performance evaluation having the one or more questions completed based, at least in part, on the one or more suggested answers, as taught by Hewitt, for the benefit of improving upon conventional response writing by suggesting responses generated by text summarization of documents within an entity's own knowledgebase in conjunction with question-answering methods (Hewitt [0014]).

Regarding claims 9 and 18, Hewitt further teaches the performance evaluation system of claim 8, wherein the intelligent suggestion operations further comprise: receiving a modification to one of the one or more suggested answers via the option; and providing the modification to the generative AI service for additional learning with the at least one LLM ([Figs. 2 & 4] [0026-0027] [0033-0035] received as a selection of one of the buttons; accepting one of the suggested responses will replace the current response; the newly selected suggested response replaces a current response; the newly selected recommended response will then be used by the conversational AI when responding to user inputs).

Allowable Subject Matter

Claims 2, 11 and 20 are objected to. Claims 2, 11 and 20 would be allowable if (1) claims 2 and 11 are rewritten into independent claims 1, 10 and 19, respectively, and (2) the §101 abstract-idea rejection set forth above is overcome.
For claims 2, 11 and 20 and their dependent claims 3-4 and 12-13, respectively: the performance evaluation system of claim 1, wherein the generating the request object comprises: processing, using a transcript data processor, the transcript associated with at least one of the interaction and additional relevant data associated with the evaluation form data to create processed transcript data with instructions to set the context of the transcript; processing, using an evaluation form data processor, the one or more questions, the one or more answer options, and user-defined question prompt data to create a question prompt list; and generating at least one prompt to the at least one LLM based on the processed transcript data, the question prompt list, and a system configuration for communicating with the generative AI service.

Claims 2, 11 and 20 recite a specific, LLM-oriented prompt-generation architecture that is neither taught nor suggested by Tapuhi or Tai, even in combination. Tapuhi only computes numeric features and uses internal models—never natural-language context instructions or prompts—while Tai uses a single, manually crafted prompt for research coding and does not disclose this modular, CRM-centric pipeline.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Amir et al.
(WO 20170184773) teaches receiving, by a processor, a question including text; identifying, by the processor, one or more identified topics from a plurality of tracked topics tracked by an analytics system in accordance with the text of the question, the analytics system being configured to perform analytics on a plurality of interactions with a plurality of agents of a contact center; outputting, by the processor, the one or more identified topics; associating, by the processor, one or more selected topics with the question, the selected topics being one or more of the identified topics; adding, by the processor, the question and the selected topics to the evaluation form; and outputting the evaluation form.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHREYANS A PATEL, whose telephone number is (571) 270-0689. The examiner can normally be reached Monday-Friday, 8am-5pm PST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

SHREYANS A. PATEL
Primary Examiner, Art Unit 2653

/SHREYANS A PATEL/
Examiner, Art Unit 2659
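The examiner's allowable-subject-matter analysis turns on the three-part prompt pipeline of claims 2, 11 and 20 (transcript data processor, evaluation form data processor, prompt generation under a system configuration). A minimal editorial sketch of that pipeline follows; every function and key name is hypothetical, and nothing here is drawn from the application's actual implementation.

```python
# Hypothetical sketch of the claims 2/11/20 pipeline; all names are invented.

def process_transcript(transcript, extra_context=""):
    """'Transcript data processor': wrap the transcript in context-setting instructions."""
    return ("You are evaluating an agent-customer interaction.\n"
            + extra_context + "\nTranscript:\n" + transcript)

def build_question_prompt_list(questions):
    """'Evaluation form data processor': merge questions, options, and user prompt data."""
    prompts = []
    for q in questions:
        # A user-defined prompt, when present, overrides the autogenerated one.
        prompts.append(q.get("user_prompt")
                       or f"{q['text']} (options: {', '.join(q['options'])})")
    return prompts

def generate_prompt(processed_transcript, question_prompts, config):
    """Assemble the final request to the generative AI service per the system config."""
    return {"model": config["model"],
            "prompt": processed_transcript + "\n\n" + "\n".join(question_prompts)}
```

The modularity is the point of the examiner's distinction: Tapuhi's numeric feature models and Tai's single hand-written research prompt have no counterpart to these separate, configurable processing stages.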

Prosecution Timeline

May 01, 2024
Application Filed
Nov 28, 2025
Non-Final Rejection — §101, §103, §112
Mar 25, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586597: ENHANCED AUDIO FILE GENERATOR (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586561: TEXT-TO-SPEECH SYNTHESIS METHOD AND SYSTEM, A METHOD OF TRAINING A TEXT-TO-SPEECH SYNTHESIS SYSTEM, AND A METHOD OF CALCULATING AN EXPRESSIVITY SCORE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12548549: ON-DEVICE PERSONALIZATION OF SPEECH SYNTHESIS FOR TRAINING OF SPEECH RECOGNITION MODEL(S) (granted Feb 10, 2026; 2y 5m to grant)
Patent 12548583: ACOUSTIC CONTROL APPARATUS, STORAGE MEDIUM AND ACCOUSTIC CONTROL METHOD (granted Feb 10, 2026; 2y 5m to grant)
Patent 12536988: SPEECH SYNTHESIS METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM (granted Jan 27, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview (+7.4%): 96%
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 403 resolved cases by this examiner. Grant probability derived from career allow rate.
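The projection figures are arithmetically consistent with the career data shown above. A minimal check, assuming the site simply rounds the allow rate and adds the interview lift (the page does not confirm its exact method):

```python
# Reproducing the headline figures from the career data on this page.
# Assumption (not confirmed by the page): the interview lift is simply additive.
granted, resolved = 359, 403            # examiner's resolved-case record
career_allow_rate = granted / resolved * 100
interview_lift = 7.4                    # percentage-point lift reported above

print(round(career_allow_rate))                    # 89 -> Grant Probability
print(round(career_allow_rate + interview_lift))   # 96 -> With Interview
```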
