Prosecution Insights
Last updated: April 19, 2026
Application No. 18/772,497

SYSTEM AND METHOD FOR AUTOMATICALLY EVALUATING DATA ITEMS USING A MACHINE LEARNING MODEL

Status: Non-Final OA (§103)
Filed: Jul 15, 2024
Examiner: AL AUBAIDI, RASHA S
Art Unit: 2693
Tech Center: 2600 — Communications
Assignee: Nice Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
Grant Probability With Interview: 89%

Examiner Intelligence

Career Allow Rate: 78% (above average; +15.6% vs TC avg), 577 granted / 744 resolved
Interview Lift: +11.1% (moderate), based on resolved cases with interview
Avg Prosecution: 3y 3m (typical timeline); 38 currently pending
Total Applications: 782 across all art units (career history)
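The headline figures above follow directly from the examiner's career record. A minimal sketch of the arithmetic (the assumption that the interview lift is additive in percentage points is inferred from the displayed numbers, not stated by the tool):

```python
# Career allow rate: 577 granted out of 744 resolved cases.
granted, resolved = 577, 744
allow_rate = granted / resolved * 100        # ~77.6%, displayed as 78%

# Interview lift appears additive in percentage points:
# 77.6% + 11.1 points ~ 88.7%, displayed as 89%.
interview_lift = 11.1
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.0f}%")      # 78%
print(f"With interview:    {with_interview:.0f}%")  # 89%
```

Both rounded values match the dashboard's Grant Probability panels, which is consistent with the footnote that grant probability is derived from the career allow rate.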

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§103: 55.9% (+15.9% vs TC avg)
§102: 16.1% (-23.9% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)
Tech Center averages are estimates, based on career data from 744 resolved cases.
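The per-statute deltas let one back out the Tech Center average estimates, assuming each delta is the examiner's rate minus the TC average (an assumption about how the tool computes its deltas):

```python
# Examiner rate and delta vs. Tech Center average, per statute (from the panel above).
stats = {"§101": (10.2, -29.8), "§103": (55.9, +15.9),
         "§102": (16.1, -23.9), "§112": (8.4, -31.6)}

for statute, (examiner_rate, delta) in stats.items():
    tc_avg = examiner_rate - delta  # implied TC average estimate
    print(f"{statute}: examiner {examiner_rate}% vs TC avg ~ {tc_avg:.1f}%")
```

Under that assumption, every statute's implied TC average works out to 40.0%, suggesting the tool compares against a single common baseline rather than per-statute ones.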

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

1. This communication is in response to the application filed 07/15/2024.

Claim Rejections - 35 USC § 103

2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-16 are rejected under 35 U.S.C. 103 as being unpatentable over Cattaneo et al. (Pub. No.: 2024/0211960 A1) in view of Surdick (US Pat. No. 9,742,914 B2).
Regarding claims 1 and 8, Cattaneo teaches a method and system of evaluating data items using a machine learning model (see abstract and [0016]), the method comprising, using one or more computer processors: producing, by a machine learning (ML) model, one or more answers to one or more questions, wherein the one or more questions are applied to an input data item (reads on automatically scoring the quality of an agent-customer interaction; in one embodiment, an interaction quality score may be determined using one or more natural language processing programs and/or machine learning models to analyze a piece of content, see [0015]), and wherein the one or more questions and the input data item are input to the machine learning model (see [0016] and [0020]); and transmitting one or more output data items to a remote computer over a communication network based on the calculated score (reads on outputting evaluation results, including scores for further processing or display, see [0033] and [0035]).

Cattaneo addresses the features discussed above for claims 1 and 8; however, Cattaneo does not specifically teach "calculating a score for the input data item based on one or more of the produced answers". Surdick, though, teaches scoring an interaction based on answers to predefined evaluation questions contained in an evaluation form. For example, the evaluation configuration tool 322 includes an answer column 332 (see col. 5, line 62 through col. 6, line 8). Surdick further teaches that, for the questions 330 in the evaluation form 221, the answer column 332 includes a number of answer rows 339, each answer row 339 being associated with one of a number of discrete answer values 340 for the question (e.g., a yes/no or a pass/fail answer). Each discrete answer value 340 is associated with a number of points. For example, in FIG. 5, if the evaluation agent 116 provides an answer value of 'Yes' for the first evaluation question 331, five points are added to the customer service agent's 108 evaluation score. Alternatively, if the evaluation agent 116 provides an answer value of 'No' for the first evaluation question 331, then zero points are added to the customer service agent's 108 evaluation (see col. 5, line 62 through col. 6, line 8). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of scoring an interaction based on answers, as taught by Surdick, to the evaluation dimensions of Cattaneo in order to improve the standardization, interpretability, and consistency of automated quality assessments.

Independent claim 15 is rejected for the same reasons addressed for independent claims 1 and 8; however, claim 15 substitutes a large language model (LLM) for the ML model of claim 1. Since Cattaneo already teaches language-model-based processing of interaction text, using an LLM is an obvious substitution.

Regarding claims 2, 9 and 16, the combination of Cattaneo and Surdick teaches wherein the input data item comprises a transcript of a call, and wherein the method comprises routing, by an automatic call dialer (ACD), the call to an agent computing device (routing calls using an ACD is already discussed by Cattaneo [0001], and a transcript of a call is discussed in [0019] of Cattaneo as well).

Claims 3 and 10 recite "wherein the questions are sorted in a plurality of levels, and wherein the calculating of a score is performed based on one or more of the answers corresponding to one or more of the levels". Note that Cattaneo teaches evaluating interactions across multiple evaluation dimensions, as discussed in [0003], [0015]-[0016] and [0021], and Surdick teaches grouped evaluation questions in an evaluation form (see col. 7, lines 3-9). Thus, organizing questions into levels is an obvious design choice.
Regarding claims 4 and 11, the combination of Cattaneo and Surdick teaches wherein one or more of the questions are included in a form, the form generated using a graphical user interface (GUI), wherein the form is stored in a structured query language (SQL) database (Surdick teaches that the evaluation form 221 includes a number of evaluation questions that are used to evaluate the performance of the agents who work at the customer service call center 104 (see col. 4, line 61 through col. 5, line 1), and that the selected queries and/or target media sets 226 are output from the UI module 218 and stored (e.g., in a data storage device) for later use in the evaluation mode of the agent evaluation module 114 (see col. 5, lines 10-14). Also, storing such forms in a database, including SQL databases, is a conventional implementation).

Regarding claims 5 and 12, the combination of Cattaneo and Surdick teaches wherein the producing of one or more of the answers is performed based on an automatic evaluation plan, wherein the evaluation plan comprises the form and one or more filtering conditions, and wherein the producing of one or more answers is triggered based on a predefined time interval (Cattaneo teaches automatic evaluation of interactions, see [0032]; note that triggering such evaluations based on predefined time intervals is a routine scheduling technique).

Regarding claims 6 and 13, the combination of Cattaneo and Surdick teaches producing, by the ML model, a justification to one or more of the answers (Cattaneo teaches explaining or providing reasoning for evaluation results, see [0031]).
Regarding claims 7 and 14, the combination of Cattaneo and Surdick teaches wherein the plurality of levels comprises a level of critical questions, wherein the calculating of a score comprises: if one or more critical questions are unanswered, assigning a score of zero to the data item (this may read on Cattaneo's swear words metric: the swear words metric is a measure of whether the agent uttered any swear words during the interaction. In some embodiments, there could be a target value which is desired for the swear words, that may be set at a value greater than, equal to, and/or less than zero. In instances, a response could be scored off this with either a positive or negative result based on the comparison. For example, if the target value was set for zero swear words, a value above zero could be worse (e.g., one, two, or more swear words), and a value of zero could be better. In some embodiments, a weighted penalty will be applied to the appropriateness dimension score if the agent uses one or more swear words during the interaction, see [0029]. Note that "assigning a score of zero to the data item" can be applied to an unanswered question, a measure of what the agent uttered, or the like).

Conclusion

3. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Rasha S. Al-Aubaidi, whose telephone number is (571) 272-7481. The examiner can normally be reached Monday-Friday from 8:30 am to 5:30 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached at (571) 272-7488.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/RASHA S AL AUBAIDI/
Primary Examiner, Art Unit 2693
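The point-based scoring scheme the rejection maps onto the claims (Surdick's discrete answer values carrying points, e.g. Yes = 5 / No = 0, combined with the claimed rule in claims 7 and 14 that an unanswered critical question forces a zero score) can be sketched as follows. The data structures and the `score_item` helper are hypothetical, for illustration only; neither reference discloses this exact implementation:

```python
# Hypothetical sketch: each evaluation question maps discrete answer values to
# points (per Surdick's answer rows 339 / values 340), and any unanswered
# critical question zeroes the item's score (per claims 7 and 14).

def score_item(questions, answers):
    """questions: list of dicts with 'id', 'points' (answer -> points), 'critical'.
    answers: dict mapping question id to the ML model's answer (or absent)."""
    # Claimed rule: an unanswered critical question assigns a score of zero.
    for q in questions:
        if q["critical"] and answers.get(q["id"]) is None:
            return 0
    # Otherwise, sum the points associated with each produced answer.
    return sum(q["points"].get(answers.get(q["id"]), 0) for q in questions)

questions = [
    {"id": "greeting",    "points": {"Yes": 5, "No": 0},    "critical": False},
    {"id": "no_swearing", "points": {"Pass": 5, "Fail": 0}, "critical": True},
]
print(score_item(questions, {"greeting": "Yes", "no_swearing": "Pass"}))  # 10
print(score_item(questions, {"greeting": "Yes"}))                         # 0
```

This is the gap the examiner fills with Surdick: Cattaneo supplies the ML-produced answers, Surdick supplies the answer-to-points mapping that turns them into a score.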

Prosecution Timeline

Jul 15, 2024: Application Filed
Jan 22, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593179: System and Method for Efficiency Among Devices (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581225: CHARGING BOX FOR EARPHONES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12576367: POLYETHYLENE MEMBRANE ACOUSTIC ASSEMBLY (granted Mar 17, 2026; 2y 5m to grant)
Patent 12563147: Shared Speakerphone System for Multiple Devices in a Conference Room (granted Feb 24, 2026; 2y 5m to grant)
Patent 12563330: ELECTRONIC DEVICE (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 89% (+11.1%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 744 resolved cases by this examiner. Grant probability derived from career allow rate.
