Prosecution Insights
Last updated: April 19, 2026
Application No. 18/488,427

SYSTEMS AND METHODS FOR SERVICE CENTER CONTROL AND MANAGEMENT

Status: Final Rejection (§103)
Filed: Oct 17, 2023
Examiner: AL AUBAIDI, RASHA S
Art Unit: 2693
Tech Center: 2600 (Communications)
Assignee: Verizon Patent and Licensing Inc.
OA Round: 2 (Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 78% (above average; 577 granted / 744 resolved; +15.6% vs TC avg)
Interview Lift: +11.1% (moderate; among resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Career History: 782 total applications across all art units; 38 currently pending

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§103: 55.9% (+15.9% vs TC avg)
§102: 16.1% (-23.9% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 744 resolved cases.

Office Action (§103)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

1. This is in response to an amendment filed 11/20/2025. No claims have been added. Claims 1-2, 7, 11-12, 15-17 and 20 have been amended. No claims have been canceled. Claims 1-20 are still pending in this application.

Claim Rejections - 35 USC § 103

2. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Krishnapuram et al. (Pub. No. 2012/0039460 A1) in view of Green et al. (Pub. No. 2019/0287517 A1) and further in view of BUEHLER et al. (Pub. No. 2018/0131811 A1).
Regarding claims 1, 11 and 16, Krishnapuram teaches a method, device and non-transitory computer-readable storage medium (see abstract), comprising:

analyzing, by a device, information related to a service request between a user and an agent of a service provider, the service request information comprising data related to interactions between the user and agent (reads on analyzing customer service quality, where "customer call service quality parameters" are identified from interaction data (e.g., call transcripts, metrics) and quantified for analysis, see [0024-0027]);

determining, by the device, based on the analysis, an intent of the user and a responsiveness by the agent, the intent comprising data related to functionality provided by the service provider, the responsiveness comprising data indicating mechanisms employed by the agent related to the intent (reads on the parameters that include measures such as Average Handling Time (AHT), Expressed Dissatisfaction Rate (EDR), First-Call Resolution (FCR), loyalty, call flow compliance, and communication skill, see [0024]; also see [0026-0030]);

determining, by the device, based on the determined intent and responsiveness, a score for the service request (this reads on service quality being both a qualitative and a quantitative measure of performance: a service organization can measure its overall performance by measuring the two types of gaps based on several parameters measured from customer interactions. These parameters include Conversion (successful, partially successful, unsuccessful), AHT (Average Call Handling Time), C-SAT (interaction rated on a scale by the customer), EDR (Expressed Dissatisfaction Rate), FCR Rate (First-Call Resolution Rate), Net Promoter Score (a customer loyalty measure), call flow (opening with the correct greeting, covering all aspects during the call, giving/taking proper and complete information, following mandatory processes), and communication skills (effective listening, personalized interaction with the customer, courteous, polite, professional, clarity in speech, good pacing, proper voice inflection, pronunciation, confidence), see [0009]), the score being a type of score that corresponds to the responsiveness data (reads on quantifying and correlating these service quality parameters using AI, NLP, statistical techniques, etc., which effectively results in a score that reflects the quality of the agent's performance and whether the customer's intent was addressed, see [0013 and 0026]); and

determining, by the device, whether the intent of the user was addressed during the service request based on the determined score (reads on determining aspects of service performance, e.g., whether the agent followed the prescribed flow, whether communication skills were appropriate, and potentially whether the customer's needs were met based on those metrics, see [0027]).

Krishnapuram's features are already addressed above in the rejection of claims 1, 11 and 16. Krishnapuram does not specifically teach "controlling, by the device, a result of the service request based on the determination of whether the intent was addressed, the control of the result corresponding to capabilities related to the functionality provided by the service provider". However, Green teaches routing customer service interactions, beginning with analyzing historical interactions between users and agents, as discussed in [0002-0003 and 0012-0013].
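The scoring pipeline mapped onto claims 1, 11 and 16 above (quantify interaction parameters such as AHT, FCR, EDR and C-SAT, combine them into a service-request score, then test the score to decide whether the user's intent was addressed) can be illustrated with a minimal sketch. All parameter names, weights, scales, and the threshold below are illustrative assumptions, not taken from the claims or from either reference:

```python
from dataclasses import dataclass

@dataclass
class InteractionMetrics:
    """Illustrative per-call quality parameters (names and scales are assumptions)."""
    aht_seconds: float  # Average Handling Time for the call
    fcr: bool           # First-Call Resolution achieved?
    edr: float          # Expressed Dissatisfaction Rate, 0.0-1.0
    csat: float         # Customer satisfaction rating, normalized to 0.0-1.0

def service_request_score(m: InteractionMetrics, target_aht: float = 300.0) -> float:
    """Combine the quality parameters into a single 0-1 score.

    The weights and the AHT normalization are hypothetical; the point is only
    that responsiveness data is reduced to one comparable number.
    """
    aht_component = min(target_aht / max(m.aht_seconds, 1.0), 1.0)  # shorter calls score higher
    return round(
        0.25 * aht_component
        + 0.35 * (1.0 if m.fcr else 0.0)
        + 0.20 * (1.0 - m.edr)
        + 0.20 * m.csat,
        3,
    )

def intent_addressed(score: float, threshold: float = 0.7) -> bool:
    """Decide whether the user's intent was addressed, per an assumed cutoff."""
    return score >= threshold
```

For example, a 240-second call that was resolved on first contact with low dissatisfaction scores 0.96 under these assumed weights, which would count as intent addressed; the controlling/transfer step of the claim would then act on that determination.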
Green further discloses that after monitoring the live interaction, the system identifies keywords/issues and selects a resolution resource from a pool (see [0006-0007] and [0011]); the user may then be connected to the potential resolution resource automatically, or connected in response to the user and/or the agent accepting the suggestion (see [0014]). Thus, it would have been obvious to combine the analytics and scoring system of Krishnapuram with the automatic routing/escalation mechanism of Green, with Krishnapuram delivering the analytical backbone and Green providing the necessary outcome-control mechanism. Note that controlling the service request result is based on whether the user's intent is addressed or not.

The features of Krishnapuram and Green are already addressed above in the rejection of claims 1, 11 and 16. The combination of Krishnapuram and Green does not specifically teach "the control of the result comprising transferring the service request to another agent having a history of behaviors that includes a behavior other than the type of score that corresponds to the responsiveness data". However, BUEHLER teaches establishing a communication link between the given requestor and the dedicated agent (see [0007], [0024] and [0038]). BUEHLER further teaches that selecting an agent to assign to the given requestor includes retrieving a pool of agents from a data store, where the agents in the pool are associated with the entity and each agent in the pool has a rating, the rating being based on feedback from customers having previous interactions with the agent (see [0007]). Note that the customer feedback score (i.e., rating) for a given agent is an average of feedback scores provided by customers during previous interactions with the agent and is stored as a composite feedback score in the agent record database (see [0078-0080] and [0095-0097]). Further, BUEHLER teaches that the rating for each agent may be adjusted based on different factors.
For example, the rating may be adjusted based on at least one of: an amount of time since an agent has interacted with a customer, a number of customers assigned to an agent, or an amount or percentage of missed calls among the calls routed to the agent. Adjusting the agent rating using the amount of time since the agent has interacted with a customer provides load balancing among the agents in the applicable pool of agents (see [0079-0081]). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the feature of routing requests to agents with a history of good performance/scoring (i.e., a positive score), as taught by BUEHLER, into the combination of Krishnapuram and Green in order to enhance the customer's experience and improve call center performance in general.

Regarding claims 2, 12 and 17, the combination of Krishnapuram, Green and BUEHLER teaches wherein the control of the result comprises transferring the service request to another agent, wherein the transfer enables further interaction data to be generated and analyzed (reads on escalation to other resources, see [0046-0047]).

Claims 3, 13 and 18 recite "wherein the result comprises a modification of an account of the user with the service provider". The limitation of "modification of an account of the user" is believed to be inherent, if not obvious, within the teachings of Krishnapuram and Green because Krishnapuram emphasizes measuring whether customer intent (e.g., resolution of a service/billing/technical issue) was addressed (see [0027] and [0057]). This inherently touches on service provider functions like account management. Also, Green teaches escalation to resolution resources, which could reasonably include an agent empowered to perform account changes.
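The agent-selection mechanism attributed to BUEHLER above (a per-agent composite rating averaged from past customer feedback, adjusted for idle time, assigned load, and missed calls before the best-rated agent is chosen) might look roughly like the following sketch. The field names, adjustment formula, and coefficients are hypothetical illustrations, not BUEHLER's actual disclosure:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Agent:
    name: str
    feedback_scores: List[float] = field(default_factory=list)  # past customer ratings, 0-5
    minutes_idle: float = 0.0     # time since the agent last handled a customer
    assigned_customers: int = 0   # current load
    missed_call_pct: float = 0.0  # fraction of routed calls missed, 0.0-1.0

    def composite_rating(self) -> float:
        """Average of stored customer feedback scores (the composite feedback score)."""
        if not self.feedback_scores:
            return 0.0
        return sum(self.feedback_scores) / len(self.feedback_scores)

def adjusted_rating(agent: Agent) -> float:
    """Adjust the composite rating by the factors named above.

    The coefficients are assumptions; the boost for idle time is what provides
    load balancing across the pool.
    """
    rating = agent.composite_rating()
    rating += min(agent.minutes_idle / 60.0, 1.0)  # idle agents get a boost
    rating -= 0.1 * agent.assigned_customers       # heavily loaded agents penalized
    rating -= 2.0 * agent.missed_call_pct          # missed calls penalized
    return rating

def select_agent(pool: List[Agent]) -> Agent:
    """Route the request to the agent with the highest adjusted rating."""
    return max(pool, key=adjusted_rating)
```

Under these assumed adjustments, an idle agent with slightly lower raw feedback can outrank a higher-rated but overloaded agent, which is the load-balancing behavior the rejection points to.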
Regarding claim 4, the combination of Krishnapuram, Green and BUEHLER teaches: analyzing the determined responsiveness based on the determined intent (see Krishnapuram [0024-0027]); and determining the score based on the analysis of the determined responsiveness (see Krishnapuram [0009, 0013 and 0026]).

Regarding claim 5, the combination of Krishnapuram, Green and BUEHLER teaches: determining the type of the score for the service request based on the analysis of the determined responsiveness, wherein the type of score corresponds to the interactions data between the user and the agent (see [0012-0013] and [0024-0029]).

Regarding claim 6, the combination of Krishnapuram, Green and BUEHLER teaches: retrieving, from a database, information related to the user, the user information comprising historical data related to activities of the user (see Green [0032, 0057 and 0092]); and retrieving, from the database, information related to the agent, the agent information comprising historical data related to activities of the agent (see Green [0032]).

Regarding claim 7, the combination of Krishnapuram, Green and BUEHLER teaches wherein the determination of the intent of the user and the responsiveness of the agent are respectively based on the user information and agent information (see Green [0012-0014]).

Regarding claim 8, the combination of Krishnapuram, Green and BUEHLER teaches wherein the database from which the user information is retrieved is a user database (see Green [0032, 0057 and 0092]).

Regarding claim 9, the combination of Krishnapuram, Green and BUEHLER teaches wherein the database from which the agent information is retrieved is an agent database (see Green [0032, 0057 and 0092]).
Regarding claim 10, the combination of Krishnapuram, Green and BUEHLER teaches wherein the interactions data comprises an indication of the result of the service request, the result comprising an action performed on behalf of the service provider in response to the intent of the user (this reads on interaction data, e.g., FCR, dissatisfaction, and compliance metrics, that indicates the result of the service request, which corresponds to an action taken in response to the user's intent, see Krishnapuram [0026-0027]).

Dependent claims 14 and 19 are rejected for the same reasons addressed in dependent claims 4 and 5, respectively. Dependent claims 15 and 20 are rejected for the same reasons addressed in dependent claims 6 and 7, respectively.

Response to Arguments

3. Applicant's arguments for the independent claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

4. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

5. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Rasha S. AL-Aubaidi, whose telephone number is (571) 272-7481. The examiner can normally be reached Monday-Friday from 8:30 am to 5:30 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ahmad Matar, can be reached at (571) 272-7488.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/RASHA S AL AUBAIDI/
Primary Examiner, Art Unit 2693

Prosecution Timeline

Oct 17, 2023: Application Filed
Aug 21, 2025: Non-Final Rejection (§103)
Nov 20, 2025: Response Filed
Mar 02, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593179: System and Method for Efficiency Among Devices (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581225: CHARGING BOX FOR EARPHONES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12576367: POLYETHYLENE MEMBRANE ACOUSTIC ASSEMBLY (granted Mar 17, 2026; 2y 5m to grant)
Patent 12563147: Shared Speakerphone System for Multiple Devices in a Conference Room (granted Feb 24, 2026; 2y 5m to grant)
Patent 12563330: ELECTRONIC DEVICE (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 89% (+11.1%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 744 resolved cases by this examiner. Grant probability is derived from the career allow rate.
