Prosecution Insights
Last updated: April 18, 2026
Application No. 18/647,421

AI ENHANCED CUSTOMER SUPPORT AUTOMATION

Non-Final OA (§101, §103)
Filed: Apr 26, 2024
Examiner: BAHL, SANGEETA
Art Unit: 3626
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)
Grant Probability: 21% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 8m
Grant Probability with Interview: 40%

Examiner Intelligence

This examiner grants only 21% of cases.

Career Allow Rate: 21% (93 granted / 452 resolved; -31.4% vs TC avg)
Interview Lift: +19.3% (resolved cases with interview vs. without)
Avg Prosecution: 4y 8m typical timeline (40 applications currently pending)
Career History: 492 total applications, across all art units
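Read literally, the card metrics above are simple ratios. The quick check below assumes "Career Allow Rate" means granted over resolved cases and "Interview Lift" is the percentage-point gap between allowance rates with and without an examiner interview; the without-interview rate is back-solved here and does not appear on any card:

```python
# Dashboard figures: 93 granted out of 452 resolved cases.
granted, resolved = 93, 452

# Career allowance rate (assumed definition: granted / resolved).
allow_rate_pct = granted / resolved * 100  # about 20.6, shown rounded as 21%

# Interview lift, assumed to be a percentage-point difference between
# the 40% "with interview" figure and the rate without an interview.
rate_with_interview = 40.0
interview_lift = 19.3
rate_without_interview = rate_with_interview - interview_lift  # about 20.7
```

Under those assumptions the numbers are mutually consistent: the back-solved no-interview rate (about 20.7%) sits almost exactly at the 21% career average, consistent with interviews driving most of the upside in this examiner's allowances.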

Statute-Specific Performance

§101: 37.6% (-2.4% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 5.4% (-34.6% vs TC avg)
§112: 11.8% (-28.2% vs TC avg)
Deltas are measured against an estimated Tech Center average. Based on career data from 452 resolved cases.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This communication is a Non-Final Office Action in response to communications received on 4/3/26. Claims 1, 3, 8, 11-12, 15-17, and 20 have been amended. Therefore, Claims 1-20 are now pending and have been addressed below.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/29/26 has been entered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more.

Step 1: Identifying Statutory Categories

In the instant case, claims 1-10 are directed to a method, claims 16-20 are directed to a non-transitory medium, and claims 11-15 are directed to a system. Thus, the claims fall within one of the four statutory categories. Nevertheless, the claims fall within the judicial exception of an abstract idea.

Step 2A, Prong 1: Identifying a Judicial Exception

Under Step 2A, Prong 1, Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention recites an abstract idea without significantly more.
Independent claims 1, 11 and 16 recite methods for performing customer support operation that includes receiving a customer request comprises customer identification data to identify a specific customer for the customer request; checking a customer proficiency data set to identify historical data and analytical data of the specific customer and other customers of the product or service area; obtaining a customer statement of understanding for the current problem wherein a prompt, based on identified historical data and analytical data, is presented to the customer to explain the current problem and experience of the customer with the current problem; providing a set of questions, based on the customer statement and the current problem, to obtain customer responses; calculating, based on the customer statement and the customer responses, a customer proficiency rating for the customer to identify a customer skill level for the current problem; updating the current customer proficiency rating, based on support contact history for the customer comprising a frequency of customer calls for the current problem, and an average duration of the customer calls and determining, based on the customer proficiency rating, to assign a support agent for the current problem; selecting the support agent based on a difficulty of the current problem and a specialized area of the support agent for the current problem; and routing the customer to the support agent and upon resolving the current problem, ending communication with the customer and storing the difficulty of the current problem and a time spent to resolve the current problem for the customer in the customer proficiency data set. 
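Stripped of claim formalities, the independent claims recite a data-driven routing pipeline: score the customer's proficiency from their statement, responses, and contact history, compare it against the problem's difficulty, and escalate to a specialized human agent only when needed. The sketch below paraphrases that flow for orientation only; every name, weight, and threshold is a hypothetical assumption of this summary, not the application's disclosed implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContactHistory:
    """Support-contact history for the current problem (hypothetical fields)."""
    call_frequency: int        # how often the customer has called about it
    avg_call_minutes: float    # average duration of those calls

def update_proficiency(base_rating: float, history: ContactHistory) -> float:
    """Update the proficiency rating from contact history.
    Frequent, long calls about the same problem suggest lower proficiency;
    the weights here are illustrative, not from the specification."""
    penalty = 0.5 * history.call_frequency + 0.1 * history.avg_call_minutes
    return max(0.0, base_rating - penalty)

def assign_agent(rating: float, difficulty: float,
                 agents: list) -> Optional[dict]:
    """Route to a human specialist only when the problem's difficulty
    exceeds the customer's skill; otherwise the AI virtual support agent
    keeps guiding the customer (returns None)."""
    if rating >= difficulty:
        return None
    # Pick the least over-qualified agent whose specialized area covers
    # the problem's difficulty.
    candidates = [a for a in agents if a["max_difficulty"] >= difficulty]
    return min(candidates, key=lambda a: a["max_difficulty"], default=None)
```

For example, a base rating of 10 with four prior calls averaging ten minutes yields an updated rating of 7.0, so a difficulty-8 problem is escalated to the least-specialized qualifying human agent, while a difficulty-6 problem stays with the virtual agent.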
These limitations, as drafted, are a process that, under its broadest reasonable interpretation, covers methods of organizing human activity (including commercial interactions such as business relations, and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), including a person's interaction with a computer) and mathematical calculations (calculating a customer proficiency rating), but for the recitation of generic computer components. That is, other than reciting the structural elements (such as an Artificial Intelligence (AI) virtual support agent (Claims 1, 11, 16), one or more computer processors, a memory (Claim 11), and a computer storage medium (Claim 16)), the claims are directed to providing customer support by routing a customer to an agent based on a proficiency rating. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation of organizing human activity but for the recitation of generic computer components, the claim recites an abstract idea.

Step 2A, Prong 2

This judicial exception is not integrated into a practical application because the claim merely describes how to generally "apply" the concept of receiving data, analyzing it, and providing routing based on a proficiency rating. In particular, the claims only recite the additional elements: an Artificial Intelligence (AI) virtual support agent (Claim 1), one or more computer processors, a memory (Claim 11), and a computer storage medium (Claim 16). The additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component and merely add the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f).
Simply implementing the abstract idea on generic components is not a practical application of the abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. In addition, a limitation reciting data gathering, such as "receiving a customer request...", is insignificant pre-solution activity that merely gathers data and, therefore, does not integrate the exception into a practical application for that additional reason. See In re Bilski, 545 F.3d 943, 963 (Fed. Cir. 2008) (en banc), aff'd on other grounds, 561 U.S. 593 (2010) (characterizing data gathering steps as insignificant extra-solution activity); see also CyberSource, 654 F.3d at 1371-72 (noting that even if some physical steps are required to obtain information from a database (e.g., entering a query via a keyboard, clicking a mouse), such data-gathering steps cannot alone confer patentability); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering). Accord Guidance, 84 Fed. Reg. at 55 (citing MPEP § 2106.05(g)). The claims are directed to an abstract idea. When considered in combination, the claims do not amount to improvements to the functioning of a computer, or to any other technology or technical field, as discussed in MPEP 2106.05(a); applying the judicial exception with, or by use of, a particular machine, as discussed in MPEP 2106.05(b); effecting a transformation or reduction of a particular article to a different state or thing, as discussed in MPEP 2106.05(c); or applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception, as discussed in MPEP 2106.05(e).
Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea.

Step 2B: Considering Additional Elements

The claimed invention is directed to an abstract idea without significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally "apply" the concept of providing customer support by routing a customer to an agent based on a proficiency rating. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The independent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Even when viewed as a whole, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. The claims are not patent eligible. The dependent claims, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. The dependent claims are not significantly more because they are part of the identified judicial exception. See MPEP 2106.05(g). The claims are not patent eligible. With respect to the Artificial Intelligence (AI) virtual support agent (Claim 1), the one or more computer processors, the memory (Claim 11), and the computer storage medium (Claim 16), these limitations are described in Applicant's own specification as generic and conventional elements.
See Applicant's specification: Paragraph [0022] details "The processor 102 may be implemented as microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions."; [0035], "System includes one or more processors used with AI virtual support agent"; [0036], "The system 100 may also include a memory 204 coupled to the processor 102. The memory 204 may include any non-transitory computer-readable medium including volatile memory (e.g., RAM), and/or non-volatile memory (e.g., EPROM, flash memory, Memristor, etc.)." These are basic computer elements applied merely to carry out data processing such as, as discussed above, receiving, analyzing, transmitting, and displaying data. As discussed in Step 2A, Prong Two above, the recitations of "receiving steps" amount to receiving data over a network and are well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. Furthermore, the use of such generic computers to receive or transmit data over a network has been identified as a well-understood, routine, and conventional activity by the courts. See Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1362-63, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics; sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir.
2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result-a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)). See also MPEP 2106.05(d), discussing elements that the courts have recognized as well-understood, routine, and conventional activities in particular fields. Lastly, the additional elements provide only a result-oriented solution that lacks details as to how the computer performs the claimed abstract idea. Therefore, the additional elements amount to mere instructions to apply the exception. See MPEP 2106.05(f). Furthermore, these steps/components are not explicitly recited and therefore must be construed at the highest level of generality, amounting to mere instructions to implement the abstract idea on a computer. Therefore, the claimed invention does not demonstrate a technologically rooted solution to a computer-centric problem, nor does it recite an improvement to another technology or technical field; an improvement to the functioning of any computer itself; applying the exception with, or by use of, a particular machine; effecting a transformation or reduction of a particular article to a different state or thing; adding a specific limitation other than what is well-understood, routine, and conventional in the field; adding unconventional steps that confine the claim to a particular useful application; or providing meaningful limitations beyond generally linking an abstract idea to a particular technological environment such as computing. Viewing the limitations as an ordered combination does not add anything further than looking at the limitations individually. Taking the additional claimed elements individually and in combination, the computer components at each step of the process perform purely generic computer functions.
Viewed as a whole, the claims do not purport to improve the functioning of the computer itself, or to improve any other technology or technical field. Use of an unspecified, generic computer does not transform an abstract idea into a patent-eligible invention. Thus, the claims do not amount to significantly more than the abstract idea itself.

Dependent claims 2-10, 12-15, and 17-20 add additional limitations, but these only serve to further limit the abstract idea, and hence are nonetheless directed towards fundamentally the same abstract idea as representative claims 1, 11, and 16. Claims 2-4, 12-13, and 17-18 recite: wherein receiving the customer request for customer support further comprises receiving the customer request from one of a virtual phone agent, a virtual video-conferencing agent, a virtual text messaging agent, a virtual email agent, a support case bot, an interactive web form, or an online chatbot; checking the customer proficiency data set to identify historical data and analytical data of the specific customer further comprises checking the customer proficiency data set, via the AI virtual support agent, based on the customer identification data to identify a historical average customer proficiency rating, or a most recent customer proficiency rating for the specific customer; and updating the customer proficiency rating based on at least one of the historical data and the analytical data for the customer, the historical average customer proficiency rating, or the most recent customer proficiency rating for the customer. The claims are directed to the same abstract idea as the independent claims and simply provide further details to limit the abstract idea. The claims do not provide any new additional elements beyond the abstract idea. Therefore, whether analyzed individually or as an ordered combination, they fail to integrate the abstract idea into a practical application or provide significantly more than the abstract idea.
Regarding claims 5-7, 14, and 19 (the method of claim 1), wherein providing the set of questions further comprises: identifying, via the AI virtual support agent, based on the customer responses, a proposed solution for the current problem, and providing the proposed solution to the customer; providing documentation to the customer that is related to the proposed solution, and answering customer questions; and receiving, via the AI virtual support agent, a customer question and answering the customer question, wherein answering the customer question further comprises providing at least one of questions related to the customer question, or documentation related to the customer question. The claims are directed to the same abstract idea as the independent claims and simply provide further details to limit the abstract idea. The limitation of identifying via the AI virtual support agent merely adds the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). The claims do not provide any new additional elements beyond the abstract idea. Therefore, whether analyzed individually or as an ordered combination, they fail to integrate the abstract idea into a practical application or provide significantly more than the abstract idea.
Regarding claims 8-10, 15, and 20 (the method of claim 1): ending the communication with the customer further comprises, based on receiving a customer response that the current problem is resolved, via the AI virtual support agent, ending the customer support; and storing statistics related to the customer support, wherein the statistics comprise an updated customer proficiency rating for the customer; wherein determining, via the AI virtual support agent, based on the customer proficiency rating, to assign the support agent for the current problem further comprises identifying, based on the current problem and the product or service area, one or more of a product or service area of the support agent, an expertise level of the support agent, or experience of the support agent for the current problem; wherein providing the set of questions further comprises providing, via the AI virtual support agent, a plurality of interactive customer requests; based on one or more customer responses to the set of questions, ending the customer support based on resolving the current problem; and updating the customer proficiency rating based on historical data and analytical data of the customer and other customers of the product or service area. The claims are directed to the same abstract idea as the independent claims and simply provide further details to limit the abstract idea. The claims do not provide any new additional elements beyond the abstract idea. Therefore, whether analyzed individually or as an ordered combination, they fail to integrate the abstract idea into a practical application or provide significantly more than the abstract idea. The dependent claims do not integrate the abstract idea into a practical application.
As such, the additional elements, individually or in combination, do not integrate the exception into a practical application; rather, the recitation of any additional element amounts to merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). The dependent claims also do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are merely used to apply the abstract idea to a technological environment. These limitations do not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. See MPEP 2106.05(d). Thus, the claims do not add significantly more to an abstract idea. The claims are ineligible. Therefore, since there are no limitations in the claims that transform the exception into a patent-eligible application such that the claims amount to significantly more than the exception itself, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. See Alice Corporation Pty. Ltd. v. CLS Bank International, et al.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 9, 11-14, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Sait et al. (US 2023/0089757 A1) in view of Daianu (US 11,270,235 B1), further in view of Khudia (US 2018/01365027 A1) and Ruano (US 8,155,948 B2).

Regarding claims 1, 11, and 16, Sait discloses the computer-implemented method/system/medium comprising: Sait teaches providing an Artificial Intelligence (AI) virtual support agent to perform automated customer support operations ([0014] an interactive voice response (IVR) may provide the set of questions to the user.
An automated conversation tool, such as a chatbot, may provide the set of questions, [0016] the call from a user categorized as belonging to the first category may be routed to the virtual agent as the user of the first category may have a higher technical skill level than average and may be able to resolve the query with the guidance received from the virtual agent) comprising Sait discloses receiving a customer request for customer support for a current problem of a product or service area ([0067] method 500, at block 502, a call for technical support is received from a user of a user device. The user device may be, for example, the user device 200. In an example, the user may call for technical support for resolving queries about products/services of interest, working of a product, and the like. The call may be in any format, such as a voice call, a text message, or a chat message.), wherein the customer request comprises customer data ([0033]the system 100 may be connected to a user device 200 through a communication network 202, Fig 2 # 200 user device/user identity) Sait discloses checking a customer proficiency to identify analytic data of the customer and other customers of the product or service area ([0025] The processor 102 may monitor the responses received from the user. The processor 102 may execute instructions 106 to categorize the user in a category from among a plurality of categories based on the responses. In one example, the plurality of categories may correspond to the technical skill level and the type of support to be provided to the user. [0056]FIG. 3, responses 302 to a series of questions may be received, for example, through an IVR, from a user seeking resolution to a query, speech to text conversion 304 may be performed where applicable, and the responses may be analyzed by the first machine learning model 206 to categorize the user (proficiency data). 
Categorization of users based on technical skills may be performed by the first machine learning model 206 using word embeddings 306 and customer classifier 308 .[0082] determine a technical skill level of the user based on responses received to a set of questions from the user, in response to a query received from the user over an incoming call) Sait discloses obtaining a customer statement of understanding for the current problem ([0012] a virtual agent may first provide a series of instructions in response to a query posed by the user. [0014] when a user calls a customer support center to seek a resolution for product or service issues Fig 5 #502 receive query from user, [0058] the user may state ‘printer is not working’ in the form of unstructured text. Further, in response to a question, the user may mention that ‘paper is jammed’.) wherein a prompt, based on analytical data is presented to the customer to explain the current problem and experience of the customer with the current problem ([0024] a chatbot may provide the set of questions (prompt). In various examples, the questions may be provided in audio format or as speech converted to text or in text., [0025] the set of questions may be provided in a series, with a next question being provided based on the response received for a previous question. (analytical data). The user may provide responses to the set of questions as text or speech, which may be converted into text., [0050] the response provided by a user for a question posed to the user, such as “have you performed troubleshooting before?”. In an example, other responses such as “yes” and “of course” may have a similar context as the word “okay”. [0053] the user may call the customer support center regarding an issue such as a printer related issue. A respondent, such as an interactive voice response (IVR) or a chatbot or a human agent, may provide a set of questions related to the printer issue. 
The set of questions may be provided in series, such as 'what troubleshooting steps have you performed?', 'were any software driver or firmware changes made on the printer?', and the like) Sait discloses providing a set of questions, based on the customer statement and the current problem, to obtain customer responses (Fig. 5 #504, providing a series of questions to user, [0014] when a user calls a customer support center to seek a resolution for product or service issues or for enquires, the user may be provided with a set of questions. In an example, an interactive voice response (IVR) may provide the set of questions to the user. In another example, a human agent may provide the set of questions. In yet another example, an automated conversation tool, such as a chatbot, may provide the set of questions., [0053] the user may call the customer support center regarding an issue such as a printer related issue. A respondent, such as an interactive voice response (IVR) or a chatbot or a human agent, may provide a set of questions related to the printer issue. The set of questions may be provided in series, such as 'what troubleshooting steps have you performed?', 'were any software driver or firmware changes made on the printer?', and the like. [0068] At block 504, a series of questions may be provided to the user to assess a technical skill level of the user. The series of questions may be provided by a respondent.
In an example, an interactive voice response (IVR) may be used as the respondent to provide the series of questions to the user.); Sait discloses calculating, based on the customer statement and the customer responses, a current customer proficiency rating for the customer to identify a customer skill level for the current problem (Fig 5 #504-506 assigning the technical skill level of the user based on responses [0069]At block 506, the technical skill level (proficiency rating) of the user may be assessed based on the responses received to the series of questions, [0024] The query may be related to, for example, products/services of interest, the working of a product, and the like. In response to the call, the user may be provided with a set of questions.[0025] Processor 106 to categorize the user in a category from among a plurality of categories based on the responses. In one example, the plurality of categories may correspond to the technical skill level (proficiency level) and the type of support to be provided to the user [0026]categorize a user based on a probability of the user being technically skilled as determined from the responses of the user. In one example, each category may be associated with a probability range of a user being technically skilled and the user may be classified into one of the categories based on the probability determined for the user. [0028] categorize users based on their technical skill levels. [0029] Users categorized in the first category may be those who have a higher than average technical skill level and users categorized in the second category may be those who have an average or lower than average technical skill level. [0082]. to determine the technical skill level of the user, the user may be categorized into a category from among a plurality of categories based on the responses. 
In an example, the processor 102 may use the first machine learning model 206 to categorize the user based on the technical skill level of the user.); updating the current proficiency rating ([0044] at the end of a call, a human agent or the user may provide a feedback to the system 100 to indicate if the call may have been handled by a virtual agent (update rating based on feedback) or to indicate whether any problems arose during the call, for better categorization of users, classification of queries, and decision making. The feedback may be used to update the decision rules and the machine learning models., [0055] if the user had indicated in one of the responses that they have recently installed an update to a printer driver, the agent may take this into account while providing the resolution steps.) and Sait discloses determining, based on the customer proficiency rating, to assign a support agent for the current problem.(Fig 5 #508 and [0070] At block 508, the call of the user may be routed to one of a human agent and a virtual agent based on the assessment. In an example, the call may be routed to the virtual agent when the user is categorized in the first category as having a technical skill level above an average technical skill level usable to resolve the query. In an example, the call may be routed to the human agent when the user is categorized in the second category as having a technical skill level below the average technical skill level usable to resolve the query, [0084] a decision engine 406 may be used to decide to route the call to the human agent or the virtual agent. In an example, the decision engine may use decision rules related to the technical skill level of the user and the technical complexity of the query for routing the call to the human agent or the virtual agent as has been explained earlier.). 
Sait discloses wherein the determining further comprises: selecting the support agent based on a difficulty of the current problem ([0071] call routing based on technical skills of users and technical complexity of a query, [0075] technical complexity of the query of the user may be assessed. In an example, a second machine learning model, such as the second machine learning model 208, may be used to classify the issue or query based on its technical complexity. In an example, the query may be classified into a class from among a plurality of classes based on a technical complexity of the query. For example, the query may be classified into a first class when the technical complexity of the query is assessed as being greater than a predefined threshold technical complexity., [0076] at block 610, the call of the user may be routed to one of a human agent and a virtual agent based on the technical skill level of the user and the technical complexity of the query.); and automatically routing the customer to the support agent.([0076] at block 610, the call of the user may be routed to one of a human agent and a virtual agent based on the technical skill level of the user and the technical complexity of the query. In an example, the virtual agent may be the virtual agent 210. In an example, a decision engine, such as the decision engine 406, may provide a decision to route the call to the human agent or the virtual agent based on decision rules. [0077]if the user is categorized in either the first category or the second category and the query from the user is complex, calls from the user may be routed to the human agent, to receive guidance from the human agent.) Sait discloses upon resolving the current problem, ending communication with the customer and storing the feedback ([0062] the decision rules used by the decision engine 406 may be predefined initially and updated based on feedback received from human agents and users after a call is concluded.) 
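For illustration only, the categorization-and-routing scheme mapped from Sait above (categorizing the user by a probability of being technically skilled, then routing on skill category and query complexity) can be sketched as follows. The function names, the 0.5 probability boundary, and the complexity flag are hypothetical placeholders, not values taken from Sait:

```python
# Hypothetical sketch of Sait's decision-engine routing ([0026], [0070], [0077]).
# All names and thresholds are illustrative assumptions.

def categorize_user(p_skilled: float) -> str:
    """Map a probability of the user being technically skilled to a category
    (Sait [0026]: each category is associated with a probability range)."""
    return "first" if p_skilled >= 0.5 else "second"  # above/below average skill

def route_call(category: str, query_is_complex: bool) -> str:
    """Decision rules per Sait [0070]/[0077]: complex queries go to a human
    agent regardless of category; otherwise a skilled (first-category) user
    can be served resolution steps by the virtual agent."""
    if query_is_complex:
        return "human_agent"
    return "virtual_agent" if category == "first" else "human_agent"

# Example: a likely-skilled user with a non-complex query
print(route_call(categorize_user(0.8), query_is_complex=False))  # prints "virtual_agent"
```

The sketch only restates the routing table the examiner quotes; Sait's actual implementation uses machine learning models for both the categorization and the complexity classification.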
Sait does not specifically teach that the customer request comprises customer identification data to identify a specific customer for the customer request; checking a customer proficiency data set to identify historical data of the specific customer; updating the current customer proficiency rating, based on support contact history for the customer comprising a frequency of customer calls for the current problem, and an average duration of the customer calls; selecting the support agent based on a specialized area of the support agent for the current problem; storing the difficulty of the current problem and a time spent to resolve the current problem for the customer in the customer proficiency data set. Sait, however, teaches categorizing the user in a category from among a plurality of categories based on the responses. In one example, the plurality of categories may correspond to the technical skill level and the type of support to be provided to the user. ([0025]) Daianu teaches receiving a customer request for customer support for a current problem of a product or service area (Fig 5 #502 receive a query and a personal identification from user); wherein the customer request comprises customer identification data to identify a specific customer for the customer request (Fig 1 # 110 user profile, Col 1 lines 57-62 retrieving, based on the personal ID, a user profile associated with the user, wherein the user profile comprises: user attribute data, a clickstream history of the user, and a product SKU of the product. Col 8 lines 66-67 a featurization system 306 can extract syntax and semantic data 308 from each query. The extracted syntax and semantic data 308 can include an intent of the query and entity information regarding context of the query. Fig 5 #502 and Col 11 lines 39-47 At step 502, a query and a personal ID is received from a user of a product. In some implementations, the query can be associated with a product or service offered by an organization.
In other implementations, the personal ID is associated with the user and can include Social Security number, serial number of product, service number, a random string of digits assigned to the user by the organization, and other forms of identifiers associated with the user for user identification.); checking a customer proficiency data set to identify historical data and analytical data of the specific customer (Col 1 lines 58-62 retrieving, based on the personal ID, a user profile associated with the user, wherein the user profile comprises: user attribute data, a clickstream history of the user (historical data), and a product SKU of the product. Col 3 lines 59-67, Col 4 lines 1-9 The user profile data may include user attribute data such as address, geographic location information, marital status, phone number, e-mail address, employment information, employment history, number of dependents, financial and tax history, current tax year information, prior tax year information, medical history, education, demographic information, and other information that describes features or characteristics of the user. In some cases, the user profile data can include product or service specific information, such as a clickstream history (analytical data) of the user or a product SKU of the user's product. For example, the user profile of a user of a software product can include a clickstream history of the user's session(s) with the software product. The clickstream history of the user is a navigation path of the user such as the various tabs, pages, sections, subsections, etc. 
of the software product that the user has visited in one or more previous sessions (historical data) the user has used the software product.); obtaining a customer statement of understanding for the current problem, based on identified historical data and analytical data (Col 3 lines 10-14 a user may have a query regarding how to access a specific feature of a tax preparation software product, such as retrieving the previous year's tax return information., Col 3 lines 29-32 a user may have a query regarding how to determine the number of dependents to claim when using a tax preparation software product to prepare tax documents.); updating support contact history for the customer comprising a frequency of customer calls and an average duration of the customer calls (Col 8 lines 26-42 The user-agent interaction database 304 can store recordings of previous user-agent interactions that can provide information related to the content of user-agent interaction and user sentiment. In some cases, the user-agent interaction database 304 can store chat logs, email chains, and other forms of communications (frequency of customer contact) regarding the interaction between the user and the agent.
Each user-agent interaction stored in the user-agent interaction database 304 can be converted to score(s), such as a NET PROMOTER SCORE®, a sentiment score, or other metrics that indicates a measurement of user satisfaction, an effectiveness of user-agent interaction, or a measurement associated with the user-agent interaction (e.g., time elapsed during the user-agent interaction) (duration).); selecting the support agent based on a specialized area of the support agent for the current problem; automatically routing the customer to the support agent (Col 2 lines 2-11 generating, based on the user attribute data, the processed user data, and the agent profile data for each agent in the set of available agents, a predicted quality score for each agent in the set of available agents. The method further includes determining a qualified agent with a highest predicted quality score from the set of available agents, wherein the agent profile data of the agent corresponds to the query. The method further includes routing the user, based on the product SKU, to the agent with the highest predicted quality score.); storing the difficulty of the current problem and a time spent to resolve the current problem for the customer (Col 8 lines 26-42 The user-agent interaction database 304 can store recordings of previous user-agent interactions that can provide information related to the content of user-agent interaction and user sentiment. Each user-agent interaction stored in the user-agent interaction database 304 can be converted to score(s), such as a NET PROMOTER SCORE®, a sentiment score, or other metrics that indicates a measurement of user satisfaction, an effectiveness of user-agent interaction, or a measurement associated with the user-agent interaction (e.g., time elapsed during the user-agent interaction) (time spent) Col 10 lines 59-62 a featurization system 306 can extract syntax and semantic data 308 from each query.
The extracted syntax and semantic data 308 can include an intent of the query and entity information regarding context of the query.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included that the customer request comprises customer identification data to identify a specific customer for the customer request; checking a customer proficiency data set to identify historical data and analytical data of the specific customer; updating the current customer proficiency rating, based on support contact history for the customer comprising a frequency of customer calls for the current problem, and an average duration of the customer calls; selecting the support agent based on a specialized area of the support agent for the current problem; storing the difficulty of the current problem and a time spent to resolve the current problem for the customer, as disclosed by Daianu in the system disclosed by Sait, for the motivation of providing a routing system to a user of a product by retrieving, based on the personal ID, a user profile associated with the user that comprises user attribute data, a clickstream history of the user, and a product SKU of the product (Col 1 lines 55-62 Daianu). Sait/Daianu do not teach updating the current customer proficiency rating, based on support contact history for the customer comprising a frequency of customer calls for the current problem. Khudia teaches updating the customer proficiency rating, based on support contact history for the customer ([0032] determined that a maximum number of attempts (frequency) has not been reached (at 328), the support process 300 includes determining whether a new request from the requester is needed at 332.
The support process 300 returns to selecting the starting point of the support path at 318, and the support process 300 may reevaluate at least one of the issue-related information (determined at 304) or the requester information (determined at 308) in order to select a new starting point for attempting to resolve the issue indicated by the assistance request. The support process 300 may select a different starting point in the same channel 164 that was previously selected, such as a starting point that is more suitable for a requester that has less experience or less proficiency (updated proficiency)with the software application than was previously assumed in one or more prior unsuccessful instances of the process 300, [0033] if it is determined that a new request from the requester is needed (at 332), then the support process 300 includes prompting the requester for a new assistance request at 334. For example, in at least some implementations, the prompting of the requester (at 334) may include querying the requester for a new or differently-worded description of the issue, or querying the requester for additional details regarding the issue. [0032] after prompting the requester for a new assistance request (at 334), the support process 300 returns to receiving the assistance request (at 302), and the above-described operations 302 through 334 may be repeated (updating user information including proficiency/experience/skill) indefinitely until the issue is resolved (at 322), or until the maximum number of attempts has been reached (at 328). comprising a frequency of customer calls for the current problem ([0041] select the starting point (at 410) may be based on a variety of requester information, including age, experience (e.g. experience with software application, experience with client device 110, experience with computers or electronic devices in general, etc.), technical skills of the particular requester (e.g. 
degree, certification, credential, training, etc.), or other characteristics of the particular requester (e.g. previous instances or interactions with support system 150 (frequency), demographic information, one or more other software applications operated by the particular requester, responses or inputs by the particular requester that indicate proficiency or lack thereof, etc.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included updating the customer proficiency rating, based on support contact history for the customer comprising a frequency of customer calls for the current problem, as disclosed by Khudia in the system disclosed by Sait/Daianu, for the motivation of providing a method to improve efficiency of support systems by appropriately tailoring the provision of support to the expertise and personal characteristics of the requester. Because the starting point within the support menu is appropriately selected based on information characterizing the requester (e.g. age, proficiency of the requester, etc.), the starting point may be selected to reduce or eliminate one or more operations that may be unnecessary for resolution of the issue. ([0036] Khudia) Sait/Daianu/Khudia do not specifically teach storing data for the customer in the customer proficiency data set. Ruano teaches updating the current customer proficiency rating (Col 8 lines 7-14 USD module 230 associates a default USS with the user, until such time as USD module 230 can determine a more accurate skill level (and USS) for the user. In an alternate embodiment, USD module 230 stores USS information associated with each user, according to a unique user identifier associated with the user. In such embodiments, the default USS is the stored USS for the user. Col 8 lines 17-24 USD module 230 revises the stored USS for each user based on newly received user input.
Thus, in one embodiment, USD module 230 adaptively configures the USS based on the user's current actual expertise, adjusting the historically-oriented default value to account for changes in the user's skill level (user proficiency rating). For example, as a novice user progresses, the user's skill level typically increases., Col 11 lines 45-51 at block 465, the SDA module adjusts the USS based on the SS of each selected SWP. In one embodiment, USD module 230 adjusts the USS based on the SS of each selected SWP.); storing statistics/data related to the customer in the customer proficiency data set (Col 8 lines 7-14 USD module 230 associates a default USS with the user, until such time as USD module 230 can determine a more accurate skill level (and USS) for the user. In an alternate embodiment, USD module 230 stores USS information associated with each user, according to a unique user identifier associated with the user. In such embodiments, the default USS is the stored USS for the user. Col 8 lines 17-24 USD module 230 revises the stored USS for each user based on newly received user input. Thus, in one embodiment, USD module 230 adaptively configures the USS based on the user's current actual expertise, adjusting the historically-oriented default value to account for changes in the user's skill level (user proficiency rating). For example, as a novice user progresses, the user's skill level typically increases., Col 11 lines 45-51 at block 465, the SDA module adjusts the USS based on the SS of each selected SWP. In one embodiment, USD module 230 adjusts the USS based on the SS of each selected SWP.) 
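For illustration only, the adaptive user-skill-score (USS) revision quoted from Ruano above (a stored or default USS keyed to a unique user identifier, adjusted as new user input evidences a changed skill level) can be sketched as follows. The smoothing rule, the default value, and the "expert"/"novice" threshold are hypothetical assumptions; Ruano supplies only the labels and the idea of revising a stored score:

```python
# Hypothetical sketch of Ruano's USS update (Col 8). The blend factor,
# default score, and threshold are illustrative assumptions.

DEFAULT_USS = 0.5     # default score until a better estimate exists (Ruano Col 8)
uss_store: dict = {}  # stored USS per unique user identifier

def update_uss(user_id: str, observed_skill: float, alpha: float = 0.3) -> float:
    """Blend the stored USS with newly observed evidence so the score
    tracks changes in the user's skill level over time."""
    prior = uss_store.get(user_id, DEFAULT_USS)
    uss_store[user_id] = (1 - alpha) * prior + alpha * observed_skill
    return uss_store[user_id]

def skill_label(uss: float) -> str:
    """Ruano's output module maps the USS to a label such as 'expert' or
    'novice' (Col 8 lines 34-36); the 0.7 cutoff here is an assumption."""
    return "expert" if uss >= 0.7 else "novice"

update_uss("user-42", 1.0)  # strong evidence of expertise
update_uss("user-42", 1.0)
print(skill_label(uss_store["user-42"]))  # prints "expert"
```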
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included storing statistics/data related to the customer in the customer proficiency data set, as disclosed by Ruano in the system disclosed by Sait/Daianu/Khudia, for the motivation of providing a method of assessing a user's skill level and providing instructions and/or information to the user based on the user's skill level (Col 4 lines 27-30 Ruano), and reducing the time users spend sifting through unhelpful or inappropriate information, leading the user to a proper solution more quickly (Col 13 lines 23-25 Ruano). Claim 11. Sait discloses the system, one or more computer processors, and a memory ([0022]-[0023] The processor 102 may be implemented as microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. [0036] The system 100 may also include a memory 204 coupled to the processor 102.) containing a program which, when executed by the one or more computer processors, performs an operation. Claim 16. Sait discloses the computer program product comprising a computer-readable storage medium ([0036] The system 100 may also include a memory 204 coupled to the processor 102. The memory 204 may include any non-transitory computer-readable medium including volatile memory (e.g., RAM), and/or non-volatile memory (e.g., EPROM, flash memory, Memristor, etc.).) having computer-readable program code embodied therewith, the computer-readable program code executable by one or more computer processors to perform an operation.
Regarding claim 2, Sait as modified by Daianu/Khudia/Ruano teaches the method of claim 1. Sait teaches wherein receiving the customer request for customer support further comprises receiving the customer request from one of a virtual phone agent, a virtual video-conferencing agent, a virtual text messaging agent, a virtual email agent, a support case bot, an interactive web form, or an online chatbot. ([0014] when a user calls a customer support center to seek a resolution for product or service issues or for enquiries, the user may be provided with a set of questions. In an example, an interactive voice response (IVR) (virtual phone agent) may provide the set of questions to the user. In another example, a human agent may provide the set of questions. In yet another example, an automated conversation tool, such as a chatbot, may provide the set of questions., [0067] the user may call for technical support for resolving queries about products/services of interest, working of a product, and the like. The call may be in any format, such as a voice call, a text message, or a chat message.) Regarding claims 3, 12 and 17, Sait as modified by Daianu/Khudia teaches the method of claim 1. Sait/Daianu/Khudia do not teach checking the customer proficiency data set to identify historical data and analytical data of the specific customer further comprises checking the customer proficiency data set, via the AI virtual support agent, based on the customer identification data, to identify a historical average customer proficiency rating, or a most recent customer proficiency rating for the specific customer.
Ruano teaches checking the customer proficiency data set to identify historical data and analytical data of the specific customer further comprises checking the customer proficiency data set, via the AI virtual support agent, based on the customer identification data, to identify a historical average customer proficiency rating, or a most recent customer proficiency rating for the specific customer. (Col 8 lines 7-24 USD module 230 associates a default USS (proficiency rating) with the user, until such time as USD module 230 can determine a more accurate skill level (and USS) for the user. In an alternate embodiment, USD module 230 stores USS information associated with each user, according to a unique user identifier associated with the user. USD module 230 revises the stored USS for each user based on newly received user input. Thus, in one embodiment, USD module 230 adaptively configures the USS based on the user's current actual expertise, adjusting the historically-oriented default value to account for changes in the user's skill level. For example, as a novice user progresses, the user's skill level typically increases. Col 8 lines 34-36 the output module 250 uses the USS to determine a user skill level, such as, "expert" or "novice" (recent proficiency rating for user)) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included checking the customer proficiency data set, via the AI virtual support agent, based on the customer identification data, to identify a historical average customer proficiency rating, or a most recent customer proficiency rating for the specific customer, as disclosed by Ruano in the system disclosed by Sait/Daianu, for the motivation of providing a method of assessing a user's skill level and providing instructions and/or information to the user based on the user's skill level. (Col 4 lines 27-30, Col 8 lines 43-47 Ruano). Regarding claims 4, 13, 18.
Sait as modified by Daianu/Khudia/Ruano teaches the method of claim 3. Sait/Daianu/Khudia do not teach updating the customer proficiency rating based on at least one of the historical data and the analytical data for the customer, the historical average customer proficiency rating, or the most recent customer proficiency rating for the customer. Ruano teaches updating the customer proficiency rating based on at least one of the historical data and the analytical data for the customer, the historical average customer proficiency rating, or the most recent customer proficiency rating for the customer. (Col 8 lines 7-14 USD module 230 associates a default USS with the user, until such time as USD module 230 can determine a more accurate skill level (and USS) for the user. In an alternate embodiment, USD module 230 stores USS information associated with each user, according to a unique user identifier associated with the user. In such embodiments, the default USS is the stored USS for the user. Col 8 lines 17-24 USD module 230 revises the stored USS for each user based on newly received user input. Thus, in one embodiment, USD module 230 adaptively configures the USS based on the user's current actual expertise, adjusting the historically-oriented default value to account for changes in the user's skill level (user proficiency rating). For example, as a novice user progresses, the user's skill level typically increases., Col 11 lines 45-51 at block 465, the SDA module adjusts the USS based on the SS of each selected SWP. In one example, USD module 230 adjusts the USS based on the SS of each selected SWP.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included updating the customer proficiency rating based on at least one of the historical data and the analytical data for the customer, the historical average customer proficiency rating, or the most recent customer proficiency rating for the customer, as disclosed by Ruano in the system disclosed by Sait/Daianu, for the motivation of providing a method of assessing a user's skill level and providing instructions and/or information to the user based on the user's skill level (Col 4 lines 27-30 Ruano), and reducing the time users spend sifting through unhelpful or inappropriate information, leading the user to a proper solution more quickly (Col 13 lines 23-25 Ruano). Regarding claims 5, 14 and 19, Sait as modified by Daianu/Khudia/Ruano teaches the method of claim 1. Sait teaches wherein providing the set of questions further comprises identifying, via the AI virtual support agent, based on the customer responses, a proposed solution for the current problem, and providing the proposed solution to the customer. ([0063] when the call from the user seeking resolution for their query is routed to one of the human agent or the virtual agent, the human agent or the virtual agent may also utilize the responses provided by the user to provide guidance (proposed solution) for resolution of the user query. [0077] if the user is categorized under a first category (high technical skill level) and the query from the user is non-complex, the calls from the user may be routed to the virtual agent, as the user of the first category may be able to resolve the query from resolution steps provided by the virtual agent.
[0084] when the call from the user seeking resolution for their query is routed to one of the human agent or the virtual agent as explained above, the human agent or the virtual agent may provide resolution steps corresponding to the user query for resolution of the user query. To provide the resolution steps, the agent may also take into account the responses provided by the user to the set of questions.) Regarding claim 6. Sait as modified by Daianu/Khudia/Ruano teaches the method of claim 5. Sait further teaches providing documentation to the customer that is related to the proposed solution, and answering customer questions. ([0063] the human agent or the virtual agent may also utilize the responses provided by the user to provide guidance for resolution of the user query. [0084] when the call from the user seeking resolution for their query is routed to one of the human agent or the virtual agent as explained above, the human agent or the virtual agent may provide resolution steps corresponding to the user query (documentation of proposed solution) for resolution of the user query. To provide the resolution steps, the agent may also take into account the responses provided by the user to the set of questions) Regarding claim 7. Sait as modified by Daianu/Khudia/Ruano teaches the method of claim 1. Sait teaches wherein providing the set of questions further comprises receiving, via the AI virtual support agent, a customer question and answering the customer question ([0058] the user may state ‘printer is not working’ in the form of unstructured text.
Further, in response to a question, the user may mention that ‘paper is jammed’.[0077] the query from the user is non-complex, the calls from the user may be routed to the virtual agent, as the user of the first category may be able to resolve the query from resolution steps provided by the virtual agent.), wherein answering the customer question further comprises providing at least one of questions related to the customer question, or documentation related to the customer question ([0082] system 100 to determine a technical skill level of the user based on responses received to a set of questions from the user, in response to a query received from the user over an incoming call. In an example, the query may be related to, for example, a query about products/services of interest, a query about the working of a product, [0084]. the virtual agent may provide resolution steps (documentation) corresponding to the user query for resolution of the user query. To provide the resolution steps, the agent may also take into account the responses provided by the user to the set of questions., [0045] the set of questions provided to the user, and responses received may also be made available to the agent to whom the call is routed. The agent may also seek further information regarding the issue while providing the resolution steps to efficiently resolve the issue.) Regarding claim 9. Sait as modified by Daianu/Khudia/Ruano teaches the method of claim 1, Sait teaches wherein determining, via the AI virtual support agent, based on the customer proficiency rating, to assign the support agent for the current problem further comprises identifying, based on the current problem and the product or service area, one or more of a product or service area of the support agent ([0055] the calls from users categorized under the second category 312 may be routed to the human agent to provide them more detailed guidance in view of their lower technical skills or emotional state. 
In an example, when the call from the user seeking resolution for their query is routed to one of the human agent or the virtual agent, the human agent or the virtual agent may also utilize the responses provided by the user to the series of questions to provide resolution steps for resolution of the user query. For example, if the user had indicated in one of the responses that they have recently installed an update to a printer driver, the agent may take this into account while providing the resolution steps., [0060] The decision engine 406 may decide to route the call to one of a virtual agent 210 and a human agent device 212 of a human agent based on the technical skill level and the technical complexity.), an expertise level of the support agent, or experience of the support agent for the current problem ([0044] the decision rules may be defined manually based on the experience of human agents in dealing with various categories of users and complexities of queries.). Further, Daianu also teaches assign the support agent for the current problem further comprises identifying, based on the current problem and the product or service area, one or more of a product or service area of the support agent, an expertise level of the support agent, or experience of the support agent for the current problem (Col 2 lines 54-57 determining a qualified agent to route a user based on, for example, a user's query and profile as well as expertise, background, and/or experience of available agents. Col 2 lines 66-67, Col 3 lines 1-2 A qualified agent is an available agent that, based on the user's query and profile, has the expertise, background, and/or experience to specifically address the user's query as well as has the highest generated predicted quality score. A predicted quality score is generated for each available agent and takes into consideration the user's query and profile, as well as the agent's expertise, background, and/or experience.) 
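For illustration only, the agent-selection step quoted from Daianu above (a predicted quality score generated for each available agent from the user's query and profile and the agent's expertise, with the user routed to the highest-scoring agent) can be sketched as follows. The scoring function and field names are hypothetical placeholders, not Daianu's learned model:

```python
# Hypothetical sketch of Daianu's highest-predicted-quality-score routing
# (Col 2). Field names and the toy scoring rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    expertise: set            # product/service areas the agent covers
    experience_years: float

def predicted_quality_score(agent: Agent, query_topic: str) -> float:
    """Toy stand-in for Daianu's learned score: reward topical expertise,
    break ties with experience."""
    return (1.0 if query_topic in agent.expertise else 0.0) + 0.01 * agent.experience_years

def route_to_qualified_agent(agents: list, query_topic: str) -> Agent:
    """Pick the available agent with the highest predicted quality score."""
    return max(agents, key=lambda a: predicted_quality_score(a, query_topic))

agents = [
    Agent("alice", {"tax", "payroll"}, 3.0),
    Agent("bob", {"printers"}, 10.0),
]
print(route_to_qualified_agent(agents, "tax").name)  # prints "alice"
```

In Daianu the score is generated from user attribute data, processed user data, and agent profile data; the sketch reduces that to a single topical match purely to show the argmax selection.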
Claims 8, 10, 15, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sait et al. (US 2023/0089757 A1) in view of Daianu (US 11,270,235 B1) further in view of Khudia (US 2018/036502 A1) and Ruano (US 8,155,948 B2) as applied to claims 1, 11, 16, further in view of Jungmeisteris et al. (US 2022/0374956 A1). Regarding claims 8, 15 and 20, Sait as modified by Daianu/Khudia/Ruano teaches the method of claim 1. Sait/Daianu/Khudia do not specifically teach wherein ending the communication with the customer further comprises, based on receiving a customer response that the current problem is resolved, ending the customer support via the AI virtual support agent. However, Sait teaches receiving feedback at the end of the customer support ([0044] at the end of a call, a human agent or the user may provide a feedback to the system 100 [0062] the decision rules used by the decision engine 406 may be predefined initially and updated based on feedback received from human agents and users after a call is concluded.). Also, Daianu teaches a survey completed by the user via email following interaction with an agent, and storing metrics that indicate a measurement of user satisfaction, an effectiveness of user-agent interaction, or a measurement associated with the user-agent interaction (Col 8 lines 38-40, 50-52). Jungmeisteris teaches, based on receiving a customer response that the current problem is resolved, ending the customer support via the AI virtual support agent ([0062] FIGS. 4A and 4B each depict a user interface (400 and 410, respectively) which inquire whether the user's problem was solved. In FIG.
4A, the user's problem was resolved, and a progression of screens is displayed in which system 110 requests additional information from the user (402), takes in additional input from the user (404), requests free-form text input with user feedback (406), and end the interaction (408)., Fig 4A #400 user selects ‘yes’ problem solved, 408 disconnect/end interaction) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included, based on receiving a customer response that the current problem is resolved, ending the customer support via the AI virtual support agent, as disclosed by Jungmeisteris in the system disclosed by Sait/Daianu/Khudia, for the motivation of providing a method of inquiring whether the user's problem was solved before ending the interaction ([0062] Jungmeisteris). Sait/Daianu/Khudia/Jungmeisteris do not teach storing in the customer proficiency data set statistics related to the customer support, wherein the statistics comprise an updated customer proficiency rating for the customer. Daianu teaches storing statistics related to the customer support (Col 8 lines 28-44 The user-agent interaction database 304 can store recordings of previous user-agent interactions that can provide information related to the content of user-agent interaction and user sentiment. In some cases, the user-agent interaction database 304 can store chat logs, email chains, and other forms of communications regarding the interaction between the user and the agent. Each user-agent interaction stored in the user-agent interaction database 304 can be converted to score(s), such as a NET PROMOTER SCORE®, a sentiment score, or other metrics that indicates a measurement of user satisfaction, an effectiveness of user-agent interaction, or a measurement associated with the user-agent interaction (e.g., time elapsed during the user-agent interaction).
The score of a user-agent interaction can be stored in the agent profile of the associated agent. Ruano teaches storing statistics related to the customer support, wherein the statistics comprise an updated customer proficiency rating for the customer. (Col 8 lines 7-14 USD module 230 associates a default USS with the user, until such time as USD module 230 can determine a more accurate skill level (and USS) for the user. In an alternate embodiment, USD module 230 stores USS information associated with each user, according to a unique user identifier associated with the user. In such embodiments, the default USS is the stored USS for the user. Col 8 lines 17-24 USD module 230 revises the stored USS for each user based on newly received user input. Thus, in one embodiment, USD module 230 adaptively configures the USS based on the user's current actual expertise, adjusting the historically-oriented default value to account for changes in the user's skill level (user proficiency rating). For example, as a novice user progresses, the user's skill level typically increases., Col 11 lines 45-51 at block 465, the SDA module adjusts the USS based on the SS of each selected SWP. In one embodiment, USD module 230 adjusts the USS based on the SS of each selected SWP.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included updating the customer proficiency rating based on historical data and analytical data of the customer and other customers of the product or service area, as disclosed by Ruano in the system disclosed by Sait/Daianu/Jungmeisteris, for the motivation of providing a method of assess a user's skill level and provide instructions and/or information to the user based on the user's skill level. 
(Col 4 lines 27-30 Ruano) and reducing the time users spend sifting through unhelpful or inappropriate information, leasing the use to a proper solution more quickly (Col 13 lines 23-25 Ruano) Regarding claim 10. Sait as modified by Daianu/Khudia/Ruano teaches the method of claim 1, Sait teaches wherein providing the set of questions further comprises providing, via the AI virtual support agent, a plurality of interactive customer requests, based on one or more customer responses to the set of questions, ([0044] at the end of a call, a human agent or the user may provide a feedback to the system 100 to indicate if the call may have been handled by a virtual agent or to indicate whether any problems arose during the call, for better categorization of users, classification of queries, and decision making, [0045] on routing the call, the query of the user, the set of questions provided to the user, and responses received may also be made available to the agent to whom the call is routed. The agent may also seek further information regarding the issue while providing the resolution steps to efficiently resolve the issue.) and feedback received from an agent and customer after a call is concluded ([0062]). However, Saif does not specifically teach ending the customer support based on resolving the current problem Jungmeisteris teaches ending the customer support based on resolving the current problem ([0062] FIGS. 4A and 4B each depict a user interface (400 and 410, respectively) which inquire whether the user's problem was solved. In FIG. 
4A, the user's problem was resolved, and a progression of screens is displayed in which system 110 requests additional information from the user (402), takes in additional input from the user (404), requests free-form text input with user feedback (406), and end the interaction (408)., Fig 4A #400 user selects ‘yes’ problem solved, 408 disconnect/end interaction) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included ending the customer support based on resolving the current problem, as disclosed by Jungmeisteris in the system disclosed by Sait/Daianu, for the motivation of providing a method of inquire whether the user's problem was solved before ending the interaction ([0062] Jungmeisteris) Sait/Daianu/Khudia/Jungmeisteris do not specifically teach updating the customer proficiency rating based on historical data and analytical data of the customer and other customers of the product or service area. Ruano teaches updating the customer proficiency rating based on historical data and analytical data of the customer and other customers of the product or service area (Col 8 lines 7-14 USD module 230 associates a default USS with the user, until such time as USD module 230 can determine a more accurate skill level (and USS) for the user. In an alternate embodiment, USD module 230 stores USS information associated with each user, according to a unique user identifier associated with the user. In such embodiments, the default USS is the stored USS for the user. Col 8 lines 17-24 USD module 230 revises the stored USS for each user based on newly received user input. Thus, in one embodiment, USD module 230 adaptively configures the USS based on the user's current actual expertise, adjusting the historically-oriented default value to account for changes in the user's skill level (user proficiency rating). 
For example, as a novice user progresses, the user's skill level typically increases., Col 11 lines 45-51 at block 465, the SDA module adjusts the USS based on the SS of each selected SWP. In one embodiment, USD module 230 adjusts the USS based on the SS of each selected SWP.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included updating the customer proficiency rating based on historical data and analytical data of the customer and other customers of the product or service area, as disclosed by Ruano in the system disclosed by Sait/Daianu/Jungmeisteris, for the motivation of providing a method of assess a user's skill level and provide instructions and/or information to the user based on the user's skill level. (Col 4 lines 27-30 Ruano) and reducing the time users spend sifting through unhelpful or inappropriate information, leasing the use to a proper solution more quickly (Col 13 lines 23-25 Ruano). Response to Arguments Applicant's arguments filed 4/3/26 have been fully considered but they are not persuasive. Regarding 101 rejection, examiner has considered all arguments and respectfully disagrees. New limitations have been addressed in rejection above. The current claims are directed to abstract idea of organizing human activity (including commercial interactions such as business relations, managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), including a person’s interaction with computer) and mathematical calculations (calculating customer proficiency rating), but for the recitation of generic computer components. 
That is, other than reciting structural elements (an Artificial Intelligence (AI) virtual support agent (claims 1, 11, 16), one or more computer processors, a memory (claim 11), and a computer storage medium (claim 16)), the claims are directed to providing customer support by routing a customer to an agent based on a proficiency rating. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a method of organizing human activity but for the recitation of generic computer components, the claim recites an abstract idea.

The judicial exception is not integrated into a practical application because the claims merely describe how to generally "apply" the concept of receiving data, analyzing it, and providing routing based on a proficiency rating. In particular, the claims recite only the additional elements of an Artificial Intelligence (AI) virtual support agent (claim 1), one or more computer processors, a memory (claim 11), and a computer storage medium (claim 16). These additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component: they merely add the words "apply it" (or an equivalent) to the judicial exception, provide mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). Simply implementing the abstract idea on generic components is not a practical application of the abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Regarding the alleged improvement, the Examiner respectfully disagrees.
While the Applicant's specification may disclose alleged improvements to processor efficiency, the specification merely recites the alleged improvements ([0013]) with no further detail as to how the claim set achieves such an improvement. MPEP 2106.05(a) recites: "If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement." After the examiner has consulted the specification and determined that the disclosed invention improves technology, the claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology. Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316, 120 USPQ2d 1353, 1359 (Fed. Cir. 2016) (the patent owner argued that the claimed email filtering system improved technology by shrinking the protection gap and mooting the volume problem, but the court disagreed because the claims themselves did not have any limitations that addressed these issues). That is, the claim must include the components or steps of the invention that provide the improvement described in the specification. The Examiner notes that neither the specification nor the claims recite how the improvement in processor efficiency is achieved. The instant claims are directed to an abstract idea and do not integrate the abstract idea into a practical application. The additional elements recited in the instant claims are only generic computing components that implement the abstract idea in a computing environment. As such, the instant claims can be interpreted as only making the abstract idea more efficient; there are no actual changes or improvements to any computing components.
Applicant's arguments with respect to the § 103 rejection have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. New limitations have been addressed in the rejection above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Rath (US 2021/014136) discloses a system that receives support tickets and trains a machine-learning model to identify support agents who have experience resolving support tickets of multiple complexities. The system receives a support ticket, identifies a topic of the support ticket, and estimates a complexity of the support ticket.

Gray (US 9,218,410) discloses that the adaptive module 430 tracks the behavior of the user (stored in the user information database 440 as engagement performance) and adapts the engagement to adjust to the user's skill level.

Dwane (US 2020/0364758 A1) discloses that the CSat prediction data 402 may include a customer support issue category 404, a customer sentiment score 406, an issue complexity rating 408, a classification confidence rating 410, and a customer context score 412.

Kannan (US 11,080,721) discloses that the ASL engine 203 uses a customer experience score to measure, compare, and improve models. FIG. 9 is a block schematic diagram that depicts a model for customer experience score according to the invention. In FIG. 9, an example model for a customer experience score 402 incorporates measures of customer effort involved in an engagement 400, time spent on the engagement 404, and outcome of the engagement 406. The customer experience score is developed as a statistical model that is a function of these three parameters, where the customer effort score is measured as a function of how long resolution took, how many channels were used, and how many contacts it took for resolution (Fig. 6).

Wadhwa (US 7,526,722) teaches updating the customer proficiency rating based on support contact history for the customer, comprising a frequency of the customer's current problem (col. 4, lines 44-55: the system may determine a user's proficiency category based on the number of times the user has used the system (contact history). The user history 112 may indicate the number of times the user has logged in. The more times the user logs in, the more advanced the user may be deemed to be. After a predetermined number of log-ins, the system may update the user's proficiency category to a more advanced level. For example, after a single log-in, the user's proficiency category may be determined to be "beginner," and may be increased to "intermediate" after 10 log-ins. In one example embodiment, the date and/or time of the user's last log-in may be recorded in the user history 112), and an average duration (col. 5, lines 1-10: The user history 112 may also indicate, for a previously encountered event, an amount of time that has passed since occurrence of the event, a number of log-ins after occurrence of the event, and/or a number of times the user has revisited the point in the sequence of executed instructions at which the event had previously occurred without recurrence of the event. As time passes, the number of log-ins increases, and/or the number of times the user has revisited the point in the execution sequence increases without recurrence of the event, the user's proficiency category may be advanced).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANGEETA BAHL, whose telephone number is (571) 270-7779. The examiner can normally be reached 7:30 AM - 4:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jessica Lemieux, can be reached at (571) 270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SANGEETA BAHL/
Primary Examiner, Art Unit 3626

Prosecution Timeline

Apr 26, 2024
Application Filed
Jul 10, 2025
Non-Final Rejection — §101, §103
Sep 18, 2025
Examiner Interview Summary
Sep 18, 2025
Applicant Interview (Telephonic)
Oct 14, 2025
Response Filed
Feb 06, 2026
Final Rejection — §101, §103
Mar 29, 2026
Response after Non-Final Action
Apr 03, 2026
Request for Continued Examination
Apr 03, 2026
Response after Non-Final Action
Apr 04, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591914
REAL-TIME COLLATERAL RECOMMENDATION
2y 5m to grant Granted Mar 31, 2026
Patent 12548099
SYSTEMS AND METHODS FOR PRIORITIZED FIRE SUPPRESSION
2y 5m to grant Granted Feb 10, 2026
Patent 12524739
CREATING AND USING TRIPLET REPRESENTATIONS TO ASSESS SIMILARITY BETWEEN JOB DESCRIPTION DOCUMENTS
2y 5m to grant Granted Jan 13, 2026
Patent 12482304
SYSTEM AND A METHOD FOR AUTHENTICATING INFORMATION DURING A POLICE INQUIRY
2y 5m to grant Granted Nov 25, 2025
Patent 12450617
LEARNING FOR INDIVIDUAL DETECTION IN BRICK AND MORTAR STORE BASED ON SENSOR DATA AND FEEDBACK
2y 5m to grant Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
21%
Grant Probability
40%
With Interview (+19.3%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 452 resolved cases by this examiner. Grant probability derived from career allow rate.
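The projection figures above appear to follow simple arithmetic: the base grant probability is the examiner's career allow rate (93 grants out of 452 resolved cases, per the examiner intelligence panel), and the "With Interview" figure adds the observed interview lift on top. A minimal sketch of that calculation, assuming a purely additive lift (the function names are illustrative, not from any real API):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that were granted."""
    return granted / resolved

def with_interview(base_rate: float, lift: float) -> float:
    """Apply an additive interview lift, capped at 100%."""
    return min(base_rate + lift, 1.0)

base = allow_rate(93, 452)              # ~0.206, displayed as 21%
adjusted = with_interview(base, 0.193)  # ~0.399, displayed as 40%
print(f"{base:.0%} base, {adjusted:.0%} with interview")
```

If the dashboard instead conditioned on interview outcomes (e.g., allow rate among resolved cases that had an interview), the displayed numbers could differ; the additive model is only the simplest reading consistent with 21% + 19.3% ≈ 40%.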
