Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This communication is a Final Office Action in response to communications received on 12/31/25.
Claims 1, 5-9, 12-15, 19 and 20 have been amended.
Therefore, Claims 1-20 are now pending and have been addressed below.
The double patenting rejection is withdrawn in view of the terminal disclaimer filed on 12/31/25.
Terminal Disclaimer
The terminal disclaimer filed on 12/31/25, disclaiming the terminal portion of any patent granted on this application that would extend beyond the expiration date of US12118568, has been reviewed and is accepted. The terminal disclaimer has been recorded.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception without significantly more.
Step 1: Identifying Statutory Categories
In the instant case, Claims 1-7 are directed to a method, Claims 8-14 are directed to a system, and Claims 15-20 are directed to a non-transitory medium. Thus, the claims fall within one of the four statutory categories. Nevertheless, the claims recite a judicial exception in the form of an abstract idea.
Step 2A: Prong 1 Identifying a Judicial Exception
Under Step 2A, Prong 1, Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent Claims 1, 8 and 15 recite, for a customer support campaign: observing communications between a human operator and at least one customer regarding at least one customer support case in the customer support campaign (the humanoid being a computer-executed process that mimics human dialog); performing a self-assessment to determine that the humanoid is adequately trained for the customer support campaign; provisioning the humanoid to handle at least one future customer support case in the customer support campaign in response to the humanoid determining that it is adequately trained for the customer support campaign; receiving, from a customer, a request for customer support associated with the customer support campaign, the request indicating a customer support issue; interpreting information in the request for customer support; and providing support for the customer support issue based on interpreting the information and based on the self-training, wherein providing the support for the customer support issue includes coordinating to execute actions to resolve the customer support issue.
These limitations, as drafted, recite a process that, under its broadest reasonable interpretation, covers methods of organizing human activity (managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions), but for the recitation of generic computer components. That is, other than reciting the structural elements (Claims 1, 8, 15: self-training by a humanoid of a customer support system, the humanoid comprising a computer, a communications module, a plugin execution module, and a system external to the humanoid; Claim 8: a communication interface configured to enable network communications, one or more memories configured to store data, and one or more processors coupled to the communication interface and memory; Claim 15: one or more non-transitory computer readable storage media), the claims are directed to providing customer support for a customer support campaign. If a claim limitation, under its broadest reasonable interpretation, covers a method of organizing human activity but for the recitation of generic computer components, then the claim recites an abstract idea.
Step 2A, Prong 2: This judicial exception is not integrated into a practical application because the claims merely describe how to generally "apply" the concept of training a humanoid to provide customer support. In particular, the claims recite only the additional elements identified above (the humanoid's computer, communications module, plugin execution module, and external system in Claims 1, 8 and 15; the communication interface, memories, and processors of Claim 8; and the one or more non-transitory computer readable storage media of Claim 15). The additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Simply implementing the abstract idea on generic components is not a practical application of the abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The additional elements merely add the words "apply it" (or an equivalent) to the judicial exception, or amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). The claims are directed to an abstract idea. Further, the limitation of "self-train a humanoid based on communications between a human and customer" is simply the application of a computer model, itself an abstract idea. Furthermore, such training and applying of a model is no more than putting data into a black-box machine learning operation, devoid of technological implementation and application details. Each step requires only a generic computer performing generic computer functions.
When considered in combination, the claims do not amount to: improvements to the functioning of a computer, or to any other technology or technical field, as discussed in MPEP 2106.05(a); applying the judicial exception with, or by use of, a particular machine, as discussed in MPEP 2106.05(b); effecting a transformation or reduction of a particular article to a different state or thing, as discussed in MPEP 2106.05(c); or applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception, as discussed in MPEP 2106.05(e). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea.
Step 2B: Considering Additional Elements
The claimed invention is directed to an abstract idea without significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally "apply" the concept of providing customer support using a humanoid. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Even when viewed as a whole, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. The claims are not patent eligible. The dependent claims, when analyzed as a whole, are likewise patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea; the dependent claims are not significantly more because they are part of the identified judicial exception. See MPEP 2106.05(g). With respect to the additional elements identified above (the self-training humanoid with its computer, communications module, plugin execution module, and external system in Claims 1, 8 and 15; the communication interface, memories, and processors of Claim 8; and the one or more non-transitory computer readable storage media of Claim 15), these limitations are described in Applicant's own specification as generic and conventional elements.
See Applicant's specification, Paragraph [0023]: "The humanoid is configured to be trained (e.g., through self-learning, supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, etc.) to address support issues on behalf of a customer support center. For example, the humanoid can use one or more machine learning models and/or custom automation capabilities." Paragraph [0133] details that "computing device 1700 may include one or more processor(s) 1705, one or more memory element(s) 1710, storage 1715, a bus 1720, one or more network processor unit(s) 1725 interconnected with one or more network input/output (I/O) interface(s)." These are basic computer elements applied merely to carry out data processing such as, as discussed above, receiving, analyzing, transmitting and displaying data. Furthermore, the use of such generic computers to receive or transmit data over a network has been identified as a well-understood, routine and conventional activity by the courts. See Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) ("Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result, a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink." (emphasis added)). Also see MPEP 2106.05(d), discussing elements that the courts have recognized as well-understood, routine and conventional activities in particular fields. Lastly, the computing device provides only a result-oriented solution that lacks details as to how the computer performs the claimed abstract idea; therefore the processor/device amounts to mere instructions to apply the exception. See MPEP 2106.05(f). Furthermore, these steps/components are not explicitly recited and therefore must be construed at the highest level of generality; they are well-understood, routine and conventional limitations that amount to mere instructions to implement the abstract idea on a computer. Therefore, the claimed invention does not demonstrate a technologically rooted solution to a computer-centric problem; recite an improvement to another technology or technical field, or to the functioning of any computer itself; apply the exception with, or by use of, a particular machine; effect a transformation or reduction of a particular article to a different state or thing; add a specific limitation other than what is well-understood, routine and conventional in the field; add unconventional steps that confine the claim to a particular useful application; or provide meaningful limitations beyond generally linking an abstract idea to a particular technological environment such as computing. Viewing the limitations as an ordered combination adds nothing beyond considering the limitations individually. Taking the additional claimed elements individually and in combination, the computer components at each step of the process perform purely generic computer functions. Viewed as a whole, the claims do not purport to improve the functioning of the computer itself, or to improve any other technology or technical field. Use of an unspecified, generic computer does not transform an abstract idea into a patent-eligible invention.
Thus, the claims do not amount to significantly more than the abstract idea itself. Further, the claims to a system and to a computer-readable storage medium are held ineligible for the same reason: the generically recited computers add nothing of substance to the underlying abstract idea. Dependent Claims 2-7, 9-14, and 16-20 add additional limitations, but these only serve to further limit the abstract idea and hence are nonetheless directed to fundamentally the same abstract idea as representative Claims 1, 8 and 15.
Claims 2, 9 and 16 recite processing questions and answers from the communications. Claims 3-4, 10-11 and 17-18 recite: presenting, to the human operator, a proposed answer to a question associated with the customer support campaign; obtaining, from the human operator, a response to the proposed answer; self-training for the customer support campaign based on the response to the proposed answer; increasing a confidence level of the humanoid to handle customer support cases associated with the customer support campaign when the response to the proposed answer is positive; and decreasing the confidence level of the humanoid to handle customer support cases associated with the customer support campaign when the response to the proposed answer is not positive. Claims 5-7, 12-14 and 19-20 recite: assessing a confidence level of the humanoid for the customer support campaign and determining that the humanoid is adequately trained for the customer support campaign when the confidence level is above a threshold level; determining that the humanoid is adequately trained for the customer support campaign based on the self-training, including providing answers to a number of questions associated with the customer support campaign, determining a percentage of the answers that are correct answers, and determining that the humanoid is adequately trained for the customer support campaign when the percentage is above a threshold percentage; and determining that the humanoid has provided answers to at least a particular number of questions, and determining that the humanoid is adequately trained for the customer support campaign when the percentage is above the threshold percentage and the humanoid has provided answers to at least the particular number of questions.
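For illustration only, the adequacy determination recited in the dependent claims above (a correct-answer percentage above a threshold, combined with a minimum number of questions answered) reduces to a simple check. All function names, parameter names, and default values below are assumptions for the sketch, not anything recited in the claims.

```python
def adequately_trained(num_correct, num_answered, threshold_pct=90.0, min_questions=50):
    """Illustrative sketch: the humanoid is deemed adequately trained when its
    correct-answer percentage exceeds the threshold AND it has answered at
    least the minimum number of questions. Names and defaults are hypothetical."""
    if num_answered < min_questions:
        return False
    percentage = 100.0 * num_correct / num_answered
    return percentage > threshold_pct
```

For example, under these hypothetical defaults, 95 correct answers out of 100 questions would satisfy the check, while 95 out of 100 would not if the minimum question count were raised to 200.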
These limitations do not include an improvement to another technology or technical field, an improvement to the functioning of the computer itself, or meaningful limitations beyond generally linking the use of the abstract idea to a particular technological environment. See MPEP 2106.05(d). Thus, nothing in the claims adds significantly more to the abstract idea, and the claims are ineligible. The dependent claims do not integrate the abstract idea into a practical application and recite no new additional elements. As such, the additional elements, individually or in combination, do not integrate the exception into a practical application; rather, the recitation of any additional element amounts to merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). The dependent claims also do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of a computing system is merely being used to apply the abstract idea in a technological environment. That is, the claims provide no practical limits on, or improvements to, any technology. Accordingly, the dependent claims are also ineligible.
Therefore, since there are no limitations in the claims that transform the exception into a patent-eligible application such that the claims amount to significantly more than the exception itself, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. See Alice Corp. Pty. Ltd. v. CLS Bank International, 573 U.S. 208 (2014).
Examiner's Note: Subject matter free of the prior art
Regarding Claims 1, 8 and 15, Lopes discloses a computer-implemented method/system/medium comprising:
Lopes (US 11,552,909 B2) discloses self-training, by a humanoid of a customer support system, for a customer support campaign (Col 2, lines 42-52: "The training set may be used to train the chatbot whereas the testing set may be used to test the chatbot. The specialist may be a subject matter expert that may have extensive knowledge in a particular domain of a dataset. Domains may include, but are not limited to, retail, social media content, business, technology, medical, academic, government, industrial, food chain, legal or automotive."), the self-training comprising the humanoid observing communications between a human operator and at least one customer regarding at least one customer support case in the customer support campaign (Col 3, lines 55-60: "Active learning may be used to interact with a user, such as the subject matter expert, to provide new data labels or label new datapoints. Training and updating a ML model may include supervised, unsupervised, and semi-supervised ML procedures. Supervised learning may use a labeled dataset or a labeled training set to build, train and update a model."; Col 5, lines 6-12: "The data in the client database 202 may be unlabeled. The data in the client database 202 may be structured and unstructured data. The data may include initial data that may be gathered from previous interactions between the client and the user. For example, initial data may include frequently asked questions, client's closed tickets, or previous integrations recorded (i.e. logs from previous conversations with the user). The initial data may be used as input for creating the chatbot 210."), the humanoid comprising a computer-executed process that mimics human dialog (Col 2, lines 35-40: "A chatbot may be a software application designed to conduct conversations with a user in lieu of providing direct contact with a live human agent. The user may type or ask a question and the chatbot may attempt to interpret the question, and then provide an answer.");
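The train/test division that Lopes describes (Col 2, lines 42-52) can be illustrated with a minimal sketch; the function name, split fraction, and data representation are assumptions for illustration only, not anything taken from the reference.

```python
import random

def split_dataset(conversations, train_fraction=0.8, seed=0):
    """Shuffle past support conversations and divide them into a training set
    (used to train the chatbot) and a testing set (used to test the chatbot).
    All names and defaults here are hypothetical."""
    rng = random.Random(seed)
    shuffled = list(conversations)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]
```

Under this hypothetical 80/20 split, a set of ten logged conversations would yield eight training items and two testing items, with no item lost or duplicated.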
Lopes discloses determining, by the humanoid, that the humanoid is adequately trained for the customer support campaign based on the self-training (Col 6, lines 24-32: "the validation script 212 may be used to determine the quality of the clusterization and labeling. The validation script 212 may be an automated statistical technique that may be developed in Python language or any other computer language that is applicable. The validation script 212 may use a k-fold cross validation technique to infer the chatbot's accuracy (confidence level) in answering questions asked by the user."; Col 6, lines 58-67: "The accuracy of the chatbot 210 may be generated by the following statistical formula calculated over the confusion matrix"; Col 8, lines 50-51: "The validation script 212 may infer the chatbot's accuracy in answering questions asked by the user."; Col 9, lines 47-57: "When the chatbot 210 is confident about its accuracy [adequately trained], the chatbot 210 may present a confidence value of around 100% for one of its clusters, and a value closer to 0% for the remaining 9 clusters. On the other hand, when the chatbot 210 is not confident about its accuracy, the chatbot 210 may present similar confidence values for at least two of its clusters."; Fig. 3, #316, "is the chatbot accurate?"; Col 9, lines 34-42: "Cluster accuracy may be calculated over the confusion matrix and may be defined as the total number of utterances predicted correctly by the chatbot 210 [threshold number of questions answered correctly for the cluster (customer support campaign)] divided by the total number of utterances (N)"; Col 10, lines 14-20: "determine, at operation 316, whether the chatbot 210 is accurate. For example, the chatbot 210 may be accurate if the report states that the chatbot's precision is 100% (confidence level). In another example, the chatbot 210 may be accurate if the report states that the chatbot's precision is 90%. If the chatbot 210 is accurate and no adjustments need to be made, then at operation 318 the chatbot 210 is ready for use").
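The cluster-accuracy computation Lopes describes (utterances predicted correctly, divided by the total number of utterances N, calculated over the confusion matrix, Col 9, lines 34-42) reduces to the following sketch. Representing the confusion matrix as nested lists, and the function name itself, are assumptions for illustration.

```python
def cluster_accuracy(confusion_matrix):
    """Accuracy over a confusion matrix: the diagonal holds utterances the
    chatbot predicted correctly; dividing by the total utterance count N
    yields the accuracy (confidence level) used in the adequacy check."""
    correct = sum(confusion_matrix[i][i] for i in range(len(confusion_matrix)))
    total = sum(sum(row) for row in confusion_matrix)
    return correct / total
```

For example, a two-cluster matrix [[8, 2], [1, 9]] gives an accuracy of 17/20 = 0.85.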
Lopes discloses provisioning, by the humanoid, the humanoid to handle at least one future customer support case in the customer support campaign in response to the humanoid determining that it is adequately trained for the customer support campaign (Col 10, lines 12-22: "the chatbot 210 may be accurate if the report states that the chatbot's precision is 90%. If the chatbot 210 is accurate and no adjustments need to be made, then at operation 318 the chatbot 210 is ready for use"; Fig. 3, #318, "chatbot is ready for use").
Lopes does not specifically teach receiving, by the humanoid and from a customer, a request for customer support associated with the customer support campaign, the request indicating a customer support issue; interpreting, by the humanoid, information in the request for customer support; and providing, by the humanoid, support for the customer support issue based on interpreting the information and based on the self-training; wherein providing the support for the customer support issue includes coordinating with a communications module associated with the humanoid, a plugin execution module associated with the humanoid, or a system external to the humanoid to execute actions to resolve the customer support issue.
Emery (US 11,138,388 B2) teaches receiving, by the humanoid and from a customer, a request for customer support associated with the customer support campaign, the request indicating a customer support issue (Col 5, lines 8-16: "the conversational bot routing engine 140 may receive a request from the web/app server 130, or directly from the user, for starting an online dialog with the user. The online dialog, also known as a chat session, may allow the user to receive answers to inquiries and receive information from a bot via the conversational bot routing engine 140. The bot may be selected by the user or a default bot recommended to the user based on historical data of the user, before the conversation begins."; Col 7, lines 30-40: "receive a user request for a conversation with a bot. The user request analyzer 510 can analyze the user request to determine a user query and/or other related information. In one embodiment, the other related information in the user request may indicate that the user query should be directed to a bot that is specified by the user with the user request. The user may specify the bot based on a domain of the bot. For example, the user may select a weather bot to have a conversation about weather (support issue), or select a sport bot to have a discussion about some sports."; Col 7, lines 60-64: "the user request analyzer 510 may forward a bot selection request extracted from the user request to the conversation bot recommender 540, for recommending a bot to the user, e.g. based on the user query."; Fig. 6, #602, "receive a user request"); interpreting, by the humanoid, information in the request for customer support (Fig. 6, #604, "analyze user request to determine a user query"; Col 5, lines 19-28: "when the conversational bot routing engine 140 determines that a reply provided by the user-specified bot is a valid answer to the query, the conversational bot routing engine 140 may recommend a new bot to the user and/or re-direct the user to the new bot to continue the conversation."); and providing, by the humanoid, support for the customer support issue based on interpreting the information and based on the self-training (Col 8, lines 30-38: "the bot reply analyzer 530 may provide query/reply pairs to the bot recommendation model trainer 550 for training a bot recommendation model. The bot recommendation model will be used for recommending a bot to a user based on a query. A query/reply pair may include a historical query and a historical reply corresponding to the historical query. The historical reply was provided by a conversational bot with an associated confidence score."; Col 9, lines 59-64: "At 618, the bot reply is provided to the user as a response. At 620, a query/reply pair of the query and the bot reply is provided for training a recommendation model, which can be used for future bot recommendation.").
JP202139525A discusses a humanoid artificially intelligent robot that is utilized to provide guidance and services with voice and images by learning based on collected data.
Marrelli (US 10,104,232 B2) discloses integrating a cognitive system into a call center. The system and method include: ingesting, through an instant messaging application, one or more original questions from one or more call center agents; ingesting, through the instant messaging application, one or more answers associated with the one or more original questions; receiving, through the instant messaging system, one or more additional questions; and determining one or more proposed answers to each additional question based on analysis of the one or more original questions and answers. The system utilizes a plug-in module to moderate the interactions between the cognitive system and the instant messaging application.
Brown (US 2019/0042988 A1) discusses that the AI agent system may provide adaptive features (e.g., adjusts behavior over time to improve how the AI agent system reacts/responds to users) and stateful features (e.g., past conversations with users are remembered and are part of the context of the AI agent system when interacting with the user) ([0044]). The AI agent system 10 may, without limitation, provide the following functionalities: obtain answers to questions from client system 12 about a business (such as metrics about the business, knowledge of how and where the business conducts business, information about products and services of a business, information about the market or industry of the business, information about how a business is organized, and the like), engage in conversation with users via client system 12, provide assistance with workflows, listen to requests from client system 12, take actions based on requests, and initiate communication with employees of an enterprise ([0053]).
Kondadadi (US 2020/0142997) discusses a hybrid QA application that calculates a confidence score for generating an answer to the input question using a retrieval QA application. The system calculates a confidence score for the retrieval QA application 212 based on the input question using the machine learning model 2114 for the retrieval QA application 212. The confidence score indicates the likelihood that the retrieval QA application 212 can successfully generate an accurate answer to the input question; in other words, the confidence score indicates the likelihood that an answer generated by the retrieval QA application 212 is the correct answer to the input question ([0109]).
JP2019-3267 discloses an AI service system in which the accuracy evaluation processing unit 23 executes the accuracy evaluation processing of each AI service system. That is, inquiry data in the test data stored in the test data storage unit 22 is input to each AI service system 3, and answer data for the inquiry data is received from each AI service system 3. Then, the correct answer data corresponding to the inquiry data stored in the test data storage unit 22 is compared with the answer data received from the AI service system 3. If they match or are included, the answer is determined to be correct. If they do not match or are not included, the answer is determined to be incorrect. Based on the number of inquiry data input to the AI service system 3 and the number of correct answers: correct answer rate = (number of correct answers ÷ number of inquiry data input to the AI service system) × 100.
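The correct-answer-rate computation of JP2019-3267 (correct answers divided by the number of inquiries submitted, times 100) is a simple percentage; the sketch below assumes the comparison step has already classified each answer as correct or incorrect, and its names are illustrative only.

```python
def correct_answer_rate(num_correct, num_inquiries):
    """Correct answer rate = (number of correct answers /
    number of inquiry data input to the AI service system) x 100."""
    return (num_correct / num_inquiries) * 100
```

For example, 45 correct answers out of 50 inquiries yields a correct answer rate of 90.0.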
NPL Nordheim, "An initial model of trust in chatbots for customer service," discusses in Table 1 various factors for chatbot expertise. Table 4 shows the frequency of correct answers provided by the chatbot.
However, the prior art fails to disclose or sufficiently suggest the combination of features as claimed. Although the above references teach similar aspects of the independent claims, none of these references, individually or in reasonable combination, discloses all the limitations as claimed as a whole. None of the prior art teaches or renders obvious the combination of features including at least "wherein providing the support for the customer support issue includes coordinating with a communications module associated with the humanoid, a plugin execution module associated with the humanoid, or a system external to the humanoid to execute actions to resolve the customer support issue." Further, the dependent claims are not taught by the prior art due to their dependence from the independent claims.
Response to Arguments
Applicant's arguments filed 12/31/25 have been fully considered but they are not persuasive.
Regarding 101 rejection, new limitations have been considered in rejection above. Applicant on page 13-14 states that new limitations provide improve the technical field of automated customer service. Examiner has considered all arguments and respectfully disagrees. The additional elements are recited at a high-level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Simply implementing the abstract idea on generic components is not a practical application of the abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. a) The additional elements merely add the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). The claims are directed to an abstract idea. Further, the limitation of “self-train a humanoid based on communications between a human and customer” is simply application of a computer model, itself an abstract idea. Furthermore, such training and applying of a model is no more than putting data into a black box machine learning operation, devoid of technological implementation and application details. Each step requires a generic computer to perform generic computer functions. The specification/claims do not recite the alleged improvements and provide with no further detail on how the claim set achieves such an improvement. MPEP 2106.05(a) recites “If it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. 
That is, the disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement.” After the examiner has consulted the specification and determined that the disclosed invention improves technology, the claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology. See Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316, 120 USPQ2d 1353, 1359 (Fed. Cir. 2016) (patent owner argued that the claimed email filtering system improved technology by shrinking the protection gap and mooting the volume problem, but the court disagreed because the claims themselves did not have any limitations that addressed these issues). That is, the claim must include the components or steps of the invention that provide the improvement described in the specification. Examiner notes that neither the specification nor the claims recite how the improvement to the technical field is achieved. The instant claims are directed to an abstract idea and do not integrate the abstract idea into a practical application. The additional elements recited in the instant claims are only generic computing components that implement the abstract idea in a computing environment.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Pace (US 2020/0351405) discloses calculating the channel RIQ score by comparing: the performance data related to the first bot performing the first engagement scenario; and the target bot capabilities for performing the first engagement scenario via the first communication channel.
Marrelli (US 10,104,232) discloses determining one or more proposed answers to each additional question based on analysis of the one or more original questions and answers; determining a confidence score for each of the one or more proposed answers; and, if the confidence score of the proposed answer exceeds a confidence threshold, providing the proposed answer to the call center agent.
Konig (US 11,367,080) discloses personalizing a delivery of services to a first customer including: providing a customer profile; updating the customer profile via performing a first process to collect interaction data, the first process including the steps of: monitoring activity on the communication device and, therefrom, detecting the first interaction with the first contact center; identifying data relating to the first interaction for collecting as the interaction data; and updating the customer profile to include the interaction data identified from the first interaction;
Sampat (US 2020/0327196 A1) discloses receiving a request to generate a chatbot; determining a chatbot template for the chatbot based on the request; obtaining custom chatbot information according to the chatbot template; generating a chatbot corpus for the chatbot using the custom chatbot information and the chatbot template; and generating a set of question and answer (QnA) pairs based on the chatbot corpus.
Polleri teaches assessing, by a training module, at least one confidence level of a machine learning model of the plurality of machine learning models associated with the customer support campaign (Col 46, lines 64-67 and Col 47, lines 2-17: A model execution system 1018 may access the trained machine-learning models 1015, provide and format input data to the trained models 1015 (e.g., code integration request data) and determine the predicted outcomes based on the execution of the models. The outputs of the trained models 1015 may be provided to client devices 1050 or other output systems via the API 1012 and/or user interface components 1014. Further, the outputs of the trained models 1015 may include not only a prediction of the outcome of the code integration request (e.g., approved or denied) but also various related data such as a confidence value associated with the prediction (machine learning model); Col 60, lines 18-21: The confidence score can reflect how likely this type of machine learning model will perform beyond a confidence for a particular result being correct.)
Kannan teaches identifying, by the humanoid, a best answer for a question from the at least one customer in the communications ([0023] chatbot or virtual agent; Fig. 4B, #410, #416, best answer for user question with confidence of 0.95; [0079] The three intents 412, 414 and 416 are depicted to be associated with confidence scores of ‘0.9’, ‘0.82’ and ‘0.95’, respectively. The VA may be configured to compare the confidence scores of the intents 412-416 with a predefined threshold score of ‘0.8’. As more than one intent from among the predicted intents 412-416 is associated with a confidence score that is greater than the predefined threshold score of ‘0.8’; [0083] the VAs may also intervene in ongoing human agent interactions with customers. For example, in scenarios where a human agent may take time to respond to a customer query as it involves fetching of an appropriate answer from a database and where an answer is readily available with the VA, the VA may intervene in the conversation, thereby enabling reduction in the AHT of the human agent.); and causing the best answer to be displayed to the human operator for potential provision by the human operator to the customer (Fig. 5, #504 human agent, #512 (best answer) account detail inquiry intent; [0085] receive each customer input in an ongoing manner and predict one or more intents corresponding to the customer input. The processor 202 may also be configured to compute a confidence score for each predicted intent. Accordingly, an intent 512 displaying text ‘# ACCOUNT DETAILS INQUIRY’ is depicted to be predicted for the customer input 510. Further, the predicted intent 512 is depicted to be associated with a confidence score of ‘0.98’; [0086], [0087]).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANGEETA BAHL whose telephone number is (571) 270-7779. The examiner can normally be reached 7:30 AM - 4:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jessica Lemieux can be reached at 571-270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SANGEETA BAHL/Primary Examiner, Art Unit 3629