DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This is a first action on the merits in response to the application filed 26 September 2024. Claims 1-20 are pending and have been examined.
Priority
Applicant’s claim for the benefit of prior-filed application 17/370844 is acknowledged. A terminal disclaimer has been received and accepted over application 17/370844 (U.S. Patent No. 12,131,334), from which the instant application is a continuation.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 08 April 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to customer support relationship management and determining a topic script for interacting with a customer, an abstract idea, without significantly more. Independent claim 1 recites a non-transitory computer readable medium (product) for customer care management, independent claim 11 recites a system for customer care management, and independent claim 20 recites a process for customer care management. Independent claims 1, 11, and 20 recite substantially similar limitations.
Taking independent claim 1 as representative, claim 1 recites at least the following limitations:
accessing, during a training data input phase of a model training algorithm, prior support interaction data that is associated with prior interactions between prior customers and prior customer service representatives;
accessing, during the training data input phase of the model training algorithm, first datasets of the prior support interaction data;
selecting, during a model generation phase of the model training algorithm, a machine learning algorithm;
training, during the model generation phase of the model training algorithm, a model using the machine learning algorithm and the first datasets of the prior support interaction data;
determining, during the model generation phase of the model training algorithm, that a first training error measurement of the model exceeds a training error threshold;
based on determining that the first training error measurement of the model exceeds the training error threshold, selecting, during the model generation phase of the model training algorithm, a different machine learning algorithm;
retraining, during the model generation phase of the model training algorithm, the model using the different machine learning algorithm and the first datasets of the prior support interaction data;
determining, during the model generation phase of the model training algorithm, that a second training error measurement of the retrained model is below the training error threshold;
classifying a customer who is contacting customer care via a support session regarding a problem into a customer category of multiple customer categories based at least on customer account information of the customer;
based on determining that the second training error measurement of the retrained model is below the training error threshold, identifying a customer care topic in a predetermined set of multiple customer care topics that correspond to the problem using the retrained model;
retrieving or generating a topic script that corresponds to the customer category of the customer for the customer care topic in the predetermined set of customer care topics in which the topic script includes one or more topic issues related to the customer care topics; and
providing the topic script for presentation to a customer service representative (CSR) to prompt the CSR to discuss the one or more topic issues related to the customer care topic with the customer.
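For illustration only, and not as a characterization of Applicant's actual implementation, the model generation phase recited above (train a model, compare the training error measurement to a threshold, select a different machine learning algorithm, and retrain) can be sketched as follows; the trainer, error values, and algorithm names are entirely hypothetical:

```python
# Hypothetical sketch of the recited model generation phase. The stand-in
# trainer and its error values are illustrative only.

TRAINING_ERROR_THRESHOLD = 0.10  # the recited training error threshold


def train(algorithm, datasets):
    """Stand-in trainer: returns a (model, training_error) pair.

    For illustration, the first algorithm is assumed to produce a training
    error above the threshold and the second one an error below it.
    """
    model = {"algorithm": algorithm, "data": datasets}
    error = 0.25 if algorithm == "decision_tree" else 0.05
    return model, error


def model_generation_phase(datasets,
                           algorithms=("decision_tree", "random_forest")):
    candidates = iter(algorithms)
    algorithm = next(candidates)
    model, error = train(algorithm, datasets)       # initial training
    while error >= TRAINING_ERROR_THRESHOLD:        # error not below threshold
        algorithm = next(candidates)                # select a different algorithm
        model, error = train(algorithm, datasets)   # retrain with it
    return model, error


# "first datasets of the prior support interaction data" (illustrative)
prior_interaction_data = ["call transcript 1", "call transcript 2"]
model, error = model_generation_phase(prior_interaction_data)
print(model["algorithm"], error)  # random_forest 0.05
```

In this sketch the retrained model would then be used downstream to identify a customer care topic, which the claim recites only at the level of inputting data and outputting a result.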
Under Step 1 of the eligibility analysis, independent claims 1, 11, and 20 recite at least one step or act, including accessing prior support interaction data. Thus, the claims fall within one of the statutory categories of invention.
Under Step 2A Prong One, the limitations of claim 1 for accessing prior support interaction data, accessing first datasets of the prior support interaction data, selecting a machine learning algorithm, training a model using the machine learning algorithm and the first datasets, determining that a first training error measurement of the model exceeds a training error threshold, selecting a different machine learning algorithm, retraining the model using the different machine learning algorithm and the first datasets, determining that a second training error measurement of the retrained model is below the training error threshold, classifying a customer who is contacting customer care, identifying a customer care topic in a predetermined set of multiple customer care topics that corresponds to the problem using the retrained model, retrieving or generating a topic script that corresponds to the customer category in the predetermined set of customer care topics, and providing the topic script for presentation to a customer service representative, as drafted, illustrate a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind (evaluations, judgments, observations), because none of the additional elements precludes the steps from practically being performed in the mind or by a human using pen and paper. For example, a senior customer service representative could monitor a trainee representative's interactions with customers and, using pen and paper, classify the customer, identify a customer care topic, retrieve or generate a topic script, provide the topic script, evaluate the customer care session, provide updated topic scripts, and coach the trainee to improve performance and efficiency in meeting customer care needs and solving the business problem.
Therefore, the limitations fall into the mental processes grouping, and accordingly the claims recite an abstract idea. In addition, because “providing the topic script for presentation to a customer service representative (CSR) to prompt the CSR to discuss the one or more topic issues related to the customer care topic with the customer” is a form of managing personal behavior and relationships or interactions between people (representative to customer), the claims also fall within the abstract-idea grouping of certain methods of organizing human activity.
Additionally, the steps of accessing prior support interaction data, accessing the first datasets, retrieving a topic script, and providing the topic script for presentation constitute insignificant extra-solution activity, because mere data gathering and transmitting information for display are insignificant extra-solution activity. See MPEP 2106.05(g).
Under Step 2A Prong Two, the judicial exception of claim 1 is not integrated into a practical application. In particular, the claims only recite a processor and storage device for performing the recited steps. These elements are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) and amount to no more than mere instructions to apply the exception using generic computer components. See MPEP 2106.05(f). For example, Applicant’s specification at paragraph [0029] states: “The processors 204 and the memory 206 of the computing nodes 106 may implement an operating system 210. In turn, the operating system 210 may provide an execution environment for the customer support platform 108. The operating system 210 may include components that enable the computing nodes 106 to receive and transmit data via various interfaces (e.g., user controls, communication interface, and/or memory input/output devices), as well as process data using the processors 204 to generate output.” Adding generic computer components to perform generic functions, such as data gathering, performing calculations, and outputting a result would not transform the claim into eligible subject matter. See MPEP 2106.05(d). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Claim 1 generally links the use of the abstract idea to machine learning as a particular technological environment or field of use. The recited model training algorithm and machine learning algorithm merely require execution of an algorithm that can be performed by a generic computer component, and the claim provides no detail regarding the operation of that algorithm. As such, the claim requirement amounts to mere instructions to implement the abstract idea on a computer and, therefore, is not sufficient to make the claim patent eligible. In other words, as applied to the claims herein, the model training algorithm and the machine learning algorithm are used as tools to process data by inputting and outputting data without significantly more, similar to applying the abstract idea using a general purpose computer to process the data and generate an output. The additional limitations of a machine learning algorithm and computer processing, as claimed herein, do not result in improved computer functionality or a technical or technological improvement and hence do not result in a practical application. The limitations, taken individually or as an ordered combination, do not offer an inventive concept that renders the claims eligible under 35 U.S.C. 101.
While dependent claims 2, 3, 12, and 13 incorporate language from paragraphs [0043]-[0044] of the specification to further define the machine learning algorithms presented in the claims, the recitation of known mathematical techniques for training a model places the limitations in the abstract-idea grouping of mathematical concepts. However, if the data output were used in a meaningful way beyond transmission of the data collection and analysis results, and the output leveraged the technology in a manner that solves a technical problem, then the claim limitations might be construed as patent eligible, depending on the claim language and disclosure.
Examiner directs Applicant to Example 47 of the updated Subject Matter Eligibility Guidance. In Example 47, claim 2 step (c) recites training an artificial neural network (ANN) using a selected algorithm. The training algorithm is a backpropagation algorithm and a gradient descent algorithm. When given their broadest reasonable interpretation in light of the background, the backpropagation algorithm and gradient descent algorithm are mathematical calculations. Step (d) recites detecting one or more anomalies in a data set using the trained ANN. The claim does not provide any details about how the trained ANN operates or how the detection is made, and the plain meaning of “detecting” encompasses mental observations or evaluations, e.g., a computer programmer’s mental identification of an anomaly in a data set. Step (e) recites analyzing the one or more detected anomalies using the trained artificial neural network to generate anomaly data. The step of analyzing includes both determining that an anomaly has been detected and may further include suggesting a type or cause of the anomaly. Regarding step (f), the step of outputting the anomaly data merely requires a generic output using the trained ANN. The claim does not impose any limits on how the data is output or require any particular components that are used to output the anomaly data. The recitation of “using a trained ANN” in limitations (d) and (e) also merely indicates a field of use or technological environment in which the judicial exception is performed. The recitations of “(a) receiving continuous training data” and “(g) outputting the anomaly data from the trained ANN” are recited at a high level of generality. These elements amount to receiving or transmitting data over a network and are well understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. The claims herein are analogous to ineligible claim 2 of Example 47.
Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a processor and storage device amount to no more than mere instructions to apply the exception using a generic computer component, which cannot provide an inventive concept. The claims merely recite the use of machine learning to identify a customer care topic.
Dependent claims 2-10 and 12-19 incorporate the abstract ideas of the independent claims. The limitations of the dependent claims merely narrow the mental processes and certain methods of organizing human activity by describing the types of mathematical algorithms used in the data processing steps, how the data is categorized and manipulated to generate or retrieve a script for the end user's task performance, and how completion of the customer support task is determined. The limitations of the dependent claims are not integrated into a practical application because none of the additional elements sets forth any limitation that meaningfully limits the implementation of the abstract idea; therefore, the claims are directed to an abstract idea. There are no additional elements that transform the claims into patent-eligible subject matter by amounting to significantly more. The analysis above applies to all statutory categories of invention. Therefore, claims 1-20 are ineligible under 35 U.S.C. 101.
Allowable Subject Matter
Claims 1-20 are rejected under 35 U.S.C. 101, but the claims would be allowable if the aforementioned rejection were overcome.
A prior art search was conducted. Examiner analyzed claims 1, 11, and 20 in view of the prior art applied to related application 17/370844, the references cited on the IDS, and the results of the search herein. Examiner asserts that none of the prior art, taken individually or in combination, teaches or suggests the claim limitations as presented in independent claims 1, 11, and 20. More specifically, the prior art does not teach or disclose the execution of the trained machine learning algorithm as specifically claimed. While Sivasubramanian et al. (US 2021/0157834), Klein et al. (US 2018/0007102), and London (US 2020/0257996), combined, disclose classifying a customer of a support session regarding a problem, executing a trained machine learning algorithm to identify a customer care topic, retrieving and providing a script that corresponds to the customer care topic to a customer service representative, monitoring multiple support sessions for compliance using an effectiveness rating, and updating scripts in response thereto, the references, taken individually or in any combination, fail to teach the specific ordered sequence of limitations presented in independent claims 1, 11, and 20. Moreover, since the specific ordered combination of claim elements recited in claims 1, 11, and 20 can only be found as recited in Applicant’s specification, any combination of the cited references, or of the cited references with additional references, to teach all the claim elements, including the features discussed above, would be the result of impermissible hindsight reconstruction. Accordingly, any combination of Sivasubramanian et al., Klein et al., and London with any additional references would be improper to teach the claimed invention. Accordingly, no prior art rejection is set forth in this action.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Monnett et al. (US 2020/0258013) - System trains machine learning model to determine content data, metadata, and context data for support ticket communications, in response to receiving support ticket communications. Machine learning model receives communication associated with support ticket, and determines content data, metadata, and context data for communication. System converts content data, metadata, and context data for communication into first impulse for first channel and second impulse for second channel. System determines first channel value based on first type of conversion of first impulse and any impulses for first channel that are converted from data that is determined for support ticket event. System determines second channel value based on second type of conversion of second impulse and any impulses for second channel that are converted from data that is determined for support ticket event.
Tapia et al. (US 2017/0019315) - The analytic application may receive an indication of an issue affecting one or more user devices that are using the wireless carrier network. The analytic application may analyze the performance data using a trained machine learning model to determine a root cause for the issue affecting the one or more user devices. The trained machine learning model may employ multiple types of machine learning algorithms to analyze the performance data. The analytic application may provide the root cause, or the solution that resolves the root cause, for presentation. Following the application of a selected machine learning algorithm to the training corpus, the model training module 220 may determine a training error measurement of the machine learning model. The training error measurement may indicate the accuracy of the machine learning model in generating a solution. Accordingly, if the training error measurement exceeds a training error threshold, the model training module 220 may use a rules engine 224 to select an additional type of machine learning algorithm based on a magnitude of the training error measurement. The training error threshold may be a stabilized error value that is greater than zero. In various embodiments, the rules engine 224 may contain algorithm selection rules that match specific ranges of training error measurement values to specific types of machine learning algorithms. The different types of machine learning algorithms may include a Bayesian algorithm, a decision tree algorithm, a SVM algorithm, an ensemble of trees algorithm (e.g., random forests and gradient-boosted trees), an isotonic regression algorithm, and/or so forth.
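For illustration only, the rules-engine behavior Tapia describes (matching specific ranges of training error measurement values to specific types of machine learning algorithms) can be sketched as follows; the error ranges, threshold value, and rule table are hypothetical and not drawn from the reference:

```python
# Hypothetical sketch of Tapia's rules engine: match ranges of training
# error measurement values to the type of algorithm to select next.
# All numeric ranges below are illustrative assumptions.

ALGORITHM_SELECTION_RULES = [
    # (lower bound, upper bound, algorithm type to select next)
    (0.30, 1.00, "ensemble_of_trees"),
    (0.15, 0.30, "svm"),
    (0.10, 0.15, "bayesian"),
]
TRAINING_ERROR_THRESHOLD = 0.10  # a stabilized error value greater than zero


def select_next_algorithm(training_error):
    """Return the algorithm type matched to the error magnitude, or None
    if the error is already below the training error threshold."""
    if training_error < TRAINING_ERROR_THRESHOLD:
        return None  # model is acceptable; no further selection needed
    for low, high, algorithm in ALGORITHM_SELECTION_RULES:
        if low <= training_error < high:
            return algorithm
    return ALGORITHM_SELECTION_RULES[0][2]  # default for out-of-range errors


print(select_next_algorithm(0.22))  # svm
print(select_next_algorithm(0.05))  # None
```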
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LETORIA G KNIGHT whose telephone number is (571)270-0485. The examiner can normally be reached M-F 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao WU can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/L.G.K/Examiner, Art Unit 3623 /RUTAO WU/Supervisory Patent Examiner, Art Unit 3623