Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the amendment filed on August 15, 2025.
Claims 1, 7, 11, and 17 have been amended.
No new claims have been added.
The objections and rejections from the prior correspondence that are not restated herein are withdrawn.
Response to Arguments
Applicant's arguments filed on August 15, 2025 with regard to the § 101 rejection have been fully considered but are not persuasive. Applicant argues that the “amended independent claims recite improvements to a technical field and improvements to the functioning of a computer by way of a ‘specifically made model’,” and that the claimed invention improves “inefficiencies of conventional systems by reducing the overall burden on implementing devices”.
The Examiner respectfully disagrees. The amended independent claims now recite a new generating limitation (“generating machine learning encodings of the client features from the value metric of the digital account and the previous client device interactions”) as well as a second generating limitation, which now recites “generating, utilizing a machine learning model to analyze the machine learning encodings of the extracted client features, a predicted client disposition classification and a disposition classification probability from the machine learning encodings of the client features.” These limitations recite a judicial exception and are interpreted as mental processes, which can be performed in the human mind with a pen and paper. MPEP 2106.05(a) states that the judicial exception alone cannot provide the improvement. Furthermore, the limitation “in response to a client contacting an automated client interaction system, utilizing client credentials to extract client features corresponding to the client of the automated client interaction system by accessing, via a digital database, a value metric of a digital account and previous client device interactions” is an additional element that amounts to mere instructions to apply an exception – see MPEP 2106.05(f). The additional elements recited in the claim do not integrate the abstract idea into a practical application, nor are they sufficient to amount to significantly more than the judicial exception. Therefore, the § 101 rejection of the claims is maintained.
Applicant’s arguments with regard to the § 103 rejections have been fully considered but are moot because the arguments allege only that the newly added limitations are not taught by the prior art of record. It should be noted that a new prior art reference to Peng, in combination with the previously cited prior art, teaches the newly added limitations as shown in the rejections below.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1-10 recite a method. Claims 11-16 recite a non-transitory computer readable medium. Claims 17-20 recite a system. Therefore, claims 1-10 are directed to a process, claims 11-16 are directed to an article of manufacture, and claims 17-20 are directed to a machine.
With respect to claims 1, 11, and 17
2A Prong 1: the claim(s) recites a judicial exception.
- generating machine learning encodings of the client features from the value metric of the digital account and the previous client device interactions (Mental Process – can be performed in the human mind with a pen and paper.)
- generating […] to analyze the machine learning encodings of the extracted client features, a predicted client disposition classification and a disposition classification probability from the machine learning encodings of the client features (Mental Process – can be performed in the human mind with a pen and paper.)
- generating an automated interaction response utilizing the predicted client disposition classification, the disposition classification probability, and a disposition classification threshold (Mental Process – can be performed in the human mind with a pen and paper.)
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional Elements:
- in response to a client contacting an automated client interaction system, utilizing client credentials to extract client features corresponding to the client of the automated client interaction system by accessing, via a digital database, a value metric of a digital account and previous client device interactions; (mere instructions to apply an exception – see MPEP 2106.05(f))
- utilizing a machine learning model, (mere instructions to apply an exception – see MPEP 2106.05(f))
-providing the automated interaction response to the client via the automated client interaction system. (Providing response - Insignificant Extra Solution Activity- See MPEP 2106.05(g))
(Claim 11) A non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause a computer system to: extract client features corresponding to a client of an automated client interaction system; (mere instructions to apply an exception – see MPEP 2106.05(f))
(Claim 17) A system comprising: at least one processor; and at least one non-transitory computer-readable storage medium storing instructions that, when executed by the at least one processor, cause the system to (mere instructions to apply an exception –see MPEP 2106.05(f))
2B: The claim(s) does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional Elements:
-in response to a client contacting an automated client interaction system, utilizing client credentials to extract client features corresponding to the client of the automated client interaction system by accessing, via a digital database, a value metric of a digital account and previous client device interactions; (mere instructions to apply an exception – see MPEP 2106.05(f))
- utilizing a machine learning model, (mere instructions to apply an exception – see MPEP 2106.05(f))
-providing the automated interaction response to the client via the automated client interaction system. (Insignificant Extra Solution Activity – see MPEP 2106.05(g); well-understood, routine, and conventional (WURC) activity – see MPEP 2106.05(d)(II)(i): receiving or transmitting data over a network)
(Claim 11) A non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause a computer system to: extract client features corresponding to a client of an automated client interaction system; (mere instructions to apply an exception – see MPEP 2106.05(f))
(Claim 17) A system comprising: at least one processor; and at least one non-transitory computer-readable storage medium storing instructions that, when executed by the at least one processor, cause the system to (mere instructions to apply an exception –see MPEP 2106.05(f))
Therefore, the claim(s) are ineligible.
With respect to claims 2, 12, and 18
2A Prong 1: the claim(s) recites a judicial exception.
-identifying a client query (Mental Process – identifying query can be done using pen and paper.)
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional Elements:
-via the automated client interaction system; (mere instructions to apply an exception –see MPEP 2106.05(f))
-in response to identifying the client query, providing the automated interaction response, wherein the automated interaction response comprises an indicator of the predicted client disposition classification. (mere instructions to apply an exception –see MPEP 2106.05(f))
2B: The claim(s) does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional Elements:
-via the automated client interaction system; (mere instructions to apply an exception –see MPEP 2106.05(f))
-in response to identifying the client query, providing the automated interaction response, wherein the automated interaction response comprises an indicator of the predicted client disposition classification. (mere instructions to apply an exception –see MPEP 2106.05(f))
Therefore, the claim(s) are ineligible.
With respect to claims 3, 13, and 19
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional Elements:
-in response to a user interaction with the automated interaction response, initiating a client-agent response session between the client and an agent device; (mere instructions to apply an exception –see MPEP 2106.05(f))
- providing the predicted client disposition classification for display via the agent device. (Insignificant Extra Solution Activity- See MPEP 2106.05(g))
2B: The claim(s) does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional Elements:
-in response to a user interaction with the automated interaction response, initiating a client-agent response session between the client and an agent device; (mere instructions to apply an exception –see MPEP 2106.05(f))
- providing the predicted client disposition classification for display via the agent device. (Insignificant Extra Solution Activity – see MPEP 2106.05(g); well-understood, routine, and conventional (WURC) activity – see MPEP 2106.05(d)(II)(i): receiving or transmitting data over a network)
Therefore, the claim(s) are ineligible.
With respect to claims 4, 14, and 20
2A Prong 1: the claim(s) recites a judicial exception.
- to determine a ground truth client disposition; (Mental Process – determining a ground truth disposition can be done using pen and paper)
- comparing the predicted client disposition classification and the ground truth client disposition (Mental Process – comparing the prediction with the ground truth can be done using pen and paper)
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional Elements:
-monitoring client interaction with the automated client interaction system; (mere instructions to apply an exception –see MPEP 2106.05(f))
- training the machine learning model (mere instructions to apply an exception –see MPEP 2106.05(f))
2B: The claim(s) does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional Elements:
-monitoring client interaction with the automated client interaction system; (mere instructions to apply an exception –see MPEP 2106.05(f))
- training the machine learning model (mere instructions to apply an exception –see MPEP 2106.05(f))
Therefore, the claim(s) are ineligible.
With respect to claims 5 and 15
2A Prong 1: the claim(s) recites a judicial exception.
- generating the predicted client disposition classification and the disposition classification probability (Mental Process and Mathematical Concept – determining classification and probability can be done using pen and paper)
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional Elements:
- utilizing one or more of a random forest model or gradient boosted decision tree model (mere instructions to apply an exception –see MPEP 2106.05(f))
2B: The claim(s) does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional Elements:
- utilizing one or more of a random forest model or gradient boosted decision tree model (mere instructions to apply an exception –see MPEP 2106.05(f))
Therefore, the claim(s) are ineligible.
With respect to claim 6
2A Prong 1: the claim(s) recites a judicial exception.
- determining a previous disposition from a previous interaction by the client with the automated client interaction system; (Mental Process – determining classification can be done using pen and paper)
- generating the predicted client disposition classification and the disposition classification probability from the previous disposition utilizing the machine learning model. (Mental Process and Mathematical Concept – determining classification and probability can be done using pen and paper)
Therefore, the claim(s) are ineligible.
With respect to claim 7
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional Elements:
- extracting client features comprises extracting a digital account duration, a direct deposit status of a digital account, and application device activity on the digital account (mere instructions to apply an exception – see MPEP 2106.05(f))
2B: The claim(s) does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional Elements:
- extracting client features comprises extracting a digital account duration, a direct deposit status of a digital account, and application device activity on the digital account (mere instructions to apply an exception – see MPEP 2106.05(f))
Therefore, the claim(s) are ineligible.
With respect to claim 8
2A Prong 1: the claim(s) recites a judicial exception.
-determining that the disposition classification probability satisfies the disposition classification threshold (Mental Process– comparing probabilities with threshold can be done using pen and paper)
-generating the automated interaction response comprising an indicator of the predicted client disposition classification. (Mental Process– indicating classification, can be done using pen and paper)
Therefore, the claim(s) are ineligible.
With respect to claim 9
2A Prong 1: the claim(s) recites a judicial exception.
withholding an additional automated interactive response corresponding to the additional predicted client disposition classification based on comparing the additional disposition classification probability and a disposition classification threshold. (Mental Process– comparing probabilities with threshold can be done using pen and paper)
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional Elements:
-generating, utilizing the machine learning model, an additional predicted client disposition classification and an additional disposition classification probability from additional client features; (mere instructions to apply an exception –see MPEP 2106.05(f))
2B: The claim(s) does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional Elements:
-generating, utilizing the machine learning model, an additional predicted client disposition classification and an additional disposition classification probability from additional client features; (mere instructions to apply an exception –see MPEP 2106.05(f))
Therefore, the claim(s) are ineligible.
With respect to claim 10
2A Prong 1: the claim(s) recites a judicial exception.
2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination.
Additional Elements:
- providing an interactive voice response indicating the predicted client disposition classification or providing an automated text response indicating the predicted client disposition classification in a digital message thread. (Insignificant Extra Solution Activity- See MPEP 2106.05(g))
2B: The claim(s) does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional Elements:
- providing an interactive voice response indicating the predicted client disposition classification or providing an automated text response indicating the predicted client disposition classification in a digital message thread. (Insignificant Extra Solution Activity – see MPEP 2106.05(g); well-understood, routine, and conventional (WURC) activity – see MPEP 2106.05(d)(II)(i): receiving or transmitting data over a network)
Therefore, the claim(s) are ineligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 4-12, 14-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sivasankar (Pub. No.: US 20210365834 A1), hereafter Sivasankar, in view of Peng (Pub. No.: US 20190147356 A1), hereafter Peng.
Regarding claim 1, Sivasankar teaches:
in response to a client contacting an automated client interaction system, utilizing client credentials to extract client features corresponding to the client of the automated client interaction system by accessing, via a digital database, a value metric of a digital account and previous client device interactions; ([14] teaches a model training platform may initiate a model training session by importing an automated assistant transcript, which may include a set of records reflecting user interactions with an automated assistant, where each record may include the user's query, its classification by the automated assistant (e.g., by assigning a topic and a subtopic to each query), the user's intent inferred by the automated assistant from the query based on the assigned topic and subtopic, and the responsive action inferred by the automated assistant from the query based on the identified user's intent; [29] teaches performing one or more operations on one or more accounts associated with the user (e.g., executing one or more database queries to compute the account balance for one or more accounts associated with the user); see Fig. 5 #520A-N Database server.)
generating […] the client features from the value metric of the digital account and the previous client device interactions ([28] teaches determining a responsive action corresponding to the identified intent, where the responsive action may be identified by applying a set of rules to the identified intent and to certain query metadata items (such as the user identifier), where the identified intent may be “account balance inquiry,” and the parameters of the query may further identify one or more accounts to which the intended action should be applied; [29] teaches performing the identified responsive action, which may involve performing one or more operations on one or more accounts associated with the user (e.g., executing one or more database queries to compute the account balance for one or more accounts associated with the user); see [14] as taught above for importing an automated assistant transcript, which may include a set of records reflecting user interactions with an automated assistant, where each record may include the user's query, its classification by the automated assistant (e.g., by assigning a topic and a subtopic to each query), the user's intent inferred by the automated assistant from the query based on the assigned topic and subtopic, and the responsive action inferred by the automated assistant from the query based on the identified user's intent);
generating, utilizing a machine learning model to analyze the […] extracted client features ([14] teaches a model training platform may initiate a model training session by importing an automated assistant transcript), a predicted client disposition classification ([14] teaches a set of records reflecting user interactions with an automated assistant. Each record may include the user's query, its classification by the automated assistant (e.g., by assigning a topic and a subtopic to each query), the user's intent inferred by the automated assistant from the query based on the assigned topic and subtopic, and the responsive action inferred by the automated assistant from the query based on the identified user's intent.) and a disposition classification probability from the […] client features. ([16] teaches comparing the actual model output with the desired model output (e.g., the subtopic and the corresponding confidence score, the topic and the corresponding confidence score, or the intent and the corresponding confidence score))
generating an automated interaction response utilizing the predicted client disposition classification ([14] teaches a set of records reflecting user interactions with an automated assistant. Each record may include the user's query, its classification by the automated assistant (e.g., by assigning a topic and a subtopic to each query), the user's intent inferred by the automated assistant from the query based on the assigned topic and subtopic, and the responsive action inferred by the automated assistant from the query based on the identified user's intent.) the disposition classification probability ([16] teaches comparing the actual model output with the desired model output (e.g., the subtopic and the corresponding confidence score, the topic and the corresponding confidence score, or the intent and the corresponding confidence score)) and a disposition classification threshold ([16] teaches adjusting values of one or more model parameters responsive to determining that the difference of the actual and desired model output exceeds a specified threshold classification error.) ([13] teaches the automated assistant may then identify (e.g., by applying a set of rules to the identified intent and to certain query metadata items) and perform a responsive action corresponding to the identified intent. Upon performing the responsive action, the automated assistant may produce a response and return it to the requesting client)
providing the automated interaction response to the client via the automated client interaction system ([13] teaches the automated assistant may then identify (e.g., by applying a set of rules to the identified intent and to certain query metadata items) and perform a responsive action corresponding to the identified intent. Upon performing the responsive action, the automated assistant may produce a response and return it to the requesting client).
Sivasankar does not appear to explicitly teach machine learning encodings of the client features.
However, Peng teaches the limitation ([41] teaches the first representation of the user is encoded into a first feature representation that is representative of the user's behavior (see Fig. 2 206), and [54-55] teach a predicted user behavior model is generated by applying the deep recurrent neural network to input data, which includes the feature representations of a user generated or encoded by the unsupervised deep neural networks (see Fig. 2 210)).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Sivasankar and Peng before them, to include Peng’s feature learning in Sivasankar’s system that performs supervised learning of an automated assistant. One would have been motivated to make such a combination in order to avoid inaccurately predicting user behavior and also to account for temporal or time-dependent user behavior, as taught by Peng [3].
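For illustration only, the pipeline characterized in the rejection of claim 1 above (encoding client features, predicting a disposition classification with a probability, and gating the automated response on a threshold) can be sketched as follows. All function names, model logic, feature values, and the threshold are hypothetical stand-ins for exposition; they do not represent Applicant's claimed implementation or the cited references' systems.

```python
# Hypothetical sketch of the claimed pipeline: encode client features,
# predict a disposition classification with a probability, then gate the
# automated response on a classification threshold. Every name and value
# here is an illustrative assumption, not the actual implementation.

def encode_features(value_metric, previous_interactions):
    """Produce a simple numeric encoding of the client features."""
    return [value_metric, float(len(previous_interactions))]

def predict_disposition(encoding):
    """Toy stand-in for the machine learning model: returns a
    (classification, probability) pair from the encoded features."""
    score = 0.1 * encoding[0] + 0.2 * encoding[1]
    probability = min(max(score, 0.0), 1.0)  # clamp to a valid probability
    label = "likely_to_churn" if probability >= 0.5 else "likely_to_stay"
    return label, probability

def automated_response(label, probability, threshold=0.7):
    """Provide a response only when the probability satisfies the threshold."""
    if probability >= threshold:
        return f"Automated response for predicted disposition: {label}"
    return None  # response withheld

encoding = encode_features(value_metric=3.0, previous_interactions=["call", "chat"])
label, prob = predict_disposition(encoding)
response = automated_response(label, prob)
```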
Regarding claim 2, the claim recites limitations similar to those of claim 1 and is rejected for similar reasons using similar teachings and rationale. Sivasankar also teaches:
identifying a client query via the automated client interaction system; ([0013] teaches an automated assistant (e.g., a chat bot or an interactive voice response system) may analyze a received query in order to associate it with a topic and a subtopic and identify the user's intent.)
in response to identifying the client query, providing the automated interaction response, ([Fig 1] teaches receiving and processing query, identifying intent, and performing responsive action.)
wherein the automated interaction response comprises an indicator of the predicted client disposition classification. ([40] teaches an identifier of the intent that the automated assistant has associated with the query.)
Regarding claim 4, the claim recites limitations similar to those of claim 1 and is rejected for similar reasons using similar teachings and rationale. Sivasankar also teaches:
monitoring client interaction with the automated client interaction system to determine a ground truth client disposition; ([40] teaches adjusting values of one or more model parameters responsive to determining that the difference of the actual and desired model output exceeds a specified threshold classification error)
training the machine learning model by comparing the predicted client disposition classification and the ground truth client disposition ([0016] teaches supervised model training may involve running the model on a data sample from a training data set, comparing the actual model output with the desired model output (e.g., the subtopic and the corresponding confidence score, the topic and the corresponding confidence score, or the intent and the corresponding confidence score).)
Regarding claim 5, the claim recites limitations similar to those of claim 1 and is rejected for similar reasons using similar teachings and rationale. Sivasankar also teaches:
utilizing the machine learning model comprises generating the predicted client disposition classification ([14] teaches a set of records reflecting user interactions with an automated assistant. Each record may include the user's query, its classification by the automated assistant (e.g., by assigning a topic and a subtopic to each query), the user's intent inferred by the automated assistant from the query based on the assigned topic and subtopic, and the responsive action inferred by the automated assistant from the query based on the identified user's intent.)
and the disposition classification probability ([16] teaches comparing the actual model output with the desired model output (e.g., the subtopic and the corresponding confidence score, the topic and the corresponding confidence score, or the intent and the corresponding confidence score))
utilizing one or more of a random forest model or gradient boosted decision tree model ([0032] teaches the above-referenced models employed to identify the topic, subtopic, intent, and responsive action may utilize a variety of automatic classification methodologies, such as Bayesian classifiers, support vector machines (SVMs), random forest classifiers, gradient boosting classifiers, neural networks, etc. Supervised training of a model may involve adjusting, based on example input-output pairs, one or more parameters of a model that maps an input (e.g., a vector of feature values characterizing an object) to an output (e.g., a category of a predetermined set of categories).)
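For context on Sivasankar [0032]'s reference to random forest and gradient boosting classifiers that output a category together with a confidence score, the voting mechanism can be illustrated with a minimal toy ensemble. The decision stumps below are hand-written for exposition; an actual random forest or gradient boosted model would learn its trees from training data, and none of these names or splits come from the references.

```python
# Toy ensemble illustrating how a forest-style classifier yields both a
# predicted class and a confidence score (the fraction of trees voting
# for that class). The "trees" are hand-written stumps for illustration.

def stump_a(x):  # splits on the first feature
    return "refund" if x[0] > 0.5 else "balance"

def stump_b(x):  # splits on the second feature
    return "refund" if x[1] > 0.3 else "balance"

def stump_c(x):  # degenerate stump that always votes the same way
    return "balance"

def ensemble_predict(x, trees=(stump_a, stump_b, stump_c)):
    """Return (predicted_class, confidence) by majority vote."""
    votes = [tree(x) for tree in trees]
    predicted = max(set(votes), key=votes.count)
    confidence = votes.count(predicted) / len(votes)
    return predicted, confidence

pred, conf = ensemble_predict([0.9, 0.6])  # two of three stumps vote "refund"
```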
Regarding claim 6, the claim recites limitations similar to those of claim 1 and is rejected for similar reasons using similar teachings and rationale. Sivasankar also teaches:
determining a previous disposition from a previous interaction by the client with the automated client interaction system; generating the predicted client disposition classification and the disposition classification probability from the previous disposition utilizing the machine learning model. ([Fig. 3A] teaches storing previous user queries, probabilities, and responses in a table to refer to for future queries)
Regarding claim 7, the claim recites limitations similar to those of claim 1 and is rejected for similar reasons using similar teachings and rationale. Sivasankar also teaches:
Extracting client features comprises extracting a digital account duration, a direct deposit status of a digital account, or application device activity on the digital account. ([0028] teaches that, at block 150, the computer system may determine a responsive action corresponding to the identified intent. In certain implementations, the responsive action may be identified by applying a set of rules to the identified intent and to certain query metadata items (such as the user identifier). In an illustrative example, the identified intent may be “account balance inquiry,” and the parameters of the query may further identify one or more accounts to which the intended action should be applied.)
Regarding claim 8, the claim recites limitations similar to those of claim 1 and is rejected for similar reasons using similar teachings and rationale. Sivasankar also teaches:
determining that the disposition classification probability satisfies the disposition classification threshold and generating the automated interaction response comprising an indicator of the predicted client disposition classification. ([40] teaches comparing the actual model output with the desired model output (e.g., the subtopic and the corresponding confidence score, the topic and the corresponding confidence score, or the intent and the corresponding confidence score), and adjusting values of one or more model parameters responsive to determining that the difference of the actual and desired model output exceeds a specified threshold classification error)
Regarding claim 9, the claim recites limitations similar to those of claim 1 and is rejected for similar reasons using similar teachings and rationale. Sivasankar also teaches:
generating, utilizing the machine learning model, an additional predicted client disposition classification and an additional disposition classification probability from additional client features; withholding an additional automated interactive response corresponding to the additional predicted client disposition classification based on comparing the additional disposition classification probability and a disposition classification threshold. ([0033] teaches supervised model training may utilize one or more training data sets. Each training data set includes a plurality of data items, such that each data item specifies a set of classification feature values for an object (e.g., represented by a vector, each element of which represents the number of occurrences in the query of the word identified by the index of the element) and a corresponding classification of the object (e.g., represented by the confidence score associated with a topic of the query). Supervised model training may involve running the model on the data items from the training data set, comparing the actual model output with the desired model output (i.e., the category and the corresponding confidence score associated with the data item by the training data set), and adjusting values of one or more model parameters responsive to determining that the value of a predetermined quality metric exceeds a specified threshold value.) ([13] teaches the automated assistant may then identify (e.g., by applying a set of rules to the identified intent and to certain query metadata items) and perform a responsive action corresponding to the identified intent. Upon performing the responsive action, the automated assistant may produce a response and return it to the requesting client)
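For illustration only, the withholding behavior recited in claim 9 (providing a response only when the disposition classification probability satisfies the threshold, and withholding it otherwise) can be sketched as below. The classifications, probabilities, and threshold are illustrative assumptions, not Applicant's claimed implementation or the references' systems.

```python
# Hypothetical sketch of threshold-based withholding: an automated
# response is produced only when the disposition classification
# probability satisfies the threshold; otherwise it is withheld.
# All values here are illustrative assumptions.

def respond_or_withhold(classification, probability, threshold):
    """Return the response text, or None when the response is withheld."""
    if probability >= threshold:
        return f"Predicted disposition: {classification}"
    return None

first = respond_or_withhold("balance_inquiry", 0.92, threshold=0.75)
second = respond_or_withhold("card_dispute", 0.40, threshold=0.75)  # withheld
```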
Regarding claim 10, the claim recites limitations similar to those of claim 1 and is rejected for similar reasons using similar teachings and rationale. Sivasankar also teaches:
providing an interactive voice response indicating the predicted client disposition classification or providing an automated text response indicating the predicted client disposition classification in a digital message thread. ([0013] teaches an automated assistant (e.g., a chat bot or an interactive voice response system) may analyze a received query in order to associate it with a topic and a subtopic and identify the user's intent (where a chatbot produces an automated text response))
Regarding claim 11, the claim recites limitations similar to those of claim 1 and is rejected for similar reasons using similar teachings and rationale. Sivasankar also teaches:
A non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause a computer system. ([0069] The data storage device 618 may include a computer-readable storage medium 628 on which may be stored one or more sets of instructions (e.g., instructions of the methods 100, 200, and/or 400 of supervised machine learning for automated assistants, in accordance with one or more aspects of the present disclosure) implementing any one or more of the methods or functions described herein. The instructions may also reside, completely or at least partially, within main memory 604 and/or within processing device 602 during execution thereof by computer system 600, main memory 604 and processing device 602 also constituting computer-readable media. The instructions may further be transmitted or received over a network 620 via network interface device 608.)
Regarding claim 12, the claim recites limitations similar to those of claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale.
Regarding claim 14, the claim recites limitations similar to those of claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale.
Regarding claim 15, the claim recites limitations similar to those of claim 5 and is rejected for similar reasons as claim 5 using similar teachings and rationale.
Regarding claim 16, the claim recites limitations similar to those of claim 6 and is rejected for similar reasons as claim 6 using similar teachings and rationale.
Regarding claim 17, the claim recites limitations similar to those of claim 1 and is rejected for similar reasons using similar teachings and rationale. Sivasankar also teaches:
A system comprising: at least one processor; and at least one non-transitory computer-readable storage medium storing instructions that, when executed by the at least one processor, cause the system ([0068] The computer system 600 may further include a network interface device 608, which may communicate with a network 620. The computer system 600 also may include a video display unit 610 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 612 (e.g., a keyboard), a cursor control device 614 (e.g., a mouse) and/or an acoustic signal generation device 616 (e.g., a speaker). In one embodiment, video display unit 610, alphanumeric input device 612, and cursor control device 614 may be combined into a single component or device (e.g., an LCD touch screen).)
Regarding claim 18, the claim recites limitations similar to those of claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale.
Regarding claim 20, the claim recites limitations similar to those of claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale.
Claims 3, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sivasankar in view of Peng as applied to claims 1, 11, and 17 above, and further in view of Nair (Pub. No.: US 2021/0105246 A1), hereafter Nair.
Regarding claim 3, the claim recites limitations similar to those of claim 1 and is rejected for similar reasons as claim 1 using similar teachings and rationale.
Sivasankar in view of Peng does not appear to explicitly teach in response to a user interaction with the automated interaction response, initiating a client-agent response session between the client and an agent device; providing the predicted client disposition classification for display via the agent device.
Nair teaches:
in response to a user interaction with the automated interaction response, initiating a client-agent response session between the client and an agent device; providing the predicted client disposition classification for display via the agent device. ([0016] teaches conventionally, messaging centers may be accessed on a user device via an interactive interface using at least a messaging application available on the interface. FIG. 1 presents an exemplary messaging center application interface and solution 100. In particular, FIG. 1 illustrates a user device 102 with messaging center interface 104. The user device 102 may be a tablet, iPad, cell phone, or the like. For exemplary purposes, user device 102 can be a smart phone or laptop. The user device 102 may be equipped with various applications for performing various tasks. For example, the user device 102 may be used for web browsing, video streaming, bill payments, and online purchases. Additionally, the user device 102 may be equipped with applications that enable the user to make purchases and transfers using a payment provider application and/or a digital wallet, and/or access applications with the payment provider, merchant, messaging center, etc. Further, the user device 102 may be capable of making phone calls and communicating with one or more other communications devices using a cellular network, Wi-Fi, Bluetooth, BLE, NFC, WLAN, etc. For example, in the communication the user may communicate via the user device 102 with a service agent, bot, or other entity at an application dashboard of a messaging center 104. In the exemplary message center dashboard 104 of FIG. 1, for example, a user can review account activity associated with a payment processing service. The account activity can include information about a user's account balance, invoicing, and other recent activity 106.
At this messaging center dashboard 104, the user may also message, chat, or otherwise communicate with a customer service agent regarding their account. As illustrated in FIG. 1, the user may also be flagged on pending notifications 108 regarding previous communications with the customer service center representative. Note that the term customer service representative is being broadly used to represent a bot, agent, or other entity which may be used in communicating with a user/customer. In some instances, the term customer service agent may be interchanged with the term customer service user or simply agent.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Sivasankar, Peng, and Nair before them, to include Nair's feature of initiating a session with an agent using an automated assistant in Sivasankar and Peng's system that performs supervised learning of an automated assistant. One would have been motivated to make such a combination in order to improve efficiency by initiating a live chat if the model cannot understand a query, as taught by Nair [0015].
Regarding claim 13, the claim recites limitations similar to those of claim 3 and is rejected for similar reasons as claim 3 using similar teachings and rationale.
Regarding claim 19, the claim recites limitations similar to those of claim 3 and is rejected for similar reasons as claim 3 using similar teachings and rationale.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW JUNG whose telephone number is (571)270-3779. The examiner can normally be reached Monday through Friday from 9am to 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DAVID WILEY can be reached on 571-272-4150. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW J JUNG/Examiner, Art Unit 2175