Prosecution Insights
Last updated: April 19, 2026
Application No. 18/678,210

ARTIFICIAL INTELLIGENCE ASSISTANCE FOR PROVIDING CLIENT SUPPORT BASED ON MESSAGING PLATFORM COMMUNICATIONS

Non-Final Office Action: §102, §103, Double Patenting

Filed: May 30, 2024
Examiner: SERRAGUARD, SEAN ERIN
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Truist Bank
OA Round: 1 (Non-Final)

Grant Probability: 69% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% (92 granted / 134 resolved; +6.7% vs TC avg, above average)
Interview Lift: +33.6% on resolved cases with interview (strong)
Typical Timeline: 3y 2m average prosecution; 43 applications currently pending
Career History: 177 total applications across all art units
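The headline figures above are simple ratios, and a back-of-the-envelope check shows how they reduce. This is an illustrative sketch, not part of the report; the 65.4% without-interview rate is an assumed value implied by the reported +33.6% lift, not a number from the dashboard.

```python
# Recompute the examiner stats shown above.
granted, resolved = 92, 134

# Career allowance rate = granted / resolved cases.
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # 69%

# Interview lift = allowance rate with an interview minus without.
# 99% "with interview" is from the dashboard; 65.4% "without" is an
# assumption consistent with the reported +33.6% lift.
with_iv, without_iv = 0.99, 0.654
print(f"Interview lift: {with_iv - without_iv:+.1%}")  # +33.6%
```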

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 134 resolved cases.
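The "vs TC avg" deltas above are measured against the Tech Center baseline estimate. A small sketch (rates copied from the panel above; the derivation of the baseline is my inference, not stated in the report) recovers the implied baseline per statute:

```python
# Per-statute rates for this examiner and the reported deltas versus the
# Tech Center average, both in percent (values from the panel above).
examiner = {"101": 9.4, "103": 49.7, "102": 18.6, "112": 19.2}
delta = {"101": -30.6, "103": +9.7, "102": -21.4, "112": -20.8}

# Implied TC average = examiner rate minus delta.
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
print(tc_avg)  # each statute implies the same ~40.0% baseline
```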

Office Action

Grounds of rejection: §102, §103, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner Note Regarding Double Patenting

The pending claims of the Instant Application have been reviewed for double patenting in light of the pending claims of co-pending Application No. 18/678,009. Though the pending claims do not currently constitute double patenting, examiner reserves the right to change this determination in light of later amendments provided by the applicant.

Claim Objections

Claims 2, 9, and 16 are objected to because of the following informalities: Regarding claim 2, and mutatis mutandis claims 9 and 16, the claim part "another issue" at line 5 should read as "an other issue", to correspond with the part name "the other issue" at line 13. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 8, and 15 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ferris (U.S. Pat. App. Pub. No. 2024/0283868, hereinafter Ferris).

Regarding claim 1, Ferris discloses A system comprising (The systems and methods described with reference to the “conversational artificial intelligence to train contact center agents”, as implemented using “computing device 100”; Ferris, ¶ [0022], [0126]) : a processing device (“the computing device 100 may include a central processing unit (CPU) or processor 105”; Ferris, ¶ [0024]); and a non-transitory memory including instructions that are executable by the processing device for causing the processing device to (“the computing device 100 may include... a main memory 110” and the “various components, modules, and/or servers of FIG. 2 (as well as the other figures included herein) may each include one or more processors executing computer program instructions and interacting with other system components for performing the various functionalities described herein” where “computer program instructions may be stored in a memory”; Ferris, ¶ [0024], [0056]) : receive, via a graphical user interface on a user device, a request for a practice query for a simulated training environment, (As described with reference to FIG. 9, “the engine can be configured as a continuously running outbound campaign aimed at a specific queue (group or team) of agents” and the system is responsive to “an agent activat[ing] themselves within this queue” which is understood as providing a request for practice {practice query}, “the agent receives an inbound interaction powered by the outbound campaign tool. 
{for a simulated training environment}”; Ferris, ¶ [0171]; FIG. 9) wherein the request is received in a natural language format and indicates an answer or a resource associated with an issue with a service (The system initiates “the training by initiating a virtual communication to a user device of the first agent” in “response to detecting the one or more triggering events,” where “triggering events” can “include... receiving input from [the user device of the first agent] that selects the first agent for receiving the training related to the simulated interaction,” and where “the input may be provided in the form of free speech or text (e.g., unstructured, natural language input).”; Ferris, ¶ [0077], [0183]-[0184]); provide the request in the natural language format as a first input to a machine learning model, (The input from the “first agent” which initiates the training, is received as a “unstructured, natural language input” by the system, the system being “automated, intelligent, and efficient AI-based technology for training agents in a contact center” generated using “a machine learning or neural network-based approach”; Ferris, ¶ [0053], [0077], [0127], [0171]) wherein the machine learning model is trained on historical communication logs from a messaging platform (The system includes “conversation data is gathered and then used to update the simulated interactions” where, in one example, the “conversation data is gathered and imported... from historical interactions handled by the contact center... 
between an agent and a customer during interactions” which “may have occurred via a chat interface, through text, or via voice calls.”; Ferris, ¶ [0086], [0167], [0171]) to generate a first output indicating a query associated with the answer or the resource based on the first input in the natural language format (In response to the initiation, “the simulated interactions or calls are conducted” where “the agent receives an inbound interaction powered by the outbound campaign tool” and the types of scenarios presented to an agent may be determined by the agent, and may reflect a “particular type of interaction” or a “type of training exercise.”; Ferris, ¶ [0171]-[0172]); and present the query in the natural language format on the user device via the graphical user interface for use in resolving the issue with the service in the simulated training environment (“At step 920, the simulated interactions or calls are conducted” where “the agent receives an inbound interaction powered by the outbound campaign tool” where “once connection is established, one of the customer bots then communicates with the agent so to simulate the same experience as an inbound interaction from an actual caller or customer” which can occur “via a chat interface, through text, or via voice calls,” which can be received via the agent device 230.; Ferris, ¶ [0059]-[0060], [0086], [0171]-[0172]). Regarding claim 8, Ferris discloses A method comprising (The systems and methods described with reference to the “conversational artificial intelligence to train contact center agents”, as implemented using “computing device 100” including “a central processing unit (CPU) or processor 105 and a main memory 110” where “various components, modules, and/or servers of FIG. 
2 (as well as the other figures included herein) may each include one or more processors executing computer program instructions and interacting with other system components for performing the various functionalities described herein”; Ferris, ¶ [0022], [0024], [0056], [0126]) : receiving, by a processing device and via a graphical user interface on a user device, a request for a practice query for a simulated training environment, (As described with reference to FIG. 9, and using the previously described processor, “the engine can be configured as a continuously running outbound campaign aimed at a specific queue (group or team) of agents” and the system is responsive to “an agent activat[ing] themselves within this queue” which is understood as providing a request for practice {practice query}, “the agent receives an inbound interaction powered by the outbound campaign tool. {for a simulated training environment}”; Ferris, ¶ [0171]; FIG. 9) wherein the request is received in a natural language format and indicates an answer or a resource associated with an issue with a service (The system initiates “the training by initiating a virtual communication to a user device of the first agent” in “response to detecting the one or more triggering events,” where “triggering events” can “include... 
receiving input from [the user device of the first agent] that selects the first agent for receiving the training related to the simulated interaction,” and where “the input may be provided in the form of free speech or text (e.g., unstructured, natural language input).”; Ferris, ¶ [0077], [0183]-[0184]); providing, by the processing device, the request in the natural language format as a first input to a machine learning model, (Using the previously described processor, the input from the “first agent” which initiates the training, is received as a “unstructured, natural language input” by the system, the system being “automated, intelligent, and efficient AI-based technology for training agents in a contact center” generated using “a machine learning or neural network-based approach”; Ferris, ¶ [0053], [0077], [0127], [0171]) wherein the machine learning model is trained on historical communication logs from a messaging platform (The system includes “conversation data is gathered and then used to update the simulated interactions” where, in one example, the “conversation data is gathered and imported... from historical interactions handled by the contact center... 
between an agent and a customer during interactions” which “may have occurred via a chat interface, through text, or via voice calls.”; Ferris, ¶ [0086], [0167], [0171]) to generate a first output indicating a query associated with the answer or the resource based on the first input (In response to the initiation, “the simulated interactions or calls are conducted” where “the agent receives an inbound interaction powered by the outbound campaign tool” and the types of scenarios presented to an agent may be determined by the agent, and may reflect a “particular type of interaction” or a “type of training exercise.”; Ferris, ¶ [0171]-[0172]); and presenting, by the processing device, the query in the natural language format on the user device via the graphical user interface for use in resolving the issue with the service in the simulated training environment (Using the previously described processor, “At step 920, the simulated interactions or calls are conducted” where “the agent receives an inbound interaction powered by the outbound campaign tool” where “once connection is established, one of the customer bots then communicates with the agent so to simulate the same experience as an inbound interaction from an actual caller or customer” which can occur “via a chat interface, through text, or via voice calls,” which can be received via the agent device 230.; Ferris, ¶ [0059]-[0060], [0086], [0171]-[0172]).

Regarding claim 15, Ferris discloses A non-transitory computer-readable medium comprising program code that is executable by a processing device for causing the processing device to (The systems and methods described with reference to the “conversational artificial intelligence to train contact center agents”, as implemented using “computing device 100” including “a central processing unit (CPU) or processor 105 and a main memory 110” where “various components, modules, and/or servers of FIG. 
2 (as well as the other figures included herein) may each include one or more processors executing computer program instructions and interacting with other system components for performing the various functionalities described herein”; Ferris, ¶ [0022], [0024], [0056], [0126]) : receive, via a graphical user interface on a user device, a request for a practice query for a simulated training environment, (As described with reference to FIG. 9, “the engine can be configured as a continuously running outbound campaign aimed at a specific queue (group or team) of agents” and the system is responsive to “an agent activat[ing] themselves within this queue” which is understood as providing a request for practice {practice query}, “the agent receives an inbound interaction powered by the outbound campaign tool. {for a simulated training environment}”; Ferris, ¶ [0171]; FIG. 9) wherein the request is received in a natural language format and indicates an answer or a resource associated with an issue with a service (The system initiates “the training by initiating a virtual communication to a user device of the first agent” in “response to detecting the one or more triggering events,” where “triggering events” can “include... 
receiving input from [the user device of the first agent] that selects the first agent for receiving the training related to the simulated interaction,” and where “the input may be provided in the form of free speech or text (e.g., unstructured, natural language input).”; Ferris, ¶ [0077], [0183]-[0184]); provide the request in the natural language format as a first input to a machine learning model, (The input from the “first agent” which initiates the training, is received as a “unstructured, natural language input” by the system, the system being “automated, intelligent, and efficient AI-based technology for training agents in a contact center” generated using “a machine learning or neural network-based approach”; Ferris, ¶ [0053], [0077], [0127], [0171]) wherein the machine learning model is trained on historical communication logs from a messaging platform (The system includes “conversation data is gathered and then used to update the simulated interactions” where, in one example, the “conversation data is gathered and imported... from historical interactions handled by the contact center... 
between an agent and a customer during interactions” which “may have occurred via a chat interface, through text, or via voice calls.”; Ferris, ¶ [0086], [0167], [0171]) to generate a first output indicating a query associated with the answer or the resource based on the first input in the natural language format (In response to the initiation, “the simulated interactions or calls are conducted” where “the agent receives an inbound interaction powered by the outbound campaign tool” and the types of scenarios presented to an agent may be determined by the agent, and may reflect a “particular type of interaction” or a “type of training exercise.”; Ferris, ¶ [0171]-[0172]); and present the query in the natural language format on the user device via the graphical user interface for use in resolving the issue with the service in the simulated training environment (“At step 920, the simulated interactions or calls are conducted” where “the agent receives an inbound interaction powered by the outbound campaign tool” where “once connection is established, one of the customer bots then communicates with the agent so to simulate the same experience as an inbound interaction from an actual caller or customer” which can occur “via a chat interface, through text, or via voice calls,” which can be received via the agent device 230.; Ferris, ¶ [0059]-[0060], [0086], [0171]-[0172]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-3, 7, 9-10, 14, and 16-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ferris as applied to claim(s) 1, 8, and 15 above, and further in view of Mitchem (U.S. Pat. No. 11706337, hereinafter Mitchem).

Regarding claim 2, the rejection of claim 1 is incorporated. Ferris discloses all of the elements of the current invention as stated above. Ferris further discloses wherein the query is a first query (The query is a first query, as the query is the first “training call to the agent”, of a plurality of interactions, from the system, in response to the request for a practice query, as received from the agent.; Ferris, ¶ [0128], [0169]-[0170]). 
However, Ferris fail(s) to expressly recite wherein the memory further includes instructions that are executable by the processing device for causing the processing device to: receive, via the graphical user interface on the user device, a second query associated with another request with respect to another issue with the service, wherein the second query is received in the natural language format; provide the second query in the natural language format as a second input to the machine learning model, wherein the machine learning model is trained on historical communication logs from the messaging platform to generate a second output indicating a second answer to the second query based on the second input in the natural language format; and present the second answer in the natural language format on the user device via the graphical user interface for use in resolving the other issue with the service.

Mitchem teaches systems and methods of an AI assistant for a customer service representative. (Mitchem, ¶ Col. 2, lines 11-15).

Regarding claim 2, Mitchem teaches wherein the memory further includes instructions that are executable by the processing device for causing the processing device to: receive, via the graphical user interface on the user device, a second query associated with another request with respect to another issue with the service, (“At step 310, one or more inputs may be received. The input may be received by an AI assistant (e.g., the AI assistant 101 in FIG. 1 and/or the AI assistant 201 in FIG. 2)” where the “input may be associated with a customer” and “may be received from a device associated with the CSR (e.g., the CSR device 202 in FIG. 2)” and the “input may comprise ... 
a sequence of question, answers, and/or statements” from either the user, the agent, or both, which is understood as including a second question from the user and/or agent {a second query associated with another request} which can be with respect to “a CSR workflow” {another issue with the service}, and the “input may comprise a UI input” received through a “graphic user interface (GUI).”; Mitchem, ¶ Col. 10, line 42-Col. 11, line 9) wherein the second query is received in the natural language format (“The input may comprise a speech input (e.g., the speech input 102 in FIG. 1). speech input 102 may be received via an audio (e.g., voice) and/or video call, such as between the CSR and the customer.”; Mitchem, ¶ Col. 10, lines 53-58); provide the second query in the natural language format as a second input to the machine learning model, (“The AI assistant 201 {the machine learning model}” then receives the “one or more inputs from the CSR device 202 {provide the second query…to}” and the input can be “a speech input” and “may comprise speech of the CSR. {in the natural language format as a first input...}”; Mitchem, ¶ Col. 10, lines 53-58) wherein the machine learning model is trained on historical communication logs from the messaging platform to generate a second output indicating a second answer to the second query (“The AI assistant 201 may be configured to be trained on...a database of field data” and “may be configured to self-learn, such as by learning based on previously received inputs” and “previously delivered outputs” as well as “to self-learn based on the results (e.g., success) of previously delivered outputs, such as in response to inputs” where “based on outcomes of previous communication sessions, the AI assistant 201 may learn to adjust the outputs for certain inputs... [and] may be configured to be taught and/or to learn based on inputs and/or results of determined outputs” of “an artificial neural network.”; Mitchem, ¶ Col. 
9, lines 27-45) based on the second input in the natural language format (“an output may be determined... by the AI assistant... based on the one or more inputs” in light of “customer information, such as an account of the customer or a history of transactions and/or communication sessions of the customer... using predictive analysis”; Mitchem, ¶ Col. 12, line 62 - Col. 13, line 12); and present the second answer in the natural language format on the user device via the graphical user interface for use in resolving the other issue with the service (“an indication of the output {the second answer} may be sent to the CSR... via the CSR device {...on the user device via the graphical user interface}” where the “output may comprise a suggested CSR statement {...in the natural language format}” which can include “a response to a customer question {...for use in resolving the other issue with the service}”; Mitchem, ¶ Col. 13, lines 13-16, Col. 16, lines 15-28). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the customer service training systems of Ferris to incorporate the teachings of Mitchem to include wherein the memory further includes instructions that are executable by the processing device for causing the processing device to: receive, via the graphical user interface on the user device, a second query associated with another request with respect to another issue with the service, wherein the second query is received in the natural language format; provide the second query in the natural language format as a second input to the machine learning model, wherein the machine learning model is trained on historical communication logs from the messaging platform to generate a second output indicating a second answer to the second query based on the second input in the natural language format; and present the second answer in the natural language format on the user 
device via the graphical user interface for use in resolving the other issue with the service. The customer service agent training systems of Ferris provide a simulated training environment for a customer service representative. However, the simulated training environment of Ferris fails to expressly describe the responding to additional questions related to customer service needs. Mitchem teaches an AI assistant, which, in the context of the customer service training systems of Mitchem, can respond to questions and answers from either the customer or the agent, to address workflow deficiencies “to guide the CSR through unexpected situations,” which can help avoid unresponsive or inaccurate CSR statements, which provides the recognized benefit of improving information quality, thereby improving training quality. (Mitchem, Col. 1, lines 15-24; Col. 1, line 60-Col. 2, line 11).

Regarding claim 3, the rejection of claim 2 is incorporated. Ferris discloses all of the elements of the current invention as stated above. However, Ferris fail(s) to expressly recite wherein the memory further includes instructions that are executable by the processing device for causing the processing device to: provide a third answer in the natural language format as a third input to the machine learning model, wherein the machine learning model is trained on the historical communication logs to generate a third output indicating a third query associated with the third answer based on the third input in the natural language format; and present the third query in the natural language format on the user device via the graphical user interface for use in resolving the other issue with the service. The relevance of Mitchem is described above with relation to claim 2. 
Regarding claim 3, Mitchem teaches wherein the memory further includes instructions that are executable by the processing device for causing the processing device to: provide a third answer in the natural language format as a third input to the machine learning model, (“At step 310, one or more inputs may be received. The input may be received by an AI assistant (e.g., the AI assistant 101 in FIG. 1 and/or the AI assistant 201 in FIG. 2)” where the “input may be associated with a customer” and “may be received from a device associated with the CSR (e.g., the CSR device 202 in FIG. 2)” and the “input may comprise ... a sequence of question, answers, and/or statements” from either the user, the agent, or both, which is understood as including providing any number of inputs, including a third input, from the user and/or agent {a third answer... as a third input}, and the “input may comprise a UI input” received through a “graphic user interface (GUI).”; Mitchem, ¶ Col. 10, line 42-Col. 11, line 9) wherein the machine learning model is trained on the historical communication logs to generate a third output (“The AI assistant 201 may be configured to be trained on...a database of field data” and “may be configured to self-learn, such as by learning based on previously received inputs” and “previously delivered outputs” as well as “to self-learn based on the results (e.g., success) of previously delivered outputs, such as in response to inputs” where “based on outcomes of previous communication sessions, the AI assistant 201 may learn to adjust the outputs for certain inputs... [and] may be configured to be taught and/or to learn based on inputs and/or results of determined outputs” of “an artificial neural network.”; Mitchem, ¶ Col. 
9, lines 27-45) indicating a third query associated with the third answer based on the third input in the natural language format (As described with reference to an example, “the customer is calling the insurance company to file an insurance claim” and the “output... may comprise a new question for the CSR to ask the customer {...in a natural language format}” such as, “based on a determination that further information is needed from the customer”; Mitchem, ¶ Col. 13, lines 13-32); and present the third query in the natural language format on the user device via the graphical user interface for use in resolving the other issue with the service (“an indication of the output {the second answer} may be sent to the CSR... via the CSR device {...on the user device via the graphical user interface}” where the “output may comprise a suggested CSR statement {...in the natural language format}” which can include “a response to a customer question {...for use in resolving the other issue with the service}”; Mitchem, ¶ Col. 13, lines 13-16, Col. 16, lines 15-28). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the customer service training systems of Ferris to incorporate the teachings of Mitchem to include wherein the memory further includes instructions that are executable by the processing device for causing the processing device to: provide a third answer in the natural language format as a third input to the machine learning model, wherein the machine learning model is trained on the historical communication logs to generate a third output indicating a third query associated with the third answer based on the third input in the natural language format; and present the third query in the natural language format on the user device via the graphical user interface for use in resolving the other issue with the service. 
The customer service agent training systems of Ferris provide a simulated training environment for a customer service representative. However, the simulated training environment of Ferris fails to expressly describe the responding to additional questions related to customer service needs. Mitchem teaches an AI assistant, which, in the context of the customer service training systems of Mitchem, can respond to questions and answers from either the customer or the agent, to address workflow deficiencies “to guide the CSR through unexpected situations,” which can help avoid unresponsive or inaccurate CSR statements, which provides the recognized benefit of improving information quality, thereby improving training quality. (Mitchem, Col. 1, lines 15-24; Col. 1, line 60-Col. 2, line 11).

Regarding claim 7, the rejection of claim 2 is incorporated. Ferris discloses all of the elements of the current invention as stated above. However, Ferris fail(s) to expressly recite wherein the second output generated by the machine learning model further comprises a number of times that the second query has been provided to the machine learning model. The relevance of Mitchem is described above with relation to claim 2. 
Regarding claim 7, Mitchem teaches wherein the second output generated by the machine learning model further comprises a number of times that the second query has been provided to the machine learning model (As disclosed with reference to an example, “the AI assistant 201 may determine that an output” as given in response to an input “resulted in waiver of fees a number of times,” where, in an effort to avoid waiving the fees, “the AI assistant may determine a different output a subsequent time and/or that to associate a different output with the received input.” This is understood as including determining a “number of times” that a second query has been presented to the “AI assistant” such that the correlation (i.e., that the output in response to the input results in “waiving the fees”) between the output and the input may be drawn.; Mitchem, ¶ Col. 9, lines 26-45). It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the customer service training systems of Ferris to incorporate the teachings of Mitchem to include wherein the second output generated by the machine learning model further comprises a number of times that the second query has been provided to the machine learning model. The customer service agent training systems of Ferris provide a simulated training environment for a customer service representative. However, the simulated training environment of Ferris fails to expressly describe the responding to additional questions related to customer service needs. 
Mitchem teaches an AI assistant, which, in the context of the customer service training systems of Mitchem, can respond to questions and answers from either the customer or the agent, to address workflow deficiencies “to guide the CSR through unexpected situations,” which can help avoid unresponsive or inaccurate CSR statements, thereby providing the recognized benefit of improving information quality and, in turn, training quality. (Mitchem, Col. 1, lines 15-24; Col. 1, line 60-Col. 2, line 11).

Regarding claim 9, the rejection of claim 8 is incorporated. Claim 9 is substantially the same as claim 2 and is therefore rejected under the same rationale as above.

Regarding claim 10, the rejection of claim 9 is incorporated. Claim 10 is substantially the same as claim 3 and is therefore rejected under the same rationale as above.

Regarding claim 14, the rejection of claim 9 is incorporated. Claim 14 is substantially the same as claim 7 and is therefore rejected under the same rationale as above.

Regarding claim 16, the rejection of claim 15 is incorporated. Claim 16 is substantially the same as claim 2 and is therefore rejected under the same rationale as above.

Regarding claim 17, the rejection of claim 16 is incorporated. Claim 17 is substantially the same as claim 3 and is therefore rejected under the same rationale as above.

Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ferris and Mitchem as applied to claims 3, 10, and 17 above, and further in view of Mahmoud (U.S. Pat. App. Pub. No. 2022/0156298, hereinafter Mahmoud).

Regarding claim 4, the rejection of claim 3 is incorporated. Ferris and Mitchem disclose all of the elements of the current invention as stated above.
However, Ferris and Mitchem fail to expressly recite wherein the memory further includes instructions that are executable by the processing device for causing the processing device to: prompt, via the graphical user interface, a user to input a rating for the second answer to the second query or for the third query associated with the third answer; receive, via the graphical user interface, the rating for the second answer to the second query or for the third query associated with the third answer; and train the machine learning model using the rating by weighting the second answer or the third query in a training dataset based on the rating.

Mahmoud teaches systems and methods for an “agent-assist system that provides context-aware recommendations to agents.” (Mahmoud, ¶ [0002]).

Regarding claim 4, Mahmoud teaches wherein the memory further includes instructions that are executable by the processing device for causing the processing device to: prompt, via the graphical user interface, a user to input a rating for the second answer to the second query or for the third query associated with the third answer (“the agent device 114 may collect and send feedback data indicating how relevant the agent 112 and/or user 104 felt the results were for responding to the query,” where, as shown in FIG. 4A, the use of selectable buttons to provide feedback is a prompted feedback.; Mahmoud, ¶ [0097]; FIG. 4A); receive, via the graphical user interface, the rating for the second answer to the second query or for the third query associated with the third answer (The feedback can be “explicit feedback data 428” regarding the outputs, which as applied to the outputs of Mitchem is explicit feedback for the second answer to the second query and/or for the third query associated with the third answer.; Mahmoud, ¶ [0097]; FIG. 4A); and train the machine learning model using the rating by weighting the second answer or the third query in a training dataset based on the rating (“The feedback is then communicated to a server that stores the current query, recommendation, feedback, user (agent) ID, and other metadata (timestamp, etc.) so that other systems can learn from such feedback to boost or penalize recommendations based on the given feedback among other criteria.”; Mahmoud, ¶ [0074]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the customer service training systems of Ferris, as modified by the AI assistant systems of Mitchem, to incorporate the teachings of Mahmoud to include wherein the memory further includes instructions that are executable by the processing device for causing the processing device to: prompt, via the graphical user interface, a user to input a rating for the second answer to the second query or for the third query associated with the third answer; receive, via the graphical user interface, the rating for the second answer to the second query or for the third query associated with the third answer; and train the machine learning model using the rating by weighting the second answer or the third query in a training dataset based on the rating. The AI assist systems of Mahmoud incorporate user and agent feedback regarding generated responses, which allows the system to determine whether the responses, or portions thereof, were relevant, and to improve the recommendations provided to the agents, as recognized by Mahmoud. (Mahmoud, ¶ [0002], [0037]).

Regarding claim 11, the rejection of claim 10 is incorporated. Claim 11 is substantially the same as claim 4 and is therefore rejected under the same rationale as above.

Regarding claim 18, the rejection of claim 17 is incorporated.
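The “boost or penalize” feedback mechanism Mahmoud is cited for can be sketched as rating-weighted training data. This is a minimal illustration under assumptions: the function name, the 1-to-5 rating scale, and the linear weighting scheme are hypothetical, not drawn from the cited reference.

```python
def build_weighted_dataset(records, max_rating=5):
    """Hypothetical sketch: convert (query, answer, rating) records into
    (sample, weight) pairs, so that a higher user-supplied rating gives
    the answer proportionally more influence in training (boost) and a
    lower rating gives it less (penalize)."""
    dataset = []
    for query, answer, rating in records:
        weight = rating / max_rating  # scale feedback into a [0, 1] weight
        dataset.append(((query, answer), weight))
    return dataset
```

In practice such weights would typically be passed to a trainer as per-sample weights (e.g., a `sample_weight` argument in many ML libraries), but any scheme that scales a sample's loss contribution by the rating fits the claim language of “weighting ... in a training dataset based on the rating.”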
Claim 18 is substantially the same as claim 4 and is therefore rejected under the same rationale as above.

Claims 5-6, 12-13, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ferris and Mitchem as applied to claims 2, 9, and 16 above, and further in view of Friio (U.S. Pat. App. Pub. No. 2022/0070296, hereinafter Friio).

Regarding claim 5, the rejection of claim 2 is incorporated. Ferris and Mitchem disclose all of the elements of the current invention as stated above. However, Ferris and Mitchem fail to expressly recite wherein the memory further includes instructions that are executable by the processing device for causing the processing device to: determine an amount of time involved in resolving the other issue; and train the machine learning model using the second answer to the second query and the amount of time involved in resolving the other issue.

Friio teaches systems and methods for “customer assistance via call or contact centers and internet-based service options”. (Friio, ¶ [0002]).

Regarding claim 5, Friio teaches wherein the memory further includes instructions that are executable by the processing device for causing the processing device to: determine an amount of time involved in resolving the other issue (“the contact center system 200 ... may store customer data” including “customer profiles, contact information, service level agreement (SLA), and interaction history (e.g., details of previous interactions with a particular customer, including the nature of previous interactions, disposition data, wait time, handle time, and actions taken by the contact center to resolve customer issues)” as well as “interaction data” including “data relating to numerous past interactions between customers and contact centers”; Friio, ¶ [0038]); and train the machine learning model using the second answer to the second query and the amount of time involved in resolving the other issue (“the analytics module 250 also may generate, update, train, and modify predictors or models 252 based on collected data, such as, for example, customer data, agent data, and interaction data,” which includes, but is not limited to, all questions and answers {the second answer to the second query} and the “handle time” {the amount of time involved in resolving the other issue}.; Friio, ¶ [0038], [0048]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the customer service training systems of Ferris, as modified by the AI assistant systems of Mitchem, to incorporate the teachings of Friio to include wherein the memory further includes instructions that are executable by the processing device for causing the processing device to: determine an amount of time involved in resolving the other issue; and train the machine learning model using the second answer to the second query and the amount of time involved in resolving the other issue.
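The combination Friio is cited for (pairing the answer given for a query with the measured handle time for the resolved issue, and using both as training data) can be sketched as below. The function and field names are hypothetical illustrations, not code from the cited reference.

```python
from datetime import datetime, timedelta

def make_training_example(query, answer, opened_at, resolved_at):
    """Hypothetical sketch: determine the amount of time involved in
    resolving the issue (Friio's "handle time") from timestamps in the
    interaction history, and fold it into a training example together
    with the answer given for the query."""
    handle_time = (resolved_at - opened_at).total_seconds()
    return {
        "query": query,
        "answer": answer,
        "handle_time_s": handle_time,  # used alongside the answer in training
    }
```

A model trainer could then consume these examples directly, e.g. treating `handle_time_s` as a feature or as a target for predicting resolution effort.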
The assistant systems of Friio include “generat[ing], updat[ing], train[ing], and modify[ing] predictors or models” based on “customer data, agent data, and interaction data… to predict behaviors of, for example, customers or agents, in a variety of situations” such that the system can “tailor interactions based on such predictions” and “thereby improving overall contact center performance and the customer experience,” as recognized by Friio. (Friio, ¶ [0049]).

Regarding claim 6, the rejection of claim 2 is incorporated. Ferris and Mitchem disclose all of the elements of the current invention as stated above. However, Ferris and Mitchem fail to expressly recite wherein the machine learning model is further configured to generate a confidence score for the second answer, and wherein the memory further includes instructions that are executable by the processing device for causing the processing device to present the confidence score on the user device via the graphical user interface. The relevance of Friio is described above with relation to claim 5.
Regarding claim 6, Friio teaches wherein the machine learning model is further configured to generate a confidence score for the second answer (“the dialog manager 272 may also be configured to compute a confidence level for the selected response”; Friio, ¶ [0066]), and wherein the memory further includes instructions that are executable by the processing device for causing the processing device to present the confidence score on the user device via the graphical user interface (the “[computed] confidence level” is provided “to the agent device 230,” where the “agent device 230” is “a computing device configured to communicate with the servers of the contact center system 200, perform data processing associated with operations, and interface with customers via voice, chat, email, and other multimedia communication mechanisms” and the “asynchronous resolution facilitator then builds (or formats instructions for building) an agent interface (which may be constructed as a webpage) that is configured to visually display the data associated with the customer request”; Friio, ¶ [0040], [0066], [0158]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the customer service training systems of Ferris, as modified by the AI assistant systems of Mitchem, to incorporate the teachings of Friio to include wherein the machine learning model is further configured to generate a confidence score for the second answer, and wherein the memory further includes instructions that are executable by the processing device for causing the processing device to present the confidence score on the user device via the graphical user interface.
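The two steps Friio is cited for (computing a confidence level for the selected response and visually displaying it on the agent interface) can be sketched as follows. This is an illustrative sketch under assumptions: the softmax scoring and the display formatting are hypothetical choices, not details from the cited reference.

```python
import math

def confidence_for(scores):
    """Hypothetical sketch: softmax over candidate-response scores; the
    top probability serves as the confidence level for the selected
    response."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs[best]

def render_for_agent(answer, confidence):
    # Minimal stand-in for presenting the score on the agent's
    # graphical interface alongside the selected answer.
    return f"{answer} (confidence: {confidence:.0%})"
```

In a real agent-assist UI the rendered string would be replaced by a webpage or widget, but the flow (model emits a score, the interface displays it with the answer) is the same.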
The assistant systems of Friio include “generat[ing], updat[ing], train[ing], and modify[ing] predictors or models” based on “customer data, agent data, and interaction data… to predict behaviors of, for example, customers or agents, in a variety of situations” such that the system can “tailor interactions based on such predictions” and “thereby improving overall contact center performance and the customer experience,” as recognized by Friio. (Friio, ¶ [0049]).

Regarding claim 12, the rejection of claim 9 is incorporated. Claim 12 is substantially the same as claim 5 and is therefore rejected under the same rationale as above.

Regarding claim 13, the rejection of claim 9 is incorporated. Claim 13 is substantially the same as claim 6 and is therefore rejected under the same rationale as above.

Regarding claim 19, the rejection of claim 16 is incorporated. Claim 19 is substantially the same as claim 5 and is therefore rejected under the same rationale as above.

Regarding claim 20, the rejection of claim 16 is incorporated. Claim 20 is substantially the same as claim 6 and is therefore rejected under the same rationale as above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Ghoche (U.S. Pat. App. Pub. No. 2024/0177172) discloses systems and methods for using generative AI for customer support, the AI model being fine-tuned on the task of generating a template workflow based on historical dialogue training.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sean E. Serraguard whose telephone number is (313) 446-6627. The examiner can normally be reached 07:00-17:00 M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel C. Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Sean E Serraguard/
Patent Examiner, Art Unit 2657

Prosecution Timeline

May 30, 2024
Application Filed
Feb 03, 2026
Non-Final Rejection — §102, §103, §DP
Apr 08, 2026
Examiner Interview Summary
Apr 08, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603095
Stereo Audio Signal Delay Estimation Method and Apparatus
2y 5m to grant Granted Apr 14, 2026
Patent 12598250
SYSTEMS AND METHODS FOR COHERENT AND TIERED VOICE ENROLLMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12597429
PACKET LOSS CONCEALMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12512093
Sensor-Processing Systems Including Neuromorphic Processing Modules and Methods Thereof
2y 5m to grant Granted Dec 30, 2025
Patent 12505835
HOME APPLIANCE AND SERVER
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
69%
Grant Probability
99%
With Interview (+33.6%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 134 resolved cases by this examiner. Grant probability derived from career allow rate.
