DETAILED ACTION
This action is responsive to the Request for Continued Examination (RCE) filed on November 14, 2025.
Claims 1-3 and 5-20 are examined.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/14/25 has been entered.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/14/25 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant’s arguments with respect to claims 1-3 and 5-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-3, 5, 6, and 8-20 are rejected under 35 U.S.C. 103 as being unpatentable over Huet et al., hereinafter Huet (U.S. 2011/0235797), and Rubens (U.S. 2023/0117113) in view of Sundararaman et al., hereinafter Sundararaman (U.S. 2020/0050949).
Regarding Claim 1,
Huet taught a method that facilitates integrating a computing system into healthcare systems, comprising:
receiving a request for live interaction from a user [¶24, posing a textual query to the conversational agent];
in response to receiving the request, causing an automatic live interaction to be conducted with the user in which one or more first messages are received from the user, and one or more second messages are sent to the user [¶24, enable a user to type in queries to which the conversational agent can respond and attempt to service the customer request for information; ¶26~¶33];
periodically during the automatic live interaction:
using an up-to-date textual transcript for the automatic live interaction to assess whether the live interaction is one well-suited to a human live interaction [¶82, expressing frustration during conversation; ¶84, patterns; ¶94, transcript is created by the conversational agent 228 at the time the customer enters their first query and thereafter records all communications];
in response to determining that the live interaction is well-suited to a human live interaction:
causing to be initiated between the user and a human agent a human live interaction in place of the automatic live interaction [¶87, customer can be escalated to a live agent];
in connection with causing the initiating, causing to be presented to the human agent text corresponding to at least some of the first messages and at least some of the second messages [¶94, when the conversational agent 228 escalates the customer to a live agent, a session transcript that may contain the text of the entire interaction between the customer and the conversational agent may be forwarded to the live agent. The transcript may be studied by the live agent prior to conversing with the customer; ¶11].
Huet did not specifically teach wherein the assessing comprises: applying a trained machine learning model to at least a portion of an up-to-date textual transcript for the automatic live interaction to classify an intent of the user and assess whether the live interaction is one well-suited to a human live interaction.
Rubens taught wherein the assessing comprises: applying a trained machine learning model to at least a portion of an up-to-date textual transcript for the automatic live interaction to classify an intent of the user and assess whether the live interaction is one well-suited to a human live interaction [¶100, receive service request data which includes a service request for receiving at least one service for at least one issue faced by the at least one user; ¶101, initiating a chatbot based on the service request; ¶105~¶106, determining an intent based on the analyzing the service request data using a first machine learning model; ¶107, generating an adjusted service level based on the intent using a second machine learning model. The second machine learning model may be trained for adjusting the service level based on the classifying the intent; ¶108, assigning an agent to a user based on the adjusted service level].
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Rubens’s teachings with the teachings of Huet, because the combination results in a frictionless Customer Experience (CX) after customers enter a ‘bot-dialogue’, allowing the customers to have immediate access to live customer agents within adjustable service levels [Rubens: ¶54].
The combination of Huet and Rubens did not specifically teach applying a trained machine learning model…to classify an intent of the user by creating a dependent variable associated with the healthcare systems.
Sundararaman taught applying a trained machine learning model…to classify an intent of the user by creating a dependent variable associated with the healthcare systems [¶23, the digital assistant platform may provide a user interface (e.g., a chatbot and/or the like) that enables a user to access data managed by the healthcare data platform. The digital assistant platform may extract the keywords from the query using a natural language processing model, and identify the intent classification and/or the entity using a machine learning model. The digital assistant platform may generate a response to the query based on the analytical information (e.g., via the chatbot and/or another user interface), and/or cause another action to be performed based on the analytical information; ¶73].
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Sundararaman’s teachings with the teachings of Huet and Rubens, because the combination uses interactions with the user and patterns in the interactions as learning points to provide more accurate results over time, thereby conserving significant amounts of time, manual effort, computational resources, and/or network resources that may otherwise be used to determine analytical information associated with a data element [Sundararaman: ¶24].
Regarding Claim 2,
Huet taught wherein the automatic live interaction is via text [¶24, textual].
Regarding Claim 3,
Huet taught wherein the automatic live interaction is via voice [¶101, voice in place of text interaction], the method further comprising causing the first messages received from the user in voice form to be automatically transformed into text form [¶101, voice interaction tracking text responses, implying speech-to-text conversion].
Regarding Claim 5,
Huet-Rubens taught further comprising: accessing training data representing live interaction transcripts for each of which an intent has been determined; and using the accessed training data to train the machine learning model that is applied [¶80, creation of an optimum chatbot usage and live interaction based on intents and training sets from the bot]. The rationale to combine, as discussed in claim 1, applies here as well.
Regarding Claim 6,
Huet-Rubens taught wherein the trained machine learning model is of one or more of the following machine learning model types: long short-term memory network; neural network; bidirectional encoder representations from transformers; dual intent and entity transformer; transformer deep learning model; GPT-2; or large language model [Rubens: ¶69, machine learning]. The rationale to combine, as discussed in claim 1, applies here as well.
Regarding Claim 8,
Huet taught further comprising: selecting, based on the textual transcript for the automatic live interaction, one of a plurality of human agent categories as best-suited to take over the live interaction, and wherein the initiated human live interaction is initiated with a human agent in the selected human agent category [¶93, conversational agent 228 may determine a service category that is required, and escalate the customer to a live agent that is competent to address queries falling into that category].
Regarding Claim 9,
Huet taught further comprising: causing to be presented to the human agent text corresponding to at least some of the first messages and at least some of the second messages [¶11, providing the live agent with at least a portion of the transcript of the textual conversation].
Regarding Claim 10,
Huet taught wherein the causing text presentation causes the text to be presented in a first display location, and wherein the human live interaction includes one or more third messages that are received from the user, and one or more fourth message originated by the human agent that are sent to the user, the method further comprising causing to be presented to the human agent text corresponding to at least some of the third messages and at least some of the fourth messages, in a second display location adjacent to the first display location [¶95, Fig. 6].
Regarding Claim 11,
Huet taught further comprising causing to be presented to the human agent information about the user that is not related to the live interaction [¶75, user ID and other information].
Regarding Claims 12-20, the claims are similar in scope to claims 1-3, 5-6, and 8-10, and are therefore rejected under the same rationale.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Huet, Rubens, and Sundararaman in view of Mishra (U.S. 2023/0410801).
Regarding Claim 7,
Huet-Rubens-Sundararaman-Mishra taught wherein applying the trained machine learning model also obtains a predicted entity referenced by the user in the automatic live interaction [¶129, intelligent routing system 925 can execute one or more machine-learning techniques to train a model that predicts whether a message received from network device 905 may be successfully addressed by a bot 915].
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Mishra’s teachings with the teachings of Huet, Rubens, and Sundararaman, because categorizing a user intent associated with system communications and identifying appropriate systems for a response is a primary concern in providing a user with a positive experience [Mishra: ¶5].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEE SOO KIM whose telephone number is (571)270-3229. The examiner can normally be reached M-F 9AM-5PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nicholas Taylor can be reached on (571) 272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HEE SOO KIM/Primary Examiner, Art Unit 2443