Prosecution Insights
Last updated: April 17, 2026
Application No. 18/369,088

SYSTEM AND METHOD FOR ADAPTIVELY TRAVERSING CONVERSATION STATES USING CONVERSATIONAL AI TO EXTRACT CONTEXTUAL INFORMATION

Non-Final OA §103
Filed
Sep 15, 2023
Examiner
LEE, JANGWOEN
Art Unit
2656
Tech Center
2600 — Communications
Assignee
Babblebots Inc.
OA Round
1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (36 granted / 44 resolved), above average (+19.8% vs TC avg)
Interview Lift: strong, +24.2% across resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 23 applications currently pending
Career History: 67 total applications across all art units
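As a sanity check, the headline figures above can be reproduced from the underlying counts (a quick sketch; the variable names are mine):

```python
# Reproducing the headline examiner stats from the counts shown above.
granted, resolved = 36, 44
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")        # 81.8%, displayed as 82%

# The "+19.8% vs TC avg" delta implies a Tech Center average of roughly:
implied_tc_avg = allow_rate - 0.198
print(f"implied TC average: {implied_tc_avg:.1%}")   # ~62.0%
```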

Statute-Specific Performance

§101: 26.5% (-13.5% vs TC avg)
§103: 54.6% (+14.6% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 4.1% (-35.9% vs TC avg)
Deltas are relative to the Tech Center average estimate; based on career data from 44 resolved cases.
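One detail worth noting: all four statute deltas back out to the same baseline, which suggests the comparison is against a single Tech Center average line near 40%. A quick check (dictionary names are mine):

```python
# Back out the implied Tech Center average for each statute:
# examiner rate minus the displayed "vs TC avg" delta.
examiner = {"101": 26.5, "103": 54.6, "102": 11.0, "112": 4.1}
delta    = {"101": -13.5, "103": +14.6, "102": -29.0, "112": -35.9}
tc_avg = {k: round(examiner[k] - delta[k], 1) for k in examiner}
print(tc_avg)   # every statute implies the same ~40.0% baseline
```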

Office Action

§103
DETAILED ACTION

This communication is in response to the Application filed on 09/15/2023. Claims 1-20 are pending and have been examined. Claims 1, 8, and 15 are independent. This Application was published as US Pub No. 20240096312.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on application IN202241052867 filed in the Indian Patent Office (IPO) on 09/15/2022 and receipt of a certified copy thereof.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Asokan et al. (US Pub No. 2021/0312399, hereinafter Asokan) in view of Larionov et al. ("Tartan: A retrieval-based socialbot powered by a dynamic finite-state machine architecture."
arXiv preprint arXiv:1812.01260 (2018), hereinafter Larionov).

Regarding Claim 1, Asokan discloses a processor-implemented method for adaptively traversing conversation states between a user and an artificially intelligent bot to extract contextual information (Asokan, Fig. 1, par [015], "…a system for conducting an automated interview session between a candidate and an automated chat resource may include at least one processor..."; Fig.2, par [047], "…The question selector 204 may also be configured to analyze a candidate's response to a question/scenario presented to the candidate during an automated interview session to generate a question-answer relevance score..."; par [049], "…the chat bot 208 may leverage one or more machine learning and/or artificial intelligence techniques to determine whether to present one or more questions to a candidate..."): loading, at a conversation server, a plurality of conversation states from a custom database based on a request received from a user device for an automated conversation with the artificially intelligent bot (Asokan, Figs. 1 and 2, par [039], "…The user device 102 and/or the candidate device 108 may allow a user and/or a candidate to interact with the cloud-based system 106 over the network 104…."; par [041], "…an input user may interact with the user device 102 via a graphical user interface (GUI) of the application 110 in order to manage a recruitment process..."; par [042], "…a candidate may interact with the candidate device 108 via a GUI of the application 112 in order to participate in a recruitment process..."; Fig.2, cloud-based system (i.e., a conversation server), par [047], "…The question selector 204 may be configured to retrieve a personality trait from a candidate profile...
use the personality trait as a reference to retrieve a question/scenario (i.e., conversation states) from the question generator 206 and present the question/scenario to a candidate during an automated interview session…."; par [052], see also candidate database 210, resume data 212, and/or transcript database 214 (i.e., custom databases)), wherein the plurality of conversation states define a logical flow of the automated conversation and comprise a first conversation state and N subsequent conversation states (where N is a positive integer greater than 1) (Fig.2, par [047], "…This analysis may allow the question selector 204 dynamically decide the flow of questions similarly to manual interviews...", "…the question selector 204 may be trained regarding how to dynamically switch between questions/scenarios...") wherein each of the plurality of conversation states comprises a content boundary that demarcates a scope of acceptable questions and responses in the conversation (Fig.7, par [086], "…In block 716...The question selector 204 may analyze the candidate's response to the question/scenario presented to the candidate to generate a Q-A relevance score to determine whether to ask another question or modify the previous question presented to the candidate. 
The Q-A relevance score may be indicative of a question-answer relevance..."; it is construed that the questions/scenario has a content boundary to generate Q-A relevance score); dynamically generating, at the conversation server, a first question associated with the first conversation state by obtaining a prompt for a large language model (LLM), wherein the prompt is obtained by analyzing at least one of (i) a resume of the user (Asokan, Fig.4: blocks 402 and 404, par [067], "…the system 100 (e.g., via the cloud-based system 106 or, more specifically, the question generator 206) may analyze a candidate resume and/or a communication transcript to identify one or more personality traits, generate one or more questions based on the identified one or more personality traits, and label the generated questions..."), or (ii) a job description associated with the automated conversation with at least domain-specific ML model associated with the job- description; monitoring in real-time, at the conversation server using at least one custom ML model with the content boundary, a first response provided by the user to the first question asked by the artificially intelligent bot at the user device, to determine whether the first response is inside or outside of the content boundary associated with the first conversation state (Asokan, Figs 4 and 8, par [089], "…In block 806, a user may receive one or more responses from the candidate based on the one or more questions/scenarios (i.e., conversation states) presented to the candidate by the user..."; Fig.8, Block 812: Select Transcript, Analyze Candidate Responses, and Generate Trait Scores and Q-A Relevance Scores (i.e., related to content boundary) ); generating in real-time, at the conversation server using the at least one custom ML model (Asokan, Fig.1, par [038], "…The system 100 also may analyze candidate personalities and conduct automated conversational interview sessions using machine learning models...to retrain 
machine learning models (e.g., AI models). Such a feedback loop ( e.g., data pipelining) may refine the system 100 to learn from real-time data and conduct automated interview sessions with a "human touch" using less effort..."; Fig.2, par [050], "…different chat bots 208 may be created to have different profiles. The profile of a particular chat bot may be used to select a chat bot with expertise regarding a particular job position (i.e., custom ML model)...") and the LLM, a first follow-up question by performing one of: (a) determining a missing content in the first response using the at least one domain- specific ML model associated with the job-description, if the first response is outside the content boundary of the first conversation state, to redirect the user to the first conversation state with the first follow-up question, or (b) analyzing at least one of (i) the resume of the user (Asokan, Fig.4: blocks 402 and 404, par [067], "…the system 100 (e.g., via the cloud-based system 106 or, more specifically, the question generator 206) may analyze a candidate resume…"), (ii) the job description with the at least domain-specific ML model associated with the job-description, if the first response is inside the content boundary of the first conversation state, to direct the user to a first subsequent state with the first follow-up question (Fig.7, par [087], "…In block 722, the question selector 204 may determine whether the Q-A relevance score is sufficient...If the question selector 204 determines that the Q-A relevance score is sufficient...the method 700 may advance to block 724 If the question selector 204 determines that the candidate profile does contain one or more additional personality traits, the question selector 204 may retrieve one or more personality traits contained by the candidate profile, and the method 700 may return to block 704..."); monitoring in real-time, at the conversation server using the at least one custom ML model and the LLM with 
the content boundary, a second response provided by the user to the first follow- up question asked by the artificially intelligent bot at the user device, to (a) determine whether the second response is inside or outside of the content boundary associated with the first subsequent state (Asokan, par [087], "…the process of presenting questions to the candidate, receiving responses from the candidate, generating trait scores, and generating Q-A relevance scores may repeat one or more times. This process may repeat until a scenario-based question retrieve from the question generator 206 for a particular personality trait is complete...") and (b) extract a skill level of the user (Asokan, Fig.8, par [091], "…In block 818, the user may determine whether the trait scores and/or Q-A relevance scores are accurate..."; par [018], "…the first personality trait comprises at least one of an attrition rate, a job history, an education history, a job skill, a hobby, or a certification (Emphasis added)..."; Fig.5, par [071], "…In block 504, the personality analyzer 202 may analyze one or more candidate resumes to identify one or more personality traits that the candidate may possess..."); generating in real-time, at the conversation server using the custom ML model and the LLM, a second follow-up question by analyzing at least one of (i) the resume of the user, (ii) the job description (Asokan, Fig.2, par [046], "…the personality analyzer 202 may be configured to analyze one or more candidate resumes to identify one or more personality traits and generate one or more trait scores...rank one or more candidate resume based on the one or more personality traits identified from the candidate resume and for the job position the candidate applied for..."; par [048], "…The question generator 206 may be configured to generate one or more questions based on the identified one or more personality traits..."), and (iii) the updated N subsequent conversation states with the at least one 
custom ML model and the LLM, to direct the user to the updated N subsequent conversation states of the conversation (par [047], "…the question selector 204 may be trained regarding how to dynamically switch between questions/scenarios..."); and repeating generating follow-up questions for the artificially intelligent bot in real-time at the conversation server using the custom ML model and the LLM for adaptively traversing the N updated conversation states between the user and the artificially intelligent bot to extract contextual information (Asokan, par [087], "…the process of presenting questions to the candidate, receiving responses from the candidate, generating trait scores, and generating Q-A relevance scores may repeat one or more times. This process may repeat until a scenario-based question retrieve from the question generator 206 for a particular personality trait is complete..."). However, Asokan does not explicitly disclose the use of the large language model at the conversation server and the limitation of "automatically computing possible paths of the conversation using the at least one custom ML model and the LLM with the second response to obtain an updated N subsequent conversation states of the conversation, wherein the updated N subsequent conversation states optimize contextual information retrieval from the user in the automated conversation based on the skill level of the user". 
But, Larionov, in the analogous field of endeavor, discloses automatically computing possible paths of the conversation using the at least one custom ML model and the LLM (Larionov, Fig2: System Architecture Diagram of the Tartan socialbot, 3.1 Overview, "…It makes use of the infrastructure provided by the Amazon Alexa Prize team (COBOT)...The bot is intended to be used via an Alexa device, such as an Echo or a Dot (i.e., the use of the large language model)") with the second response to obtain an updated N subsequent conversation states of the conversation, wherein the updated N subsequent conversation states optimize contextual information retrieval from the user in the automated conversation based on the skill level of the user (Larionov, 3.3 Dialog Manager, "…Tartan’s unique design required a robust way to choose between response generators and FSMs on the fly and to be able to switch from one to the other while remembering the states of all currently active FSMs in order to potentially return to them in the future..."; 3.4 Finite State Machines (FSMs), "…FSMs allowed Tartan to more easily maintain context throughout a conversational arc, and facilitated a more structured conversation that Tartan could logically analyze and continue...", "…the bot can completely control the conversation by asking the interlocutor a series of directed questions, while at the other end of the spectrum the bot can instead allow the interlocutor to dictate the conversational arc..."). 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the system/method for conducting an automated interview session using a chatbot and machine learning models of Asokan with the socialbot based on Natural Language Understanding and Processing of the Amazon Alexa platform and Finite State Machines (FSMs) of Larionov, with a reasonable expectation of success, to provide users with an engaging and fluent conversation by blending flexible finite-state models with data-based generative and retrieval models (Larionov, Abstract).

Regarding Claim 2, Asokan in view of Larionov discloses the processor-implemented method of claim 1, wherein the processor is configured to re-train the at least one custom ML model by: Asokan further discloses (i) tagging content data associated with the responses (Asokan, Fig.8, par [091], "…A user may interact with the user device 102 via a GUI of the application 110 in order to label one or more responses from the candidate with a particular personality trait…"); and (ii) improving a classification threshold by identifying a pattern in the content of the user using unsupervised learning to re-train the at least one custom ML models (par [091], "…The system 100 may train one or more machine learning models of the system 100 using the user labeled one or more candidate responses and/or the user modified/corrected trait scores and/or Q-A relevance scores...").

Regarding Claim 3, Asokan in view of Larionov discloses the processor-implemented method of claim 1.
Asokan further discloses comprising evaluating the user on a plurality of parameters associated with the skills of the user by extracting at least one of a contextual feature or a vocal feature from responses provided by the user, wherein the plurality of parameters comprise at least one of a response duration parameter, a sentiment parameter, a personality parameter, a meaningfulness parameter, a grammar parameter, a filler word usage parameter, or a monosyllabic answer parameter (Asokan, Fig.6, par [076], "…In block 606, the question generator 206 may select a communication transcript from the transcript database 214 and may analyze the communication transcript to identify one or more personality traits that the candidate may possess...", "…a personality trait that may be identified on a communication transcript is at least one of a language fluency, a positivity, an empathy, an attentivity (i.e., attentiveness), an emotional stability, and a patience...").

Regarding Claim 4, Asokan in view of Larionov discloses the processor-implemented method of claim 1. Asokan further discloses comprising enabling the user to practice the automated conversation, wherein the processor generates new follow-up questions based on the first response provided by the user for the first question (Asokan, par [047], "…question selector 204 may also be configured to analyze a candidate's response to a question/scenario presented to the candidate during an automated interview session to generate a question-answer relevance score to determine whether to ask another question/scenario or modify the previous question/scenario presented to the candidate…."; par [087], "…the process of presenting questions to the candidate, receiving responses from the candidate, generating trait scores, and generating Q-A relevance scores may repeat one or more times.
This process may repeat until a scenario-based question retrieve from the question generator 206 for a particular personality trait is complete...").

Regarding Claim 5, Asokan in view of Larionov discloses the processor-implemented method of claim 1. Asokan further discloses comprising simulating the automated conversation based on (a) a selected job description that is selected by the user from a list of job descriptions and (b) a resume provided by the user (Asokan, paras [041, 042, 047], "…an input user may interact with the user device 102 to create job positions, obtain candidate resumes for open job positions", "…The GUI may allow the candidate to upload a resume...", "…The question selector 204 may be configured to retrieve a personality trait from a candidate profile, which may be maintained by the personality analyzer 202, and use the personality trait as a reference to retrieve a question/scenario from the question generator 206 and present the question/scenario to a candidate during an automated interview session...").

Regarding Claim 6, Asokan in view of Larionov discloses the processor-implemented method of claim 1. Asokan further discloses comprising monitoring a response to a theoretical question by providing content of a standard answer to at least one custom ML model (Asokan, par [084], "…the personality analyzer 202 may leverage a supervised machine learning model that may be a trained combination of a labeled data set and an unlabeled data set...", "…The labeled data set may be available from open source data sets (e.g., Kaggle and other websites) and/or other relevant data sets...").

Regarding Claim 7, Asokan in view of Larionov discloses the processor-implemented method of claim 1.
Asokan further discloses comprising monitoring a response of a work experience related question by providing (a) a project detail from the resume (Asokan, Fig.5, par [071], "…a personality trait that may be identified on a candidate resume is at least one of an attrition rate, a job history, an education history, a job skill, a hobby, and a certification..."; Fig.2, par [048], "…if a candidate enumerated many job experiences on a candidate resume, one or more questions may be generated by the question generator 206 to obtain an explanation from the candidate for the multiple job experiences..."), and Larionov further discloses (b) a template of questions associated with a role associated with the automated interview (Larionov, 1. Introduction, "…finite-state machines (FSMs) would lead the human participant through a more scripted interaction...", "…FSMs to provide for locally cohesive structure (such as an introduction and topic-specific episodes)..."; 3.4.3 Interruption FSMs, Templates "…We implemented a generalized model utilizing utterance-level embeddings and templated responses. We store pairs of common questions and ideal answers and, given a query, we calculate a similarity metric across all the stored templates...").

Claim 8 is a system claim with limitations similar to the limitations of Claim 1 and is rejected under similar rationale. Additionally, Asokan discloses a system for adaptively generating voice-based follow-up questions based on a state of a conversation in an automated interview using custom machine learning (ML) models with large language models, comprising: a memory that stores a set of instructions; and a processor that is configured to execute the set of instructions for (Asokan, Fig.
1, par [015], "…a system for conducting an automated interview session between a candidate and an automated chat resource may include at least one processor..."; par [015], "…at least one memory comprising a plurality of instructions stored thereon that, in response to execution by the at least one processor...") ... Rationale for combination is similar to that provided for Claim 1.

Claim 9 is a system claim with limitations similar to the limitations of Claim 2 and is rejected under similar rationale.
Claim 10 is a system claim with limitations similar to the limitations of Claim 3 and is rejected under similar rationale.
Claim 11 is a system claim with limitations similar to the limitations of Claim 4 and is rejected under similar rationale.
Claim 12 is a system claim with limitations similar to the limitations of Claim 5 and is rejected under similar rationale.
Claim 13 is a system claim with limitations similar to the limitations of Claim 6 and is rejected under similar rationale.
Claim 14 is a system claim with limitations similar to the limitations of Claim 7 and is rejected under similar rationale.

Claim 15 is a non-transitory computer-readable storage medium claim with limitations similar to the limitations of Claim 1 and is rejected under similar rationale. Additionally, Asokan discloses a non-transitory computer-readable storage medium storing a sequence of instructions, which when executed by one or more processors (Asokan, par [018], "…one or more non-transitory machine-readable storage media comprising a plurality of instructions stored thereon that, in response to execution by at least one processor, causes the at least one processor to..."), causes deriving a subset from a dataset based on proclivity of entity devices towards a category, comprising ... Rationale for combination is similar to that provided for Claim 1.
Claim 16 is a non-transitory computer-readable storage medium claim with limitations similar to the limitations of Claim 2 and is rejected under similar rationale.
Claim 17 is a non-transitory computer-readable storage medium claim with limitations similar to the limitations of Claim 3 and is rejected under similar rationale.
Claim 18 is a non-transitory computer-readable storage medium claim with limitations similar to the limitations of Claim 4 and is rejected under similar rationale.
Claim 19 is a non-transitory computer-readable storage medium claim with limitations similar to the limitations of Claim 5 and is rejected under similar rationale.
Claim 20 is a non-transitory computer-readable storage medium claim with limitations similar to the limitations of Claims 6 and 7 and is rejected under similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Sejpal et al. (US Pub No. 2022/0270594, hereinafter Sejpal) discloses systems directed to automatic speech recognition and, more particularly, to modifying an output of an automated speech recognition system using Reinforcement Learning (RL) (or a similar type of machine learning) to enable the AI engine to adapt to individual customers while the AI engine is engaged in a conversation with each customer.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JANGWOEN LEE whose telephone number is (703)756-5597. The examiner can normally be reached Monday-Friday 8:00 am - 5:00 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, BHAVESH MEHTA, can be reached at (571)272-7453.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JANGWOEN LEE/
Examiner, Art Unit 2656

/BHAVESH M MEHTA/
Supervisory Patent Examiner, Art Unit 2656
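For readers less familiar with the claim language, the core loop the examiner maps above (content-boundary monitoring that either redirects the user back to the current state or advances to the next one) can be sketched as a toy state machine. The keyword-set check is only a stand-in for the claimed custom ML classifier, and every name and data value here is illustrative:

```python
from dataclasses import dataclass

@dataclass
class ConversationState:
    name: str
    question: str
    boundary: set  # keywords approximating the claimed "content boundary"

def in_boundary(state: ConversationState, response: str) -> bool:
    # Stand-in for the claimed real-time ML classification of a response.
    return bool(state.boundary & set(response.lower().split()))

def traverse(states, responses):
    """For each response: redirect (follow-up in the same state) or advance."""
    path, i = [], 0
    for resp in responses:
        state = states[i]
        if in_boundary(state, resp):
            path.append((state.name, "advance"))
            i = min(i + 1, len(states) - 1)
        else:
            path.append((state.name, "redirect"))
    return path

states = [
    ConversationState("experience", "Tell me about your last role.",
                      {"engineer", "team", "project"}),
    ConversationState("skills", "Which frameworks have you used?",
                      {"python", "react", "sql"}),
]
print(traverse(states, ["I like pizza",
                        "I led a project team",
                        "Mostly Python and SQL"]))
# [('experience', 'redirect'), ('experience', 'advance'), ('skills', 'advance')]
```

An off-topic answer keeps the conversation in the same state (the claimed "redirect with a follow-up question"), while an in-boundary answer advances the flow, which is also the FSM behavior the examiner reads onto Larionov.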

Prosecution Timeline

Sep 15, 2023
Application Filed
Sep 05, 2025
Non-Final Rejection — §103
Apr 02, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597432
HUM NOISE DETECTION AND REMOVAL FOR SPEECH AND MUSIC RECORDINGS
2y 5m to grant · Granted Apr 07, 2026
Patent 12586571
EFFICIENT SPEECH TO SPIKES CONVERSION PIPELINE FOR A SPIKING NEURAL NETWORK
2y 5m to grant · Granted Mar 24, 2026
Patent 12573381
SPEECH RECOGNITION METHOD AND APPARATUS, STORAGE MEDIUM, AND ELECTRONIC DEVICE
2y 5m to grant · Granted Mar 10, 2026
Patent 12567430
METHOD AND DEVICE FOR IMPROVING DIALOGUE INTELLIGIBILITY DURING PLAYBACK OF AUDIO DATA
2y 5m to grant · Granted Mar 03, 2026
Patent 12566930
CONDITIONING OF PRODUCTIVITY APPLICATION FILE CONTENT FOR INGESTION BY AN ARTIFICIAL INTELLIGENCE MODEL
2y 5m to grant · Granted Mar 03, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 99% (+24.2%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 44 resolved cases by this examiner. Grant probability derived from career allow rate.
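The with-interview figure appears to be the base grant probability plus the interview lift, capped below 100%; the 99% cap is my assumption, since 82% + 24.2% would otherwise exceed 100%:

```python
# Hypothetical reconstruction of the "With Interview" projection.
base = 0.82            # grant probability (career allow rate)
lift = 0.242           # interview lift
with_interview = min(base + lift, 0.99)   # assumed 99% display cap
print(f"{with_interview:.0%}")            # 99%
```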
