Prosecution Insights
Last updated: April 19, 2026
Application No. 18/762,969

METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR DETERMINING SERVICE MODE

Non-Final OA: §101, §103
Filed
Jul 03, 2024
Examiner
AZIZ, SHEZA ABDUL
Art Unit
2657
Tech Center
2600 — Communications
Assignee
DELL PRODUCTS, L.P.
OA Round
1 (Non-Final)
Grant Probability
Favorable
OA Rounds
1-2
To Grant
2y 9m

Examiner Intelligence

Career Allow Rate
0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift
+0.0% (minimal lift among resolved cases with interview)
Avg Prosecution
2y 9m (typical timeline)
Total Applications
6 (6 currently pending, across all art units)

Statute-Specific Performance

§101: 20.0% (-20.0% vs TC avg)
§103: 65.0% (+25.0% vs TC avg)
§102: 5.0% (-35.0% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)
Deltas shown against the Tech Center average estimate • Based on career data from 0 resolved cases

Office Action

§101 §103
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a mental process without significantly more.

Claims 1, 10, and 19 recite a method, a system, and a non-transitory computer-readable medium for determining a service mode, comprising: generating an intent parameter by identifying a user intent in a query content input by a user [a person can identify an intent]; generating an emotion parameter by analyzing a sentiment inclination in the query content [a person can gauge an emotion]; generating a confidence parameter by analyzing a similarity between the query content and training data for training an adaptive strategy model [a person can estimate a confidence parameter]; and determining a service mode for replying to the query content based on the intent parameter, the emotion parameter, and the confidence parameter [a person can decide which mode to use to reply based on intent, emotion, and confidence]. As described above, these limitations can be carried out as a series of mental steps. This judicial exception is not integrated into a practical application because the only additional elements recited are an adaptive strategy model, a memory, a processor, and a computer program executing on a user device, and these additional elements are nothing more than instructions to apply the mental process using a general-purpose software model and general-purpose hardware.
These claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as described above, the only additional elements recited are an adaptive strategy model, a memory, a processor, and a computer program executing on a user device, and these additional elements are nothing more than instructions to apply the mental process using general-purpose software and hardware.

Claims 2, 11, and 20 recite a method, a system, and a non-transitory computer-readable medium, further comprising: in response to receiving the query content input by the user, generating a reply content based on the query content by a predefined strategy; based on the reply content, determining whether the predefined strategy; in response to the predefined strategy. These additional limitations do not prevent the process from being carried out as a mental process. Claims 2, 11, and 20 describe an adaptive strategy model and a predefined strategy model. These models are described at a high level such that they are general-purpose software being used as a tool to implement the mental process. Thus, they do not describe a practical application or significantly more than the mental process.

Claims 3 and 12 recite a method and a system wherein generating the intent parameter comprises: generating an evaluated intent value based on the user intent and a preset intent set [a person can generate an intent and compare it to a preset intent set]; and generating the intent parameter based on the evaluated intent value and a preset value [a person can generate an intent based on the evaluation and a preset value]. As described above, these limitations can be carried out as a series of mental steps.
This judicial exception is not integrated into a practical application because the only additional elements recited are an electronic system executing on a user device, and these additional elements are nothing more than instructions to apply the mental process using general-purpose hardware. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as described above, the only additional elements recited are an electronic system executing on a user device, and these additional elements are nothing more than instructions to apply the mental process using general-purpose hardware.

Claims 4 and 13 recite a method and a system, wherein generating the emotion parameter comprises: determining an average emotional value based on scores, emoticons, and keywords in the query content [a person can calculate an average emotional value based on these factors]; and generating the emotion parameter based on the average emotional value and a preset value [a person can generate an emotion parameter by calculating the average and using a preset value]. As described above, these limitations can be carried out as a series of mental steps. This judicial exception is not integrated into a practical application because the only additional elements recited are an electronic system executing on a user device, and these additional elements are nothing more than instructions to apply the mental process using general-purpose hardware. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as described above, the only additional elements recited are an electronic system executing on a user device, and these additional elements are nothing more than instructions to apply the mental process using general-purpose hardware.
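The emotion-parameter limitation of claims 4 and 13 (an average emotional value from scores, emoticons, and keywords, compared against a preset value) can be sketched as follows. This is a minimal illustration only: the lexicons, the tokenization, and the 0.0 preset threshold are assumptions for the sketch, not taken from the application or the cited references.

```python
# Illustrative sketch of the claims 4/13 limitation: derive an average
# emotional value from scores, emoticons, and keywords in the query
# content, then compare it to a preset value. The lexicons and the
# default threshold below are assumptions, not claim language.

EMOTICON_SCORES = {":)": 1.0, ":D": 1.0, ":/": -0.5, ":(": -1.0}
KEYWORD_SCORES = {"great": 1.0, "thanks": 0.5, "broken": -1.0, "angry": -1.0}

def average_emotional_value(query: str, explicit_scores=()) -> float:
    """Average all emotional signals found in the query content."""
    signals = list(explicit_scores)  # e.g. star ratings supplied with the query
    for token in query.split():
        if token in EMOTICON_SCORES:
            signals.append(EMOTICON_SCORES[token])
        elif token.lower().strip(".,!?") in KEYWORD_SCORES:
            signals.append(KEYWORD_SCORES[token.lower().strip(".,!?")])
    return sum(signals) / len(signals) if signals else 0.0

def emotion_parameter(query: str, preset_value: float = 0.0) -> str:
    """Map the average emotional value onto a parameter via a preset value."""
    return "negative" if average_emotional_value(query) < preset_value else "non-negative"
```

As the rejection notes, each step here is the kind of lexicon lookup and averaging a person could perform mentally; the code only makes the sequence of steps concrete.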
Claims 5 and 14 recite a method and a system, wherein determining a service mode for replying to the query content comprises: determining a decision parameter based on the intent parameter, the emotion parameter, and the confidence parameter [a person can determine a decision based on these factors]; and determining whether the service mode is a model service mode using an adaptive strategy model or a direct service mode based on the decision parameter and a preset value [a person can determine whether a certain mode should be used or a human agent engaged based on the values]. As described above, these limitations can be carried out as a series of mental steps. Claims 5 and 14 do not describe any additional elements. Thus, they do not describe a practical application or significantly more than the mental process.

Claims 6 and 15 recite a method and a system, wherein determining the decision parameter comprises: determining evaluation factors corresponding to the intent parameter, the emotion parameter, and the confidence parameter, the evaluation factors being one of a level or a weight [a person can determine evaluation factors corresponding to the various parameters and apply a weight to each factor as needed]; and determining the decision parameter based on the evaluation factors corresponding to the intent parameter, the emotion parameter, and the confidence parameter [a person can determine the decision based on these factors]. As described above, these limitations can be carried out as a series of mental steps. Claims 6 and 15 do not describe any additional elements. Thus, they do not describe a practical application or significantly more than the mental process.
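The claims 5/6 logic (evaluation factors combined into a decision parameter, then a mode chosen against a preset value) can be sketched as below. The specific weights and the 0.5 preset are assumptions for illustration; the claims recite only that the factors are "one of a level or a weight."

```python
# Sketch of the claims 5/6 limitations: assign each parameter an
# evaluation factor (here, a weight), combine them into a decision
# parameter, and select the service mode by comparing that parameter
# with a preset value. Weights and threshold are illustrative only.

WEIGHTS = {"intent": 0.4, "emotion": 0.2, "confidence": 0.4}  # evaluation factors

def decision_parameter(intent: float, emotion: float, confidence: float) -> float:
    """Weighted combination of the three parameters (each assumed in [0, 1])."""
    return (WEIGHTS["intent"] * intent
            + WEIGHTS["emotion"] * emotion
            + WEIGHTS["confidence"] * confidence)

def service_mode(intent: float, emotion: float, confidence: float,
                 preset_value: float = 0.5) -> str:
    """Model service mode when the decision parameter clears the preset
    value; otherwise a direct service mode (e.g. hand-off to a human)."""
    return "model" if decision_parameter(intent, emotion, confidence) >= preset_value else "direct"
```

The weighted-average form mirrors the ranking value Renard's paragraphs 0075 and 0087 describe, which is part of why the rejection treats the combination as a predictable optimization.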
Claims 7 and 16 recite a method and a system, wherein determining the decision parameter comprises: determining a service mode corresponding to the intent parameter, the emotion parameter, and the confidence parameter [a person can determine a service mode based on these factors]; determining proximity between the service mode and a preset mode [a person can determine the proximity between the service mode and a preset mode]; and determining the decision parameter based on the proximity [a person can determine the decision parameter using the proximity]. As described above, these limitations can be carried out as a series of mental steps. Claims 7 and 16 do not describe any additional elements. Thus, they do not describe a practical application or significantly more than the mental process.

Claims 8 and 17 recite a method and a system, further comprising: determining whether the query content comprises a preset content [a person can determine whether the input content includes a preset content]; and determining that the service mode is a direct service mode in response to the query content comprising the preset content [a person can determine that the service mode is a direct mode when the query comprises the preset content]. As described above, these limitations can be carried out as a series of mental steps. Claims 8 and 17 do not describe any additional elements. Thus, they do not describe a practical application or significantly more than the mental process.
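The two checks above can be made concrete in a short sketch: a proximity measure between a candidate mode and a preset mode (claims 7/16), and a preset-content test that forces the direct service mode (claims 8/17). The phrase list and the vector encoding of modes are illustrative assumptions; the claims do not specify either.

```python
# Sketch of the claims 7/8 checks. PRESET_CONTENT and the vector
# encoding of service modes are assumptions for illustration only.

PRESET_CONTENT = ("speak to a human", "agent please", "representative")

def forces_direct_mode(query: str) -> bool:
    """Claims 8/17: direct service mode when the query contains preset content."""
    lowered = query.lower()
    return any(phrase in lowered for phrase in PRESET_CONTENT)

def proximity(mode_vector, preset_vector) -> float:
    """Claims 7/16: proximity between a service mode and a preset mode,
    with each mode encoded as a feature vector; Euclidean distance here."""
    return sum((a - b) ** 2 for a, b in zip(mode_vector, preset_vector)) ** 0.5
```

Both functions are simple comparisons a person could carry out, which is the point the rejection makes about these dependent claims.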
Claims 9 and 18 recite a method and a system of claim 1, further comprising: determining whether the adaptive strategy model is in a preset scenario [a person can determine whether the adaptive strategy is in a preset scenario]; and determining that the service mode is the direct service mode in response to the adaptive strategy model being in the preset scenario [a person can determine whether the service mode is a direct service mode in response to the adaptive strategy being in the preset scenario]. As described above, these limitations can be carried out as a series of mental steps. Claims 9 and 18 do not describe any additional elements. Thus, they do not describe a practical application or significantly more than the mental process.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5-10, and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Renard (US Patent Application Publication No. 2019/0215249) in view of Jones (US Patent Application Publication No. 2022/0028378 A1).

Regarding claim 1, Renard teaches a method for determining a service mode, comprising: generating an intent parameter by identifying the user intent - [0046 "In some embodiments, the AI engine 322 uses automatic speech recognition and natural language processing to determine a customer's intent (i.e.
question) and uses an algorithm derived using machine learning to determine an appropriate response"]; [0053, 0054 “In one embodiment, the conversation ranking engine 324 ranks a session based on one or more of the quality of the AI engine’s 322 answers and the sentiment of the session. In one embodiment, the “quality” of the conversation may be determined by the conversation ranking engine 324 using one or more of the following, which may be referred to herein as “qualitative criteria”: The confidence of the AI engine 322 in the user’s (e.g. customer’s) intent. For example, when the AI engine 322 is not confident in the accuracy of what it believes the customer is asking, the conversation ranking engine 324 may affect the ranking in favor of intervention by a user (e.g. a customer service agent). This may be based on an average over the course of the session (e.g. a whole-session average confidence) or a portion of the session (e.g. there has been a series of low-confidence intents, which satisfies a threshold number of low-confidence intents, and argues in favor of human agent intervention, so the conversation ranking engine 324 adjusts the ranking accordingly). Based on different types of algorithm, as mentioned above to compute the confidence, the system 100 may provide the confidence as a percentage or other numerical value. Based on the final confidence score, it may decide to activate the intent expected (e.g. when the confidence is over 80%) or to create a dialog between the user and the machine to ask more precisions (i.e. follow-up questions to better discern the expected intent). When the confidence satisfies a first threshold (e.g. when between 60 and 79.9% accuracy for confidence), the system 100 may automatically activate the session handling by a human (e.g. an agent), or, when another threshold is satisfied (e.g. when under 60% of accuracy for the confidence), the system 100 enters the session in a waiting list to be taken over by a human (e.g.
an agent). It should be noted that the thresholds provided are merely examples and others may be used without departing from the disclosure herein. In one embodiment, the thresholds may be parameters in the system 100 that may be defined”]. Generating an emotion parameter by analyzing a sentiment inclination in the query content - [0060 "In one embodiment, the “sentiment” of the conversation may be evaluated by the conversation ranking engine 324 using machine-based sentiment analysis of each interaction. In one embodiment, sentiment analysis refers to the use of natural language processing, text analysis, computational linguistics and biometrics (e.g. voice) to systematically identify, extract and study affective states and subjective information, e.g., to determine the attitude of a speaker, writer or other subject. Depending on the embodiment, the attitude may be one or more of a judgment or evaluation (as in appraisal theory), affective state (i.e., the emotional state of the author or speaker), or the intended emotional communication (i.e., the emotional effect intended by the author or interlocutor). Depending on the embodiment, the conversation ranking engine 324 uses one or more of the following, which may be referred to as “sentiment criteria,” to evaluate the “sentiment” of the session and rank the session"]; [0074 "In some embodiments, one or more of the foregoing sentiment criteria are used to generate a sentiment metric, which is used by the conversation ranking engine 324 alone, or in combination with a sentiment metric (depending on the embodiment), to rank the conversation"]. This implies generating an emotion parameter based on sentiment analysis. Generating a confidence parameter, and determining a service mode for replying to the query content based on the intent parameter - [0054 "Based on different types of algorithm, as mentioned above to compute the confidence, the system 100 may provide the confidence as a percentage or other numerical value.
Based on the final confidence score, it may decide to activate the intent expected (e.g. when the confidence is over 80%) or to create a dialog between the user and the machine to ask more precisions (i.e. follow-up questions to better discern the expected intent). When the confidence satisfies a first threshold (e.g. when between 60 and 79.9% accuracy for confidence), the system 100 may automatically activate the session handling by a human (e.g. an agent), or, when another threshold is satisfied (e.g. when under 60% of accuracy for the confidence), the system 100 enters the session in a waiting list to be taken over by a human (e.g. an agent). It should be noted that the thresholds provided are merely examples and others may be used without departing from the disclosure herein. In one embodiment, the thresholds may be parameters in the system 100 that may be defined”]; [0059 “In some embodiments, one or more of the foregoing qualitative criteria are used to generate a quality metric, which is used by the conversation ranking engine 324 alone, or in combination with a sentiment metric (depending on the embodiment), to rank the conversation”]; [0074 -0075 “In some embodiments, one or more of the foregoing sentiment criteria are used to generate a sentiment metric, which is used by the conversation ranking engine 324 alone, or in combination with a sentiment metric (depending on the embodiment), to rank the conversation. The conversation ranking engine 324 uses one or more of the qualitative criteria and the sentiment criteria (e.g. via sentiment analysis) to determine a rank for a session (e.g. based on the determined metrics). For example, the conversation ranking engine 324 uses the qualitative criteria and the sentiment criteria to determine a value used to determine a session’s rank. The value used to determine the session’s rank may vary depending on the embodiment. 
For example, the value may be a weighted average calculated using the qualitative criteria and sentiment criteria. In some embodiments, the weight each criterion is assigned may be dynamic (e.g. may be set by a team of users, may vary over time and be assigned using machine learning, etc.”] However, Renard does not teach generating a confidence parameter by analyzing a similarity between the query content and training data for training an adaptive strategy. Jones, however, teaches generating a confidence parameter based on percentage match and then the behavior changes (adaptive) depending on the computed value - [0046 "In operation 430, each intent classifier generates a confidence score for the utterance. The confidence score may be based on comparing the query entities with the one or more entities associated with the intent classifier performing the comparison. In some embodiments, the confidence score is generated based on a number of entities, for the intent classifier generating the score, that match the query entities. In some instances, the confidence score is generated based on a percentage match between query entities and specified entities of the intent classifier"]; [0041 "In operation 314, the decision component 150 determines whether a resource or bot exists for the on-topic entity and intent. The resource for the on-topic entity and intent may be a resource or chat bot trained with relevant knowledge on the topic and intent of the query. Where the decision component 150 matches the topic and intent with a resource or bot, the decision component 150 proceeds to operation 316. In operation 316, the decision component 150 connects to the resource or bot and transfers the query or portions thereof to the resource or bot. In some embodiments, in operation 304, where an on-topic intent is identified by the decision component 150 may enact an intent override.
In such embodiments, the decision component 150 overrides an utterance topic with a page-based topic, based on the intent. The decision component 150 then proceeds to operation 314"]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Jones with those of Renard because both references address improving automated interaction systems by evaluating user state and system confidence to determine appropriate handling of user requests, and combining known decision parameters to improve routing accuracy represents a predictable optimization.

Regarding claim 5, Renard teaches the method according to claim 1, wherein determining a service mode for replying to the query content comprises: determining a decision parameter based on the intent parameter, the emotion parameter, and the confidence parameter; [0059 “In some embodiments, one or more of the foregoing qualitative criteria are used to generate a quality metric, which is used by the conversation ranking engine 324 alone, or in combination with a sentiment metric (depending on the embodiment), to rank the conversation”]; [0074-0075 “In some embodiments, one or more of the foregoing sentiment criteria are used to generate a sentiment metric, which is used by the conversation ranking engine 324 alone, or in combination with a sentiment metric (depending on the embodiment), to rank the conversation. The conversation ranking engine 324 uses one or more of the qualitative criteria and the sentiment criteria (e.g. via sentiment analysis) to determine a rank for a session (e.g. based on the determined metrics). For example, the conversation ranking engine 324 uses the qualitative criteria and the sentiment criteria to determine a value used to determine a session’s rank. The value used to determine the session’s rank may vary depending on the embodiment.
For example, the value may be a weighted average calculated using the qualitative criteria and sentiment criteria. In some embodiments, the weight each criterion is assigned may be dynamic (e.g. may be set by a team of users, may vary over time and be assigned using machine learning, etc.”] However, Renard doesn’t teach determining whether the service mode is a model service mode using an adaptive strategy model or a direct service mode based on the decision parameter and a preset value. However, Jones teaches determining whether the behavior changes (adaptive) or a direct mode depending on the computed value - [0041" In operation 314, the decision component 150 determines whether a resource or bot exists for the on-topic entity and intent. The resource for the on-topic entity and intent may be a resource or chat bot trained with relevant knowledge on the topic and intent of the query. Where the decision component 150 matches the topic and intent with a resource or bot, the decision component 150 proceeds to operation 316. In operation 316, the decision component 150 connects to the resource or bot and transfers the query or portions thereof to the resource or bot. In some embodiments, in operation 304, where an on-topic intent is identified by the decision component 150 may enact an intent override. In such embodiments, the decision component 150 overrides an utterance topic with a page-based topic, based on the intent. The decision component 150 then proceeds to operation 314"]; [0042 “In embodiments where the decision component 150 does not identify a resource or bot matching the topic and intent, in operation 314, the decision component 150 proceeds to operation 318. In operation 318, the decision component 150 determines whether a suitable or relevant human agent exists to respond to the query based on the intent and topic. 
The decision component 150 may determine the human agent exists by comparing the entity name and value or intent and topic with information, such as a profile, for the human agent. When the decision component 150 identifies a relevant human agent, the decision component 150 passes the query to the human agent at operation 320. Where the decision component 150 determines no relevant human agent exists, the decision component 150 proceeds to operation 322. In operation 322, the decision component 150 determines whether a topic or intent resource exists. The topic or intent resource may be a network resource, such as a database, a webpage, or other suitable data repository accessible to the decision component 150. The decision component 150 may identify a relevant topic or intent resource by comparing name and value pairs or components of the query with information within or metadata for the resource. Where the decision component 150 identifies a topic or intent resource, the decision component 150 provides the resource as a response to the query in operation 324. Where the decision component 150 identifies no topic or intent resource relevant to the query, the decision component cooperates with one or more other components of the query routing system 102 to generate and present one or more clarification questions to a user within a user interface at operation 310”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Jones with those of Renard because both references address improving automated interaction systems by evaluating user content, emotions, and system confidence to determine appropriate handling of user requests, and combining known decision parameters to improve routing accuracy, whether routing to a direct service mode or using an adaptive strategy, represents a predictable optimization.
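The confidence-threshold hand-off quoted above from Renard (over 80% activates the expected intent; between 60 and 79.9% triggers session handling by a human agent; under 60% queues the session on a waiting list) reduces to a short routing function. Renard itself notes the thresholds are only examples, so the exact cut-offs below are illustrative.

```python
# Sketch of the threshold routing described in Renard's paragraphs
# 0053-0054 as quoted above. The reference states the thresholds are
# examples and may be defined as system parameters.

def route_by_confidence(confidence_pct: float) -> str:
    if confidence_pct > 80.0:
        return "activate-intent"   # confident: answer via the AI engine
    if confidence_pct >= 60.0:
        return "human-agent"       # first threshold: agent takes the session
    return "waiting-list"          # lowest band: queue for human takeover
```

This is the same shape as the claimed "decision parameter vs. preset value" step, which is the mapping the rejection relies on for claim 5.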
Regarding claim 6, Renard in view of Jones teaches the method according to claim 5, wherein determining the decision parameter comprises: determining evaluation factors corresponding to the intent parameter, the emotion parameter, and the confidence parameter, the evaluation factors being one of a level or a weight; and determining the decision parameter based on the evaluation factors corresponding to the intent parameter, the emotion parameter, and the confidence parameter – Renard discloses - [0075 “The conversation ranking engine 324 uses one or more of the qualitative criteria and the sentiment criteria (e.g. via sentiment analysis) to determine a rank for a session (e.g. based on the determined metrics). For example, the conversation ranking engine 324 uses the qualitative criteria and the sentiment criteria to determine a value used to determine a session's rank. The value used to determine the session's rank may vary depending on the embodiment. For example, the value may be a weighted average calculated using the qualitative criteria and sentiment criteria. In some embodiments, the weight each criterion is assigned may be dynamic (e.g. may be set by a team of users, may vary over time and be assigned using machine learning, etc.”]; [0087 “In one embodiment, the rank uses a weighted average. For example, in one embodiment, an accuracy for each parameter, e.g., the sentiment analysis, distribution of words per sentence and/or conversations, number of utterances in the conversation regarding the whole conversation of the same agent, etc. Depending on the embodiment, the computing of accuracy can be through an artificial neural network or simple linear algebra calculation, e.g., vectors distances. In one embodiment, one or more of the parameter is then normalized as a percentage. In one embodiment, a parameter receives a weight to calculate the average accuracy of the conversation.
In some embodiments, the weight can be defined as a parameter in the system 100 or another type of artificial neural network is trained to define, through regression, what the best weight distribution for parameters is based on a small number of weights defined by a team in charge to train the system 100”].

Regarding claim 7, Renard teaches the method according to claim 5, wherein determining the decision parameter comprises: determining a service mode corresponding to the intent parameter, the emotion parameter, and the confidence parameter. Renard teaches handling based on multiple user state parameters, including a sentiment metric representing an emotional state and confidence values associated with the system’s determination of the user’s intent. It also teaches generating a numeric metric derived from such parameters and arranging or prioritizing sessions based on their relative ranking or proximity to decision criteria – [0053 “In one embodiment, the conversation ranking engine 324 ranks a session based on one or more of the quality of the AI engine's 322 answers and the sentiment of the session. In one embodiment, the “quality” of the conversation may be determined by the conversation ranking engine 324 using one or more of the following, which may be referred to herein as “qualitative criteria”]; [0059 “In some embodiments, one or more of the foregoing qualitative criteria are used to generate a quality metric, which is used by the conversation ranking engine 324 alone, or in combination with a sentiment metric (depending on the embodiment), to rank the conversation”] determining proximity between the service mode and a preset mode; and determining the decision parameter based on the proximity – [0091 “FIGS. 8a-o are example user interfaces presented to a human agent according to one embodiment of the system described above in reference to FIGS. 1-3. In FIG. 8A, the “All” tab 802, which indicates that 140 sessions are in progress is selected.
In one embodiment, the grid 804 includes a visual indicator for each of those sessions. A visual indicator for a session may be arranged within the grid relative to other visual indicators associated with other sessions based on one or more criteria. For example, the indicators in the grid may be arranged based on ranking (e.g. so that the conversations that are determined to be going poorly are located in proximity, such as near the top of the UI). In another example, the indicators in the grid may be arranged based on age (e.g. so that the sessions are ordered oldest to newest in the bar 810). In another example, the indicators in the grid may be arranged based on whether an agent has intervened or is intervening (e.g. so that indicators associated with such sessions are located within proximity to one another within bar 810). In another example, the indicators in the grid may be arranged based on channel (e.g. so that sessions associated with SMS are visually grouped, sessions associated with phone calls are visually grouped, and sessions associated with e-mail are visually grouped”].

Regarding claim 8, the method according to claim 1 further comprises determining whether the query content comprises a preset content; and determining that the service mode is a direct service mode in response to the query content comprising the preset content. Renard does not teach this, but Jones teaches comparing content to a preset content and then determining that the service mode is a direct service mode - [0040 “In some embodiments, if the decision component 150 determines the entity is on-topic, in operation 304, the decision component 150 proceeds to operation 312. In operation 312, the decision component 150 determines if the entity matches a topic and intent of a current URL of the browser. The decision component 150 may compare the one or more of the name and value pair to metadata for or keywords associated with the current URL.
Where the decision component 150 determines the topic and intent match between the entity and the current URL, the decision component 150 may proceed to operation 314. In some embodiments, where the decision component 150 determines no topic and intent match occurred between the entity and the current URL, in operation 316, the decision component 150 may redirect the browser to a subsequent URL. The subsequent URL may be a URL which matches one or more of the topic and intent of the query. Once the decision component 150 redirects the browser to the subsequent URL, the decision component 150 may proceed to operation 314”]; [0041 “In operation 314, the decision component 150 determines whether a resource or bot exists for the on-topic entity and intent. The resource for the on-topic entity and intent may be a resource or chat bot trained with relevant knowledge on the topic and intent of the query. Where the decision component 150 matches the topic and intent with a resource or bot, the decision component 150 proceeds to operation 316. In operation 316, the decision component 150 connects to the resource or bot and transfers the query or portions thereof to the resource or bot. In some embodiments, in operation 304, where an on-topic intent is identified by the decision component 150 may enact an intent override. In such embodiments, the decision component 150 overrides an utterance topic with a page-based topic, based on the intent. The decision component 150 then proceeds to operation 314”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Renard to incorporate the routing framework of Jones in order to apply Renard’s computed user state parameters within a known bot-versus-human architecture. Doing so would have predictably enabled automated escalation to either a bot or human, thereby improving response reliability and overall user experience.
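The Jones routing operations quoted above form a fall-through cascade: try a trained bot for the topic and intent (operation 316), then a relevant human agent (operation 320), then a topic or intent resource (operation 324), and finally clarification questions (operation 310). A minimal sketch follows; the lookup tables keyed by (topic, intent) are an assumed encoding, not something Jones specifies.

```python
# Sketch of the Jones decision cascade quoted above. Keying the bot,
# agent, and resource lookups by (topic, intent) pairs is an assumption
# made for illustration.

def route_query(topic: str, intent: str,
                bots: dict, agents: dict, resources: dict) -> str:
    key = (topic, intent)
    if key in bots:
        return f"bot:{bots[key]}"            # operation 316: transfer to bot
    if key in agents:
        return f"agent:{agents[key]}"        # operation 320: pass to human agent
    if key in resources:
        return f"resource:{resources[key]}"  # operation 324: provide resource
    return "clarify"                         # operation 310: ask follow-up questions
```

The first branch plays the role of the claimed model service mode and the second the direct service mode, which is the mapping the rejection draws for claims 8 and 9.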
Regarding claim 9, the method according to claim 1, comprising determining whether the adaptive strategy model is in a preset scenario; and determining that the service mode is a direct service mode in response to the adaptive strategy model being in the preset scenario. Renard does not teach this, but Jones teaches determining that the service mode is a direct service mode in response to an adaptive strategy - [0042 “In embodiments where the decision component 150 does not identify a resource or bot matching the topic and intent, in operation 314, the decision component 150 proceeds to operation 318. In operation 318, the decision component 150 determines whether a suitable or relevant human agent exists to respond to the query based on the intent and topic. The decision component 150 may determine the human agent exists by comparing the entity name and value or intent and topic with information, such as a profile, for the human agent. When the decision component 150 identifies a relevant human agent, the decision component 150 passes the query to the human agent at operation 320. Where the decision component 150 determines no relevant human agent exists, the decision component 150 proceeds to operation 322. In operation 322, the decision component 150 determines whether a topic or intent resource exists. The topic or intent resource may be a network resource, such as a database, a webpage, or other suitable data repository accessible to the decision component 150. The decision component 150 may identify a relevant topic or intent resource by comparing name and value pairs or components of the query with information within or metadata for the resource. Where the decision component 150 identifies a topic or intent resource, the decision component 150 provides the resource as a response to the query in operation 324. 
Where the decision component 150 identifies no topic or intent resource relevant to the query, the decision component cooperates with one or more other components of the query routing system 102 to generate and present one or more clarification questions to a user within a user interface at operation 310”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Renard to incorporate the routing framework of Jones in order to apply Renard’s computed user state parameters within a known bot-versus-human architecture. Doing so would have predictably enabled automated escalation to a human agent when system confidence is not satisfied, thereby improving response reliability and overall user experience. Regarding claim 10, it recites an electronic device comprising at least one processor and a memory coupled to the at least one processor and having instructions stored therein which, when executed, cause the device to perform the method of claim 1. Renard in view of Jones does disclose an electronic device comprising at least one processor and a memory coupled to at least one processor and having instructions stored therein to perform actions – Renard discloses - [0110 “A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories that provide temporary storage of at least some program code in order to reduce the number of times code may be retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers”]. 
As discussed above with respect to claim 1, these steps are rendered obvious in view of Renard in combination with Jones. Regarding claim 14, it recites an electronic device according to claim 10, wherein determining a service mode for replying to the query content performs the method of claim 5. Renard in view of Jones does disclose an electronic device as indicated in claim 10. As discussed above with respect to claim 5, these steps are rendered obvious in view of Renard in combination with Jones. Regarding claim 15, it recites the electronic device according to claim 14, wherein determining the decision parameter consists of the methods outlined in claim 6. Claim 15 is rejected for the same reasons as claim 6. Regarding claim 16, it recites an electronic device according to claim 14, wherein determining the decision parameter performs the method of claim 7. As discussed above with respect to claim 7, these steps are rendered obvious in view of Renard in combination with Jones. Regarding claim 17, it recites an electronic device according to claim 10, wherein the actions perform the method of claim 8. As discussed above with respect to claim 8, these steps are rendered obvious in view of Renard in combination with Jones. Regarding claim 18, it recites an electronic device according to claim 10, wherein the actions perform the method of claim 9. As discussed above with respect to claim 9, these steps are rendered obvious in view of Renard in combination with Jones. Regarding claim 19, it recites a computer program tangibly stored on a non-transitory computer-readable medium and comprising machine-executable instructions which, when executed, perform the method of claim 1. 
Renard in view of Jones does disclose a computer program tangibly stored on a non-transitory computer-readable medium and comprising machine-executable instructions – Renard discloses [0091 “Furthermore, the technology can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any non-transitory storage apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device”]. Claim 19 is rejected for the same reasons as claim 1. Claims [2, 11, 20] are rejected under 35 U.S.C. 103 as being unpatentable over Renard (US Patent No. US-20190215249-A1) in view of Jones (U.S. Patent No. US 20220028378 A1) and in further view of Puttagunta (US Patent No. US 20250321959). Regarding claim 2, Renard in view of Jones teaches, in response to receiving the query content input by the user, generating a reply content based on the query content. However, Renard in view of Jones does not teach determining, based on the reply content, whether the predefined strategy model has provided a complete reply, and, in response to the predefined strategy model having not provided a complete reply, generating a reply content based on the query content by the adaptive strategy model. But Puttagunta teaches generating a reply content based on the query content by a predefined strategy, and further teaches generating a response when the first strategy has not provided a satisfactory response (complete reply) and then using a second strategy by invoking a generative model (adaptive) – [Fig 2B, step 254]; [0029 “In the present application, improved techniques for querying data in a database system are disclosed. 
One aspect of the disclosure includes a method for querying data in a database system. A natural language description is received. A query is generated based on at least a first portion of the natural language description and one or more language processing rules. In response to a determination that the query is not satisfying the one or more language processing rules, at least a second portion of the natural language description is provided to a GenAI model. The query is updated via the GenAI model processing at least the second portion of the natural language description”]; [Fig 2B, step 256]; [0030 “Additional implementations of the disclosure may include one or more of the following optional features. The query is executed at a database to retrieve data. One or more database tables are determined based on at least a third portion of the natural language description. The query is generated further based on the determined one or more database tables. The one or more language processing rules are based at least in part on Backus-Naur Form (BNF). In response to a determination that the query satisfies the one or more language processing rules, the query is executed at a database to retrieve data. A result of the large language model is verified based on one or more guardrails, wherein the one or more guardrails comprise syntactic rules. A result of the large language model is verified based on one or more guardrails, wherein the one or more guardrails comprise semantic rules, wherein the semantic rules comprise semantic rules corresponding to one or more of the following: column types, choice values, numbers, dates, or time. Tokens are added to a tokenizer for the large language model, wherein the added tokens include one or more of the following: operators, table names, or column names.”]; Under the broadest reasonable interpretation “not satisfying” is equated to “not a complete reply”. 
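Puttagunta's two-stage strategy can be pictured as a rule-based pass that, when it cannot satisfy the request, falls back to a generative model. The following is a hedged Python sketch of that control flow only; `RULES`, `answer`, and the stubbed `genai_query` are invented stand-ins (a real system would validate against BNF-style rules and call an actual GenAI model).

```python
# Hypothetical sketch of the predefined-strategy-then-adaptive-fallback
# pattern (cf. Puttagunta Fig. 2B). All names are illustrative.

RULES = {
    "count users": "SELECT COUNT(*) FROM users",
    "list orders": "SELECT * FROM orders",
}

def rule_based_query(description):
    """Predefined strategy: only known phrasings produce a query."""
    return RULES.get(description.lower().strip())

def genai_query(description):
    """Stand-in for the GenAI step; a real system would invoke a model."""
    return f"-- GenAI-generated query for: {description}"

def answer(description):
    query = rule_based_query(description)
    if query is None:  # rules "not satisfying" -> adaptive fallback
        query = genai_query(description)
    return query

print(answer("count users"))                 # served by the predefined rules
print(answer("monthly revenue by region"))   # falls back to the generative step
```

Under the broadest reasonable interpretation discussed above, the `query is None` branch plays the role of "not a complete reply."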
[Fig 2B, step 258]; [0032 “Another aspect of the disclosure provides a system with one or more processors and a memory coupled to the one or more processors. The memory is configured to provide the one or more processors with instructions. When executed, the instructions cause the one or more processors to receive a natural language description; generate a query based on at least a first portion of the natural language description and one or more language processing rules; in response to a determination that the query is not satisfying the one or more language processing rules, provide at least a second portion of the natural language description to a generative artificial intelligence (GenAI) model; and update the query via the GenAI model processing at least the second portion of the natural language description”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Puttagunta into Renard in view of Jones in order to provide both structured responses and adaptive fallback generation. Such a combination would improve system flexibility and response strength by allowing efficient use of predefined responses when suitable while enabling adaptive generation when the predefined responses are insufficient. This would improve conversational system robustness by enabling an adaptive generative strategy when predefined processing fails, thereby enhancing response completeness, system reliability, and user experience. Regarding claim 11, the electronic device according to claim 10, wherein the actions perform the methods of claim 2: claim 11 is rejected for the same reasons as claim 2. Regarding claim 20, it recites a computer program according to claim 19, wherein the actions perform the methods of claim 2. Claim 20 is rejected for the same reasons as claim 2. Claims [3, 12] are rejected under 35 U.S.C. 103 as being unpatentable over Renard (US Patent No. US-20190215249-A1) in view of Jones (US. 
Patent No. US 20220028378 A1) and in further view of Hasan (U.S. Patent No. US 20230350929). Regarding claim 3, Renard in view of Jones teaches the method according to claim 1, wherein generating the intent parameter comprises: generating an evaluated intent value based on the user intent. However, Renard in view of Jones does not teach determining an intent based on both user input and preset intent information. But Hasan teaches determining an intent associated with a user query by analyzing the user input and evaluating it using predefined intent categories and stored knowledge information - [0129 “Further, the method 500, at step 510, may include generating a response corresponding to the intent through the virtual agent based on analyzation and the method 500 terminates at 512. A process of generating the response based on analyzing the knowledgebase is explained in greater detail in conjunction with FIG. 6. Here are some common techniques for generating the response:”]; [0130 “Rule-based Systems: Rule-based systems may be utilized to generate responses based on predefined rules and patterns. These systems may have a set of predefined templates or patterns that match specific intents, allowing the virtual agent to select and populate the appropriate response based on the analyzed intent and relevant information from the knowledgebase”]; [0131 “Natural Language Generation (NLG): NLG techniques may be employed to automatically generate human-like responses. NLG models may learn patterns and structures from the analyzed knowledgebase and use that knowledge to generate coherent and contextually relevant responses. These models may be trained on large amounts of text data to improve the quality and fluency of the generated responses”]; [0158 “The method 700 illustrated by the flow diagram of FIG. 7 for generating a response through a virtual agent starts at step 702. 
The method 700 may include, at step 704, receiving, by the virtual agent, a query from a user”]; [0159 “The method 700, at step 706, may include determining, by the virtual agent, the intent associated with the query. In this step, the virtual agent analyzes the user's query to determine the intent behind it. The intent represents the purpose or goal of the user's query. This may be achieved by the following techniques”]; [0160 “Intent Classification: The virtual agent may utilize machine learning techniques, such as supervised learning algorithms, to classify the user's query into predefined intent categories. A training dataset consisting of labeled queries and their corresponding intents is used to train a classifier. The virtual agent then applies this trained model to predict the intent of the user's query”]; [0161 “Natural Language Understanding (NLU): NLU techniques may be employed to extract the intent from the user's query. NLU models may analyze the syntactic and semantic structure of the query, identify key phrases or keywords, and map them to predefined intents. Techniques like named entity recognition, part-of-speech tagging, and dependency parsing can be applied to assist in intent determination”]; [0162 “Keyword Matching: The virtual agent may use a rule-based approach to match the user's query against a set of predefined keywords or patterns associated with specific intents. If the query contains keywords or phrases that match these predefined patterns, the intent may be determined accordingly”]; [0193 “Utilizing Learned Patterns: The trained LLM has learned patterns from the training data, which includes understanding grammar, syntax, and semantic structures. The LLM can identify patterns in the input query or request and utilize this knowledge to generate a response. For example, if the input query is in the form of a question, the LLM can identify question patterns and provide an appropriate response”]. 
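The keyword-matching technique quoted from Hasan [0162] reduces to matching the query against predefined keyword patterns per intent. Below is a minimal illustrative sketch; the intent categories and keywords are invented for the example, not taken from Hasan.

```python
# Illustrative keyword-matching intent classifier in the style of
# Hasan [0162]. Categories and keywords are hypothetical.

INTENT_KEYWORDS = {
    "refund": {"refund", "money back", "reimburse"},
    "order_status": {"where is my order", "tracking", "shipped"},
    "account": {"password", "login", "sign in"},
}

def determine_intent(query, default="unknown"):
    """Match the query against predefined keyword patterns for each intent."""
    text = query.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return default  # no pattern matched: fall back or ask a clarification

print(determine_intent("I forgot my password"))        # -> account
print(determine_intent("Has my package shipped yet?")) # -> order_status
```

A classifier-based approach (Hasan [0160]) would replace the keyword table with a model trained on labeled queries, but the routing contract — query in, intent label out — is the same.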
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the intent determination framework of Hasan into the primary reference Renard in view of Jones in order to provide a structured intent-based decision mechanism. Such a combination would improve the system’s ability to accurately interpret user queries by incorporating predefined intent classification into the content-based response framework, thereby enabling more structured and reliable decision making. Regarding claim 12, the electronic device according to claim 10, wherein the actions perform the methods of claim 3: claim 12 is rejected for the same reasons as claim 3. Claims [4, 13] are rejected under 35 U.S.C. 103 as being unpatentable over Renard (US Patent No. US-20190215249-A1) in view of Jones (U.S. Patent No. US 20220028378 A1) and in further view of Moudy (US Patent No. US-10546235-B2). Regarding claim 4, Renard in view of Jones teaches the method according to claim 1, wherein generating the emotion parameter comprises: determining an average emotional value based on scores – [“conversation may be evaluated by the conversation ranking engine 324 using machine-based sentiment analysis of each interaction. In one embodiment, sentiment analysis refers to the use of natural language processing, text analysis, computational linguistics and biometrics (e.g. voice) to systematically identify, extract and study affective states and subjective information, e.g., to determine the attitude of a speaker, writer or other subject. Depending on the embodiment, the attitude may be one or more of a judgment or evaluation (as in appraisal theory), affective state (i.e., the emotional state of the author or speaker), or the intended emotional communication (i.e., the emotional effect intended by the author or interlocutor). 
Depending on the embodiment, the conversation ranking engine 324 uses one or more of the following, which may be referred to as “sentiment criteria,” to evaluate the “sentiment” of the session and rank the session”]; [0074 “In some embodiments, one or more of the foregoing sentiment criteria are used to generate a sentiment metric, which is used by the conversation ranking engine 324 alone, or in combination with a sentiment metric (depending on the embodiment), to rank the conversation”]. This implies generating an emotion parameter based on sentiment analysis. [0061 “Average (e.g. arithmetic or harmonic) sentiment analysis over all interactions in the conversation/session, which may serve as a forecast of the conversation or as a dialogue atmosphere”]; [0062 “Sentiment analysis trend of the conversation, e.g., positive to negative, neutral to negative, the opposite, etc.”]; The average sentiment analysis and trend values are interpreted as scores. [0063 “Offensive language usage, e.g., using keyword detection based on offenses n-gram dictionary.”]; [0064 “Detection of emojis, which may balance the trend of conversation (e.g. a winking, tongue sticking out or smiling emoji may indicate use of an otherwise offensive term is being used playfully or in jest”]; However, Renard in view of Jones does not teach generating the emotion parameter based on the average emotional value and a preset value. Moudy discloses using thresholds (preset values) for sentiment analysis – [Column 25, lines 45-60 “In certain embodiments, the outputs of the sentiment analyzer may be determined in accordance with preprogrammed or dynamically implemented rules and/or criteria. For example, sentiment analyzer output rules may be based on sentiment score thresholds, whereby the feedback analytics server 610 may transmit notifications to predetermined recipients only if a sentiment score calculated in step 804 is greater than a specified threshold, less than a specified threshold, etc. 
In some cases, certain recipient devices may receive notifications for certain sentiment score thresholds or ranges, while other recipients may receive notifications for other sentiment score thresholds or ranges. In other examples, sentiment analyzer output rules may be based on changes in sentiment scores over time, or identifications of outliers within multiple related sentiment scores. For instance, in professional training or educational system 600, a content provider server 640, authorized client device 630, or other recipient device may receive a notification if the sentiment score for a user (e.g., employee or student), group of users (e.g., class or grade), an instructor or presenter, or one or more content items (e.g., a course, module assignment, test, etc.) falls below a certain threshold. Similarly, in a sentiment analyzer system 600 used with an eCommerce or media distribution system, a content provider 640 or client device 630 may receive a notification in response to unanticipated high or low sentiment scores for a product, product line, or media content, recent changes in sentiment scores for a product or media content, and the like”]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the sentiment scoring of Moudy into Renard’s system in view of Jones in order to provide a structured emotional parameter output, thereby improving emotional evaluation accuracy within a decision framework. Additionally, this provides a more predictable and more reliable method of quantifying emotional state. Regarding claim 13, the electronic device according to claim 10, wherein the emotion parameter is generated based on the methods of claim 4: claim 13 is rejected for the same reasons as claim 4.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHEZA ABDUL AZIZ whose telephone number is (571)272-9610. 
The examiner can normally be reached Monday-Friday, 7:30am-5pm, with alternate Fridays off. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DANIEL C WASHBURN/Supervisory Patent Examiner, Art Unit 2657

Prosecution Timeline

Jul 03, 2024
Application Filed
Mar 03, 2026
Non-Final Rejection — §101, §103 (current)


Prosecution Projections

1-2
Expected OA Rounds
Favorable
Grant Probability
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
