Prosecution Insights
Last updated: April 19, 2026
Application No. 18/787,767

CHATBOT RISK MANAGEMENT

Non-Final OA: §101, §102, §103

Filed: Jul 29, 2024
Examiner: TENGBUMROONG, NATHAN NARA
Art Unit: 2654
Tech Center: 2600 — Communications
Assignee: Capital One Services LLC
OA Round: 1 (Non-Final)

Grant Probability: 43% (Moderate)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 43% of resolved cases (6 granted / 14 resolved; -19.1% vs TC avg)
Interview Lift: +75.0% (resolved cases with interview vs. without)
Typical Timeline: 3y 0m avg prosecution; 34 applications currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 27.2% (-12.8% vs TC avg)
§103: 54.3% (+14.3% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 3.2% (-36.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 14 resolved cases
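For reference, the headline figures above can be reproduced with simple arithmetic. The analytics tool does not state its exact formulas, so this sketch assumes the conventional definitions (allow rate = granted / resolved; "lift" as a relative improvement of the with-interview rate over the without-interview rate); the without-interview rate used below is hypothetical.

```python
# Illustrative arithmetic behind the dashboard figures (assumed
# definitions, not the tool's documented formulas).
granted, resolved = 6, 14
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ≈ 42.9%, displayed as 43%

# A +75.0% interview lift would mean the with-interview allow rate is
# 1.75x the without-interview rate (the 0.30 base rate is hypothetical).
without_rate = 0.30
with_rate = without_rate * 1.75
print(f"Implied with-interview rate: {with_rate:.1%}")
```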

Office Action

§101 §102 §103
DETAILED ACTION

This Office action is in response to Applicant’s submission filed on 7/29/2024. Claims 1-20 are pending in the application. As such, claims 1-20 have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) were submitted on 7/29/2024 and 11/19/2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, the claim recites “(a) determine, based on the user input, intent information”; “(b) select, based on the intent information, a chatbot service from a generative-artificial-intelligence (gen-AI) chatbot service and a non-gen-AI chatbot service”; and “(c) provide the user input to the selected chatbot service to allow the selected chatbot service to generate a response to the user input.” Limitations (a)-(c) recite mental processes that may be practically performed in the mind using pen and paper. For example, limitation (a) can be done by someone determining an intent of a user input. Limitation (b) can be done by someone using a generic computer to select a chatbot service based on determining a user intent. Limitation (c) can be done by someone using a generic computer to input data to a chatbot to receive an output response.
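As an illustration of what limitations (a)-(c) recite, the routing can be sketched as follows. This is an editorial sketch only, not the claimed implementation or any cited reference's code; all function names, intent labels, and the selection policy are hypothetical, since the claim does not specify how intent is determined or how the selection is made.

```python
# Hypothetical sketch of the routing recited in limitations (a)-(c) of
# claim 1. Names and the selection policy are illustrative assumptions.

def determine_intent(user_input: str) -> str:
    """(a) Determine intent information based on the user input."""
    return "account_balance" if "balance" in user_input.lower() else "general"

def select_chatbot_service(intent: str) -> str:
    """(b) Select a gen-AI or non-gen-AI chatbot service from the intent."""
    low_risk_intents = {"general"}  # hypothetical risk policy
    return "gen_ai" if intent in low_risk_intents else "non_gen_ai"

def route(user_input: str) -> str:
    """(c) Provide the input to the selected service to generate a response."""
    intent = determine_intent(user_input)
    service = select_chatbot_service(intent)
    return f"[{service}] response to: {user_input}"

print(route("What is my balance?"))  # routed to the non-gen-AI service
```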
Under its broadest reasonable interpretation when read in light of the specification, the actions to “determine,” “select,” and “provide” encompass mental processes practically performed in the human mind by evaluation and judgment using pen and paper or a generic computer. Accordingly, the claim recites an abstract idea (Step 2A, Prong One).

The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element of “(d) receive, from a user device that includes a chatbot interface, user input associated with the chatbot interface.” Limitation (d) is mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity. In addition, all uses of the recited judicial exception require such data gathering, and, as such, this limitation does not impose any meaningful limits on the claim. This limitation amounts to necessary data gathering.

Further, limitations (a)-(d) are recited as being performed by a computer. In limitation (d), the computer is used as a tool to perform the generic computer function of receiving data. In limitations (a)-(c), the computer is used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that it amounts to no more than mere instructions to apply the exception using a generic computer. Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to an abstract idea (Step 2A: YES).

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the recitation of a computer to perform limitations (a)-(d) amounts to no more than mere instructions to apply the exception using a generic computer component. Also as discussed above, limitation (d) is recited at a high level of generality.
This element amounts to receiving user input from an interface, which is well-understood, routine, conventional activity, as supported by paragraphs [0018] and [0051] of Applicant’s specification. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept (Step 2B).

Regarding claim 10, the claim is rejected under an analysis similar to that of claim 1.

Regarding claim 17, the claim recites “(a) selecting a chatbot service from a generative-artificial-intelligence (gen-AI) chatbot service and a non-gen-AI chatbot service” and “(b) providing the user input to the selected chatbot service to allow the selected chatbot service to generate a response to the user input.” Limitations (a) and (b) recite mental processes that may be practically performed in the mind using pen and paper. For example, limitation (a) can be done by someone using a generic computer to select a specific chatbot service. Limitation (b) can be done by someone using a generic computer to provide user input to a chatbot and receive a response from the chatbot. Under its broadest reasonable interpretation when read in light of the specification, the actions of “selecting” and “providing” encompass mental processes practically performed in the human mind by evaluation and judgment using pen and paper or a generic computer. Accordingly, the claim recites an abstract idea (Step 2A, Prong One).

The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements of “(c) by a system for chatbot risk management” and “(d) based on user input associated with a chatbot interface.” Limitation (d) is mere data gathering recited at a high level of generality, and thus is insignificant extra-solution activity.
In addition, all uses of the recited judicial exception require such data gathering, and, as such, this limitation does not impose any meaningful limits on the claim. This limitation amounts to necessary data gathering. Further, limitations (a), (b), and (d) are recited as being performed by a computer. In limitation (d), the computer is used as a tool to perform the generic computer function of receiving data. In limitations (a) and (b), the computer is used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that it amounts to no more than mere instructions to apply the exception using a generic computer. Limitation (c) provides nothing more than mere instructions to implement an abstract idea on a generic computer: the system recited in limitation (c) is used to perform limitations (a), (b), and (d) without placing any limits on how the system functions. Rather, this system recites only the outcomes and does not include any details on how the outcomes are accomplished. Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to an abstract idea (Step 2A: YES).

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the recitation of a computer to perform limitations (a), (b), and (d) amounts to no more than mere instructions to apply the exception using a generic computer component. Also as discussed above, limitation (d) is recited at a high level of generality. This element amounts to receiving user input from an interface, which is well-understood, routine, conventional activity, as supported by paragraphs [0018] and [0051] of Applicant’s specification.
Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept (Step 2B).

Similarly, dependent claims 2-9, 11-16, and 18-20 include additional steps that are considered abstract ideas because they fail to provide meaningful significance that goes beyond generally linking the use of an abstract idea to a particular technological environment and using the computer to perform an abstract idea. Claims 2, 11, and 18 read on someone determining a user intent from a user input, determining that the intent is on an intent blocklist, and using a generic computer to select a particular chatbot to respond to the user input based on determining the intent is on the blocklist. Claims 3, 12, and 19 read on someone determining a user intent from a user input, determining that the intent is on an intent allowlist, and using a generic computer to select a particular chatbot to respond to the user input based on determining the intent is on the allowlist. Claims 4, 13, and 20 read on someone determining a user intent from a user input, determining that the intent is on an intent allowlist, and using a generic computer to select a particular generative AI chatbot to respond to the user input based on determining the intent is on the allowlist. Claims 5 and 14 read on someone determining that a user input contains sensitive information, modifying the user input to anonymize the sensitive information, and using a generic computer to provide the modified user input to a chatbot. Claims 6-7 and 15 read on someone using a generic computer to obtain a response from a chatbot using a user input and sending the user the chatbot response.
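The behavior that claims 5 and 14 are characterized as reading on (detecting sensitive information, anonymizing it, and providing the modified input to a chatbot) can be sketched as follows. This is an editorial illustration only; the detection patterns, placeholder tokens, and function name are hypothetical assumptions, not the claimed implementation.

```python
import re

# Hypothetical sketch of the behavior attributed to claims 5 and 14:
# detect sensitive information in a user input, anonymize it, and pass
# the modified input along. The regexes and tokens are illustrative.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b\d{16}\b"),
}

def anonymize(user_input: str) -> str:
    """Replace detected sensitive substrings with labeled placeholders."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        user_input = pattern.sub(f"[{label}]", user_input)
    return user_input

print(anonymize("My SSN is 123-45-6789"))  # My SSN is [SSN]
```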
Claims 8-9 and 16 read on someone using a generic computer to obtain a response from a chatbot hosted on another system using a user input and sending the user the chatbot response.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 6-7, 10, 15, and 17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Hudson et al. (US 20250390786 A1; hereinafter referred to as Hudson).

Regarding claim 1, Hudson teaches: a system for chatbot risk management, the system comprising: one or more memories; and one or more processors, communicatively coupled to the one or more memories ([0037] System 200 may comprise main memory 215. Main memory 215 provides storage of instructions and data for programs executing on processor 210), configured to: receive, from a user device that includes a chatbot interface, user input associated with the chatbot interface ([0082] Process 400 may be triggered whenever a user starts a new session, such as a new chat session, and, during the session, execute in each of one or more, and generally a plurality of, iterations. The session may occur within a single screen of a graphical user interface of user interface 150.
The screen may comprise a chat box, which is configured to receive inputs); determine, based on the user input, intent information ([0011] The intent model may comprise a classifier that classifies the input into one of a plurality of intent classes, and wherein the determined intent comprises the one intent class into which the intent model classified the input); select, based on the intent information, a chatbot service from a generative-artificial-intelligence (gen-AI) chatbot service ([0010] produce a generative artificial intelligence (AI) response by applying an intent model to the input to determine an intent of the input, applying a preference model to the determined intent to determine at least one of a plurality of generative artificial intelligence (AI) models) and a non-gen-AI chatbot service ([0054] For each common input, a gold-standard response 315 may be generated by a human expert (e.g., an agent of the operator of platform 110), with or without the aid of generative artificial intelligence, and then stored in database 114 in association with a representation of the input. 
The representation of the input may comprise or consist of the exact input, a portion of the input, a set of keywords representing the input, and/or the like); and provide the user input to the selected chatbot service ([0092] Subprocess 445, which may be implemented by module 350, may apply the generative AI model 355, determined by preference model 345 in subprocess 440, to the input, received in subprocess 410, to produce a generative AI response) to allow the selected chatbot service to generate a response to the user input ([0014] determine whether or not a gold-standard response exists for the input; when determining that the gold-standard response exists for the input, display the gold-standard response to the user within the graphical user interface without producing the generative AI response; and when determining that the gold-standard response does not exist for the input, produce the generative AI response).

Regarding claim 6, Hudson teaches: the system of claim 1, wherein the non-gen-AI ([0054] Module 310 may determine whether or not a gold-standard response 315 exists for the input. In particular, module 310 may check the input against a plurality of gold-standard responses 315, stored in database 114) chatbot service is hosted by the system ([0024] Infrastructure 100 may comprise a platform 110 which hosts and/or executes one or more of the disclosed processes, which may be implemented in software and/or hardware. In particular, platform 110 may execute a server application 112, host a database 114 that may store data used by server application 112).
Regarding claim 7, Hudson teaches: the system of claim 6, wherein the selected chatbot service is the non-gen-AI chatbot service, wherein the one or more processors are further configured to: obtain, based on providing the user input to the selected chatbot service, the response to the user input ([0054] For each common input, a gold-standard response 315 may be generated by a human expert (e.g., an agent of the operator of platform 110), with or without the aid of generative artificial intelligence, and then stored in database 114 in association with a representation of the input. The representation of the input may comprise or consist of the exact input, a portion of the input, a set of keywords representing the input, and/or the like. The plurality of gold-standard responses 315 may be indexed by the representation of the input, such that the gold-standard response 315, if one exists, for an input can be easily retrieved based on the input); and send, to the user device, the response ([0055] When a gold-standard response 315 exists for the input, that gold-standard response 315 may be returned to module 320. In this case, no generative AI response will be produced, since the best possible response is already available as a predefined gold-standard response 315).

Regarding claim 10, Hudson teaches: a non-transitory computer-readable medium storing a set of instructions… ([0023] In an embodiment, systems, methods, and non-transitory computer-readable media are disclosed for a collaborative AI preference model for generative AI model selection). The rest of the claim recites limitations similar to those of claim 1 and therefore is rejected similarly.
Regarding claim 15, Hudson teaches: the non-transitory computer-readable medium of claim 10, wherein the selected chatbot service is hosted by the system ([0024] Infrastructure 100 may comprise a platform 110 which hosts and/or executes one or more of the disclosed processes, which may be implemented in software and/or hardware. In particular, platform 110 may execute a server application 112, host a database 114 that may store data used by server application 112), and wherein the one or more processors are further configured to: obtain, based on providing the user input to the selected chatbot service, the response to the user input ([0054] For each common input, a gold-standard response 315 may be generated by a human expert (e.g., an agent of the operator of platform 110), with or without the aid of generative artificial intelligence, and then stored in database 114 in association with a representation of the input. The representation of the input may comprise or consist of the exact input, a portion of the input, a set of keywords representing the input, and/or the like. The plurality of gold-standard responses 315 may be indexed by the representation of the input, such that the gold-standard response 315, if one exists, for an input can be easily retrieved based on the input); and send, to the user device, the response ([0055] When a gold-standard response 315 exists for the input, that gold-standard response 315 may be returned to module 320. In this case, no generative AI response will be produced, since the best possible response is already available as a predefined gold-standard response 315). 
Regarding claim 17, Hudson teaches: a method, comprising: selecting, by a system for chatbot risk management and based on user input associated with a chatbot interface ([0010] receive an input from the user via a graphical user interface; and produce a generative artificial intelligence (AI) response by applying an intent model to the input to determine an intent of the input), a chatbot service from a generative-artificial-intelligence (gen-AI) chatbot service ([0010] produce a generative artificial intelligence (AI) response by applying an intent model to the input to determine an intent of the input, applying a preference model to the determined intent to determine at least one of a plurality of generative artificial intelligence (AI) models) and a non-gen-AI chatbot service ([0054] For each common input, a gold-standard response 315 may be generated by a human expert (e.g., an agent of the operator of platform 110), with or without the aid of generative artificial intelligence, and then stored in database 114 in association with a representation of the input. 
The representation of the input may comprise or consist of the exact input, a portion of the input, a set of keywords representing the input, and/or the like); and providing, by the system, the user input to the selected chatbot service ([0092] Subprocess 445, which may be implemented by module 350, may apply the generative AI model 355, determined by preference model 345 in subprocess 440, to the input, received in subprocess 410, to produce a generative AI response) to allow the selected chatbot service to generate a response to the user input ([0014] determine whether or not a gold-standard response exists for the input; when determining that the gold-standard response exists for the input, display the gold-standard response to the user within the graphical user interface without producing the generative AI response; and when determining that the gold-standard response does not exist for the input, produce the generative AI response).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Hudson in view of Anders et al. (US 20190325864 A1; hereinafter referred to as Anders).

Regarding claim 2, Hudson teaches: the system of claim 1.
Hudson does not explicitly disclose, but Anders discloses: wherein the one or more processors, to select the chatbot service, are configured to: identify, based on the intent information, at least one intent of the user input ([0011] the user's estimated age range and/or vocabulary level may be used in detecting the user's intent. In various implementations, one or more candidate “query understanding models,” each associated with a specific age range, may be available for use by the automated assistant. Each query understanding model may be usable to determine the user's intent); determine that the at least one intent is associated with an entry of an intent blocklist ([0071] and similar to other components herein such as STT module 117 intent matcher 136, fulfillment module 124 may have access to a database 125 that stores a library of rules, heuristics, etc., that are geared towards various age ranges and/or vocabulary levels. For example, database 125 may store one or more whitelists and/or blacklists of websites, universal resource identifiers (URI), universal resource locators (URL), domains, etc., that dictate what a user can, and cannot access depending on their age); and select, based on determining that the at least one intent is associated with the entry of the intent blocklist ([0015] Regarding resolution of the user's intent, various actions and/or information may not be suitable for children. Accordingly, in various embodiments, based on the predicted age range of the user, the automated assistant may determine whether the intent of the user is resolvable.
For example, if the user is determined to be a child, the automated assistant may limit the online corpuses of data it can use to retrieve information responsive to the user's request, e.g., to a “whitelist” of kid-friendly websites and/or away from a “blacklist” of non-kid-friendly websites), the non-gen-AI chatbot service ([0007] An automated assistant configured with selected aspects of the present disclosure may be configured to enter into various age-related modes, such as “standard” (e.g., suitable for adults) and “kid's mode” (suitable, for instance, for small children), based on various signals. A different service can be selected based on a user’s age and intentions.), which is associated with responding to disallowed user inputs, as the selected chatbot service ([0004] Various aspects of the automated assistant's behavior may be affected by the mode selected based on the age range (or vocabulary level) of the user, such as (i) recognition of the user's intent, (ii) resolving the user's intent, and (iii) how the results of resolving the user's intent are output).

Hudson and Anders are considered analogous in the field of human-to-computer dialogue. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Hudson with the teachings of Anders because doing so would allow for a chatbot system to limit a chatbot’s generated response based on a user’s intention, leading to better user flexibility in controlling generated chatbot output and conserving resources (Anders [0005] various implementations can enable an automated assistant to generate responses to various inputs that, absent techniques described herein, would not be resolvable.
As also described herein, various implementations can mitigate the need for an automated assistant to request clarification for various inputs, thereby conserving various computer and/or network resources that would otherwise be utilized to generate and render such requests for clarification and/or process further input responsive to the requests for clarification).

Regarding claim 11, it recites limitations similar to those of claim 2 and therefore is rejected similarly.

Regarding claim 18, Hudson teaches: the method of claim 17. Hudson does not explicitly disclose, but Anders discloses: wherein selecting the chatbot service comprises: determining that the user input is associated with an intent blocklist ([0071] and similar to other components herein such as STT module 117 intent matcher 136, fulfillment module 124 may have access to a database 125 that stores a library of rules, heuristics, etc., that are geared towards various age ranges and/or vocabulary levels. For example, database 125 may store one or more whitelists and/or blacklists of websites, universal resource identifiers (URI), universal resource locators (URL), domains, etc., that dictate what a user can, and cannot access depending on their age); and selecting, based on determining that the user input is associated with the intent blocklist ([0015] Regarding resolution of the user's intent, various actions and/or information may not be suitable for children. Accordingly, in various embodiments, based on the predicted age range of the user, the automated assistant may determine whether the intent of the user is resolvable.
For example, if the user is determined to be a child, the automated assistant may limit the online corpuses of data it can use to retrieve information responsive to the user's request, e.g., to a “whitelist” of kid-friendly websites and/or away from a “blacklist” of non-kid-friendly websites), the non-gen-AI chatbot service ([0007] An automated assistant configured with selected aspects of the present disclosure may be configured to enter into various age-related modes, such as “standard” (e.g., suitable for adults) and “kid's mode” (suitable, for instance, for small children), based on various signals. A different service can be selected based on a user’s age and intentions.).

Claims 3, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Hudson in view of Will et al. (US 20250252260 A1; hereinafter referred to as Will).

Regarding claim 3, Hudson teaches: the system of claim 1. Hudson does not explicitly disclose, but Will discloses: wherein the one or more processors, to select the chatbot service, are configured to: identify, based on the intent information, at least one intent of the user input ([0027] Chatbot 218 utilizes natural language understanding model 220 to read and understand incoming user utterances to determine the intent of the users regarding their utterances (e.g., requests, questions, for the like)); determine that the at least one intent is associated with an entry of an intent allowlist associated with a particular intent type ([0005] generates a list of allowed user intents based on identifying one or more of a set of user intents corresponding to a user utterance within a filtered user intent mapping table.
The filtered user intent mapping table contains allowed user intents); and select, based on determining that the at least one intent is associated with the entry of the intent allowlist, the non-gen-AI chatbot service ([0061] if the computer determines that the user intent having the highest confidence score in the set of user intents corresponding to the user utterance is contained in the list of allowed user intents, yes output of step 330, then the computer, using the chatbot, sends content corresponding to the user intent having the highest confidence score to the client device of the user as a response to the user utterance), which is associated with responding to user inputs associated with the particular intent type, as the selected chatbot service ([0034] For example, chatbot 218 can utilize a “people managers only” filter included in other eligibility filters 240 to remove a set of specified user intents from user intent mapping table 232 when the user does not have a people manager characteristic identified in chatbot session data 222 so that the user cannot receive content corresponding to one or more of user intents 228, increasing data security).

Hudson and Will are considered analogous in the field of human-to-computer dialogue. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Hudson with the teachings of Will because doing so would allow for a chatbot system to determine which user intents are allowable and have a chatbot respond accordingly, increasing data security and user flexibility in designating chatbot responses (Will [0050] illustrative embodiments enable the chatbot to recognize when a particular user should not have access to content corresponding to a given user intent.
In other words, illustrative embodiments enable customized user intent filtering based on analyzing session data (e.g., user characteristics) to prevent certain users from accessing certain content (e.g., secure information, inappropriate information, or the like), which correspond to certain user intents. Illustrative embodiments utilize filtering functionality for a user's geographic location (e.g., country) and support other custom user intent filtering functionality based on other specified parameters).

Regarding claim 12, it recites limitations similar to those of claim 3 and therefore is rejected similarly.

Regarding claim 19, Hudson teaches: the method of claim 17. Hudson does not explicitly disclose, but Will discloses: wherein selecting the chatbot service comprises: determining that the user input is associated with an intent allowlist associated with a particular intent type ([0005] generates a list of allowed user intents based on identifying one or more of a set of user intents corresponding to a user utterance within a filtered user intent mapping table. The filtered user intent mapping table contains allowed user intents); and selecting, based on determining that the user input is associated with the intent allowlist, the non-gen-AI chatbot service ([0061] if the computer determines that the user intent having the highest confidence score in the set of user intents corresponding to the user utterance is contained in the list of allowed user intents, yes output of step 330, then the computer, using the chatbot, sends content corresponding to the user intent having the highest confidence score to the client device of the user as a response to the user utterance).

Claims 4, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Hudson in view of Will, as applied to claims 3, 12, and 19 above, and further in view of Gunjal et al. (US 20250335928 A1; hereinafter referred to as Gunjal).
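The blocklist/allowlist routing at issue in the dependent claims above (claims 2/11/18 and 3/12/19) can be sketched as follows. This is an editorial sketch only; the intent labels, list contents, and function name are hypothetical assumptions, and neither the claims nor the Anders or Will references are limited to this implementation.

```python
# Hypothetical sketch of blocklist/allowlist-based service selection.
# Labels and list contents are illustrative assumptions.
INTENT_BLOCKLIST = {"account_closure"}      # intents disallowed for gen-AI
INTENT_ALLOWLIST = {"faq", "store_hours"}   # intents safe for a scripted bot

def select_service(intent: str) -> str:
    if intent in INTENT_BLOCKLIST:
        # Claims 2/11/18: a blocklisted intent routes to the non-gen-AI
        # service associated with responding to disallowed user inputs.
        return "non_gen_ai"
    if intent in INTENT_ALLOWLIST:
        # Claims 3/12/19: an allowlisted intent of a particular type
        # routes to the non-gen-AI service for that intent type.
        return "non_gen_ai"
    return "gen_ai"

print(select_service("faq"))        # non_gen_ai
print(select_service("smalltalk"))  # gen_ai
```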
Regarding claim 4, the combination of Hudson and Will teaches: the system of claim 1. Will further discloses: wherein the one or more processors, to select the chatbot service, are configured to: identify, based on the intent information, at least one intent of the user input ([0027] Chatbot 218 utilizes natural language understanding model 220 to read and understand incoming user utterances to determine the intent of the users regarding their utterances (e.g., requests, questions, for the like)); determine that the at least one intent is associated with an entry of an intent allowlist ([0008] a method for filtering user intents corresponding to user utterances is provided. A list of allowed user intents is generated based on identifying one or more of a set of user intents corresponding to a user utterance within a filtered user intent mapping table. The filtered user intent mapping table containing allowed user intents. It is determined whether a user intent having a highest confidence score in the set of user intents corresponding to the user utterance is contained in the list of allowed user intents) associated with a non-particular intent type… ([0033] User intent eligibility filters 236 represent a set of filters for determining whether the user is eligible or qualified to receive content corresponding to user intents 228. In this example, user intent eligibility filters 236 include valid user location filter 238 and other eligibility filters 240. Valid user location filter 238 identifies one or more geographic locations, such as, for example, one or more cities, states, countries, or the like, where the user may receive content corresponding to user intents 228).

Hudson and Will are considered analogous in the field of human-to-computer dialogue.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Hudson to combine the teachings of Will because doing so would allow for a chatbot system to determine which user intents are allowable and have the chatbot respond accordingly, increasing data security and user flexibility in designating chatbot responses (Will [0050] illustrative embodiments enable the chatbot to recognize when a particular user should not have access to content corresponding to a given user intent. In other words, illustrative embodiments enable customized user intent filtering based on analyzing session data (e.g., user characteristics) to prevent certain users from accessing certain content (e.g., secure information, inappropriate information, or the like), which correspond to certain user intents. Illustrative embodiments utilize filtering functionality for a user's geographic location (e.g., country) and support other custom user intent filtering functionality based on other specified parameters).

The combination of Hudson and Will does not explicitly teach, but Gunjal discloses: and select, based on determining that the at least one intent is associated with the entry of the intent allowlist, the gen-AI chatbot service ([0030] When user input is received, the RAI module 102 processes (e.g., using the I/O moderation sub-module 120) the user input to ensure that the user input conforms to AI principles defined by the enterprise through the AI principles sub-module 122. In some examples, AI principles can be represented in allowed and/or disallowed terminology, intents, and the like. For example, user input can be processed to determine one or more intents and words and/or sentences that represent content that is prohibited according to AI principles instituted by the enterprise), which is associated with responding to inputs associated with the non-particular intent type, as the selected chatbot service ([0053] In more general terms, the prompt flow library 236 provides for LLM agnosticism. Depending on the predefined criteria, the prompt flow library 236 selects the LLM to best answer the input query. This criteria is part of configurations and can be changed any time).

Hudson, Will, and Gunjal are considered analogous in the field of human-to-computer dialogue. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Hudson and Will to combine the teachings of Gunjal because doing so would allow for the selection of a specific generative AI chatbot to be used to respond to a user input based on criteria such as an allowed user intent, leading to more targeted and better responses from gen-AI chatbots (Gunjal [0023] In some implementations, intuitive intent resolution is used to interpret and map complex, abstract user queries to pertinent products and/or services, to align functionality with needs and desires of users. In some implementations, query classification is provided by employing sophisticated categorization algorithms to improve performance of foundation models, directing computational power to generating highly relevant and targeted responses).

Regarding claim 13, it recites similar limitations as claim 4 and therefore is rejected similarly.

Regarding claim 20, Hudson teaches: the method of claim 17. Hudson does not explicitly teach, but Will discloses: determining that the user input is associated with an intent allowlist ([0008] a method for filtering user intents corresponding to user utterances is provided. A list of allowed user intents is generated based on identifying one or more of a set of user intents corresponding to a user utterance within a filtered user intent mapping table. The filtered user intent mapping table containing allowed user intents. It is determined whether a user intent having a highest confidence score in the set of user intents corresponding to the user utterance is contained in the list of allowed user intents) associated with a non-particular intent type ([0033] User intent eligibility filters 236 represent a set of filters for determining whether the user is eligible or qualified to receive content corresponding to user intents 228. In this example, user intent eligibility filters 236 include valid user location filter 238 and other eligibility filters 240. Valid user location filter 238 identifies one or more geographic locations, such as, for example, one or more cities, states, countries, or the like, where the user may receive content corresponding to user intents 228). The combination of Hudson and Will does not explicitly teach, but Gunjal discloses: and selecting, based on determining that the user input is associated with the intent allowlist, the gen-AI chatbot service ([0030] When user input is received, the RAI module 102 processes (e.g., using the I/O moderation sub-module 120) the user input to ensure that the user input conforms to AI principles defined by the enterprise through the AI principles sub-module 122. In some examples, AI principles can be represented in allowed and/or disallowed terminology, intents, and the like. For example, user input can be processed to determine one or more intents and words and/or sentences that represent content that is prohibited according to AI principles instituted by the enterprise).

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Hudson in view of Fisher et al. (US 20250307465 A1; hereinafter referred to as Fisher).
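As context for the allowlist limitations mapped above (Will [0061]; Gunjal [0053]), the routing pattern the examiner describes can be sketched as follows. This is a minimal illustrative sketch, not the applicant's or the cited references' implementation; every name here (`INTENT_ALLOWLIST`, `determine_intent`, `route_chatbot`, and the example intents) is hypothetical:

```python
# Hypothetical sketch: determine an intent from user input, check an intent
# allowlist, and select a gen-AI or non-gen-AI chatbot service accordingly.

# Allowlist mapping each permitted intent to the service class that may
# handle it. The concrete intents are invented for illustration only.
INTENT_ALLOWLIST = {
    "check_balance": "non_gen_ai",    # scripted, deterministic flow
    "reset_password": "non_gen_ai",
    "open_ended_question": "gen_ai",  # free-form questions go to the LLM
}

def determine_intent(user_input: str) -> str:
    """Stand-in for an NLU model that maps an utterance to an intent."""
    text = user_input.lower()
    if "balance" in text:
        return "check_balance"
    if "password" in text:
        return "reset_password"
    if "transfer" in text:
        return "transfer_funds"  # recognized, but not on the allowlist
    return "open_ended_question"

def route_chatbot(user_input: str) -> str:
    """Select a chatbot service based on the intent allowlist.

    Intents absent from the allowlist are refused outright, mirroring the
    filtering behavior the rejection attributes to the cited art.
    """
    intent = determine_intent(user_input)
    service = INTENT_ALLOWLIST.get(intent)
    return service if service is not None else "refused"

print(route_chatbot("What is my account balance?"))      # non_gen_ai
print(route_chatbot("Tell me about compound interest"))  # gen_ai
print(route_chatbot("transfer $100 to savings"))         # refused
```

A dictionary keyed by intent keeps the "particular" versus "non-particular" intent-type split from the claim language as plain data, so the allowlist can be edited without touching the routing logic.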
Regarding claim 5, Hudson teaches: the system of claim 1. Hudson does not explicitly teach, but Fisher discloses: wherein the one or more processors, to provide the user input to the selected chatbot service, are configured to: determine that the user input includes sensitive information ([0034] The Parsing Engine 120 operates in two critical stages relative to the interaction with the LLM: before and after LLM processing. Initially, it examines the document to identify any sensitive data that needs to be altered, ensuring that only anonymized data is forwarded to the LLM for processing); modify, based on determining that the user input includes sensitive information, and by using at least one data anonymization technique or at least one data obfuscation technique, the user input ([0041] Obfuscation engine 124 is configured to generate obfuscated data for the identified sensitive data. In an embodiment, obfuscation engine 124 is configured to generate one or more embeddings for the document, such that an embedding corresponds to a given CI. In an embodiment, obfuscation engine 124 is configured to obfuscate the detected CI by an encryption mechanism using a private key of the user); and provide the modified user input to the selected chatbot service ([0030-0031] interception of a request includes intercepting the document or a portion of the document inside a chatbot conversation of the LLM service (e.g. LLM 101). For example, intercepting can include inside a chatbot conversation before the data is sent to the LLM, such as where CI is copied/pasted or manually written (and not just in the document)… using filtering, interception engine 118 is configured to analyze and process text input in real-time, identifying and obfuscating sensitive information before it reaches the LLM). Hudson and Fisher are considered analogous in the field of human-to-computer dialogue.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Hudson to combine the teachings of Fisher because doing so would allow for a chatbot system to implement data anonymization and obfuscation techniques on user input, leading to improved user data privacy and security when interacting with chatbots (Fisher [0006] In a feature and advantage of embodiments, systems and methods are particularly focused on data anonymization within the unique context of user interactions with LLMs. Accordingly, by concentrating on the intersection of data privacy and LLMs, systems and methods fill a crucial gap in the field of data security).

Claims 8-9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Hudson in view of Mulligan et al. (US 20240267344 A1; hereinafter referred to as Mulligan).

Regarding claim 8, Hudson teaches: the system of claim 1. Hudson does not explicitly teach, but Mulligan discloses: wherein the gen-AI chatbot service is hosted by another system ([0097] the LLM 338 is hosted by a server system that is separate from the system that hosts the chatbot system 300 and the chatbot system 300 communicates with the LLM 338 over a network). Hudson and Mulligan are considered analogous in the field of human-to-computer dialogue.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Hudson to combine the teachings of Mulligan because doing so would allow for a generative AI chatbot to be hosted on a separate system, leading to greater user flexibility in the type of information that the gen-AI chatbots generate and provide to users of a system (Mulligan [0113] the system that hosts the chatbot system 300 may not be a component of an interactive platform but another interaction system that provides services and information to a group of users such as, but not limited to, a platform that provides enterprise wide connectivity to a group of users such as employees of a company, clients of an enterprise provided professional services, educational institutions, and the like. In some of such examples, the content provided to the users may not be advertising, but may be other types of useful information such as company policies, status messages for projects, newsworthy events, and the like).

Regarding claim 9, the combination of Hudson and Mulligan teaches: the system of claim 8. Mulligan further discloses: wherein the selected chatbot service is the gen-AI chatbot service, wherein the one or more processors are further configured to: receive, from the other system and based on providing the user input to the selected chatbot service, the response to the user input ([0097] the chatbot system 300 receives the user prompt and communicates the user prompt to the LLM 338 residing on the separate system. The LLM 338 receives the prompt 328 and generates the raw response 362. The LLM 338 then communicates the raw response 362 to the chatbot system 300. The chatbot system 300 receives the raw response 362 for subsequent processing); and send, to the user device, the response ([0097] The LLM 338 then communicates the raw response 362 to the chatbot system 300. The chatbot system 300 receives the raw response 362 for subsequent processing).

Regarding claim 16, Hudson teaches: the non-transitory computer-readable medium of claim 10. Hudson does not explicitly teach, but Mulligan discloses: wherein the gen-AI chatbot service is hosted by another system ([0097] the LLM 338 is hosted by a server system that is separate from the system that hosts the chatbot system 300 and the chatbot system 300 communicates with the LLM 338 over a network), and wherein the one or more processors are further configured to: receive, from the other system and based on providing the user input to the selected chatbot service, the response to the user input ([0097] the chatbot system 300 receives the user prompt and communicates the user prompt to the LLM 338 residing on the separate system. The LLM 338 receives the prompt 328 and generates the raw response 362. The LLM 338 then communicates the raw response 362 to the chatbot system 300. The chatbot system 300 receives the raw response 362 for subsequent processing); and send, to the user device, the response ([0097] The LLM 338 then communicates the raw response 362 to the chatbot system 300. The chatbot system 300 receives the raw response 362 for subsequent processing).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Temraz et al. (US 20240330597 A1) – teaches a chatbot system that responds to a user input using a decision module to select an intent engine module or a generative Q&A module.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nathan Tengbumroong whose telephone number is (703)756-1725. The examiner can normally be reached Monday - Friday, 11:30 am - 8:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hai Phan, can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NATHAN TENGBUMROONG/
Examiner, Art Unit 2654

/HAI PHAN/
Supervisory Patent Examiner, Art Unit
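For reference, the pre-LLM anonymization pattern the rejection of claims 5 and 14 cites from Fisher (detect sensitive data in the user input, obfuscate it, and forward only the modified input to the selected chatbot service) can be sketched as below. This is a minimal illustration under assumed patterns; the regexes and function names are hypothetical and are not Fisher's implementation:

```python
import re

# Hypothetical detectors for sensitive data. A real system would use far
# more robust recognizers (e.g., trained PII models) than these regexes.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def contains_sensitive(text: str) -> bool:
    """Stage 1: decide whether the input needs modification at all."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS.values())

def obfuscate(text: str) -> str:
    """Stage 2: replace each detected item with a typed placeholder token,
    so only anonymized text is forwarded onward."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def provide_to_chatbot(user_input: str) -> str:
    """Modify the input only when sensitive data is detected, mirroring the
    claim's determine-then-modify-then-provide ordering."""
    if contains_sensitive(user_input):
        user_input = obfuscate(user_input)
    return user_input  # in a real system: send to the selected service

print(provide_to_chatbot("My SSN is 123-45-6789, can you help?"))
# My SSN is [SSN], can you help?
```

Token-style placeholders (rather than deletion) keep the utterance grammatical, so a downstream chatbot can still respond sensibly to the redacted text.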

Prosecution Timeline

Jul 29, 2024
Application Filed
Mar 19, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530536
Mixture-Of-Expert Approach to Reinforcement Learning-Based Dialogue Management
2y 5m to grant · Granted Jan 20, 2026
Patent 12451142
NON-WAKE WORD INVOCATION OF AN AUTOMATED ASSISTANT FROM CERTAIN UTTERANCES RELATED TO DISPLAY CONTENT
2y 5m to grant · Granted Oct 21, 2025
Patent 12412050
MULTI-PLATFORM VOICE ANALYSIS AND TRANSLATION
2y 5m to grant · Granted Sep 09, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
43%
Grant Probability
99%
With Interview (+75.0%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
