Prosecution Insights
Last updated: April 19, 2026
Application No. 18/772,389

METHOD AND SYSTEM FOR GENERATIVE AI BASED UNIFIED VIRTUAL ASSISTANT

Non-Final OA: §101, §103
Filed
Jul 15, 2024
Examiner
HUTCHESON, CODY DOUGLAS
Art Unit
2659
Tech Center
2600 — Communications
Assignee
Tata Consultancy Services Limited
OA Round
1 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Grants 62% of resolved cases.

Career Allow Rate: 62% (15 granted / 24 resolved; +0.5% vs TC avg)
Interview Lift: +47.1% (resolved cases with vs. without an interview)
Avg Prosecution: 2y 10m (34 applications currently pending)
Total Applications: 58 (across all art units)

Statute-Specific Performance

§101: 33.9% (-6.1% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)
Tech Center averages are estimates; based on career data from 24 resolved cases.

Office Action

Rejections under §101 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed with the application.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

1. Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, "A method" is recited, which is directed to one of the four statutory categories of invention (process) (Step 1: YES). However, the claim limitations, under their broadest reasonable interpretation, recite mental processes, which fall into the category of abstract ideas (Step 2A Prong 1: YES). The following limitations, under their broadest reasonable interpretation, recite mental processes:

receiving in real time…a multi-modal query from a user associated with a role in a user conversation…associated with an enterprise: a person obtains a query from a user (e.g., text and an image), the user associated with a particular enterprise/business and with a particular role (e.g., customer, HR rep, finance, etc.)

creating…a user context for the user conversation based on the role associated with the user: a person writes down a user context based on their role (e.g., makes a note "user is a customer")

generating…an optimized prompt for the multi-modal query corresponding to the user context, based on a set of prompt concepts: a person uses prompt concepts/templates to create a prompt representing the query (e.g., writes down a prompt "customer has this question: {text} related to {image}")

generating…a response corresponding to the optimized prompt…using a customized tool array, wherein the customized tool array comprises a set of tools with each tool comprising a set of parameters including a tool description: a person writes down a response to the prompt using a tool array (e.g., uses a particular tool as a set of rules for answering the question)

formatting…the response to obtain a final output using an output parser: a person writes down the response following a particular format (e.g., writes "Here is what I found: {answer}")

providing…the final output to the user: a person gives the written response to the user

Claim 1 does not contain any additional elements which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional limitations are "by a virtual assistant engine of a unified virtual assistant, via one or more hardware processors", "by the unified virtual assistant, via the one or more hardware processors", "the optimized prompt from a large language model (LLM)", and "a virtual assistant head comprised in the unified virtual assistant". These limitations are recited at a high level of generality and amount to mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer do not integrate the judicial exception into a practical application, as they do not impose any meaningful limits on practicing the abstract idea. Therefore, claim 1 is directed to an abstract idea.

Claim 1 does not contain any additional elements which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above, the only additional limitations amount to mere instructions to implement the judicial exception using a generic computer.
Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer do not amount to significantly more than the judicial exception, as they do not provide an inventive concept. Therefore, claim 1 is not patent eligible.

Regarding claims 7 and 13, "A system" and "One or more non-transitory machine-readable information storage mediums" are recited respectively, which are both directed to one of the four statutory categories of invention (machine and manufacture, respectively) (Step 1: YES). However, the claim limitations, under their broadest reasonable interpretation, recite limitations similar to claim 1, and thus also recite mental processes which fall into the category of abstract ideas (Step 2A Prong 1: YES; see explanation regarding claim 1).

Claims 7 and 13 do not integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional limitations are those explained above regarding claim 1, as well as "A system, comprising: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to" (claim 7) and "One or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause" (claim 13). These additional limitations amount to further instructions to implement the judicial exception using a generic computer, and do not integrate the judicial exception into a practical application as they do not impose any meaningful limits on practicing the abstract idea.

Claims 7 and 13 do not amount to significantly more than the judicial exception (Step 2B: NO).
As discussed above, the only additional limitations amount to mere instructions to implement the judicial exception using a generic computer, which do not amount to significantly more than the judicial exception as they do not provide an inventive concept. Therefore, claims 7 and 13 are not patent eligible.

Regarding claims 2-6, "The method" is recited, which is directed to one of the four statutory categories of invention (process) (Step 1: YES). However, the claim limitations, under their broadest reasonable interpretation, recite further mental processes, which fall into the category of abstract ideas (Step 2A Prong 1: YES). The following limitations, under their broadest reasonable interpretation, recite further mental processes:

Claim 2: wherein the multi-modal query is one of (i) a text, (ii) an image or (iii) a voice data: a person analyzes a query that is multimodal (e.g., analyzes text and an image together to answer a user question)

Claim 3: wherein the set of prompt concepts are dynamically modified for generating the optimized prompt based on the role of the user: a person changes their prompt templates to reflect the role of the user (e.g., uses the template "_______ wants to access {information}" and adds "customer" to the underlined portion of the template)

Claim 4: comparing…the optimized prompt with the tool description of each tool in the set of tools to obtain an optimal tool, wherein the optimal tool characterizes a best observation based on the tool description; and generating…the response…: a person compares the prompt to the tools available (e.g., the user notices a person asking a question that is frequently asked, so selects a "FAQ" tool to assist them with answering the user's question), and writes down the response using the tool. Claim 4 contains the additional limitations "via the one or more hardware processors" and "generating…the response by invoking the LLM or an application programming interface (API) call provided in the tool description of the optimal tool". These limitations amount to mere instructions to implement the judicial exception using a generic computer.

Claim 5: "…trained using an enterprise context corresponding to the enterprise and a set of user contexts stored in a database": a person can use enterprise context and user contexts stored on paper to learn how to create a response. Claim 5 contains the additional limitation "wherein the LLM is trained using…", which amounts to mere instructions to implement the judicial exception using a generic computer.

Claim 6: switching between one or more user contexts in a current user conversation based on roles associated with the one or more user contexts, wherein the one or more user contexts relate to a user context in a previous user conversation: a person can use previous user conversations with previous user context (e.g., user information) to switch between user contexts in a conversation (can use different user information at different times). Claim 6 does not contain any additional limitations.

Claims 2-6 do not contain any additional limitations which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). As discussed above, the only additional limitations are mere instructions to implement the judicial exception using a generic computer, which, even when viewed in combination, do not integrate the judicial exception into a practical application as they do not impose any meaningful limits on practicing the abstract idea. Therefore, claims 2-6 are directed to an abstract idea.
Claims 2-6 do not contain any additional limitations which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above, the only additional limitations are mere instructions to implement the judicial exception using a generic computer, which, even when viewed in combination, do not amount to significantly more than the judicial exception as they do not provide an inventive concept. Therefore, claims 2-6 are not patent eligible.

Regarding claims 8-12 and 14-18, "The system" and "The one or more non-transitory machine-readable information storage mediums" are recited respectively, which are both directed to one of the four statutory categories of invention (machine and manufacture, respectively) (Step 1: YES). However, the claim limitations, under their broadest reasonable interpretation, recite limitations similar to claims 2-6, and thus also recite mental processes which fall into the category of abstract ideas (Step 2A Prong 1: YES; see explanation regarding claims 2-6).

Claims 8-12 and 14-18 do not contain any additional limitations which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). As discussed above with respect to claims 2-6, the only additional limitations are mere instructions to implement the judicial exception using a generic computer, which do not integrate the judicial exception into a practical application as they do not impose any meaningful limits on practicing the abstract idea. Therefore, claims 8-12 and 14-18 are directed to abstract ideas.

Claims 8-12 and 14-18 do not contain any additional limitations which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above with respect to claims 2-6, the only additional limitations are mere instructions to implement the judicial exception using a generic computer, which do not amount to significantly more than the judicial exception as they do not provide an inventive concept. Therefore, claims 8-12 and 14-18 are not patent eligible.
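For orientation, the claim 1 steps characterized above (receive a multi-modal query, create a role-based user context, generate an optimized prompt from prompt concepts, generate a response with a customized tool array, format it with an output parser) can be sketched as a plain program. This is a minimal illustrative sketch only; every name, template, and data structure below is hypothetical and not taken from the application or the cited references.

```python
# Hypothetical sketch of the claim-1 pipeline; names are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str          # the "tool description" parameter recited in claim 1
    run: Callable[[str], str]

def create_user_context(role: str) -> dict:
    # "creating ... a user context ... based on the role associated with the user"
    return {"role": role}

def generate_optimized_prompt(query: str, context: dict, prompt_concepts: dict) -> str:
    # "generating ... an optimized prompt ... based on a set of prompt concepts"
    template = prompt_concepts.get(context["role"], "{role} asks: {query}")
    return template.format(role=context["role"], query=query)

def generate_response(prompt: str, tools: list[Tool]) -> str:
    # "generating ... a response ... using a customized tool array";
    # a real system would invoke an LLM here rather than the first tool.
    return tools[0].run(prompt)

def format_output(response: str) -> str:
    # "formatting ... the response to obtain a final output using an output parser"
    return f"Here is what I found: {response}"

def answer(query: str, role: str, prompt_concepts: dict, tools: list[Tool]) -> str:
    context = create_user_context(role)
    prompt = generate_optimized_prompt(query, context, prompt_concepts)
    return format_output(generate_response(prompt, tools))

faq = Tool("FAQ", "Answers frequently asked questions",
           run=lambda p: f"answered '{p}' from the FAQ")
concepts = {"customer": "customer has this question: {query}"}
print(answer("How do I reset my password?", "customer", concepts, [faq]))
```

The sketch only strings the recited steps together; it deliberately makes no claim about how the application's virtual assistant engine actually implements them.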
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

2. Claims 1-3, 6-9, 12-15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chen & Wong (US 2025/0013788 A1, hereinafter Chen) in view of Kundel et al. (US 2025/0390526 A1, hereinafter Kundel).
Regarding claim 1, Chen discloses A processor implemented method (Fig. 7, 702, para. 0043):

receiving in real time by a virtual assistant engine of a unified virtual assistant (para. 0012 “Dialogue agent 102 may be embodied as an online application service of an online social media platform or a ‘chat bot’, which refers to an automated software tool designed and programmed to interact with users of a social media application through text-based or voice-based natural language queries.”; para. 0038 “At 602, method 600 includes executing a frontend dialogue agent configured to engage in a dialogue with a user of the social media network using at least a language model. At 604, method 600 includes receiving a user input including a natural language description of a request for an action on a content item.”), via one or more hardware processors (Fig. 7, 702, para. 0043), a multi-modal query from a user (para. 0011 “In view of the above issues, the present disclosure describes a computing system 100 configured to implement a social media network with a dialogue-assisted interface for performing actions on social media content items such as video content… Computing system 100 receives a user input including a natural language description of an editing request 108 for a content item 109…”) associated with a role in a user conversation (para. 0021 “As described below, tool commands whose execution is authorized, and other tool commands whose execution is not authorized, may be established for different types of user queries, user account types or privilege levels, and/or on any other suitable basis.”) using an enterprise application associated with an enterprise (para.
0012 “Dialogue agent 102 may be embodied as an online application service of an online social media platform or a ‘chat bot’, which refers to an automated software tool designed and programmed to interact with users of a social media application through text-based or voice-based natural language queries.”);

creating by the unified virtual assistant, via the one or more hardware processors, a user context for the user conversation based on the role associated with the user (para. 0030 “The first and second tool commands may be compared to a privilege level associated with a user account of a user who requested to edit the content item.”);

generating by the unified virtual assistant, via the one or more hardware processors, an optimized prompt for the multi-modal query…based on a set of prompt concepts (para. 0011 “Computing system 100 receives a user input including a natural language description of an editing request 108 for a content item 109, and generates a prompt 110 for language model 106 based at least on the user input.”; para. 0019 “Upon receiving the user input from user 104 comprising a natural language description of editing request 108 for editing content item 109, prompt manager 126 queries prompt pool 128 for prompts whose descriptions are relevant to the editing request. Various predetermined and/or sample prompts may be combined to form a new prompt which can then be filled with data specific to editing request 108 to form prompt 110.”);

generating by the unified virtual assistant, via the one or more hardware processors, a response (para. 0040 “At 626, method 600 includes outputting a natural language response to the user from the dialogue agent based on the result.”) corresponding to the optimized prompt from a large language model (LLM) (para. 0039 “At 612, method 600 includes inputting the prompt to the language model to generate a language model output describing one or more operations for implementing the action.”; para.
0014 “In some implementations, language model 106 may be a large language model.”) using a customized tool array (para. 0039 “At 614, method 600 includes identifying one or more tools callable at the backend service based on the one or more operations.”; Fig. 1, “Tool pool 133” and “Tool(s) 130”), wherein the customized tool array comprises a set of tools with each tool comprising a set of parameters including a tool description (para. 0017 “In the example implementation depicted in FIG. 1, editing features for editing content items are implemented by various tools 130 each callable at backend service 112 through a corresponding tool interface 132 such as a function call interface or application programming interface (API).”);

formatting by the unified virtual assistant, via the one or more hardware processors, the response to obtain a final output using an output parser (para. 0040 “At 628, method 600 includes filtering the natural language response via one or both of sensitive word detection or intention detection.”; Fig. 2, ‘D’);

and providing by the unified virtual assistant, via the one or more hardware processors, the final output to the user by a virtual assistant head comprised in the unified virtual assistant (para. 0032 “Here, filtering the response output from dialogue agent 102 produces a filtered natural language response 202 (e.g., response 152).”; para. 0034 “FIG. 3 depicts an example illustrating interactions between a user of the social media network and dialogue agent 102 conducted through a GUI presented by social network client 118. …Dialogue agent 102 formulates a natural language response 310 (“I added the sparkles filter.”) describing the addition of graphical asset 306.”; Fig. 3).

Chen does not specifically disclose [generating…an optimized prompt for the multi-modal query] corresponding to the user context.

Kundel teaches [generating…an optimized prompt for the multi-modal query] corresponding to the user context (para.
0029 “User query 212 may be a request for any type of information, e.g., a request for general knowledge, a request for specialized (e.g., professional) knowledge, a request to help with planning any user activities, and/or the like…If QT 101 determines that user query 212 is context-dependent, QT 101 may generate an intermediate query (operation 214).”; para. 0031 “Having received the response from GM 120 to the intermediate query, QT 101 (or user query analyzer 103) may parse the received response and generate one or more requests to DM 160 for contextual data about user 210 (operation 222).”; para. 0033 “DM 160 may then provide the context data to QT 101. In some embodiments, the context data may be delivered via one or more JSON objects (e.g., JSON files). Having received the requested context data from the data store (operation 228), QT 101 may generate a context-based query (operation 230). Generating the context-based query may include parsing the context data returned by DM 160 for specific pieces of information indicated by GM 120 as a relevant context and integrating these pieces of information into a natural language query (e.g., an unstructured conversational request). For example, the context-based query may be, “what travel deals are available for the Spring Break week of 2023 for User who attends the East Virginia State University and has traveled to Florida and Mexico over the last year?” QT 101 may then submit the generated context-based query to GM 120 (operation 232). In some embodiments, the context may be included as part of a query prompt.”).

Chen and Kundel are considered to be analogous to the claimed invention as they both are in the same field of prompting large language models.
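The context-based query flow quoted above from Kundel (decide whether a query is context-dependent, fetch context data from a data store, integrate the relevant pieces into the query) can be sketched as follows. All function and variable names are hypothetical, and the crude substring test merely stands in for the determination Kundel attributes to QT 101.

```python
# Illustrative sketch only; not Kundel's actual implementation.
def needs_context(user_query: str) -> bool:
    # Stand-in for deciding whether the query is context-dependent.
    return any(word in user_query.lower() for word in ("my", " me ", " i "))

def fetch_context_data(user_id: str, store: dict) -> dict:
    # Stand-in for the request to the data manager, which returns
    # context data (e.g., delivered as JSON objects).
    return store.get(user_id, {})

def build_context_based_query(user_query: str, context: dict) -> str:
    # Relevant context pieces are integrated into a natural language query.
    facts = "; ".join(f"{k}: {v}" for k, v in sorted(context.items()))
    return f"{user_query} (user context: {facts})" if facts else user_query

store = {"u1": {"school": "East Virginia State University",
                "recent travel": "Florida, Mexico"}}
query = "What travel deals are available for my Spring Break?"
if needs_context(query):
    query = build_context_based_query(query, fetch_context_data("u1", store))
print(query)
```

The point of the sketch is the shape of the flow: the context lives outside the query and is pulled in only when needed, which is the cost-saving behavior the examiner cites from Kundel.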
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen to incorporate the teachings of Kundel in order to specifically generate the optimized prompt to correspond to the user context. Doing so would be beneficial, as this would automatically identify contextual information relevant to user queries without having to provide all user data within the query, reducing computation costs (Kundel, para. 0013-0014).

Regarding claim 2, Chen in view of Kundel discloses wherein the multi-modal query is one of (i) a text, (ii) an image or (iii) a voice data (Chen, para. 0011 “In view of the above issues, the present disclosure describes a computing system 100 configured to implement a social media network with a dialogue-assisted interface for performing actions on social media content items such as video content… Computing system 100 receives a user input including a natural language description of an editing request 108 for a content item 109…”).

Regarding claim 3, Chen in view of Kundel discloses wherein the set of prompt concepts are dynamically modified for generating the optimized prompt based on the role of the user (Chen discloses prompt concepts: para. 0011 “Computing system 100 receives a user input including a natural language description of an editing request 108 for a content item 109, and generates a prompt 110 for language model 106 based at least on the user input.”; para. 0019 “Upon receiving the user input from user 104 comprising a natural language description of editing request 108 for editing content item 109, prompt manager 126 queries prompt pool 128 for prompts whose descriptions are relevant to the editing request.
Various predetermined and/or sample prompts may be combined to form a new prompt which can then be filled with data specific to editing request 108 to form prompt 110.”; Kundel teaches dynamically modifying the prompt to incorporate user context: para. 0029 “User query 212 may be a request for any type of information, e.g., a request for general knowledge, a request for specialized (e.g., professional) knowledge, a request to help with planning any user activities, and/or the like…If QT 101 determines that user query 212 is context-dependent, QT 101 may generate an intermediate query (operation 214).”; para. 0031 “Having received the response from GM 120 to the intermediate query, QT 101 (or user query analyzer 103) may parse the received response and generate one or more requests to DM 160 for contextual data about user 210 (operation 222).”; para. 0033 “DM 160 may then provide the context data to QT 101. In some embodiments, the context data may be delivered via one or more JSON objects (e.g., JSON files). Having received the requested context data from the data store (operation 228), QT 101 may generate a context-based query (operation 230). Generating the context-based query may include parsing the context data returned by DM 160 for specific pieces of information indicated by GM 120 as a relevant context and integrating these pieces of information into a natural language query (e.g., an unstructured conversational request). For example, the context-based query may be, “what travel deals are available for the Spring Break week of 2023 for User who attends the East Virginia State University and has traveled to Florida and Mexico over the last year?” QT 101 may then submit the generated context-based query to GM 120 (operation 232). In some embodiments, the context may be included as part of a query prompt.”).

Chen and Kundel are considered to be analogous to the claimed invention as they both are in the same field of prompting large language models.
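Claim 3's dynamic modification of prompt concepts by role, as illustrated earlier in this action by the blank-template example ("_______ wants to access {information}"), can be sketched in a few lines. The function and template names are hypothetical.

```python
# Minimal sketch of role-based prompt-template filling; names are illustrative.
def fill_role_template(template: str, role: str, information: str) -> str:
    # The role slot is filled dynamically, so the same prompt concept
    # yields a different optimized prompt for each user role.
    return template.format(role=role, information=information)

template = "{role} wants to access {information}"
print(fill_role_template(template, "customer", "order status"))
# → customer wants to access order status
print(fill_role_template(template, "HR rep", "leave balances"))
# → HR rep wants to access leave balances
```

The design point is that the template (the "prompt concept") is fixed while the role-dependent slots vary per conversation.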
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen to incorporate the teachings of Kundel in order to specifically have the set of prompt concepts be dynamically modified for generating the optimized prompt based on the role of the user, using the same rationale as discussed with regards to claim 1.

Regarding claim 6, Chen in view of Kundel discloses switching between one or more user contexts in a current user conversation based on roles associated with the one or more user contexts, wherein the one or more user contexts relate to a user context in a previous user conversation (A new user context may be determined for a user query: para. 0029 “…If QT 101 determines that user query 212 is context-dependent, QT 101 may generate an intermediate query (operation 214).”; para. 0040 “In the example workflow 400, in addition to processing user query 212, e.g., as described in conjunction with FIGS. 2A-2B and FIG. 3, QT 101 may infer data from user query 212 that may be stored in data store 110 as part of the user profile. For example, user query 212 may include affirmative information about user 210 (e.g., “I recently moved to 101 Spear St.”) in addition to one or more questions (e.g., “what restaurants in the area can you recommend?”). QT 101 may generate a context data request (operation 222), communicate the context data request to DM 160 (operation 224), and receive the context data relevant from DM 160.”; The new context is based on a user profile which is updated using past conversation histories: para. 0040 “Although FIG. 3 illustrates updating user profile data in the context of user query 212 processing, similar operations may be used to generate user profiles based on past conversation histories, to perform contact center routing, or in any other context where extracting structured data (e.g., profile entries) from unstructured content (user queries) is advantageous.”).

Chen and Kundel are considered to be analogous to the claimed invention as they both are in the same field of prompting large language models. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen to incorporate the teachings of Kundel in order to specifically switch between one or more user contexts in a current user conversation based on roles associated with the one or more user contexts, wherein the one or more user contexts relate to a user context in a previous user conversation, using the same rationale as discussed with regards to claim 1.

Regarding claim 7, claim 7 is a system claim with limitations similar to those in method claim 1, and thus is rejected under similar rationale. Additionally, Chen discloses A system (Fig. 6), comprising: a memory storing instructions (Fig. 6, 604); one or more communication interfaces (Fig. 6, 630); and one or more hardware processors coupled to the memory via the one or more communication interfaces (Fig. 6, 602), wherein the one or more hardware processors are configured by the instructions to (para. 0051 “The processing device 602 is configured to execute instructions 622 for implementing method 500 of identification and retrieval of relevant contextual information for quick and accurate processing of user queries by generative artificial intelligence models).”).

Regarding claim 8, claim 8 is rejected for analogous reasons to claim 2.

Regarding claim 9, claim 9 is rejected for analogous reasons to claim 3.
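Claim 6's context switching (reactivating user contexts carried over from previous conversations, keyed by role) can be sketched as a small store. The class, method names, and dictionary keys below are all hypothetical and purely illustrative.

```python
# Hypothetical sketch of role-keyed context switching; names are illustrative.
class ContextStore:
    def __init__(self) -> None:
        self._by_role: dict[str, dict] = {}

    def save(self, role: str, context: dict) -> None:
        # Persist a user context from a (possibly previous) conversation.
        self._by_role[role] = context

    def switch(self, role: str) -> dict:
        # Reactivate the stored context associated with the requested role,
        # falling back to a fresh context for unseen roles.
        return self._by_role.get(role, {"role": role})

store = ContextStore()
store.save("customer", {"role": "customer", "open_ticket": "T-42"})
store.save("finance", {"role": "finance", "cost_center": "CC-7"})

# Mid-conversation, the assistant switches contexts as the active role changes:
print(store.switch("customer"))
print(store.switch("finance"))
```

The sketch shows only the switching mechanics the claim recites; how contexts are inferred from past conversations is the part the rejection maps to Kundel's profile updating.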
Regarding claim 12, claim 12 is rejected for analogous reasons to claim 6.

Regarding claim 13, claim 13 is a non-transitory storage claim with limitations similar to those in method claim 1, and thus is rejected under similar rationale. Additionally, Chen discloses One or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause (para. 0053-0054 and claim 20).

Regarding claim 14, claim 14 is rejected for analogous reasons to claim 2.

Regarding claim 15, claim 15 is rejected for analogous reasons to claim 3.

Regarding claim 18, claim 18 is rejected for analogous reasons to claim 6.

3. Claims 4, 10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Kundel, and further in view of Castillo & Azmoon (US 2025/0028759 A1, hereinafter Castillo).

Regarding claim 4, Chen in view of Kundel discloses generating an optimized prompt (para. 0011 “Computing system 100 receives a user input including a natural language description of an editing request 108 for a content item 109, and generates a prompt 110 for language model 106 based at least on the user input.”; para. 0019 “Upon receiving the user input from user 104 comprising a natural language description of editing request 108 for editing content item 109, prompt manager 126 queries prompt pool 128 for prompts whose descriptions are relevant to the editing request. Various predetermined and/or sample prompts may be combined to form a new prompt which can then be filled with data specific to editing request 108 to form prompt 110.”) and selecting tools using the output of an LLM (para. 0039 “At 612, method 600 includes inputting the prompt to the language model to generate a language model output describing one or more operations for implementing the action.
Where the request is an editing request to edit the content item, the operation(s) may include editing operation(s) for editing the content item, for example. At 614, method 600 includes identifying one or more tools callable at the backend service based on the one or more operations.”).

However, Chen in view of Kundel does not specifically disclose: comparing, via the one or more hardware processors, the optimized prompt with the tool description of each tool in the set of tools to obtain an optimal tool, wherein the optimal tool characterizes a best observation based on the tool description; and generating, via the one or more hardware processors, the response by invoking the LLM or an application programming interface (API) call provided in the tool description of the optimal tool.

Castillo teaches comparing, via the one or more hardware processors, the optimized prompt with the tool description of each tool in the set of tools to obtain an optimal tool, wherein the optimal tool characterizes a best observation based on the tool description (para. 0183 “The requests are initially routed to intent identifier 702. This software module may be configured to determine an intent or purpose of requests in order to further route the requests to one or more of skills 704A, 704B, and/or 704C.”; para. 0188 “Pre-trained language models like BERT, GPT, or ELMo have been pre-trained on large corpora and can be fine-tuned for specific tasks, including intent classification. These models can understand the context and meaning of words and phrases, allowing them to handle complex queries effectively. In other words (and not explicitly shown in FIG. 7), intent identifier 702 could query LLM service 608 in order to determine the intent of requests.
For example, such a query might use the prompt “classify the intent of the request ‘X’ into one or more of the categories ‘A’, ‘B’, or ‘C’.” Here, X may be the text string of the request, and A, B, and C, may be skills supported by context mediator 604.”; para. 0189 “Regardless, based on the result from intent identifier 702, one or more of skills 704A, 704B, and/or 704C may be selected.”); and generating, via the one or more hardware processors, the response by invoking the LLM or an application programming interface (API) call provided in the tool description of the optimal tool (para. 0190 “As an example, skill 704A may relate to generating charts in the form of user interface components by way of LLM prompts, skill 704B may relate to interpreting charts from user interfaces by way of LLM prompts, and skill 704C may related to determining contextually relevant information from various applications. Other possibilities include skills specific to interacting with particular applications (e.g., an incident management skill), conversing with users via a virtual agent, and so on. In taking any of these steps, the skills may query and/or write to one or more databases, make local or remote API calls, and/or take other actions.”). Chen, Kundel, and Castillo are considered to be analogous to the claimed invention as they are in the same field of prompting large language models. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen in view of Kundel to incorporate the teachings of Castillo in order to specifically compare the optimized prompt with the tool description of each tool in the set of tools to obtain an optimal tool, wherein the optimal tool characterizes a best observation based on the tool description and to generate the response by invoking the LLM or an API call provided in the tool description of the optimal tool. 
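The tool-selection step the rejection maps to Castillo (comparing an optimized prompt against each tool's description, then invoking the winning tool's API) can be sketched in a few lines of Python. This is a minimal illustration, not code from the application or from any cited reference: the tool names, descriptions, and word-overlap scoring below are hypothetical stand-ins, and a production system would more likely perform the comparison with an LLM intent classifier or embedding similarity, as the Castillo citations describe.

```python
"""Sketch: select an 'optimal tool' by comparing a prompt with tool descriptions."""
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    description: str              # the per-tool description recited in claim 4
    invoke: Callable[[str], str]  # stand-in for the tool's API call or LLM invocation


def _overlap_score(prompt: str, description: str) -> float:
    """Crude lexical similarity: fraction of description words appearing in the prompt."""
    prompt_words = set(prompt.lower().split())
    desc_words = set(description.lower().split())
    return len(prompt_words & desc_words) / max(len(desc_words), 1)


def select_optimal_tool(optimized_prompt: str, tools: list[Tool]) -> Tool:
    """Compare the optimized prompt with each tool description; return the best match."""
    return max(tools, key=lambda t: _overlap_score(optimized_prompt, t.description))


# Hypothetical tool pool, loosely echoing the skills in Castillo's para. 0190.
tools = [
    Tool("chart_generator", "generate charts from tabular data",
         lambda q: f"chart for: {q}"),
    Tool("incident_lookup", "look up incident records via the ITSM API",
         lambda q: f"incident: {q}"),
]

best = select_optimal_tool("please generate charts from this quarterly tabular data", tools)
response = best.invoke("quarterly revenue")
```

Against these two hypothetical tools, the charting prompt scores highest on the `chart_generator` description, and that tool's callable then produces the response, mirroring the claim's "compare, then invoke" sequence.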
Doing so would be beneficial, as it would allow the system to effectively translate user queries into specific LLM prompts that are more likely to produce LLM responses relevant to the user's needs (Castillo, para. 0179).

Regarding claim 10, claim 10 is rejected for reasons analogous to claim 4.

Regarding claim 16, claim 16 is rejected for reasons analogous to claim 4.

4. Claims 5, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Kundel, and further in view of Vangala et al. (US 2024/0419918 A1, hereinafter Vangala).

Regarding claim 5, Chen in view of Kundel does not specifically disclose wherein the LLM is trained using an enterprise context corresponding to the enterprise and a set of user contexts stored in a database.

Vangala teaches wherein the LLM is trained using an enterprise context corresponding to the enterprise and a set of user contexts stored in a database (para. 0069 “Method 600 begins with step 602. At step 602, training graph data outputs from a data graph are converted into a text data format that is readable by the LLM. In some examples, the data graph corresponds to the data graph 200 and the LLM corresponds to the LLM 368. The training graph data outputs generally correspond to the graph data outputs 330 and may be converted by the conversion processor 362 into the training data 342. The data graph has nodes and edges between the nodes, for example, as shown in FIG. 2 and described above. The nodes represent entities associated with an enterprise organization and the edges representing relationships among the entities.”; para. 0051 “In some examples, each node of the data graph 200 is associated with a set of embeddings at different granularity levels or “slices” of the data graph. As a first example, the set of embeddings may include a first embedding based on a user-level slice which represents all the entity interactions and knowledge at a user level.
These user-level embeddings are per-user and represent deeper level of user personalization, but may not always have context of a broader perspective.”; para. 0053 “The computing device 310 comprises a graph data store 326 configured to store graph data, such as the data graph 200, and a node processor 324 (corresponding to node processor 112, 122, 162).”).

Chen, Kundel, and Vangala are considered analogous to the claimed invention, as they are in the same field of prompting large language models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Chen in view of Kundel to incorporate the teachings of Vangala, so that the LLM is trained using an enterprise context corresponding to the enterprise and a set of user contexts stored in a database. Doing so would be beneficial, as it would enable the LLM to access enterprise-specific information and identify relevant information for the user without the user having to craft a challenging or time-consuming query (Vangala, para. 0001 and 0021).

Regarding claim 11, claim 11 is rejected for reasons analogous to claim 5.

Regarding claim 17, claim 17 is rejected for reasons analogous to claim 5.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Chen & Wong (US 2025/0014605 A1): prompt concepts and a tool API pool (Fig. 2)
Radu et al. (US 2024/0403560 A1): virtual assistance via LLM prompting and tool API calls (Fig. 2, para. 0031)
Blohm et al. (US 2024/0346255 A1): virtual assistance via LLM prompting, with permissions/privileges associated with the user's role (Fig. 3B, para. 0055)
Gore & Shen (US 2024/0320424 A1): use of user context in generating LLM prompts (Fig. 9, para. 0101)
Gardner (US 2024/0296279 A1): LLM prompting and use of LLM output to generate an action string for performing API calls (Fig. 8)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CODY DOUGLAS HUTCHESON, whose telephone number is (703) 756-1601. The examiner can normally be reached M-F 8:00 AM-5:00 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre-Louis Desir, can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/CODY DOUGLAS HUTCHESON/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659

Prosecution Timeline

Jul 15, 2024
Application Filed
Jan 12, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603096
VOICE ENHANCEMENT METHODS AND SYSTEMS
2y 5m to grant Granted Apr 14, 2026
Patent 12591750
GENERATIVE LANGUAGE MODEL UNLEARNING
2y 5m to grant Granted Mar 31, 2026
Patent 12579447
TECHNIQUES FOR TWO-STAGE ENTITY-AWARE DATA AUGMENTATION
2y 5m to grant Granted Mar 17, 2026
Patent 12537018
METHOD AND SYSTEM FOR PREDICTING A MENTAL CONDITION OF A SPEAKER
2y 5m to grant Granted Jan 27, 2026
Patent 12530529
DOMAIN-SPECIFIC NAMED ENTITY RECOGNITION VIA GRAPH NEURAL NETWORKS
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
62%
Grant Probability
99%
With Interview (+47.1%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
