Prosecution Insights
Last updated: April 18, 2026
Application No. 18/743,552

FUNCTION CALLING TO ENABLE MULTI-SOURCE DATA RETRIEVAL IN GENERATIVE ARTIFICIAL INTELLIGENCE SYSTEMS

Status: Non-Final OA (§102)
Filed: Jun 14, 2024
Examiner: COLUCCI, MICHAEL C
Art Unit: 2655
Tech Center: 2600 (Communications)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Grant Probability With Interview: 91%

Examiner Intelligence

Career Allow Rate: 76% (above average; 749 granted / 990 resolved; +13.7% vs TC avg)
Interview Lift: +15.3% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 1m avg prosecution; 41 applications currently pending
Career History: 1031 total applications across all art units
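The headline allowance figure is consistent with the panel's raw counts; a quick arithmetic check (counts taken from the panel above, nothing else assumed):

```python
# Career allowance rate from the panel's raw counts: 749 granted of 990 resolved.
granted = 749
resolved = 990
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # prints: 75.7% (rounded to 76% in the panel)
```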

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 6.0% (-34.0% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 990 resolved cases.
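Each "vs TC avg" delta implies a Tech Center baseline directly (baseline = examiner rate minus delta); a small sketch recovering those baselines from the chart's figures:

```python
# Each statute's examiner rate (%) and its reported delta vs the Tech Center average.
stats = {
    "101": (14.2, -25.8),
    "103": (59.2, +19.2),
    "102": (8.5, -31.5),
    "112": (6.0, -34.0),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # rate = tc_avg + delta, so the baseline is rate - delta
    print(f"§{statute}: TC avg ≈ {tc_avg:.1f}%")
```

All four statutes imply the same 40.0% baseline, suggesting the deltas were computed against a single pooled Tech Center figure (an inference from the arithmetic, not something the report states).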

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Note: The claims are not directed towards patent-ineligible subject matter under 35 U.S.C. 101.

Step 1: IS THE CLAIM DIRECTED TO A PROCESS, MACHINE, MANUFACTURE OR COMPOSITION OF MATTER? Yes.

Step 2A.1: IS THE CLAIM DIRECTED TO A LAW OF NATURE, A NATURAL PHENOMENON (PRODUCT OF NATURE) OR AN ABSTRACT IDEA? No.

Step 2A.2: DOES THE CLAIM RECITE ADDITIONAL ELEMENTS THAT INTEGRATE THE JUDICIAL EXCEPTION INTO A PRACTICAL APPLICATION? Yes. The claims seek to improve LLMs by using RAG modeling/concepts to improve accuracy and reliability for out-of-domain responses, supported by the specification at ¶¶ 0020, 0023, and 0026 and reflected by the claims. In other words, the claims enable the invention to improve and use LLMs in a non-generic way by handling multiple data sources and out-of-domain functions, thereby providing a practical application.

This is supported by the following: In Finjan Inc. v. Blue Coat Systems, Inc., 879 F.3d 1299, 125 USPQ2d 1282 (Fed. Cir. 2018), the claimed invention was a method of virus scanning that scans an application program, generates a security profile identifying any potentially suspicious code in the program, and links the security profile to the application program. 879 F.3d at 1303-04, 125 USPQ2d at 1285-86. The Federal Circuit noted that the recited virus screening was an abstract idea, and that merely performing virus screening on a computer does not render the claim eligible. 879 F.3d at 1304, 125 USPQ2d at 1286. The court then continued with its analysis under part one of the Alice/Mayo test by reviewing the patent's specification, which described the claimed security profile as identifying both hostile and potentially hostile operations.
The court noted that the security profile thus enables the invention to protect the user against both previously unknown viruses and "obfuscated code," as compared to traditional virus scanning, which only recognized the presence of previously-identified viruses. The security profile also enables more flexible virus filtering and greater user customization. 879 F.3d at 1304, 125 USPQ2d at 1286. The court identified these benefits as improving computer functionality, and verified that the claims recite additional elements (e.g., specific steps of using the security profile in a particular way) that reflect this improvement. Accordingly, the court held the claims eligible as not being directed to the recited abstract idea. 879 F.3d at 1304-05, 125 USPQ2d at 1286-87. This analysis is equivalent to the Office's analysis of determining that the additional elements integrate the judicial exception into a practical application at Step 2A Prong Two, and thus that the claims were not directed to the judicial exception (Step 2A: NO).

Examples of claims that improve technology and are not directed to a judicial exception include: Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1339, 118 USPQ2d 1684, 1691-92 (Fed. Cir. 2016) (claims to a self-referential table for a computer database were directed to an improvement in computer capabilities and not directed to an abstract idea); McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1315, 120 USPQ2d 1091, 1102-03 (Fed. Cir. 2016) (claims to automatic lip synchronization and facial expression animation were directed to an improvement in computer-related technology and not directed to an abstract idea); Visual Memory LLC v. NVIDIA Corp., 867 F.3d 1253, 1259-60, 123 USPQ2d 1712, 1717 (Fed. Cir. 2017) (claims to an enhanced computer memory system were directed to an improvement in computer capabilities and not an abstract idea); Finjan Inc. v. Blue Coat Systems, Inc., 879 F.3d 1299, 125 USPQ2d 1282 (Fed. Cir. 2018) (claims to virus scanning were found to be an improvement in computer technology and not directed to an abstract idea); SRI Int'l, Inc. v. Cisco Systems, Inc., 930 F.3d 1295, 1303 (Fed. Cir. 2019) (claims to detecting suspicious activity by using network monitors and analyzing network packets were found to be an improvement in computer network technology and not directed to an abstract idea). Additional examples are provided in MPEP § 2106.05(a).

Regarding the December 5, 2025 Memo, in light of the September 26, 2025 Appeals Review Panel Decision in Ex parte Desjardins, Appeal 2024-000567, for Application 16/319,040, in deciding whether a recited abstract idea does or does not direct the entire claim to an abstract idea when the claim is considered as a whole: Paragraph 21 of the Specification, which the Appellant cites, identifies improvements in training the machine learning model itself. Of course, such an assertion in the Specification alone is insufficient to support a patent eligibility determination, absent a subsequent determination that the claim itself reflects the disclosed improvement. See MPEP § 2106.05(a) (citing Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316 (Fed. Cir. 2016)). Here, however, we are persuaded that the claims reflect such an improvement. For example, one improvement identified in the Specification is to "effectively learn new tasks in succession whilst protecting knowledge about previous tasks." Spec. ¶ 21. The Specification also recites that the claimed improvement allows artificial intelligence (AI) systems to "us[e] less of their storage capacity" and enables "reduced system complexity." Id.
When evaluating the claim as a whole, we discern at least the following limitation of independent claim 1 that reflects the improvement: "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task." We are persuaded that this constitutes an improvement to how the machine learning model itself operates, and not, for example, the identified mathematical calculation.

Under a charitable view, the overbroad reasoning of the original panel below is perhaps understandable given the confusing nature of existing § 101 jurisprudence, but it is troubling, because this case highlights what is at stake. Categorically excluding AI innovations from patent protection in the United States jeopardizes America's leadership in this critical emerging technology. Yet, under the panel's reasoning, many AI innovations are potentially unpatentable, even if they are adequately described and nonobvious, because the panel essentially equated any machine learning with an unpatentable "algorithm" and the remaining additional elements as "generic computer components," without adequate explanation. Dec. 24. Examiners and panels should not evaluate claims at such a high level of generality.

Specifically, Ex parte Desjardins explained the following: Enfish ranks among the Federal Circuit's leading cases on the eligibility of technological improvements. In particular, Enfish recognized that "[m]uch of the advancement made in computer technology consists of improvements to software that, by their very nature, may not be defined by particular physical features but rather by logical structures and processes." 822 F.3d at 1339. Moreover, because "[s]oftware can make non-abstract improvements to computer technology, just as hardware improvements can," the Federal Circuit held that the eligibility determinations should turn on whether "the claims are directed to an improvement to computer functionality versus being directed to an abstract idea." Id. at 1336. (Desjardins, page 8).

Further, in Ex parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), the claimed invention was a method of training a machine learning model on a series of tasks. The Appeals Review Panel (ARP) overall credited benefits including reduced storage, reduced system complexity and streamlining, and preservation of performance attributes associated with earlier tasks during subsequent computational tasks as technological improvements that were disclosed in the patent application specification. Specifically, the ARP upheld the Step 2A Prong One finding that the claims recited an abstract idea (i.e., a mathematical concept). In Step 2A Prong Two, the ARP then determined that the specification identified improvements as to how the machine learning model itself operates, including training a machine learning model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of "catastrophic forgetting" encountered in continual learning systems. Importantly, the ARP evaluated the claims as a whole in discerning that at least the limitation "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task" reflected the improvement disclosed in the specification.
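The Desjardins limitation quoted above describes continual-learning regularization: optimizing for a second task while protecting performance on the first. As a purely illustrative sketch (our own, not drawn from the Desjardins record), an elastic-weight-consolidation-style update protects first-task parameters by penalizing their movement in proportion to a per-parameter importance weight:

```python
# Illustrative only: EWC-style update that protects first-task parameters.
# theta: current parameters; theta_a: parameters learned on task A;
# importance: per-parameter importance for task A; grad_b: gradient on task B.
def ewc_step(theta, theta_a, importance, grad_b, lr=0.1, lam=1.0):
    updated = []
    for t, ta, f, g in zip(theta, theta_a, importance, grad_b):
        # Task-B gradient plus a pull back toward the task-A value,
        # scaled by how important that parameter was to task A.
        updated.append(t - lr * (g + lam * f * (t - ta)))
    return updated

# Same task-B gradient on both parameters, but only the first is protected.
theta = ewc_step([1.2, 2.2], [1.0, 2.0], [10.0, 0.0], [0.5, 0.5])
# The protected parameter is pulled back toward its task-A value,
# while the unprotected one simply follows the task-B gradient.
```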
Accordingly, the claims as a whole integrated what would otherwise be a judicial exception into a practical application at Step 2A Prong Two, and therefore the claims were patent eligible.

The claim itself does not need to explicitly recite the improvement described in the specification (e.g., "thereby increasing the bandwidth of the channel"). See, e.g., Ex parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), in which the specification identified the improvement to machine learning technology by explaining how the machine learning model is trained to learn new tasks while protecting knowledge about previous tasks to overcome the problem of "catastrophic forgetting," and the claims reflected the improvement identified in the specification. Indeed, enumerated improvements identified in the Desjardins specification included disclosures of the effective learning of new tasks in succession in connection with specifically protecting knowledge concerning previously accomplished tasks; allowing the system to reduce use of storage capacity; and the enablement of reduced complexity in the system. Such improvements were tantamount to how the machine learning model itself would function in operation and therefore not subsumed in the identified mathematical calculation.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C.
102(a)(2) as being anticipated by US 2024/0404687 A1 (Bell, Joshua Michael, et al., hereinafter Bell).

Re claim 1, Bell teaches:

1. A method comprising: (fig. 5b)

receiving, from a user compute system, conversation history data including a query; (0152, 0379, conversation history shown in fig. 5b with figs. 18c-f)

generating, by a retrieval augmented generative (RAG) assistant, a function selection instruction prompt for a large language model (LLM), the function selection instruction prompt including: (0317 RAG + LLM; 0152, 0379, conversation history shown in fig. 5b with figs. 18c-f)

the conversation history data; (0152, 0379, conversation history shown in fig. 5b with figs. 18c-f)

a function list including function definitions that each correspond to a data source and include a function descriptor that describes content of the data source; and (using figs. 5b and 18a-f, e.g. elements 1834b and 1834c, showing various descriptors/parameters, user prompts, queries, data sources, and user selection thereof; function calls 0349 as well as user prompt guidance 0109; 0317 RAG + LLM; 0152, 0379, conversation history shown in fig. 5b with figs. 18c-f)

instructions directing the LLM to return a function call to at least one function on the function list identified as relevant to the conversation history data based on the corresponding function descriptor; and (using figs. 5b and 18a-f, with conversation history as an optional basis, showing various descriptors/parameters, user prompts, queries, data sources, and user selection thereof; function calls 0349 as well as user prompt guidance 0109; 0317 RAG + LLM; 0152, 0379, conversation history shown in fig. 5b with figs. 18c-f)

receiving, at the RAG assistant, a function selection response from the LLM including the function call; (the user can select functions, define parameters, and engage in prompt execution, utilizing 0317 RAG + LLM; 0152, 0379, conversation history shown in fig. 5b with figs. 18c-f; figs. 5b and 18a-f; function calls 0349 as well as user prompt guidance 0109)

based on the at least one function identified within the function selection response, selecting a conditional operation from multiple defined conditional operations; and (as in figs. 18e-f, operations are expressly defined and the user can select functions, define parameters, and engage in prompt execution; see the claim 1 citations above)

executing, by the RAG assistant, the conditional operation. (fig. 17 demonstrates the overview of a RAG agent/assistant, and as in figs. 18e-f, operations are expressly defined and the user can select functions, define parameters, and engage in prompt execution; see the claim 1 citations above)

Re claim 10, this claim is rejected as a broader or narrower representation of claim 1 based on the general inclusion or omission of hardware alone (e.g. processor, memory, instructions), otherwise amounting to a virtually identical scope.

Re claims 2 and 11, Bell teaches 2.
The method of claim 1, wherein the function list includes an out-of-domain function with a function descriptor that instructs the LLM to select the out-of-domain function in response to determining that no other function on the function list is relevant to the conversation history data. (various domains which can change to a new one, where "new" is analogously defined as out-of-domain per se within a conversation, 0108, 0122; as in figs. 18e-f, operations are expressly defined and the user can select functions, define parameters, and engage in prompt execution; see the claim 1 citations above)

Re claims 3 and 12, Bell teaches 3. The method of claim 2, wherein the conditional operation includes: transmitting a response to the user compute system in response to determining that the function selection response identifies the out-of-domain function, wherein the response indicates that the query could not be answered using available data sources. (the user sees a context description where the system notifies the user of the contextual shift in a response, as in 1838 "in the context…:"; out-of-domain per se, 0108, 0122; see the claim 1 and claim 2 citations above)

Re claims 4 and 13, Bell teaches 4. The method of claim 3, wherein the function selection instruction prompt further includes potentially-relevant data chunks mined from one or more data sources and the function list includes a particular function with a function descriptor that instructs the LLM to select the particular function in response to determining that the potentially-relevant data chunks are usable to answer the query. (chunks 0150, selecting most relevant 0156, with figs. 9a-d taken from a variety of data sources, e.g. fig. 9c; see the claim 1 citations above)

Re claim 5, Bell teaches 5. The method of claim 1, wherein the method further comprises: receiving, from a user compute system, identification of a group of approved data sources, wherein each data source of the group of approved data sources corresponds to a function on the function list. (the user selects a variety of data sources, e.g. fig. 9c; see the claim 1 citations above)

Re claims 6 and 14, Bell teaches 6. The method of claim 1, further comprising: determining that the function call identifies a function corresponding to a select data source; (as in figs. 18e-f, operations are expressly defined and the user can select functions, define parameters, and engage in prompt execution; see the claim 1 citations above) retrieving, from the select data source, potentially-relevant data chunks satisfying similarity criteria with the conversation history data; (chunks 0150, selecting most relevant 0156, figs. 9a-d, e.g. fig. 9c; see the claim 1 citations above) transmitting, to the LLM, a context-enhanced query that includes: the query; (context based on user intent and scope of data, e.g. patient condition versus pathology analysis; see the claim 1 citations above) context data that includes the potentially-relevant data chunks; and an instruction directing the LLM to answer the context-enhanced query based on the context data; and (instructions, query, summary 0175; chunks 0150, selecting most relevant 0156, figs. 9a-d; see the claim 1 citations above) receiving, from the LLM, an answer to the context-enhanced query; and displaying, on a user display, a response that is based on the answer. (the user not only engages in parameter definitions and data sourcing, but active prompt and response engagement; see the claim 1 citations above)

Re claims 7 and 15, Bell teaches 7.
The method of claim 1, wherein the instructions direct the LLM to return a function call to one or multiple functions that the LLM identifies as relevant to the conversation history data, and wherein the method further comprises: in response to determining that the function selection response includes multiple function calls identifying multiple different data sources: (as in figs. 18e-f, operations are expressly defined and the user can select functions, define parameters, and engage in prompt execution; see the claim 1 citations above) retrieving, from each of the multiple different data sources, potentially-relevant data chunks satisfying similarity criteria with the query; (chunks 0150, selecting most relevant 0156, figs. 9a-d, e.g. fig. 9c; see the claim 1 citations above) defining, based on the potentially-relevant data chunks retrieved with respect to each of the multiple different data sources, a set of most relevant data chunks; and (chunks 0150, selecting most relevant 0156, figs. 9a-d; see the claim 1 citations above) generating a context-enhanced query that includes the set of most relevant data chunks, the query, and an instruction to answer the query using the set of most relevant data chunks; (instructions, query, summary 0175; chunks 0150, selecting most relevant 0156, figs. 9a-d; see the claim 1 citations above) receiving, from the LLM, an answer to the context-enhanced query; and (back and forth between LLM and user in figs. 18e-f, where operations are expressly defined and the user can select functions, define parameters, and engage in prompt execution) displaying, on a user display, a response that is based on the answer. (display thereof; back and forth between LLM and user in figs. 18e-f)

Re claims 8 and 16, Bell teaches 8. The method of claim 7, wherein generating the set of most relevant data chunks further comprises: ranking the potentially-relevant data chunks retrieved with respect to each of the multiple different data sources based on respective similarity to the conversation history data; (chunks 0150, selecting most relevant 0156, figs. 9a-d; see the claim 1 citations above) selecting a number N of highest-ranked data chunks. (chunks 0150, selecting most relevant 0156, figs. 9a-d, e.g. fig. 9c; "most relevant" is a ranking of N per se under BRI)

Re claims 9 and 17, Bell teaches 9. The method of claim 7, wherein generating the set of most relevant data chunks further comprises: generating, via a trained summarization model, summaries of the potentially-relevant data chunks; (a summary mode per se 0166, with instructions, query, summary 0175; chunks 0150, selecting most relevant 0156, figs. 9a-d; see the claim 1 citations above) generating one or more combined data chunks that each includes two or more of the summaries concatenated together, wherein the set of most relevant data chunks includes the one or more combined data chunks. (in figs. 12a-d we see combined summaries as context evolves to append new summaries as the user injects more queries/prompts into the dialogue; a summary mode per se 0166, with 0175, 0150, 0156; see the claim 1 citations above)

Re claim 18, Bell teaches 18. One or more tangible computer-readable storage media encoding instructions for executing a computer process, the computer process comprising: (fig. 5b) receiving, from a user compute system, conversation history data including a query; (0152, 0379, conversation history shown in fig. 5b with figs. 18c-f) generating, by a retrieval augmented generative (RAG) assistant, a function selection instruction prompt for a large language model (LLM), the function selection instruction prompt including: (0317 RAG + LLM; 0152, 0379; figs. 5b, 18c-f) the conversation history data; (0152, 0379, conversation history shown in fig. 5b with fig.
18c-f) a function list defining: a first set of function definitions, each function definition in the first set corresponding to a data source and including a function descriptor that describes content of the data source; and (using figs. 5b and 18a-f, e.g. elements 1834b and 1834c, showing various descriptors/parameters, user prompts, queries, data sources, and user selection thereof; function calls 0349 as well as user prompt guidance 0109; 0317 RAG + LLM; 0152, 0379; see the claim 1 citations above) an out-of-domain function with a function descriptor that instructs the LLM to select the out-of-domain function in response to determining that no other function on the function list is relevant to the conversation history data; (various domains which can change to a new one, where "new" is analogously defined as out-of-domain per se, 0108, 0122; see the claim 1 and claim 2 citations above) instructions directing the LLM to return a function call to at least one function on the function list identified as relevant to the conversation history data based on the corresponding function descriptor; and (figs. 5b and 18a-f, e.g. elements 1834b and 1834c; see the claim 1 citations above) receiving, at the RAG assistant, a function selection response from the LLM including a call to the out-of-domain function; (0108, 0122; see the claim 2 citations above) transmitting a response to the user compute system in response to determining that the function selection response identifies the out-of-domain function, the response indicating that the query could not be answered using available data sources. (the user sees a context description where the system notifies the user of the contextual shift in a response, as in 1838 "in the context…:"; 0108, 0122; see the claim 2 and claim 3 citations above)

Re claim 19, Bell teaches 19. The one or more tangible computer-readable storage media of claim 18, wherein the function selection instruction prompt further includes potentially-relevant data chunks mined from one or more data sources and the function list includes a particular function with a function descriptor that instructs the LLM to select the particular function in response to determining that the potentially-relevant data chunks are usable to answer the query. (chunks 0150, selecting most relevant 0156, figs. 9a-d, e.g. fig. 9c; see the claim 1 citations above)

Re claim 20, Bell teaches 20. The one or more tangible computer-readable storage media of claim 19, transmitting, to the LLM, a context-enhanced query in response to determining that the function selection response identifies the particular function, the context-enhanced query including: the query, context data including the potentially-relevant data chunks, and an instruction directing the LLM to answer the context-enhanced query based on the context data. (chunks 0150, selecting most relevant 0156, figs. 9a-d, e.g. fig. 9c; as in figs. 18e-f; utilizing 0317 RAG + LLM; 0152, 0379, conversation history shown in fig. 5b with fig.
5b and 18a-f using conversation history as an optional basis, showing various descriptors/parameters, user prompts, queries, data sources, and user selection thereof, function calls 0349 as well as user prompt guidance 0109) Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20240404703 A1 COLLEY C S et al. LLM + RAG modeling for use GUI Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C COLUCCI whose telephone number is (571)270-1847. The examiner can normally be reached on M-F 9 AM - 5 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders can be reached at (571)272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL COLUCCI/Primary Examiner, Art Unit 2655 (571)-270-1847 Examiner FAX: (571)-270-2847 Michael.Colucci@uspto.gov
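Read as a system design, rejected claim 18 specifies a concrete control flow: the RAG assistant builds a function-selection prompt containing the conversation history, a list of function definitions whose descriptors describe each backing data source, and a fallback out-of-domain function; the LLM returns a function call; and if that call names the out-of-domain function, the assistant tells the user the query cannot be answered from available sources. A minimal Python sketch of that flow follows; all identifiers and the `call_llm` stub are illustrative assumptions, not code from the application or from the Bell reference.

```python
from dataclasses import dataclass

@dataclass
class FunctionDef:
    name: str
    descriptor: str  # describes the content of the backing data source

# Fallback entry the LLM is instructed to pick when nothing else fits
OUT_OF_DOMAIN = FunctionDef(
    name="out_of_domain",
    descriptor="Select this function only if no other function on the "
               "list is relevant to the conversation history.",
)

def build_selection_prompt(history: str, function_list) -> str:
    """Assemble the function-selection instruction prompt of claim 18."""
    listing = "\n".join(f"- {f.name}: {f.descriptor}" for f in function_list)
    return (
        f"Conversation history:\n{history}\n\n"
        f"Available functions:\n{listing}\n"
        "Return a call to the single most relevant function."
    )

def handle_query(history: str, function_list, call_llm) -> str:
    """One turn of the claimed flow; call_llm returns a function name."""
    prompt = build_selection_prompt(history, function_list + [OUT_OF_DOMAIN])
    selected = call_llm(prompt)
    if selected == OUT_OF_DOMAIN.name:
        # Claim 18's final limitation: report that no data source applies
        return "The query could not be answered using available data sources."
    return f"dispatch:{selected}"  # placeholder for retrieval + answering
```

In a real system `call_llm` would be a model invocation with structured function-calling output; the sketch only shows how the out-of-domain descriptor turns "no relevant source" into an explicit, user-visible response rather than a hallucinated answer.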

Prosecution Timeline

Jun 14, 2024
Application Filed
Dec 23, 2025
Non-Final Rejection — §102
Mar 25, 2026
Applicant Interview (Telephonic)
Mar 25, 2026
Examiner Interview Summary
Mar 30, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592240
ENCODING AND DECODING OF ACOUSTIC ENVIRONMENT
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586570
CHUNK-WISE ATTENTION FOR LONGFORM ASR
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12573405
WORD CORRECTION USING AUTOMATIC SPEECH RECOGNITION (ASR) INCREMENTAL RESPONSE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573380
MANAGING AMBIGUOUS DATE MENTIONS IN TRANSFORMING NATURAL LANGUAGE TO A LOGICAL FORM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12567414
SYSTEM AND METHOD FOR DETECTING A WAKEUP COMMAND FOR A VOICE ASSISTANT
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
91%
With Interview (+15.3%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 990 resolved cases by this examiner. Grant probability derived from career allow rate.
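The projection figures above are internally consistent with the examiner data on this page: the 76% grant probability is the career allow rate (749 granted of 990 resolved cases), and the 91% with-interview figure adds the +15.3 percentage-point interview lift to that baseline. A quick arithmetic check (variable names are ours, not from the tool):

```python
granted, resolved = 749, 990      # examiner's career totals shown above
allow_rate = granted / resolved   # baseline grant probability
interview_lift = 0.153            # +15.3 percentage points, as a fraction

baseline_pct = round(allow_rate * 100)                      # 76
with_interview_pct = round((allow_rate + interview_lift) * 100)  # 91
```

So both headline percentages are derived from the same 990-case sample rather than being independent estimates.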
