Prosecution Insights
Last updated: April 19, 2026
Application No. 18/540,465

CONVERSATIONAL LANGUAGE MODEL BASED CONTENT RETRIEVAL

Non-Final OA: §101, §103
Filed: Dec 14, 2023
Examiner: ADESANYA, OLUJIMI A
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: Amazon Technologies, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
OA Rounds: 1-2
To Grant: 3y 6m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 66%, above average (430 granted / 655 resolved; +3.6% vs TC avg)
Interview Lift: +25.5%, strong (resolved cases with interview vs without)
Typical Timeline: 3y 6m avg prosecution; 35 applications currently pending
Career History: 690 total applications across all art units
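The headline figures above follow from the raw counts. A minimal sketch, assuming (this is not stated by the report) that the 91% "With Interview" figure is simply the career allow rate plus the reported interview lift:

```python
# Recompute the headline examiner stats from the raw counts shown above.
granted, resolved = 430, 655
allow_rate = granted / resolved   # career allow rate
interview_lift = 0.255            # reported +25.5% lift (assumed additive)

print(f"Career allow rate: {allow_rate:.0%}")                    # 66%
print(f"With interview:    {allow_rate + interview_lift:.0%}")   # 91%
```

Both printed values match the dashboard (66% and 91%), which supports the additive reading of the lift figure.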

Statute-Specific Performance

§101: 19.3% (-20.7% vs TC avg)
§103: 40.6% (+0.6% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 655 resolved cases.
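As a quick sanity check on the table, subtracting each statute's "vs TC avg" delta from its rate recovers the Tech Center baseline; notably, all four rows imply the same ~40% estimate, consistent with a single average line in the original chart:

```python
# Recover the implied Tech Center average from each statute's
# rate and its reported delta (rate - delta = baseline).
rows = {
    "101": (19.3, -20.7),
    "103": (40.6, +0.6),
    "102": (17.7, -22.3),
    "112": (12.9, -27.1),
}

for statute, (rate, delta) in rows.items():
    tc_avg = round(rate - delta, 1)
    print(f"§{statute}: implied TC avg = {tc_avg}%")
```

Every row yields 40.0%, suggesting the deltas were all computed against one shared baseline rather than per-statute averages.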

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of query/input analysis without significantly more. Claims 1, 5 and 13 recite steps of:
- receiving first query data comprising a first natural language request to recommend content of a first type/related to content (i.e., a data gathering step);
- performing a lookup using the first query data to determine a first action identifier/data associated with a first recommendation action/query data (i.e., a data evaluation/analysis step);
- generating, by a large language model (LLM) orchestrator, first prompt data comprising the first query data and the first recommendation action, wherein the first prompt data instructs a first LLM to recognize entities in the first query data relevant to the first recommendation action (i.e., a data evaluation/analysis step);
- determining, by the first LLM using the first prompt data, a first recognized entity in the first natural language request, wherein the first recognized entity corresponds to the content of the first type (i.e., a data evaluation/analysis step);
- sending, by the first LLM to the LLM orchestrator, a request to resolve the first recognized entity (i.e., a data evaluation/analysis step);
- determining, using a first keyword resolver tool, a first content identifier related to a first instance of the content of the first type and a second content identifier related to a second instance of the content of the first type (i.e., a data evaluation/analysis/judgement step);
- generating, by the LLM orchestrator, second prompt data comprising the first content identifier and the second content identifier (i.e., a data evaluation/analysis step);
- generating, by the first LLM using the second prompt data, first instructions to perform the first recommendation action using the first content identifier and the second content identifier (i.e., a data evaluation/analysis step);
- receiving, by an action execution component, the first instructions (i.e., a data evaluation/analysis step); and
- executing the first recommendation action by the action execution component, wherein the executing the first recommendation action comprises causing an output of a list comprising data representing the first instance of the content of the first type and data representing the second instance of the content of the first type (i.e., a post-solutional step of outputting data as a result of analysis).

These steps are achievable by a human in manually/mentally analyzing data to determine an output to provide, and as such, the steps correspond to the mental processes category of abstract ideas.

This judicial exception is not integrated into a practical application because the claims are directed to an abstract idea with additional generic computer elements, where the generically recited computer elements (computer-implemented, LLM, LM, system, processor, memory) do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps “generating, by the first LM based at least in part on the second prompt data, first instructions to perform at least one action associated with the first action data using the first resolved entity; and generating output data associated with the first resolved entity based at least in part on the first instructions”, and “executing the first recommendation action by the action execution component, wherein the executing the first recommendation action comprises causes an output of a list comprising data representing the first instance of the content of the first type and data representing the second instance of the content of the first type”, correspond to the well-understood, routine, conventional computer functions of “Gathering and analyzing information using conventional techniques and displaying the result,” and “collecting information, analyzing it, and displaying certain results of the collection and analysis” as recognized by the court decisions listed in MPEP § 2106.05 and as provided by cited references Karri and Krishnan (PTO-892 form). The dependent claims also recite mental processes and do not add significantly more than the abstract idea, and as such are similarly rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

1. Claims 5, 6, 12-14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Karri et al., US 2025/0077263 A1 (“Karri”), in view of Miller et al., US 2024/0289545 A1 (“Miller”).

Per claim 5, Karri discloses a method comprising:
- receiving first query data comprising a first request related to first content (para. [0035]; a user data input is received as a natural language request entered via a user interface …, para. [0066]);
- determining, using the first query data, first action data associated with the first query data (fig. 6, element 615; At operation 505, a user data input is received as a natural language request entered via a user interface (e.g., the conversational planning user interface 105 as described in FIG. 1, etc.). At operation 410 a common AI service (e.g., the common AI service 110 as described in FIG. 1, etc.) initiates intent classification of the data input to identify an update intent…. At operation 520 intents with a probability within a threshold (e.g., top-k intents, etc.) are retrieved based on results from the vector database query to generate an intent candidate set…., para. [0066]);
- generating first prompt data comprising a representation of the first query data, wherein the first prompt data instructs a first language model (LM) to recognize entities in the first query data relevant to the first action data (fig. 6, elements 615, 620; para. [0039]; At operation 220, an orchestrator (e.g., the orchestrator 155 as described in FIG. 1, etc.) selects and executes appropriate agents based on an identified intent…. by way of example and not limitation, an entity resolution agent that extracts and maps entities from user utterances (e.g., requests, etc.) …, para. [0050]; At operation 810 an entity resolution agent (e.g., the entity resolution agent 310 as described in FIG. 3, etc.) initiates an entity extraction and mapping process. At operation 815, potential entities are identified in the user request.…, para. [0079]);
- determining, by the first LM based at least in part on the first prompt data, a first recognized entity from the first request, wherein the first recognized entity is associated with the first content (At operation 220, an orchestrator (e.g., the orchestrator 155 as described in FIG. 1, etc.) selects and executes appropriate agents based on an identified intent. The orchestrator selects the agents from an agent library … an entity resolution agent that extracts and maps entities from user utterances …, para. [0050]; para. [0066]; para. [0079]; para. [0088]-[0089]);
- generating, by the first LM, a request to resolve the first recognized entity (fig. 3, element 310; fig. 8, elements 840, 845; an entity resolution agent that extracts and maps entities from user utterances (e.g., requests, etc.) …, para. [0050]; para. [0060]; At operation 815, potential entities are identified in the user request.… if multiple results are returned for a specific phrase, options are provided to the user interface for user disambiguation. At operation 845, matched entities are mapped to corresponding system entities.…, para. [0079]);
- determining a first resolved entity for the first recognized entity (para. [0050]; para. [0060]; para. [0070]; At operation 815, potential entities are identified in the user request.… if multiple results are returned for a specific phrase, options are provided to the user interface for user disambiguation. At operation 845, matched entities are mapped to corresponding system entities.…, para. [0079]);
- generating second prompt data comprising the first resolved entity (para. [0023]; para. [0050]-[0051]; para. [0057]; an update agent (e.g., the update agent 315 as described in FIG. 3, etc.) creates a structured query based on the identified intent and resolved entities …, para. [0060]; para. [0080]-[0081]);
- generating, by the first LM based at least in part on the second prompt data, first instructions to perform at least one action associated with the first action data using the first resolved entity (an entity resolution agent that extracts and maps entities from user utterances (e.g., requests, etc.), an update agent that creates update queries based on user input and entities …, para. [0050]-[0051]; para. [0053]; para. [0060]-[0063]); and
- generating output data associated with the first resolved entity based at least in part on the first instructions (At operation 230, results generated in the virtual container are output to the user interface …, para. [0052]; para. [0060]-[0063]).

Karri does not explicitly disclose generating first prompt data comprising data representing the first action data. However, this feature is taught by Miller (para. [0079]; a solution plan (e.g., 156) may be generated based on the goal (e.g., 154) and the user prompt (112) …, para. [0080]; the solution plan (e.g., 156) that was generated by the plan creation component (e.g., 150) may be passed to the large language model (e.g., 130) for further processing. …, para. [0082]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of Miller with the method of Karri in arriving at the missing features of Karri, because such combination would have resulted in enhancing the quality and accuracy of solution plans (Miller, para. [0174]).

Per claim 6, Karri in view of Miller discloses the method of claim 5. Karri discloses performing a lookup of a first data store using the first query data to retrieve the first action data, wherein the first action data comprises a set of predefined tasks defined for compliance with a set of rules (para. [0035]; para. [0066]; The orchestrator 155 integrates the composite actions into an overall workflow for execution of tasks for intents detected from user requests …, para. [0076]).

Per claim 12, Karri in view of Miller discloses the method of claim 5. Karri discloses:
- receiving the first query data by a first natural language processing system (para. [0035]; a user data input is received as a natural language request entered via a user interface …, para. [0066]);
- determining, by a second LM of the first natural language processing system, a first domain associated with the first query data (fig. 3; fig. 7; para. [0037]-[0039]); and
- sending the first query data to a domain-specific processing system associated with the first domain, the domain-specific processing system comprising the first LM (fig. 3; para. [0037]-[0039]).

Per claim 13, Karri discloses a system comprising: at least one processor (para. [0095]-[0096]), and non-transitory computer-readable memory storing instructions that, when executed by the at least one processor (para. [0095]-[0096]), are effective to:
- receive first query data comprising a first request related to first content (para. [0035]; a user data input is received as a natural language request entered via a user interface …, para.
[0066]);
- determine, using the first query data, first action data associated with the first query data (fig. 5, elements 505, 520; para. [0055]-[0056]; At operation 520 intents with a probability within a threshold (e.g., top-k intents, etc.) are retrieved based on results from the vector database query to generate an intent candidate set.…, para. [0066]);
- generate first prompt data comprising a representation of the first query data, wherein the first prompt data instructs a first language model (LM) to recognize entities in the first query data relevant to the first action data (fig. 6, elements 615, 620; para. [0039]; At operation 220, an orchestrator (e.g., the orchestrator 155 as described in FIG. 1, etc.) selects and executes appropriate agents based on an identified intent…. by way of example and not limitation, an entity resolution agent that extracts and maps entities from user utterances (e.g., requests, etc.) …, para. [0050]; At operation 810 an entity resolution agent (e.g., the entity resolution agent 310 as described in FIG. 3, etc.) initiates an entity extraction and mapping process. At operation 815, potential entities are identified in the user request.…, para. [0079]);
- determine, by the first LM based at least in part on the first prompt data, a first recognized entity from the first request, wherein the first recognized entity is associated with the first content (At operation 220, an orchestrator (e.g., the orchestrator 155 as described in FIG. 1, etc.) selects and executes appropriate agents based on an identified intent. The orchestrator selects the agents from an agent library … an entity resolution agent that extracts and maps entities from user utterances …, para. [0050]; para. [0066]; para. [0079]; para. [0088]-[0089]);
- generate a request to resolve the first recognized entity (fig. 3, element 310; fig. 8, elements 840, 845; an entity resolution agent that extracts and maps entities from user utterances (e.g., requests, etc.) …, para. [0050]; para. [0060]);
- determine a first resolved entity for the first recognized entity (para. [0050]; para. [0060]; para. [0070]; At operation 845, matched entities are mapped to corresponding system entities.…, para. [0079]);
- generate second prompt data comprising the first resolved entity (para. [0023]; para. [0050]-[0051]; para. [0057]; an update agent (e.g., the update agent 315 as described in FIG. 3, etc.) creates a structured query based on the identified intent and resolved entities …, para. [0060]; para. [0080]-[0081]);
- generate, by the first LM based at least in part on the second prompt data, first instructions to perform at least one action associated with the first action data using the first resolved entity (an entity resolution agent that extracts and maps entities from user utterances (e.g., requests, etc.), an update agent that creates update queries based on user input and entities …, para. [0050]-[0051]; para. [0053]; para. [0060]-[0063]); and
- generate output data associated with the first resolved entity based at least in part on the first instructions (At operation 230, results generated in the virtual container are output to the user interface …, para. [0052]; para. [0060]-[0063]).

Karri does not explicitly disclose generating first prompt data comprising data representing the first action data. However, this feature is taught by Miller (para. [0079]; a solution plan (e.g., 156) may be generated based on the goal (e.g., 154) and the user prompt (112) …, para. [0080]; the solution plan (e.g., 156) that was generated by the plan creation component (e.g., 150) may be passed to the large language model (e.g., 130) for further processing. …, para. [0082]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of Miller with the method of Karri in arriving at the missing features of Karri, because such combination would have resulted in enhancing the quality and accuracy of solution plans (Miller, para. [0174]).

Per claim 14, Karri in view of Miller discloses the system of claim 13, and the non-transitory computer-readable memory storing further instructions executed by the at least one processor (Karri, para. [0095]-[0096]). System claim 14 and method claim 6 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 14 is similarly rejected under the same rationale as applied above with respect to claim 6.

Per claim 20, Karri in view of Miller discloses the system of claim 13, and the non-transitory computer-readable memory storing further instructions executed by the at least one processor (Karri, para. [0095]-[0096]). System claim 20 and method claim 12 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to claim 12.

2. Claims 1-4, 7-11 and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Karri in view of Miller and Krishnan et al., US 2022/0093101 A1 (“Krishnan”).

Per claim 1, Karri discloses a computer-implemented method comprising:
- receiving first query data comprising a first natural language request (para. [0035]; a user data input is received as a natural language request entered via a user interface …, para. [0066]);
- performing a lookup using the first query data to determine a first action identifier associated with a first recommendation action (fig. 5; fig. 6; para. [0050]; para.
[0055]-[0056]; At operation 520 intents with a probability within a threshold (e.g., top-k intents, etc.) are retrieved based on results from the vector database query to generate an intent candidate set.…, para. [0066]);
- generating, by a large language model (LLM) orchestrator, first prompt data comprising the first query data, wherein the first prompt data instructs a first LLM to recognize entities in the first query data relevant to the first recommendation action (fig. 6, elements 615, 620; para. [0039]; At operation 220, an orchestrator (e.g., the orchestrator 155 as described in FIG. 1, etc.) selects and executes appropriate agents based on an identified intent…. by way of example and not limitation, an entity resolution agent that extracts and maps entities from user utterances (e.g., requests, etc.) …, para. [0050]; At operation 810 an entity resolution agent (e.g., the entity resolution agent 310 as described in FIG. 3, etc.) initiates an entity extraction and mapping process. At operation 815, potential entities are identified in the user request.…, para. [0079]);
- determining, by the first LLM using the first prompt data, a first recognized entity in the first natural language request (At operation 220, an orchestrator (e.g., the orchestrator 155 as described in FIG. 1, etc.) selects and executes appropriate agents based on an identified intent. The orchestrator selects the agents from an agent library … an entity resolution agent that extracts and maps entities from user utterances …, para. [0050]; para. [0066]; para. [0079]; para. [0088]-[0089]);
- sending, by the first LLM to the LLM orchestrator, a request to resolve the first recognized entity (fig. 3, element 310; fig. 8, elements 840, 845; an entity resolution agent that extracts and maps entities from user utterances (e.g., requests, etc.) …, para. [0050]; para. [0060]);
- generating, by the LLM orchestrator, second prompt data (para. [0023]; para. [0050]-[0051]; para. [0057]; an update agent (e.g., the update agent 315 as described in FIG. 3, etc.) creates a structured query based on the identified intent and resolved entities …, para. [0060]; para. [0080]-[0081]); and
- generating, by the first LLM using the second prompt data, first instructions (para. [0023]; para. [0050]-[0051]; para. [0057]; an update agent (e.g., the update agent 315 as described in FIG. 3, etc.) creates a structured query based on the identified intent and resolved entities …, para. [0060]; para. [0080]-[0081]).

Karri does not explicitly disclose generating first prompt data comprising the first recommendation action. However, this feature is taught by Miller (para. [0079]; a solution plan (e.g., 156) may be generated based on the goal (e.g., 154) and the user prompt (112) …, para. [0080]; the solution plan (e.g., 156) that was generated by the plan creation component (e.g., 150) may be passed to the large language model (e.g., 130) for further processing. …, para. [0082]).

Karri in view of Miller does not explicitly disclose: receiving first query data comprising a first natural language request to recommend content of a first type; wherein the first recognized entity corresponds to the content of the first type; generating second prompt data comprising the first content identifier and the second content identifier; generating, using the second prompt data, first instructions to perform the first recommendation action using the first content identifier and the second content identifier; receiving, by an action execution component, the first instructions; or executing the first recommendation action by the action execution component, wherein the executing the first recommendation action comprises causing an output of a list comprising data representing the first instance of the content of the first type and data representing the second instance of the content of the first type.
However, these features are taught by Krishnan:
- receiving first query data comprising a first natural language request to recommend content of a first type (fig. 14, element 1404; para. [0048]; para. [0058]; In the example “play songs by the stones,” …, para. [0225]; “tell me a recipe for pasta sauce,” …, para. [0254]);
- wherein the first recognized entity corresponds to the content of the first type (para. [0058]; In this manner, the NER component 862 identifies “slots” (each corresponding to one or more particular words in text data) that may be useful for later processing. The NER component 862 may also label each slot with a type …, para. [0186]-[0188]; para. [0193]; para. [0225]; para. [0254]);
- determining, using a first keyword resolver tool, a first content identifier related to a first instance of the content of the first type and a second content identifier related to a second instance of the content of the first type (para. [0186]-[0188]; para. [0225]; para. [0254]);
- generating second prompt data comprising the first content identifier and the second content identifier (fig. 9; para. [0074]; para. [0254]);
- generating, using the second prompt data, first instructions to perform the first recommendation action using the first content identifier and the second content identifier (fig. 9; para. [0074]; para. [0254]; para. [0356]);
- receiving, by an action execution component, the first instructions (fig. 2; para. [0069]; para. [0074]; The action selector 1118 determines an action to be performed in response to the user request …, para. [0303]-[0304]); and
- executing the first recommendation action by the action execution component, wherein the executing the first recommendation action comprises causing an output of a list comprising data representing the first instance of the content of the first type and data representing the second instance of the content of the first type (fig. 14; fig. 15F; para. [0074]; para. [0225]; para. [0303]-[0304]; para. [0337]; in response to a user command or otherwise, system 120 may send to device 110 a directive to output audio TTS audio data 1302 corresponding to a list of choices. The list may correspond to a variety of choices such as items to be selected, actions to be selected, or other examples …, para. [0356]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Miller with the method of Karri in arriving at the missing features of Karri, as well as to combine the teachings of Krishnan with the method of Karri in view of Miller in arriving at the missing features of Karri in view of Miller, because such combination would have resulted in enhancing the quality and accuracy of solution plans (Miller, para. [0174]), as well as in improving the likelihood that a speech recognition system will output hypotheses that make sense grammatically (Krishnan, para. [0041]; para. [0166]).

Per claim 2, Karri in view of Miller and Krishnan discloses the computer-implemented method of claim 1. Krishnan discloses:
- searching, by the first keyword resolver tool using the first recognized entity, first historical context data comprising a prior input natural language request and a prior response to the prior input natural language request (fig. 15F; para. [0282]; para. [0355]; para. [0387]-[0388]); and
- determining, by the first keyword resolver tool, that the prior response comprises the first instance of the content of the first type and the second instance of the content of the first type, wherein the determining the first content identifier and the second content identifier comprises resolving the first recognized entity using the first historical context data (para. [0387]-[0388]).

Per claim 3, Karri in view of Miller and Krishnan discloses the computer-implemented method of claim 1. Karri discloses:
- receiving second query data comprising a second natural language request (fig. 2; para.
[0053]);
- performing a lookup using the second query data to determine a second action identifier (fig. 5, elements 505, 520; para. [0053]; para. [0055]-[0056]; At operation 520 intents with a probability within a threshold (e.g., top-k intents, etc.) are retrieved based on results from the vector database query to generate an intent candidate set.…, para. [0066]);
- generating, by the LLM orchestrator, third prompt data comprising the second query data, wherein the third prompt data instructs the first LLM to recognize entities in the second query data (fig. 2; para. [0039]; At operation 220, an orchestrator (e.g., the orchestrator 155 as described in FIG. 1, etc.) selects and executes appropriate agents based on an identified intent…. by way of example and not limitation, an entity resolution agent that extracts and maps entities from user utterances (e.g., requests, etc.) …, para. [0050]; para. [0053]; para. [0066]; para. [0070]; para. [0079]);
- determining, by the first LLM using the third prompt data, a second recognized entity in the second natural language request (fig. 2; para. [0040]; At operation 220, an orchestrator (e.g., the orchestrator 155 as described in FIG. 1, etc.) selects and executes appropriate agents based on an identified intent. The orchestrator selects the agents from an agent library … an entity resolution agent that extracts and maps entities from user utterances …, para. [0050]; para. [0053]; para. [0066]; para. [0088]-[0089]); and
- sending, by the LLM to the LLM orchestrator, a request to resolve the second recognized entity (fig. 3, element 310; fig. 8, elements 840, 845; an entity resolution agent that extracts and maps entities from user utterances (e.g., requests, etc.) …, para. [0050]; para. [0060]).

Krishnan discloses:
- receiving second query data comprising a second natural language request that requests a display of content in the list, wherein the second natural language request identifies the content in the list using an ordinal reference (fig. 15F; para. [0073]; para. [0352]);
- performing a lookup using the second query data to determine a second action identifier associated with a display action (fig. 15F; para. [0041]; para. [0073]; para. [0189]; para. [0352]; para. [0390]);
- generating third prompt data comprising the display action, wherein the third prompt data instructs the first LLM to recognize entities in the second query data relevant to the display action (para. [0337]; para. [0352]-[0353]; para. [0363]; para. [0387], NLU as suggesting LLM);
- determining, using the third prompt data, a second recognized entity in the second natural language request, wherein the second recognized entity corresponds to the content in the list (para. [0041]; para. [0352]; para. [0363]); and
- determining, using an ordinal resolver tool, the second content identifier based on an order of output of the second instance of the content of the first type in the list corresponding to the ordinal reference (para. [0352]; para. [0362]-[0364]).

Per claim 4, Karri in view of Miller and Krishnan discloses the computer-implemented method of claim 1. Karri discloses:
- receiving the first query data by a first natural language processing system (para. [0035]; a user data input is received as a natural language request entered via a user interface …, para. [0066]);
- determining, by a second LLM of the first natural language processing system, a first domain associated with the first query data (fig. 3; fig. 7; para. [0037]-[0039]); and
- sending the first query data to a domain-specific processing system associated with the first domain, the domain-specific processing system comprising the first LLM and the LLM orchestrator (fig. 7; para. [0020]; para. [0037]-[0039]).
Per claim 7, Karri in view of Miller discloses the method of claim 5. Karri in view of Miller does not explicitly disclose determining, using the first query data, a target for the first action data, wherein the target specifies a predefined data structure that is acted upon using the first action data. However, this feature is taught by Krishnan (para. [0177]; para. [0190]; para. [0225]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Krishnan with the method of Karri in view of Miller in arriving at the missing features of Karri in view of Miller, because such combination would have resulted in improving the likelihood that a speech recognition system will output hypotheses that make sense grammatically (Krishnan, para. [0041]; para. [0166]).

Per claim 8, Karri in view of Miller discloses the method of claim 5. Karri in view of Miller does not explicitly disclose determining, using the first query data, an endpoint for the first action data, wherein the endpoint defines a device for outputting the output data, or sending the output data to the endpoint. However, these features are taught by Krishnan: determining, using the first query data, an endpoint for the first action data, wherein the endpoint defines a device for outputting the output data (para. [0210]-[0213]); and sending the output data to the endpoint (para. [0210]-[0213]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Krishnan with the method of Karri in view of Miller in arriving at the missing features of Karri in view of Miller, because such combination would have resulted in improving the likelihood that a speech recognition system will output hypotheses that make sense grammatically (Krishnan, para. [0041]; para. [0166]).
Per claim 9, Karri in view of Miller discloses the method of claim 5. Karri in view of Miller does not explicitly disclose sending data representing the first recognized entity to an entity information retrieval component; determining, by the entity information retrieval component, at least a first keyword of the first recognized entity; determining an account associated with the first query data; or determining the first resolved entity by searching a list or history associated with the account, wherein the first resolved entity comprises identifier data identifying an instance of the first content. However, these features are taught by Krishnan: sending data representing the first recognized entity to an entity information retrieval component (para. [0184]); determining, by the entity information retrieval component, at least a first keyword of the first recognized entity (para. [0184]); determining an account associated with the first query data ("Thus the system may respond to each user with personalized content by keeping track of what each user said in the dialog history….", para. [0355]); and determining the first resolved entity by searching a list or history associated with the account, wherein the first resolved entity comprises identifier data identifying an instance of the first content (para. [0355]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Krishnan with the method of Karri in view of Miller to arrive at the missing features, because such a combination would have improved the likelihood that a speech recognition system will output hypotheses that make sense grammatically (Krishnan, para. [0041]; para. [0166]).
Per claim 10, Karri in view of Miller discloses the method of claim 5. Karri in view of Miller does not explicitly disclose determining that the first request comprises an ordinal reference, or determining the first resolved entity by searching a list of content output prior to receiving the first query data using the ordinal reference, wherein the first resolved entity comprises identifier data identifying an instance of the first content. However, these features are taught by Krishnan: determining that the first request comprises an ordinal reference (para. [0352]); and determining the first resolved entity by searching a list of content output prior to receiving the first query data using the ordinal reference, wherein the first resolved entity comprises identifier data identifying an instance of the first content (para. [0352]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Krishnan with the method of Karri in view of Miller to arrive at the missing features, because such a combination would have improved the likelihood that a speech recognition system will output hypotheses that make sense grammatically (Krishnan, para. [0041]; para. [0166]).
Per claim 11, Karri in view of Miller discloses the method of claim 5. Karri in view of Miller does not explicitly disclose sending data representing the first recognized entity to an entity information retrieval component; determining, by the entity information retrieval component, at least a first keyword of the first recognized entity; searching, using at least the first keyword, first historical context data comprising a prior input request and a prior response to the prior input request; or determining the first resolved entity based at least in part on a correspondence between at least the first keyword and a previously-resolved entity present in the prior input request and the prior response. However, these features are taught by Krishnan: sending data representing the first recognized entity to an entity information retrieval component (para. [0184]); determining, by the entity information retrieval component, at least a first keyword of the first recognized entity (para. [0184]); searching, using at least the first keyword, first historical context data comprising a prior input request and a prior response to the prior input request (para. [0283]; para. [0355]); and determining the first resolved entity based at least in part on a correspondence between at least the first keyword and a previously-resolved entity present in the prior input request and the prior response (para. [0283]; para. [0355]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Krishnan with the method of Karri in view of Miller to arrive at the missing features, because such a combination would have improved the likelihood that a speech recognition system will output hypotheses that make sense grammatically (Krishnan, para. [0041]; para. [0166]).
Per claim 15, Karri in view of Miller discloses the system of claim 13, and the non-transitory computer-readable memory storing further instructions executed by the at least one processor (Karri, para. [0095]-[0096]). System claim 15 and method claim 7 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 15 is similarly rejected under the same rationale as applied above with respect to claim 7.

Per claim 16, Karri in view of Miller discloses the system of claim 13, and the non-transitory computer-readable memory storing further instructions executed by the at least one processor (Karri, para. [0095]-[0096]). System claim 16 and method claim 8 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 16 is similarly rejected under the same rationale as applied above with respect to claim 8.

Per claim 17, Karri in view of Miller discloses the system of claim 13, and the non-transitory computer-readable memory storing further instructions executed by the at least one processor (Karri, para. [0095]-[0096]). System claim 17 and method claim 9 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 17 is similarly rejected under the same rationale as applied above with respect to claim 9.

Per claim 18, Karri in view of Miller discloses the system of claim 13, and the non-transitory computer-readable memory storing further instructions executed by the at least one processor (Karri, para. [0095]-[0096]). System claim 18 and method claim 10 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 18 is similarly rejected under the same rationale as applied above with respect to claim 10.
Per claim 19, Karri in view of Miller discloses the system of claim 13, and the non-transitory computer-readable memory storing further instructions executed by the at least one processor (Karri, para. [0095]-[0096]). System claim 19 and method claim 11 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 19 is similarly rejected under the same rationale as applied above with respect to claim 11.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the PTO-892 form.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUJIMI A ADESANYA, whose telephone number is (571) 270-3307. The examiner can normally be reached Monday-Friday, 8:30 am-5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at 571-272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/OLUJIMI A ADESANYA/
Primary Examiner, Art Unit 2658

Prosecution Timeline

Dec 14, 2023: Application Filed
Feb 02, 2026: Non-Final Rejection — §101, §103
Mar 13, 2026: Interview Requested
Mar 24, 2026: Examiner Interview Summary
Mar 24, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591739
METHOD AND SYSTEM FOR DIACRITIZING ARABIC TEXT
2y 5m to grant; granted Mar 31, 2026
Patent 12585686
EVENT DETECTION AND CLASSIFICATION METHOD, APPARATUS, AND DEVICE
2y 5m to grant; granted Mar 24, 2026
Patent 12585481
METHOD AND ELECTRONIC DEVICE FOR PERFORMING TRANSLATION
2y 5m to grant; granted Mar 24, 2026
Patent 12578779
Multiple Stage Network Microphone Device with Reduced Power Consumption and Processing Load
2y 5m to grant; granted Mar 17, 2026
Patent 12579181
Synchronization of Sensor Network with Organization Ontology Hierarchy
2y 5m to grant; granted Mar 17, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
66%
Grant Probability
91%
With Interview (+25.5%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 655 resolved cases by this examiner. Grant probability derived from career allow rate.
