DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 4, 2026, has been entered.
Response to Amendment
This Office Action has been issued in response to Applicant’s Communication of amended application S/N 18/654,344 filed on February 4, 2026. Claims 1 to 8 and 11 to 22 are currently pending in the application.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 to 8 and 11 to 22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitations “transmitting the enriched user query from the intermediate LM layer to the LLM, wherein transmitting the enriched user query to the LLM causes the LLM or the intermediate LM layer to, based on the enriched user query: generate, during the gameplay, a response to the user prompt, and update a database comprising the historical trends associated with the user profile” in line 1 at page 3. These limitations are not clear. More specifically, the limitations recite an “or” clause with an option to cause the intermediate LM layer to perform the generate and the update operations when the enriched user query is transmitted to the LLM. It is not clear how transmitting the enriched user query to the LLM can cause the intermediate LM layer to perform the operations. That is, the intention of the limitations is not clear, thereby rendering the claim indefinite. For purposes of examination, the Examiner will interpret the limitations as “transmitting the enriched user query from the intermediate LM layer to the LLM, wherein transmitting the enriched user query to the LLM causes the LLM to, based on the enriched user query: generate, during the gameplay, a response to the user prompt, and update a database comprising the historical trends associated with the user profile”, without the conflicting “or” clause. The same rationale applies to claims 11 and 16, since they recite similar limitations, and to claims 2 to 8, 12 to 15, and 17 to 22, since they inherit the same deficiencies by virtue of their dependency.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 to 5, 7, 8, 11 to 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Benedetto et al. (U.S. Publication No. 2019/0291011) hereinafter Benedetto, in view of Madnani (U.S. Publication No. 2025/0045314), and further in view of Sassak, JR. et al. (U.S. Publication No. 2025/0272506) hereinafter Sassak.
As to claim 1:
Benedetto discloses:
A system comprising: at least one computer processor; and computer storage media storing computer-useable instructions that, when used by the at least one computer processor, cause the system to perform operations comprising:
receiving, from a user device and in association with a user profile engaging in gameplay within a gaming computer environment, an input comprising a user prompt [Paragraph 0007 teaches receiving a query from a user; Paragraph 0056 teaches the deep learning engine returns the response back to the game assist server; Paragraph 0057 teaches a user query received by a gaming controller during game play of a player playing a gaming application; Paragraph 0075 teaches user profile data; Paragraph 0081 teaches user saved data includes user profile data that identifies the user];
from the user prompt, during the gameplay, determining prompt-enriching information comprising at least one of: a first indication of a logical assessment of the user prompt, a second indication of a factual assessment of the user prompt, a third indication of an intent of the user prompt, a fourth indication of a comparison of the user prompt to a database comprising data for a plurality of users, or a fifth indication of historical trends associated with the user profile [Paragraph 0052 teaches context analyzer that determines context in association with the user query, including user profile related information, current context associated with the user query; Paragraph 0055 teaches matching the user query to queries, collected from the player and other players; Paragraph 0057 teaches a user query received by a gaming controller during game play of a player playing a gaming application; Paragraph 0078 teaches game context includes user/player saved data, which includes information that personalizes the video game for the corresponding player];
appending the prompt-enriching information and the user prompt to generate an enriched user query [Paragraph 0052 teaches generating a response based on the query along with the proper context; Paragraph 0053 teaches the query and the current context are delivered to an AI processor];
generate, during the gameplay, a response [Paragraph 0006 teaches providing a response that may provide assistance during game play of the user; Paragraph 0069 teaches provide gaming assistance to player by providing responses to queries]; and
update a database comprising the historical trends associated with the user profile [Paragraph 0075 teaches game contexts including OS level contexts and global contexts may be locally stored on client device and stored at the context profiles database of the game server; Paragraph 0059 teaches neural network represents an example of an automated analysis tool for analyzing data sets to determine the responses, actions, behavior, wants and needs of a corresponding user; Paragraph 0076 teaches game context includes metadata and information related to the game play, which may help determine where the player has been within the gaming application, where the player is in the gaming application, what the player has done, what assets and skills the player or the character has accumulated, what quests or tasks are presented to the player, and where the player will be going within the gaming application, where the metadata and information in each game context may provide support related to the game play of the player, such as when matching a query to a response, wherein the game play has a particular context related to the query, and the matched response is best suited to answering the query, as determined through deep learning].
Benedetto does not appear to expressly disclose an input comprising a user prompt intended for a large language model (LLM); using an intermediate language model (LM) layer bidirectionally communicatively coupled to a large language model (LLM) and communicatively coupled to the computer environment, determining prompt-enriching information; and transmitting the enriched user query from the intermediate LM layer to the LLM, wherein transmitting the enriched user query to the LLM causes the LLM or the intermediate LM layer to, based on the enriched user query: generate a response to the user prompt.
Madnani discloses:
receiving an input comprising a user prompt intended for a large language model (LLM) [Paragraph 0014 teaches user may enter a prompt, also referred to as a query, using a client device, for example, a question they would like to have answered by the LLM]; and
transmitting the enriched user query to the LLM [Paragraph 0021 teaches giving the contextual prompt as an input to the LLM; Paragraph 0025 teaches user queries are provided to the embeddings store so that they may be enriched by contextual prompts; Paragraph 0029 teaches enriching the user query with the determined context; Paragraph 0044 teaches analyzing the contextual prompt using the LLM]; wherein transmitting the enriched user query to the LLM causes the LLM or the intermediate LM layer to, based on the enriched user query: generate a response to the user prompt [Paragraph 0021 teaches the LLM may process the contextual prompt and generate a response; Paragraph 0025 teaches after the LLM processes the contextual prompt, a response to the user query may be generated].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Benedetto, by receiving an input comprising a user prompt intended for a large language model (LLM), and transmitting the enriched user query to the LLM; wherein transmitting the enriched user query to the LLM causes the LLM or the intermediate LM layer to, based on the enriched user query: generate a response to the user prompt, as taught by Madnani [Paragraph 0014, 0021, 0025, 0029, 0044], because both applications are directed to enriching user queries; utilizing a LLM for enriching and processing user queries enables identifying the latest and most contextual information and generating prompts that provide more accurate and useful responses to a user’s query, thereby enhancing the user’s experience (see Madnani Para [0012]).
Neither Benedetto nor Madnani appears to expressly disclose using an intermediate language model (LM) layer bidirectionally communicatively coupled to a large language model (LLM) and communicatively coupled to the computer environment, determining prompt-enriching information; and transmitting the enriched user query from the intermediate LM layer to the LLM.
Sassak discloses:
using an intermediate language model (LM) layer bidirectionally communicatively coupled to a large language model (LLM) and communicatively coupled to the computer environment, determining prompt-enriching information [Paragraph 0019 teaches the prompt includes information about the user’s history; Paragraph 0042 teaches augmented generation engine generates prompts to a LLM, including the user input, identified relevant source of text, and receives output from the LLM to more efficiently generate output for the user, in other words, an intermediate language model layer bidirectionally communicatively coupled to a LLM and the computer environment; Paragraph 0081 teaches the RAG-based engine includes a rephrase operator, an embedding generator, a retrieval module, a prompt generator, etc., where the engine receives a user input and generates a prompt for providing to a LLM; Paragraph 0084 teaches receiving a user input, where the prompt generator generates an updated user input; Paragraph 0088 teaches generating an input embedding by transforming the user input or the updated user input using a neural network model; Paragraph 0089 teaches retrieving relevant source text associated with the user input; Paragraph 0090 teaches generating a prompt for the LLM based on the user input and the retrieved source text; Paragraph 0091 teaches prompt generator obtains contextual information about the user’s account profile, type, demographics, history, previous queries, etc., to be included in the prompt for the LLM]; and
transmitting the enriched user query from the intermediate LM layer to the LLM [Paragraph 0081 teaches generating a prompt to provide to a LLM; Paragraph 0091 teaches prompt generator generates a prompt to the LLM including context information, user query, etc., and providing the prompt to the LLM].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Benedetto, by using an intermediate language model (LM) layer bidirectionally communicatively coupled to a large language model (LLM) and communicatively coupled to the computer environment, determining prompt-enriching information; and transmitting the enriched user query from the intermediate LM layer to the LLM, as taught by Sassak [Paragraph 0019, 0042, 0081, 0089-0091], because the applications are directed to using language models for enriching user queries; utilizing an intermediate language model provides a technical advantage in that the LLM is provided with more relevant information to enable the LLM to generate appropriate output in fewer iterations, thereby reducing the unnecessary consumption of computing resources (e.g., processing power, memory, computing time, etc.) (see Sassak Para [0008]).
As to claim 2:
The combination of Benedetto and Madnani discloses:
subsequent to transmitting the enriched user query, receiving, from the LLM, a response to the enriched user query [Madnani - Paragraph 0021 teaches the LLM may process the contextual prompt and generate a response; Paragraph 0025 teaches after the LLM processes the contextual prompt, a response to the user query may be generated];
based on the response, updating data for the plurality of users, wherein future prompt-enriching information is determined based on the update [Madnani - Paragraph 0026 teaches embeddings store would be constantly updated with new queries and context; Paragraph 0028 teaches once the user query has gone through the process, context may be provided to the embedding store, which includes the response, and where the embedding store is used to determine future prompt-enriching information, therefore, based on updates]; and
transmitting an aspect of the response to the user device [Benedetto - Paragraph 0007 teaches sending the response to a device of the first player].
As to claim 3:
Benedetto discloses:
determining the third indication comprises: extracting, from the input, an action [Paragraph 0059 teaches determine actions, behaviors, wants, and needs of a corresponding user; Paragraph 0034 teaches a query having a primary intent and a secondary intent, such as the requirement to accomplish a task];
determining, from the action, a task [Paragraph 0034 teaches a query having a primary intent and a secondary intent, such as the requirement to accomplish a task, e.g., meet General Tullius; Paragraph 0189 teaches the query entered relates to information about how to unlock the keystone shown in the game play]; and
performing a semantic search with the action and the task, wherein the intent corresponds to a result of the semantic search [Paragraph 0053 teaches queries are processed by NLP engine; Paragraph 0054 teaches NLP engine may analyze the query to determine the meaning and find a suitable response; Paragraph 0189 teaches given the context of the gaming application to include global context, gaming context, and current gaming context, the game assist server supporting the PlayStation Assist application may match the query to a response, i.e., “look in Arvel’s journal”].
As to claim 4:
Benedetto as modified by Madnani discloses:
wherein appending the prompt-enriching information to the user prompt comprises combining the user prompt and the prompt-enriching information into the enriched user query that is transmitted to the LLM as a single user prompt [Paragraph 0020 teaches creating a new prompt by adding context from the embeddings to the user prompt; Paragraph 0021 teaches giving the contextual prompt as an input to the LLM, as opposed to the original user prompt], wherein the prompt-enriching information corresponds to a plurality of prompt-enriching tokens having weight values that are higher than weight values of a plurality of user tokens corresponding to the user prompt [Paragraph 0021 teaches contextual prompt is more relevant than the original user prompt; Paragraph 0029 teaches enriching the query by sorting embeddings by relevance, and creating a contextual prompt by adding the closest embeddings to the original query; Paragraph 0042 teaches embeddings may be ranked based on relevance].
As to claim 5:
The combination of Benedetto, Madnani, and Sassak discloses:
wherein the input is received during the gameplay and within a video game associated with the gaming computer environment [Benedetto - Paragraph 0008 teaches receiving a query from a first player playing a gaming application during game play; Paragraph 0057 teaches a verbal query asked by a user as received by a gaming controller during game play of a player playing a gaming application], wherein a response to the enriched user query from the LLM [Madnani - Paragraph 0021 teaches the LLM may process the contextual prompt and generate a response] is communicated within the video game [Benedetto - Paragraph 0008 teaches sending the response to a device of the first player; Paragraph 0041 teaches the response may be provided on a display simultaneously with a current game play], wherein the operations are performed by the intermediate language model (LM) layer associated with the video game [Sassak - Paragraph 0042 teaches augmented generation engine generates prompts to a LLM, including the user input, identified relevant source of text, and receives output from the LLM to more efficiently generate output for the user; Benedetto - Paragraph 0055 teaches deep learning engine is configured to match the interpreted query to models of responses or queries/responses in order to provide a response to the query, or to generate a new response based on one or more closest matched models; Paragraph 0056 teaches the deep learning engine returns the response back to the game assist server], wherein the input comprising the user prompt is received as part of the gameplay [Benedetto - Paragraph 0008 teaches receiving a query from a first player playing a gaming application during game play; Paragraph 0057 teaches a user query received by a gaming controller during game play of a player playing a gaming application; Paragraph 0058 teaches neural network used to build response models and/or query/response models based on contextual information of a gaming application and the corresponding queries, where the deep learning engine is also configured for ASR and NLP processing].
As to claim 7:
The combination of Benedetto and Madnani discloses:
receiving, from the LLM, a response to the enriched user query [Madnani - Paragraph 0013 teaches the LLM processes the contextual response to generate a response to the user query; Paragraph 0025 teaches after the LLM processes the contextual prompt, a response to the user query may be generated]; and
communicating, to an endpoint of a video game application and via an Application Programming Interface (API) [Madnani - Paragraph 0056 teaches communication between various network and computing devices of a computing system may be facilitated by one or more application programming interfaces (APIs)] of the gaming computer environment, an aspect of the response to the enriched user query [Benedetto - Paragraph 0008 teaches sending the response to a device of the first player; Paragraph 0109 teaches providing a response to one or more queries presented by the player during the game play].
As to claim 8:
Benedetto discloses:
wherein determining the second indication, the fourth indication, or the fifth indication comprises: detecting an entity in the user prompt [Paragraph 0133 teaches the query may be specifically directed to how to beat a particular point in the game (e.g., level boss, quest, task, etc.), or may be directed to gaining information about an object (e.g., a boss’s name, an object encountered in the game play)]; and
based on the detected entity, performing a search operation against the user profile, the database comprising the historical trends associated with the user profile, or the database comprising data for the plurality of users [Paragraph 0076 teaches game context includes metadata and information related to the game play, and includes where the player (e.g., character of the player) has been within the gaming application, where the player is in the gaming application, what the player has done, what assets and skills the player or the character has accumulated, what quests or tasks are presented to the player, and where the player will be going within the gaming application, where the metadata and information in each game context may be analyzed to provide support related to the game play of the player, such as when matching a query to a response, wherein the game play has a particular context related to the query, and the matched response is best suited to answering the query], wherein the search operation is performed against a data set arranged in a tabular format, graph, vector, list, index, catalog, or key-value pair [Paragraph 0077 teaches game state data is stored in game state database].
As to claim 11:
Benedetto discloses:
A computer-implemented method comprising:
accessing, from a gaming device configured to run a video game within a gaming computing environment, an input comprising a user query associated with a user profile [Paragraph 0007 teaches receiving a query from a user; Paragraph 0075 teaches user profile data; Paragraph 0008 teaches receiving a query from a first player playing a gaming application during game play; Paragraph 0081 teaches user saved data includes user profile data that identifies the user; Paragraph 0133 teaches the query may be specifically directed to how to beat a particular point in the game (e.g., level boss, quest, task, etc.), or may be directed to gaining information about an object (e.g., a boss’s name, an object encountered in the game play)];
from the input, determining a plurality of user tokens [Paragraph 0052 teaches context analyzer that determines context in association with the user query, including user profile related information, current context associated with the user query, therefore, user tokens are being determined; Paragraph 0054 teaches the NLP engine is configured to interpret the nature of the query, or understand what is requested by player, therefore, including determination of tokens];
determining, from the plurality of user tokens associated with the input and during gameplay, a plurality of prompt-enriching tokens comprising at least one of: a first token indicative of a logical assessment of the plurality of user tokens, a second token indicative of a factual assessment of the plurality of user tokens, a third token indicative of an intent of the plurality of user tokens, or a fourth token indicative of a comparison of the plurality of user tokens to a database comprising data for a plurality of users [Paragraph 0052 teaches context analyzer that determines context in association with the user query, including user profile related information, current context associated with the user query; Paragraph 0055 teaches matching the user query to queries, collected from the player and other players; Paragraph 0078 teaches game context includes user/player saved data, which includes information that personalizes the video game for the corresponding player];
combining the plurality of prompt-enriching tokens and the plurality of user tokens to generate an enriched user query [Paragraph 0052 teaches generating a response based on the query along with the proper context; Paragraph 0053 teaches the query and the current context are delivered to an AI processor]; and
based on the enriched user query, causing a response to be surfaced during gameplay of the video game [Paragraph 0008 teaches sending the response to a device of the first player; Paragraph 0041 teaches the response may be provided on a display simultaneously with a current game play]; and
causing a database comprising the historical trends associated with the user profile to be updated [Paragraph 0075 teaches game contexts including OS level contexts and global contexts may be locally stored on client device and stored at the context profiles database of the game server; Paragraph 0059 teaches neural network represents an example of an automated analysis tool for analyzing data sets to determine the responses, actions, behavior, wants and needs of a corresponding user; Paragraph 0076 teaches game context includes metadata and information related to the game play, which may help determine where the player has been within the gaming application, where the player is in the gaming application, what the player has done, what assets and skills the player or the character has accumulated, what quests or tasks are presented to the player, and where the player will be going within the gaming application, where the metadata and information in each game context may provide support related to the game play of the player, such as when matching a query to a response, wherein the game play has a particular context related to the query, and the matched response is best suited to answering the query, as determined through deep learning].
Benedetto does not appear to expressly disclose using an intermediate language model (LM) layer bidirectionally communicatively coupled to a large language model (LLM) and communicatively coupled to the computer environment, determining prompt-enriching tokens; and transmitting the enriched user query from the intermediate LM layer to the LLM, a response from the LLM or the intermediate LM layer.
Madnani discloses:
transmitting the enriched user query to the LLM, a response from the LLM or the intermediate LM layer [Paragraph 0014 teaches user may enter a prompt, also referred to as a query, using a client device, for example, a question they would like to have answered by the LLM; Paragraph 0021 teaches giving the contextual prompt as an input to the LLM; Paragraph 0025 teaches user queries are provided to the embeddings store so that they may be enriched by contextual prompts, where after the LLM processes the contextual prompt, a response to the user query may be generated; Paragraph 0029 teaches enriching the user query with the determined context; Paragraph 0044 teaches analyzing the contextual prompt using the LLM].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Benedetto, by transmitting the enriched user query to the LLM, a response from the LLM or the intermediate LM layer, as taught by Madnani [Paragraph 0014, 0021, 0025, 0029, 0044], because both applications are directed to enriching user queries; utilizing a LLM for enriching and processing user queries enables identifying the latest and most contextual information and generating prompts that provide more accurate and useful responses to a user’s query, thereby enhancing the user’s experience (see Madnani Para [0012]).
Neither Benedetto nor Madnani appears to expressly disclose using an intermediate language model (LM) layer bidirectionally communicatively coupled to a large language model (LLM) and communicatively coupled to the computer environment, determining prompt-enriching tokens; and transmitting the enriched user query from the intermediate LM layer to the LLM.
Sassak discloses:
using an intermediate language model (LM) layer bidirectionally communicatively coupled to a large language model (LLM) and communicatively coupled to the computer environment, determining prompt-enriching tokens [Paragraph 0019 teaches the prompt includes information about the user’s history; Paragraph 0042 teaches augmented generation engine generates prompts to a LLM, including the user input, identified relevant source of text, and receives output from the LLM to more efficiently generate output for the user, in other words, an intermediate language model layer bidirectionally communicatively coupled to a LLM and the computer environment; Paragraph 0081 teaches the RAG-based engine includes a rephrase operator, an embedding generator, a retrieval module, a prompt generator, etc., where the engine receives a user input and generates a prompt for providing to a LLM; Paragraph 0084 teaches receiving a user input, where the prompt generator generates an updated user input; Paragraph 0088 teaches generating an input embedding by transforming the user input or the updated user input using a neural network model; Paragraph 0089 teaches retrieving relevant source text associated with the user input; Paragraph 0090 teaches generating a prompt for the LLM based on the user input and the retrieved source text; Paragraph 0091 teaches prompt generator obtains contextual information about the user’s account profile, type, demographics, history, previous queries, etc., to be included in the prompt for the LLM]; and
transmitting the enriched user query from the intermediate LM layer to the LLM [Paragraph 0081 teaches generating a prompt to provide to a LLM; Paragraph 0091 teaches prompt generator generates a prompt to the LLM including context information, user query, etc., and providing the prompt to the LLM].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Benedetto, by using an intermediate language model (LM) layer bidirectionally communicatively coupled to a large language model (LLM) and communicatively coupled to the computer environment, determining prompt-enriching tokens; and transmitting the enriched user query from the intermediate LM layer to the LLM, as taught by Sassak [Paragraph 0019, 0042, 0081, 0089-0091], because the applications are directed to using language models for enriching user queries; utilizing an intermediate language model provides a technical advantage in that the LLM is provided with more relevant information to enable the LLM to generate appropriate output in fewer iterations, thereby reducing the unnecessary consumption of computing resources (e.g., processing power, memory, computing time, etc.) (see Sassak Para [0008]).
As to claim 12:
Madnani discloses:
the input comprising the user query is not communicated directly to the LLM [Paragraph 0021 teaches giving the contextual prompt as an input to the LLM, as opposed to the original user prompt, therefore, not communicating the user query directly to the LLM].
As to claim 13:
Benedetto discloses:
the gaming device comprises at least one of a desktop, a laptop, a VR/AR headset, a mobile device, or a tablet [Paragraph 0048 teaches user device includes a personal computer (PC), a game console, a home theater device, a mobile computing device, a tablet, a phone, or any other types of computing devices that can interact with the game server to execute an instance of a video game].
As to claim 14:
The combination of Benedetto and Madnani discloses:
subsequent to transmitting the enriched user query, receiving, from the LLM, a response to the enriched user query [Madnani - Paragraph 0021 teaches the LLM may process the contextual prompt and generate a response; Paragraph 0025 teaches after the LLM processes the contextual prompt, a response to the user query may be generated];
based on the response, updating data for the plurality of users, wherein future prompt-enriching information is determined based on the update [Madnani - Paragraph 0026 teaches embeddings store would be constantly updated with new queries and context; Paragraph 0028 teaches once the user query has gone through the process, context may be provided to the embedding store, which includes the response, and where the embedding store is used to determine future prompt-enriching information, therefore, based on updates]; and
transmitting an aspect of the response to the user device [Benedetto - Paragraph 0007 teaches sending the response to a device of the first player].
As to claim 15:
Benedetto discloses:
detecting an entity in the user query [Paragraph 0133 teaches the query may be specifically directed to how to beat a particular point in the game (e.g., level boss, quest, task, etc.), or may be directed to gaining information about an object (e.g., a boss’s name, an object encountered in the game play)]; and
based on the detected entity, performing a search operation against a user profile, the database comprising the historical trends associated with the user profile, or the database comprising data for the plurality of users [Paragraph 0076 teaches game context includes metadata and information related to the game play, and includes where the player (e.g., character of the player) has been within the gaming application, where the player is in the gaming application, what the player has done, what assets and skills the player or the character has accumulated, what quests or tasks are presented to the player, and where the player will be going within the gaming application, where the metadata and information in each game context may be analyzed to provide support related to the game play of the player, such as when matching a query to a response, wherein the game play has a particular context related to the query, and the matched response is best suited to answering the query].
Claims 16 and 17 are rejected under the same rationale as claims 1 and 2, since they recite similar limitations.
As to claim 18:
Benedetto as modified by Madnani discloses:
converting the user query into a plurality of user tokens [Paragraph 0030 teaches converting the user query to embeddings]; and
converting the prompt-enriching information into a plurality of prompt-enriching tokens that have weight values that are higher than weight values of the plurality of user tokens, wherein the enriched user query comprises the plurality of user tokens and the plurality of prompt-enriching tokens have respective weight values [Paragraph 0021 teaches contextual prompt is more relevant than the original user prompt; Paragraph 0029 teaches enriching the query by sorting embeddings by relevance, and creating a contextual prompt by adding the closest embeddings to the original query; Paragraph 0042 teaches embeddings may be ranked based on relevance].
As to claim 20:
Benedetto discloses:
the computing system comprises a video game server [Paragraph 0069 teaches the game executing engine may be operating within one of many game processors of the game server].
Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Benedetto et al. (U.S. Publication No. 2019/0291011) hereinafter Benedetto, in view of Madnani (U.S. Publication No. 2025/0045314), in view of Sassak, Jr. et al. (U.S. Publication No. 2025/0272506) hereinafter Sassak, and further in view of Qadrud-Din et al. (U.S. Publication No. 2024/0289363) hereinafter Qadrud-Din.
As to claim 6:
Benedetto discloses all the limitations as set forth in the rejections of claim 1 above, but does not appear to expressly disclose an indication that a portion of the user query comprises contentions not in evidence.
Qadrud-Din discloses:
wherein the second indication of the factual assessment comprises an indication that a portion of the user query comprises contentions not in evidence [Paragraph 0260 teaches search terms may be used to search for evidence for or against the factual claims, where results of these searches may then be provided in prompts to evaluate the factual claims, where the prompts may be completed by indicating whether the factual claims are accurate given the available search results].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Benedetto, by incorporating an indication that a portion of the user query comprises contentions not in evidence, as taught by Qadrud-Din [Paragraph 0260], because both applications are directed to processing user queries; determining inaccurate expressions in the user’s query enables the large language model to automatically determine appropriate searches to perform and then ground its responses to a source of truth, thereby increasing accuracy of responses (See Qadrud-Din Para [0035]).
As to claim 19:
Benedetto discloses all the limitations as set forth in the rejections of claim 16 above, but does not appear to expressly disclose the prompt-enriching information is determined based on legal rules of evidence.
Qadrud-Din discloses:
the prompt-enriching information is determined based on legal rules of evidence [Paragraph 0260 teaches search terms may be used to search for evidence for or against the factual claims, where results of these searches may then be provided in prompts to evaluate the factual claims, where the prompts may be completed by indicating whether the factual claims are accurate given the available search results, therefore, based on rules of evidence].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teachings of the cited references and modify the invention as taught by Benedetto, by determining the prompt-enriching information based on legal rules of evidence, as taught by Qadrud-Din [Paragraph 0260], because both applications are directed to processing user queries; determining enriching information based on rules of evidence enables the large language model to automatically determine appropriate searches to perform and then ground its responses to a source of truth, thereby increasing accuracy of responses (See Qadrud-Din Para [0035]).
Allowable Subject Matter
Claim 21 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Response to Arguments
The following is in response to arguments filed on February 4, 2026. Arguments have been fully and respectfully considered.
Claim Rejections - 35 USC § 101
In view of the claim amendments, the rejections under 35 U.S.C. § 101 are hereby withdrawn.
Claim Rejections - 35 USC § 103
Applicant’s arguments have been carefully and respectfully considered, but are moot in view of the new grounds of rejection, as necessitated by the amendments.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAQUEL PEREZ-ARROYO whose telephone number is (571)272-8969. The examiner can normally be reached Monday - Friday, 8:00am - 5:30pm, Alt Friday, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RAQUEL PEREZ-ARROYO/Primary Examiner, Art Unit 2169