DETAILED ACTION
This communication is in response to the Request for Continued Examination and accompanying submission filed on 01/20/2026. Claims 1-20 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/20/2026 has been entered.
Information Disclosure Statement
The information disclosure statements (IDS) dated 12/04/2025, 02/03/2026, and 03/10/2026 have been considered and placed in the application file.
Response to Arguments
The reply filed on 01/20/2026 has been entered. Applicant’s arguments with respect to claims 1-20 have been considered but are either unpersuasive or moot in view of the new ground(s) of rejection necessitated by the amendments.
With respect to Applicant’s arguments regarding the claim rejections under 35 U.S.C. § 101, Applicant has amended each of the independent claims and asserts that “The claimed system provides significant technical advantages including improved computing efficiency and resource utilization by generative machine learning services by allowing users to manage the components used to retrieve data to perform natural language tasks as part of hosting and executing generative natural language applications, as exemplified by the above amendments.” The examiner respectfully disagrees. While Applicant contends that the ability to manage retrieval components improves computing efficiency and resource utilization, Applicant fails to explain how such a feature actually causes an improvement. Simply asserting that a feature of an invention improves computing efficiency, without explaining how, does not establish that the invention is integrated into a practical application.
Applicant further asserts that “features as recited in Applicant's claims provide a clear technological improvement to generative machine learning services by implementing features to configure the service to "gain access to the most relevant data for a received natural language task" as clarified by the amendments above.” The examiner respectfully disagrees. Applicant fails to explain how the ability to customize search components directly enables “access to the most relevant data.” A claimed feature cannot be presumed to confer a benefit without further explanation. Further, Applicant fails to explain what inherent benefit modifying the process of data retrieval actually provides. The act of customizing data retrieval, by itself, does not constitute an improvement to the field of machine learning as a whole. As amended, no language in the independent claims would prevent a human from performing these steps, as addressed in further detail below with respect to the claim rejections under 35 U.S.C. § 101.
With respect to Applicant’s arguments regarding the claim rejections under 35 U.S.C. §§ 102 and 103, the arguments with respect to claims 1, 5, and 14 have been considered but are moot in view of the new ground(s) of rejection necessitated by the amendments.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Under Step 1, all of the claims fall within a statutory category: method claims (5-13) and apparatus/machine or manufacture claims (1-4, 14-20). Under Step 2A, however, all of these claims recite abstract ideas, specifically mental processes. These mental processes are recited in claims 1, 5, and 14 as:
determining, by the generative machine learning service, one or more data retrievers to obtain data as configured to perform the natural language task from the one or more data repositories…
obtaining, by the generative machine learning service, the data to perform the natural language task…
generating, by the generative machine learning service, a prompt for a generative machine learning model…
submitting, by the generative machine learning service, the prompt to the generative machine learning model to perform the natural language task…
returning, via the interface of the generative machine learning service, a response to the natural language request…
Under Step 2A Prong One, claims 1, 5, and 14 are directed to an abstract idea and specifically a mental process. As detailed above, the steps of generating, selecting, submitting, returning, etc. may be practically performed in the human mind with the use of a physical aid such as a pen and paper. For example, a boss could ask their subordinate manager a question, the manager could select one of their subordinate employees to gather information related to their boss’ question, and the employee could return to the manager with the requested information. The manager could then use the requested information to create an informed question, present the informed question to an expert, receive an answer from the expert, and then present that answer to their boss.
Under Step 2A Prong Two, this judicial exception is not integrated into a practical application because claims 1-20 do not recite additional elements that integrate the exception into a practical application. In particular, claims 1 and 14 recite the additional elements of a generative machine learning service (¶ [0019]), a generative machine learning model (¶ [0014]), an interface (¶ [0049]), a network endpoint (¶ [0068]), a processor (¶ [0103]), a memory (¶ [0105]), and a plurality of computing devices (¶¶ [0101]-[0102]). These additional elements are recited at a high level of generality and merely amount to “apply it,” or otherwise merely use a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application per MPEP 2106.05(f). Further, claims 1, 5, and 14 recite the additional element of “receiving…,” which amounts to insignificant extra-solution activity that is not indicative of integration into a practical application per MPEP 2106.05(g). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
Under Step 2B, the claims do not recite additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration into a practical application, the additional elements amount to the use of a generic computer {generative machine learning service (¶ [0019]); generative machine learning model (¶ [0014]); interface (¶ [0049]); network endpoint (¶ [0068]); processor (¶ [0103]); memory (¶ [0105]); plurality of computing devices (¶¶ [0101]-[0102])}. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the additional limitations noted above are directed to insignificant extra-solution activities. The claims are not patent eligible.
With respect to claims 2, 6, and 15, the claims relate to inferring context from previous conversations and rewriting the natural language query based on that context. This relates to a manager remembering conversations between the manager and an expert, choosing an appropriate conversation that is relevant to their boss’ question, and using that relevant knowledge to rephrase the question before handing it off to a subordinate employee for information retrieval. The additional limitation of “accessing…” amounts to insignificant extra-solution activity, which is not indicative of integration into a practical application per MPEP 2106.05(g). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claims 3, 7, and 16, the claims relate to adding new data retrievers to the system retrieval configuration. This relates to a manager integrating a new employee into their list of subordinates. The additional limitation of “receiving…” amounts to insignificant extra-solution activity, which is not indicative of integration into a practical application per MPEP 2106.05(g). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claims 4 and 13, the claims relate to allocating computing resources and providing network endpoints for creating a generative natural language application. This relates to a manager allocating specific subordinate employees to support the process of generating an answer and assigning messengers to communicate requests between the manager and the employees. The additional limitations of computing resources (¶ [0068]), a network endpoint (¶ [0068]), and an application interface (¶ [0049]) are recited at a high level of generality and merely amount to “apply it,” or otherwise merely use a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application per MPEP 2106.05(f). Furthermore, the additional limitation of “receiving…” amounts to insignificant extra-solution activity, which is not indicative of integration into a practical application per MPEP 2106.05(g). The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 8, the claim relates to removing data retrievers from the system retrieval configuration. This relates to a manager dismissing a subordinate employee and adjusting the delegation of responsibilities accordingly. The additional limitation of “receiving…” amounts to insignificant extra-solution activity, which is not indicative of integration into a practical application per MPEP 2106.05(g). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 9, the claim relates to implementing a service as part of a provider network. The additional limitation of a provider network (¶ [0024]) is recited at a high level of generality and merely amounts to “apply it,” or otherwise merely uses a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application per MPEP 2106.05(f). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claims 10 and 18, the claims relate to parameters in access requests between the data retrievers and the data repositories. This relates to a manager instructing a subordinate employee to retrieve data according to specific parameters. No additional limitations are present. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 11, the claim relates to data repositories storing non-natural language data. This relates to a subordinate employee fetching an image from the company’s database instead of fetching natural language text. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 12, the claim relates to the machine learning service ingesting and indexing the data repository. This relates to a manager organizing and memorizing the data repository. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 17, the claim relates to data retrievers interacting with different types of data storage systems to obtain different portions of data. This relates to subordinate employees accessing different databases to retrieve different types of information. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 19, the claim relates to accessing data repositories using a specified schema. This relates to subordinate employees accessing a database using a specified interface associated with that particular database. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claim 20, the claim relates to providing identifiers that associate requests with the generative natural language application. This relates to a manager tracking and labeling requests sent to the expert. The additional limitation of “receiving…” amounts to insignificant extra-solution activity, which is not indicative of integration into a practical application per MPEP 2106.05(g). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
For all of the above reasons, taken alone or in combination, claims 1-20 recite a non-statutory mental process.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, 10, 11, 14, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Publication 20220229832 A1 (Li et al.) in view of US Patent Publication 20160171050 A1 (Das), and further in view of US Patent Publication 20210049217 A1 (Ogawa et al.).
Claim 1
Regarding claim 1, Li et al. disclose a system, comprising:
a plurality of computing devices, respectively comprising at least one processor (Li et al. ¶ [0045], "Referring still to FIG. 10, processing system 1020 may comprise processor, a micro-processor and other circuitry that retrieves and executes software 1010 from storage system 1005.") and a memory (Li et al. ¶ [0046], "Storage system 1005 may comprise any computer readable storage media readable by processing system 1020 and capable of storing software 1010."), configured to implement a natural language generative application service (Li et al. ¶ [0016], "The system may use the user input to generate a prompt, provide the prompt to a natural language generation model, obtain the output from the natural language generation model, and suggest complete content to the user for use in the content document based on the output."), wherein the natural language generative application service is configured to:
receive, via an interface of the natural language generative application service, one or more management requests [to specify a retrieval configuration] for a generative natural language application created at the natural language generative application service (Li et al. ¶ [0025], "The user query may be fetched using a user interface specific to the user system design components 135 or by a user interface of the content generation application 130.");
receive, at a network endpoint for the generative natural language application provided by the natural language generative application service (Li et al. ¶ [0043], "computing system 1000 may comprise one or more computing devices that execute processing for applications and/or services over a distributed network to enable execution of processing operations described herein over one or more applications or services."), a natural language request to perform a natural language task (Li et al. ¶ [0020], "The query understanding component 140 takes the text query input by the user (i.e., the user query) and tries to understand the user's intention. The query understanding component 140 classifies the user's intention into one of two types of actions. The first is a natural language action that will use the natural language generation model.") for a generative natural language application created at the natural language generative application service (Li et al. ¶ [0021], "For example, the Generative Pre-trained Transformer 3 (“GPT-3”) may be the natural language generation model used in system 100.") [and using one or more data repositories associated with the generative natural language application];
based, at least in part, on [the obtained] data, generate a prompt for a generative machine learning model trained to perform the natural language task (Li et al. ¶ [0033], "A user query that will use a natural language action is identified by the query understanding component, and at step 310 the natural language action is determined from an intent of the user query. ... The prompt design component may generate a prompt at step 315 based on the determined action.");
submit the prompt to the generative machine learning model to perform the natural language task (Li et al. ¶ [0033], "At step 320, the prompt is provided to a natural language generation model (e.g., natural language generation model 125), such as GPT-3. The natural language generation model performs modelling, and at step 325 the output from the model is received.");
generate a response to the natural language request based, at least in part, on a result of the prompt received from the generative machine learning model (Li et al. ¶ [0033], "At step 330, the output is used to generate response content in a format compatible with the content generation application (e.g., word processing application, presentation creation application, or the like)." The response content is considered analogous to a response to the natural language request); and
return the response to the request via the interface (Li et al. ¶ [0033], "The response content may be displayed at step 335.").
Li et al. do not explicitly disclose the use of data repositories to respond to user queries.
However, Das discloses a system, configured to:
receive, at a network endpoint for the generative natural language application provided by the natural language generative application service (Das ¶ [0028], "The architecture has a client Web-based Analyst Interface component 20 communicating with a Query Server component 30 over the internet or a secured connection. The interface 20 allows a user to specify search and analytics queries in a declarative manner via a high-level query language such as SQL, or in a natural-language-like syntax with constrained vocabulary."), a natural language request [to perform a natural language task] (Das ¶ [0062], "In a preferred construction, Query Planning module 32, FIG. 1, includes a query translation module that automatically translates a natural language query to its equivalent SQL representation to be executed against structured data.") [for a generative natural language application created at the natural language generative application service and] using one or more data repositories associated with the [generative] natural language application (Das ¶ [0032], "In one construction, the module 32 makes use of a locally installed Domain and Site Model database 40 that contains data site descriptions and domain models");
access a retrieval configuration (Das ¶ [0062], "The algorithm exploits the database metadata structure to generate a set of candidate SQL queries." Exploiting database metadata is considered analogous to accessing a retrieval configuration.) [that selects and configures one or more data retrievers for the generative natural language application specified in a prior request via the interface];
determine, based on the retrieval configuration, the one or more data retrievers to obtain data as configured [to perform the natural language task] from the one or more data repositories (Das ¶ [0067], "Returning to Query Planning and Optimization, query planning involves generating a set of sub-queries from a given user query based on the data source locations that have parts of the required information to answer the query."; ¶ [0085], "The QE module 36 receives a list of sub-queries 53 from the Query Planning and Optimization module 32 and generates a series of mobile agents 60 to carry out these sub-queries. For each agent, the module 36 creates an itinerary of the various sites to be visited and the data retrieval and processing tasks to be executed at each site." Sending agents based on generated sub-queries is considered analogous to selecting data retrievers based on retrieval configuration, since said sub-queries were generated using data source locations.);
invoke the selected one or more data retrievers to obtain the data at the one or more data repositories according to the natural language request (Das ¶ [0071], "The executive agent sends an agent to execute the query at the site where terrain mobility information by NAIs is located. The results are then carried by two other agents in a temporary relation to the two sites of the SALUTE databases."); and
return the response to the request via the interface (Das ¶ [0072], "The results are brought back by the agents 60, FIG. 1, and merged as Query Results 57 from QE 36 and/or merged in QP 32, and presented as Query Results 35 to the user via the user interface 20. ").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the natural language generation system taught by Li et al. to incorporate the usage of data repositories as taught by Das.
The suggestion/motivation for doing so would have been that, “a Distributed Analytical Search (DAS) system and method according to the present invention allows a user to pose natural language questions to multiple data stores of both structured and unstructured data of any size simultaneously without the user needing to know anything about the metadata of the source or sources and without any specialized knowledge of SQL or other computing technologies,” as noted by the Das disclosure in paragraph [0011].
Li et al. in view of Das do not explicitly disclose a retrieval configuration specified in a management request.
However, Ogawa et al. disclose receiving, via an interface of the natural language generative application service, one or more management requests to specify a retrieval configuration for a generative natural language application created at the natural language generative application service (Ogawa et al. ¶ [0038], "The computing system ... includes a user interface to receive user input to generate a command card. Associated with the command card includes data representing a directive [and] multiple search bots.... Each one of the search bots search for data specific to the same directive of the command card, but each search bot is also specific to a different data source. In other words, a first search bot searches a first data source and a second search bot searches a second data source." ¶ [0062], "this application is placed into a certain mode... by the user.... For example, if a user is associated with a first command card, a second command card, a third command card, and so forth, then the user's selection of the second command card places the data enablement application into the mode of the second command card." A command card details which search bots to use, and how each search bot should search their respective data sources. In other words, a command card is considered analogous to a retrieval configuration. Thus, a user selection of a command card is considered analogous to a management request to specify a retrieval configuration);
accessing the retrieval configuration (Ogawa et al. ¶ [0194], "Block 1401: Based on user input, the data enablement platform creates a command card.") that selects and configures one or more data retrievers for the generative natural language application (Ogawa et al. ¶ [0038], "Associated with the command card includes data representing a directive, multiple search bots, multiple behavior bots, a memory module, and a user interface module. ") specified in the one or more management requests via the interface (Ogawa et al. ¶ [0195], "a user can select, via a UI, the data sources to be searched, which in turn determines the type of search bots. The user can also adjust parameters of the assigned search bots, thereby customizing the search bots. For example, the user can input certain keywords, names, or types of data, for a given search bot to use in their searching computations.");
determining, based on the retrieval configuration, the one or more data retrievers to obtain data (Ogawa et al. ¶ [0195], "Block 1402: The data enablement platform assigns and provisions search bots to the command card.") as configured to perform a natural language task (Ogawa et al. ¶ [0216], "At block 1604, the data enablement platform applies NLP automatic summarization of the search results and outputs the summarization to the user device (e.g. via audio feedback) (block 1605).") [from the one or more data repositories]; and
invoke the selected one or more data retrievers to obtain the data (Ogawa et al. ¶ [0199], " Block 1406: The search bots are executed.").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Li et al. in view of Das’s data retrieval to include Ogawa et al.’s user-specified retrieval configurations, because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Li et al. in view of Das’s data retrieval, as modified by Ogawa et al.’s user-specified retrieval configurations, would yield the predictable result of improving the user experience, since user-specified retrieval configurations allow the user to more finely control the system. Thus, a person of ordinary skill would have appreciated including in Li et al. in view of Das’s data retrieval the ability to use Ogawa et al.’s user-specified retrieval configurations, since the claimed invention is merely a combination of old elements, in the combination each element merely performs the same function as it does separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Claim 5
Regarding claim 5, the limitations of claim 5 are similar in scope to that of claim 1 and therefore are rejected for similar reasons as described above.
Claim 10
Regarding claim 10, the rejection of claim 5 is incorporated.
Das further discloses wherein the retrieval configuration comprises one or more parameters (Das ¶ [0061], "The server accesses information about available data, contents and locations modeled in two related tables with attributes such as database name, table name, IP address, server and wrapper. ... Both tables will be accessed during query planning and decomposition." Attributes are considered analogous to parameters.) to include in access requests from the one or more data retrievers to the one or more data repositories (Das ¶ [0067], "Returning to Query Planning and Optimization, query planning involves generating a set of sub-queries from a given user query based on the data source locations that have parts of the required information to answer the query."; ¶ [0085], "The QE module 36 receives a list of sub-queries 53 from the Query Planning and Optimization module 32 and generates a series of mobile agents 60 to carry out these sub-queries. For each agent, the module 36 creates an itinerary of the various sites to be visited and the data retrieval and processing tasks to be executed at each site.").
Claim 14
Regarding claim 14, Li et al. disclose one or more non-transitory, computer-readable storage media, storing program instructions (Li et al. ¶ [0046], "Storage system 1005 may comprise any computer readable storage media readable by processing system 1020 and capable of storing software 1010. Storage system 1005 may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, cache memory or other data.").
The remaining limitations of claim 14 are similar in scope to that of claim 1 and therefore are rejected for similar reasons as described above.
Claim 17
Regarding claim 17, the rejection of claim 14 is incorporated.
Das further discloses wherein individual ones of the one or more data retrievers interacts with different types of data storage systems to obtain different portions of the data from different ones of the one or more data repositories (Das ¶ [0033], "For example, situation (and threat) assessment in a complex environment requires fusion of several sources and types of data. A query plan 53 from the Query Optimization module 32 is sent to QE 36, which in turn spawns mobile agents 60. Agents involved in the query communicate with each other and may perform “join” after select operations to fuse data where appropriate.").
Claim 18
Regarding claim 18, the rejection of claim 14 is incorporated. The limitations of claim 18 are similar in scope to that of claim 10 and therefore are rejected for similar reasons as described above.
Claim 19
Regarding claim 19, the rejection of claim 14 is incorporated.
Das further discloses wherein at least one of the one or more data repositories is accessed by one of the one or more data retrievers (Das ¶ [0071]-[0076], "The executive agent sends an agent to execute the query at the site where terrain mobility information by NAIs is located. The results are then carried by two other agents in a temporary relation to the two sites of the SALUTE databases. ... The above two sub-queries will be executed in parallel through wrappers w0 and w1 respectively.") using a schema provided as part of a request to add the one data repository (Das ¶ [0035]-[0043], "A preliminary syntax was adopted for modeling data sources residing outside of the cloud, incorporating such constructs as repository, wrapper, interface, and extent. ... A wrapper is an object with an interface that identifies the schema and functionality of a source. When supplied with information on a repository and a query, it returns objects as answers to the query." Agents executing queries through wrappers is considered analogous to data retrievers accessing data repositories via repository schema. This is because wrappers identify the schema of a source, and the schema of a source is defined while modeling the source.).
Claims 2, 6, and 15 are rejected under 35 U.S.C. 103 as obvious over Li et al. in view of Das in view of Ogawa et al. as applied to claims 1, 5, and 14 above, and further in view of US Patent Publication 20200380077 A1 (Ge et al.).
Claim 2
Regarding claim 2, the rejection of claim 1 is incorporated. Li et al. in view of Das in view of Ogawa et al. disclose all the elements of the claimed invention as stated above.
Das further discloses a system configured to:
rewrite the natural language request [based on the one or more conversations to decontextualize the natural language request] (Das ¶ [0032], "A set of sub-queries, arrow 53, is generated in the Query Planning and Optimization QP module 32 corresponding to a high-level search and analytics query, arrow 33, posed to server component 30 by a human analyst via Interface 20 and converted to at least one SQL Query 41 from NLT 31." Conversion is considered analogous to rewriting.), wherein the one or more data retrievers are invoked using the rewritten natural language request (Das ¶ [0032], "An execution plan, arrow 53, for the sub-queries is then passed to the Query Execution module 36, which is responsible for generating and spawning the actual mobile agents 60 and/or direct access queries 61.").
Li et al. in view of Das in view of Ogawa et al. do not explicitly disclose rewriting a prompt using conversation history.
However, Ge et al. disclose wherein the natural language generative application service (Ge et al. ¶ [0012], "the automated agent system may output utterances to convey information to the user in any suitable form, for example, by outputting text to a display device, and/or by outputting audible speech via a speaker.") is configured to:
access a conversation history structure (Ge et al. ¶ [0065], "At 406, method 400 includes, in a computer-accessible conversation history of the multi-turn dialogue, searching a set of previously-resolved entities for a candidate entity having entity properties") for the generative natural language application (Ge et al. ¶ [0022], "Responsive to detecting a fully resolved query, the speech act classifier 110 may delegate handling of the user utterance 106 to a query answering machine 114 configured to return an answer to a query in the user utterance 106.");
determine a relevant history window for the natural language task (Ge et al. ¶ [0063]-[0065], "At 402, method 400 includes using a predefined language model (e.g., any suitable predefined language model as described above) to recognize a suggested entity in an unresolved user utterance ... At 406, method 400 includes, in a computer-accessible conversation history of the multi-turn dialogue, searching a set of previously-resolved entities for a candidate entity having entity properties with a highest confidence correspondence to the entity constraints of the suggested entity" Searching conversation history for candidate entries having the highest confidence correspondence to suggested entities in user query is considered analogous to determining relevancy.);
obtain one or more conversations in the conversation history structure that are within the relevant history window (Ge et al. ¶ [0067], "At 408, method 400 includes rewriting the candidate utterance as a rewritten utterance that includes the candidate intent and that replaces the candidate entity with the suggested entity." Rewriting utterances by replacing the suggested entity in the original utterance with the candidate entity found in the relevancy determination step is considered analogous to obtaining a conversation within the relevant history window.); and
rewrite the natural language request based on the one or more conversations to decontextualize the natural language request (Ge et al. ¶ [0067], "At 408, method 400 includes rewriting the candidate utterance as a rewritten utterance that includes the candidate intent and that replaces the candidate entity with the suggested entity."), [wherein the one or more data retrievers are invoked using the rewritten natural language request].
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Li et al. in view of Das in view of Ogawa et al. to incorporate rewriting a prompt using conversation history as taught by Ge et al.
The suggestion/motivation for doing so would have been that, “since the disambiguation is performed before sending the rewritten utterance, the rewritten utterance may be processed without performing any additional disambiguation steps, which may reduce a latency, memory requirement, and/or computational cost for the downstream query answering machines to return an answer,” as noted by the Ge et al. disclosure in paragraph [0026].
Claim 6
Regarding claim 6, the rejection of claim 5 is incorporated. The limitations of claim 6 are similar in scope to that of claim 2 and therefore are rejected for similar reasons as described above.
Claim 15
Regarding claim 15, the rejection of claim 14 is incorporated. The limitations of claim 15 are similar in scope to that of claim 2 and therefore are rejected for similar reasons as described above.
Claims 3, 7, 8, and 16 are rejected under 35 U.S.C. 103 as obvious over Li et al. in view of Das in view of Ogawa et al. as applied to claims 1, 5, and 14 above, and further in view of US Patent Publication 20100070448 A1 (Omoigui et al.).
Claim 3
Regarding claim 3, the rejection of claim 1 is incorporated. Li et al. in view of Das in view of Ogawa et al. disclose all the elements of the claimed invention as stated above.
Das further discloses wherein the natural language [generative] application service is configured to:
update the retrieval configuration to include the one data retriever (Das ¶ [0094], "The Plan Agent preferably can create, monitor, coordinate, retract, dispatch, and dispose Query Agents as needed.").
Li et al. in view of Das in view of Ogawa et al. do not explicitly disclose adding data retrievers based on a user request.
However, Omoigui et al. disclose wherein the natural language [generative] application service (Omoigui ¶ [1078], "the present invention (preferably using SQML technology) allows a user to issue a query like: "Find me all email messages written by my boss or anyone in research and which relate to this specification on my hard disk."") is configured to:
receive, via the interface (Omoigui ¶ [0513], " The present invention provides for a Blender Wizard, which is a user interface designed to facilitate users in creating Blenders."), a request to add one data retriever to the [generative] natural language application (Omoigui ¶ [0512], "Users are able to create a Blender and add and remove Agents (across Agencies) to and from the Blender." ¶ [0482], "An Agent is the main entry point into the Semantic Network of the present invention. ... Agents can also be configured with a Context Template (described below). In this case, the query will return an object type, but it will incorporate the semantics of the Context Template." Agents are considered analogous to data retrievers, since they are configured to fetch objects using queries.); and
update the retrieval configuration to include the one data retriever (Omoigui ¶ [0512], "Users are able to create a Blender and add and remove Agents (across Agencies) to and from the Blender."; ¶ [0542], "An example of a custom blender is "All.CriticalPriority.All that relates to my most recent documents or email." This Custom Blender may be implemented by an SQML file" Adding and removing agents from a Blender, which is implemented by an SQML file, is considered analogous to updating the retrieval configuration to include a data retriever.).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Li et al.’s natural language generation service to include Omoigui’s data retriever management because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Li et al.’s natural language generation service as modified by Omoigui’s data retriever management can yield a predictable result of increasing system usability, since allowing users to manage data retrievers directly would increase user control over the system. Thus, a person of ordinary skill would have appreciated including in Li et al.’s natural language generation service Omoigui’s data retriever management, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Claim 7
Regarding claim 7, the rejection of claim 5 is incorporated. The limitations of claim 7 are similar in scope to that of claim 3 and therefore are rejected for similar reasons as described above.
Claim 8
Regarding claim 8, the rejection of claim 1 is incorporated. Li et al. in view of Das in view of Ogawa et al. disclose all the elements of the claimed invention as stated above.
Das further discloses updating, by the generative machine learning service, the retrieval configuration to remove the one data retriever (Das ¶ [0094], "The Plan Agent preferably can create, monitor, coordinate, retract, dispatch, and dispose Query Agents as needed.").
Li et al. in view of Das in view of Ogawa et al. do not explicitly disclose removing data retrievers based on a user request.
However, Omoigui et al. disclose receiving, via the interface (Omoigui ¶ [0513], " The present invention provides for a Blender Wizard, which is a user interface designed to facilitate users in creating Blenders."), a request to remove one of the one or more data retrievers from the generative natural language application (Omoigui ¶ [0512], "Users are able to create a Blender and add and remove Agents (across Agencies) to and from the Blender."; ¶ [0482], "An Agent is the main entry point into the Semantic Network of the present invention. ... Agents can also be configured with a Context Template (described below). In this case, the query will return an object type, but it will incorporate the semantics of the Context Template." Agents are considered analogous to data retrievers, since they are configured to fetch objects using queries.); and
updating, by the generative machine learning service, the retrieval configuration to remove the one data retriever (Omoigui ¶ [0512], "Users are able to create a Blender and add and remove Agents (across Agencies) to and from the Blender."; ¶ [0542], "An example of a custom blender is "All.CriticalPriority.All that relates to my most recent documents or email." This Custom Blender may be implemented by an SQML file" Adding and removing agents from a Blender, which is implemented by an SQML file, is considered analogous to updating the retrieval configuration to remove a data retriever.).
The suggestion/motivation for doing so is similar to the suggestion/motivation described above with respect to claim 3.
Claim 16
Regarding claim 16, the rejection of claim 14 is incorporated. The limitations of claim 16 are similar in scope to that of claim 3 and therefore are rejected for similar reasons as described above.
Claims 4, 13, and 20 are rejected under 35 U.S.C. 103 as obvious over Li et al. in view of Das in view of Ogawa et al. as applied to claims 1, 5, and 14 above, and further in view of US Patent 11,373,119 (Doshi et al.).
Claim 4
Regarding claim 4, the rejection of claim 1 is incorporated. Li et al. in view of Das in view of Ogawa et al. disclose all the elements of the claimed invention as stated above.
Li et al. in view of Das in view of Ogawa et al. do not explicitly disclose servicing requests to create applications from computing resources.
However, Doshi et al. disclose wherein the [natural language generative] application service is configured to:
receive a request to create the generative natural language application to be hosted by the natural language generative application service (Doshi et al. ¶ (20)-(21), "In some embodiments, the user 102 may utilize a user interface 105 of the electronic device 104 construct the ML inference application definition 107 for an ML inference application. ... the user 102 may utilize the graph generation tool 106 to construct the inference graph by selecting graphical elements such as nodes and edges available in a library of available ML models and/or transformation operations, and/or specified/defined by the user. The nodes may represent one or more ML models that represent deployable units (e.g., comprising ML model definitions) with executable code that may be hosted and deployed by a model hosting system 140 (e.g., of a ML service) in the provider network 100. The edges may represent one or more data transformation operations to be performed on data to be provided to or generated by the ML models." Creating a ML model definition for deployment is considered analogous to requesting a generative natural language application to be hosted by a natural language generation application service.);
provision one or more computing resources to host the generative natural language application (Doshi et al. ¶ (30), "For instance, the orchestration agent 132 may identify an order of execution flows (operations) to be performed to deploy the ML inference application 118 based on the ML inference application definition (e.g., a symbolic execution graph) and provision the necessary computing resources to deploy the ML models and the data transformation operations defined in the ML inference application definition."); and
provide a network endpoint for accessing the generative natural language application at the one or more computing resources (Doshi et al. ¶ (33), "After receiving separate web service endpoints for the ML model(s) and the data transformation operations defined in the ML inference application definition (or the symbolic execution graph), at (7), the orchestration agent 132 may “host” or “deploy” the ML inference application 118 which may include configuring a web service endpoint 150 for internal/external clients to issue inference requests to and receive inference results from."), wherein the natural language request is submitted via an application interface of the generative natural language application (Doshi et al. ¶ (34), "a user (e.g., user 102) or a different user may utilize an electronic device 104 to issue a request 150 to the ML application orchestration service 114 via the endpoint 150 to execute a ML inference application.").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Li et al. in view of Das in view of Ogawa et al. to incorporate the system for creating applications as taught by Doshi et al.
The suggestion/motivation for doing so would have been that, “a user may directly utilize a compute instance hosted by the provider network to perform a variety of computing tasks … without the user having any control of or knowledge of the underlying compute instance(s) involved,” as noted by the Doshi et al. disclosure in paragraph (17).
Claim 13
Regarding claim 13, the rejection of claim 5 is incorporated. The limitations of claim 13 are similar in scope to that of claim 4 and therefore are rejected for similar reasons as described above.
Claim 20
Regarding claim 20, the rejection of claim 14 is incorporated. Li et al. in view of Das in view of Ogawa et al. disclose all the elements of the claimed invention as stated above.
Li et al. in view of Das in view of Ogawa et al. do not explicitly disclose providing an identifier for associating requests with an application.
However, Doshi et al. disclose the one or more non-transitory, computer-readable storage media of claim 14, storing further program instructions (Doshi et al. ¶ (53), "The code is stored on a computer-readable storage medium, for example, in the form of a computer program comprising instructions executable by one or more processors. The computer-readable storage medium is non-transitory. In some embodiments, one or more (or all) of the operations 500 are performed by one or more components (controller 116 and orchestration agent 132) of FIG. 1.") that when executed on or across the one or more computing devices, cause the one or more computing devices to further implement:
receiving, by the generative machine learning service, a request to create the generative natural language application not to be hosted by the natural language generative application service (Doshi et al. ¶ (20)-(21), "In some embodiments, the user 102 may utilize a user interface 105 of the electronic device 104 construct the ML inference application definition 107 for an ML inference application. ... the user 102 may utilize the graph generation tool 106 to construct the inference graph by selecting graphical elements such as nodes and edges available in a library of available ML models and/or transformation operations, and/or specified/defined by the user. ... The user may also select one or more custom models from a custom model library 128 at (2C) as nodes of the inference graph. The custom models in the custom model library 128 may be constructed and hosted by a third party or by the ML application orchestration service 114." Selecting custom models hosted by a third party is considered analogous to requesting the creation of a generative natural language application not hosted by the natural language generative application service.); and
providing, by the generative machine learning service, an identifier for associating requests with the [generative natural language] application (Doshi et al. ¶ (30), "the orchestration agent 132 may transmit one or more requests to a model hosting system 140 to deploy one or more ML models 136 identified in the ML inference application definition (or the symbolic execution graph). The request can include an identification of the ML model, e.g., a location of the ML model in the ML model library 126, custom model library 128, or model training system 120, the type and number of computing resources needed to host the ML model, etc.").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Li et al. in view of Das in view of Ogawa et al. to incorporate the system for creating applications as taught by Doshi et al.
The suggestion/motivation for doing so is similar to the suggestion/motivation described above with respect to claim 4.
Claim 9 is rejected under 35 U.S.C. 103 as obvious over Li et al. in view of Das in view of Ogawa et al. as applied to claim 5 above, and further in view of US Patent Publication 20210035025 A1 (Kalluri et al.).
Claim 9
Regarding claim 9, the rejection of claim 5 is incorporated. Li et al. in view of Das in view of Ogawa et al. disclose all the elements of the claimed invention as stated above.
Li et al. in view of Das in view of Ogawa et al. do not explicitly disclose a provider network.
However, Kalluri et al. disclose wherein the generative machine learning service is implemented as part of a provider network (Kalluri et al. ¶ [0114], "In some embodiments, a service provider provides a cloud network to one or more end users. Various service models may be implemented by the cloud network, including but not limited to Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). In SaaS, a service provider provides end users the capability to use the service provider's applications, which are executing on the network resources.") and wherein at least one of the data repositories is hosted external to the provider network (Kalluri et al. ¶ [0052], "In some embodiments, data repository 124 stores data generated and/or otherwise accessed by components of ML application 104. ... Data repository 124 may be any type of storage unit and/or device (e.g., a file system, database, collection of tables, or any other storage mechanism) for storing data. ... Alternatively or additionally, data repository 124 may be implemented or executed on a computing system separate from one or more other components of system 100. Data repository 124 may be communicatively coupled to one or more components illustrated in system 100 via a direct connection or via a network.").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Li et al.’s natural language generation service to include Kalluri et al.’s utilization of provider networks and external databases because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Li et al.’s natural language generation service as modified by Kalluri et al.’s utilization of provider networks and external databases can yield a predictable result of increasing service accessibility, since separating service components among multiple computing devices (e.g., using external databases and implementing the service as part of a provider network) would decrease the processing power required to utilize the service via a user device. Thus, a person of ordinary skill would have appreciated including in Li et al.’s natural language generation service Kalluri et al.’s utilization of provider networks and external databases, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Claim 11 is rejected under 35 U.S.C. 103 as obvious over Li et al. in view of Das in view of Ogawa et al. as applied to claim 5 above, and further in view of US Patent Publication 20240193200 A1 (Watanabe et al.).
Claim 11
Regarding claim 11, the rejection of claim 5 is incorporated. Li et al. in view of Das in view of Ogawa et al. disclose all the elements of the claimed invention as stated above.
Li et al. in view of Das in view of Ogawa et al. do not explicitly disclose data repositories that store non-natural language data.
However, Watanabe et al. disclose wherein at least one of the data repositories stores data that is non-natural language data (Watanabe et al. ¶ [0043], "The retrieval target storage unit 121 stores a plurality of images to be retrieved (specifically, still images) in association with image IDs.").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Li et al.’s natural language generation service to include Watanabe et al.’s retrieval of non-natural language data because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Li et al.’s natural language generation service as modified by Watanabe et al.’s retrieval of non-natural language data can yield a predictable result of increasing service usability, since adding compatibility with different media formats in addition to retrieving natural language data would broaden the applicability of the invention. Thus, a person of ordinary skill would have appreciated including in Li et al.’s natural language generation service Watanabe et al.’s retrieval of non-natural language data, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Claim 12 is rejected under 35 U.S.C. 103 as obvious over Li et al. in view of Das in view of Ogawa et al. as applied to claim 5 above, and further in view of US Patent Publication 20160140187 A1 (Bae et al.).
Claim 12
Regarding claim 12, the rejection of claim 5 is incorporated. Li et al. in view of Das in view of Ogawa et al. disclose all the elements of the claimed invention as stated above.
Li et al. in view of Das in view of Ogawa et al. do not explicitly disclose indexing and ingesting data repositories.
However, Bae et al. disclose wherein at least one of the one or more data repositories was ingested and indexed (Bae et al. ¶ [0101], "Referring to FIGS. 1 and 8, the index unit 120 analyzes the text of irregular documents stored in the storage unit 110 (S810), and classifies and indexes the documents according to meanings of sentences or paragraphs in the documents (S820).") by the generative machine learning service (Bae et al. ¶ [0017], "The present invention relates to a system and method for answering a natural language question, and is directed to providing a system and method for answering a natural language question in which sentences or paragraphs of irregular documents are analyzed and the documents are classified and indexed according to meanings and used to provide an answer to a question, so that information retrieval performance can be improved.").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Li et al.’s natural language generation service to include Bae et al.’s ingestion and indexing of databases because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Li et al.’s natural language generation service as modified by Bae et al.’s ingestion and indexing of databases can yield a predictable result of reducing required computing power, since indexing a database would make searching through it faster and more efficient for agents. Thus, a person of ordinary skill would have appreciated including in Li et al.’s natural language generation service Bae et al.’s ingestion and indexing of databases, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB B VOGT whose telephone number is (571)272-7028. The examiner can normally be reached Monday - Friday 9:30am - 7pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D Shah can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACOB B VOGT/ Examiner, Art Unit 2653
/Paras D Shah/Supervisory Patent Examiner, Art Unit 2653
03/18/2026