DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 12, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea – a mental process – without significantly more. The claims recite receiving a request from a user and generating a plan for complying with the request by using one or more of the tools, the plan comprising the performance of one or more tasks. An ordinary person of average intelligence and education is capable of 1) receiving a request to perform one or more tasks from another person/user and 2) creating/generating a plan to accomplish the tasks based upon the tools available to the person receiving the request. The type and origin of the tool, whether software, hardware, computer systems, hand tools, etc., has no bearing on the capability of a person of ordinary intelligence to receive a request and create a plan to comply with the request based upon available tools, as is done in most, if not all, industries throughout the world. This judicial exception is not integrated into a practical application because the additional elements of the claims (i.e., a communication interface for receiving tool documentation, and a large language model (LLM) for analyzing the tool documentation to determine one or more tasks each tool can perform) amount to well-known, generic computer elements performing their well-understood functions (e.g., a communication interface for transferring documents/information, and a large language model for processing those documents/documentation and determining information from them). Further, a person of average intelligence was, before the priority date, and remains capable of interfacing with a large language model and providing tool documentation to it in order to prompt the model for information such as the functions/capabilities of the tool (e.g., ChatGPT, publicly released in Nov. 2022, and other similar large language models).
As such, an LLM alone is a commonly used and well-understood computer element that carries no patentable weight in the context of the claims as written. Further, the claims do not include additional elements sufficient to amount to significantly more than the judicial exception, because the additional elements (i.e., a communication interface (claim 1) and a computer-readable medium and instructions (claim 20)) amount to well-understood, routine, and conventional computer elements and functions that do not patentably distinguish the invention from the prior art, as laid out below. Accordingly, claims 1, 12, and 20 are rejected under 35 U.S.C. § 101 as being directed to a mental process without significantly more than the judicial exception.
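For illustration only, the receive-request/generate-plan operation described above can be expressed as a trivial matching procedure. The following Python sketch is hypothetical (no function or data names are drawn from the claims or the cited references) and merely shows the conventional character of the recited steps:

```python
# Hypothetical sketch of the recited operation: match each task in a user's
# request against the tasks each available tool is documented to perform.
# All names and data are illustrative only.

def generate_plan(requested_tasks, tool_tasks):
    """Return an ordered plan of (task, tool) steps complying with the request."""
    plan = []
    for task in requested_tasks:
        for tool, tasks in tool_tasks.items():
            if task in tasks:
                plan.append((task, tool))
                break  # use the first tool documented to perform the task
    return plan

# Example: tasks assumed to have been determined from tool documentation.
tool_tasks = {
    "search_engine": ["web search"],
    "image_app": ["image recognition"],
}
plan = generate_plan(["web search", "image recognition"], tool_tasks)
# plan -> [("web search", "search_engine"), ("image recognition", "image_app")]
```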
Further, claims 2 – 11 and 13 – 19, which depend from claims 1 and 12 respectively, do not cure the deficiencies of claims 1 and 12, and therefore are rejected under 35 U.S.C. § 101 for the same or similar reasons as laid out above.
Claims 2 and 13 further specify that the tool documentation and request are part of a large language model prompt that is received via the communication interface. However, this does not provide significantly more than the judicial exception because receiving a prompt comprising tool documentation and a request is well understood within the art (as laid out below with reference to the 35 U.S.C. § 103 rejection of claim 2). As such, the limitations of claims 2 and 13 amount to the abstract idea – a mental process – without significantly more, and claims 2 and 13 stand rejected under 35 U.S.C. § 101 for similar reasons as claims 1, 12, and 20 as laid out above.
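For illustration only, the limitation of claims 2 and 13 (a single prompt containing both the tool documentation and the request) may be sketched as routine string assembly. The Python below is hypothetical; none of the names appear in the claims or the cited references:

```python
# Hypothetical sketch: assembling one LLM prompt that contains both the tool
# documentation and the user's request. All names are illustrative only.

def build_prompt(tool_docs, user_request):
    """Concatenate tool documentation and the user request into one prompt."""
    doc_section = "\n".join(
        f"Tool: {name}\nDocumentation: {doc}" for name, doc in tool_docs.items()
    )
    return (
        "You may use the following tools:\n"
        f"{doc_section}\n\n"
        f"User request: {user_request}\n"
        "Generate a plan of tasks that complies with the request."
    )

prompt = build_prompt(
    {"search": "Accepts a query string; returns ranked web results."},
    "Find recent news about solar panels.",
)
```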
Claims 3 and 16 further specify that the tool documentation for at least one of the tools comprises a description of the tool written by a provider of the tool. However, this does not provide significantly more than the judicial exception because tool documentation written by a provider is commonly understood within the art, as laid out below with respect to the 35 U.S.C. § 103 rejections of claims 3 and 16. Therefore, the limitation of claims 3 and 16 amounts to the well-understood, routine, and conventional computer function of receiving documentation (i.e., tool documentation). As such, claims 3 and 16 stand rejected under 35 U.S.C. § 101 for similar reasons as claims 1, 12, and 20 as laid out above.
Claim 4 further specifies that the tool documentation for at least one of the tools specifies one or more input parameters. However, this does not provide significantly more than the judicial exception because tool documentation including input parameters is commonly understood within the art, as laid out below with respect to the 35 U.S.C. § 103 rejection of claim 4. Therefore, the limitation of claim 4 amounts to the well-understood, routine, and conventional computer function of receiving documentation (i.e., tool documentation including input parameters). As such, claim 4 stands rejected under 35 U.S.C. § 101 for similar reasons as claims 1, 12, and 20 as laid out above.
Claim 5 further specifies that the tool documentation for at least one of the tools specifies one or more output parameters. However, this does not provide significantly more than the judicial exception because tool documentation including output parameters is commonly understood within the art, as laid out below with respect to the 35 U.S.C. § 103 rejection of claim 5. Therefore, the limitation of claim 5 amounts to the well-understood, routine, and conventional computer function of receiving documentation (i.e., tool documentation including output parameters). As such, claim 5 stands rejected under 35 U.S.C. § 101 for similar reasons as claims 1, 12, and 20 as laid out above.
Claims 6 and 14 further specify that the one or more tools comprise one or more websites. However, this does not provide significantly more than the judicial exception because tool documentation of websites is commonly understood within the art, as laid out below with respect to the 35 U.S.C. § 103 rejections of claims 6 and 14. Therefore, the limitations of claims 6 and 14 amount to the well-understood, routine, and conventional computer function of receiving documentation (i.e., tool documentation of websites). As such, claims 6 and 14 stand rejected under 35 U.S.C. § 101 for similar reasons as claims 1, 12, and 20 as laid out above.
Claims 7 and 15 further specify that the one or more websites comprise at least one of a search engine, a messaging application, a conferencing application, or an image recognition application. However, this does not provide significantly more than the judicial exception because tool documentation of websites comprising at least one of a search engine, messaging application, conferencing application, or image recognition application is commonly understood within the art, as laid out below with respect to the 35 U.S.C. § 103 rejections of claims 7 and 15. Therefore, the limitations of claims 7 and 15 amount to the well-understood, routine, and conventional computer function of receiving documentation (i.e., tool documentation of websites comprising such applications). As such, claims 7 and 15 stand rejected under 35 U.S.C. § 101 for similar reasons as claims 1, 12, and 20 as laid out above.
Claim 8 further specifies that the plan for complying with the request comprises the performance of one or more image recognition tasks. However, this does not alter the fact that the claims can be construed as a mental process, because tools such as image recognition tools are commonly understood within the art, as laid out below with respect to the 35 U.S.C. § 103 rejection of claim 8. As such, a person receiving a request to complete a task involving such a tool would plan to use such a commonly understood tool. Therefore, the limitation of claim 8 amounts to no more than a mental process. As such, claim 8 stands rejected under 35 U.S.C. § 101 for similar reasons as claims 1, 12, and 20 as laid out above.
Claims 9 and 17 further specify wherein the large language model is operable to receive one or more demonstrations and generating the plan further comprises generating the plan based on the one or more demonstrations (claim 9), and wherein the method further comprises receiving one or more demonstrations at the large language model and using the large language model to generate the plan further comprises generating the plan based on the one or more demonstrations (claim 17). However, this does not provide significantly more than the judicial exception because LLMs receiving one or more demonstrations and generating a plan to use a tool based on the demonstrations is commonly understood within the art, as laid out below with respect to the 35 U.S.C. § 103 rejections of claims 9 and 17. Therefore, the limitations of claims 9 and 17 amount to well-understood, routine, and conventional computer functions. As such, claims 9 and 17 stand rejected under 35 U.S.C. § 101 for similar reasons as claims 1, 12, and 20 as laid out above.
Claims 10 and 18 further specify wherein the large language model is operable to query the user in response to the request, and generating the plan further comprises generating the plan based on a reply to the query. However, this does not provide significantly more than the judicial exception because the use of LLMs, specifically LLMs performing clarification queries or similar actions for a user, is commonly understood within the art, as laid out below with respect to the 35 U.S.C. § 103 rejections of claims 10 and 18. Therefore, the limitations of claims 10 and 18 amount to well-understood, routine, and conventional computer functions. As such, claims 10 and 18 stand rejected under 35 U.S.C. § 101 for similar reasons as claims 1, 12, and 20 as laid out above.
Claims 11 and 19 further specify wherein the tool documentation for at least one of the tools comprises a truncated description of the tool (claim 11), and wherein providing the large language model with tool documentation for each of one or more tools comprises, for at least one of the tools, truncating a description of the tool to generate a truncated description and using the truncated description as the tool documentation (claim 19). However, this does not provide significantly more than the judicial exception because truncating a description and submitting the truncated description as tool documentation to a large language model is commonly understood within the art, as laid out below with respect to the 35 U.S.C. § 103 rejections of claims 11 and 19. Therefore, the limitations of claims 11 and 19 amount to well-understood, routine, and conventional computer functions. As such, claims 11 and 19 stand rejected under 35 U.S.C. § 101 for similar reasons as claims 1, 12, and 20 as laid out above.
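For illustration only, the truncation recited in claims 11 and 19 amounts to routine string handling of the kind sketched below. The Python is hypothetical; the character budget and all names are illustrative and are not drawn from the claims or the cited references:

```python
# Hypothetical sketch: truncating a tool description before it is supplied to
# the LLM as tool documentation. Names and the character budget are illustrative.

def truncate_description(description, max_chars=200):
    """Return the description cut down to max_chars, marking the truncation."""
    if len(description) <= max_chars:
        return description
    return description[:max_chars].rstrip() + " ..."

full_doc = "This API exposes endpoints for search, retrieval, and ranking. " * 10
tool_doc = truncate_description(full_doc, max_chars=80)  # fits a context budget
```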
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 – 5, 11 – 13, 16, and 19 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2019/0332667 A1 to Kyle Mark Williams et al. (hereinafter Williams) in view of U.S. Patent Application Publication No. 2024/0330589 A1 to Manikanta Kotaru (hereinafter Kotaru).
Regarding claim 1, Williams teaches a computing system comprising: a communication interface for receiving tool documentation for each of one or more tools; and (Williams teaches accessing (i.e., receiving) via a network (i.e., communication interface) documentation of a variety of APIs (i.e., tools) from an API discovery service. Williams at ¶¶ [0032] - [0035] and [0039] - [0040].)
… analyzing the tool documentation for each of the one or more tools to determine, for each tool, one or more tasks that the tool is operable to perform, receiving a request from a user, and generating a plan for complying with the request by using one or more of the tools, the plan comprising the performance of one or more of the tasks. (Williams teaches a Natural Language Processing (NLP) machine trained using semi-supervised and/or unsupervised learning or zero-shot learning to process the documentation and learn the functions of the APIs. Williams at ¶¶ [0046] - [0059]. (A person of ordinary skill in the art would have understood before the priority date that an NLP machine could comprise a large language model.) Further, Williams teaches processing the documentation using the NLP machine in order to determine functions made available by the API (i.e., tool). Williams at ¶¶ [0046] - [0059]. Further still, Williams teaches receiving a user query, determining an intent of the user query, then determining a function node within an AI graph structure where the function node is associated with the query and the intent (i.e., the task), and further generating a path through the structure that executes multiple functions in the process of reaching and executing the final function node associated with the query and intent (i.e., building a plan for complying with the request using one or more of the tools). Williams at ¶¶ [0083] - [0092].)
Williams alone, however, does not explicitly teach “a large language model for” analyzing tool documentation. Although Williams teaches a natural language processing machine and artificial intelligence models, it does not explicitly discuss a large language model.
In a similar field of endeavor (e.g., user queries processed by a large language model, and using documentation for applications to aid in answering queries), Kotaru teaches a large language model that receives user prompts and merges technical specifications with user queries to construct responses to the user queries (i.e., a large language model for analyzing tool documentation). Kotaru at ¶¶ [0054] – [0070].
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Williams with the teachings of Kotaru to provide a large language model for analyzing tool documentation and determining the tasks each tool is operable to perform. Doing so would have improved the quality of the system’s responses, as recognized by Kotaru at ¶¶ [0037] – [0038]. Further, a person of ordinary skill in the art would have understood before the priority date that an NLP machine could comprise a large language model. As such, Williams’ natural language processing machine would be commonly understood to comprise a large language model or the like. Thus, a simple substitution of Kotaru’s LLM into Williams’ system would be a predictable application of known techniques within similar fields of endeavor.
Regarding claim 2, Williams in view of Kotaru (hereinafter Williams-Kotaru) teaches all the limitations of claim 1 as laid out above. Further, Kotaru teaches the computing system according to claim 1, wherein the tool documentation and the request are part of a large language model prompt that is received via the communication interface. (Kotaru teaches a system for query answering where a prompt is generated by including technical specification (i.e., tool documentation) with the user query in order to provide better responses. Kotaru at ¶¶ [0054] - [0070].)
Regarding claim 3, Williams-Kotaru teaches all the limitations of claim 1 as laid out above. Further, Williams teaches the computing system according to claim 1, wherein the tool documentation for at least one of the tools comprises a description of the tool written by a provider of the tool. (Williams teaches the documentation may be provided by a service provider (i.e., a provider of the tool). Williams at ¶ [0034].)
Regarding claim 4, Williams-Kotaru teaches all the limitations of claim 1 as laid out above. Further, Williams teaches the computing system according to claim 1, wherein the tool documentation for at least one of the tools specifies one or more input parameters. (Williams teaches the APIs (i.e., tools) may have multiple input parameters. Williams at ¶ [0027]. Further, Williams teaches the documentation providing specific information related to input parameters of the APIs. Williams at ¶ [0034].)
Regarding claim 5, Williams-Kotaru teaches all the limitations of claim 1 as laid out above. Further, Williams teaches the computing system according to claim 1, wherein the tool documentation for at least one of the tools specifies one or more output parameters. (Williams teaches the APIs (i.e., tools) may have multiple input parameters. Williams at ¶ [0027]. Further, Williams teaches the documentation providing specific information related to output parameters of the APIs. Williams at ¶ [0034].)
Regarding claim 11, Williams-Kotaru teaches all the limitations of claim 1 as laid out above. Further, Williams teaches the computing system according to claim 1, wherein the tool documentation for at least one of the tools comprises a truncated description of the tool. (Williams teaches using partial documentation of APIs for the training (i.e., a truncated description of the tool). Williams at ¶ [0040]. Further, Williams teaches using a simplified version of API documentation in its process. Williams at ¶¶ [0040] - [0045] and Figs. 4A and 4B.)
Regarding claim 12, Williams teaches a method for using a large language model to comply with a user request, comprising:
providing the large language model with tool documentation for each of one or more tools; (Williams teaches accessing (i.e., receiving) via a network (i.e., communication interface) documentation of a variety of APIs (i.e., tools) from an API discovery service (i.e., the API discovery service provides tool documentation to an NLP machine). Williams at ¶¶ [0032] - [0035] and [0039] - [0040].)
and using the large language model to analyze the tool documentation for each of the one or more tools to determine, for each tool, one or more tasks that the tool is operable to perform, receive a request from a user, and generate a plan for complying with the request by using one or more of the tools, the plan comprising performance of one or more of the tasks. (Williams teaches a Natural Language Processing (NLP) machine trained using semi-supervised and/or unsupervised learning or zero-shot learning to process the documentation and learn the functions of the APIs. Williams at ¶¶ [0046] - [0059]. (A person of ordinary skill in the art would have understood before the priority date that an NLP machine could comprise a large language model.) Further, Williams teaches processing the documentation using the NLP machine in order to determine functions made available by the API (i.e., tool). Williams at ¶¶ [0046] - [0059]. Further still, Williams teaches receiving a user query, determining an intent of the user query, then determining a function node within an AI graph structure where the function node is associated with the query and the intent (i.e., the task), and further generating a path through the structure that executes multiple functions in the process of reaching and executing the final function node associated with the query and intent (i.e., building a plan for complying with the request using one or more of the tools). Williams at ¶¶ [0083] - [0092].)
Williams alone, however, does not explicitly teach “a large language model for” analyzing tool documentation. Although Williams teaches a natural language processing machine and artificial intelligence models, it does not explicitly discuss a large language model.
In a similar field of endeavor (e.g., user queries processed by a large language model, and using documentation for applications to aid in answering queries), Kotaru teaches a large language model that receives user prompts and merges technical specifications with user queries to construct responses to the user queries (i.e., a large language model for analyzing tool documentation). Kotaru at ¶¶ [0054] – [0070].
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Williams with the teachings of Kotaru to provide a large language model for analyzing tool documentation and determining the tasks each tool is operable to perform. Doing so would have improved the quality of the system’s responses, as recognized by Kotaru at ¶¶ [0037] – [0038]. Further, a person of ordinary skill in the art would have understood before the priority date that an NLP machine could comprise a large language model. As such, Williams’ natural language processing machine would be commonly understood to comprise a large language model or the like. Thus, a simple substitution of Kotaru’s LLM into Williams’ system would be a predictable application of known techniques within similar fields of endeavor.
Regarding claim 13, Williams-Kotaru teaches all the limitations of claim 12 as laid out above. Further, Kotaru teaches the method according to claim 12, wherein the tool documentation and the request are part of a large language model prompt that is received at the large language model. (Kotaru teaches a system for query answering where a prompt is generated by including technical specification (i.e., tool documentation) with the user query in order to provide better responses. Kotaru at ¶¶ [0054] - [0070].)
Regarding claim 16, Williams-Kotaru teaches all the limitations of claim 12 as laid out above. Further, Williams teaches the method according to claim 12, wherein the tool documentation for at least one of the tools comprises a description of the tool written by a provider of the tool. (Williams teaches the documentation may be provided by a service provider (i.e., a provider of the tool). Williams at ¶ [0034].)
Regarding claim 19, Williams-Kotaru teaches all the limitations of claim 12 as laid out above. Further, Williams teaches the method according to claim 12, wherein providing the large language model with tool documentation for each of one or more tools comprises, for at least one of the tools, truncating a description of the tool to generate a truncated description and using the truncated description as the tool documentation. (Williams teaches using partial documentation of APIs for the training (i.e., a truncated description of the tool). Williams at ¶ [0040]. Further, Williams teaches using a simplified version of API documentation in its process. Williams at ¶¶ [0040] - [0045] and Figs. 4A and 4B.)
Regarding claim 20, Williams teaches a non-transitory computer-readable medium having stored thereon computer-readable instructions for using a large language model to comply with a user request, the instructions causing a computing system to: (Williams teaches the system implemented on a computer comprising processors and memory. Williams at ¶¶ [0116] – [0118].)
receive, at a large language model, tool documentation for each of one or more tools; (Williams teaches accessing (i.e., receiving) via a network (i.e., communication interface) documentation of a variety of APIs (i.e., tools) from an API discovery service. Williams at ¶¶ [0032] - [0035] and [0039] - [0040].)
and use the large language model to analyze the tool documentation for each of the one or more tools to determine, for each tool, one or more tasks that the tool is operable to perform, receive a request from a user, and generate a plan for complying with the request by using one or more of the tools, the plan comprising performance of one or more of the tasks. (Williams teaches a Natural Language Processing (NLP) machine trained using semi-supervised and/or unsupervised learning or zero-shot learning to process the documentation and learn the functions of the APIs. Williams at ¶¶ [0046] - [0059]. (A person of ordinary skill in the art would have understood before the priority date that an NLP machine could comprise a large language model.) Further, Williams teaches processing the documentation using the NLP machine in order to determine functions made available by the API (i.e., tool). Williams at ¶¶ [0046] - [0059]. Further still, Williams teaches receiving a user query, determining an intent of the user query, then determining a function node within an AI graph structure where the function node is associated with the query and the intent (i.e., the task), and further generating a path through the structure that executes multiple functions in the process of reaching and executing the final function node associated with the query and intent (i.e., building a plan for complying with the request using one or more of the tools). Williams at ¶¶ [0083] - [0092].)
Williams alone, however, does not explicitly teach “a large language model for” analyzing tool documentation. Although Williams teaches a natural language processing machine and artificial intelligence models, it does not explicitly discuss a large language model.
In a similar field of endeavor (e.g., user queries processed by a large language model, and using documentation for applications to aid in answering queries), Kotaru teaches a large language model that receives user prompts and merges technical specifications with user queries to construct responses to the user queries (i.e., a large language model for analyzing tool documentation). Kotaru at ¶¶ [0054] – [0070].
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Williams with the teachings of Kotaru to provide a large language model for analyzing tool documentation and determining the tasks each tool is operable to perform. Doing so would have improved the quality of the system’s responses, as recognized by Kotaru at ¶¶ [0037] – [0038]. Further, a person of ordinary skill in the art would have understood before the priority date that an NLP machine could comprise a large language model. As such, Williams’ natural language processing machine would be commonly understood to comprise a large language model or the like. Thus, a simple substitution of Kotaru’s LLM into Williams’ system would be a predictable application of known techniques within similar fields of endeavor.
Claims 6 – 7 and 14 – 15 are rejected under 35 U.S.C. 103 as being unpatentable over Williams-Kotaru as applied to claims 1 and 12 above, and further in view of U.S. Patent Application Publication No. 2024/0354319 A1 to Rasvan Dinu et al. (hereinafter Dinu).
Regarding claim 6, Williams-Kotaru teaches all the limitations of claim 1 as laid out above. Williams-Kotaru, however, does not teach all the limitations of claim 6.
In a similar field of endeavor (e.g., processing user queries using a large language model), Dinu teaches the computing system of claim 1, wherein the one or more tools comprise one or more websites. (Dinu teaches using external tools within a website/web-application system (i.e., the external tools can be or are websites) such as search engines. Dinu at ¶¶ [0020] and [0047] - [0049].)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Williams-Kotaru with the teachings of Dinu to provide the limitations of claim 6. Doing so would have improved the output of a language model as recognized by Dinu at ¶ [0040].
Regarding claim 7, Williams-Kotaru in view of Dinu (hereinafter Williams-Kotaru-Dinu) teaches all the limitations of claim 6 as laid out above. Further, Dinu teaches the computing system according to claim 6, wherein the one or more websites comprise at least one of a search engine, a messaging application, a conferencing application, or an image recognition application. (Dinu teaches using external tools within a website/web-application system (i.e., the external tools can be or are websites) such as search engines. Dinu at ¶¶ [0020] and [0047] - [0049].)
Regarding claim 14, Williams-Kotaru teaches all the limitations of claim 12 as laid out above. Williams-Kotaru, however, does not teach all the limitations of claim 14.
In a similar field of endeavor (e.g., processing user queries using a large language model), Dinu teaches the method according to claim 12, wherein the one or more tools comprise one or more websites. (Dinu teaches using external tools within a website/web-application system (i.e., the external tools can be or are websites) such as search engines. Dinu at ¶¶ [0020] and [0047] - [0049].)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Williams-Kotaru with the teachings of Dinu to provide the limitations of claim 14. Doing so would have improved the output of a language model as recognized by Dinu at ¶ [0040].
Regarding claim 15, Williams-Kotaru in view of Dinu (hereinafter Williams-Kotaru-Dinu) teaches all the limitations of claim 14 as laid out above. Further, Dinu teaches the method according to claim 14, wherein the one or more websites comprise at least one of a search engine, a messaging application, a conferencing application, or an image recognition application. (Dinu teaches using external tools within a website/web-application system (i.e., the external tools can be or are websites) such as search engines. Dinu at ¶¶ [0020] and [0047] - [0049].)
Claims 8, 10, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Williams-Kotaru as applied to claims 1 and 12 above, and further in view of U.S. Patent Application Publication No. 2023/0359789 A1 to David Andre et al. (hereinafter Andre).
Regarding claim 8, Williams-Kotaru teaches all the limitations of claim 1 as laid out above. Williams-Kotaru, however, does not teach all the limitations of claim 8.
In a similar field of endeavor (e.g., processing natural language queries and forming an action set in response), Andre teaches the computing system according to claim 1, wherein the plan for complying with the request comprises the performance of one or more image recognition tasks. (Andre teaches generating an action set (i.e., a plan for performing a user request) including performing an image search (i.e., image recognition task). Andre at ¶ [0065].)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Williams-Kotaru with the teachings of Andre to provide the limitations of claim 8. Doing so would have increased the accuracy of action sets (i.e., task plans) as recognized by Andre at ¶ [0006].
Regarding claim 10, Williams-Kotaru teaches all the limitations of claim 1 as laid out above. Williams-Kotaru, however, does not teach all the limitations of claim 10.
In a similar field of endeavor (e.g., processing natural language queries and forming an action set in response), Andre teaches the computing system according to claim 1, wherein the large language model is operable to query the user in response to the request, and generating the plan further comprises generating the plan based on a reply to the query. (Andre teaches, in the process of generating a request embedding (i.e., determining the intent and goal of the user.), causing a clarification prompt for the user that prompts the user to reply to the confirmation prompt and generating an alternate request embedding based on the user response (i.e., the request/intent is altered, which would change the final function node of Williams, and therefore the path through the graph). Andre at ¶¶ [0113] - [0114].)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Williams-Kotaru with the teachings of Andre to provide the limitations of claim 10. Doing so would have increased the accuracy of action sets (i.e., task plans) as recognized by Andre at ¶ [0006].
Regarding claim 18, Williams-Kotaru teaches all the limitations of claim 12 as laid out above. Williams-Kotaru, however, does not teach all the limitations of claim 18.
In a similar field of endeavor (e.g., processing natural language queries and forming an action set in response), Andre teaches the method according to claim 12, further comprising using the large language model to query the user in response to the request, and using the large language model to generate the plan further comprises generating the plan based on a reply to the query. (Andre teaches, in the process of generating a request embedding (i.e., determining the intent and goal of the user), causing a clarification prompt to be presented to the user, prompting the user to reply to the clarification prompt, and generating an alternate request embedding based on the user's response (i.e., the request/intent is altered, which would change the final function node of Williams, and therefore the path through the graph). Andre at ¶¶ [0113]-[0114].)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Williams-Kotaru with the teachings of Andre to provide the limitations of claim 18. Doing so would have increased the accuracy of action sets (i.e., task plans) as recognized by Andre at ¶ [0006].
Claims 9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Williams-Kotaru as applied to claims 1 and 12 above, and further in view of the non-patent literature "Toolformer: Language Models Can Teach Themselves to Use Tools" by Timo Schick et al. (hereinafter Schick).
Regarding claim 9, Williams-Kotaru teaches all the limitations of claim 1 as laid out above. Williams-Kotaru, however, does not teach all the limitations of claim 9.
In a similar field of endeavor (e.g., training transformer models to learn to use external tools), Schick teaches the computing system according to claim 1, wherein the large language model is operable to receive one or more demonstrations, and generating the plan further comprises generating the plan based on the one or more demonstrations. (Schick teaches using human-written examples (i.e., demonstrations) of how an API can be used to fine-tune a large language model to perform the API calls itself (i.e., use the external tools). Schick at Section 1 (Introduction), Section 2 (Approach), and Section 4 (Experiments).)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Williams-Kotaru with the teachings of Schick to provide the limitations of claim 9. Doing so would have improved zero-shot learning performance of the transformer models as recognized by Schick at section 7: conclusion.
Regarding claim 17, Williams-Kotaru teaches all the limitations of claim 12 as laid out above. Williams-Kotaru, however, does not teach all the limitations of claim 17.
In a similar field of endeavor (e.g., training transformer models to learn to use external tools), Schick teaches the method according to claim 12, wherein the method further comprises receiving one or more demonstrations at the large language model, and using the large language model to generate the plan further comprises generating the plan based on the one or more demonstrations. (Schick teaches using human-written examples (i.e., demonstrations) of how an API can be used to fine-tune a large language model to perform the API calls itself (i.e., use the external tools). Schick at Section 1 (Introduction), Section 2 (Approach), and Section 4 (Experiments).)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Williams-Kotaru with the teachings of Schick to provide the limitations of claim 17. Doing so would have improved zero-shot learning performance of the transformer models as recognized by Schick at section 7: conclusion.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAMERON KENNETH YOUNG whose telephone number is (703)756-1527. The examiner can normally be reached Mon - Fri, 9:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders can be reached at 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CAMERON KENNETH YOUNG/Examiner, Art Unit 2655
/ANDREW C FLANDERS/Supervisory Patent Examiner, Art Unit 2655