DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-21 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant has amended the claims to include receiving, by a network device, code for generating an interface for communicating with a large language model, wherein the interface is an application programming interface. Upon further consideration of the claim amendments, it is believed Khosla teaches such language: as another example, the user may ask the natural language question answering service 102 to create an API call, or create and run an API call (e.g., the user asks “please create a bucket for me named ‘bucket3’ in my network-based storage service”), see par. [0070]. Khosla teaches creating an API call, or creating and running an API call, using a model that is not the LLM component; however, it is not clear whether the API is for communicating with the LLM, although it is entirely possible, since par. [0073] teaches that the LLM component 106 may also generate API calls (or run them for the customer) based on the question (e.g., the customer wants an API call to create a bucket in a network-based storage service and the LLM component 106 generates it). A new search was made, and art was found to Callegari to clarify these aspects of Khosla. Callegari teaches a computing system for revising large language model (LLM) input prompts. In one example, the computing system includes at least one processor configured to cause a prompt interface for a trained LLM to be presented, see par. [0004]. Another aspect provides a computing system for revising large language model (LLM) input prompts.
The computing system comprises at least one processor configured to execute a prompt interface application programming interface (API) for a trained LLM, receive, via the prompt interface API, a prompt including an instruction for the LLM to generate an output, provide first input including the prompt to the LLM, generate, in response to the first input, a first response to the prompt via the LLM, perform assessment and revision of the prompt, at least in part by assessing the first response according to assessment criteria to generate an assessment report for the first response, see par. [0090].
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim(s) 1, 3, 5, 8-10, 12, 15, 16, 18-21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khosla U.S. PAP 2025/0005057 A1 in view of Callegari U.S. PAP 2024/0362422 A1.
Regarding claim 1 Khosla teaches a method, comprising: receiving, by a network device, code for generating an interface for communicating with a large language model (the LLM component 106 may additionally be trained to provide and/or run application programming interface (API) commands in response to a natural language question or prompt, see par. [0064]); generating, by the network device, the interface based on the code (run application programming interface (API) commands, see par. [0064]); providing, by the network device, to the large language model, and via the interface, a question associated with the network device (The LLM component 106 may receive (via the natural language question answering service 102) a natural language question (or prompt) which asks for a command to be run in a network-based system, see par. [0064]); and receiving, by the network device and from the large language model, an answer to the question associated with the network device (The LLM component 106 may gain access to the user's credentials (e.g., which services they are subscribed to, usage history, knowledge graphs of the customer, etc.) to generate an answer which can include API commands as an answer, see par. [0064]).
Khosla teaches generating an API call (As another example, the user may ask the natural language question answering service 102 to create an API call, or create and run an API call (e.g., the user asks “please create a bucket for me named ‘bucket3’ in my network-based storage service”), see par. [0070]).
However Khosla does not teach receiving, by a network device, code for generating an interface for communicating with a large language model, wherein the interface is an application programming interface.
In the same field of endeavor Callegari teaches a computing system for revising large language model (LLM) input prompts. In one example, the computing system includes at least one processor configured to cause a prompt interface for a trained LLM to be presented, see par. [0004]. Another aspect provides a computing system for revising large language model (LLM) input prompts. The computing system comprises at least one processor configured to execute a prompt interface application programming interface (API) for a trained LLM, receive, via the prompt interface API, a prompt including an instruction for the LLM to generate an output, provide first input including the prompt to the LLM, generate, in response to the first input, a first response to the prompt via the LLM, perform assessment and revision of the prompt, at least in part by assessing the first response according to assessment criteria to generate an assessment report for the first response, see par. [0090].
It would have been obvious to one of ordinary skill in the art to combine the Khosla invention with the teachings of Callegari for the benefit of performing assessment and revision of the prompt, at least in part by assessing the first response according to assessment criteria to generate an assessment report for the first response, see par. [0090].
Regarding claim 3 Khosla teaches the method of claim 1, further comprising: providing, to the large language model and via the interface, a question associated with configuring the network device (the LLM component 106 may receive (via the natural language question answering service 102) a natural language question (or prompt) which asks for a command to be run in a network-based system, see par. [0064]); and receiving, from the large language model, an answer to the question associated with configuring the network device (generate an answer which provides the API command to execute the function of the network-based on-demand code execution service, see par. [0064]).
Regarding claim 5 Khosla teaches the method of claim 1, wherein generating the interface based on the code comprises: filtering an output of an operational mode command or a configuration mode command to generate the interface for communicating with the large language model (The LLM component 106 may also generate API calls (or run them for the customer) based on the question (e.g., customer wants an API call to create a bucket in a network-based storage service and the LLM component 106 generates it). Additionally, the LLM component 106 may pre-determine questions for customers based on their activity (e.g., referencing a knowledge graph and determining that the customer likes links to other passages rather than answers with long text in the answer itself), see par. [0073]).
Regarding claim 8 Khosla teaches a network device (customer devices 122, see figure 1), comprising: one or more memories; and one or more processors (processor can be in communication with the memory for maintaining computer-executable instructions, see par. [0020]) to: receive code for generating an interface for communicating with a large language model (the LLM component 106 may additionally be trained to provide and/or run application programming interface (API) commands in response to a natural language question or prompt, see par. [0064]); provide, to the large language model and via the interface, a question associated with the network device (The LLM component 106 may receive (via the natural language question answering service 102) a natural language question (or prompt) which asks for a command to be run in a network-based system, see par. [0064]); and receive, from the large language model, an answer to the question associated with the network device (The LLM component 106 may gain access to the user's credentials (e.g., which services they are subscribed to, usage history, knowledge graphs of the customer, etc.) to generate an answer which can include API commands as an answer, see par. [0064]).
However Khosla does not teach receive code for generating an interface for communicating with a large language model, wherein the interface is an application programming interface; generate the interface based on executing the code.
In the same field of endeavor Callegari teaches a computing system for revising large language model (LLM) input prompts. In one example, the computing system includes at least one processor configured to cause a prompt interface for a trained LLM to be presented, see par. [0004]. Another aspect provides a computing system for revising large language model (LLM) input prompts. The computing system comprises at least one processor configured to execute a prompt interface application programming interface (API) for a trained LLM, receive, via the prompt interface API, a prompt including an instruction for the LLM to generate an output, provide first input including the prompt to the LLM, generate, in response to the first input, a first response to the prompt via the LLM, perform assessment and revision of the prompt, at least in part by assessing the first response according to assessment criteria to generate an assessment report for the first response, see par. [0090].
It would have been obvious to one of ordinary skill in the art to combine the Khosla invention with the teachings of Callegari for the benefit of performing assessment and revision of the prompt, at least in part by assessing the first response according to assessment criteria to generate an assessment report for the first response, see par. [0090].
Regarding claim 9 Khosla teaches the network device of claim 8, wherein the one or more processors are further to: provide, to the large language model and via the interface, a request to troubleshoot the network device (a customer can create a support ticket case with a title of the case and a detailed description of the issue related to the case, see par. [0039]); and receive, from the large language model and based on the request, a response identifying one or more issues associated with the network device (Agents of this search system may analyze the case and suggest ways to resolve the issue related to case and also annotate the case (e.g., issue related to debugging a network-based server when it is experiencing lag issues), see par. [0039]).
Regarding claim 10 Khosla teaches the network device of claim 8, wherein the one or more processors are further to: provide, to the large language model and via the interface, a request to troubleshoot the network device (a customer can create a support ticket case with a title of the case and a detailed description of the issue related to the case, see par. [0039]); and receive, from the large language model and based on the request, instructions that cause the network device to correct one or more issues associated with the network device (Agents of this search system may analyze the case and suggest ways to resolve the issue related to case and also annotate the case (e.g., issue related to debugging a network-based server when it is experiencing lag issues), see par. [0039]).
Regarding claim 12 Khosla teaches the network device of claim 8, wherein the one or more processors are further to: provide, to the large language model and via the interface, a request to analyze an output of the network device (a customer can create a support ticket case with a title of the case and a detailed description of the issue related to the case, see par. [0039]); and receive, from the large language model and based on the request, an analysis of the output of the network device (Agents of this search system may analyze the case and suggest ways to resolve the issue related to case and also annotate the case (e.g., issue related to debugging a network-based server when it is experiencing lag issues), see par. [0039]).
Regarding claim 15 Khosla teaches a non-transitory computer-readable medium storing a set of instructions (a computer-readable medium drive 224, see par. [0049]), the set of instructions comprising: one or more instructions that, when executed by one or more processors of a network device (computing system may include one or more computers or processors, see par. [0078]), cause the network device to: generate the interface based on the code, wherein the interface is an application programming interface (run application programming interface (API) commands, see par. [0064]); provide, to the large language model and via the interface, a question associated with the network device (The LLM component 106 may receive (via the natural language question answering service 102) a natural language question (or prompt) which asks for a command to be run in a network-based system, see par. [0064]); and receive, from the large language model, an answer to the question associated with the network device (The LLM component 106 may gain access to the user's credentials (e.g., which services they are subscribed to, usage history, knowledge graphs of the customer, etc.) to generate an answer which can include API commands as an answer, see par. [0064]).
However Khosla does not teach receive code for generating an interface for communicating with a large language model, wherein the interface is an application programming interface.
In the same field of endeavor Callegari teaches a computing system for revising large language model (LLM) input prompts. In one example, the computing system includes at least one processor configured to cause a prompt interface for a trained LLM to be presented, see par. [0004]. Another aspect provides a computing system for revising large language model (LLM) input prompts. The computing system comprises at least one processor configured to execute a prompt interface application programming interface (API) for a trained LLM, receive, via the prompt interface API, a prompt including an instruction for the LLM to generate an output, provide first input including the prompt to the LLM, generate, in response to the first input, a first response to the prompt via the LLM, perform assessment and revision of the prompt, at least in part by assessing the first response according to assessment criteria to generate an assessment report for the first response, see par. [0090].
It would have been obvious to one of ordinary skill in the art to combine the Khosla invention with the teachings of Callegari for the benefit of performing assessment and revision of the prompt, at least in part by assessing the first response according to assessment criteria to generate an assessment report for the first response, see par. [0090].
Regarding claim 16 Khosla teaches the non-transitory computer-readable medium of claim 15, wherein the one or more instructions further cause the network device to: provide, to the large language model and via the interface, a question associated with configuring the network device (the LLM component 106 may receive (via the natural language question answering service 102) a natural language question (or prompt) which asks for a command to be run in a network-based system, see par. [0064]); and receive, from the large language model, an answer to the question associated with configuring the network device (generate an answer which provides the API command to execute the function of the network-based on-demand code execution service, see par. [0064]).
Regarding claim 18 Khosla teaches the non-transitory computer-readable medium of claim 15, wherein the one or more instructions further cause the network device to: provide, to the large language model and via the interface, a request for instructions to configure the network device (the LLM component 106 may receive (via the natural language question answering service 102) a natural language question (or prompt) which asks for a command to be run in a network-based system, see par. [0064]); and receive, from the large language model and based on the request, instructions for configuring the network device (generate an answer which provides the API command to execute the function of the network-based on-demand code execution service, see par. [0064]).
Regarding claim 19 Khosla teaches the non-transitory computer-readable medium of claim 15, wherein the one or more instructions further cause the network device to: provide, to the large language model and via the interface, a request to troubleshoot the network device (a customer can create a support ticket case with a title of the case and a detailed description of the issue related to the case, see par. [0039]); and receive, from the large language model and based on the request, a response identifying one or more issues associated with the network device (Agents of this search system may analyze the case and suggest ways to resolve the issue related to case and also annotate the case (e.g., issue related to debugging a network-based server when it is experiencing lag issues), see par. [0039]).
Regarding claim 20 Khosla teaches the non-transitory computer-readable medium of claim 15, wherein the one or more instructions further cause the network device to: provide, to the large language model and via the interface, a request to troubleshoot the network device (a customer can create a support ticket case with a title of the case and a detailed description of the issue related to the case, see par. [0039]); and receive, from the large language model and based on the request, instructions that cause the network device to correct one or more issues associated with the network device (Agents of this search system may analyze the case and suggest ways to resolve the issue related to case and also annotate the case (e.g., issue related to debugging a network-based server when it is experiencing lag issues), see par. [0039]).
Regarding claim 21 Callegari teaches the method of claim 1, wherein generating the interface based on executing the code comprises: utilizing a command to filter an output of a command to generate the application programming interface for communicating with the large language model (execute a prompt interface application programming interface (API) for a trained LLM, receive, via the prompt interface API, a prompt including an instruction for the LLM to generate an output, see par. [0090]).
Claim(s) 2, 4, 7, 11, 13, 14 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Khosla U.S. PAP 2025/0005057 A1, in view of Callegari U.S. PAP 2024/0362422 A1, further in view of Panemangalore U.S. PAP 2016/0260430 A1.
Regarding claim 2 Khosla in view of Callegari does not teach the method of claim 1, further comprising: providing, to the large language model and via the interface, a request for a translation of an output of the network device to another language; and receiving, from the large language model and based on the request, the translation of the output to the other language.
In the same field of endeavor Panemangalore teaches a more universal, easy, natural, and vendor-agnostic interface to configure, manage, and/or monitor devices in networks, see abstract. The NLP input command issued via the client on the end-point computing device 705 is received (605) by the NLP interaction end-point server 715. The server 715, which may be a chat server, forwards the command to the network management NLP translation system 720 to process the NLP input, see par. [0097].
It would have been obvious to one of ordinary skill in the art to combine the Khosla invention with the teachings of Panemangalore for the benefit of monitoring devices in networks in a more universal, easy, and natural way, see abstract.
Regarding claim 4 Khosla in view of Callegari does not teach the method of claim 1, further comprising: providing, to the large language model and via the interface, a request to analyze a route summary of the network device; and receiving, from the large language model and based on the request, an analysis of the route summary of the network device.
In the same field of endeavor Panemangalore teaches that aspects of the present patent document are directed to information handling systems. For purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, originate, route, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes, see par. [0126].
It would have been obvious to one of ordinary skill in the art to combine the Khosla invention with the teachings of Panemangalore for the benefit of monitoring devices in networks in a more universal, easy, and natural way, see abstract.
Regarding claim 7 Khosla in view of Callegari does not teach the method of claim 1, further comprising: providing, to the large language model and via the interface, a request for instructions to configure the network device; and receiving, from the large language model and based on the request, instructions for configuring the network device.
In the same field of endeavor Panemangalore teaches a single point of administration, management, and monitoring across a network, such as an entire data center, can provide a user-friendly natural language interface. For example, in embodiments, a voice-based chat or messaging interface may be used to “live chat” with networking devices using messaging and presence protocol or protocols, such as the Extensible Messaging and Presence Protocol (XMPP). In embodiments, such messaging and presence protocols create a shared bus over which networking devices can be configured, managed, and/or monitored using traditional command line interfaces (CLIs), see par. [0030].
It would have been obvious to one of ordinary skill in the art to combine the Khosla invention with the teachings of Panemangalore for the benefit of monitoring devices in networks in a more universal, easy, and natural way, see abstract.
Regarding claim 11 Khosla in view of Callegari does not teach the network device of claim 8, wherein the one or more processors are further to: provide, to the large language model and via the interface, a request for instructions to onboard the network device with a network; and receive, from the large language model and based on the request, instructions that cause the network device to onboard with the network.
In the same field of endeavor Panemangalore teaches demands for data and communications have resulted in vast arrays of ever expanding networks. As these networks expand, new equipment is added at different times and for different reasons, such as to add new functionality and features, see par. [0004]. In FIG. 8, the NLP chat interface 800 displays for a user client 805 a listing of contacts 810 that represent or are avatars for devices that can be configured, managed, or monitored via the interface 800. For example, the user may connect to a target switch avatar and issue an NLP query (or input, which may be used interchangeably with query herein), such as “create virtual-LAN 10 and add it on ports 1 and 2.” In embodiments, a voice-to-text module converts the user's speech into text; and in embodiments, the client 800 may display the converted text in a command window 820, see par. [0095].
It would have been obvious to one of ordinary skill in the art to combine the Khosla invention with the teachings of Panemangalore for the benefit of adding new functionality and features to the networks, see par. [0004].
Regarding claim 13 Khosla in view of Callegari does not teach the network device of claim 8, wherein the one or more processors are further to: provide, to the large language model and via the interface, a request to analyze traffic associated with the network device; and receive, from the large language model and based on the request, an analysis of the traffic associated with the network device.
In the same field of endeavor Panemangalore teaches that these terms, along with similar terms such as “data,” “data traffic,” “information,” “cell,” etc., may be replaced by other terminologies referring to a group of bits, and may be used interchangeably, see par. [0026]. In embodiments, analysis may be performed on one or more of the NLP input, target-specific NLP inputs, and command template list, and empty/variable slots in the template(s) in which values are expected are completed. In embodiments, temporal analysis may be performed so that commands, including preparatory actions, are performed and performed in the correct order, see par. [0107].
It would have been obvious to one of ordinary skill in the art to combine the Khosla invention with the teachings of Panemangalore for the benefit of monitoring devices in networks in a more universal, easy, and natural way, see abstract.
Regarding claim 14 Khosla in view of Callegari does not teach the network device of claim 8, wherein the one or more processors are further to: provide, to the large language model and via the interface, a request for instructions to offload traffic of the network device; and receive, from the large language model and based on the request, instructions that cause the network device to offload the traffic of the network device to another network device.
In the same field of endeavor Panemangalore teaches that aspects of the present patent document are directed to information handling systems. For purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, originate, route, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes, see par. [0126].
It would have been obvious to one of ordinary skill in the art to combine the Khosla invention with the teachings of Panemangalore for the benefit of monitoring devices in networks in a more universal, easy, and natural way, see abstract.
Regarding claim 17 Khosla in view of Callegari does not teach the non-transitory computer-readable medium of claim 15, wherein the one or more instructions further cause the network device to: provide, to the large language model and via the interface, a request to analyze a route summary of the network device; and receive, from the large language model and based on the request, an analysis of the route summary of the network device.
In the same field of endeavor Panemangalore teaches that aspects of the present patent document are directed to information handling systems. For purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, calculate, determine, classify, process, transmit, receive, retrieve, originate, route, switch, store, display, communicate, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes, see par. [0126].
It would have been obvious to one of ordinary skill in the art to combine the Khosla invention with the teachings of Panemangalore for the benefit of monitoring devices in networks in a more universal, easy, and natural way, see abstract.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Pertinent prior art is made available on form 892.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael Ortiz-Sanchez whose telephone number is (571)270-3711. The examiner can normally be reached Monday-Friday, 9AM-6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL ORTIZ-SANCHEZ/ Primary Examiner, Art Unit 2656