DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
The following is a non-final office action.
Claims 1-20 are currently pending and have been examined on their merits.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1-20 recite a method (i.e., a series of steps), and therefore each claim falls within one of the four statutory categories.
Step 2A prong 1 (Is a judicial exception recited?):
The representative claim 1 recites: a method comprising: in response to a user input, instantiating a generative service, the generative service receiving a natural language input at an input region; in response to user input provided to the input region, analyzing the user input to determine an action intent; evaluating a first degree of correlation between the action intent and a first subject-matter expertise of a first automated assistant service and a second degree of correlation between the action intent and a second subject-matter expertise of a second automated assistant service; in response to the first degree of correlation being greater than the second degree of correlation, causing the first automated assistant service to generate a prompt comprising: predefined query prompt text associated with the first subject-matter expertise; at least a portion of the natural language user input; and text extracted from the content displayed in the content panel; providing the prompt and obtaining a generative response; and causing (presentation) of a result based on the generative response.
Claim 10: a method comprising: receiving a natural language input at an input region; in response to a first user input provided to the input region, analyzing the first user input to determine a first action intent; in response to the first action intent corresponding to a first subject-matter expertise of a first automated assistant service, causing the first automated assistant service to generate a first prompt comprising: first predefined query prompt text associated with the first subject-matter expertise; and text extracted from the content displayed in the content panel; providing the first prompt and obtaining a first generative response; causing (presentation) of a first result based on the first generative response; in response to a second user input provided to the input region, analyzing the second user input to determine a second action intent; in response to the second action intent corresponding to a second subject-matter expertise of a second automated assistant service, causing the second automated assistant service to generate a second prompt comprising second predefined query prompt text associated with the second subject-matter expertise; providing the second prompt and obtaining a second generative response; and causing (presentation) of a second result based on the second generative response.
Claim 15: a method comprising: in response to a first user input provided to the input region, analyzing the first user input to determine an action intent; in response to the action intent corresponding to a first subject-matter expertise of a first automated assistant service, causing the first automated assistant service to generate a first prompt comprising: first predefined query prompt text associated with the first subject-matter expertise; and at least a portion of the natural language user input; providing the first prompt and obtaining a first generative response; causing (presentation) of a first result based on the first generative response; causing a second automated assistant service to generate a second prompt comprising: second predefined query prompt text associated with a second subject-matter expertise; and at least a portion of the first generative response; providing the second prompt and obtaining a second generative response; and causing (presentation) of a second result based on the second generative response.
The claims recite a certain method of organizing human activity. The claims are considered a method of organizing human activity because they relate to managing personal behavior or relationships or interactions between people: the claims recite a method for receiving a user prompt, identifying a user intent of the prompt, identifying an assistant for generating a response based on the intent, and generating a response to the prompt. The method merely recites a series of steps for receiving a user input and identifying an assistant service that can perform a desired response based on a user's intent.
Alternatively, the claims also recite a mental process. The claims recite merely receiving and analyzing user input, such as receiving a user prompt, determining a desired intent, identifying an assistant based on the identified intent, and generating a response to the prompt. The examiner finds these limitations to merely be observations, evaluations, judgments, and opinions, as the claims recite receiving and evaluating a user prompt to determine an intent and evaluating the intent to identify an assistant service for generating a response to the prompt. Furthermore, the examiner finds that a user could mentally, or with the aid of pen and paper, perform the steps of receiving and analyzing user input to determine an intent and identifying an assistant service based on the intent for generating a result.
Therefore, the examiner finds the claims to recite an abstract idea.
Step 2A Prong 2 (Is the exception integrated into a practical application?): The claims additionally recite:
Claim 1: A computer, a multi-participant interface for a content collaboration platform, causing display of a graphical user interface having a content panel depicting content of a content item managed by a content collaboration system, the graphical user interface displayed on a display of a client device; providing the prompt to an external generative output engine and obtaining a generative response from the external generative output engine.
Claim 10: A computer, a multi-participant interface for a content collaboration platform, causing display of a graphical user interface having a content panel depicting content of a content item managed by a content collaboration system, the graphical user interface displayed on a display of a client device; providing the prompt to an external generative output engine and obtaining a generative response from the external generative output engine.
Claim 15: A computer, a multi-participant interface for a content collaboration platform, causing display of a graphical user interface having a content panel depicting content of a content item managed by a content collaboration system, the graphical user interface displayed on a display of a client device; providing the prompt to an external generative output engine and obtaining a generative response from the external generative output engine.
However, the limitations merely amount to adding the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). Merely utilizing a generic computer system to perform the claim limitations of receiving and analyzing a user input prompt to identify an assistant service and generate a response is not an improvement in a technology or technical field. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B (Does the claim recite additional elements that amount to significantly more than the judicial exception?): As discussed above, the additional limitations amount to adding the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). Therefore, the additional elements do not integrate the judicial exception into a practical application and do not amount to significantly more.
Claims 2-9, 11-14, and 16-20 merely further narrow the abstract idea of receiving and analyzing a user prompt to determine an assistant service.
The dependent claims recite the following additional elements:
Claims 9 and 20: an issue tracking plugin
Claims 8-9, 11-12, and 18-20: a first and second set of plugins.
However, the additional elements merely amount to "apply it," i.e., the application of generic computer elements to perform the abstract idea.
Therefore, claims 1-20 are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Grinberg (US 2025/0061285).
Claim 1: Grinberg discloses a computer-implemented method for operating a multi-participant interface for a content collaboration platform, the method comprising: causing display of a graphical user interface having a content panel depicting content of a content item managed by a content collaboration system, the graphical user interface displayed on a display of a client device (Paragraph [0007-0008]; [0080-0083]; [0099]; [0120]; Fig. 5, embodiments consistent with the present disclosure involve systems, methods, and computer readable medium for building an application incorporating AI functionality. Exemplary operations may include receiving a selection of an AI assistant add-on and at least one of a plurality of SaaS platform elements, enabling implementation of permission for providing access to data from the at least one of the plurality of linked SaaS platform elements. Some embodiments involve performing selection operations for a plurality of distinct artificial intelligence agents. Exemplary embodiments include sending via the application, a prompt to a plurality of distinct AI agents, and receiving from each of the plurality of distinct AI agents a response to the prompt. The operation may further include comparing information associated with each of the received responses, and selecting at least one AI agent from the plurality of distinct AI agents. A software application may include a user interface enabling users to interact with and access features and functionalities of the software application. User interface is consistent with use of the term as described herein. For example, displaying a user interface may be done by showing a message on a screen, such as a pop-up box, or a list of selectable options);
in response to a user input, instantiating a generative service, the generative service causing display of a generative interface panel within the graphical user interface, the generative interface panel configured to receive a natural language input at an input region (Paragraph [0117-0121]; Fig. 4, in some embodiments disclosed herein, processes are described that explain ways to select an AI agent from a pool of AI agents. Embodiments may involve selection operations for a plurality of distinct AI agents. An AI agent may be distinct if it provides at least one function, purpose, solution, task, or use that distinguishes it from other agents. For example, one AI agent may be used to analyze text, while another may be used to analyze audio, based on a developer's needs or requests. Some embodiments may involve accessing an application that employs AI functionality. Some examples involve sending a prompt to a plurality of distinct AI agents. A prompt is a message, question, or indication presented to elicit a response or action. It can be a textual message, a dialog box, or an input field where an entity is pinged);
in response to user input provided to the input region of the generative interface panel, analyzing the user input to determine an action intent (Paragraph [0117-0121]; Fig. 4, the operations further include analyzing a context associated with the prompt. A context refers to circumstances, conditions, environment, or background in which something exists. Analyzing a context for a prompt refers to examination or consideration of surrounding information and/or relevant factors to better understand the meaning and/or intent of the prompt. For example, a context may refer to any kind of information that associates a query with a specific topic or sub-element. Increasing the context of the query may lead to a more accurate selection of an AI agent);
evaluating a first degree of correlation between the action intent and a first subject-matter expertise of a first automated assistant service and a second degree of correlation between the action intent and a second subject-matter expertise of a second automated assistant service (Paragraph [0123-0126]; [0129-0131]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. At least one AI agent is selected from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be "complete task," and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user);
in response to the first degree of correlation being greater than the second degree of correlation, causing the first automated assistant service to generate a prompt comprising: predefined query prompt text associated with the first subject-matter expertise; at least a portion of the natural language user input; and text extracted from the content displayed in the content panel (Paragraph [0123-0126]; [0129-0131]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. At least one AI agent is selected from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be "complete task," and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user);
providing the prompt to an external generative output engine and obtaining a generative response from the external generative output engine; and causing display of a result based on the generative response in the generative interface panel (Paragraph [0129-0131]; [0138-0139]; Fig. 4, upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be "complete task," and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. An AI agent may generate and/or provide automatic replies, or rephrasing of text, for a conversation, such as a chat box or emails. The operations further include outputting the response of the at least one selected AI agent. The output may be received by a system or device associated with a user or sender of the query).
Claim 2: Grinberg discloses the computer-implemented method as per claim 1. Grinberg further discloses wherein: the generative response is a first generative response; the prompt is a first prompt; the result is a first result; the predefined query prompt text is a first predefined query prompt text; in response to the action intent corresponding to a compound action: subsequent to obtaining the first generative response, causing the second automated assistant service to generate a second prompt comprising: second predefined query prompt text associated with the second subject-matter expertise; and text extracted from the first generative response; providing the second prompt to the external generative output engine and obtaining a second generative response from the external generative output engine; and causing display of a second result based on the second generative response in the generative interface panel (Paragraph [0123-0126]; [0129-0131]; [0135-0137]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. At least one AI agent is selected from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be "complete task," and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. Consistent with some embodiments, selecting at least one AI agent from the plurality of distinct AI agents includes selecting at least two AI agents from the plurality of AI agents, and merging the responses of the at least two selected AI agents to generate a merged response).
Claim 3: Grinberg discloses the computer-implemented method as per claim 2. Grinberg further discloses wherein: the first predefined query prompt text includes content extracted from a first corpus of knowledge base electronic documents directed to the first subject-matter expertise; and the second predefined query prompt text includes content extracted from a second corpus of knowledge base electronic documents directed to the second subject-matter expertise (Paragraph [0123-0126]; [0129-0131]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. At least one AI agent is selected from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be "complete task," and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user).
Claim 4: Grinberg discloses the computer-implemented method as per claim 1. Grinberg further discloses wherein: the user input is a first user input; the generative response is a first generative response; and the method further comprises storing a set of user inputs, including the first user input, provided to the generative interface panel and a corresponding set of generative responses, including the first generative response, provided by the external generative output engine in a persistence module of the generative service (Paragraph [0123-0126]; [0129-0131]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. At least one AI agent is selected from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be "complete task," and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user).
Claim 5: Grinberg discloses the computer-implemented method as per claim 4. Grinberg further discloses wherein: the prompt is a first prompt; in response to a second user input provided to the input region, causing the first automated assistant service to generate a second prompt comprising: the predefined query prompt text associated with the first subject-matter expertise; and at least a portion of the set of user inputs or the set of generative responses; providing the second prompt to the external generative output engine and obtaining a second generative response from the external generative output engine; and causing display of a second result based on the second generative response in the generative interface panel (Paragraph [0123-0126]; [0129-0131]; [0135-0137]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. At least one AI agent is selected from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be "complete task," and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. Consistent with some embodiments, selecting at least one AI agent from the plurality of distinct AI agents includes selecting at least two AI agents from the plurality of AI agents, and merging the responses of the at least two selected AI agents to generate a merged response).
Claim 6: Grinberg discloses the computer-implemented method as per claim 4. Grinberg further discloses wherein: the action intent is a first action intent; in response to a second user input provided to a second input region, analyzing the second user input to determine a second action intent; and evaluating a third degree of correlation between the second action intent and the first subject-matter expertise of the first automated assistant service, evaluation of the third degree of correlation including an analysis of one or more of the set of user inputs or the set of generative responses (Paragraph [0123-0126]; [0129-0131]; [0135-0137]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. At least one AI agent is selected from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be "complete task," and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. Consistent with some embodiments, selecting at least one AI agent from the plurality of distinct AI agents includes selecting at least two AI agents from the plurality of AI agents, and merging the responses of the at least two selected AI agents to generate a merged response).
Claim 7: Grinberg discloses the computer-implemented method as per claim 1. Grinberg further discloses wherein: evaluating the first degree of correlation comprises analyzing a first semantic similarity of the action intent with text representing the first subject-matter expertise of the first automated assistant service; and evaluating the second degree of correlation comprises analyzing a second semantic similarity of the action intent with text representing the second subject-matter expertise of the second automated assistant service (Paragraph [0123-0126]; [0129-0131]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. At least one AI agent is selected from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be "complete task," and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user).
Claim 8: Grinberg discloses the computer-implemented method as per claim 1. Grinberg further discloses wherein: the first automated assistant service includes a first set of plugins, each plugin configured to extract content from content items of a respective platform; and the second automated assistant service includes a second set of plugins different than the first set of plugins (Paragraph [0123-0126]; [0129-0131]; [0135-0137]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task,” and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. Consistent with some embodiments, selecting at least one AI agent from the plurality of distinct AI agents includes selecting at least two AI agents from the plurality of AI agents, and merging responses of the at least two selected AI agents to generate a merged response).
Claim 9: Grinberg discloses the computer-implemented method as per claim 8. Grinberg further discloses wherein: the first set of plugins includes an issue tracking plugin configured to extract content from issues managed by an issue tracking platform; and the second set of plugins includes a documentation plugin configured to extract content from pages managed by a documentation platform (Paragraph [0117-0121]; Fig. 4, in some embodiments disclosed herein, processes are described that explain ways to select an AI agent from a pool of AI agents. Embodiments may involve selection operations for a plurality of distinct AI agents. An AI agent may be distinct if it provides at least one function, purpose, solution, task, or use that distinguishes it from other agents. For example, one AI agent may be used to analyze text, while another may be used to analyze audio, based on a developer’s needs or requests. Some embodiments may involve accessing an application that employs AI functionality. Some examples involve sending a prompt to a plurality of distinct AI agents. A prompt is a message, question, or indication presented to elicit a response or action. It can be a textual message, a dialog box, or an input field where an entity is pinged).
Claim 10: Grinberg discloses a computer-implemented method for operating a cross-platform multi-participant interface for a content collaboration platform, the method comprising: causing display of a graphical user interface having a content panel depicting content of a content item managed by a content collaboration system, the graphical user interface displayed on a display of a client device; causing display of a generative interface panel within the graphical user interface, the generative interface panel configured to receive a natural language input at an input region; in response to a first user input provided to the input region of the generative interface panel, analyzing the first user input to determine a first action intent (Paragraph [0007-0008]; [0080-0083]; [0099]; [0120]; Fig. 5, embodiments consistent with the present disclosure involve systems, methods, and computer readable medium for building an application incorporating AI functionality. Exemplary operations may include receiving a selection of an AI assistant add-on and at least one of a plurality of SaaS platform elements, enabling implementation of permissions for providing access to data from the at least one of the plurality of linked SaaS platform elements. Some embodiments involve performing selection operations for a plurality of distinct artificial intelligence agents. Exemplary embodiments include sending, via the application, a prompt to a plurality of distinct AI agents, and receiving from each of the plurality of distinct AI agents a response to the prompt. The operations may further include comparing information associated with each of the received responses, and selecting at least one AI agent from the plurality of distinct AI agents. A software application may include a user interface enabling users to interact with and access features and functionalities of the software application. User interface is consistent with use of the term as described herein. For example, displaying a user interface may be done by showing a message on a screen, such as a pop-up box, or a list of selectable options);
in response to the first action intent corresponding to a first subject-matter expertise of a first automated assistant service, causing the first automated assistant service to generate a first prompt comprising: first predefined query prompt text associated with the first subject-matter expertise; and text extracted from the content displayed in the content panel; providing the first prompt to an external generative output engine and obtaining a first generative response from the external generative output engine (Paragraph [0117-0121]; Fig. 4, in some embodiments disclosed herein, processes are described that explain ways to select an AI agent from a pool of AI agents. Embodiments may involve selection operations for a plurality of distinct AI agents. An AI agent may be distinct if it provides at least one function, purpose, solution, task, or use that distinguishes it from other agents. For example, one AI agent may be used to analyze text, while another may be used to analyze audio, based on a developer’s needs or requests. Some embodiments may involve accessing an application that employs AI functionality. Some examples involve sending a prompt to a plurality of distinct AI agents. A prompt is a message, question, or indication presented to elicit a response or action. It can be a textual message, a dialog box, or an input field where an entity is pinged);
causing display of a first result based on the first generative response in the generative interface panel; in response to a second user input provided to the input region of the generative interface panel, analyzing the second user input to determine a second action intent (Paragraph [0117-0121]; Fig. 4, the operations further include analyzing a context associated with the prompt. A context refers to circumstances, conditions, environment, or background in which something exists. Analyzing a context for a prompt refers to examination or consideration of surrounding information and/or relevant factors to better understand the meaning and/or intent of the prompt. For example, a context may refer to any kind of information that associates a query with a specific topic or sub-element. Increasing the context of the query may lead to a more accurate selection of an AI agent);
in response to the second action intent corresponding to a second subject-matter expertise of a second automated assistant service, causing the second automated assistant service to generate a second prompt comprising second predefined query prompt text associated with the second subject-matter expertise; providing the second prompt to the external generative output engine and obtaining a second generative response from the external generative output engine; and causing display of a second result based on the second generative response in the generative interface panel (Paragraph [0123-0126]; [0129-0131]; [0135-0137]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task,” and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. Consistent with some embodiments, selecting at least one AI agent from the plurality of distinct AI agents includes selecting at least two AI agents from the plurality of AI agents, and merging responses of the at least two selected AI agents to generate a merged response).
Claim 11: Grinberg discloses the computer-implemented method as per claim 10. Grinberg further discloses wherein: the first automated assistant service includes a first set of plugins, each plugin configured to extract content from content items of a respective platform; a first plugin of the first set of plugins extracts the text from the content used for the first prompt; and the second automated assistant service includes a second set of plugins different than the first set of plugins (Paragraph [0123-0126]; [0129-0131]; [0135-0137]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task,” and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. Consistent with some embodiments, selecting at least one AI agent from the plurality of distinct AI agents includes selecting at least two AI agents from the plurality of AI agents, and merging responses of the at least two selected AI agents to generate a merged response).
Claim 12: Grinberg discloses the computer-implemented method as per claim 11. Grinberg further discloses wherein: a second plugin of the second set of plugins extracts source code from a source code management platform; and the second prompt includes the source code extracted by the second plugin (Paragraph [0123-0126]; [0129-0131]; [0135-0137]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task,” and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. Consistent with some embodiments, selecting at least one AI agent from the plurality of distinct AI agents includes selecting at least two AI agents from the plurality of AI agents, and merging responses of the at least two selected AI agents to generate a merged response).
Claim 13: Grinberg discloses the computer-implemented method as per claim 10. Grinberg further discloses wherein causing the first automated assistant service to generate the first prompt is based on a determination that a first correlation between the first action intent and the first subject-matter expertise satisfies a selection criteria (Paragraph [0123-0126]; [0129-0131]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task,” and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user).
Claim 14: Grinberg discloses the computer-implemented method as per claim 10. Grinberg further discloses wherein the second prompt further comprises at least a portion of the first generative response (Paragraph [0123-0126]; [0129-0131]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task,” and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user).
Claim 15: Grinberg discloses a computer-implemented method for operating a cross-platform multi-participant interface for a content collaboration platform, the method comprising: causing display of a graphical user interface having a content panel depicting content of a content item managed by a content collaboration system, the graphical user interface displayed on a display of a client device (Paragraph [0007-0008]; [0080-0083]; [0099]; [0120]; Fig. 5, embodiments consistent with the present disclosure involve systems, methods, and computer readable medium for building an application incorporating AI functionality. Exemplary operations may include receiving a selection of an AI assistant add-on and at least one of a plurality of SaaS platform elements, enabling implementation of permissions for providing access to data from the at least one of the plurality of linked SaaS platform elements. Some embodiments involve performing selection operations for a plurality of distinct artificial intelligence agents. Exemplary embodiments include sending, via the application, a prompt to a plurality of distinct AI agents, and receiving from each of the plurality of distinct AI agents a response to the prompt. The operations may further include comparing information associated with each of the received responses, and selecting at least one AI agent from the plurality of distinct AI agents. A software application may include a user interface enabling users to interact with and access features and functionalities of the software application. User interface is consistent with use of the term as described herein. For example, displaying a user interface may be done by showing a message on a screen, such as a pop-up box, or a list of selectable options);
causing display of a generative interface panel within the graphical user interface, the generative interface panel configured to receive a natural language input at an input region (Paragraph [0123-0126]; [0129-0131]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task,” and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user);
in response to a first user input provided to the input region of the generative interface panel, analyzing the first user input to determine an action intent; in response to the action intent corresponding to a first subject-matter expertise of a first automated assistant service, causing the first automated assistant service to generate a first prompt comprising: first predefined query prompt text associated with the first subject-matter expertise; and at least a portion of the natural language user input (Paragraph [0123-0126]; [0129-0131]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task,” and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user);
providing the first prompt to an external generative output engine and obtaining a first generative response from the external generative output engine; causing display of a first result based on the first generative response in the generative interface panel (Paragraph [0129-0131]; [0138-0139]; Fig. 4, upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task,” and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. An AI agent may generate and/or provide automatic replies, or rephrasing of text, for a conversation, such as a chat box or emails. The operations further include outputting the response of the at least one selected AI agent. The output may be received by a system or device associated with a user or sender of the query);
causing a second automated assistant service to generate a second prompt comprising: second predefined query prompt text associated with a second subject-matter expertise; and at least a portion of the first generative response; providing the second prompt to the external generative output engine and obtaining a second generative response from the external generative output engine; and causing display of a second result based on the second generative response in the generative interface panel (Paragraph [0123-0126]; [0129-0131]; [0135-0137]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task,” and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. Consistent with some embodiments, selecting at least one AI agent from the plurality of distinct AI agents includes selecting at least two AI agents from the plurality of AI agents, and merging responses of the at least two selected AI agents to generate a merged response).
Claim 16: Grinberg discloses the computer-implemented method as per claim 15. Grinberg further discloses wherein: the action intent indicates a compound request; a first portion of the compound request corresponds to the first subject-matter expertise; and a second portion of the compound request corresponds to the second subject-matter expertise (Paragraph [0123-0126]; [0129-0131]; [0135-0137]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task,” and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. Consistent with some embodiments, selecting at least one AI agent from the plurality of distinct AI agents includes selecting at least two AI agents from the plurality of AI agents, and merging responses of the at least two selected AI agents to generate a merged response).
Claim 17: Grinberg discloses the computer-implemented method as per claim 15. Grinberg further discloses wherein the first user input includes a first reference to the first automated assistant service and a second reference to the second automated assistant service (Paragraph [0123-0126]; [0129-0131]; [0135-0137]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses may include evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task,” and after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. Consistent with some embodiments, selecting at least one AI agent from the plurality of distinct AI agents includes selecting at least two AI agents from the plurality of AI agents, and merging responses of the at least two selected AI agents to generate a merged response).
Claim 18: Grinberg discloses the computer-implemented method as per claim 15. Grinberg further discloses wherein: the first automated assistant service includes a first set of plugins, each plugin configured to extract content from content items of a respective platform; and the second automated assistant service includes a second set of plugins different than the first set of plugins (Paragraphs [0123]-[0126]; [0129]-[0131]; [0135]-[0137]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses includes evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query.
For example, a prompt may be “complete task” and, after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. Consistent with some embodiments, selecting at least one AI agent from the plurality of distinct AI agents includes selecting at least two AI agents from the plurality of AI agents, and merging responses of the at least two selected AI agents to generate a merged response).
Claim 19: Grinberg discloses the computer-implemented method as per claim 18. Grinberg further discloses wherein: the content collaboration platform is a documentation platform; the content item is a page managed by the documentation platform; the first set of plugins includes a content extraction plugin configured to extract content from pages managed by the documentation platform; the first automated assistant service generates the first prompt by extracting content from the page using the content extraction plugin; and the first prompt includes the content extracted from the page using the content extraction plugin (Paragraphs [0123]-[0126]; [0129]-[0131]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses includes evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database.
Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison. Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task” and, after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user).
Claim 20: Grinberg discloses the computer-implemented method as per claim 18. Grinberg further discloses wherein: the second set of plugins includes an issue tracking plugin configured to extract content from issues of an issue tracking platform; the second automated assistant service generates the second prompt by extracting content from an issue using the issue tracking plugin; and the second prompt includes the content extracted from the issue using the issue tracking plugin (Paragraphs [0123]-[0126]; [0129]-[0131]; [0135]-[0137]; Fig. 4, receiving from each of a plurality of distinct AI agents a response to the prompt. A response refers to a piece of information that is sent as a reply to a request, query, or preliminary information. The queried AI agents send responses that may provide instructions, information, or data to be further analyzed in the application. Some embodiments may involve comparing information associated with each of the received responses. Comparing information refers to examining two or more sets of data, facts, or details to identify similarities, differences, patterns, and/or relationships. A comparison could be made by analyzing segments or portions of information associated with each of the received responses or by analyzing the entirety of the data. Each AI agent may send differing sets of information, which are then compared to determine the most suitable. The information may contain data in an answer to a user query. Analyzing information associated with each of the received responses includes evaluating quality of content of each of the responses, determining a response time, or a combination thereof. Following ranking of the plurality of AI agents according to the determined scores, the operations further include saving the determined scores in a database. Selecting at least one AI agent from the plurality of distinct AI agents based on the comparison.
Upon selection, the AI agent may be assigned to further process and address one or more tasks of the query. For example, a prompt may be “complete task” and, after the AI agents send responses to this query, one or more agents may generate and/or provide a response to the query or prompt to a device associated with the user. Consistent with some embodiments, selecting at least one AI agent from the plurality of distinct AI agents includes selecting at least two AI agents from the plurality of AI agents, and merging responses of the at least two selected AI agents to generate a merged response).
Therefore, claims 1-20 are rejected under 35 U.S.C. 102.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Ferrydiansyah (US 2018/0052824) Task identification and completion based on natural language query.
Evermann (US 2014/0365209) System and method for inferring user intent from speech inputs.
Peng (US 2025/0138852) Task processing.
Missig (US 2014/0218372) Intelligent digital assistant in a desktop environment.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to COREY RUSS whose telephone number is (571)270-5902. The examiner can normally be reached on M-F 7:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynda Jasmin, can be reached at 571-272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/COREY RUSS/Primary Examiner, Art Unit 3629