DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the application filed on 05/15/2025.
Claims 1-19 are pending.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in the Republic of India on 05/15/2024. It is noted, however, that applicant has not filed a certified copy of the IN202311077673 application as required by 37 CFR 1.55.
Claim Objections
Claims 1-19 are objected to because of the following informalities:
Regarding claim 1, the two instances of the limitation “at least one category-specific AI agent” in line 8 and line 13 should read “the at least one category-specific AI agent” to properly reference the limitation “at least one category-specific AI agent” recited in line 5; the two instances of the limitation “at least one support AI agent” in line 10 and line 13 should read “the at least one support AI agent” to properly reference the limitation “at least one support AI agent” recited in line 9; and the limitation “at least one recommendation” in line 18 should read “the at least one recommendation” to properly reference the limitation “at least one recommendation” recited in line 15.
Similarly, regarding claim 2, the five instances of the limitation “at least one recommendation” in lines 2, 3, 5, 6, and 7 should read “the at least one recommendation”.
Regarding claim 3, the limitation “at least one category-specific AI agent” in line 2 should read “the at least one category-specific AI agent”.
Regarding claim 4, the limitation “at least one category-specific AI agent” in line 1 should read “the at least one category-specific AI agent”.
Regarding claim 5, the limitation “at least one support AI agent” in line 1 should read “the at least one support AI agent”.
Regarding claim 8, the two instances of the term “matric” in line 3 and line 5 should read “metric” to correct the apparent typographical error, and the limitation “at least one recommendation” in line 4 should read “the at least one recommendation”.
Regarding claim 9, the limitation “at least one recommendation” in line 2 should read “the at least one recommendation”, and the term “matric” in line 5 should read “metric” to correct the apparent typographical error.
Regarding claim 10, the two instances of the limitation “at least one category-specific AI agent” in line 10 and line 16 should read “the at least one category-specific AI agent” to properly reference the limitation “at least one category-specific AI agent” recited in line 8; the two instances of the limitation “at least one support AI agent” in line 13 and line 16 should read “the at least one support AI agent” to properly reference the limitation “at least one support AI agent” recited in line 12; and the limitation “at least one recommendation” in line 19 should read “the at least one recommendation” to properly reference the limitation “at least one recommendation” recited in line 17.
Similarly, regarding claim 11, the five instances of the limitation “at least one recommendation” in lines 3, 4, 6, 7, and 8 should read “the at least one recommendation”.
Regarding claim 12, the limitation “at least one category-specific AI agent” in line 2 should read “the at least one category-specific AI agent”.
Regarding claim 13, the limitation “at least one category-specific AI agent” in line 1 should read “the at least one category-specific AI agent”.
Regarding claim 14, the limitation “at least one support AI agent” in line 1 should read “the at least one support AI agent”.
Regarding claim 17, the two instances of the term “matric” in line 4 and line 6 should read “metric” to correct the apparent typographical error, and the limitation “at least one recommendation” in line 5 should read “the at least one recommendation”.
Regarding claim 18, the limitation “at least one recommendation” in line 3 should read “the at least one recommendation”, and the term “matric” in line 4 should read “metric” to correct the apparent typographical error.
Regarding claim 19, the two instances of the limitation “at least one category-specific AI agent” in line 10 and line 15 should read “the at least one category-specific AI agent” to properly reference the limitation “at least one category-specific AI agent” recited in line 7; the two instances of the limitation “at least one support AI agent” in line 12 and line 15 should read “the at least one support AI agent” to properly reference the limitation “at least one support AI agent” recited in line 11; and the limitation “at least one recommendation” in line 20 should read “the at least one recommendation” to properly reference the limitation “at least one recommendation” recited in line 17.
The remaining dependent claims are objected to for incorporating the informalities of objected independent claims 1 and 10 upon which they respectively depend.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding independent claims 1, 10, and 19, the limitations “determining/determine a confidence score and a reliability score based on one or more parameters of at least one category-specific AI agent and at least one support AI agent” and “generating/generate at least one recommendation from the relevant information and the auxiliary information based on the confidence score and the reliability score” render the claims indefinite. It is unclear whether the recited “confidence score” and “reliability score” are associated with the at least one category-specific AI agent, with the at least one support AI agent, or with both, and it is unclear how the recited “confidence score” and “reliability score” are used in generating the at least one recommendation as recited. Therefore, the metes and bounds of the claimed invention are unclear.
Dependent claims 2-9 and 11-18 are rejected as incorporating, and failing to resolve, the deficiencies of the rejected independent claims 1 and 10 upon which they respectively depend.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-19 (effective filing date 05/15/2025, or 05/15/2024 if perfected) are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (U.S. Publication No. 2019/0325081, publication date 10/24/2019) in view of Naanaa et al. (U.S. Patent No. 12,238,213, effective filing date 09/12/2023).
As to claim 1, Liu et al. teaches:
“A method for augmenting recommendations through resource sharing between Artificial Intelligent (AI) agents” (see Liu et al., Abstract, and [0041] for processing/responding to a request using one or more agents selected from a plurality of agents (e.g., first-party agents, third-party agents, or proactive agents)) comprising:
“receiving, by a primary AI agent of a plurality of AI agents, a request from a user for performing a task by the primary AI agent” (see Liu et al., Fig. 4 and [0062] for receiving a user request by the assistant system, wherein the assistant system as disclosed can be interpreted as equivalent to a primary AI agent as broadly recited; also see [0061] for an example of a request to perform a task (e.g., ordering a pizza));
“identifying, by the primary AI agent, at least one category-specific AI agent from the plurality of AI agents based on the request” (see Liu et al., [0062] for determining one or more agents from a plurality of agents for executing one or more tasks associated with the one or more dialog-intents (i.e., categories) associated with the request, wherein each of the one or more agents identified as disclosed can be interpreted as a category-specific AI agent as recited);
“extracting, by the primary AI agent, relevant information related to the request from each of at least one category-specific AI agent” (see Liu et al., [0062] for receiving the information returned from the one or more agents);
“triggering, by the primary AI agent, at least one support AI agent from the plurality of AI agents based on the request, wherein at least one support AI agent provides auxiliary information related to the request” (see Liu et al., [0062] for triggering/communicating to one or more agents, wherein each of the one or more agents can be interpreted as either a category-specific AI agent or a support AI agent as recited, and the information returned from each agent can be interpreted as either the relevant information or the auxiliary information as broadly recited; also see [0062] for the dialog engine accessing the user context engine to retrieve the context information, wherein the dialog engine or the user context engine can be interpreted as equivalent to a support AI agent as broadly recited, and the context information can be interpreted as auxiliary information as recited);
“determining, by the primary AI agent, a confidence score and a reliability score based on one or more parameters of at least one category-specific AI agent and at least one support AI agent” (see Liu et al., [0068] for receiving one or more evaluation results (i.e., one or more parameters) indicating the capability of a respective agent to complete one or more tasks, and determining a confidence score from the one or more evaluation results; also see [0066] for capability value (i.e., confidence score));
“generating, by the primary AI agent, at least one recommendation from the relevant information and the auxiliary information based on the confidence score and the reliability score” (see Liu et al., [0062] and [0067] for generating a communication content comprising the information/result returned from the one or more agents and based on the confidence score); and
“providing, by the primary AI agent, at least one recommendation to the user in response to the request” (see Liu et al., [0062] for sending the communication content to the client system associated with the first user; also see [0047] wherein the communication content can include a recommendation).
Thus, Liu et al. teaches determining a confidence score or capability value for each agent (see Liu et al., [0066] and [0068]).
However, Liu et al. does not explicitly teach determining different scores/metrics associated with the agents based on one or more parameters of the agents, as recited in the following limitation:
“determining, by the primary AI agent, a confidence score and a reliability score based on one or more parameters of at least one category-specific AI agent and at least one support AI agent”.
On the other hand, Naanaa et al. explicitly teaches a feature of determining different scores/metrics associated with agents based on one or more parameters of the agents (see Naanaa et al., [column 11, lines 5-18] and [column 19, lines 30-46] for ranking/scoring the plurality of candidate worker agents based on evaluating one or more factors/parameters; also see [column 20, lines 53-61] for reliability score metric for each worker agent; also see [column 22, lines 12-16] for relevance score (i.e., confidence score) for each worker agent).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Naanaa et al.'s teaching into Liu et al.'s system by implementing a feature for determining different metrics/scores associated with the agents for ranking and selecting the agents. An ordinarily skilled artisan would have been motivated to do so to provide Liu et al.'s system with an effective way to evaluate and rank the agents based on multiple factors and scores. In addition, both references (Liu et al. and Naanaa et al.) are directed to analogous art in the same field of endeavor, such as providing a system for processing/responding to a request using a plurality of agents. This close relation between the references strongly suggests a reasonable expectation of success when combined.
As to claim 2, this claim is rejected based on the same arguments as above to reject claim 1 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“receiving, by the primary Al agent, a reply to at least one recommendation from the user, wherein the reply comprises an approval or a rejection on at least one recommendation” (see Liu et al., [0065] the assistant may then return the results (i.e., recommendation) to the user, for which the user may evaluate and select the result and the associated dialog-intent that are correct to the user);
“performing, by the primary Al agent, an action associated with the request when the reply comprises the approval on at least one recommendation” (see Liu et al., [0065] the assistant system may annotate the selected dialog-intent of [IN:call_weather-agent(location)] and the result, in combination with the associated agent as positive training samples for the original submitted user request; also see [0067]); and
“updating, by the primary Al agent, at least one recommendation when the reply comprises the rejection on at least one recommendation” (see Liu et al., [0065] for updating the ranker models with positive and negative training samples).
As to claim 3, this claim is rejected based on the same arguments as above to reject claim 1 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“wherein the relevant information related to the request is extracted by transmitting a query to each of at least one category-specific AI agent” (see Liu et al., [0065] for calling a news agent and a weather agent to get results/information regarding a location (i.e., a query); also see [0028] and [0037] for retrieving information from different sources using different agents in response to user input/request).
As to claim 4, this claim is rejected based on the same arguments as above to reject claim 1 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“wherein at least one category-specific AI agent comprises at least one of a family AI agent, a work AI agent, a friends AI agent, a budget AI agent, and a sport AI agent” (see Liu et al., [0041], [0043]-[0044] for a plurality of first-party agents and third-party agents, wherein a calendar agent can be interpreted as a work AI agent).
As to claim 5, this claim is rejected based on the same arguments as above to reject claim 1 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“wherein at least one support AI agent is communicatively connected with at least one of a location AI agent and a device AI agent” (see Liu et al., [0043] wherein a calendar agent to retrieve the location of the next meeting as disclosed can be interpreted as a location AI agent as recited; also see [0047] for proactive agents as support AI agents).
As to claim 6, this claim is rejected based on the same arguments as above to reject claim 1 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“wherein the confidence score is determined based on at least one of an interaction frequency, an interaction recency, a semantic match, and a feedback history of a corresponding AI agent of the plurality of AI agents” (see Liu et al., [0066] and [0068] for determining capability values or confidence scores associated with a plurality of agents; also see Naanaa et al., [column 11, lines 5-18] and [column 19, lines 30-46] for ranking/scoring the plurality of candidate worker agents based on evaluating one or more factors/parameters (e.g., feedback or ratings from previous interactions (i.e., feedback history)); also see [column 22, lines 12-16] for determining relevance score (i.e., confidence score) for each worker agent based on a relevance of a plurality of responses provided by the worker agent to a plurality of user requests (i.e., feedback history)).
As to claim 7, this claim is rejected based on the same arguments as above to reject claim 1 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“wherein the reliability score is determined based on at least one of a user acceptance rate, a historical correctness, and an adaptation over multiple interactions” (see Liu et al., [0066] and [0068] for determining capability values or confidence scores associated with a plurality of agents; also see Naanaa et al., [column 11, lines 5-18] and [column 19, lines 30-46] for ranking/scoring the plurality of candidate worker agents based on evaluating one or more factors/parameters (e.g., historical performance data (i.e., historical correctness)); also see [column 20, lines 53-61] for determining reliability score for each worker agent).
As to claim 8, this claim is rejected based on the same arguments as above to reject claim 1 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“combining, by the primary AI agent, the confidence score and the reliability score based on corresponding weights to generate a single unified trust matric” (see Liu et al., [0068] for ranking/scoring a respective agent based on one or more evaluation results; also see Naanaa et al., [column 11, lines 5-18] and [column 19, lines 30-46] for ranking/scoring the plurality of candidate worker agents based on evaluating one or more factors/parameters/scores); and
“generating, by the primary AI agent, at least one recommendation from the relevant information and the auxiliary information based on the single unified trust matric” (see Liu et al., [0068] for generating/selecting results based on the ranking or scores associated with a plurality of agents).
As to claim 9, this claim is rejected based on the same arguments as above to reject claim 8 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“receiving, by the primary AI agent, feedback on at least one recommendation from the user” (see Liu et al., [0065] the assistant may then return the results (i.e., recommendation) to the user, for which the user may evaluate and select the result and the associated dialog-intent that are correct to the user; also see Naanaa et al., [column 11, lines 9-18] for receiving feedback or ratings); and
“updating, by the primary AI agent, the confidence score, the reliability score, and the single unified trust matric based on the feedback” (see Liu et al., [0065] for updating the ranker model with feedback; also see Naanaa et al., [column 11, lines 9-18] for ranking/scoring based on feedback or ratings).
As to claim 10, Liu et al. teaches:
“A system for augmenting recommendations through resource sharing between Artificial Intelligent (AI) agents” (see Liu et al., Abstract, [0041] and [0051], for an assistant system for processing/responding to a request using one or more agents selected from a plurality of agents (e.g., first-party agents, third-party agents, or proactive agents)), comprising:
“one or more processors associated with a primary AI agent of a plurality of AI agents” (see Liu et al., Fig. 9 and [0105] for processor 902); and
“a memory storing programmed instructions executable by the one or more processors, wherein the one or more processors execute the programmed instructions to” (see Liu et al., Fig. 9 and [0105]-[0106] for memory 904):
“receive a request from a user for performing a task by the primary AI agent” (see Liu et al., Fig. 4 and [0062] for receiving a user request by the assistant system, wherein the assistant system as disclosed can be interpreted as equivalent to a primary AI agent as broadly recited; also see [0061] for an example of a request to perform a task (e.g., ordering a pizza));
“identify at least one category-specific AI agent from the plurality of AI agents based on the request” (see Liu et al., [0062] for determining one or more agents from a plurality of agents for executing one or more tasks associated with the one or more dialog-intents (i.e., categories) associated with the request, wherein each of the one or more agents identified as disclosed can be interpreted as a category-specific AI agent as recited);
“extract relevant information related to the request from each of at least one category-specific AI agent” (see Liu et al., [0062] for receiving the information returned from the one or more agents);
“trigger at least one support AI agent from the plurality of AI agents based on the request, wherein at least one support AI agent provides auxiliary information related to the request” (see Liu et al., [0062] for triggering/communicating to one or more agents, wherein each of the one or more agents can be interpreted as either a category-specific AI agent or a support AI agent as recited, and the information returned from each agent can be interpreted as either the relevant information or the auxiliary information as broadly recited; also see [0062] for the dialog engine accessing the user context engine to retrieve the context information wherein the dialog engine or the user context engine can be interpreted as equivalent to a support AI agent as broadly recited, and the context information can be interpreted as auxiliary information as recited);
“determine a confidence score and a reliability score based on one or more parameters of at least one category-specific AI agent and at least one support AI agent” (see Liu et al., [0068] for receiving one or more evaluation results (i.e., one or more parameters) indicating the capability of a respective agent to complete one or more tasks, and determining a confidence score from the one or more evaluation results; also see [0066] for capability value (i.e., confidence score));
“generate at least one recommendation from the relevant information and the auxiliary information based on the confidence score and the reliability score” (see Liu et al., [0062] and [0067] for generating a communication content comprising the information/result returned from the one or more agents and based on confidence score); and
“provide at least one recommendation to the user in response to the request” (see Liu et al., [0062] for sending the communication content to the client system associated with the first user; also see [0047] wherein the communication content can include a recommendation).
Thus, Liu et al. teaches determining a confidence score or capability value for each agent (see Liu et al., [0066] and [0068]).
However, Liu et al. does not explicitly teach determining different scores/metrics associated with the agents based on one or more parameters of the agents, as recited in the following limitation:
“determine a confidence score and a reliability score based on one or more parameters of at least one category-specific AI agent and at least one support AI agent”.
On the other hand, Naanaa et al. explicitly teaches a feature of determining different scores/metrics associated with agents based on one or more parameters of the agents (see Naanaa et al., [column 11, lines 5-18] and [column 19, lines 30-46] for ranking/scoring the plurality of candidate worker agents based on evaluating one or more factors/parameters; also see [column 20, lines 53-61] for reliability score metric for each worker agent; also see [column 22, lines 12-16] for relevance score (i.e., confidence score) for each worker agent).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Naanaa et al.'s teaching into Liu et al.'s system by implementing a feature for determining different metrics/scores associated with the agents for ranking and selecting the agents. An ordinarily skilled artisan would have been motivated to do so to provide Liu et al.'s system with an effective way to evaluate and rank the agents based on multiple factors and scores. In addition, both references (Liu et al. and Naanaa et al.) are directed to analogous art in the same field of endeavor, such as providing a system for processing/responding to a request using a plurality of agents. This close relation between the references strongly suggests a reasonable expectation of success when combined.
As to claim 11, this claim is rejected based on the same arguments as above to reject claim 10 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“receive a reply to at least one recommendation from the user, wherein the reply comprises an approval or a rejection on at least one recommendation” (see Liu et al., [0065] the assistant may then return the results (i.e., recommendation) to the user, for which the user may evaluate and select the result and the associated dialog-intent that are correct to the user);
“perform an action associated with the request when the reply comprises the approval on at least one recommendation” (see Liu et al., [0065] the assistant system may annotate the selected dialog-intent of [IN:call_weather-agent(location)] and the result, in combination with the associated agent as positive training samples for the original submitted user request; also see [0067]); and
“update at least one recommendation when the reply comprises the rejection on at least one recommendation” (see Liu et al., [0065] for updating the ranker models with positive and negative training samples).
As to claim 12, this claim is rejected based on the same arguments as above to reject claim 10 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“wherein the relevant information related to the request is extracted by transmitting a query to each of at least one category-specific AI agent” (see Liu et al., [0065] for calling a news agent and a weather agent to get results/information regarding a location (i.e., a query); also see [0028] and [0037] for retrieving information from different sources using different agents in response to user input/request).
As to claim 13, this claim is rejected based on the same arguments as above to reject claim 10 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“wherein at least one category-specific AI agent comprises at least one of a family AI agent, a work AI agent, a friends AI agent, a budget AI agent, and a sport AI agent” (see Liu et al., [0041], [0043]-[0044] for a plurality of first-party agents and third-party agents, wherein a calendar agent can be interpreted as a work AI agent).
As to claim 14, this claim is rejected based on the same arguments as above to reject claim 10 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“wherein at least one support AI agent is communicatively connected with at least one of a location AI agent and a device AI agent” (see Liu et al., [0043] wherein a calendar agent to retrieve the location of the next meeting as disclosed can be interpreted as a location AI agent as recited; also see [0047] for proactive agents as support AI agents).
As to claim 15, this claim is rejected based on the same arguments as above to reject claim 10 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“wherein the confidence score is determined based on at least one of an interaction frequency, an interaction recency, a semantic match, and a feedback history of a corresponding AI agent of the plurality of AI agents” (see Liu et al., [0066] and [0068] for determining capability values or confidence scores associated with a plurality of agents; also see Naanaa et al., [column 11, lines 5-18] and [column 19, lines 30-46] for ranking/scoring the plurality of candidate worker agents based on evaluating one or more factors/parameters (e.g., feedback or ratings from previous interactions (i.e., feedback history)); also see [column 22, lines 12-16] for relevance score (i.e., confidence score) for each worker agent based on a relevance of a plurality of responses provided by the worker agent to a plurality of user requests (i.e., feedback history)).
As to claim 16, this claim is rejected based on the same arguments as above to reject claim 10 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“wherein the reliability score is determined based on at least one of a user acceptance rate, a historical correctness, and an adaptation over multiple interactions” (see Liu et al., [0066] and [0068] for determining capability values or confidence scores associated with a plurality of agents; also see Naanaa et al., [column 11, lines 5-18] and [column 19, lines 30-46] for ranking/scoring the plurality of candidate worker agents based on evaluating one or more factors/parameters (e.g., historical performance data (i.e., historical correctness)); also see [column 20, lines 53-61] for determining reliability score for each worker agent).
As to claim 17, this claim is rejected based on the same arguments as above to reject claim 10 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“combine the confidence score and the reliability score based on corresponding weights to generate a single unified trust matric” (see Liu et al., [0068] for ranking/scoring a respective agent based on one or more evaluation results; also see Naanaa et al., [column 11, lines 5-18] and [column 19, lines 30-46] for ranking/scoring the plurality of candidate worker agents based on evaluating one or more factors/parameters/scores); and
“generate at least one recommendation from the relevant information and the auxiliary information based on the single unified trust matric” (see Liu et al., [0068] for generating/selecting results based on the ranking or scores associated with a plurality of agents).
As to claim 18, this claim is rejected based on the same arguments as above to reject claim 17 and is similarly rejected including the following:
Liu et al. as modified by Naanaa et al. teaches:
“receive feedback on at least one recommendation from the user” (see Liu et al., [0065] the assistant may then return the results (i.e., recommendation) to the user, for which the user may evaluate and select the result and the associated dialog-intent that are correct to the user; also see Naanaa et al., [column 11, lines 9-18] for receiving feedback or ratings); and
“update the confidence score, the reliability score, and the single unified trust matric based on the feedback” (see Liu et al., [0065] for updating the ranker model with feedback; also see Naanaa et al., [column 11, lines 9-18] for ranking/scoring based on feedback or ratings).
As to claim 19, Liu et al. teaches:
“A non-transitory machine-readable medium including data, which when used by a system for augmenting recommendations through resource sharing between Artificial Intelligent (AI) agents, causes the system to perform instructions that cause the system to perform operations comprising” (see Liu et al., Abstract, and [0041] for processing/responding to a request using one or more agents selected from a plurality of agents (e.g., first-party agents, third-party agents, or proactive agents)):
“receiving, by a primary AI agent of a plurality of AI agents, a request from a user for performing a task by the primary AI agent” (see Liu et al., Fig. 4 and [0062] for receiving a user request by the assistant system, wherein the assistant system as disclosed can be interpreted as equivalent to a primary AI agent as broadly recited; also see [0061] for example of a request to perform a task (e.g., ordering a pizza));
“identifying, by the primary AI agent, at least one category-specific AI agent from the plurality of AI agents based on the request” (see Liu et al., [0062] for determining one or more agents from a plurality of agents for executing one or more tasks associated with the one or more dialog-intents (i.e., categories) associated with the request, wherein each of the one or more agents identified as disclosed can be interpreted as a category-specific AI agent as recited);
“extracting, by the primary AI agent, relevant information related to the request from each of at least one category-specific AI agent” (see Liu et al., [0062] for receiving the information returned from the one or more agents);
“triggering, by the primary AI agent, at least one support AI agent from the plurality of AI agents based on the request, wherein at least one support AI agent provides auxiliary information related to the request” (see Liu et al., [0062] for triggering/communicating to one or more agents, wherein each of the one or more agents can be interpreted as either a category-specific AI agent or a support AI agent as recited, and the information returned from each agent can be interpreted as either the relevant information or the auxiliary information as broadly recited; also see [0062] for the dialog engine accessing the user context engine to retrieve the context information, wherein the dialog engine or the user context engine can be interpreted as equivalent to a support AI agent as broadly recited, and the context information can be interpreted as auxiliary information as recited);
“determining, by the primary AI agent, a confidence score and a reliability score based on one or more parameters of at least one category-specific AI agent and at least one support AI agent” (see Liu et al., [0068] for receiving one or more evaluation results (i.e., one or more parameters) indicating the capability of a respective agent to complete one or more tasks, and determining a confidence score from the one or more evaluation results; also see [0066] for capability value (i.e., confidence score));
“generating, by the primary AI agent, at least one recommendation from the relevant information and the auxiliary information based on the confidence score and the reliability score” (see Liu et al., [0062] and [0067] for generating a communication content comprising the information/result returned from the one or more agents and based on confidence score); and
“providing, by the primary AI agent, at least one recommendation to the user in response to the request” (see Liu et al., [0062] for sending the communication content to the client system associated with the first user; also see [0047] wherein communication content can include a recommendation).
Thus, Liu et al. teaches determining a confidence score or capability value for each agent (see Liu et al., [0066] and [0068]).
However, Liu et al. does not explicitly teach a feature of determining different scores/metrics associated with agents based on one or more parameters of the agents, as recited in the following limitation:
“determining, by the primary AI agent, a confidence score and a reliability score based on one or more parameters of at least one category-specific AI agent and at least one support AI agent”.
On the other hand, Naanaa et al. explicitly teaches a feature of determining different scores/metrics associated with agents based on one or more parameters of the agents (see Naanaa et al., [column 11, lines 5-18] and [column 19, lines 30-46] for ranking/scoring the plurality of candidate worker agents based on evaluating one or more factors/parameters; also see [column 20, lines 53-61] for reliability score metric for each worker agent; also see [column 22, lines 12-16] for relevance score (i.e., confidence score) for each worker agent).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Naanaa et al.'s teaching into Liu et al.'s system by implementing a feature for determining different metrics/scores associated with agents for ranking and selecting the agents. An ordinarily skilled artisan would have been motivated to do so to provide Liu et al.'s system with an effective way to evaluate and rank the agents based on multiple factors and scores. In addition, both references (Liu et al. and Naanaa et al.) are analogous art directed to the same field of endeavor, namely providing a system for processing/responding to a request using a plurality of agents. This close relation between the references strongly suggests a reasonable expectation of success when combined.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHUONG THAO CAO whose telephone number is (571)272-2735. The examiner can normally be reached Monday - Friday: 9:00 am - 6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Ng can be reached at 571-270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Phuong Thao Cao/Primary Examiner, Art Unit 2164