DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 8/21/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 5-9, 11-17, and 19-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by YANG et al. (US 2025/0284561).
Regarding claim 5, Yang et al. teach a computer-implemented method (and a first computer system comprising a processor and memory (see Figure 1, user terminal)) comprising:
receiving, by a first computer system corresponding to a first language model (LM) agent, first input data (see [0066], where “user request” (first input data) is received through a “first co-pilot channel 130a” and where the system 100 transfers the request to the “first agent execution device” (the first agent and its associated first computer system), and see [0061], where the agent is part of the “LLM service system” (first language model (LM)));
generating, using the first input data and a first LM corresponding to the first LM agent, first LM output data representing a natural language request to delegate a first task to a second LM agent different from the first LM agent and an indication that the natural language request is from the first LM agent (see [0067], where the “first agent” (first LM agent) may determine whether it can process a task, and determines that the task cannot be performed by the first agent; see [0070]-[0075], where the first agent is able to perform some steps but not all, identifies “an agent capable of performing the second step” (a second agent to which tasks are delegated), and sends a “task processing request signal” (first LM output data) requesting processing of the second step to the second agent; and see [0078], where communication between the agents is described as “p2p communication”, which according to Fig. 8 panel “151” shows “transceiver information” that includes “Sender Agent” (an indication of the sender of the request));
sending the first LM output data to a second computer system corresponding to the second LM agent (see [0056], where 110a and 110b are different devices which store the different agents, and see [0075], where the “task processing request signal” (the first LM output data) is “transmit[ted]” (sent) to “the second agent” (the second LM agent));
receiving, from the second computer system in response to the first LM output data, first data (see [0075] lines 7+, where “second agent” “perform[s] the second step in accordance with the task processing request signal and transmits the result of performing the second step to the first agent” (first LM agent receives from the second computer system “the result of performing the second step” (first data)));
generating second LM output data using the first data and the first LM, the second LM output data representing a response to the first input data (see [0075] last 5 lines, where the “first agent generates final processing result data” (generating second LM output data) “by using the result received from the second agent”, and see further details in [0076]); and
sending the second LM output data to a first system component (see [0075] last 3 lines: “transmit” (sending) “contents of the final processing result data” (the second LM output data) “to the user terminal 10” (to a first system component) “through first co-pilot channel 130a”).
Regarding claim 6, Yang et al. teach the computer-implemented method of claim 5, further comprising:
receiving second data representing natural language instructions for how the first LM agent is to handle a task (¶ 0070 lines 6-8: “The first agent may identify an agent capable of performing the second step by using detailed information” (receiving second data) “on each of a plurality of agents stored in agent information storage device 120” (on how agents including the first may handle the task)),
the second data indicating:
a first instruction for the first LM agent to determine whether the second LM
agent is more capable of handling the task (¶ 0075 lines 1-2: “the first agent” (the first LM agent receiving) “may identify” (a first instruction) “that an agent capable of performing the second step is the second agent” (of a second agent capable of handling the task)), and
a second instruction to, in response to determining that the second LM agent is
more capable of handling the task, delegate the task to the second LM agent (¶ 0075 lines 4-5: “In this case, the first agent may transmit” (delegating) “a task” (the task) “processing request signal requesting processing” (via a second instruction) “of the second step to the second agent” (to the second LM agent)); and
determining a first LM prompt using the first input data and the second data, wherein generating the first LM output data includes processing the first LM prompt using the first LM (¶ 0075 lines 6+: “In addition, the second agent may perform the second step in accordance with the task processing request signal and transmit the result of performing the second step to the first agent” (determining a first prompt based on the “task processing request signal” (first LM output data), which in turn was obtained based on the first input data and the second data)).
Regarding claim 7, Yang et al. teach the computer-implemented method of claim 5, further comprising:
receiving second data representing an identifier corresponding to the second LM agent and a natural language description of capabilities corresponding to the second LM agent (¶ 0070 lines 6-8: “The first agent may identify an agent capable of performing the second step by using detailed information” (receiving second data) “on each of a plurality of agents stored in agent information” (which comprises identifiers of agents) “storage device 120”, plus other information such as, according to ¶ 0071 lines 10-11, “capability information” (capabilities corresponding to, e.g., the second LM agent) “that is a text” (in natural language) “for defining the capability of the corresponding agent” (for, e.g., the second agent)); and
determining a first LM prompt using the first input data and the second data, wherein generating the first LM output data includes processing the first LM prompt using the first LM (¶ 0075 lines 1-3: “the first agent may identify an agent capable of performing the second step is the second agent executed in the second agent execution device”; “in this case, the first agent may transmit a task processing request signal” (generating the first LM output data) “requesting processing” (by a first LM prompt) “of the second step to the second agent” (to the second agent, wherein said prompt is based on the original input (i.e., first data) and information pertaining to the second agent qualifying for it (second data))).
Regarding claim 8, Yang et al. teach the computer-implemented method of claim 5, further comprising:
receiving second data representing an identifier corresponding to a software component and a natural language description of capabilities corresponding to the software component (¶ 0070 lines 6-8: “The first agent may identify an agent capable of performing the second step by using detailed information” (receiving second data) “on each of a plurality of agents stored in agent information” “storage device 120”, plus other information such as, according to ¶ 0071 lines 10-11, “capability information” (capabilities corresponding) “that is a text” (in natural language) “for defining the capability of the corresponding agent” “and one or more skill sets” (to a software component, which also identifies the “skill” (software component)) “by the corresponding agent”); and
determining a first LM prompt using the first input data and the second data, wherein generating the first LM output data includes processing the first LM prompt using the first LM (¶ 0075 lines 1-3: “the first agent may identify an agent capable of performing the second step is the second agent executed in the second agent execution device”; “in this case, the first agent may transmit a task processing request signal” (generating the first LM output data) “requesting processing” (by a first LM prompt) “of the second step to the second agent” (to the second agent, wherein said prompt is based on the original input (i.e., first data) and information pertaining to the second agent qualifying for it (second data))).
Regarding claim 9, Yang et al. teach the computer-implemented method of claim 5, further comprising:
sending, to the second LM agent, second data representing natural language instructions for how the second agent is to handle a task, the second data indicating:
a first instruction for the second LM agent to determine whether it is capable of handling a task indicated in a message from another LM agent (¶ 0075 lines 4-7: “the first agent” (another LM agent) “may transmit” (sending) “a task processing request signal” (a first instruction) “requesting processing of the second step” (indicating whether it is capable of handling a task) “to the second agent” (to the second agent)),
a second instruction to, in response to determining that the second LM agent is capable of handling the task, generate a first response to an other LM agent by processing the message using a second LM corresponding to the second LM agent (¶ 0075 lines 6+: “In addition, the second agent may perform the second step” (in response to determining that the second LM agent is capable of handling the task) “and transmit” (and generate a first response) “the result of performing the second step to the first agent” (to the other LM agent)),
and a third instruction to, in response to determining that the second LM agent is not capable of handling the task, generate a second response to the other LM agent indicating that the second LM agent is unable to handle the task (¶ 0076 lines 9-11: “and the result of performing the third step” (a third instruction) “may be received from the third agent” (indicating that a task is not done by the “second agent” (second LM agent) as it was done by a “third agent”) “as second external processing result data” (as a second response to the other LM agent)).
Regarding claim 11, Yang et al. teach the computer-implemented method of claim 5, further comprising:
receiving second input data (¶ 0029 lines 10+: “processing a second user request” (receiving second input data) “introduced through a second channel”);
generating, using the first LM, third LM output data representing a second task to
delegate to the second LM agent upon detection of an event (¶ 0029 lines 13+: “the first agent”(using the first LM) “determines whether it can process a task for generating a processing result for the user request by itself, and transmits a first task processing request signal” (detecting an event associated with) “for requesting” (associated with a third LM output data) “processing of at least a portion of steps constituting” (for delegating) “the task” (the second task) “to the second agent” (to the second LM agent));
determining, using the third LM output data, second data corresponding to the second task; detecting an occurrence of the event (¶ 0029 lines 13+: “the first agent determines whether it can process a task for generating a processing result for the user request by itself, and transmits a first task processing request signal” (upon detecting the event) “for requesting” (associated with the third LM output data) “processing of at least a portion of steps” (second data is determined) “constituting” (for delegating) “the task” (the second task) “to the second agent”);
and
in response to detecting the occurrence, sending the second data to the second LM agent (¶ 0029 last 4 lines: the “first task processing signal” (in response to the occurrence of the event) “for requesting processing of at least a portion of steps” (the second data) “constituting the task to the second agent” (is sent to the second LM agent)),
the second LM agent performing the second task in response to receiving the second data (¶ 0029 last 4 lines: the “first task processing signal” requests “processing of at least a portion of steps” (the second data) “constituting the task” of “the second agent” “when the first agent determines that it cannot process the user request by itself” (the second task therefore being performed by the second agent)).
Regarding claim 12, Yang et al. teach the computer-implemented method of claim 5, further comprising: receiving second data representing:
a first message format corresponding to messages from users, the first message format including a first portion indicating that a message is from a user and a second portion representing a natural language user input (Fig. 8 step “154d” line 1 (a first message format): “<USER> What is the date today” (second data following “152” (a first user input)), which comprises “<USER>” (a first portion indicating the identity of the sending user) and “What is the date today” (a second portion representing a natural language user input));
a second message format corresponding to messages from other LM agents, the
second message format including a third portion identifying [[the]] an other LM agent
and a fourth portion representing natural language generated by the other LM agent (Fig. 8 step “154d” line 2 (a second message format): “<Agent> Today is February 28, 2024”, which is a message from an agent comprising “<Agent>” (a third portion which identifies an agent) and “Today is February 28, 2024” (a fourth portion representing natural language generated by the other agent)),
and
a third message format corresponding to delegation requests to send to other LM
agents, the third message format including a fifth portion identifying a delegate LM agent, and a sixth portion representing a natural language message to the delegate LM agent (Fig. 9 step “154-1” lines 1-2: “You” (identifying a delegate LM agent (a fifth portion)) “have been requested” (a delegation request) “a task by another agent” “and generated a plan to perform the task” (and a sixth portion representing a natural language message to the delegate agent)).
Regarding claim 13, Yang et al. teach a first computer system, comprising: at least one processor; and at least one memory comprising instructions that, when executed by the at least one processor ([0138] “FIG. 15 is a hardware schematic view illustrating a computing system according to some embodiments of the present disclosure. The computing system 1000 of FIG. 15 may include one or more processors 1100, a system bus 1600, a communication interface 1200 connected with a buyer user terminal, a memory 1400 for loading a computer program 1500 performed by the processor 1100, and a storage 1300 for storing the computer program 1500”),
cause the first computer system to:
receive, by a first computer system corresponding to a first language model (LM) agent, first input data (see [0066], where “user request” (first input data) is received through a “first co-pilot channel 130a” and where the system 100 transfers the request to the “first agent execution device” (the first agent and its associated first computer system), and see [0061], where the agent is part of the “LLM service system” (first language model (LM)));
generate, using the first input data and a first LM corresponding to the first LM agent, first LM output data representing a natural language request to delegate a first task to a second LM agent different from the first LM agent and an indication that the natural language request is from the first LM agent (see [0067], where the “first agent” (first LM agent) may determine whether it can process a task, and determines that the task cannot be performed by the first agent; see [0070]-[0075], where the first agent is able to perform some steps but not all, identifies “an agent capable of performing the second step” (a second agent to which tasks are delegated), and sends a “task processing request signal” (first LM output data) requesting processing of the second step to the second agent; and see [0078], where communication between the agents is described as “p2p communication”, which according to Fig. 8 panel “151” shows “transceiver information” that includes “Sender Agent” (an indication of the sender of the request));
send the first LM output data to a second computer system corresponding to the second LM agent (see [0056], where 110a and 110b are different devices which store the different agents, and see [0075], where the “task processing request signal” (the first LM output data) is “transmit[ted]” (sent) to “the second agent” (the second LM agent));
receive, from the second computer system in response to the first LM output data, first data (see [0075] lines 7+, where “second agent” “perform[s] the second step in accordance with the task processing request signal and transmits the result of performing the second step to the first agent” (first LM agent receives from the second computer system “the result of performing the second step” (first data)));
generate second LM output data using the first data and the first LM, the second LM output data representing a response to the first input data (see [0075] last 5 lines, where the “first agent generates final processing result data” (generating second LM output data) “by using the result received from the second agent”, and see further details in [0076]); and
send the second LM output data to a first system component (see [0075] last 3 lines: “transmit” (sending) “contents of the final processing result data” (the second LM output data) “to the user terminal 10” (to a first system component) “through first co-pilot channel 130a”).
Regarding claim 14, Yang et al. teach the first computer system of claim 13, wherein the at least one memory further comprises instructions that, when executed by the at least one processor, further cause the first computer system to:
receive second data representing natural language instructions for how the first LM agent is to handle a task (¶ 0070 lines 6-8: “The first agent may identify an agent capable of performing the second step by using detailed information” (receiving second data) “on each of a plurality of agents stored in agent information storage device 120” (on how agents including the first may handle the task)),
the second data indicating:
a first instruction for the first LM agent to determine whether the second LM
agent is more capable of handling the task (¶ 0075 lines 1-2: “the first agent” (the first LM agent receiving) “may identify” (a first instruction) “that an agent capable of performing the second step is the second agent” (of a second agent capable of handling the task)), and
a second instruction to, in response to determining that the second LM agent is
more capable of handling the task, delegate the task to the second LM agent (¶ 0075 lines 4-5: “In this case, the first agent may transmit” (delegating) “a task” (the task) “processing request signal requesting processing” (via a second instruction) “of the second step to the second agent” (to the second LM agent)); and
determine a first LM prompt using the first input data and the second data, wherein generating the first LM output data includes processing the first LM prompt using the first LM (¶ 0075 lines 6+: “In addition, the second agent may perform the second step in accordance with the task processing request signal and transmit the result of performing the second step to the first agent” (determining a first prompt based on the “task processing request signal” (first LM output data), which in turn was obtained based on the first input data and the second data)).
Regarding claim 15, Yang et al. teach the first computer system of claim 13, wherein the at least one memory further comprises instructions that, when executed by the at least one processor, further cause the first computer system to:
receive second data representing an identifier corresponding to the second LM agent and a natural language description of capabilities corresponding to the second LM agent (¶ 0070 lines 6-8: “The first agent may identify an agent capable of performing the second step by using detailed information” (receiving second data) “on each of a plurality of agents stored in agent information” (which comprises identifiers of agents) “storage device 120”, plus other information such as, according to ¶ 0071 lines 10-11, “capability information” (capabilities corresponding to, e.g., the second LM agent) “that is a text” (in natural language) “for defining the capability of the corresponding agent” (for, e.g., the second agent)); and
determine a first LM prompt using the first input data and the second data, wherein generating the first LM output data includes processing the first LM prompt using the first LM (¶ 0075 lines 1-3: “the first agent may identify an agent capable of performing the second step is the second agent executed in the second agent execution device”; “in this case, the first agent may transmit a task processing request signal” (generating the first LM output data) “requesting processing” (by a first LM prompt) “of the second step to the second agent” (to the second agent, wherein said prompt is based on the original input (i.e., first data) and information pertaining to the second agent qualifying for it (second data))).
Regarding claim 16, Yang et al. teach the first computer system of claim 13, wherein the at least one memory further comprises instructions that, when executed by the at least one processor, further cause the first computer system to:
receive second data representing an identifier corresponding to a software component and a natural language description of capabilities corresponding to the software component (¶ 0070 lines 6-8: “The first agent may identify an agent capable of performing the second step by using detailed information” (receiving second data) “on each of a plurality of agents stored in agent information” “storage device 120”, plus other information such as, according to ¶ 0071 lines 10-11, “capability information” (capabilities corresponding) “that is a text” (in natural language) “for defining the capability of the corresponding agent” “and one or more skill sets” (to a software component, which also identifies the “skill” (software component)) “by the corresponding agent”); and
determine a first LM prompt using the first input data and the second data, wherein generating the first LM output data includes processing the first LM prompt using the first LM (¶ 0075 lines 1-3: “the first agent may identify an agent capable of performing the second step is the second agent executed in the second agent execution device”; “in this case, the first agent may transmit a task processing request signal” (generating the first LM output data) “requesting processing” (by a first LM prompt) “of the second step to the second agent” (to the second agent, wherein said prompt is based on the original input (i.e., first data) and information pertaining to the second agent qualifying for it (second data))).
Regarding claim 17, Yang et al. teach the first computer system of claim 13, wherein the at least one memory comprises instructions that, when executed by the at least one processor, further cause the first computer system to:
send, to the second LM agent, second data representing natural language instructions for how the second agent is to handle a task, the second data indicating:
a first instruction for the second LM agent to determine whether it is capable of handling a task indicated in a message from another LM agent (¶ 0075 lines 4-7: “the first agent” (another LM agent) “may transmit” (sending) “a task processing request signal” (a first instruction) “requesting processing of the second step” (indicating whether it is capable of handling a task) “to the second agent” (to the second agent)),
a second instruction to, in response to determining that the second LM agent is capable of handling the task, generate a first response to an other LM agent by processing the message using a second LM corresponding to the second LM agent (¶ 0075 lines 6+: “In addition, the second agent may perform the second step” (in response to determining that the second LM agent is capable of handling the task) “and transmit” (and generate a first response) “the result of performing the second step to the first agent” (to the other LM agent)),
and a third instruction to, in response to determining that the second LM agent is not capable of handling the task, generate a second response to the other LM agent indicating that the second LM agent is unable to handle the task (¶ 0076 lines 9-11: “and the result of performing the third step” (a third instruction) “may be received from the third agent” (indicating that a task is not done by the “second agent” (second LM agent) as it was done by a “third agent”) “as second external processing result data” (as a second response to the other LM agent)).
Regarding claim 19, Yang et al. teach the first computer system of claim 13, wherein the at least one memory further comprises instructions that, when executed by the at least one processor, further cause the first computer system to:
receive second input data (¶ 0029 lines 10+: “processing a second user request” (receiving second input data) “introduced through a second channel”);
generate, using the first LM, third LM output data representing a second task to
delegate to the second LM agent upon detection of an event (¶ 0029 lines 13+: “the first agent”(using the first LM) “determines whether it can process a task for generating a processing result for the user request by itself, and transmits a first task processing request signal” (detecting an event associated with) “for requesting” (associated with a third LM output data) “processing of at least a portion of steps constituting” (for delegating) “the task” (the second task) “to the second agent” (to the second LM agent));
determine, using the third LM output data, second data corresponding to the second task; detecting an occurrence of the event (¶ 0029 lines 13+: “the first agent determines whether it can process a task for generating a processing result for the user request by itself, and transmits a first task processing request signal” (upon detecting the event) “for requesting” (associated with the third LM output data) “processing of at least a portion of steps” (second data is determined) “constituting” (for delegating) “the task” (the second task) “to the second agent”);
and
in response to detecting the occurrence, send the second data to the second LM agent (¶ 0029 last 4 lines: the “first task processing signal” (in response to the occurrence of the event) “for requesting processing of at least a portion of steps” (the second data) “constituting the task to the second agent” (is sent to the second LM agent)),
the second LM agent performing the second task in response to receiving the second data (¶ 0029 last 4 lines: the “first task processing signal” requests “processing of at least a portion of steps” (the second data) “constituting the task” of “the second agent” “when the first agent determines that it cannot process the user request by itself” (the second task therefore being performed by the second agent)).
Regarding claim 20, Yang et al. teach the first computer system of claim 13, wherein the at least one memory further comprises instructions that, when executed by the at least one processor, further cause the first computer system to:
receive second data representing: a first message format corresponding to messages from users, the first message format including a first portion indicating that a message is from a user and a second portion representing a natural language user input (Fig. 8 step “154d” line 1 (a first message format): “<USER> What is the date today” (second data following “152” (a first user input)), which comprises “<USER>” (a first portion indicating the identity of the sending user) and “What is the date today” (a second portion representing a natural language user input));
a second message format corresponding to messages from other LM agents, the
second message format including a third portion identifying [[the]] an other LM agent
and a fourth portion representing natural language generated by the other LM agent (Fig. 8 step “154d” line 2 (a second message format): “<Agent> Today is February 28, 2024”, which is a message from an agent comprising “<Agent>” (a third portion which identifies an agent) and “Today is February 28, 2024” (a fourth portion representing natural language generated by the other agent)),
and
a third message format corresponding to delegation requests to send to other LM
agents, the third message format including a fifth portion identifying a delegate LM agent, and a sixth portion representing a natural language message to the delegate LM agent (Fig. 9 step “154-1” lines 1-2: “You” (identifying a delegate LM agent (a fifth portion)) “have been requested” (a delegation request) “a task by another agent” “and generated a plan to perform the task” (and a sixth portion representing a natural language message to the delegate agent)).
Allowable Subject Matter
Claims 1-4 are allowed.
Claims 10 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARZAD KAZEMINEZHAD whose telephone number is (571)270-5860. The examiner can normally be reached 10:30 am to 11:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D. Shah can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Farzad Kazeminezhad/
Art Unit 2653
February 23rd, 2026.