Prosecution Insights
Last updated: April 19, 2026
Application No. 18/528,727

DETERMINING THE INTENDED RECIPIENT OF UNDIRECTED UTTERANCES IN A MULTI-BOT CONTEXT

Status: Final Rejection (§103)
Filed: Dec 04, 2023
Examiner: HLAING, SOE MIN
Art Unit: 2451
Tech Center: 2400 (Computer Networks)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (288 granted of 353 resolved cases), +23.6% vs TC avg (above average)
Interview Lift: +17.5% for resolved cases with an examiner interview (strong)
Typical Timeline: 2y 7m average prosecution; 18 applications currently pending
Career History: 371 total applications across all art units
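
For readers who want to recompute these figures, here is a minimal Python sketch of how the headline statistics above can be derived from the raw counts on this page. The counts (288 granted, 353 resolved, 371 total) come from this page; the additive treatment of the interview lift is an assumption of the sketch, not a documented formula of the dashboard.

# Raw counts shown on this page.
granted = 288            # applications granted by this examiner
resolved = 353           # resolved cases (granted plus abandoned)
total_filed = 371        # total applications across all art units

career_allow_rate = granted / resolved       # 288 / 353 ≈ 0.816, shown as 82%
currently_pending = total_filed - resolved   # 371 - 353 = 18

# Interview lift (+17.5 points) applied additively; this is an assumption,
# since the page does not state how the 99% "with interview" figure is computed.
with_interview = min(career_allow_rate + 0.175, 1.0)   # ≈ 0.99, shown as 99%

print(f"Allow rate: {career_allow_rate:.1%}")                 # 81.6%
print(f"Currently pending: {currently_pending}")              # 18
print(f"Grant probability with interview: {with_interview:.0%}")  # 99%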

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 15.2% (-24.8% vs TC avg)
§112: 5.6% (-34.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 353 resolved cases.
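
As a cross-check, the implied Tech Center averages can be back-calculated from each percentage and its delta. The short sketch below does that arithmetic; the reading (examiner share minus delta equals TC average) is an assumption based on how the chart labels read.

# Percentages and deltas as shown above; TC averages are back-calculated estimates.
examiner_pct = {"§101": 7.2, "§103": 60.2, "§102": 15.2, "§112": 5.6}
delta_vs_tc  = {"§101": -32.8, "§103": +20.2, "§102": -24.8, "§112": -34.4}

for statute, pct in examiner_pct.items():
    tc_avg = pct - delta_vs_tc[statute]   # e.g. §103: 60.2 - 20.2 = 40.0
    print(f"{statute}: examiner {pct}% vs TC avg {tc_avg:.1f}% ({delta_vs_tc[statute]:+.1f} pts)")

Under that reading, every statute's implied TC average comes out to 40.0%, which suggests the original chart's baseline marked a single estimated average rather than per-statute averages.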

Office Action (Final Rejection under §103)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments/Remarks

Applicant's arguments filed 11/05/2025 have been fully considered but they are not persuasive.

In response to applicant's arguments/remarks (1st ¶ of page 8) stating that the cited references fail to suggest at least the feature of "generating a prompt in response to receiving the user communication, wherein the prompt includes an instruction to identify which of the first bot or the second bot the user communication is directed" because "the execution plan" disclosed by Bista is not equivalent to a prompt (2nd ¶ of page 8), the examiner respectfully disagrees.

[Figure: reproduced Fig. 2 of Bista, annotated with an arrow pointing to the "Output Prompt" of element 204.]

The examiner first notes that, for independent claims 1 and 18, the examiner did not equate "the execution plan" disclosed by Bista to "a prompt" that is generated in response to the user communication. The examiner has included the reproduced Fig. 2 of Bista above and has pointed (see the arrow) to the feature, i.e. the "Output Prompt" of 204 of Fig. 2, that is used to teach the claim limitation "prompt" of independent claims 1 and 18. Furthermore, Bista explicitly describes that the "Output Prompt" [i.e. a prompt] is generated in response to receiving a request / input prompt 220 from the user [i.e. the user communication] (1, 220 & 204 – Fig. 2 and ¶ 0055 – 0056). Bista also describes, in step 2, performing, based on words from the request 220 [i.e. the user communication], a semantic search for matching words in the digital assistant and agent, wherein the matching words [i.e. an instruction] identify corresponding agents [i.e. which of the first bot or the second bot] from a plurality of agents [i.e. the first bot and the second bot]. Then the "Output Prompt", which includes gathered data, e.g. matching words [i.e. an instruction] identifying the agents [i.e. which of the first bot or the second bot], is fed to an LLM. Note that the matching words [i.e. an instruction to identify which of the first bot or the second bot] are semantically identified from the user request 220 [i.e. the user communication is directed] (¶ 0043 - ¶ 0056 – 0057).

In response to applicant's arguments/remarks (1st ¶ of page 9) stating that the cited references fail to suggest the feature "providing the prompt to a generative model, where the generative model generates an output based upon the prompt, and further where the output indicates that the user communication is directed to the first bot" because Bista fails to suggest "generating a prompt including an instruction to identify which of the first bot or the second bot the user communication is directed", the examiner respectfully disagrees. As described above, Bista explicitly discloses the limitation "generating a prompt including an instruction to identify which of the first bot or the second bot the user communication is directed". Furthermore, Bista also discloses feeding [i.e. providing] the "Output Prompt" [i.e. the prompt] (see 204 – Fig. 4) to planning LLM 204A [i.e. a generative model] (204A – Fig. 2 and ¶ 0057). The planning LLM 204A [i.e. the generative model] then generates execution plan 212 [i.e. an output] based on the "Output Prompt" [i.e. based upon the prompt], and the execution plan 212 [i.e. the output] identifies/indicates one or more agents, e.g. 210A [i.e. the first bot], for responding to the request [i.e. the user communication is directed to the first bot] (204A & 212 – Fig. 2, ¶ 0055 and ¶ 0057 – 0060).

In response to applicant's arguments/remarks (2nd ¶ of page 9) generally stating that claim 14 has been amended to recite similar features as claim 1, the examiner notes that the scope and the limitations of independent claim 14 are different from those of independent claims 1 and 18. For example, claim 14 includes features, such as "generating a prompt for a generative model that corresponds to the first bot" and "the generative model generates a response to the user communication", that are not recited in claims 1 and 18. Therefore, applicant's arguments/remarks with respect to claim 1 may not be similarly applied to claim 14. The ground of rejection used to address claim 14 differs from that of claims 1 and 18, and it is addressed in this Office Action below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-10, 12 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bista et al. (US PG PUB 20250094455 / US Provisional 63538747), hereinafter "Bista", in view of Choi et al. (US PG PUB 20180293983), hereinafter "Choi".

Regarding Claim 1, Bista discloses: A computing system (i.e. system 200) (Fig. 2 and ¶ 0051 – 0052) comprising: a processor (i.e. one or more processors) (¶ 0007); and memory storing instructions that, when executed by the processor, cause the processor to perform acts (i.e. memory storing program instructions executable by the one or more processors) (¶ 0048) comprising: receiving a user communication set forth by a user of a client computing device (i.e. the method/system may receive a user request / input prompt, e.g. "What is my current 401k contribution?" [i.e. a user communication set forth by a user of a client computing device]) (220 – Fig. 2 and ¶ 0054 – 0055 of PG PUB / Fig. 2 and ¶ 0047 of provisional application), where the user communication is directed towards an environment that includes a first bot and a second bot (i.e. the user request [i.e. the user communication] is inputted/directed to the system environment that includes Agent Artifacts, e.g. Agent 1, Agent 2, etc. [i.e. a first bot and a second bot]) (210 – Fig. 2 and ¶ 0055 of PG PUB / Fig. 2 and ¶ 0047 of provisional application); generating a prompt in response to receiving the user communication (i.e. the method/system may generate the "Output Prompt" [i.e. a prompt] including the user request and relevant data gathered from the context and memory store 206 in response to receiving the user request [i.e. the user communication], wherein the prompt including the user request and the gathered data is fed into LLM 204A) (204, 204A & 206 – Fig. 2 and ¶ 0056 – 0057 of PG PUB / Fig. 2 and ¶ 0047 of provisional application), where the prompt includes an instruction to identify which of the first bot or the second bot the user communication is directed (i.e. the "Output Prompt" includes gathered data, e.g. matching words [i.e. an instruction] identifying the agents [i.e. which of the first bot or the second bot]; note that the matching words [i.e. an instruction to identify which of the first bot or the second bot] are semantically identified from the user request 220 [i.e. the user communication is directed]) (204, 204A & 206 – Fig. 2 and ¶ 0057 – 0060 of PG PUB / Fig. 2 and ¶ 0047 of provisional application); providing the prompt to a generative model, where the generative model generates an output based upon the prompt (i.e. the prompt including the request and gathered data is fed into the planning LLM 204A [i.e. a generative model], wherein the LLM 204A generates execution plan 212 [i.e. an output] based on the prompt [i.e. based upon the prompt]) (204A & 212 – Fig. 2 and ¶ 0057 – 0060 of PG PUB / Fig. 2 and ¶ 0047 of provisional application), and further where the output indicates that the user communication is directed to the first bot (i.e. the execution plan 212 [i.e. the output] identifies/indicates one or more agents, e.g. 210A, 210B, etc. [i.e. the first bot], to address the request [i.e. the user communication] and the one or more actions, e.g. 210C, 210D, etc., to be executed by the one or more agents, e.g. 210A, 210B [i.e. directed to the first bot] for responding to the request [i.e. the user communication is directed to the first bot]) (210 & 212 – Fig. 2, ¶ 0055 and ¶ 0057 – 0060 of PG PUB / Fig. 2 and ¶ 0047 of provisional application); and causing the first bot to generate a response to the user communication (i.e. the one or more agents [i.e. the first bot], e.g. 210A & 210B / 401k Contribution agent, execute actions in order to generate a response to the request [i.e. a response to the user communication]) (212 – Fig. 2, ¶ 0024 – 0025 and ¶ 0057 – 0060 of PG PUB / Fig. 2 and ¶ 0047 of provisional application).

However, Bista does not explicitly disclose: where the response is caused to be presented to a user of the client computing device as being provided by the first bot. On the other hand, in the same field of endeavor, Choi teaches: where the response is caused to be presented to a user of the client computing device as being provided by the first bot (i.e. the user of the electronic device may be presented with a response 1542a [i.e. the response], wherein the response 1542a is displayed as being provided by "Air-bot" [i.e. the first bot], which is selected from a pool of CP chatbots) (1512a & 1542a – Fig. 15A, ¶ 0193, ¶ 0211 and ¶ 0239). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system/computer-readable-medium of Bista to include the feature where the response is caused to be presented to a user of the client computing device as being provided by the first bot, as taught by Choi, so that the user may directly communicate with the bot that is relevant to the user request (1512a & 1542a – Fig. 15A, ¶ 0193, ¶ 0211 and ¶ 0239).

Regarding Claim 2, Bista and Choi disclose: where the output additionally indicates that the user communication is directed to the second bot (Bista - i.e. the execution plan 212 [i.e. the output] identifies/indicates one or more agents, e.g. 210A [i.e. the first bot], 210B [i.e. the second bot], etc., to address the request [i.e. the user communication is directed to the second bot]) (Bista - 210 & 212 – Fig. 2, ¶ 0055 and ¶ 0057 – 0060 of PG PUB / Fig. 2 and ¶ 0047 of provisional application), the acts further comprising: causing the second bot to generate a second response to the user communication (Choi - i.e. weather forecaster bot 931 [i.e. the second bot] may generate a message 932, e.g. "It will be sunny, ..." [i.e. a second response], in addition to the message 922, e.g. "I will register a trip to Seattle..." [i.e. the first response], generated by scheduler bot 921 [i.e. the first bot]) (Choi - 921, 922, 931 & 932 - Fig. 9, ¶ 0165 and ¶ 0168), where the second response is caused to be presented to the user of the client computing device as being provided by the second bot (Choi - i.e. the message 932, e.g. "It will be sunny, ..." [i.e. the second response], is presented to the user of the electronic device as being provided by the weather forecaster bot 931 [i.e. the second bot]) (Choi - 921, 922, 931 & 932 - Fig. 9, ¶ 0165 and ¶ 0168). The prior art used in the rejection of the current claim is combined using the same motivations as were applied in claim 1.

Regarding Claim 3, Bista and Choi disclose, and in particular Choi teaches: where the output indicates that the user communication is not directed to the second bot (i.e. the method/system may output a plurality of CP chatbots with their respective confidence levels; from the outputted plurality of CP chatbots [i.e. the output], the CP chatbots [i.e. the second bot] with confidence levels lower than a threshold are indicated as not suited/relevant [i.e. not directed] for/to the natural language input of the user [i.e. the user communication]) (¶ 0070). The prior art used in the rejection of the current claim is combined using the same motivations as were applied in claim 1.

Regarding Claim 4, Bista and Choi disclose, and in particular Bista teaches: where the prompt includes: the user communication; and a previous user communication set forth by the user of the client computing device (i.e. the prompt includes the user request [i.e. the user communication] and user session, dialog state, information from previous input in a dialog session [i.e. a previous user communication set forth by the user of the client computing device], etc.) (¶ 0057 and ¶ 0065).

Regarding Claim 5, Bista and Choi disclose, and in particular Bista teaches: where the prompt includes: the user communication; and at least one message previously set forth by the first bot (i.e. the prompt includes the user request/query [i.e. the user communication] and user session, dialog state, dialog history [i.e. conversation history between the user and chatbot; in other words, at least one message previously set forth by the first bot], information from previous input in a dialog session, etc.) (¶ 0057 and ¶ 0065).

Regarding Claim 6, Bista and Choi disclose, and in particular Bista teaches: where the prompt includes: the user communication; and at least one message previously set forth by the second bot (i.e. the prompt includes the user request/query [i.e. the user communication] and user session, dialog state, dialog history [i.e. conversation history between the user and chatbot; in other words, at least one message previously set forth by the second bot], information from previous input in a dialog session, etc.) (¶ 0057 – 0060 and ¶ 0065).
Regarding Claim 7, Bista and Choi disclose, and in particular Bista teaches: where the prompt includes: contextual data; and a second instruction to summarize the contextual data (i.e. the user request/query [i.e. the prompt] may include contextual information [i.e. contextual data] from previous input in a dialog session; the modified prompt is generated based on the original or rewritten query [i.e. context information is built/summarized from previous inputs; in other words, a second instruction to summarize the contextual data]) (206 – Fig. 2, ¶ 0057 – 0061 and ¶ 0065).

Regarding Claim 8, Bista and Choi disclose, and in particular Bista teaches: where a second generative model corresponds to the first bot (i.e. agents [i.e. the first bot] are associated with their respective LLMs [i.e. a second generative model]) (¶ 0043), the acts further comprising: subsequent to providing the prompt to the generative model, constructing a second prompt (i.e. subsequent to providing the modified user request/query to the LLM [i.e. the generative model], the execution plan [i.e. a second prompt] is constructed) (Fig. 2 and ¶ 0057 – 0060), where the second prompt includes the user communication (i.e. the execution plan includes the user request/query [i.e. the user communication]) (Fig. 2 and ¶ 0057 – 0060); and providing the second prompt to the second generative model, where the second generative model generates the response based upon the user communication in the second prompt (i.e. the execution plan is provided to the agent-specific LLM [i.e. the second generative model] in order to generate the response based upon the execution plan [i.e. the second prompt]) (Fig. 2, ¶ 0057 – 0060 and ¶ 0066).

Regarding Claim 9, Bista and Choi disclose, and in particular Bista teaches: where the second prompt additionally includes a previous user communication set forth by the user of the client computing device (i.e. the user request/query [i.e. the prompt] may include contextual information [i.e. contextual data] from previous input in a dialog session) (206 – Fig. 2, ¶ 0057 – 0061 and ¶ 0065).

Regarding Claim 10, Bista and Choi disclose, and in particular Choi teaches: where the user communication fails to identify either the first bot or the second bot (i.e. in the case where no candidate CP chatbot [i.e. the first bot or the second bot] is found [i.e. fails to identify] for the natural language input of the user [i.e. the user communication fails to identify either the first bot or the second bot], the master chatbot may output a guidance message requesting from the user another natural language input refining his or her request) (¶ 0220). The prior art used in the rejection of the current claim is combined using the same motivations as were applied in claim 1.

Regarding Claim 12, Bista and Choi disclose, and in particular Bista teaches: where the environment is a chat environment (i.e. the system environment is a chat conversation environment for responding to the user chat message, such as "What is my 401k contribution limit?") (220 – Fig. 2 and ¶ 0064).

Regarding Claim 14, Bista discloses: A method performed by a computing system (i.e. system 200) (Fig. 2 and ¶ 0051 – 0052), the method comprising: receiving a user communication from a client computing device (i.e. the method/system may receive a user request / input prompt, e.g. "What is my current 401k contribution?" [i.e. a user communication set forth by a user of a client computing device]) (220 – Fig. 2 and ¶ 0054 – 0055 of PG PUB / Fig. 2 and ¶ 0047 of provisional application), where the user communication is directed towards an environment that includes a first bot and a second bot (i.e. the user request [i.e. the user communication] is inputted/directed to the system environment that includes Agent Artifacts, e.g. Agent 1, Agent 2, etc. [i.e. a first bot and a second bot]) (210 – Fig. 2 and ¶ 0055 of PG PUB / Fig. 2 and ¶ 0047 of provisional application); generating a prompt for a generative model that corresponds to the first bot (i.e. the method/system may generate the execution plan [i.e. a prompt] that is fed into LLM 214 [i.e. a generative model] corresponding to an agent, e.g. the 401k contribution agent [i.e. the first bot]) (Fig. 2 and ¶ 0057 of PG PUB / Fig. 2 and ¶ 0047 of provisional application), where the prompt is generated in response to receipt of the user communication, and further where the prompt includes an instruction to ascertain whether the first bot should respond to the user communication (i.e. the method/system may generate the execution plan [i.e. the prompt] identifying an agent, e.g. the 401k contribution agent [i.e. the first bot], for responding to the user request [i.e. ascertain whether the user communication is directed to the first bot and whether the first bot should respond to the user communication], in response to receiving the user request [i.e. the user communication]) (Fig. 2 and ¶ 0057 of PG PUB / Fig. 2 and ¶ 0047 of provisional application); providing the prompt to the generative model, where the generative model generates a response to the user communication based upon the prompt (i.e. the execution plan [i.e. the prompt] is provided to one or more LLMs [i.e. the generative model], wherein the LLMs generate a response to the user request [i.e. the user communication] based upon the execution plan [i.e. the prompt]) (Fig. 2, ¶ 0024 – 0025, ¶ 0057 – 0060 and ¶ 0066 of PG PUB / Fig. 2 and ¶ 0047 of provisional application).

However, Bista does not explicitly disclose: causing the response to be presented to a user of the client computing device as being output by the first bot. On the other hand, in the same field of endeavor, Choi teaches: causing the response to be presented to a user of the client computing device as being output by the first bot (i.e. the user of the electronic device may be presented with a response 1542a [i.e. the response], wherein the response 1542a is displayed as being provided by "Air-bot" [i.e. the first bot], which is selected from a pool of CP chatbots) (1512a & 1542a – Fig. 15A, ¶ 0193, ¶ 0211 and ¶ 0239). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system/computer-readable-medium of Bista to include the feature of causing the response to be presented to a user of the client computing device as being output by the first bot, as taught by Choi, so that the user may directly communicate with the bot that is relevant to the user request (1512a & 1542a – Fig. 15A, ¶ 0193, ¶ 0211 and ¶ 0239).

Regarding Claim 15, Bista and Choi disclose: generating a second prompt for a second generative model that corresponds to the second bot (Bista - i.e. the method/system may generate the execution plan [i.e. a second prompt] that is fed into LLM 214 [i.e. a second generative model] corresponding to one or more agents, e.g. the 401k contribution agent [i.e. the second bot]) (Bista - Fig. 2 and ¶ 0057 of PG PUB / Fig. 2 and ¶ 0047 of provisional application), where the second prompt is generated in response to receipt of the user communication and further where the second prompt includes a second instruction to ascertain whether the user communication is directed to the second bot and if the second bot should respond to the user communication (Bista - i.e. the method/system may generate the execution plan [i.e. the second prompt] identifying an agent, e.g. the 401k contribution agent [i.e. the second bot], for responding to the user request [i.e. ascertain whether the second bot should respond to the user communication], in response to receiving the user request [i.e. the user communication]) (Bista - Fig. 2 and ¶ 0057 of PG PUB / Fig. 2 and ¶ 0047 of provisional application); providing the second prompt to the second generative model, where the second generative model generates a second response to the user communication (Bista - i.e. the execution plan [i.e. the second prompt] is provided to one or more LLMs [i.e. the generative model], wherein the LLMs generate a response to the user request [i.e. the user communication] based upon the execution plan [i.e. the second prompt]) (Bista - Fig. 2, ¶ 0024 – 0025, ¶ 0057 – 0060 and ¶ 0066 of PG PUB / Fig. 2 and ¶ 0047 of provisional application); and causing the second response to be presented to the user of the client computing device as being output by the second bot (Choi - i.e. the user of the electronic device may be presented with a response 1542a [i.e. the second response], wherein the response 1542a is displayed as being provided by "Air-bot" [i.e. the second bot], which is selected from a pool of CP chatbots) (Choi - 1512a & 1542a – Fig. 15A, ¶ 0193, ¶ 0211 and ¶ 0239). The prior art used in the rejection of the current claim is combined using the same motivations as were applied in claim 14.

Regarding Claim 16, Bista discloses: generating a second prompt for a second generative model that corresponds to the second bot (i.e. the method/system may generate the execution plan [i.e. a second prompt] that is fed into LLM 214 [i.e. a second generative model] corresponding to one or more agents, e.g. the 401k contribution agent [i.e. the second bot]) (Fig. 2 and ¶ 0057 of PG PUB / Fig. 2 and ¶ 0047 of provisional application), where the second prompt is generated in response to receipt of the user communication, and further where the second prompt includes a second instruction to ascertain whether the second bot should respond to the user communication (i.e. the method/system may generate the execution plan [i.e. the second prompt] identifying an agent, e.g. the 401k contribution agent [i.e. the second bot], for responding to the user request [i.e. ascertain whether the second bot should respond to the user communication], in response to receiving the user request [i.e. the user communication]) (Fig. 2 and ¶ 0057 of PG PUB / Fig. 2 and ¶ 0047 of provisional application); and providing the second prompt to the second generative model, where the second generative model outputs a null value based upon the second prompt (i.e. the CQR model 214C can be further trained to interpret and correct previous turns of the conversation [i.e. the second prompt] based on user corrections or restatements; the CQR model 214C can also clarify the query to handle mistakes [i.e. a null value based upon the second prompt] made by the response LLM [i.e. the second generative model], or mistakes in rewriting the query by the CQR model) (¶ 0091).
Regarding Claim 17, Bista discloses: where the prompt includes: the user communication; and a previous user communication set forth by the user of the client computing device (i.e. the prompt includes the user request [i.e. the user communication] and user session, dialog state, information from previous input in a dialog session [i.e. a previous user communication set forth by the user of the client computing device], etc.) (¶ 0057 and ¶ 0065).

Regarding Claim 18, Bista discloses: A computer-readable storage medium comprising instructions that, when executed by a processor (i.e. memory storing program instructions executable by the one or more processors) (¶ 0048), cause the processor to perform acts comprising: receiving a user communication set forth by a user of a client computing device (i.e. the method/system may receive a user request / input prompt, e.g. "What is my current 401k contribution?" [i.e. a user communication set forth by a user of a client computing device]) (220 – Fig. 2 and ¶ 0054 – 0055 of PG PUB / Fig. 2 and ¶ 0047 of provisional application), where the user communication is directed towards an environment that includes a first bot and a second bot (i.e. the user request [i.e. the user communication] is inputted/directed to the system environment that includes Agent Artifacts, e.g. Agent 1, Agent 2, etc. [i.e. a first bot and a second bot]) (210 – Fig. 2 and ¶ 0055 of PG PUB / Fig. 2 and ¶ 0047 of provisional application); generating a prompt in response to receiving the user communication (i.e. the method/system may generate a prompt including the user request and relevant data gathered from the context and memory store 206 in response to receiving the user request [i.e. the user communication], wherein the prompt including the user request and the gathered data is fed into LLM 204A) (204, 204A & 206 – Fig. 2 and ¶ 0057 of PG PUB / Fig. 2 and ¶ 0047 of provisional application), where the prompt includes an instruction to identify which of the first bot or the second bot the user communication is directed (i.e. the "Output Prompt" includes gathered data, e.g. matching words [i.e. an instruction] identifying the agents [i.e. which of the first bot or the second bot]; note that the matching words [i.e. an instruction to identify which of the first bot or the second bot] are semantically identified from the user request 220 [i.e. the user communication is directed]) (204, 204A & 206 – Fig. 2 and ¶ 0057 – 0060 of PG PUB / Fig. 2 and ¶ 0047 of provisional application); providing the prompt to a generative model, where the generative model generates an output based upon the prompt (i.e. the prompt including the request and gathered data is fed into the planning LLM 204A [i.e. a generative model], wherein the LLM 204A generates execution plan 212 [i.e. an output] based on the prompt [i.e. based upon the prompt]) (204A & 212 – Fig. 2 and ¶ 0057 – 0060 of PG PUB / Fig. 2 and ¶ 0047 of provisional application), and further where the output indicates that the user communication is directed to the first bot (i.e. the execution plan 212 [i.e. the output] identifies/indicates one or more agents, e.g. 210A, 210B, etc. [i.e. the first bot], to address the request [i.e. the user communication] and the one or more actions, e.g. 210C, 210D, etc., to be executed by the one or more agents, e.g. 210A, 210B [i.e. directed to the first bot] for responding to the request [i.e. the user communication is directed to the first bot]) (210 & 212 – Fig. 2, ¶ 0055 and ¶ 0057 – 0060 of PG PUB / Fig. 2 and ¶ 0047 of provisional application); and causing the first bot to generate a response to the user communication (i.e. the one or more agents [i.e. the first bot], e.g. 210A & 210B / 401k Contribution agent, execute actions in order to generate a response to the request [i.e. a response to the user communication]) (212 – Fig. 2, ¶ 0024 – 0025 and ¶ 0057 – 0060 of PG PUB / Fig. 2 and ¶ 0047 of provisional application).

However, Bista does not explicitly disclose: where the response is caused to be presented to a user of the client computing device as being provided by the first bot. On the other hand, in the same field of endeavor, Choi teaches: where the response is caused to be presented to a user of the client computing device as being provided by the first bot (i.e. the user of the electronic device may be presented with a response 1542a [i.e. the response], wherein the response 1542a is displayed as being provided by "Air-bot" [i.e. the first bot], which is selected from a pool of CP chatbots) (1512a & 1542a – Fig. 15A, ¶ 0193, ¶ 0211 and ¶ 0239). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system/computer-readable-medium of Bista to include the feature where the response is caused to be presented to a user of the client computing device as being provided by the first bot, as taught by Choi, so that the user may directly communicate with the bot that is relevant to the user request (1512a & 1542a – Fig. 15A, ¶ 0193, ¶ 0211 and ¶ 0239).

Regarding Claim 19, Bista and Choi disclose: where the output additionally indicates that the user communication is directed to the second bot (Bista - i.e. the execution plan 212 [i.e. the output] identifies/indicates one or more agents, e.g. 210A [i.e. the first bot], 210B [i.e. the second bot], etc., to address the request [i.e. the user communication is directed to the second bot]) (Bista - 210 & 212 – Fig. 2, ¶ 0055 and ¶ 0057 – 0060 of PG PUB / Fig. 2 and ¶ 0047 of provisional application), the acts further comprising: causing the second bot to generate a second response to the user communication (Choi - i.e. weather forecaster bot 931 [i.e. the second bot] may generate a message 932, e.g. "It will be sunny, ..." [i.e. a second response], in addition to the message 922, e.g. "I will register a trip to Seattle..." [i.e. the first response], generated by scheduler bot 921 [i.e. the first bot]) (Choi - 921, 922, 931 & 932 - Fig. 9, ¶ 0165 and ¶ 0168), where the second response is caused to be presented to the user of the client computing device as being provided by the second bot (Choi - i.e. the message 932, e.g. "It will be sunny, ..." [i.e. the second response], is presented to the user of the electronic device as being provided by the weather forecaster bot 931 [i.e. the second bot]) (Choi - 921, 922, 931 & 932 - Fig. 9, ¶ 0165 and ¶ 0168). The prior art used in the rejection of the current claim is combined using the same motivations as were applied in claim 18.

Regarding Claim 20, Bista and Choi disclose, and in particular Choi teaches: where the output indicates that the user communication is not directed to the second bot (i.e. the method/system may output a plurality of CP chatbots with their respective confidence levels; from the outputted plurality of CP chatbots [i.e. the output], the CP chatbots [i.e. the second bot] with confidence levels lower than a threshold are indicated as not suited/relevant [i.e. not directed] for/to the natural language input of the user [i.e. the user communication]) (¶ 0070). The prior art used in the rejection of the current claim is combined using the same motivations as were applied in claim 18.

Claims 11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Bista in view of Choi as applied to claim 1 above, and further in view of Douceur et al. (US PG PUB 20090203449), hereinafter "Douceur".

Regarding Claim 11, Bista and Choi disclose all the features with respect to claim 1 as described above. However, the combination of Bista and Choi does not explicitly disclose: where the environment is a video game environment. On the other hand, in the same field of endeavor, Douceur teaches: where the environment is a video game environment (i.e. the system environment may be implemented in a tactical gaming navigation context [i.e. a video game environment]) (¶ 0044 – 0045). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system/computer-readable-medium of Bista and Choi to include the feature where the environment is a video game environment, as taught by Douceur, so that the system may be implemented in a gaming context (¶ 0044 – 0045).

Regarding Claim 13, Bista and Choi disclose all the features with respect to claim 1 as described above. However, the combination of Bista and Choi does not explicitly disclose: where the prompt additionally includes locations of the first bot and the second bot in a virtual environment. On the other hand, in the same field of endeavor, Douceur teaches: where the prompt additionally includes locations of the first bot and the second bot in a virtual environment (i.e. advice 418 [i.e. the prompt] may include new positions of players and bots in the virtual gaming environment) (¶ 0045 – 0046). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method/system/computer-readable-medium of Bista and Choi to include the feature where the prompt additionally includes locations of the first bot and the second bot in a virtual environment, as taught by Douceur, so that the system may be implemented in a gaming context and may receive new positions of players and bots in the virtual gaming environment (¶ 0044 – 0046).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOE MIN HLAING, whose telephone number is (303) 297-4282.
The examiner can normally be reached Monday-Friday, 9AM - 5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher Parry, can be reached at 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Soe Hlaing/
Primary Examiner, Art Unit 2451
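
To make the disputed architecture concrete: the claim 1 flow argued above (a routing prompt containing an instruction to identify the intended bot, a generative model whose output names that bot, and a response presented under the named bot's identity) can be sketched as below. This is an illustrative reconstruction based only on the claim language quoted in the rejection, not code from the application or from Bista; call_llm and the bot names are hypothetical stand-ins.

# Illustrative sketch of the claim 1 routing flow for undirected utterances.
# call_llm is a hypothetical stand-in for any generative-model API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real generative-model call")

# Hypothetical multi-bot environment (compare Bista's agents / Choi's CP chatbots).
BOTS = {
    "scheduler-bot": "You schedule trips and meetings.",
    "weather-bot": "You answer questions about the weather.",
}

def route(user_communication: str, history: list[str]) -> str:
    """Claim 1: build a prompt that includes an instruction to identify which
    bot the (undirected) user communication is directed to, and ask a model."""
    routing_prompt = (
        "Conversation so far:\n" + "\n".join(history) + "\n"
        f"New user message: {user_communication}\n"
        f"Which of these bots is this message directed to: {', '.join(BOTS)}? "
        "Answer with the bot name only."
    )
    return call_llm(routing_prompt).strip()  # the "output" naming the first bot

def respond(user_communication: str, history: list[str]) -> tuple[str, str]:
    """Route the message, then have the identified bot's own model (compare
    claims 8 and 14) generate the reply presented under that bot's name."""
    bot = route(user_communication, history)
    reply = call_llm(f"{BOTS[bot]}\nUser: {user_communication}\n{bot}:")
    return bot, reply  # the client displays reply as coming from bot

Claim 14's variant instead builds a per-bot prompt asking whether that bot should respond at all (with a null output when it should not, per claim 16), which in this sketch would amount to running route-style logic once per bot rather than once overall.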

Prosecution Timeline

Dec 04, 2023: Application Filed
May 03, 2025: Non-Final Rejection (§103)
Nov 05, 2025: Response Filed
Feb 06, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603938: METHOD AND INTERNET OF THINGS SYSTEM FOR LOADING GAS DATA OF SMART GAS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12598134: PACKET DISCARD NOTIFICATIONS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592904: INTELLIGENT TRANSACTION SCORING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587494: COORDINATED EMOTION REPRESENTATION AND EXPRESSION IN MULTI-AGENT DIGITAL ASSISTANTS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12580983: DATA TRANSMISSION METHOD AND COMMUNICATION APPARATUS (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 99% (+17.5%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 353 resolved cases by this examiner. Grant probability is derived from the career allow rate.
