Prosecution Insights
Last updated: April 19, 2026
Application No. 18/952,572

METHOD OF INFORMATION PROCESSING, ELECTRONIC DEVICE AND STORAGE MEDIUM

Non-Final OA: §102, §103
Filed: Nov 19, 2024
Examiner: NGUYEN, LINH T
Art Unit: 2459
Tech Center: 2400 (Computer Networks)
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 70% (248 granted / 354 resolved), above average, +12.1% vs TC avg
Interview Lift: strong, +26.0% in resolved cases with interview
Typical Timeline: 2y 9m avg prosecution, 30 currently pending
Career History: 384 total applications across all art units

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§102: 9.2% (-30.8% vs TC avg)
§103: 64.2% (+24.2% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 354 resolved cases.

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/19/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 6, 9, 12-14, 17, 19 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gruber et al. (US 2013/0311997), hereinafter Gruber.

As for claim 1, Gruber teaches a method of information processing, applied to a first server of a digital assistant (paragraph [0008] describes a method for processing a user input; paragraphs [0023]-[0025] describe a digital assistant (DA) executed on a server system, comprising a client-facing interface facilitating the client-facing input and output for the digital assistant server), the method comprising: generating a first message based on information indicated by a trigger request from a client of the digital assistant under a first task topic in response to the trigger request (Fig.
4A, steps 412-418; paragraphs [0115]-[0119] describe a DA server receives an input of a user, identifies respective task type, locates one or more service providers that can perform the identified task type in accordance with information in a service provider directory and/or the list of supported tasks or competencies. When the DA server locates two or more service providers that can perform the identified task type, the DA server selects one of the two or more service providers, the DA server sends a request to the one or more selected service providers, the task may be associated with buying a movie ticket (i.e. topic movie); paragraphs [0063] and [0073] describe speech input from a user is recognized as a sequence of tokens, if a word or phrase in the token sequence is found to be associated with nodes in an ontology, the word or phrase will "trigger" or "activate" those nodes. When multiple nodes are "triggered," based on the quantity and/or relative importance of the activated nodes, the natural language processor of the DA will select one of the actionable intents as the task (or task type) that the user intended the DA to perform); sending the first message to a second server to cause the second server to process the trigger request under the first task topic (Gruber: Fig. 4A, Step 418; paragraphs [0119]-[0120] describe the DA server sends a request to the one or more selected service providers, typically one service provider, to perform a task of the identified task type. The DA server locates default service providers and the one or more third party service providers capable of performing the identified task type), the second server being a creator of the first task topic (Gruber: Fig.
4A, Step 402; paragraph [0104] describes the service provider sends one or more task types to the DA server; paragraph [0114] describes another service provider also sends task types supported by that service provider to the DA server); and receiving a second message returned by the second server and sending the second message to the client of the digital assistant (Fig. 4A, Steps 426 & 428; paragraphs [0122]-[0123] describe the DA server receives one or more results from the service provider and sends the one or more results to the DA client).

As for claim 2, Gruber teaches wherein generating the first message based on the information indicated by the trigger request comprises: determining a trigger type indicated by the trigger request according to the trigger request (paragraphs [0116]-[0117] describe the DA server receives the input of the user. The input is received as an audio file (or as a string of text corresponding to the speech if the DA client includes a text-to-speech engine), the DA server uses the digital assistant to identify the respective task type. The device identifies the respective task type in accordance with a vocabulary (or an identification of a vocabulary, such as associating an existing vocabulary with a new task or domain)); and generating a first message based on a template corresponding to the trigger type (paragraph [0088] describes the digital assistant receives a user input, and determines that the user input corresponds to a particular template that is associated with multiple third party service providers. The digital assistant selects one of the multiple third party service providers associated with the particular template, and sends the user input to the selected third party service provider).
As for claim 3, Gruber teaches wherein generating the first message based on the information indicated by the trigger request from the client of the digital assistant under the first task topic in response to the trigger request comprises: generating the first message based on an input information indicated by a starting request from the client of the digital assistant for the first task topic in response to the starting request (paragraphs [0119]-[0120] describe the DA server sends a request to the one or more selected service providers, typically one service provider, to perform a task of the identified task type. The DA server locates default service providers and the one or more third party service providers capable of performing the identified task type), the first message comprising parameters related to the first task topic (paragraphs [0096]-[0097] describe the third party services include APIs that specify a service model, the APIs are used for communicating the values of the necessary parameter to a service (e.g. a reservation). The processor of the DA sends the necessary parameters of the reservation to the online reservation interface in a format according to the API of the online reservation service; paragraph [0113] describes the user input initiates the particular task), the first message comprising fields related to the first task topic (paragraph [0077] describes a user makes a request "Make me a dinner reservation at a sushi place at 7." A natural language processor is able to correctly identify the actionable intent to "restaurant reservation" based on the user input. According to the ontology, a structured query for a "restaurant reservation" domain includes parameters such as {Cuisine}, {Time}, {Date}, {Party Size} and the like).
As for claim 6, Gruber teaches wherein generating the first message based on information indicated by the trigger request from the client of the digital assistant under the first task topic in response to the trigger request comprises: generating a first verification token corresponding to the trigger request in response to the trigger request from the client of the digital assistant under the first task topic (paragraphs [0063]-[0065] describe a speech-to-text processing module of the DA server uses various acoustic and language models to recognize the speech input as a sequence of tokens written in one or more languages. The natural language processing module of the DA takes the sequence of tokens generated by the speech-to-text processing module, and attempts to associate the token sequence with one or more "actionable intents" recognized by the digital assistant. The natural language processor of the DA uses the context information to clarify, supplement, and/or further define the information contained in the token sequence); and generating the first message based on the first verification token and the information indicated by the trigger request (paragraphs [0077], [0079] and [0085] describe once the natural language processor identifies an actionable intent based on the user request, the natural language processor generates a structured query to represent the identified actionable intent. The token sequence generated by the speech-to-text processing module is sent to the template processing module in addition to the natural language processing module. The DA receives an input that corresponds to a predefined template, and in response, sends the input to a particular third party service provider).

As for claim 9, Gruber teaches wherein the trigger request from the client of the digital assistant under the first task topic is a starting request for the first topic (paragraphs [0077]-[0078] describe a user input (e.g.
an utterance) is received, the user's utterance contains insufficient information to complete a structured query. The natural language processor passes the structured query to the task flow processing module which is configured to perform actions required to "complete" the user's ultimate request. The various procedures necessary to complete these tasks are provided in task flow models. The task flow models include procedures for obtaining additional information from the user); the second message comprises a content pre-configured by the second server (paragraph [0104] describes the service provider sends one or more task types to the DA server. The service provider sends a vocabulary or an identification of a vocabulary and the domain information), and the trigger request from the client of the digital assistant under the first task topic is the message sending request (paragraph [0115] describes a voice command "book a restaurant at 7 pm." which is a request), and the second message comprises a reply content for a third message sent by the client of the digital assistant (paragraphs [0119]-[0121] and [0123] describe the DA server receives the request and sends the request (i.e. a third message) to one or more selected service providers. The service provider then receives the request, performs the requested task and sends the results (i.e. the second message) directly back to the client).

As for claim 12, Gruber teaches wherein the first task topic is configured with corresponding configuration information to execute a corresponding type of task (paragraph [0109] describes the DA server updates a service provider directory and a supported task.
The supported tasks typically include a list of task types supported by respective third party service providers and a list of third party service providers that support respective task types; paragraphs [0116]-[0118] describe the DA server receives the input from the user, and identifies a respective task type in accordance with a vocabulary received from the third party service provider. When the DA server locates two or more service providers that can perform the identified task type, the DA server selects one of the two or more service providers), and the configuration information comprises at least one selected from the group consisting of: task topic setting information and plug-in information (paragraph [0118] describes the DA server locates the one or more service providers that can perform the identified task type in accordance with information in the service provider directory and/or the list of supported tasks or competencies), wherein the task topic setting information is used for describing information related to the corresponding task topic (paragraph [0118] describes the DA server locates the one or more service providers that can perform the identified task in accordance with information in the service provider directory), and the plug-in information indicates at least one plug-in configured to execute a task under the corresponding task topic.

As for claim 13, Gruber teaches wherein the first task topic is configured with corresponding configuration information to execute a corresponding type of task (paragraph [0109] describes the DA server updates a service provider directory and a supported task.
The supported tasks typically include a list of task types supported by respective third party service providers and a list of third party service providers that support respective task types; paragraphs [0116]-[0118] describe the DA server receives the input from the user, and identifies a respective task type in accordance with a vocabulary received from the third party service provider. When the DA server locates two or more service providers that can perform the identified task type, the DA server selects one of the two or more service providers), and the configuration information comprises at least one selected from the group consisting of: task topic setting information and plug-in information (paragraph [0118] describes the DA server locates the one or more service providers that can perform the identified task type in accordance with information in the service provider directory and/or the list of supported tasks or competencies), wherein the task topic setting information is used for describing information related to the corresponding task topic (paragraph [0118] describes the DA server locates the one or more service providers that can perform the identified task in accordance with information in the service provider directory), and the plug-in information indicates at least one plug-in configured to execute a task under the corresponding task topic.

As for claim 14, Gruber teaches wherein the first task topic is configured with corresponding configuration information to execute a corresponding type of task (paragraph [0109] describes the DA server updates a service provider directory and a supported task.
The supported tasks typically include a list of task types supported by respective third party service providers and a list of third party service providers that support respective task types; paragraphs [0116]-[0118] describe the DA server receives the input from the user, and identifies a respective task type in accordance with a vocabulary received from the third party service provider. When the DA server locates two or more service providers that can perform the identified task type, the DA server selects one of the two or more service providers), and the configuration information comprises at least one selected from the group consisting of: task topic setting information and plug-in information (paragraph [0118] describes the DA server locates the one or more service providers that can perform the identified task type in accordance with information in the service provider directory and/or the list of supported tasks or competencies), wherein the task topic setting information is used for describing information related to the corresponding task topic (Gruber: paragraph [0118] describes the DA server locates the one or more service providers that can perform the identified task in accordance with information in the service provider directory), and the plug-in information indicates at least one plug-in configured to execute a task under the corresponding task topic.

As for claim 17, Gruber teaches wherein the first task topic is configured with corresponding configuration information to execute a corresponding type of task (paragraph [0109] describes the DA server updates a service provider directory and a supported task.
The supported tasks typically include a list of task types supported by respective third party service providers and a list of third party service providers that support respective task types; paragraphs [0116]-[0118] describe the DA server receives the input from the user, and identifies a respective task type in accordance with a vocabulary received from the third party service provider. When the DA server locates two or more service providers that can perform the identified task type, the DA server selects one of the two or more service providers), and the configuration information comprises at least one selected from the group consisting of: task topic setting information and plug-in information (paragraph [0118] describes the DA server locates the one or more service providers that can perform the identified task type in accordance with information in the service provider directory and/or the list of supported tasks or competencies), wherein the task topic setting information is used for describing information related to the corresponding task topic (paragraph [0118] describes the DA server locates the one or more service providers that can perform the identified task in accordance with information in the service provider directory), and the plug-in information indicates at least one plug-in configured to execute a task under the corresponding task topic.

As for claim 19, Gruber teaches an electronic device (Fig. 3A; paragraph [0042] describes a digital assistant system), comprising: a processor and a memory (Fig.
3A, processor(s) 304, Memory 302; paragraph [0043] describes a processor and a memory); the processor is configured to execute instructions stored in the memory, so as to cause the electronic device to execute a method of information processing (paragraph [0047] describes the processor executes programs stored at the memory to perform operations), and the method comprising: generating a first message based on information indicated by a trigger request from a client of a digital assistant under a first task topic in response to the trigger request (Fig. 4A, steps 412-418; paragraphs [0115]-[0119] describe a DA server receives an input of a user, identifies respective task type, locates one or more service providers that can perform the identified task type in accordance with information in a service provider directory and/or the list of supported tasks or competencies. When the DA server locates two or more service providers that can perform the identified task type, the DA server selects one of the two or more service providers, the DA server sends a request to the one or more selected service providers, the task may be associated with buying a movie ticket (i.e. topic movie); paragraphs [0063] and [0073] describe speech input from a user is recognized as a sequence of tokens, if a word or phrase in the token sequence is found to be associated with nodes in an ontology, the word or phrase will "trigger" or "activate" those nodes. When multiple nodes are "triggered," based on the quantity and/or relative importance of the activated nodes, the natural language processor of the DA will select one of the actionable intents as the task (or task type) that the user intended the DA to perform); sending the first message to a second server to cause the second server to process the trigger request under the first task topic (Fig.
4A, Step 418; paragraphs [0119]-[0120] describe the DA server sends a request to the one or more selected service providers, typically one service provider, to perform a task of the identified task type. The DA server locates default service providers and the one or more third party service providers capable of performing the identified task type), the second server being a creator of the first task topic (Gruber: Fig. 4A, Step 402; paragraph [0104] describes the service provider sends one or more task types to the DA server; paragraph [0114] describes another service provider also sends task types supported by that service provider to the DA server); and receiving a second message returned by the second server and sending the second message to the client of the digital assistant (Fig. 4A, Steps 426 & 428; paragraphs [0122]-[0123] describe the DA server receives one or more results from the service provider and sends the one or more results to the DA client).

As for claim 20, the claim lists all the same limitations of claim 19, but in a non-transitory computer-readable storage medium, comprising instructions that instruct an electronic device to execute a method of information processing (Gruber: paragraphs [0044] and [0047] describe a memory includes a non-transitory computer readable medium that stores programs and the processors execute these programs). Therefore, the supporting rationale of the rejection to claim 19 applies equally as well to claim 20.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5, 11 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Gruber (US 2013/0311997) in view of Boran et al. (US 2022/0237054), hereinafter Boran.

As for claim 5, Gruber teaches wherein receiving the second message returned by the second server comprises: receiving a second message sent by the second server in response to a call request from the second server (Fig. 4A; Step 424; paragraph [0121] describes the service provider sends one or more results relating to the performance of the requested task to the DA server). Gruber fails to teach where a second message is sent through a first interface and wherein a call request is for the first interface. Boran discloses where a second message is sent through a first interface and wherein a call request is for the first interface (paragraphs [0026] and [0054] describe an origin server receives an API call requesting content corresponding to a data element for display by a client, the origin server generates an API response based on the API call by creating an API response payload file, and the origin server transmits the API response). One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Boran for sending an API response.
The teachings of Boran, when implemented in the Gruber system, will allow one of ordinary skill in the art to provide content to a client in response to a data request. One of ordinary skill in the art would be motivated to utilize the teachings of Boran in the Gruber system in order to transmit a modified data query to an origin server, receive a query response from the origin server and send the modified query response to a client.

As for claim 11, Gruber teaches wherein the second server creates the first task topic through the following steps (paragraph [0121] describes the service provider performs the requested task): receiving creation parameters sent by the second server (paragraph [0104] describes the service provider sends task types including vocabulary and domain information); and creating the first task topic according to the creation parameters (paragraphs [0106]-[0107] describe the DA server receives and integrates the one or more task types into the task flow models and stores the received one or more task types with the existing third party task flow models. The DA server performs a task corresponding to a task flow model in the third party task flow model). Gruber fails to teach wherein creation parameters are sent by the second server through a second interface in response to a call request from the second server for the second interface. Boran discloses wherein creation parameters are sent by the second server through a second interface in response to a call request from the second server for the second interface (paragraphs [0026] and [0054] describe an origin server receives an API call requesting content corresponding to a data element for display by a client, the origin server generates an API response based on the API call by creating an API response payload file, and the origin server transmits the API response).
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Boran for sending an API response. The teachings of Boran, when implemented in the Gruber system, will allow one of ordinary skill in the art to provide content to a client in response to a data request. One of ordinary skill in the art would be motivated to utilize the teachings of Boran in the Gruber system in order to transmit a modified data query to an origin server, receive a query response from the origin server and send the modified query response to a client.

As for claim 16, the combined system of Gruber and Boran teaches wherein the first task topic is configured with corresponding configuration information to execute a corresponding type of task (Gruber: paragraph [0109] describes the DA server updates a service provider directory and a supported task. The supported tasks typically include a list of task types supported by respective third party service providers and a list of third party service providers that support respective task types; paragraphs [0116]-[0118] describe the DA server receives the input from the user, and identifies a respective task type in accordance with a vocabulary received from the third party service provider.
When the DA server locates two or more service providers that can perform the identified task type, the DA server selects one of the two or more service providers), and the configuration information comprises at least one selected from the group consisting of: task topic setting information and plug-in information (Gruber: paragraph [0118] describes the DA server locates the one or more service providers that can perform the identified task type in accordance with information in the service provider directory and/or the list of supported tasks or competencies), wherein the task topic setting information is used for describing information related to the corresponding task topic (Gruber: paragraph [0118] describes the DA server locates the one or more service providers that can perform the identified task in accordance with information in the service provider directory), and the plug-in information indicates at least one plug-in configured to execute a task under the corresponding task topic.

Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Gruber (US 2013/0311997) in view of Huang et al. (US 2022/0115012), hereinafter Huang.
As for claim 7, Gruber teaches wherein the first message comprises a first verification token corresponding to the trigger request (paragraphs [0063]-[0065] describe the natural language processing module of the DA takes the sequence of tokens generated by the speech-to-text processing module, and attempts to associate the token sequence with one or more "actionable intents" recognized by the digital assistant), and receiving the second message returned by the second server and sending the second message to the client of the digital assistant comprises (paragraph [0122] describes the DA server receives the results from the service provider, and sends the results to the DA client): receiving a second message returned by the second server (paragraph [0122] describes the DA server receives the results from the service provider). Gruber fails to teach acquiring a second verification token in the second message; and sending the second message to the client of the digital assistant in response to verification information indicated by the second verification token meeting a verification condition of the first verification token. Huang discloses acquiring a second verification token in a second message (paragraphs [0021]-[0022] describe a server of a second voice assistant receives a text request sent by a server of a first voice assistant. The server of the second voice assistant generates token information for the text request and sends the token information to the server of the first voice assistant); and sending the second message to a client of a digital assistant in response to verification information indicated by the second verification token meeting a verification condition of the first verification token (Fig. 2; Steps 207-210; paragraphs [0095]-[0098] describe a server of the second voice assistant performs a check using the received token (i.e. the second verification token) and the token generated for the text request (i.e.
the first verification token) to determine whether the two tokens are consistent, and if yes, the check is passed. If the check is passed, the server of the second voice assistant responds to the text request). One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Huang for authenticating a token associated with a voice request. The teachings of Huang, when implemented in the Gruber system, will allow one of ordinary skill in the art to provide content to a client in response to a service request. One of ordinary skill in the art would be motivated to utilize the teachings of Huang in the Gruber system in order to prevent a false response caused by an error call of a client of a second voice assistant by the client of the first voice assistant and also prevent the client of a malicious first voice assistant from calling the client of the second voice assistant for an attack, thereby improving reliability and safety (Huang: paragraph [0048]).

As for claim 18, the combined system of Gruber and Huang teaches wherein the first task topic is configured with corresponding configuration information to execute a corresponding type of task (Gruber: paragraph [0109] describes the DA server updates a service provider directory and a supported task. The supported tasks typically include a list of task types supported by respective third party service providers and a list of third party service providers that support respective task types; paragraphs [0116]-[0118] describe the DA server receives the input from the user, and identifies a respective task type in accordance with a vocabulary received from the third party service provider.
When the DA server locates two or more service providers that can perform the identified task type, the DA server selects one of the two or more service providers), and the configuration information comprises at least one selected from the group consisting of: task topic setting information and plug-in information (Gruber: paragraph [0118] describes the DA server locates the one or more service providers that can perform the identified task type in accordance with information in the service provider directory and/or the list of supported tasks or competencies), wherein the task topic setting information is used for describing information related to the corresponding task topic (Gruber: paragraph [0118] describes the DA server locates the one or more service providers that can perform the identified task in accordance with information in the service provider directory), and the plug-in information indicates at least one plug-in configured to execute a task under the corresponding task topic.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Gruber (US 2013/0311997) in view of Jaswal et al. (US 2020/0097567), hereinafter Jaswal.

As for claim 8, Gruber teaches wherein after receiving the second message returned by the second server (paragraph [0122] describes the DA server receives results from the service provider), the method further comprises: operations (paragraph [0122] describes the DA server sends the results to the DA client); wherein the first server and the client associate with the digital assistant (paragraph [0023] describes a digital assistant system implemented according to a client-server model. The digital assistant system includes a client-side portion executed on a user device, and a server-side portion (i.e. DA server) executed on a server system).
Gruber fails to teach wherein the operations include: modifying fields related to a sender in the second message, wherein the sender indicated by the modified second message is the first server; and sending the second message to the client comprises: sending the modified second message to the client.

Jaswal discloses wherein the operations include: modifying fields related to a sender in the second message, wherein the sender indicated by the modified second message is the first server (paragraph [0022] describes updates to the data fields performed by another data server are transmitted and propagated to a first data server; paragraphs [0025]-[0026] describe the data identifier of a data value includes a data server and a version number; the first data server eliminates the second data value from the data field based on determining that the second data identifier is superseded by a third data identifier, and after eliminating the replaced data values, the first data server provides the remaining data value(s) for the data field to the requesting device as a response to the data read request); and sending the second message to the client comprises: sending the modified second message to the client (paragraph [0026] describes the first data server provides a response to a data read request; paragraph [0023] describes a requesting device requests a data read). One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Jaswal for updating data fields. The teachings of Jaswal, when implemented in the Gruber system, will allow one of ordinary skill in the art to provide content to a requesting device in response to a data request. One of ordinary skill in the art would be motivated to utilize the teachings of Jaswal in the Gruber system in order to ensure that data are consistent between a provider and a requesting device.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Gruber (US 2013/0311997) in view of Baldua et al. (US 2025/0110957), hereinafter Baldua.

As for claim 10, Gruber teaches wherein the trigger request from the client of the digital assistant under the first task topic is the message sending request (paragraphs [0115] and [0119] describe the client sends a voice command of a user to request a service), and the second server processing the trigger request under the first task topic comprises: determining, by the second server, a third message sent by the client of the digital assistant according to the first message (paragraph [0121] describes the service provider receives the request and performs the requested task); returning a reply content for the third message (paragraph [0123] describes the service provider sends the results directly back to the client); and generating, by the second server, the second message according to the reply content (paragraphs [0121] and [0123] describe the service provider receives the request, performs the requested task, and sends the results relating to the performance of the requested task directly back to the client).

Gruber fails to teach calling, by a second server, a language model corresponding to a first task topic to cause the language model to process a third message. Baldua discloses calling, by a second server, a language model corresponding to a first task topic to cause the language model to process a third message (paragraphs [0169]-[0172] describe a processing device receives a first query to obtain information using a first set of data resources. The first query is in conversational natural language and includes a first query term. The processing device configures a first prompt to cause a large language model to translate the first query term and the context data into an intent, which is a structured representation of a user input.
The large language model executes the instructions contained in the first prompt to classify the user input into an intent based on the context data and the instructions contained in the first prompt). One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Baldua for utilizing a large language model. The teachings of Baldua, when implemented in the Gruber system, will allow one of ordinary skill in the art to translate a user query into an intent. One of ordinary skill in the art would be motivated to utilize the teachings of Baldua in the Gruber system in order to generate a plan for executing a user query that requires multiple functions.

Allowable Subject Matter

Claims 4 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

As for claim 4, Gruber teaches wherein generating the first message based on information indicated by the trigger request from the client of the digital assistant under the first task topic in response to the trigger request comprises: acquiring a third message sent by the client of the digital assistant in response to a message sending request from the client of the digital assistant under the first task topic (Fig. 4A, Step 419; paragraphs [0116]-[0119] describe the DA server receives the input of the user and identifies a respective task type. The DA server locates service providers that can perform the identified task type. The DA server sends a request to the DA client, and the DA client receives the request and sends the request to the selected service providers); and generating, according to the third message, a first message based on information indicated by the message sending request (Fig. 4A, Step 424; paragraphs [0121]-[0123] describe the service provider receives the request, performs the requested task, and sends one or more results relating to the performance of the requested task to the DA server). While Gruber teaches the above limitations, Gruber fails to teach the limitations "wherein the first message comprises fields related to the first task topic and fields related to the third message, and a message structure indicated by the fields related to the third message is consistent with a message structure of the second server." Claim 4 therefore contains allowable subject matter. Claim 15 is a dependent claim of claim 4; therefore, claim 15 is objected to as containing allowable subject matter due to its dependency.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Smith et al. (US 2020/0151259) teach advanced machine learning interfaces.
Kim et al. (US 2021/0365482) teach a chat system and chatbot server device.
Byun et al. (US 2019/0258456) teach a system for processing user utterances.
Sharkey et al. (US 8,579,911) teach intent fulfillment.
Roth et al. (US 2005/0288005) teach extendable voice commands.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to L. T. Nguyen, whose telephone number is (571) 272-1013. The examiner can normally be reached M & Th 5:30 am - 2:30 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TONIA DOLLINGER, can be reached at 571-272-4170. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/L. T. N./
Examiner, Art Unit 2459

/TONIA L DOLLINGER/
Supervisory Patent Examiner, Art Unit 2459

Prosecution Timeline

Nov 19, 2024
Application Filed
Mar 19, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598105
Software-Defined Device Tracking in Network Fabrics
2y 5m to grant Granted Apr 07, 2026
Patent 12592984
MULTIMODAL VEHICLE SENSOR FUSION AND STREAMING
2y 5m to grant Granted Mar 31, 2026
Patent 12580987
USING CONTEXTUAL INFORMATION FOR VEHICLE TRIP LOSS RISK ASSESSMENT SCORING
2y 5m to grant Granted Mar 17, 2026
Patent 12574790
REDUCING LATENCY OF EXTENDED REALITY (XR) APPLICATION USING HOLOGRAPHIC COMMUNICATION NETWORK AND MOBILE EDGE COMPUTING (MEC)
2y 5m to grant Granted Mar 10, 2026
Patent 12562989
FLOW-TRIMMING BASED CONGESTION MANAGEMENT
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
96%
With Interview (+26.0%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 354 resolved cases by this examiner. Grant probability derived from career allow rate.
