Prosecution Insights
Last updated: April 19, 2026
Application No. 18/765,110

ELECTRONIC DEVICE INCLUDING ARTIFICIAL INTELLIGENCE AGENT AND METHOD OF OPERATING ARTIFICIAL INTELLIGENCE AGENT

Non-Final OA §103, §112
Filed: Jul 05, 2024
Examiner: SIRJANI, FARIBA
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% — above average (414 granted / 547 resolved; +13.7% vs TC avg)
Interview Lift: +31.0% (strong; allow rate among resolved cases with an interview vs. without)
Typical Timeline: 2y 10m avg prosecution (31 currently pending)
Career History: 578 total applications across all art units
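The headline figures above are internally consistent and can be reproduced from the raw counts. A minimal sketch in Python, assuming the dashboard rounds to whole percentage points, that "Total Applications" is simply resolved plus pending, and that the +13.7% delta is taken against the rounded 76% figure (variable names are illustrative, not from any real API):

    # Reproduce the examiner-intelligence figures from the raw counts shown above.
    granted, resolved, pending = 414, 547, 31

    allow_rate = granted / resolved     # 414 / 547 = 0.7569 -> displayed as "76%"
    total_apps = resolved + pending     # 547 + 31 = 578 "Total Applications"
    implied_tc_avg = 76.0 - 13.7        # back out the TC average from the delta

    print(f"Career allow rate: {allow_rate:.1%}")       # 75.7%
    print(f"Total applications: {total_apps}")          # 578
    print(f"Implied TC average: {implied_tc_avg:.1f}%") # 62.3%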

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 547 resolved cases
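Because each row pairs the examiner's rate with its delta against the Tech Center estimate, the baseline can be backed out directly; every row implies the same 40.0% figure, which suggests a single TC-wide estimate underlies the whole table. A quick check, with the rates read off the table above and "performance" assumed to mean the examiner's rate for that statute:

    # Back out the implied Tech Center baseline from each statute row.
    rows = {"101": (14.1, -25.9), "103": (49.1, 9.1),
            "102": (14.7, -25.3), "112": (10.7, -29.3)}
    for statute, (rate, delta) in rows.items():
        print(f"§{statute}: implied TC avg = {rate - delta:.1f}%")  # 40.0% in every row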

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-20 are pending. Claims 1, 14, and 20 are independent. This Application was published as U.S. 20250029600. Apparent priority: 10 May 2024.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 10 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 10 provides:

10. The electronic device of claim 1, wherein at least one processor, comprising processing circuitry, is individually and/or collectively configured to: when the communal space event ends, classify history information organized up to an end point of the communal space event after the communal space event ends into shared data and private data; and provide the shared data and the personal data corresponding to each of all users participating in the communal space event to each of all the users.

There are two problems with this Claim:

1) “The personal data” does not have an antecedent basis in this Claim or Claim 1. “Personal data” appears in Claim 6, which is not in the chain of dependency.

2) While the first limitation divides the data into shared and private, the second limitation provides shared and personal (possibly should have been “private”) data to everybody, such that there is no point to classifying the data into shared and private.

Please address.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Byeon (U.S. 2022/0122603) in view of Kennewick (U.S. 20100286985).

Regarding Claim 1, Byeon teaches:

1. An electronic device comprising: [Byeon, Figure 1, “electronic device 101.”]

a memory; and [Byeon, Figure 1, “memory 130.”]

at least one processor, comprising processing circuitry, individually and/or collectively configured to: [Byeon, Figure 1, “processor 120.”]

generate an artificial intelligence (AI) agent configured to operate in common in a communal space and [Byeon, Figure 2, “Intelligent Server 201,” which serves the electronic device 101, is an AI agent. “[0057] Referring to FIG. 2, the integrated intelligence system 200 according to an embodiment may include an electronic device 101, an intelligent server 201, and a service server 300.” “[0071] … According to an embodiment, the intelligent server 201 may produce a plan for performing a task corresponding to the user voice input, based on the text data.” “[0072] According to an embodiment, the plan may be produced by an artificial intelligent (AI) system. The artificial intelligent system may be a rule-based system, or may be a neural network-based system (e.g., a feedforward neural network (FNN) or a recurrent neural network (RNN)). Alternatively, the artificial intelligent system may be a combination of the above systems, or may be another artificial intelligent system….”]

determine a domain in the AI agent, [Byeon, Figure 6, “collect information related to collaborative task 607” includes collecting domain information: “[0156] According to various embodiments, the processor 120 may collect information related to the collaborative task through interaction between the user and at least one participant. The information related to the collaborative task may include at least one of domains, actions, or parameters. The domains may indicate applications (e.g., a beverage order application and a hotel reservation application) installed in the electronic device 101 (or the electronic device 102). The actions may indicate specific functions to be performed (e.g., order a drink at the location XX) in the applications.”]

when a communal space event occurs; [Byeon, Figure 6, “Determine whether or not to perform collaborative task 603” is determining that a “communal space event” of the Claim is occurring. “[0149] In operation 603, the processor 120 (e.g., the collaboration dispatcher module 510 in FIG. 5) may determine whether to perform a collaborative task. The processor 120 may determine whether the converted text data matches a collaboration pattern. … Alternatively, the processor 120 may determine whether to perform the collaborative task, based on whether the user voice corresponds to an action intended for collaboration (e.g., order a drink, reserve accommodation, or the like).”]

load the determined domain; [Byeon, Figure 6, “produce collaborative tasks based on collected information 609” includes the loading of the domain which is determined at 603. “[0160] In operation 609, the processor 120 (e.g., the collaboration management module 540) may produce a collaborative task, based on the collected information….” “[0161] In the participant correction method, the processor 120 may collect information related to the collaborative task (e.g., domains, actions, and parameters), based on the user voice (or text data converted from the user voice), may provide the collected information to the participant, and may correct the information related to the collaborative task, based on a participant input…. In the collaborative task configuration method, the processor 120 may collect information related to the collaborative task (e.g., domains, actions, and parameters) through a dialog between the user and the participant, thereby producing the collaborative task.”]

collect user information about a user participating in the communal space event; and [Byeon, Figure 6, “collect information related to collaborative task 607” includes collecting the information of the participants. See [0163] below. See also: “[0154] In operation 607, the processor 120 (e.g., the collaboration management module 540 in FIG. 5) may collect information related to the collaborative task. To this end, the processor 120 (e.g., the user search module 530 in FIG. 1) may identify whether the electronic device (e.g., the electronic device 102 or the electronic device 104 in FIG. 1) of at least one participant provides a voice agent function. The processor 120 may receive participant information from the collaboration dispatcher module 510, and may identify whether the electronic device of the participant provides a voice agent function, based on the participant information.” “[0156] According to various embodiments, the processor 120 may collect information related to the collaborative task through interaction between the user and at least one participant….”]

process an utterance of the user based on the determined domain and the user information. [Byeon, Figure 6, “execute collaborative task 611” at the end of a flowchart which begins with “recognize user voice 601.” “[0163] In operation 611, the processor 120 (e.g., the collaboration execution module 550 in FIG. 5) may execute the collaborative task. The processor 120 may execute the collaborative task, based on the collaborative task produced using the collected information. The processor 120 may provide the collected information to the user or at least one participant, and may receive confirmation for execution of the collaborative task from the user or at least one participant. The processor 120 may change a representative to execute the collaborative task, based on an input from the user or at least one participant. For example, the processor 120 may configure the user who speaks first as a representative, or may configure a participant other than the user as a representative, based on an input by at least one participant. The processor 120 may transmit information for executing the collaborative task to the electronic device of the representative who executes the collaborative task, and may provide an execution result thereof to the participants other than the representative.”]

It is understood and implied that the determined domain will be loaded in order to be able to perform the following tasks, such as the ordering of coffee shown in the drawings of Byeon. An express reference is added.

Kennewick teaches:

1. An electronic device comprising: [Kennewick, Figure 1, “[0082] The system 90 may include a main unit 98 and a speech unit 128. Alternatively, the system 98 may only comprise of the main unit 98, the speech unit 128 being a completely separate system. The event manager 100 may mediate interactions between other components of the main unit 98. The event manager 100 provides a multi-threaded environment allowing the system 98 to operate on multiple commands or questions from multiple user sessions without conflict and in an efficient manner, maintaining real-time response capabilities.”]

a memory; and [Kennewick, Figure 1, “databases 102.”]

at least one processor, comprising processing circuitry, individually and/or collectively configured to: [Kennewick, Figure 1, a processor is inherent in the “event manager 100.”]

generate an artificial intelligence (AI) agent configured to operate in common in a communal space and [Kennewick, Figure 1: “Agents 106” and Figure 2 showing the architecture of the “agents 106,” including “domain agents 156” from an “Agent library 158.” “[0083] Agents 106 contain packages of both generic and domain specific behavior for the system 98. Agents 106 may use nonvolatile storage for data, parameters, history information, and locally stored content provided in the system databases 102 or other local sources. User specific data, parameters, and session and history information that may determine the behavior of agents 106 are stored in one or more user profiles 110. Data determining system personality characteristics for agents are stored in the one or more personality module 108. The update manager 104 manages the automatic and manual loading and updating of agents 106 and their associated data from the Internet 136 or other network through the network interface 116.” Kennewick is from before the AI age.]

determine a domain in the AI agent, [Kennewick, Figures 4A and 4B, “capture utterance 402/452” leading to “determine the domain of expertise/command 406/456.” Figure 6, “select agents 606.” “[0077] FIG. 6 is a process for determining the proper domain agents to invoke and the properly formatted queries and/or commands that is to be submitted to the agents according to one embodiment of the invention.”]

when a communal space event occurs; [Kennewick is not contingent on a “communal event” and is dealing with single users issuing commands or queries to their assistant device.]

load the determined domain; [Kennewick, the identification and determination of the domain is followed by the loading of the proper domain agent: “[0083] … The update manager 104 manages the automatic and manual loading and updating of agents 106 and their associated data from the Internet 136 or other network through the network interface 116.” “[0090] … When the system starts-up or boots-up the agent manager 154 may load and initialize the system agent 150 and the one or more domain agents 156. At shutdown the agent manager unloads the agents. …” “[0092] Domain agents 156 can be data-driven, scripted or created with compiled code. A base of generic agent is used as the starting point for data-driven or scripted agents. Agents created with compiled code are typically built into dynamically linkable or loadable libraries….” “[0103] If a question or command requires an agent, currently not loaded on the system, the agent manager 154 may search the network 136 through the network interface 116 to find a source for a suitable agent. Once located, the agent can be loaded under the control of the update manager 104, within the terms and conditions of the license agreement as enforced by the agent manger.”]

collect user information about a user participating in the communal space event; and [Kennewick, Figure 1, “user profile 110.” “[0112] Commands and questions are interpreted, queries formulated, responses created and results presented based on the users personal or user profile 110 values. Personal profiles may include information specific to the individual, their interests, their special use of terminology, the history of their interactions with the system, and domains of interest. The personal profile data may be used by the agents 106, the speech recognition engine 120, the text to speech engine 124, and the parser 118. Preferences can include, special (modified) commands, past behavior or history, questions, information sources, formats, reports, and alerts. User profile data can be manually entered by the user and/or can be learned by the system 90 based on user behavior. User profile values may include: …”]

process an utterance of the user based on the determined domain and the user information. [Kennewick, Figure 3, “take action 308,” Figure 4A, “execute queries and/or commands 412,” Figure 4B, “route commands to the systems 460,” all of these figures beginning with the receiving of the utterance from the user.]
Byeon and Kennewick pertain to speech-driven agents and the performance of tasks and functions. The difference is that Byeon is triggered by a “communal event” such as a conference or conversation with another person, whereas Kennewick does not require a communal event in order to invoke an agent/model/domain for responding to a query or performing a task, and the two references mesh quite closely, as shown by the overlapping mapping above. It would have been obvious to buttress the slight deviations of each reference with the teachings of the other as combining prior art elements according to known methods to yield predictable results or use of known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Regarding Claim 2, Byeon teaches:

2. The electronic device of claim 1, wherein at least one processor, comprising processing circuitry, is individually and/or collectively configured to determine at least one of the domain corresponding to a result obtained by analyzing a name, a theme, or a description of the communal space, when determining the domain. [Byeon determines the domain based on the intent of the request of the user, such as ordering coffee, which may be determined from a name; for example, “Americano” and “Star Valley” in the following example determine that the Domain = Ordering Coffee. “[0114] … The domains may indicate applications (e.g., a beverage order application, a hotel reservation application, and an airline reservation application) installed in the electronic device 101 (or the electronic device 102)….” “[0119] … For example, the collaboration management module 540 may collect domains (e.g., a coffee order application), actions (e.g., order coffee at Star Valley (e.g., the order place)), and parameters (e.g., Americano), based on the utterance of a user (e.g., Sujan) “Order an Americano from Star Valley with Mark”. The collaboration management module 540 may collect all of domains, actions, and parameters for the collaborative task from the user utterance….”]

Regarding Claim 3, Byeon teaches selecting the participants and also selecting parameters and information pertaining to the domain, such as the type of coffee. Kennewick teaches:

3. The electronic device of claim 1, wherein at least one processor, comprising processing circuitry, is individually and/or collectively configured to receive domain information selected by a user generating the communal space event and determine the domain, when determining the domain. [Kennewick teaches that the user may select the domain and that is how the domain would be determined: “[0101] When a user requires or selects a new domain agent 156 or database element 102, the update manager 104 may connect to their source on the network 136 though the network interface 116, download and install the agent or data….”]

Rationale for combination as provided for Claim 1.

Regarding Claim 4, Byeon teaches:

4. The electronic device of claim 1, wherein at least one processor, comprising processing circuitry, is individually and/or collectively configured to apply at least one model corresponding to the determined domain to at least one of an automatic speech recognition (ASR) module, a natural language understanding (NLU) module, a natural language generator (NLG) module, a text-to-speech (TTS) module, or an image processing module, when loading the determined domain. [Byeon, Figure 2, “ASR Module 221,” “NLU Module 223,” “NLG Module 227,” and “TTS Module 229,” all part of the “natural language platform 220” of the “Intelligent server 201.” “[0076] According to an embodiment, the natural language platform 220 may include an automatic speech recognition module (ASR module) 221, a natural language understanding module (NLU module) 223, a planner module 225, a natural language generator module (NLG module) 227, or a text-to-speech module (TTS module) 229.” “[0078] … According to an embodiment, the planner module 225 may determine a plurality of domains used to perform the task, based on the determined intention. The planner module 225 may determine a plurality of operations included in each of a plurality of domains determined based on the intention….”]

Byeon teaches determining the domains from the recognized speech and intent. Byeon teaches the use of ASR, NLU, NLG, and TTS in order to recognize the speech and determine the domain. But the order is: first conduct ASR/NLU, etc., and then determine the domain. Even for NLG and TTS, which are performed after domain determination, Byeon does not teach that the determined domain is given effect. Kennewick teaches:

apply at least one model corresponding to the determined domain to at least one of an automatic speech recognition (ASR) module, a natural language understanding (NLU) module, a natural language generator (NLG) module, a text-to-speech (TTS) module, or an image processing module, when loading the determined domain. [Kennewick loads and tailors the ASR and TTS tasks to the domain. Figure 1, “TTS engine 124” and “Speech Recognition 120.” “[0105] The data used to configure data driven agents 156 are structured in a manner to facilitate efficient evaluation and to help developers with organization. These data are used not only by the agents 156, but also by the speech recognition engine 120, the text to speech engine 124, and the parser 118. …”]

Rationale for combination as provided for Claim 1.

Regarding Claim 5, Byeon teaches:

5. The electronic device of claim 1, wherein at least one processor, comprising processing circuitry, is individually and/or collectively configured to: transmit an invitation message to at least one user participating in the shared space event; [Byeon, Figure 6, 605: select participant based on collaborative task, which is expanded in Figure 8 and Figure 9 and sends requests for adding participants, i.e., invitations. See Figure 15, where the user tells the device to ask Mark if he wants coffee before ordering coffee.]

send a request for information necessary for the communal space event to the at least one user participating in the communal space event; [Byeon, Figure 15, 1570: Let’s ask Mark. Figure 16B, 1650 and 1660, sends the request to Mark.]

receive user information about each of the at least one user participating in the communal space event from each of the at least one user; and [Byeon, Figure 16B, 1660, where Mark is responding that he wants an iced caramel macchiato.]

collect the user information. [Byeon, Figure 16B, 1660, where Mark is responding that he wants an iced caramel macchiato. Figure 14, 1405: control collection of collaborative task information.]

Regarding Claim 6, Byeon teaches:

6. The electronic device of claim 1, wherein the user information comprises at least one of: public data of the user that is data allowed to be disclosed to other users in the communal space; [Byeon, Figure 14, 1405: control collection of collaborative task information, and 1407: Display collected collaborative task information. “[0237] … The processor 120 may receive, from the intelligent server 201, an instruction to make a request to at least one participant for information….”]

private data of the user that is data that is not allowed to be disclosed to the other users in the communal space; shared data that is data related to the other users in the communal space; and personal data that is data unrelated to the other users in the communal space. [Byeon, Figure 5, “personal information server 108.” “[0113] If a participant corresponding to the participant information is retrieved from the personal information database, the user search module 530 may determine that the electronic device of the participant provides a voice agent function. If a participant corresponding to the participant information is not retrieved from the personal information database, the user search module 530 may determine that the electronic device of the participant does not provide a voice agent function. The user search module 530 may transmit information on whether the participant provides a voice agent function to the collaboration management module 540.”]

Regarding Claims 7-9: Byeon ([0129]) teaches that dialog history information can be provided in response to user request and may include details of a dialog at any specific time. Claims 7-9 specify various points in time (organized up to the point that the user leaves the dialog, organized up to the end of the dialog, and organized up to the end of the event/dialog and requested at a later time), all of which are taught, or at the least suggested, by the teaching that the details of dialog history at any point of time are available and can be provided to the users upon request.

Regarding Claim 7, Byeon teaches and the teaching suggests:

7. The electronic device of claim 1, wherein at least one processor, comprising processing circuitry, is individually and/or collectively configured to, when the user participating in the communal space event leaves the communal space before the communal space event ends, provide the user who left the communal space with history information organized up to a point in time at which the user left the communal space. [Byeon teaches that dialog history information can be provided in response to user request and may include details of a dialog at any specific time: “[0129] The collaboration management module 540 may provide dialog history information in response to a request by the user or at least one participant. The collaboration management module 540 may provide a dialog between the user and at least one participant in the form of a chat room or a table. The collaboration management module 540 may provide details of a dialog at a specific time in response to an input from the user or at least one participant. The collaboration management module 540 may change the information in response to a user input for the provided dialog.”]

Regarding Claim 8, Byeon teaches and the teaching suggests:
8. The electronic device of claim 1, wherein at least one processor, comprising processing circuitry, is individually and/or collectively configured to, when the user participating in the communal space event leaves the communal space before the communal space event ends, provide the user who left the communal space with history information organized up to an end point of the communal space event after the communal space event ends. [Byeon teaches that dialog history information can be provided in response to user request and may include details of a dialog at any specific time: “[0129] The collaboration management module 540 may provide dialog history information in response to a request by the user or at least one participant. The collaboration management module 540 may provide a dialog between the user and at least one participant in the form of a chat room or a table. The collaboration management module 540 may provide details of a dialog at a specific time in response to an input from the user or at least one participant. The collaboration management module 540 may change the information in response to a user input for the provided dialog.”]

Regarding Claim 9, Byeon teaches and the teaching suggests:

9. The electronic device of claim 1, wherein at least one processor, comprising processing circuitry, is individually and/or collectively configured to, when the communal space event ends, provide each of all users participating in the communal space event with history information organized up to an end point of the communal space event after the communal space event ends. [Byeon teaches that dialog history information can be provided in response to user request and may include details of a dialog at any specific time: “[0129] The collaboration management module 540 may provide dialog history information in response to a request by the user or at least one participant. The collaboration management module 540 may provide a dialog between the user and at least one participant in the form of a chat room or a table. The collaboration management module 540 may provide details of a dialog at a specific time in response to an input from the user or at least one participant. The collaboration management module 540 may change the information in response to a user input for the provided dialog.” Figure 13, “provide collected information 1301” includes “dialog history information”: “[0225] … The collected information may include at least one piece of current command information (e.g., order coffee), request information (e.g., select a coffee item), or dialog history information….”]

Regarding Claim 11, Byeon teaches the identification of domains (“[0157] … The domains or the actions may be configured, added, changed, or deleted by the user or at least one participant….” And see the rejection of Claim 13), but not the use of a particular domain for ASR/TTS, and Kennewick is much clearer on domain identification and addition. Kennewick teaches:

11. The electronic device of claim 1, wherein at least one processor, comprising processing circuitry, is individually and/or collectively configured to: when an occurrence of a domain addition event is detected, identify at least one model corresponding to a domain requested to be added in response to the domain addition event; and [Kennewick, Figure 4A, 406: determine the domain of expertise. Figures 1 and 2: the “event manager 100” adds or removes agents/models according to the detected domain. “[0092] … Developers of agents can add new functionality to the agent library 158 as required….” “[0090] … When the system starts-up or boots-up the agent manager 154 may load and initialize the system agent 150 and the one or more domain agents 156. At shutdown the agent manager unloads the agents. The agent manager 154 also performs license management functions for the domain agents 156 and content in the databases 102.”]

additionally apply at least one model corresponding to the requested domain to at least one of an ASR module, an NLU module, an NLG module, a TTS module, or an image processing module, or replace a currently applied domain with the requested domain and apply the requested domain. [Kennewick loads and tailors the ASR and TTS tasks to the domain. Figure 1, “TTS engine 124” and “Speech Recognition 120.” “[0105] The data used to configure data driven agents 156 are structured in a manner to facilitate efficient evaluation and to help developers with organization. These data are used not only by the agents 156, but also by the speech recognition engine 120, the text to speech engine 124, and the parser 118. …” See rejection of Claim 4.]

Rationale for combination as provided for Claim 1.

Regarding Claim 12, Byeon teaches:

12. The electronic device of claim 1, wherein at least one processor, comprising processing circuitry, is individually and/or collectively configured to: when processing of an input of the user using a currently applied domain is impossible during analyzing and processing of the input of the user, search for a domain corresponding to the input of the user; and [Byeon, Figure 11, “Identify domain 1101,” which is done based on the inputs of the user; the system then has to verify consensus at 1103 or it will not move forward, and it changes the domain until consensus is achieved based on the inputs of the user and the other participants. The answers to the “questions” in Byeon teach the “input of the user” of the Claim, and as long as these answers do not point to a single domain the system keeps collecting information: “[0202] … the collaboration management module 540 may analyze the collaborative task, and may differently determine the processing method thereof depending on whether any one piece of information in the collaborative task corresponds to a single choice or multiple choices. Since the domain indicates an application to be executed, the collaboration management module 540 may process the domain as a single choice. If there is a difference in choices between the user and at least one participant, the collaboration management module 540 may repeatedly provide additional questions until the choices match each other. If the additional questions are repeated a predetermined number of times or more, the collaboration management module 540 may make a decision by a majority vote between the user and the participant, or may appoint a representative participant through a dialog between the participants so that the appointed representative makes a decision.”]

trigger an occurrence of the domain addition event for requesting an addition of the found domain or a replacement with the found domain. [Byeon, Figure 11, YES out of “does everybody consent with domain? 1103.”]

Regarding Claim 13, Byeon teaches:

13. The electronic device of claim 1, wherein at least one processor, comprising processing circuitry, is individually and/or collectively configured to, when a request for an addition of a new domain and/or a replacement with the new domain is received from the user, trigger an occurrence of the domain addition event for requesting the addition of the new domain or the replacement with the new domain. [Byeon, Figure 5 and Figure 11. Figure 5 shows the development (addition, modification) of the various actions in the same domain. Figure 11 includes “does everybody consent with domain? 1103,” which could result in the change of domain based on a series of occurrences that are described in the description of Figure 11. “[0157] … The domains or the actions may be configured, added, changed, or deleted by the user or at least one participant….” “[0201] … The domain may be configured, added, changed, or deleted by the user or at least one participant. The processor 120 may receive a response to the provided information from the user or the electronic device 102 of participant. Based on the response, the processor 120 may perform operation 1107 if both the user and the at least one participant consent with the domain, and may perform operation 1105 if neither the user nor the at least one participant consents with the domain.” “[0119] … The collaboration management module 540 may collect all of domains, actions, and parameters for the collaborative task from the user utterance. The collaboration management module 540 may provide the participant (e.g., Mark) with information related to the collaborative task, such as “Sujan ordered an Americano from Star Valley”, and may receive, from the participant (e.g., Mark), a voice command (or a touch input) such as “Okay, order an ice latte for me.” The collaboration management module 540 may correct the information related to the collaborative task (e.g., the domain (e.g., the coffee order application), the action (e.g., order coffee from Star Valley (e.g., the order place)), and the parameters (e.g., Americano and ice latte)). When production of the collaborative task is completed through the correction, the collaboration management module 540 may transmit the produced collaborative task to the collaboration execution module 550.”]

Claim 14 is a method claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale. A slight difference in the language of Claim 14 makes it broader than Claim 1, which makes the mapping of Claim 1 readily applicable to 14. (In 1, the “determining of a domain in the AI agent” depends on the …)

Claim 15 is a method claim with limitations corresponding to the limitations of Claim 2 and is rejected under similar rationale.

Claim 16 is a method claim with limitations corresponding to the limitations of Claim 7 and is rejected under similar rationale.

Claim 17 is a method claim with limitations corresponding to the limitations of Claim 8 and is rejected under similar rationale.

Claim 18 is a method claim with limitations corresponding to the limitations of Claim 9 and is rejected under similar rationale.

Claim 19 is a method claim with limitations corresponding to the limitations of Claim 11 and is rejected under similar rationale.

Claim 20 is a computer program product claim with limitations corresponding to the limitations of method Claim 14 and is rejected under similar rationale.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Byeon and Kennewick in view of DeLuca (U.S. 20130185363).

Regarding Claim 10, Byeon teaches:

10. The electronic device of claim 1, wherein at least one processor, comprising processing circuitry, is individually and/or collectively configured to: when the communal space event ends, classify history information organized up to an end point of the communal space event after the communal space event ends into shared data and private data; and [Byeon teaches that dialog history information can be provided in response to user request and may include details of a dialog at any specific time, and that the history is retained after the dialog has stopped; this is what history means, after all. Figure 13, “provide collected information 1301” includes “dialog history information”.]

provide the shared data and the personal data corresponding to each of all users participating in the communal space event to each of all the users. [NOTE that this limitation is not withholding the private data, and while the data is classified, after classification, whether the data is private or public, it is going to all of the participants. Accordingly, the data sharing of Byeon teaches this limitation. Byeon, Figure 14, “control collection of collaborative task information 1405” and “display collected collaborative task information 1407.” Figure 16A, 1640, where the system is sharing information regarding what Mark wants with the initiating user.]

Byeon does not teach classifying and distinguishing private and shared history. Neither does Kennewick. DeLuca teaches:

when the communal space event ends, classify history information organized up to an end point of the communal space event after the communal space event ends into shared data and private data; and [DeLuca pertains to chat messages that may have several participants/communal spaces, and chat history prior to the joining of a new participant may be classified as private with respect to the new participant or may be shared with him, depending on the classification that is input by an existing participant: “[0044] Only a selected at least one instant message that corresponds to the at least one new participant will be displayed. Instant messages that do not correspond to the at least one new participant (e.g., non-selected instant messages from the list) are not displayed to the at least one new participant and thus are hidden from the at least one new participant. Thus, using an access control, at least one participant in an existing instant messaging session may specify whether at least part of a past conversation or chat history between participants in the instant messaging session should be kept private or should be shared when at least one new participant joins the same instant messaging session.”]

Byeon/Kennewick and DeLuca pertain to participatory communications, and it would have been obvious to modify the system of the combination to include the classification into private and public, considering the nature of the communications, as taught by DeLuca, in order to distinguish the data for later processing. This combination falls under combining prior art elements according to known methods to yield predictable results or use of known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARIBA SIRJANI, whose telephone number is (571) 270-1499. The examiner can normally be reached 9 to 5, M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Fariba Sirjani/
Primary Examiner, Art Unit 2659

Prosecution Timeline

Jul 05, 2024
Application Filed
Jan 26, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603099
SELF-ADJUSTING ASSISTANT LLMS ENABLING ROBUST INTERACTION WITH BUSINESS LLMS
2y 5m to grant • Granted Apr 14, 2026
Patent 12579482
Schema-Guided Response Generation
2y 5m to grant • Granted Mar 17, 2026
Patent 12572737
GENERATIVE THOUGHT STARTERS
2y 5m to grant • Granted Mar 10, 2026
Patent 12537013
AUDIO-VISUAL SPEECH RECOGNITION CONTROL FOR WEARABLE DEVICES
2y 5m to grant • Granted Jan 27, 2026
Patent 12492008
Cockpit Voice Recorder Decoder
2y 5m to grant • Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+31.0%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 547 resolved cases by this examiner. Grant probability derived from career allow rate.
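The projections reuse the career statistics above. One plausible way the with-interview figure could be derived is to add the +31.0-point lift to the base allow rate and cap the displayed probability; the sketch below shows that assumed model in Python, not the tool's documented formula:

    # One plausible derivation of the projection figures (assumed model).
    base = 414 / 547   # career allow rate, ~0.757 -> "76%"
    lift = 0.31        # +31.0 points with an examiner interview

    # 0.757 + 0.31 exceeds 1.0, so assume the display is capped at 99%.
    with_interview = min(base + lift, 0.99)

    print(f"Grant probability: {base:.0%}")            # 76%
    print(f"With interview:    {with_interview:.0%}")  # 99%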
