Prosecution Insights
Last updated: April 19, 2026

Application No. 18/128,009
ROBOT AND METHOD FOR CONTROLLING THEREOF

Status: Final Rejection (§103)
Filed: Mar 29, 2023
Examiner: RAMIREZ, ELLIS B
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 4-5
Time to Grant: 3y 3m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (156 granted / 194 resolved; +28.4% vs Tech Center average, above average)
Interview Lift: +18.2% for resolved cases with an interview (a strong lift)
Typical Timeline: 3y 3m average prosecution
Career History: 233 total applications across all art units; 39 currently pending
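
The headline rate is simple arithmetic: 156 granted out of 194 resolved is about 80.4%, shown rounded as 80%.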

Statute-Specific Performance

§101: 9.1% (-30.9% vs TC avg)
§103: 62.0% (+22.0% vs TC avg)
§102: 14.1% (-25.9% vs TC avg)
§112: 7.4% (-32.6% vs TC avg)

Deltas are relative to a Tech Center average estimate; based on career data from 194 resolved cases.

Office Action (§103)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

The amendment and response filed on November 18, 2025, to the Non-Final Office Action dated August 18, 2025, has been entered. Claims 6 and 14 are amended; claims 2 and 10 are cancelled. Claims 1, 3-9, and 11-15 are pending in this application.

Response to Arguments

Applicant's arguments and amendments, see pages 8-10, filed November 18, 2025, with respect to the 35 U.S.C. § 103 rejection based on Takashi SUMIYOSHI (US-20190126488-A1), Breazeal et al. (US-20170206064-A1), and Li et al. (CN-101241561-A), have been considered but are not persuasive. The 35 U.S.C. § 103 rejection of claims 1, 3-9, and 11-15 is maintained for the reasons explained below.

Applicant asserts two differences between the invention and the Li Patent Publication (CN-101241561-A): 1. "the blackboard of Li appears to store 'physical state value 95, perceived state value 96, and emotional state value 97' which appear to correspond to rolling values representing certain states of the virtual robot of Li"; and 2. "Applicant submits that while the cited excerpt mentions a 'behavior initiated by the user', there is no indication that any data regarding said interaction with the user is stored in the blackboard of Li."

The Examiner disagrees. In Para. [0056], Li discloses sensing the external environment through "the sensor values of the blackboard 90", and in Para. [0079] these reference values from the blackboard are then used by the robot to identify a perceived object. Further, in Para. [0091] the "behavior management unit 40 selects the behavior initiated by the user", and then, in Para. [0093], the processor "determines the duration of the behavior of the performance, generates an internal event 93 that causes the behavior of the performance, and outputs the internal event 93 to the blackboard 90". This suggests that these are not "rolling values" as Applicant asserts, but rather that the blackboard stores sensed data, user interaction through the behavioral unit, and the performance or actions taken by the robot, in furtherance of learning related to the behavior, perception, and emotional state of the robot. Therefore, the rejection of claims 1, 3-9, and 11-15 is maintained for the reasons set forth above and below with reference to the § 103 rejection.

Claim Rejections -- 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3-9, and 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over Takashi SUMIYOSHI (US-20190126488-A1) ("Sumiyoshi"), Breazeal et al. (US-20170206064-A1) ("Breazeal"), and Li et al. (CN-101241561-A) ("Li"); a machine translation of CN101241561A is attached.

As per claim 1, Sumiyoshi discloses a robot (Figure 1, service robot 20) comprising: a memory configured to store at least one instruction (Sumiyoshi at Figure 2, memory 220, and Para. [0049], disclosing instructions to control the robot and interaction with a user (40): "main program 231 to control the service robot 20, a voice recognition program 232 to convert a voice from the microphone 223 into a text, a voice synthesis program 233 to convert text data into a voice and output the voice from the speaker 224, and a transfer program 234 to control the transfer device 227 and transfer the service robot 20 are loaded on the memory 220 and the programs are implemented by the CPU 221."); and at least one processor configured to execute the at least one instruction (Sumiyoshi at Figure 2, CPU 221, and Para. [0050], disclosing execution of a program by CPU 221: "CPU 221 operates as a functional unit to provide a prescribed function by processing in accordance with the program of the functional unit. For example, the CPU 221 functions as a voice recognition unit by processing in accordance with the voice recognition program 232.") to: based on detecting a user interaction (Sumiyoshi at Figure 4, Step S202, obtain voice from user), acquire information on a [behavior tree corresponding to the user interaction] (Sumiyoshi at Figure 5, voice recognition (S503) and state transition rule (S506), and Para. [0087], checking to determine whether a voice event matches a particular rule: "robot control program 331 judges whether or not a received event matches with all the state transition rules of setting a present position as a start state in reference to the scenario 343 (S507)."), and perform an action corresponding to the user interaction based on the information on the [behavior tree] (Sumiyoshi at Figure 5, voice recognition (S503) and state transition rule (S506), and Para. [0088], disclosing performing an action, such as a change in state, based on the voice and the rule: "[w]hen the received event matches with a state transition rule, the robot control program 331: changes the present state to a transition destination state of the state transition rule; and implements an action described in the state (S508).").

Sumiyoshi does not explicitly disclose using a behavior tree to process the user interaction and to formulate a response on the basis of the user interaction. Breazeal, in the same field of endeavor, discloses the use of a behavior tree in a robot. See Figures 1-6, 22-23, and 27-35 with Paras. [0411]-[0423]. Sumiyoshi does not disclose, but Breazeal discloses, a behavior tree corresponding to the user interaction (Breazeal at Para. [0013], disclosing the use of a behavior tree to process a conversation with a user: "a plurality of behavior tree data structures accessible by the behavior editor that facilitate controlling behavior and control flow of autonomous robot operational functions, the operational functions including a plurality of sensor input functions and a plurality expressive output functions, wherein the plurality of behavior tree data structures organize control of robot operational functions hierarchically, wherein at least one behavior tree data structure is associated with at least one skill performed by the PCD."). Sumiyoshi does not disclose, but Breazeal discloses, wherein the behavior tree comprises a node for controlling a dialogue flow between the robot and a user (Breazeal at Paras. [0013]-[0014], disclosing that the behavior tree comprises nodes for a dialogue flow (conversation): "system further including a plurality of behavior nodes of each behavior tree, each of the plurality of behavior nodes associated with one of four behavior states consisting of an invalid state, an in-progress state, a successful state, and a failed state." At Para. [0014], aspects of the speech and dialogue flow are controlled: "a system for recognizing speech with a persistent companion device (PCD). The system may include a PCD speech recognition configuration system that facilitates natural language understanding by a PCD, the system comprising a plurality of user interface screens by which a user operates a speech rule editor executing on a networked computer to configure speech understanding rules comprising at least one of an embedded rule and a custom rule.").

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to implement the behavior tree processing method taught in Breazeal in the robot controllers in Sumiyoshi, with a reasonable expectation of success, because this results in the robots being utilized to engage in a dialogue with a user that is relevant to the time and place, and using a behavior tree to select an action consistent with the dialogue (see Breazeal at Para. [0015]).

Sumiyoshi and Breazeal do not disclose wherein the memory comprises a blackboard area configured to store data comprising data detected by the robot, data regarding the user interaction, and data regarding the action performed by the robot, and they do not disclose wherein the at least one processor is further configured to execute the at least one instruction to acquire the information on the behavior tree corresponding to the user interaction based on the data stored in the blackboard area. Li, in the same field of endeavor, discloses a robot apparatus in which the excitation, management, control, and operation processes of each configuration module use a memory comprising a blackboard area. See Abstract and Figures 1-4.

In particular, Li discloses wherein the memory comprises a blackboard area configured to store data comprising data detected by the robot (Li at Figure 1, blackboard 90, having a structure shared by respective modules and used for integrating various information sources, and Para. [0052], which discloses storing data relating to sensors, the user, and actions performed by the robot: "physical state value 95, the perceived state value 96, and the emotional state value 97 recorded in the blackboard 90 include not only representative physical state values currently processed by the software robot, representative perceptual state values, and representative emotional state values, but also all physical"), data regarding the user interaction, and data regarding the action performed by the robot (Li at Figure 1, blackboard 90 and allocated memory spaces 91-103, and Para. [0079], which discloses using perceived state and user information from blackboard 90: "As shown in FIG. 12, the behavior management unit 40 refers to the perceived state value 96 and the emotional state value 97 of the blackboard 90, the SES 71 of the short-term memory 70, and the object of interest 72, and the plurality of context and behavior objects 98 of the scene memory 60. Determine the behavior. Therefore, the behavior management unit 40 outputs the final behavior object 98 to the blackboard 90. The behavior management unit 40 basically determines the behavior with reference to the context memory 60, and if necessary, controls the performance of the guidance behavior initiated by the user.").

In particular, Li discloses wherein the at least one processor is further configured to execute the at least one instruction to acquire the information on the behavior tree corresponding to the user interaction based on the data stored in the blackboard area (Li at Figures 1-5 and Para. [0050], which discloses acquiring data and information from the blackboard to be used by the requesting module: "structure has a common data area corresponding to the blackboard, which is located at the center of the structure and unifies information provided from a plurality of modules to the common data area. The blackboard 90 is implemented by the CBlackboard class. The CBlackboard class has various data structures as defined in Table 7 below, each of which is provided to each module that constructs a virtual creature, or updates each data information from each module through a related Put function and a Get function"; further, Para. [0040] teaches constructing and sharing an information state: "construct an information space and provide an information space to a user, and can control a plurality of objects present in the information space in accordance with internal logic or in response to user input. In the information space, environmental information including the environmental factor information and the object location information and the object interaction information may be generated according to changes in environmental factors, motion of the object, or interaction between the objects.").

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the robot controllers in Sumiyoshi, as modified by Breazeal, with the blackboard-centric control of a software robot as taught by Li, with a reasonable expectation of success, in order for the one or more method steps to be augmented by using a common data area such as a blackboard. The teaching, suggestion, or motivation to combine is that, by maintaining and using data stored in a blackboard, the integration of various data sources can be improved, as taught by Li in Paras. [0049]-[0050].
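
For readers tracking the technical dispute rather than the legal one: the "blackboard" at issue is the classic shared-data-area coordination pattern, in which every module reads and writes one common store, as in Li's CBlackboard with its Put and Get functions. Below is a minimal illustrative Python sketch of that pattern; the class name, keys, and values are hypothetical stand-ins, not Li's implementation.

```python
# Minimal blackboard sketch, loosely modeled on Li's description of a
# common data area shared by robot modules. Names are illustrative only.
from dataclasses import dataclass, field
from typing import Any


@dataclass
class Blackboard:
    """Common data area: modules write via put() and read via get()."""
    _data: dict[str, Any] = field(default_factory=dict)

    def put(self, key: str, value: Any) -> None:
        # e.g. a sensing module stores sensor values; a behavior
        # management unit stores the behavior it selected
        self._data[key] = value

    def get(self, key: str, default: Any = None) -> Any:
        return self._data.get(key, default)


# Usage mirroring the three data categories the claim recites:
bb = Blackboard()
bb.put("sensor_values", {"distance_cm": 42})           # data detected by the robot
bb.put("user_interaction", {"utterance": "guide me"})  # data regarding the interaction
bb.put("performed_action", "greet")                    # data regarding the robot's action
print(bb.get("user_interaction"))
```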
As per claim 3, Sumiyoshi, Breazeal, and Li disclose a robot, wherein the user interaction comprises a user voice (Sumiyoshi at Figure 4, voice recognition process), and the at least one processor is further configured to execute the at least one instruction (Sumiyoshi at Figure 11, CPU 521; and Breazeal at Figure 5, processor 516) to: acquire information on a user intent corresponding to the user voice and information on a slot for performing an action corresponding to the user intent, determine whether the information on the slot is sufficient for performing a task corresponding to the user intent, based on determining that the information on the slot is insufficient for performing the task corresponding to the user intent, acquire information on an additional slot necessary for performing the task corresponding to the user intent, and store, in the blackboard area, the information on the user intent, the information on the slot, and the information on the additional slot (Sumiyoshi at Figures 12-13B and Paras. [0106]-[0134], disclosing wherein the service robot (20): receives an utterance from a user (40) to perform speech recognition; on the basis of a result of the speech recognition, executes a transition to the dialogue node (N11) according to a predetermined rule (S51) so as to generate a sub-scenario (560); and, if nodes in the sub-scenario (560) are insufficient, a node is automatically added to confirm an action.).

As per claim 4, Sumiyoshi, Breazeal, and Li disclose a robot, wherein the at least one processor is further configured to execute the at least one instruction to: convert the information on the slot into information in a form that can be interpreted by the robot, and acquire information on the additional slot based on a dialogue history or through an additional inquiry and response operation (Sumiyoshi at Figures 2 and 12-13B, and Paras. [0058] and [0106]-[0134], wherein the storage device (220) may store a state table (state information) (341) and a scenario (343), and, if nodes in the sub-scenario (560) are insufficient, a node is automatically added to confirm an action by selecting an action with the highest confidence factor.).

As per claim 5, Sumiyoshi and Breazeal disclose a robot, wherein the additional inquiry and response operation comprises a re-asking operation comprising an inquiry regarding the slot for performing the task corresponding to the user intent (Sumiyoshi at Para. [0133], a procedure for re-asking for instructions when certain information is deemed not to be adequate or is missing: "inquiry node N115 to ask a place to the user 40 when the place is not identified."), a selection operation configured to select one of a plurality of slots, and a confirmation operation configured to confirm whether the slot is the slot selected by the user (Sumiyoshi at Para. [0134], which discloses confirmation of user information: "the judgment node N112, when a result of voice recognition is guidance and a result of voice recognition on a place exists, the robot control program 331 selects a place and an action conforming to the result of the voice recognition from the scenario 343. When the object is unknown, the program advances to the inquiry node N113 and commands the service robot 20 to inquire the object."), and wherein the at least one processor is further configured to execute the at least one instruction (Sumiyoshi at Figure 11, CPU 521; and Breazeal at Figure 5, processor 516) to: store information on the additional inquiry and response operation in the blackboard area, and acquire information on the behavior tree including a node for controlling a dialogue flow between the robot and the user based on the additional inquiry and response operation (Sumiyoshi at Figures 13A-13B and Paras. [0111]-[0134], wherein the sub-scenario (560) includes a determination node (N112) to determine whether the purpose of dialogue is guidance, an inquiry node (N113) to ask the purpose if the purpose of the dialogue is not guidance, a determination node (N114) to determine whether a place inquired by the user (40) is identified, an inquiry node (N115) to ask the place to the user (40) if the place is not identified, and a dialogue finish node (N120); and a robot control program (331) selects one of the nodes (N112, N113, N114, N115, and N120) to proceed. Additionally, see Breazeal at Figure 1 and Para. [0101], wherein, if the user's intent of taking a picture is identified, the robot (100) may ask directed questions either for confirming the user's intent or requesting additional information.).

As per claim 6, Sumiyoshi, Breazeal, and Li disclose a robot, wherein the at least one processor is further configured to execute the at least one instruction (Breazeal at Figure 5, processor 516) to: based on the task being successfully performed, learn whether to acquire the information on the additional slot based on the dialogue history (Breazeal at Figure 1 and Para. [0351], wherein, when the robot (100) detects that a user regularly uses a particular phrase, the robot (100) may add the particular phrase to interaction data, and persistently use the same when performing interactions in the future.).

As per claim 7, Sumiyoshi, Breazeal, and Li disclose a robot, wherein the behavior tree comprises at least one of: a learnable selector node that is trained to select an optimal sub tree/node among a plurality of sub trees/nodes, a learnable sequence node that is trained to select an optimal order of the plurality of sub trees/nodes, or a learnable parallel node that is trained to select optimal sub trees/nodes that can perform simultaneously among the plurality of sub trees/nodes (Breazeal at Figures 3-4 and Para. [0404], wherein the behavior tree includes at least one of a select node, a sequence node, and a parallel node: "a behavior tree may be made of these elementary behaviors: BaseBehavior—a leaf node; BaseDecorator—a behavior decorator; Parallel—a compound node; Sequence (and sequence variations)—a compound node; Select—a compound node; and Random (and random variations)—a compound node.").

As per claim 8, Sumiyoshi, Breazeal, and Li disclose a robot, wherein the at least one processor is further configured to execute the at least one instruction to train the learnable selector node, the learnable sequence node, and the learnable parallel node based on a task learning policy, and wherein the task learning policy comprises information on an evaluation method, an update cycle, and a cost function (Breazeal at Figure 1 and Paras. [0334] and [0403], wherein the behavior tree may be trained by a method such as machine learning, and the machine learning may include learning via an analysis method such as task/policy modeling: "Behavior hierarchies may be learned from the experience of the PCD 100, such as using machine learning methods such as reinforcement learning, among others." See Para. [0403].).
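
Since claims 5, 7, and 8 turn on behavior tree structure, a compact sketch may help. Breazeal describes compound nodes (sequence, select, parallel) whose children each report one of four behavior states (invalid, in-progress, successful, failed). The following Python reconstruction is illustrative only; the class and leaf names are hypothetical, and parallel and decorator nodes are simplified away.

```python
# Illustrative behavior-tree sketch using the four node states Breazeal
# recites; not the PCD implementation. Names are invented for clarity.
from enum import Enum


class State(Enum):
    INVALID = "invalid"
    IN_PROGRESS = "in-progress"
    SUCCESSFUL = "successful"
    FAILED = "failed"


class Sequence:
    """Succeeds only if every child succeeds, evaluated in order."""
    def __init__(self, *children):
        self.children = children

    def tick(self) -> State:
        for child in self.children:
            result = child.tick()
            if result != State.SUCCESSFUL:
                return result  # propagate failure or in-progress
        return State.SUCCESSFUL


class Select:
    """Succeeds as soon as any child does; fails only if all fail."""
    def __init__(self, *children):
        self.children = children

    def tick(self) -> State:
        for child in self.children:
            result = child.tick()
            if result != State.FAILED:
                return result
        return State.FAILED


class Leaf:
    """Stub leaf that reports a fixed state when ticked."""
    def __init__(self, name: str, result: State):
        self.name, self.result = name, result

    def tick(self) -> State:
        return self.result


# A dialogue-flow node can sit anywhere in the tree, e.g. act-or-ask:
tree = Select(
    Sequence(Leaf("hear_request", State.SUCCESSFUL),
             Leaf("perform_action", State.SUCCESSFUL)),
    Leaf("ask_clarifying_question", State.SUCCESSFUL),
)
print(tree.tick())  # State.SUCCESSFUL
```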
As per claim 9, Sumiyoshi discloses a method of controlling a robot (Figure 8), the method comprising: based on detecting a user interaction (Sumiyoshi at Figure 4, Step S202, obtain voice from user), acquiring information on a [behavior tree corresponding to the user interaction] (Sumiyoshi at Figure 5, voice recognition (S503) and state transition rule (S506), and Para. [0087], checking to determine whether a voice event matches a particular rule: "robot control program 331 judges whether or not a received event matches with all the state transition rules of setting a present position as a start state in reference to the scenario 343 (S507)."), and performing an action corresponding to the user interaction based on the information on the [behavior tree] (Sumiyoshi at Figure 5, voice recognition (S503) and state transition rule (S506), and Para. [0088], disclosing performing an action, such as a change in state, based on the voice and the rule: "[w]hen the received event matches with a state transition rule, the robot control program 331: changes the present state to a transition destination state of the state transition rule; and implements an action described in the state (S508).").

Sumiyoshi does not explicitly disclose using a behavior tree to process the user interaction and to formulate a response on the basis of the user interaction. Breazeal, in the same field of endeavor, discloses the use of a behavior tree in a robot. See Figures 1-6, 22-23, and 27-35 with Paras. [0411]-[0423]. Sumiyoshi does not disclose, but Breazeal discloses, a behavior tree corresponding to the user interaction (Breazeal at Para. [0013], disclosing the use of a behavior tree to process a conversation with a user: "a plurality of behavior tree data structures accessible by the behavior editor that facilitate controlling behavior and control flow of autonomous robot operational functions, the operational functions including a plurality of sensor input functions and a plurality expressive output functions, wherein the plurality of behavior tree data structures organize control of robot operational functions hierarchically, wherein at least one behavior tree data structure is associated with at least one skill performed by the PCD."). Sumiyoshi does not disclose, but Breazeal discloses, wherein the behavior tree comprises a node for controlling a dialogue flow between the robot and a user (Breazeal at Paras. [0013]-[0014], disclosing that the behavior tree comprises nodes for a dialogue flow (conversation): "system further including a plurality of behavior nodes of each behavior tree, each of the plurality of behavior nodes associated with one of four behavior states consisting of an invalid state, an in-progress state, a successful state, and a failed state." At Para. [0014], aspects of the speech and dialogue flow are controlled: "a system for recognizing speech with a persistent companion device (PCD). The system may include a PCD speech recognition configuration system that facilitates natural language understanding by a PCD, the system comprising a plurality of user interface screens by which a user operates a speech rule editor executing on a networked computer to configure speech understanding rules comprising at least one of an embedded rule and a custom rule.").

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to implement the behavior tree processing method taught in Breazeal in the robot controllers in Sumiyoshi, with a reasonable expectation of success, because this results in the robots being utilized to engage in a dialogue with a user that is relevant to the time and place, and using a behavior tree to select an action consistent with the dialogue (see Breazeal at Para. [0015]).

Sumiyoshi and Breazeal do not disclose wherein the acquiring information on the behavior tree corresponding to the user interaction comprises acquiring information on the behavior tree corresponding to the user interaction based on data stored in a blackboard memory area of the robot, and Sumiyoshi and Breazeal do not disclose [wherein the data stored in the blackboard memory area of the robot comprises data detected by the robot, data regarding the user interaction, and data regarding the action performed by the robot]. Li, in the same field of endeavor, discloses a robot apparatus in which the excitation, management, control, and operation processes of each configuration module use a memory comprising a blackboard area. See Abstract and Figures 1-4.

In particular, Li discloses wherein the acquiring information on the behavior tree corresponding to the user interaction (Li at Figure 1, blackboard 90 and allocated memory spaces 91-103, and Para. [0079], which discloses using perceived state and user information from blackboard 90: "As shown in FIG. 12, the behavior management unit 40 refers to the perceived state value 96 and the emotional state value 97 of the blackboard 90, the SES 71 of the short-term memory 70, and the object of interest 72, and the plurality of context and behavior objects 98 of the scene memory 60. Determine the behavior. Therefore, the behavior management unit 40 outputs the final behavior object 98 to the blackboard 90. The behavior management unit 40 basically determines the behavior with reference to the context memory 60, and if necessary, controls the performance of the guidance behavior initiated by the user.") comprises acquiring information on the behavior tree corresponding to the user interaction based on data stored in a blackboard memory area of the robot (Li at Figure 1, blackboard 90, having a structure shared by respective modules and used for integrating various information sources, and Para. [0052], which discloses storing data relating to sensors, the user, and actions performed by the robot: "physical state value 95, the perceived state value 96, and the emotional state value 97 recorded in the blackboard 90 include not only representative physical state values currently processed by the software robot, representative perceptual state values, and representative emotional state values, but also all physical").

In particular, Li discloses wherein the data stored in the blackboard memory area of the robot comprises data detected by the robot, data regarding the user interaction, and data regarding the action performed by the robot (Li at Figures 1-5 and Para. [0050], which discloses acquiring data and information from the blackboard to be used by the requesting module: "structure has a common data area corresponding to the blackboard, which is located at the center of the structure and unifies information provided from a plurality of modules to the common data area. The blackboard 90 is implemented by the CBlackboard class. The CBlackboard class has various data structures as defined in Table 7 below, each of which is provided to each module that constructs a virtual creature, or updates each data information from each module through a related Put function and a Get function"; further, Para. [0040] teaches constructing and sharing an information state: "construct an information space and provide an information space to a user, and can control a plurality of objects present in the information space in accordance with internal logic or in response to user input. In the information space, environmental information including the environmental factor information and the object location information and the object interaction information may be generated according to changes in environmental factors, motion of the object, or interaction between the objects.").

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the robot controllers in Sumiyoshi, as modified by Breazeal, with the blackboard-centric control of a software robot as taught by Li, with a reasonable expectation of success, in order for the one or more method steps to be augmented by using a common data area such as a blackboard. The teaching, suggestion, or motivation to combine is that, by maintaining and using data stored in a blackboard, the integration of various data sources can be improved, as taught by Li in Paras. [0049]-[0050].

As per claim 11, Sumiyoshi, Breazeal, and Li disclose a method, wherein the user interaction comprises a user voice (Sumiyoshi at Figure 4, voice recognition process), and wherein the method (Sumiyoshi at Figure 8) further comprises: acquiring information on a user intent corresponding to the user voice and information on a slot for performing an action corresponding to the user intent, determining whether the information on the slot is sufficient for performing a task corresponding to the user intent, based on determining that the information on the slot is insufficient for performing the task corresponding to the user intent, acquiring information on an additional slot necessary for performing the task corresponding to the user intent, and storing, in the blackboard area, the information on the user intent, the information on the slot, and the information on the additional slot (Sumiyoshi at Figures 12-13B and Paras. [0106]-[0134], disclosing wherein the service robot (20): receives an utterance from a user (40) to perform speech recognition; on the basis of a result of the speech recognition, executes a transition to the dialogue node (N11) according to a predetermined rule (S51) so as to generate a sub-scenario (560); and, if nodes in the sub-scenario (560) are insufficient, a node is automatically added to confirm an action.).

As per claim 12, Sumiyoshi, Breazeal, and Li disclose a method, wherein the acquiring information on an additional slot comprises: converting the information on the slot into information in a form that can be interpreted by the robot, and acquiring information on the additional slot based on a dialogue history or through an additional inquiry and response operation (Sumiyoshi at Figures 2 and 12-13B, and Paras. [0058] and [0106]-[0134], wherein the storage device (220) may store a state table (state information) (341) and a scenario (343), and, if nodes in the sub-scenario (560) are insufficient, a node is automatically added to confirm an action by selecting an action with the highest confidence factor.).
As per claim 13, Sumiyoshi, Breazeal, and Li disclose a method, wherein the additional inquiry and response operation comprises a re-asking operation comprising an inquiry regarding the slot for performing the task corresponding to the user intent (Sumiyoshi at Para. [0133], a procedure for re-asking for instructions when certain information is deemed not to be adequate or is missing: "inquiry node N115 to ask a place to the user 40 when the place is not identified."), a selection operation configured to select one of a plurality of slots, and a confirmation operation configured to confirm whether the slot is the slot selected by the user (Sumiyoshi at Para. [0134], which discloses confirmation of user information: "the judgment node N112, when a result of voice recognition is guidance and a result of voice recognition on a place exists, the robot control program 331 selects a place and an action conforming to the result of the voice recognition from the scenario 343. When the object is unknown, the program advances to the inquiry node N113 and commands the service robot 20 to inquire the object."), and wherein the acquiring information on the behavior tree (Sumiyoshi at Figure 5, voice recognition (S503) and state transition rule (S506), and Para. [0087], checking to determine whether a voice event matches a particular rule) further comprises: storing, in the blackboard memory area, information on the additional inquiry and response operation (Sumiyoshi at Para. [0171], which discloses the storing of information: "sub-scenario table 3435 includes a state 3436 of storing positions (nodes) of the service robot 20 and an action 3437 of storing the processing of the service robot 20 in a single entry."); and acquiring information on the behavior tree including a node for controlling a dialogue flow between the robot and the user based on the additional inquiry and response operation (Sumiyoshi at Figures 13A-13B and Paras. [0111]-[0134], wherein the sub-scenario (560) includes a determination node (N112) to determine whether the purpose of dialogue is guidance, an inquiry node (N113) to ask the purpose if the purpose of the dialogue is not guidance, a determination node (N114) to determine whether a place inquired by the user (40) is identified, an inquiry node (N115) to ask the place to the user (40) if the place is not identified, and a dialogue finish node (N120); and a robot control program (331) selects one of the nodes (N112, N113, N114, N115, and N120) to proceed. Additionally, see Breazeal at Figure 1 and Para. [0101], wherein, if the user's intent of taking a picture is identified, the robot (100) may ask directed questions either for confirming the user's intent or requesting additional information.).

As per claim 14, Sumiyoshi, Breazeal, and Li disclose a method, further comprising: based on the task being successfully performed, learning whether to acquire the information on the additional slot based on the dialogue history (Breazeal at Figure 1 and Para. [0351], wherein, when the robot (100) detects that a user regularly uses a particular phrase, the robot (100) may add the particular phrase to interaction data, and persistently use the same when performing interactions in the future.).
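
The slot logic recited in claims 3-5 and 11-13 follows a standard task-oriented dialogue pattern: check whether the slots the intent requires are filled, reuse the dialogue history where possible, and otherwise re-ask the user. Below is a hedged Python sketch under those assumptions; the intents, slot names, and function are invented for illustration, not taken from the application or the cited references.

```python
# Hedged sketch of a slot-sufficiency check: if required slots are
# missing, fall back to the dialogue history, then re-ask the user.
# Intent names, slot names, and the function itself are hypothetical.
REQUIRED_SLOTS = {"guide": ["place"], "fetch": ["object", "place"]}


def resolve_slots(intent: str, slots: dict, history: dict) -> dict:
    missing = [s for s in REQUIRED_SLOTS.get(intent, []) if s not in slots]
    for slot in missing:
        if slot in history:                  # reuse the dialogue history
            slots[slot] = history[slot]
        else:                                # re-asking operation
            slots[slot] = input(f"Which {slot} do you mean? ")
    return slots


# e.g. a "guide" intent missing its place falls back to history:
print(resolve_slots("guide", {}, {"place": "lobby"}))  # {'place': 'lobby'}
```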
As per claim 15, Sumiyoshi, Breazeal, and Li disclose a method, wherein the behavior tree comprises at least one of: a learnable selector node that is trained to select an optimal sub tree/node among a plurality of sub trees/nodes, a learnable sequence node that is trained to select an optimal order of the plurality of sub trees/nodes (Breazeal at Figures 3-4 and Para. [0404], wherein the behavior tree includes at least one of a select node, a sequence node, and a parallel node: "a behavior tree may be made of these elementary behaviors: BaseBehavior—a leaf node; BaseDecorator—a behavior decorator; Parallel—a compound node; Sequence (and sequence variations)—a compound node; Select—a compound node; and Random (and random variations)—a compound node."), or a learnable parallel node that is trained to select optimal sub trees/nodes that can perform simultaneously among the plurality of sub trees/nodes (Breazeal at Figure 1 and Paras. [0334] and [0403], wherein the behavior tree may be trained by a method such as machine learning, and the machine learning may include learning via an analysis method such as task/policy modeling: "Behavior hierarchies may be learned from the experience of the PCD 100, such as using machine learning methods such as reinforcement learning, among others." See Para. [0403].).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELLIS B. RAMIREZ, whose telephone number is (571) 272-8920. The examiner can normally be reached 7:30 am to 5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ramon Mercado, can be reached at 571-270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ELLIS B. RAMIREZ/
Examiner, Art Unit 3658

Prosecution Timeline

Mar 29, 2023: Application Filed
Jan 25, 2025: Non-Final Rejection (§103)
Mar 10, 2025: Interview Requested
Apr 01, 2025: Applicant Interview (Telephonic)
Apr 01, 2025: Examiner Interview Summary
Apr 29, 2025: Response Filed
Aug 14, 2025: Non-Final Rejection (§103)
Sep 29, 2025: Interview Requested
Nov 18, 2025: Response Filed
Feb 27, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12600034: Compensation of Positional Tolerances in the Robot-assisted Surface Machining (granted Apr 14, 2026; 2y 5m to grant)
Patent 12584758: VEHICLE DISPLAY DEVICE, VEHICLE DISPLAY PROCESSING METHOD, AND NON-TRANSITORY STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12571639: SYSTEM AND METHOD FOR IDENTIFYING TRIP PAIRS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12551302: CONTROLLING A SURGICAL INSTRUMENT (granted Feb 17, 2026; 2y 5m to grant)
Patent 12552018: INTEGRATING ROBOTIC PROCESS AUTOMATIONS INTO OPERATING AND SOFTWARE SYSTEMS (granted Feb 17, 2026; 2y 5m to grant)

Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 80%
With Interview: 99% (+18.2%)
Median Time to Grant: 3y 3m
PTA Risk: High

Based on 194 resolved cases by this examiner. Grant probability is derived from the career allow rate.
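
The with-interview projection appears to be additive: the 80.4% career allow rate plus the 18.2-point interview lift comes to roughly 98.6%, displayed as 99%.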
