Prosecution Insights
Last updated: April 17, 2026
Application No. 16/597,862

ROBOT WITH MULTIPLE PERSONAE BASED ON INTERCHANGEABILITY OF A ROBOTIC SHELL THEREOF

Status: Final Rejection §103
Filed: Oct 10, 2019
Examiner: FOLLANSBEE, YVONNE TRANG
Art Unit: 2117
Tech Center: 2100 — Computer Architecture & Software
Assignee: unknown
OA Round: 8 (Final)

Grant Probability: 57% (Moderate)
OA Rounds: 9-10
To Grant: 3y 2m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 57% (grants 57% of resolved cases; 60 granted / 105 resolved; +2.1% vs TC avg)
Interview Lift: +26.4% (strong; resolved cases with interview vs without)
Avg Prosecution: 3y 2m (typical timeline; 33 currently pending)
Total Applications: 138 (career history, across all art units)

Statute-Specific Performance

§101: 16.0% (-24.0% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 22.2% (-17.8% vs TC avg)
§112: 7.7% (-32.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 105 resolved cases
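
For readers who want to sanity-check these roll-ups, the sketch below shows one plausible way to derive the career allow rate and interview lift from per-case records. It is a minimal illustration under assumed field names (`granted`, `had_interview`), not the analytics vendor's actual pipeline, and the synthetic data only reproduces the 60/105 allow-rate figure, not the +26.4% lift.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # allowed vs. abandoned
    had_interview: bool  # at least one examiner interview on record

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Share of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow-rate gap between cases with and without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Synthetic stand-in: 105 resolved cases, 60 granted -> 57.1% allow rate,
# matching "60 granted / 105 resolved" above. The interview flag here is
# arbitrary, so the printed lift is meaningless; the real +26.4% figure
# requires actual interview records.
cases = [ResolvedCase(granted=i < 60, had_interview=i % 3 == 0) for i in range(105)]
print(f"career allow rate: {allow_rate(cases):.1%}")
print(f"interview lift:    {interview_lift(cases):+.1%}")
```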

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 11/03/2025 have been carefully and fully considered. With respect to applicant's argument in the remarks regarding the §103 rejection, which recites: "Kroyan's identification mechanism only triggers pre-stored profile selection - it does not teach or suggest generating training datasets combining sensor and mobile device data, training machine learning algorithms on such datasets, or using machine learning to enhance capabilities specific to each shell and persona", Examiner notes that Kroyan is not relied upon to teach machine learning algorithms trained based on a training data set; Hickman is relied upon to teach this claim limitation.

Applicant further argues: "Kroyan also fails to teach 'dynamically adapt trait data associated with the persona, wherein the trait data includes at least one of virtual manifestation data, audio data, accent data, dialogue data, and virtual effects data.' While Kroyan mentions trait data like voice files in character profiles, these are static pre-programmed components, not dynamically adapted through machine learning based on real-time data collection". Examiner notes that, although the machine learning aspect is taught by Hickman, Kroyan does disclose "dynamically adapt trait data associated with the persona", and points to Kroyan [0065]: "the character software program may define a unique combination of colors or patterns for the lighting sub-system, a particular synthesized voice file or selected audio files for the robot's voice and a definition of a particular movement pattern, such that the toy robot 10 may now express emotion or respond in several different dimensions including sound, lighting, and movement (in response to any external stimulus that it detects)", and [0066]: "The robot can optionally be able to automatically detect that a character of the nearby robot is in the same 'family' as its present character, or not in the same family, and can then interact with the other toy robot, differently depending on its character, e.g., voice response, visual response including movement of parts, movement of the body. As an example, the voice of the robot may be modified by these rules, such that a deeper voice is selected in some cases, and a higher pitched voice is selected in other cases. As an alternative or in addition, the lighting sub-system may be controlled differently so that for example the indicator lights 40 have a particular color combination and have greater intensity in some cases, and in other cases those indicator lights 40 are illuminated less intensely and/or at a different color. The programmed processor may revert back to a base set of rules once the toy robot 10 finds that it is no longer in proximity of the other toy robot." This discloses changing the trait data based on the given personae. Kroyan is used to teach the functionality of the claim, and Hickman is simply used to bring in an AI that is trained. The claim does not actually preclude pre-programmed responses.

Hickman discloses at [0018]: "Described herein are systems and methods for generating training data sets for training machine learning models. Within examples, a robotic device may detect a collision involving a physical object in an environment." This shows training machine learning models; one example training signal could be a robot collision.
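
To make the cited Hickman mechanism easier to follow, here is a minimal sketch of collision-triggered training-set generation in the spirit of Hickman [0018]. The class and field names are illustrative assumptions, not Hickman's disclosure.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """One timestamped sensor reading (illustrative fields)."""
    t: float
    proximity: float  # distance to nearest object, meters
    velocity: float   # robot speed, m/s

@dataclass
class TrainingExample:
    frames: list[SensorFrame]  # window of sensor data preceding the event
    collided: bool             # label: did a collision occur?

def package_on_event(log: list[SensorFrame], collided: bool,
                     window: int = 10) -> TrainingExample:
    """Per the idea in Hickman [0018]: when a collision is detected, collect
    and package the recent sensor data as one labeled training example."""
    return TrainingExample(frames=log[-window:], collided=collided)

# A model trained on many such examples can then predict imminent collisions
# and trigger preventive action (cf. Hickman [0018], [0077]).
```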
The combination of Kroyan and Hickman discloses gathering data of the environment of the robot as input for training machine learning algorithms and outputting robot controls.

Applicant also argues: "Kroyan does not teach the comprehensive virtual interaction system claimed:… Kroyan mentions robots detecting proximity to each other, not multi-modal virtual interaction with AR/MR/VR experiences and user-driven configuration". Examiner notes that the claim requires "virtual interaction between a robot and one or more external mobile devices to realize multiple scenarios including at least one of a gaming scenario…", and therefore examiner points to [0021]: "FIGS. 1-3, the base surface 14 may be the face of an electronic display screen of a computing device 16 such as a tablet computer or a smart phone. Alternatively however, the base surface 14 may be that of the top of a table or desk or a sheet lying on the table or desk. The toy robot 10 may be used with base surfaces that emit light (such as that of a tablet computer) as well as base surfaces that do not emit light but that reflect, such as that of a table, desktop, counter or a sheet lying thereon. The base surface 14 can either be part of a self-emitting device, which emits light, or it may be part of a non-emitting object. In one embodiment, the toy robot 10 has both capabilities in that it can follow a line segment on both types of base surfaces 14 and can seamlessly transition while following a line, as it moves from one type of surface to another." Under the broadest reasonable interpretation, this could be the robot following a virtual line on the display of the tablet/smartphone, interacting in a gaming scenario. Additionally, the claim requires only "at least one" of the listed scenarios and therefore is not interpreted as requiring AR/MR/VR.

Applicant further argues: "Hickman does not teach machine learning that enhances 'predictive and operative capabilities of the robot specific to each robotic shell and associated persona'… Hickman has no concept of personae, robotic shells, or trait data". Examiner notes that Hickman is not relied upon to teach that limitation; Kroyan is relied upon for the capabilities of the robot specific to each robotic shell and associated persona.

As to the argument that "The combination fails to teach the claimed invention… there is no motivation in the prior art to make such a combination": in response to applicant's argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, one of ordinary skill in the art would be motivated because, by adding "intelligence as mentioned above to effectively transform the toy robot into a more sophisticated machine" as disclosed by Kroyan [0006], having the robot use machine learning in training would optimize the robot's functionalities, making it more accurate and autonomous.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 34-39, 41-49, and 51-53 are rejected under 35 U.S.C. 103 as being unpatentable over Kroyan et al. (US20200129875A1, herein Kroyan), in view of Hickman et al. (US20190077019A1, herein Hickman). Regarding claim 34, Kroyan teaches A robot, comprising: a base mechanism comprising a processor and a memory ([0028] autonomous robot body having the housing 25 in which the electrical power device 34, primary electronics (including a processor and its program memory as part of the control unit 28)), the memory comprising computer readable instructions stored thereon ([0048] The profile had been previously downloaded as programming instructions for the processor, and is stored as part of the control unit 28 (e.g., within non-volatile memory such as flash memory)) , the base mechanism configured to operate and function within an environment (Fig. 4, [0021] the base surface 14 may be the face of an electronic display screen of a computing device 16 such as a tablet computer or a smart phone. Alternatively however, the base surface 14 may be that of the top of a table or desk or a sheet lying on the table or desk. The toy robot 10 may be used with base surfaces that emit light (such as that of a tablet computer) as well as base surfaces that do not emit light but that reflect, such as that of a table, desktop, counter or a sheet lying thereon) ; one or more shells coupled to the base mechanism, each respective shell of one or more shells are configured with information related to a personae and a specific set of capabilities associated with a specific character (Fig. 
6-9, [0037] the robot character software program (profile), where different profiles can be defined that have different indicator light patterns each being consistent with the particular character to which the profile is assigned, [0042] The character of the robot may be changed by changing a robot character software program described below, which in turn changes the rules that govern how the robot reacts to detected codes, e.g., the speed, duration and specific movement pattern with which it reacts to a particular code that appears as an external stimulus, [0051] select the matching character from amongst several that are available (e.g., base, character 1, character 2, character 3) with which the processor will be configured, [0006] A range of different character skins may be produced, from a simple or lightweight version that may only have an appearance of a character from a known or other original audiovisual work of art, to a fully loaded version that may have many “bells and whistles”, and possibly even intelligence as mentioned above to effectively transform the toy robot into a more sophisticated machine, [0063] the toy robot 10 is given a voice (played through the speaker 46) that is unique to its present character skin (outer cover 50). In one embodiment, the different robot character software programs have electronically defined different voices, respectively, corresponding to their respective characters); one or more sensors coupled to at least one of the base mechanism and the shell ([0056] The microphone 59, the line sensor 30, and the proximity sensor 61 may be integrated within the housing 25—see FIG. 8 in which the robot housing body 25 includes a pair of proximity sensors 61), the one or more sensors configured to capture data of the environment of the robot ([0005] The robot's behavior includes actions that it takes in response to it detecting external stimuli, such as something that it detects using one or more built in sensors, e.g. a line, pattern, or contrast detected by line sensors, color detected by color sensors, objects in close proximity detected using IR-based proximity sensors and external communication detected via an RF antenna (e.g., a real-time user command received wirelessly from a remote control unit that is being operated by a human user of the robot, or from another nearby robot). These external stimuli may in a sense be overlaid on top of a base, autonomous behavior. For example, the base behavior may by to follow a line that is of uniform color; the external stimulus may be discontinuities in the line or color patterns within or adjacent to the line); wherein the processor of the base mechanism is configured to execute the computer readable instructions to ([0051] The processor while executing the ID program), receive modified operative and functional capabilities for the base mechanism based on implementation of one or more …algorithms, the one or more … algorithms … based on … data… from the data provided by the one or more sensors, and data from one or more external mobile devices to enhance predictive and operative capabilities ([0005] The robot's behavior includes actions that it takes in response to it detecting external stimuli, such as something that it detects using one or more built in sensors, e.g. 
a line, pattern, or contrast detected by line sensors, color detected by color sensors, objects in close proximity detected using IR-based proximity sensors and external communication detected via an RF antenna (e.g., a real-time user command received wirelessly from a remote control unit that is being operated by a human user of the robot, or from another nearby robot). These external stimuli may in a sense be overlaid on top of a base, autonomous behavior. For example, the base behavior may by to follow a line that is of uniform color; the external stimulus may be discontinuities in the line or color patterns within or adjacent to the line, [0040] programmed processor may be configured to perform an algorithm which governs the path that is chosen for the toy robot 10 to follow, as it senses a line segment, [0021] FIGS. 1-3, the base surface 14 may be the face of an electronic display screen of a computing device 16 such as a tablet computer or a smart phone. Alternatively however, the base surface 14 may be that of the top of a table or desk or a sheet lying on the table or desk. The toy robot 10 may be used with base surfaces that emit light (such as that of a tablet computer) as well as base surfaces that do not emit light but that reflect, such as that of a table, desktop, counter or a sheet lying thereon. The base surface 14 can either be part of a self-emitting device, which emits light, or it may be part of a non-emitting object. In one embodiment, the toy robot 10 has both capabilities in that it can follow a line segment on both types of base surfaces 14 and can seamlessly transition while following a line, as it moves from one type of surface to another) of the robot specific to each robotic shell and associated persona and to improve contextual awareness of the robot with respect to functionalities and operations relevant to the persona, receive modified personae and set of capabilities for the specific character of the shell based on the implementation of the one or more … algorithms that dynamically adapt trait data associated with the persona, wherein the trait data includes at least one of virtual manifestation data, audio data, accent data, dialogue data, and virtual effects data, wherein the robot continuously responds in real-time to an ongoing sequence of activities based on current persona, and wherein responses are adapted by the one or more machine learning algorithms, and implement the modified operative and functional capabilities for the base mechanism and the modified personae and the set of capabilities for the specific character of the shell to enhance experience of a user ([0065] the character software program may define a unique combination of colors or patterns for the lighting sub-system, a particular synthesized voice file or selected audio files for the robot's voice and a definition of a particular movement pattern, such that the toy robot 10 may now express emotion or respond in several different dimensions including sound, lighting, and movement (in response to any external stimulus that it detects, [0066] The robot can optionally be able to automatically detect that a character of the nearby robot is in the same “family” as its present character, or not in the same family, and can then interact with the other toy robot, differently depending on its character, e.g., voice response, visual response including movement of parts, movement of the body. 
As an example, the voice of the robot may be modified by these rules, such that a deeper voice is selected in some cases, and a higher pitched voice is selected in other cases. As an alternative or in addition, the lighting sub-system may be controlled differently so that for example the indicator lights 40 have a particular color combination and have greater intensity in some cases, and in other cases those indicator lights 40 are illuminated less intensely and/or at a different color. The programmed processor may revert back to a base set of rules once the toy robot 10 finds that it is no longer in proximity of the other toy robot). (i.e. the sensor data is used as input and the software/algorithm determines the movement and behavior of the robot which updates in response to any external stimulus it detects) by enabling virtual interaction between the robot and one or more external mobile devices to realize multiple scenarios including at least one of a gaming scenario ([0021] FIGS. 1-3, the base surface 14 may be the face of an electronic display screen of a computing device 16 such as a tablet computer or a smart phone. Alternatively however, the base surface 14 may be that of the top of a table or desk or a sheet lying on the table or desk. The toy robot 10 may be used with base surfaces that emit light (such as that of a tablet computer) as well as base surfaces that do not emit light but that reflect, such as that of a table, desktop, counter or a sheet lying thereon. The base surface 14 can either be part of a self-emitting device, which emits light, or it may be part of a non-emitting object. In one embodiment, the toy robot 10 has both capabilities in that it can follow a line segment on both types of base surfaces 14 and can seamlessly transition while following a line, as it moves from one type of surface to another, [0066] The robot can optionally be able to automatically detect that a character of the nearby robot is in the same “family” as its present character, or not in the same family, and can then interact with the other toy robot, differently depending on its character, e.g., voice response, visual response including movement of parts, movement of the body), an augmented reality experience, a mixed reality experience, and a virtual reality experience, wherein an application executing on an external mobile device allows the user to configure the robot based on one or more robotic shells and control the robot based on inputs from the user ([0005] external communication detected via an RF antenna (e.g., a real-time user command received wirelessly from a remote control unit that is being operated by a human user of the robot, or from another nearby robot)).

Kroyan does not teach "machine learning… are trained based on a training data set generated". Hickman teaches machine learning… are trained based on a training data set generated ([0077] a computing device may provide the sensor data as input to a machine learning model, and receive as an output an estimated trajectory of the physical object, [0018] systems and methods for generating training data sets for training machine learning models. Within examples, a robotic device may detect a collision involving a physical object in an environment, and responsively collect and package data pertaining to the collision as a training data set for a machine learning model.
After training up the machine learning model using the training data set and other training data sets, the robotic device (or another robotic device) can then use the model to predict when future collisions will occur and to take preventive actions to avoid colliding with physical objects.). (i.e. based on the machine learning model the robot will stop or change directions) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kroyan's teaching of a programmable robot having a changeable character with Hickman's teaching of machine learning to train the robot. The combined teaching provides an expected result of a programmable robot using machine learning to train it in changing characters. Therefore, one of ordinary skill in the art would be motivated because, by adding "intelligence as mentioned above to effectively transform the toy robot into a more sophisticated machine" as disclosed by Kroyan [0006], having the robot use machine learning in training would optimize the robot's functionalities, making it more accurate and autonomous.

Regarding claim 35, the combination of Kroyan and Hickman teach The robot of claim 34, wherein, each of the one or more shells comprise a distinctive shell, wherein each distinctive shell comprises a unique personae and a specific set of capabilities for a specific character, each of the one or more shells is removably coupled to the base mechanism such that the robot displays a different character with the coupling of the distinctive shell with the base mechanism (Kroyan, Fig. 6-9, [0042] The character of the robot may be changed by changing a robot character software program described below, which in turn changes the rules that govern how the robot reacts to detected codes, e.g., the speed, duration and specific movement pattern with which it reacts to a particular code that appears as an external stimulus, [0051] select the matching character from amongst several that are available (e.g., base, character 1, character 2, character 3) with which the processor will be configured, [0006] A range of different character skins may be produced, from a simple or lightweight version that may only have an appearance of a character from a known or other original audiovisual work of art, to a fully loaded version that may have many “bells and whistles”, and possibly even intelligence as mentioned above to effectively transform the toy robot into a more sophisticated machine, [0063] the toy robot 10 is given a voice (played through the speaker 46) that is unique to its present character skin (outer cover 50). In one embodiment, the different robot character software programs have electronically defined different voices, respectively, corresponding to their respective characters), and the processor of the base mechanism is configured to execute the computer readable instructions to, improve awareness of the robot in its environment based on implementation of the modified operative and functional capabilities for the base mechanism and the modified personae and the set of capabilities for the specific character for the distinctive shell ([0005] The robot's behavior includes actions that it takes in response to it detecting external stimuli, such as something that it detects using one or more built in sensors, e.g.
a line, pattern, or contrast detected by line sensors, color detected by color sensors, objects in close proximity detected using IR-based proximity sensors and external communication detected via an RF antenna (e.g., a real-time user command received wirelessly from a remote control unit that is being operated by a human user of the robot, or from another nearby robot). These external stimuli may in a sense be overlaid on top of a base, autonomous behavior. For example, the base behavior may by to follow a line that is of uniform color; the external stimulus may be discontinuities in the line or color patterns within or adjacent to the line, [0049] the new “character” of the toy robot 10 immediately and automatically comes to life once the outer cover 50 is fitted to the housing 25, resulting in the base behavior of the robot being modified or transformed to be consistent with that of its new character, [0066] The robot can optionally be able to automatically detect that a character of the nearby robot is in the same “family” as its present character, or not in the same family, and can then interact with the other toy robot, differently depending on its character, e.g., voice response, visual response including movement of parts, movement of the body. As an example, the voice of the robot may be modified by these rules, such that a deeper voice is selected in some cases, and a higher pitched voice is selected in other cases. As an alternative or in addition, the lighting sub-system may be controlled differently so that for example the indicator lights 40 have a particular color combination and have greater intensity in some cases, and in other cases those indicator lights 40 are illuminated less intensely and/or at a different color. The programmed processor may revert back to a base set of rules once the toy robot 10 finds that it is no longer in proximity of the other toy robot).). (i.e. the sensor data is used as input and the software/algorithm determines the movement and behavior of the robot which updates in response to any external stimulus it detects) Regarding claim 36 the combination of Kroyan and Hickman teach The robot of claim 34, wherein the processor of the base mechanism is configured to execute the computer readable instructions to, improve awareness of the robot by providing additional operative and functional capabilities for the base mechanism and a modified personae and additional set of capabilities associated with the respective shell (Kroyan, Fig. 6-9, [0042] The character of the robot may be changed by changing a robot character software program described below, which in turn changes the rules that govern how the robot reacts to detected codes, e.g., the speed, duration and specific movement pattern with which it reacts to a particular code that appears as an external stimulus, [0051] select the matching character from amongst several that are available (e.g., base, character 1, character 2, character 3) with which the processor will be configured, [0006] A range of different character skins may be produced, from a simple or lightweight version that may only have an appearance of a character from a known or other original audiovisual work of art, to a fully loaded version that may have many “bells and whistles”, and possibly even intelligence as mentioned above to effectively transform the toy robot into a more sophisticated machine, [0050] outer cover 50 can be fitted onto the housing 25 in order to change the behavior or character of the toy robot). 
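
For readers mapping the rejection onto Kroyan's mechanism, the sketch below illustrates one way the character-profile behavior quoted from [0065]-[0066] might look in code: a profile defines voice and lighting traits, the robot adapts them when it detects a nearby robot of the same or a different character "family", and it reverts to its base rules when the other robot leaves proximity. All names and values are illustrative assumptions, not Kroyan's implementation.

```python
from dataclasses import dataclass, replace

@dataclass
class CharacterProfile:
    """Robot character software program ('profile'), cf. Kroyan [0048], [0065]."""
    name: str
    family: str
    voice_pitch: float      # relative pitch of the synthesized voice
    light_color: str
    light_intensity: float  # 0.0 (off) to 1.0 (full)

def adapt_traits(profile: CharacterProfile,
                 nearby_family: str | None) -> CharacterProfile:
    """Adapt trait data based on a detected nearby robot (cf. [0066]);
    revert to the profile's base rules when nothing is in proximity."""
    if nearby_family is None:
        return profile  # base set of rules for the current character
    if nearby_family == profile.family:
        # Same 'family': e.g., brighter lights and a higher-pitched voice.
        return replace(profile, voice_pitch=profile.voice_pitch * 1.2,
                       light_intensity=1.0)
    # Different family: e.g., a deeper voice and dimmer lights.
    return replace(profile, voice_pitch=profile.voice_pitch * 0.8,
                   light_intensity=0.3)

hero = CharacterProfile("character 1", "heroes", 1.0, "blue", 0.5)
print(adapt_traits(hero, "heroes").voice_pitch)  # 1.2
```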
Regarding claim 37 the combination of Kroyan and Hickman teach The robot of claim 34, wherein, the one or … algorithms are trained based on data provided by one or more external devices that are different from the robot, the data received from the one or more external devices is in addition to the data provided by the one or more sensors, the one or more external devices are mobile devices (Kroyan, [0038] The toy robot 10 may further include a communications port 42 for receiving programming instructions that configure the behavior of the robot (how it responds to external stimuli). These may be received from a programming device, such as a computer, smart phone, tablet computer or other programming devices known in the art. There may be several aspects of the toy robot 10 capable of being modified via these programming instructions. For instance, the signals emitted by the indicator light 40 (e.g., color, intensity, and any combination thereof including for example flashing patterns) may be changed or assigned via programming instructions. Furthermore, the preset time period at which the power device 34 ceases supplying power to the rest of the electronic components in the housing 25 (the robot “goes to sleep”) may also be altered or modified through programming)…in real-time ([0005] external communication detected via an RF antenna (e.g., a real-time user command received wirelessly from a remote control unit that is being operated by a human user of the robot, or from another nearby robot, [0047] a real-time user command received wirelessly (by the RE module 31) from a remote control transmitter that is being operated by a human user, or a command or control signal received wirelessly (by the RF module 31) from another robot nearby ) , and the processor of the base mechanism is configured to execute the computer readable instructions to, improve awareness of the robot in its environment based on the implementation of the one or more machine learning algorithms trained on the data provided by the one or more external devices and the one or more sensors . (Kroyan, [0047] Examples of the external stimulus include the line segment 12 on the base surface 14 (see FIG. 4), a real-time user command received wirelessly (by the RE module 31) from a remote control transmitter that is being operated by a human user, or a command or control signal received wirelessly (by the RF module 31) from another robot nearby. The nearby robot may have been “detected” by the toy robot 10 using any combination of its available built-in sensors (e.g., a infrared proximity sensor or the RF module 31, [0048] In response to detecting an external stimulus, the programmed processor in the housing 25 of the toy robot 10 can automatically signal the propulsion sub-system to generate force so as to move the robot body, and/or the audio playback sub-system to produce sound. This behavior of the toy robot, namely its signaling of the propulsion sub-system or the audio playback sub-system or even as explained above, the signaling of the indicator light 40, or more generally its response to a detected external stimulus, is part of the character of the robot, which is governed by a robot character software program (also referred to as a profile). The profile had been previously downloaded as programming instructions for the processor, and is stored as part of the control unit 28 (e.g., within non-volatile memory such as flash memory)). 
Hickman further teaches machine learning… are trained ([0077] a computing device may provide the sensor data as input to a machine learning model, and receive as an output an estimated trajectory of the physical object, [0018] systems and methods for generating training data sets for training machine learning models. Within examples, a robotic device may detect a collision involving a physical object in an environment, and responsively collect and package data pertaining to the collision as a training data set for a machine learning model. After training up the machine learning model using the training data set and other training data sets, the robotic device (or another robotic device) can then use the model to predict when future collisions will occur and to take preventive actions to avoid colliding with physical objects.). (i.e. based on the machine learning model the robot will stop or change directions), configured to receive data from the one or more sensors ([0055] the log of sensor data may include images captured by a mobile computing device, such as a smartphone, tablet, wearable computing device, handheld camera computing device, etc. In addition, the log of sensor data may include sensor data captured from other sensors of the mobile computing device, such as IMUs or microphone arrays. In some instances, the mobile computing device may record the sensor data and provide the log of sensor data to another computing device… the mobile computing device may obtain the log of sensor data using sensors of the mobile device, [0023] trigger generation of training data sets in real-time when collisions occur) Regarding claim 38 the combination of Kroyan and Hickman teach The robot of claim 34, wherein, each one of the one or more shells comprises: a memory comprising computer readable instructions stored thereon; and at least one processor configured to execute the computer readable instructions to, load booting instructions from memory of the respective shell to the memory of the base mechanism (Kroyan, Fig. 10, [0006] The character skin may have integrated therein intelligence, in the form of a programmable processor that will communicate with a processor in the robot body housing, once the skin has been fitted onto the housing) , the processor of the base mechanism is configured to execute the computer readable instructions to, execute the booting instructions on the memory of the base mechanism ([0051] Each robot character software program or profile stored in the memory is assigned a separate or unique ID. The processor while executing the ID program may compare the stored IDs to a detected ID to find a match. In other words, the ID program may compare the detected ID to those of the various stored robot character software programs in order to then select the matching character from amongst several that are available (e.g., base, character 1, character 2, character 3) with which the processor will be configured) , and customize the specific set of capabilities associated with the personae of the specific character of the respective shell based on the execution of the booting instructions ([0048] This behavior of the toy robot, namely its signaling of the propulsion sub-system or the audio playback sub-system or even as explained above, the signaling of the indicator light 40, or more generally its response to a detected external stimulus, is part of the character of the robot, which is governed by a robot character software program (also referred to as a profile). 
The profile had been previously downloaded as programming instructions for the processor, and is stored as part of the control unit 28 (e.g., within non-volatile memory such as flash memory) . Regarding claim 39, the combination of Kroyan and Hickman teach The robot of claim 34, wherein the processor of the base mechanism is configured to execute the computer readable instructions to, translate the base mechanism around the environment while the respective shell is coupled to the base mechanism, the base mechanism comprises circuitry associated with core functionalities to the robot (Kroyan, Fig. 3-4, [0021] the toy robot 10 has both capabilities in that it can follow a line segment on both types of base surfaces 14 and can seamlessly transition while following a line, as it moves from one type of surface to another, [0051] Each robot character software program or profile stored in the memory is assigned a separate or unique ID. The processor while executing the ID program may compare the stored IDs to a detected ID to find a match. In other words, the ID program may compare the detected ID to those of the various stored robot character software programs in order to then select the matching character from amongst several that are available (e.g., base, character 1, character 2, character 3) with which the processor will be configured). Regarding claim 41, the combination of Kroyan and Hickman teach The robot of claim 34, wherein the processor of the base mechanism is configured to execute the computer readable instructions to, further adapt the personae and the specific set of capabilities associated with the specific character of the respective shell based on user feedback data in addition to the trait data adaption based on implementation of one or more … algorithms ([0049] the new “character” of the toy robot 10 immediately and automatically comes to life once the outer cover 50 is fitted to the housing 25, resulting in the base behavior of the robot being modified or transformed to be consistent with that of its new character, [0043] The programmed processor (part of the control unit 28) is in communication with the line sensor 30, so that the control unit 28 effectively senses or reads the patterns that appear on the base surface 14, and in response, based on previously determined rules, will automatically generate signals to the propulsion sub-system so that the latter generates the needed force to move the robot body in a desired way. This software for recognizing the various optical commands using the line sensor 30 may be updated on the control unit 28 as needed, and may be part of a wider encompassing “robot character program” that configures the programmed processor to control behavior of the toy robot. 
The character of the robot may be changed by changing a robot character software program described below, which in turn changes the rules that govern how the robot reacts to detected codes, e.g., the speed, duration and specific movement pattern with which it reacts to a particular code that appears as an external stimulus, [0040] The control unit 28 including its programmed processor may be configured to perform an algorithm which governs the path that is chosen for the toy robot 10 to follow, as it senses a line segment, [0065] the character software program may define a unique combination of colors or patterns for the lighting sub-system, a particular synthesized voice file or selected audio files for the robot's voice and a definition of a particular movement pattern, such that the toy robot 10 may now express emotion or respond in several different dimensions including sound, lighting, and movement (in response to any external stimulus that it detects, ([0005] external communication detected via an RF antenna (e.g., a real-time user command received wirelessly from a remote control unit that is being operated by a human user of the robot, or from another nearby robot). Hickman further teaches machine learning ([0077] a computing device may provide the sensor data as input to a machine learning model, and receive as an output an estimated trajectory of the physical object, [0018] systems and methods for generating training data sets for training machine learning models. Within examples, a robotic device may detect a collision involving a physical object in an environment, and responsively collect and package data pertaining to the collision as a training data set for a machine learning model. After training up the machine learning model using the training data set and other training data sets, the robotic device (or another robotic device) can then use the model to predict when future collisions will occur and to take preventive actions to avoid colliding with physical objects.). (i.e. based on the machine learning model the robot will stop or change directions) Regarding claim 42, the combination of Kroyan and Hickman teach The robot of claim 34, wherein processor of the base mechanism is further configured to execute the computer readable instructions to, customize the personae and the specific set of capabilities associated with the specific character of the robot based on removably coupling the shell to the base mechanism (Kroyan, Fig. 6-9, [0042] The character of the robot may be changed by changing a robot character software program described below, which in turn changes the rules that govern how the robot reacts to detected codes, e.g., the speed, duration and specific movement pattern with which it reacts to a particular code that appears as an external stimulus, [0050] outer cover 50 can be fitted onto the housing 25 in order to change the behavior or character of the toy robot, ([0050] The resulting detected identification causes a particular one of several available robot character software programs or profiles (also stored in the memory) to be selected in accordance with which the processor will become reconfigured to change the behavior of the toy robot 10) . 
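
Kroyan's ID-matching step cited above ([0050]-[0051]) is essentially a lookup: the ID detected from the fitted outer cover is compared against the IDs of the stored character programs, and the match reconfigures the robot. A minimal sketch, with assumed IDs and names:

```python
# Stored robot character software programs keyed by their unique IDs
# (cf. Kroyan [0051]); the IDs and names here are made up for illustration.
PROFILES = {
    "id-base": "base",
    "id-001": "character 1",
    "id-002": "character 2",
    "id-003": "character 3",
}

def select_character(detected_id: str) -> str:
    """Compare the detected cover ID to the stored profile IDs and select
    the matching character; fall back to base if nothing matches."""
    return PROFILES.get(detected_id, PROFILES["id-base"])

assert select_character("id-002") == "character 2"
```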
Regarding claim 43, the combination of Kroyan and Hickman teach The robot of claim 42, wherein the customization of the personae and the specific set of capabilities associated with the specific character of the robot is further based on the receiving of the modified operative capabilities for the base mechanism and the modified character and the personae of the shell upon implementation of the one or more … algorithms ([0049] the new “character” of the toy robot 10 immediately and automatically comes to life once the outer cover 50 is fitted to the housing 25, resulting in the base behavior of the robot being modified or transformed to be consistent with that of its new character, [0043] The programmed processor (part of the control unit 28) is in communication with the line sensor 30, so that the control unit 28 effectively senses or reads the patterns that appear on the base surface 14, and in response, based on previously determined rules, will automatically generate signals to the propulsion sub-system so that the latter generates the needed force to move the robot body in a desired way. This software for recognizing the various optical commands using the line sensor 30 may be updated on the control unit 28 as needed, and may be part of a wider encompassing “robot character program” that configures the programmed processor to control behavior of the toy robot. The character of the robot may be changed by changing a robot character software program described below, which in turn changes the rules that govern how the robot reacts to detected codes, e.g., the speed, duration and specific movement pattern with which it reacts to a particular code that appears as an external stimulus, [0040] The control unit 28 including its programmed processor may be configured to perform an algorithm which governs the path that is chosen for the toy robot 10 to follow, as it senses a line segment). Hickman further teaches machine learning ([0077] a computing device may provide the sensor data as input to a machine learning model, and receive as an output an estimated trajectory of the physical object, [0018] systems and methods for generating training data sets for training machine learning models. Within examples, a robotic device may detect a collision involving a physical object in an environment, and responsively collect and package data pertaining to the collision as a training data set for a machine learning model. After training up the machine learning model using the training data set and other training data sets, the robotic device (or another robotic device) can then use the model to predict when future collisions will occur and to take preventive actions to avoid colliding with physical objects.). (i.e. based on the machine learning model the robot will stop or change directions)

Regarding claim 44, Kroyan teaches A method, comprising: receive modified operative and functional capabilities for a base mechanism of a robot based on implementation of one or more … algorithms, the one or more … algorithms … based on data provided by one or more sensors and data from one or more external mobile devices to enhance persona and to improve contextual awareness of the robot with respect to functionalities and operations relevant to the persona; ([0005] The robot's behavior includes actions that it takes in response to it detecting external stimuli, such as something that it detects using one or more built in sensors, e.g.
a line, pattern, or contrast detected by line sensors, color detected by color sensors, objects in close proximity detected using IR-based proximity sensors and external communication detected via an RF antenna (e.g., a real-time user command received wirelessly from a remote control unit that is being operated by a human user of the robot, or from another nearby robot). These external stimuli may in a sense be overlaid on top of a base, autonomous behavior. For example, the base behavior may by to follow a line that is of uniform color; the external stimulus may be discontinuities in the line or color patterns within or adjacent to the line, [0040] programmed processor may be configured to perform an algorithm which governs the path that is chosen for the toy robot 10 to follow, as it senses a line segment, [0021] FIGS. 1-3, the base surface 14 may be the face of an electronic display screen of a computing device 16 such as a tablet computer or a smart phone. Alternatively however, the base surface 14 may be that of the top of a table or desk or a sheet lying on the table or desk. The toy robot 10 may be used with base surfaces that emit light (such as that of a tablet computer) as well as base surfaces that do not emit light but that reflect, such as that of a table, desktop, counter or a sheet lying thereon. The base surface 14 can either be part of a self-emitting device, which emits light, or it may be part of a non-emitting object. In one embodiment, the toy robot 10 has both capabilities in that it can follow a line segment on both types of base surfaces 14 and can seamlessly transition while following a line, as it moves from one type of surface to another); receive modified personae and set of capabilities for a specific character of a shell based on the implementation of the one or more machine learning algorithms that dynamically adapt trait data associated with the persona, wherein the trait data includes at least one of virtual manifestation data, audio data, accent data, dialogue data, and virtual effects data, wherein the robot continuously responds in real-time to an ongoing sequence of activities based on current persona, and wherein responses are adapted by the one or more machine learning algorithms ; and implement the modified operative and functional capabilities for the base mechanism and the modified personae and the set of capabilities for the specific character of the shell to enhance experience of a user ([0065] the character software program may define a unique combination of colors or patterns for the lighting sub-system, a particular synthesized voice file or selected audio files for the robot's voice and a definition of a particular movement pattern, such that the toy robot 10 may now express emotion or respond in several different dimensions including sound, lighting, and movement (in response to any external stimulus that it detects, [0066] The robot can optionally be able to automatically detect that a character of the nearby robot is in the same “family” as its present character, or not in the same family, and can then interact with the other toy robot, differently depending on its character, e.g., voice response, visual response including movement of parts, movement of the body. As an example, the voice of the robot may be modified by these rules, such that a deeper voice is selected in some cases, and a higher pitched voice is selected in other cases. 
As an alternative or in addition, the lighting sub-system may be controlled differently so that for example the indicator lights 40 have a particular color combination and have greater intensity in some cases, and in other cases those indicator lights 40 are illuminated less intensely and/or at a different color. The programmed processor may revert back to a base set of rules once the toy robot 10 finds that it is no longer in proximity of the other toy robot).). (i.e. the sensor data is used as input and the software/algorithm determines the movement and behavior of the robot which updates in response to any external stimulus it detects) by enabling virtual interaction between the robot and one or more external mobile devices to realize multiple scenarios including at least one of a gaming scenario ([0021] FIGS. 1-3, the base surface 14 may be the face of an electronic display screen of a computing device 16 such as a tablet computer or a smart phone. Alternatively however, the base surface 14 may be that of the top of a table or desk or a sheet lying on the table or desk. The toy robot 10 may be used with base surfaces that emit light (such as that of a tablet computer) as well as base surfaces that do not emit light but that reflect, such as that of a table, desktop, counter or a sheet lying thereon. The base surface 14 can either be part of a self-emitting device, which emits light, or it may be part of a non-emitting object. In one embodiment, the toy robot 10 has both capabilities in that it can follow a line segment on both types of base surfaces 14 and can seamlessly transition while following a line, as it moves from one type of surface to another, [0066] The robot can optionally be able to automatically detect that a character of the nearby robot is in the same “family” as its present character, or not in the same family, and can then interact with the other toy robot, differently depending on its character, e.g., voice response, visual response including movement of parts, movement of the body), an augmented reality experience, a mixed reality experience, and a virtual reality experience, wherein an application executing on an external mobile device allows for the user to configure the robot based on one or more robotic shells and control the robot based on inputs from the user, ([0065] the character software program may define a unique combination of colors or patterns for the lighting sub-system, a particular synthesized voice file or selected audio files for the robot's voice and a definition of a particular movement pattern, such that the toy robot 10 may now express emotion or respond in several different dimensions including sound, lighting, and movement (in response to any external stimulus that it detects, [0005] external communication detected via an RF antenna (e.g., a real-time user command received wirelessly from a remote control unit that is being operated by a human user of the robot, or from another nearby robot).), wherein, the robot comprises the base mechanism, one or more shells, and the one or more sensors, the base mechanism comprising a processor and a memory, the memory comprising computer readable instructions stored thereon, the base mechanism configured to operate and function within an environment, the one or more shells coupled to the base mechanism (Fig. 6-10, [0021] the base surface 14 may be the face of an electronic display screen of a computing device 16 such as a tablet computer or a smart phone. 
Alternatively however, the base surface 14 may be that of the top of a table or desk or a sheet lying on the table or desk. The toy robot 10 may be used with base surfaces that emit light (such as that of a tablet computer) as well as base surfaces that do not emit light but that reflect, such as that of a table, desktop, counter or a sheet lying thereon), each respective shell of one or more shells are configured with information related to the personae and the specific set of capabilities associated with the specific character (Fig. 6-9, [0037] the robot character software program (profile), where different profiles can be defined that have different indicator light patterns each being consistent with the particular character to which the profile is assigned, [0042] The character of the robot may be changed by changing a robot character software program described below, which in turn changes the rules that govern how the robot reacts to detected codes, e.g., the speed, duration and specific movement pattern with which it reacts to a particular code that appears as an external stimulus, [0051] select the matching character from amongst several that are available (e.g., base, character 1, character 2, character 3) with which the processor will be configured, [0006] A range of different character skins may be produced, from a simple or lightweight version that may only have an appearance of a character from a known or other original audiovisual work of art, to a fully loaded version that may have many “bells and whistles”, and possibly even intelligence as mentioned above to effectively transform the toy robot into a more sophisticated machine, [0063] the toy robot 10 is given a voice (played through the speaker 46) that is unique to its present character skin (outer cover 50). In one embodiment, the different robot character software programs have electronically defined different voices, respectively, corresponding to their respective characters), and the one or more sensors coupled to at least one of the base mechanism and the shell, the one or more sensors configured to capture data of the environment of the robot (Fig. 10, [0047] operate together so that the programmed processor can detect an external stimulus using a sensor (e.g., the line sensor 30, or an RF module 31 having an antenna)).

Kroyan does not teach "machine learning… are trained based on a training data set generated". Hickman teaches machine learning… are trained ([0077] a computing device may provide the sensor data as input to a machine learning model, and receive as an output an estimated trajectory of the physical object, [0018] systems and methods for generating training data sets for training machine learning models. Within examples, a robotic device may detect a collision involving a physical object in an environment, and responsively collect and package data pertaining to the collision as a training data set for a machine learning model. After training up the machine learning model using the training data set and other training data sets, the robotic device (or another robotic device) can then use the model to predict when future collisions will occur and to take preventive actions to avoid colliding with physical objects.). (i.e.
based on the machine learning model the robot will stop or change directions) Regarding claim 45 the combination of Kroyan and Hickman teach The method of claim 44, further comprising: improving awareness of the robot in its environment based on implementation of the modified operative and functional capabilities for the base mechanism and the modified personae and the set of capabilities for the specific character for a distinctive shell, wherein, each of the one or more shells comprises the distinctive shell, wherein each distinctive shell comprises a unique personae and a specific set of capabilities for a specific character, each of the one or more shells is removably coupled to the base mechanism such that the robot displays a different character with the coupling of the distinctive shell with the base mechanism (Kroyan, [0005] The robot's behavior includes actions that it takes in response to it detecting external stimuli, such as something that it detects using one or more built in sensors, e.g. a line, pattern, or contrast detected by line sensors, color detected by color sensors, objects in close proximity detected using IR-based proximity sensors and external communication detected via an RF antenna (e.g., a real-time user command received wirelessly from a remote control unit that is being operated by a human user of the robot, or from another nearby robot). These external stimuli may in a sense be overlaid on top of a base, autonomous behavior. For example, the base behavior may by to follow a line that is of uniform color; the external stimulus may be discontinuities in the line or color patterns within or adjacent to the line, [0049] the new “character” of the toy robot 10 immediately and automatically comes to life once the outer cover 50 is fitted to the housing 25, resulting in the base behavior of the robot being modified or transformed to be consistent with that of its new character, [0066] The robot can optionally be able to automatically detect that a character of the nearby robot is in the same “family” as its present character, or not in the same family, and can then interact with the other toy robot, differently depending on its character, e.g., voice response, visual response including movement of parts, movement of the body. As an example, the voice of the robot may be modified by these rules, such that a deeper voice is selected in some cases, and a higher pitched voice is selected in other cases. As an alternative or in addition, the lighting sub-system may be controlled differently so that for example the indicator lights 40 have a particular color combination and have greater intensity in some cases, and in other cases those indicator lights 40 are illuminated less intensely and/or at a different color. The programmed processor may revert back to a base set of rules once the toy robot 10 finds that it is no longer in proximity of the other toy robot).). (i.e. the sensor data is used as input and the software/algorithm determines the movement and behavior of the robot which updates in response to any external stimulus it detects) Regarding claim 46 the combination of Kroyan and Hickman teach The method of claim 44, further comprising: improving awareness of the robot by providing additional operative and functional capabilities for the base mechanism and a modified personae and additional set of capabilities associated with the respective shell (Kroyan, Fig. 
Regarding claim 46, the combination of Kroyan and Hickman teaches The method of claim 44, further comprising: improving awareness of the robot by providing additional operative and functional capabilities for the base mechanism and a modified personae and additional set of capabilities associated with the respective shell (Kroyan, Fig. 6-9, [0042] The character of the robot may be changed by changing a robot character software program described below, which in turn changes the rules that govern how the robot reacts to detected codes, e.g., the speed, duration and specific movement pattern with which it reacts to a particular code that appears as an external stimulus, [0051] select the matching character from amongst several that are available (e.g., base, character 1, character 2, character 3) with which the processor will be configured, [0006] A range of different character skins may be produced, from a simple or lightweight version that may only have an appearance of a character from a known or other original audiovisual work of art, to a fully loaded version that may have many “bells and whistles”, and possibly even intelligence as mentioned above to effectively transform the toy robot into a more sophisticated machine, [0050] outer cover 50 can be fitted onto the housing 25 in order to change the behavior or character of the toy robot).

Regarding claim 47, the combination of Kroyan and Hickman teaches The method of claim 44, further comprising: improving awareness of the robot in its environment based on the implementation of the one or more … algorithms trained on the data provided by one or more external devices and the one or more sensors, wherein, the one or … algorithms are trained based on data provided by the one or more external devices that are different from the robot, the data received from the one or more external devices is in addition to the data provided by the one or more sensors, the one or more external devices are mobile devices … (Kroyan, [0047] Examples of the external stimulus include the line segment 12 on the base surface 14 (see FIG. 4), a real-time user command received wirelessly (by the RF module 31) from a remote control transmitter that is being operated by a human user, or a command or control signal received wirelessly (by the RF module 31) from another robot nearby. The nearby robot may have been “detected” by the toy robot 10 using any combination of its available built-in sensors (e.g., an infrared proximity sensor or the RF module 31), [0048] In response to detecting an external stimulus, the programmed processor in the housing 25 of the toy robot 10 can automatically signal the propulsion sub-system to generate force so as to move the robot body, and/or the audio playback sub-system to produce sound. This behavior of the toy robot, namely its signaling of the propulsion sub-system or the audio playback sub-system or even as explained above, the signaling of the indicator light 40, or more generally its response to a detected external stimulus, is part of the character of the robot, which is governed by a robot character software program (also referred to as a profile). The profile had been previously downloaded as programming instructions for the processor, and is stored as part of the control unit 28 (e.g., within non-volatile memory such as flash memory)). Hickman further teaches machine learning ([0077] a computing device may provide the sensor data as input to a machine learning model, and receive as an output an estimated trajectory of the physical object, [0018] systems and methods for generating training data sets for training machine learning models. Within examples, a robotic device may detect a collision involving a physical object in an environment, and responsively collect and package data pertaining to the collision as a training data set for a machine learning model. After training up the machine learning model using the training data set and other training data sets, the robotic device (or another robotic device) can then use the model to predict when future collisions will occur and to take preventive actions to avoid colliding with physical objects.). (i.e., based on the machine learning model the robot will stop or change directions.) Hickman further teaches being configured to receive data from the one or more sensors ([0055] the log of sensor data may include images captured by a mobile computing device, such as a smartphone, tablet, wearable computing device, handheld camera computing device, etc. In addition, the log of sensor data may include sensor data captured from other sensors of the mobile computing device, such as IMUs or microphone arrays. In some instances, the mobile computing device may record the sensor data and provide the log of sensor data to another computing device… the mobile computing device may obtain the log of sensor data using sensors of the mobile device, [0023] trigger generation of training data sets in real-time when collisions occur).
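For the claim 47 mapping, Hickman [0055] describes sensor logs captured by a mobile computing device being provided alongside the robot's own sensor data. A minimal sketch of that aggregation, merging the two logs into one training set ordered by timestamp (field names are hypothetical):

```python
# Minimal sketch of merging external mobile-device data (Hickman [0055])
# with the robot's own sensor data. Field names are hypothetical.
robot_log = [
    {"source": "robot", "t": 0.0, "proximity_cm": 12.0},
    {"source": "robot", "t": 0.1, "proximity_cm": 8.0},
]
phone_log = [
    {"source": "smartphone", "t": 0.05, "imu_accel": 0.4},
    {"source": "smartphone", "t": 0.15, "imu_accel": 1.9},
]

# The external-device data is *in addition to* the robot's sensor data,
# so the merged training set simply interleaves both streams by timestamp.
training_set = sorted(robot_log + phone_log, key=lambda row: row["t"])
for row in training_set:
    print(row)
```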
Regarding claim 48, the combination of Kroyan and Hickman teaches The method of claim 44, further comprising: executing a booting instructions on the memory of the base mechanism (Kroyan, [0051] Each robot character software program or profile stored in the memory is assigned a separate or unique ID. The processor while executing the ID program may compare the stored IDs to a detected ID to find a match. In other words, the ID program may compare the detected ID to those of the various stored robot character software programs in order to then select the matching character from amongst several that are available (e.g., base, character 1, character 2, character 3) with which the processor will be configured), and customizing the specific set of capabilities associated with the personae of the specific character of the respective shell based on the execution of the booting instructions ([0048] This behavior of the toy robot, namely its signaling of the propulsion sub-system or the audio playback sub-system or even as explained above, the signaling of the indicator light 40, or more generally its response to a detected external stimulus, is part of the character of the robot, which is governed by a robot character software program (also referred to as a profile). The profile had been previously downloaded as programming instructions for the processor, and is stored as part of the control unit 28 (e.g., within non-volatile memory such as flash memory)).

Regarding claim 49, the combination of Kroyan and Hickman teaches The method of claim 44, further comprising: translating the base mechanism around the environment while the respective shell is coupled to the base mechanism, the base mechanism comprises circuitry associated with core functionalities to the robot (Kroyan, Fig. 3-4, [0021] the toy robot 10 has both capabilities in that it can follow a line segment on both types of base surfaces 14 and can seamlessly transition while following a line, as it moves from one type of surface to another, [0051] Each robot character software program or profile stored in the memory is assigned a separate or unique ID. The processor while executing the ID program may compare the stored IDs to a detected ID to find a match. In other words, the ID program may compare the detected ID to those of the various stored robot character software programs in order to then select the matching character from amongst several that are available (e.g., base, character 1, character 2, character 3) with which the processor will be configured).
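The ID-matching step quoted from Kroyan [0051] in the claim 48 and 49 mappings amounts to a lookup at boot: compare the detected shell ID against the IDs of the stored character profiles, and configure the processor with the match. A minimal sketch under that reading, with hypothetical profile contents:

```python
# Minimal sketch of the boot-time ID matching described in Kroyan [0051].
# Profile IDs and contents are hypothetical.
PROFILES = {
    "ID-0": {"name": "base",        "voice": "neutral", "speed": 1.0},
    "ID-1": {"name": "character 1", "voice": "deep",    "speed": 1.5},
    "ID-2": {"name": "character 2", "voice": "high",    "speed": 0.8},
}

def boot(detected_shell_id: str) -> dict:
    # Compare the detected ID to the stored profile IDs; fall back to the
    # base profile when no stored ID matches.
    profile = PROFILES.get(detected_shell_id, PROFILES["ID-0"])
    print(f"configuring persona: {profile['name']}")
    return profile

active = boot("ID-2")  # the shell fitted onto the housing reports its ID
```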
Regarding claim 51, the combination of Kroyan and Hickman teaches The method of claim 44, further comprising: adapting the personae and the specific set of capabilities associated with the specific character of the respective shell based on user feedback data in addition to the trait data adaptation based on implementation of one or more … algorithms ([0049] the new “character” of the toy robot 10 immediately and automatically comes to life once the outer cover 50 is fitted to the housing 25, resulting in the base behavior of the robot being modified or transformed to be consistent with that of its new character, [0043] The programmed processor (part of the control unit 28) is in communication with the line sensor 30, so that the control unit 28 effectively senses or reads the patterns that appear on the base surface 14, and in response, based on previously determined rules, will automatically generate signals to the propulsion sub-system so that the latter generates the needed force to move the robot body in a desired way. This software for recognizing the various optical commands using the line sensor 30 may be updated on the control unit 28 as needed, and may be part of a wider encompassing “robot character program” that configures the programmed processor to control behavior of the toy robot. The character of the robot may be changed by changing a robot character software program described below, which in turn changes the rules that govern how the robot reacts to detected codes, e.g., the speed, duration and specific movement pattern with which it reacts to a particular code that appears as an external stimulus, [0040] The control unit 28 including its programmed processor may be configured to perform an algorithm which governs the path that is chosen for the toy robot 10 to follow, as it senses a line segment, [0065] the character software program may define a unique combination of colors or patterns for the lighting sub-system, a particular synthesized voice file or selected audio files for the robot's voice and a definition of a particular movement pattern, such that the toy robot 10 may now express emotion or respond in several different dimensions including sound, lighting, and movement (in response to any external stimulus that it detects), [0005] external communication detected via an RF antenna (e.g., a real-time user command received wirelessly from a remote control unit that is being operated by a human user of the robot, or from another nearby robot)). Hickman further teaches machine learning ([0077] a computing device may provide the sensor data as input to a machine learning model, and receive as an output an estimated trajectory of the physical object, [0018] systems and methods for generating training data sets for training machine learning models. Within examples, a robotic device may detect a collision involving a physical object in an environment, and responsively collect and package data pertaining to the collision as a training data set for a machine learning model. After training up the machine learning model using the training data set and other training data sets, the robotic device (or another robotic device) can then use the model to predict when future collisions will occur and to take preventive actions to avoid colliding with physical objects.). (i.e., based on the machine learning model the robot will stop or change directions.)
Regarding claim 52, the combination of Kroyan and Hickman teaches The method of claim 44, further comprising: customizing the personae and the specific set of capabilities associated with the specific character of the robot based on removably coupling the shell to the base mechanism (Kroyan, Fig. 6-9, [0042] The character of the robot may be changed by changing a robot character software program described below, which in turn changes the rules that govern how the robot reacts to detected codes, e.g., the speed, duration and specific movement pattern with which it reacts to a particular code that appears as an external stimulus, [0050] outer cover 50 can be fitted onto the housing 25 in order to change the behavior or character of the toy robot, [0050] The resulting detected identification causes a particular one of several available robot character software programs or profiles (also stored in the memory) to be selected in accordance with which the processor will become reconfigured to change the behavior of the toy robot 10).

Regarding claim 53, the combination of Kroyan and Hickman teaches The method of claim 52, wherein the customization of the personae and the specific set of capabilities associated with the specific character of the robot is further based on the receiving of the modified operative capabilities for the base mechanism and the modified character and the personae of the shell upon implementation of the one or more … algorithms ([0049] the new “character” of the toy robot 10 immediately and automatically comes to life once the outer cover 50 is fitted to the housing 25, resulting in the base behavior of the robot being modified or transformed to be consistent with that of its new character, [0043] The programmed processor (part of the control unit 28) is in communication with the line sensor 30, so that the control unit 28 effectively senses or reads the patterns that appear on the base surface 14, and in response, based on previously determined rules, will automatically generate signals to the propulsion sub-system so that the latter generates the needed force to move the robot body in a desired way. This software for recognizing the various optical commands using the line sensor 30 may be updated on the control unit 28 as needed, and may be part of a wider encompassing “robot character program” that configures the programmed processor to control behavior of the toy robot. The character of the robot may be changed by changing a robot character software program described below, which in turn changes the rules that govern how the robot reacts to detected codes, e.g., the speed, duration and specific movement pattern with which it reacts to a particular code that appears as an external stimulus, [0040] The control unit 28 including its programmed processor may be configured to perform an algorithm which governs the path that is chosen for the toy robot 10 to follow, as it senses a line segment). Hickman further teaches machine learning ([0077] a computing device may provide the sensor data as input to a machine learning model, and receive as an output an estimated trajectory of the physical object, [0018] systems and methods for generating training data sets for training machine learning models. Within examples, a robotic device may detect a collision involving a physical object in an environment, and responsively collect and package data pertaining to the collision as a training data set for a machine learning model. After training up the machine learning model using the training data set and other training data sets, the robotic device (or another robotic device) can then use the model to predict when future collisions will occur and to take preventive actions to avoid colliding with physical objects.). (i.e., based on the machine learning model the robot will stop or change directions.)
Claims 40 and 50 are rejected under 35 U.S.C. 103 as being unpatentable over Kroyan et al. (US20200129875A1, herein Kroyan), in view of Hickman et al. (US20190077019A1, herein Hickman), and in further view of Kuffner (US20160055677).

Regarding claim 40, the combination of Kroyan and Hickman teaches The robot of claim 34, wherein the processor of the base mechanism is configured to execute the computer readable instructions to, adapt functionality of the base mechanism based on implementation of one or more … algorithms (Kroyan, [0049] the new “character” of the toy robot 10 immediately and automatically comes to life once the outer cover 50 is fitted to the housing 25, resulting in the base behavior of the robot being modified or transformed to be consistent with that of its new character, [0043] The programmed processor (part of the control unit 28) is in communication with the line sensor 30, so that the control unit 28 effectively senses or reads the patterns that appear on the base surface 14, and in response, based on previously determined rules, will automatically generate signals to the propulsion sub-system so that the latter generates the needed force to move the robot body in a desired way. This software for recognizing the various optical commands using the line sensor 30 may be updated on the control unit 28 as needed, and may be part of a wider encompassing “robot character program” that configures the programmed processor to control behavior of the toy robot. The character of the robot may be changed by changing a robot character software program described below, which in turn changes the rules that govern how the robot reacts to detected codes, e.g., the speed, duration and specific movement pattern with which it reacts to a particular code that appears as an external stimulus, [0040] The control unit 28 including its programmed processor may be configured to perform an algorithm which governs the path that is chosen for the toy robot 10 to follow, as it senses a line segment, [0065] the character software program may define a unique combination of colors or patterns for the lighting sub-system, a particular synthesized voice file or selected audio files for the robot's voice and a definition of a particular movement pattern, such that the toy robot 10 may now express emotion or respond in several different dimensions including sound, lighting, and movement (in response to any external stimulus that it detects)), and establish a real-time bi-directional data channel with one or more external devices to enable collaborative gameplay via in gaming mechanics ([0044] The toy robot 10 adds an entertaining feature to the use of conventional screen-based devices such as smartphones and tablet computers, in that it provides a concrete, three-dimensional object which moves on the display surface of the smartphone or tablet computer, when a user is interacting with the smartphone or tablet computer to create a specific line segment 12 (shown on the display surface) for the robot to follow. The user is thus not solely engaged with the two-dimensional display screen of the smartphone or tablet computer, but is also engaged with a three-dimensional entertainment unit, which is a more interesting and challenging combination for the user, especially a child, [0005] objects in close proximity detected using IR-based proximity sensors and external communication detected via an RF antenna (e.g., a real-time user command received wirelessly from a remote control unit that is being operated by a human user of the robot, or from another nearby robot), [0047] Examples of the external stimulus include the line segment 12 on the base surface 14 (see FIG. 4), a real-time user command received wirelessly (by the RF module 31) from a remote control transmitter that is being operated by a human user, [0067] digital communication signals are transmitted and sent to an external device using a wireless link (e.g., a Bluetooth communication protocol)). Hickman further teaches machine learning ([0077] a computing device may provide the sensor data as input to a machine learning model, and receive as an output an estimated trajectory of the physical object, [0018] systems and methods for generating training data sets for training machine learning models. Within examples, a robotic device may detect a collision involving a physical object in an environment, and responsively collect and package data pertaining to the collision as a training data set for a machine learning model. After training up the machine learning model using the training data set and other training data sets, the robotic device (or another robotic device) can then use the model to predict when future collisions will occur and to take preventive actions to avoid colliding with physical objects.). (i.e., based on the machine learning model the robot will stop or change directions.) The combination of Kroyan and Hickman does not teach …augmented reality overlays synchronized with movement of the robot… Kuffner teaches augmented reality overlays synchronized with movement of the robot ([0125] as the robotic device moves, nearby computing devices may have augmented reality interfaces that synchronize with a state of the robotic device so as to display future/planned trajectories or actions of the robotic device, [0034] An augmented reality interface is then used to provide feedback on a state and future movements of the robotic device moving in a physical world. The interface includes overlays to digitally annotate a video feed or wearer-view with information (semantic information). The information may indicate a navigation intent, a set of future footsteps, or a stripe on the floor for a wheeled robot that covers a future trajectory of the robot, for example. In instances in which the robotic device is interacting or going to interact with an object, the object to be manipulated could be highlighted in the overlay). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Kroyan's and Hickman's teaching of a programmable robot having a changeable character using machine learning to train the robot with Kuffner's teaching of incorporating an augmented reality interface which synchronizes with the actions of the robot. The combined teaching provides an expected result of a programmable robot having a changeable character that incorporates machine learning and augmented reality. Therefore, one of ordinary skill in the art would have been motivated to improve the user experience by expanding the gaming potential.
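Kuffner's synchronization ([0125], [0034]) relied on for claim 40 can be read as a publish/subscribe pattern: the robot publishes its planned trajectory, and each nearby device's augmented reality interface redraws its overlay on every update. A minimal sketch under that reading, with an in-process callback list standing in for the real-time wireless link (all names are hypothetical):

```python
# Minimal sketch of trajectory-synchronized AR overlays (Kuffner [0125],
# [0034]). The "channel" is an in-process callback list; a real system
# would use a wireless link. All names are hypothetical.
class TrajectoryChannel:
    def __init__(self):
        self.subscribers = []          # AR interfaces on nearby devices

    def publish(self, waypoints):
        for render in self.subscribers:
            render(waypoints)          # keep every overlay in sync

def ar_overlay(waypoints):
    # Digitally annotate the camera view with the robot's future path,
    # e.g., the "stripe on the floor" described in Kuffner [0034].
    stripe = " -> ".join(f"({x},{y})" for x, y in waypoints)
    print(f"overlay stripe: {stripe}")

channel = TrajectoryChannel()
channel.subscribers.append(ar_overlay)
channel.publish([(0, 0), (1, 0), (1, 1)])   # robot re-plans; overlay updates
```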
Regarding claim 50, the combination of Kroyan and Hickman teaches The method of claim 44, further comprising: adapting functionality of the base mechanism based on implementation of one or more … algorithms; and establishing a real-time bidirectional data channel with one or more external devices to enable collaborative gameplay via in gaming mechanics (Kroyan, [0044] The toy robot 10 adds an entertaining feature to the use of conventional screen-based devices such as smartphones and tablet computers, in that it provides a concrete, three-dimensional object which moves on the display surface of the smartphone or tablet computer, when a user is interacting with the smartphone or tablet computer to create a specific line segment 12 (shown on the display surface) for the robot to follow. The user is thus not solely engaged with the two-dimensional display screen of the smartphone or tablet computer, but is also engaged with a three-dimensional entertainment unit, which is a more interesting and challenging combination for the user, especially a child, [0005] objects in close proximity detected using IR-based proximity sensors and external communication detected via an RF antenna (e.g., a real-time user command received wirelessly from a remote control unit that is being operated by a human user of the robot, or from another nearby robot), [0047] Examples of the external stimulus include the line segment 12 on the base surface 14 (see FIG. 4), a real-time user command received wirelessly (by the RF module 31) from a remote control transmitter that is being operated by a human user, [0067] digital communication signals are transmitted and sent to an external device using a wireless link (e.g., a Bluetooth communication protocol)). Hickman further teaches machine learning ([0077] a computing device may provide the sensor data as input to a machine learning model, and receive as an output an estimated trajectory of the physical object, [0018] systems and methods for generating training data sets for training machine learning models. Within examples, a robotic device may detect a collision involving a physical object in an environment, and responsively collect and package data pertaining to the collision as a training data set for a machine learning model. After training up the machine learning model using the training data set and other training data sets, the robotic device (or another robotic device) can then use the model to predict when future collisions will occur and to take preventive actions to avoid colliding with physical objects.). (i.e., based on the machine learning model the robot will stop or change directions.) The combination of Kroyan and Hickman does not teach …augmented reality overlays synchronized with movement of the robot… Kuffner teaches augmented reality overlays synchronized with movement of the robot ([0125] as the robotic device moves, nearby computing devices may have augmented reality interfaces that synchronize with a state of the robotic device so as to display future/planned trajectories or actions of the robotic device, [0034] An augmented reality interface is then used to provide feedback on a state and future movements of the robotic device moving in a physical world. The interface includes overlays to digitally annotate a video feed or wearer-view with information (semantic information). The information may indicate a navigation intent, a set of future footsteps, or a stripe on the floor for a wheeled robot that covers a future trajectory of the robot, for example. In instances in which the robotic device is interacting or going to interact with an object, the object to be manipulated could be highlighted in the overlay).
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Gewecke (US20190224853) discloses control of a social robot based on prior character portrayal; the robot may be configured to learn different trait values or to develop its own personality through machine learning.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YVONNE T FOLLANSBEE whose telephone number is (571) 272-0634. The examiner can normally be reached Monday - Friday, 1pm - 9pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert Fennema, can be reached at (571) 272-2748. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YVONNE TRANG FOLLANSBEE/
Examiner, Art Unit 2117

/Christopher E. Everett/
Primary Examiner, Art Unit 2117

Prosecution Timeline

Oct 10, 2019
Application Filed
Apr 21, 2022
Non-Final Rejection — §103
Nov 02, 2022
Response Filed
Dec 20, 2022
Final Rejection — §103
Jul 10, 2023
Request for Continued Examination
Jul 14, 2023
Response after Non-Final Action
Jul 31, 2023
Non-Final Rejection — §103
Jan 09, 2024
Response Filed
Feb 17, 2024
Final Rejection — §103
May 23, 2024
Request for Continued Examination
Jun 05, 2024
Response after Non-Final Action
Jun 23, 2024
Non-Final Rejection — §103
Oct 02, 2024
Interview Requested
Oct 31, 2024
Applicant Interview (Telephonic)
Nov 03, 2024
Examiner Interview Summary
Dec 03, 2024
Response Filed
Mar 17, 2025
Final Rejection — §103
Jun 10, 2025
Examiner Interview Summary
Jun 23, 2025
Request for Continued Examination
Jun 25, 2025
Response after Non-Final Action
Aug 07, 2025
Non-Final Rejection — §103
Nov 03, 2025
Response Filed
Jan 24, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547151
COMPENSATION FOR ADDITIVE MANUFACTURING
2y 5m to grant Granted Feb 10, 2026
Patent 12487586
Online water pump control and management system based on remote control
2y 5m to grant Granted Dec 02, 2025
Patent 12472693
ADDITIVE MANUFACTURING-COUPLED DIGITAL TWIN ECOSYSTEM
2y 5m to grant Granted Nov 18, 2025
Patent 12468277
INFORMATION PROCESSING DEVICE FOR OPTIMIZING FILTERS THAT PURIFY WASTEWATER
2y 5m to grant Granted Nov 11, 2025
Patent 12443162
SIMPLIFIED TUNING OF 3D PRINTERS
2y 5m to grant Granted Oct 14, 2025
Based on the 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
57%
Grant Probability
84%
With Interview (+26.4%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 105 resolved cases by this examiner. Grant probability derived from career allow rate.
