Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office action is in response to the amendment filed on 03/03/2026. Claims 1, 4-6, 8-13, and 16-23 are currently pending with claims 1, 4-6, 8, 13, and 16-21 being amended, claims 2-3, 7, and 14-15 being cancelled, and claims 22-23 being newly added.
Response to Amendment
The amendments to the claims submitted on 03/03/2026 overcome the claim objections set forth in the previous Office action, except for those set forth in the claim objections section of this Office action.
Response to Arguments
Examiner notes that Applicant's arguments are directed to the newly amended limitations, which had not been addressed by the prior art of record. As such, Examiner has augmented the rejection(s) below in view of the prior art of record to address the newly amended limitations.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8-13, 20, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi et al. (US 20220066456 A1), hereinafter Ebrahimi, in view of Lee et al. (US 20190068393 A1), hereinafter Lee, and Duffley et al. (US 20160282862 A1), hereinafter Duffley.
Regarding claim 1, Ebrahimi teaches:
1. (Currently Amended) An interactive method, wherein the interactive method is applied to a terminal device that establishes a connection with a robot in advance, (Paragraph 0240, "In some embodiments, the robot may be wheeled (e.g., rigidly fixed, suspended fixed, steerable, suspended steerable, caster, or suspended caster), legged, or tank tracked. In some embodiments, the wheels, legs, tracks, etc. of the robot may be controlled individually or controlled in pairs (e.g., like cars) or in groups of other sizes, such as three or four as in omnidirectional wheels. In some embodiments, the robot may use differential-drive wherein two fixed wheels have a common axis of rotation and angular velocities of the two wheels are equal and opposite such that the robot may rotate on the spot. In some embodiments, the robot may include a terminal device such as those on computers, mobile phones, tablets, or smart wearable devices.") the interactive method comprising:
displaying a control interface of the robot, (Paragraph 0006, "generating, in a first operational session and after finishing an undocking routine, by the processor of the robot, a first iteration of a map of the workspace based on the LIDAR data, wherein the first iteration of the map is a bird-eye's view of at least a portion of the workspace; generating, by the processor of the robot, additional iterations of the map based on newly captured LIDAR data and newly captured movement data obtained as the robot performs coverage and traverses into new and undiscovered areas, wherein: successive iterations of the map are larger in size due to an addition of newly discovered areas; newly captured LIDAR data comprises data corresponding with perimeters and objects that overlap with previously captured LIDAR data and data corresponding with perimeters that were not visible from a previous position of the robot from which the previously captured LIDAR data was obtained; and the newly captured LIDAR data is integrated into a previous iteration of the map to generate a larger map of the workspace, wherein areas of overlap are discounted them from the larger map; identifying, by the processor of the robot, a room in the map based on at least a portion of any of the captured images, the LIDAR data, and the movement data; actuating, by the processor of the robot, the robot to drive along a trajectory that follows along a planned path by providing pulses to one or more electric motors of wheels of the robot; and localizing, by the processor of the robot, the robot within an iteration of the map by estimating a position of the robot based on the movement data, slippage, and sensor errors; wherein: the robot performs coverage and finds new and undiscovered areas until determining, by the processor, all areas of the workspace are discovered and included in the map based on at least all the newly captured LIDAR data overlapping with the previously captured LIDAR data and the closure of all gaps the map; the map is transmitted to an application of a communication device previously paired with the robot; and the application is configured to display the map on a screen of the communication device.") wherein the control interface comprises a motion map and an identifier of the robot, and the identifier of the robot is a graphical icon or avatar in the motion map for identifying a position of the robot in a real environment (Paragraph 0708, “In some embodiments, more than one robot and device (e.g., medical car robot, robot cleaner, service robot with voice and video capability, and other devices such as smart appliances, TV, building controls such as lighting, temperature, etc., tablet, computer, and home assistants) may be connected to the application and the user may use the application to choose settings for each robot and device. In some embodiments, the user may use the application to display all connected robots and other devices. For example, the application may display all robots and smart devices in a map of a home or in a logical representation such as a list with icons and names for each robot and smart device. The user may select each robot and smart device to provide commands and change settings of the selected device.” And Paragraph 0780, “In some embodiments, an avatar may be used to represent the visual identity of the robot. In some embodiments, the user may assign, design, or modify from template a visual identity of the robot. In some embodiments, the avatar may reflect the mood of the robot. 
For example, the avatar may smile when the robot is happy. In some embodiments, the robot may display the avatar or a face of the avatar on an LCD or other type of screen. In some embodiments, the screen may be curved (e.g., concave or convex). In some embodiments, the robot may identify with a name. For example, the user may call the robot a particular name and the robot may respond to the particular name. In some embodiments, the robot can have a generic name (e.g., Bob) or the user may choose or modify the name of the robot.” As well as Paragraph 0320, "In some cases, the pose of the robot may be shown within a map displayed on a screen of a communication device." As well as Paragraph 0984, "In some embodiments, the processor may manipulate the map by cleaning up the map for navigation purposes or aesthetics purposes (e.g., displaying the map to a user). For example, FIG. 76A illustrates a perimeter 3600 of an environment that may not be aesthetically pleasing to a user. FIG. 76B illustrates an alternative version of the map illustrated in FIG. 76A wherein the perimeter 3601 may be more aesthetically pleasing to the user.", and Paragraph 01421, "In some embodiments, the application of the communication device may display the spatial representation of the environment as its being built and after completion; a movement path of the robot; a current position of the robot; a current position of a charging station of the robot; robot status; a current quantity of total area cleaned; a total area cleaned after completion of a task; a battery level; a current cleaning duration; an estimated total cleaning duration required to complete a task; an estimated total battery power required to complete a task, a time of completion of a task; obstacles within the spatial representation including object type of the obstacle and percent confidence of the object type; obstacles within the spatial representation including obstacles with unidentified object type; issues requiring user attention within the spatial representation; a fluid flow rate for different areas within the spatial representation; a notification that the robot has reached a particular location; cleaning history; user manual; maintenance information; lifetime of components; and firmware information.". Please also see Figure 32 and Paragraphs 0936 and 0942) … and
in response to an operation of the user … controlling the robot to perform corresponding functions in the real environment corresponding to the motion map;
wherein in response to … controlling the robot to perform corresponding functions in the real environment corresponding to the motion map comprises:
… controlling the robot to move in a direction corresponding to the at least one direction control key in the real environment corresponding to the motion map. (Paragraph 0473, "The robot may be pushed by a human operator along a path during which sensors of the robot observe the environment, including landmark objects, such that they may learn the path and execute it autonomously in later work sessions. In future work sessions, the processor may understand a location of the robot and determine a next move of the robot upon sensing the presence of the object. The human operator may alternatively use an application of a communication device to draw the path of the robot in a displayed map. In some embodiments, upon detecting one or more particular visual words, such as the features defining the indentation pattern of object, the robot may autonomously execute one or more instructions. In embodiments, the robot may be manually set to react in various ways for different visual words or may be trained using a neural network that observes human behaviors while the robot is pushed around by the human. In embodiments, planned paths of the robot may almost be the same as a path a human would traverse and actual trajectories of the robot are deemed as acceptable. As the robot passes by landmarks, such as the object with unique indentation pattern, the processor of the robot may develop a reinforced sense of where the robot is expected to be located upon observing each landmark and where the robot is supposed to go. In some embodiments, the processor may be further refined by the operator training the robot digitally (e.g., via an application). The spatial representation of the environment (e.g., 2D, 3D, 3D+RGB, etc.) may be shown to the user using an application (e.g., using a mobile device or computer) and the user may use the application to draw lines that represent where the user wants the robot to drive." Also see paragraph 0794)
Ebrahimi does not specifically teach selecting an icon/visual representation of the robot on the interface in order to control the robot or triggering a remote control interface by selecting the robot. However, Lee, in the same field of endeavor of robotic control, teaches:
… and enabling a user to operate on the identifier; … on the identifier of the robot in the control interface, … the operation of the user on the identifier of the robot in the control interface, … the operation of the user on the identifier of the robot in the control interface, … (Paragraph 0069, “When the device selected upon receiving the input is movable and the input interface 16 receives the location to which the selected device is to be moved, the controller 13 may generate a control signal for moving the selected device to the received location. By receiving the location to which the device is moved, which is movable or needs to be moved such as a robot cleaner (e.g., a robot vacuum cleaner) and an air cleaner (e.g., an air purifier), the device can be moved. In addition to the location to which the device is to be moved, the movement path may be input by a user via dragging (e.g., a tap and drag gesture) or the like.” as well as Paragraphs 0094-0095, “Referring to FIG. 11, the setting zone of a cleaning robot may be set to coincide with the location of the cleaning robot. An arbitrary area may be designated as the setting zone as the cleaning robot is a moving device.
When the input interface 19 receives a touch input 102 for selecting the setting zone 101 of the cleaning robot from the user, the operation of the cleaning robot may be executed or interrupted. When information is received from the cleaning robot, the information 103 may be displayed on the display 12. The information may include status information or measurement information. For example, information such as progress or completion of cleaning, battery status, filter status information and operation error may be displayed. In addition, it is possible to set a target point 105 by changing the direction of the cleaning robot, by receiving a location to move, or by receiving a moving route. Further, information for locating the cleaning robot may be displayed on the display 12. In doing so, it is possible to accurately locate and guide the robot by using object recognition. Even if the cleaning robot is hidden behind another object and not visible in the image, the location of the cleaning robot may be determined by location tracking.” and Paragraphs 0044-0045, “A user may make an input by touching the screen with a finger, which hereinafter is referred to as a touch input. The touch input may include a tap gesture of a user touching the screen with a single finger for a short period of time, a long press gesture (also known as a tap-and-hold gesture) of a user touching the screen with a single finger for a long period of time (e.g., longer than a threshold time duration) and a multi-touch gesture of a user touching the screen with two or more fingers simultaneously.
When the user's touch input received by the input interface 19 falls within a setting zone associated with an indoor device in the image displayed on the display 12, the controller 13 generates a control signal for controlling the device. The setting zone refers to an area (e.g., a hotspot) on the display 12 designated by the user for controlling a specific device that is mapped to the area. The location of the setting zone of an indoor device to be controlled on the screen may be either the same as or different from the location of the indoor device on the screen.” This demonstrates that the user may apply a variety of gestures to a “hotspot”/”setting zone” representing a device which is controllable and performing control of said device based on the gesture input by the user.)
Ebrahimi in view of Lee does not specifically teach displaying a control mode selection interface configured with a remote control option, or a remote control interface comprising a direction control area. However, Duffley, in the same field of endeavor of robotics, teaches:
in response to … displaying a control mode selection interface, wherein the control mode selection interface is configured with a remote control option; (Paragraph 0111, “According to further embodiments, or according to the invention, and with reference to FIGS. 15-18, an application is provided on a mobile device 300 (which may be, for example, the local user terminal 142 having a touchscreen HMI) to provide additional functionality as described below. FIG. 15 shows an exemplary home screen 500 provided by the application to enable control and monitoring of the robot 200. The home screen 500 includes a control area 501 (the active input area of the touchscreen display of the device 300) and therein user manipulable control or interface elements in the form of a cleaning initiator button 512, a scheduling button 514, a cleaning strategy toggle button 516 (which toggles alternatingly between “QUICK” and “STANDARD” (not shown) status indicators when actuated), a dock recall button 520, a robot locator button 522, and a drive button 524. The home screen 500 may further display a robot identification 526 (e.g., a name (“Bruce”) assigned to the robot 200 by the user) as well as one or more operational messages 528 indicating a status of the robot 200 and/or other data.”)
in response to a click operation of the user on the remote control option, displaying a remote control interface, (Paragraph 0119, “When actuated, the drive button 524 will initiate a robot motive control screen (not shown) including user manipulable control elements (e.g., a virtual joystick or control pad) that the user can use to remotely control the movement of the robot 200 about the living space.”) wherein the remote control interface comprises a direction control area configured with at least one direction control key; and
in response to a click operation of the user on the at least one direction control key, (Paragraph 0181, “The HMI 370 may include, in addition to or in place of the touchscreen 372, any other suitable input device(s) including, for example, a touch activated or touch sensitive device, a joystick, a keyboard/keypad, a dial, a directional key or keys, and/or a pointing device (such as a mouse, trackball, touch pad, etc.).”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system as taught by Ebrahimi with the ability to select a visual representation of the robot via the user interface as taught by Lee, and further with the ability to engage a remote control screen so as to control the movement of the robot as taught by Duffley. This would allow the user to engage with the system quickly to control operation and “to monitor the statuses of the indoor devices in real time without requiring the user to navigate through different submenus for different devices” (Lee, Paragraph 0007), and would further allow a user to drive the robot out of a location where it has become stuck without needing to physically find and move the robot.
Regarding claim 8, where all the limitations of claim 1 are discussed above, Ebrahimi further teaches:
8. (Currently Amended) The interactive method according to claim 1, wherein the controlling the robot to perform corresponding functions in the real environment corresponding to the motion map in response to an operation of the user … further comprises:
in response to the operation of the user in the control interface for … to a target position in the motion map, controlling the robot to move to a position corresponding to the target position in the real environment. (Paragraph 0697, "In some embodiments, via the user interface or automatically without user input, a starting and an ending point for a path to be traversed by the robot may be indicated on the user interface of the application executing on the communication device. Some embodiments may depict these points and propose various routes therebetween, for example, with various routing algorithms such as the path planning methods incorporated by reference herein. Examples include A*, Dijkstra's algorithm, and the like. In some embodiments, a plurality of alternate candidate routes may be displayed (and various metrics thereof, like travel time or distance), and the user interface may include inputs (like event handlers mapped to regions of pixels) by which a user may select among these candidate routes by touching or otherwise selecting a segment of one of the candidate routes, which may cause the application to send instructions to the robot that cause the robot to traverse the selected candidate route.")
Ebrahimi does not specifically teach that the user's operation is performed on the icon representing the robot. However, Lee, in the same field of endeavor, teaches:
… on the identifier of the robot in the control interface … moving the identifier of the robot … (Paragraph 0069, “When the device selected upon receiving the input is movable and the input interface 16 receives the location to which the selected device is to be moved, the controller 13 may generate a control signal for moving the selected device to the received location. By receiving the location to which the device is moved, which is movable or needs to be moved such as a robot cleaner (e.g., a robot vacuum cleaner) and an air cleaner (e.g., an air purifier), the device can be moved. In addition to the location to which the device is to be moved, the movement path may be input by a user via dragging (e.g., a tap and drag gesture) or the like.” as well as Paragraphs 0094-0095, “Referring to FIG. 11, the setting zone of a cleaning robot may be set to coincide with the location of the cleaning robot. An arbitrary area may be designated as the setting zone as the cleaning robot is a moving device.
When the input interface 19 receives a touch input 102 for selecting the setting zone 101 of the cleaning robot from the user, the operation of the cleaning robot may be executed or interrupted. When information is received from the cleaning robot, the information 103 may be displayed on the display 12. The information may include status information or measurement information. For example, information such as progress or completion of cleaning, battery status, filter status information and operation error may be displayed. In addition, it is possible to set a target point 105 by changing the direction of the cleaning robot, by receiving a location to move, or by receiving a moving route. Further, information for locating the cleaning robot may be displayed on the display 12. In doing so, it is possible to accurately locate and guide the robot by using object recognition. Even if the cleaning robot is hidden behind another object and not visible in the image, the location of the cleaning robot may be determined by location tracking.” and Paragraphs 0044-0045, “A user may make an input by touching the screen with a finger, which hereinafter is referred to as a touch input. The touch input may include a tap gesture of a user touching the screen with a single finger for a short period of time, a long press gesture (also known as a tap-and-hold gesture) of a user touching the screen with a single finger for a long period of time (e.g., longer than a threshold time duration) and a multi-touch gesture of a user touching the screen with two or more fingers simultaneously.
When the user's touch input received by the input interface 19 falls within a setting zone associated with an indoor device in the image displayed on the display 12, the controller 13 generates a control signal for controlling the device. The setting zone refers to an area (e.g., a hotspot) on the display 12 designated by the user for controlling a specific device that is mapped to the area. The location of the setting zone of an indoor device to be controlled on the screen may be either the same as or different from the location of the indoor device on the screen.” This demonstrates that the user may apply a variety of gestures to a “hotspot”/”setting zone” representing a device which is controllable and performing control of said device based on the gesture input by the user.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system as taught by Ebrahimi with the ability to select the robot and drag the icon via the user interface so as to control the robot as taught by Lee. This would allow the user to engage with the system quickly to control operation and “to monitor the statuses of the indoor devices in real time without requiring the user to navigate through different submenus for different devices” (Lee, Paragraph 0007).
Regarding claim 9, where all the limitations of claim 8 are discussed above, Ebrahimi further teaches:
9. (Previously Presented) The interactive method according to claim 8, wherein the controlling the robot to move to the position corresponding to the target position in the real environment in response to the operation of the user in the control interface for … to the target position in the motion map comprises:
when the robot is in an operation state, in response to the operation of the user for moving the identifier of the robot to the target position in the motion map, (Paragraph 0697, "In some embodiments, via the user interface or automatically without user input, a starting and an ending point for a path to be traversed by the robot may be indicated on the user interface of the application executing on the communication device. Some embodiments may depict these points and propose various routes therebetween, for example, with various routing algorithms such as the path planning methods incorporated by reference herein. Examples include A*, Dijkstra's algorithm, and the like. In some embodiments, a plurality of alternate candidate routes may be displayed (and various metrics thereof, like travel time or distance), and the user interface may include inputs (like event handlers mapped to regions of pixels) by which a user may select among these candidate routes by touching or otherwise selecting a segment of one of the candidate routes, which may cause the application to send instructions to the robot that cause the robot to traverse the selected candidate route.") controlling the robot to stop operating and controlling the robot to operate after the robot moves to the position corresponding to the target position in the real environment. (Paragraph 1422, "In some embodiments, the application may receive an input enacting an instruction for the robot to pause a current task; un-pause and continue the current task; start mopping or vacuuming; dock at the charging station; start cleaning; spot clean; navigate to a particular location and spot clean; navigate to a particular room and clean; execute back to back cleaning (continuous charging and cleaning cycle over multiple runs, such as coverage all or some areas twice); navigate to a particular location; skip a current room; and move or rotate in a particular direction.")
Ebrahimi does not specifically teach that the user's operation is performed on the icon representing the robot. However, Lee, in the same field of endeavor, teaches:
… on the identifier of the robot in the control interface … moving the identifier of the robot … (Paragraph 0069, “When the device selected upon receiving the input is movable and the input interface 16 receives the location to which the selected device is to be moved, the controller 13 may generate a control signal for moving the selected device to the received location. By receiving the location to which the device is moved, which is movable or needs to be moved such as a robot cleaner (e.g., a robot vacuum cleaner) and an air cleaner (e.g., an air purifier), the device can be moved. In addition to the location to which the device is to be moved, the movement path may be input by a user via dragging (e.g., a tap and drag gesture) or the like.” as well as Paragraphs 0094-0095, “Referring to FIG. 11, the setting zone of a cleaning robot may be set to coincide with the location of the cleaning robot. An arbitrary area may be designated as the setting zone as the cleaning robot is a moving device.
When the input interface 19 receives a touch input 102 for selecting the setting zone 101 of the cleaning robot from the user, the operation of the cleaning robot may be executed or interrupted. When information is received from the cleaning robot, the information 103 may be displayed on the display 12. The information may include status information or measurement information. For example, information such as progress or completion of cleaning, battery status, filter status information and operation error may be displayed. In addition, it is possible to set a target point 105 by changing the direction of the cleaning robot, by receiving a location to move, or by receiving a moving route. Further, information for locating the cleaning robot may be displayed on the display 12. In doing so, it is possible to accurately locate and guide the robot by using object recognition. Even if the cleaning robot is hidden behind another object and not visible in the image, the location of the cleaning robot may be determined by location tracking.” and Paragraphs 0044-0045, “A user may make an input by touching the screen with a finger, which hereinafter is referred to as a touch input. The touch input may include a tap gesture of a user touching the screen with a single finger for a short period of time, a long press gesture (also known as a tap-and-hold gesture) of a user touching the screen with a single finger for a long period of time (e.g., longer than a threshold time duration) and a multi-touch gesture of a user touching the screen with two or more fingers simultaneously.
When the user's touch input received by the input interface 19 falls within a setting zone associated with an indoor device in the image displayed on the display 12, the controller 13 generates a control signal for controlling the device. The setting zone refers to an area (e.g., a hotspot) on the display 12 designated by the user for controlling a specific device that is mapped to the area. The location of the setting zone of an indoor device to be controlled on the screen may be either the same as or different from the location of the indoor device on the screen.” This demonstrates that the user may apply a variety of gestures to a “hotspot”/”setting zone” representing a device which is controllable and performing control of said device based on the gesture input by the user.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system as taught by Ebrahimi with the ability to select the robot and drag the icon via the user interface so as to control the robot as taught by Lee. This would allow the user to engage with the system quickly to control operation and “to monitor the statuses of the indoor devices in real time without requiring the user to navigate through different submenus for different devices” (Lee, Paragraph 0007).
Regarding claim 10, where all the limitations of claim 9 are discussed above, Ebrahimi further teaches:
10. (Original) The interactive method according to claim 9, wherein the motion map comprises a plurality of motion zones, and each motion zone corresponds to a local space in the real environment and has a corresponding operation mode; (Paragraph 0573, "In some embodiments, the processor of the robot recognizes rooms and separates them by different colors that may be seen on an application of a communication device. In some embodiments, the robot cleans an entire room before moving onto a next room. In some embodiments, the robot may use different cleaning strategies depending on the particular area being cleaned. In some embodiments, the robot may use different strategies based on each zone. For example, a robot vacuum may clean differently in each room. The application may display different shades in different areas of the map, representing different cleaning strategies. The processor of the robot may load different cleaning strategies depending on the room, zone, floor type, etc. Examples of cleaning strategies may include, for example, mopping for the kitchen, steam cleaning for the toilet, UV sterilization for the baby room, robust coverage under chairs and tables, and regular cleaning for the rest of the house. In UV mode, the robot may drive slow and may spend 30 minutes covering each square foot.") and
the controlling the robot to stop operating and controlling the robot to operate after the robot moves to the position corresponding to the target position in the real environment comprises:
controlling the robot to stop operating, (Paragraph 1422, "In some embodiments, the application may receive an input enacting an instruction for the robot to pause a current task; un-pause and continue the current task; start mopping or vacuuming; dock at the charging station; start cleaning; spot clean; navigate to a particular location and spot clean; navigate to a particular room and clean; execute back to back cleaning (continuous charging and cleaning cycle over multiple runs, such as coverage all or some areas twice); navigate to a particular location; skip a current room; and move or rotate in a particular direction.") and controlling the robot to operate in an operation mode corresponding to a target zone after the robot is moved to the local space corresponding to the target zone in the real environment, (Paragraph 0573, "In some embodiments, the processor of the robot recognizes rooms and separates them by different colors that may be seen on an application of a communication device. In some embodiments, the robot cleans an entire room before moving onto a next room. In some embodiments, the robot may use different cleaning strategies depending on the particular area being cleaned. In some embodiments, the robot may use different strategies based on each zone. For example, a robot vacuum may clean differently in each room. The application may display different shades in different areas of the map, representing different cleaning strategies. The processor of the robot may load different cleaning strategies depending on the room, zone, floor type, etc. Examples of cleaning strategies may include, for example, mopping for the kitchen, steam cleaning for the toilet, UV sterilization for the baby room, robust coverage under chairs and tables, and regular cleaning for the rest of the house. In UV mode, the robot may drive slow and may spend 30 minutes covering each square foot.") wherein the target zone is a motion zone to which the target position belongs. (Paragraph 1422, "In some embodiments, the application may receive an input enacting an instruction for the robot to pause a current task; un-pause and continue the current task; start mopping or vacuuming; dock at the charging station; start cleaning; spot clean; navigate to a particular location and spot clean; navigate to a particular room and clean; execute back to back cleaning (continuous charging and cleaning cycle over multiple runs, such as coverage all or some areas twice); navigate to a particular location; skip a current room; and move or rotate in a particular direction.")
Regarding claim 11, where all the limitations of claim 10 are discussed above, Ebrahimi further teaches:
11. (Original) The interactive method according to claim 10, wherein after the robot completes the operation in the local space corresponding to the target zone, the interactive method further comprises at least one of the following:
controlling the robot to move to a base station in the real environment; (Paragraph 0439, "In some embodiments, the processor of the robot may keep a bread crumb path or a coastal path to its last known rendezvous point. For example, the processor of the robot may lose localization. A last known rendezvous point may be known by the processor. The processor may also have kept a bread crumb path to a charging station and a bread crumb path to the rendezvous point. The robot may follow a safe bread crumb path back to the charging station. The bread crumb path generally remains in a middle area of the environment to prevent the robot from collisions or becoming stuck. Although in going back to the last known location the robot may not have functionality of its original sensors, the processor may use data from other sensors to follow a path back to its last known good localization as best as possible because the processor kept a bread crumb path, a safe path (in the middle of the space), and a coastal path. In embodiments, the processor may be any of a bread crumb path, a safe path (in the middle of the space), and a coastal path. In embodiments, any of the bread crumb path, the safe path (in the middle of the space), and the coastal path comprise a path back to a last known good localized point, one point to a last known good localized point, two, three or more points to a last known good localized point, and to the start." and 0705, "In some embodiments, the user may use the user interface of the application to instruct the robot to return to a charging station for recharging if the battery level is low during a work session, then to continue the task." also see paragraphs 0729-0731) and
in the real environment, controlling the robot to move to a local space corresponding to a first motion zone in a non-operation state in an operation sequence, (Paragraph 0696, "In some embodiments, the user interface may indicate in the map a path the robot is about to take (e.g., according to a routing algorithm) between two points, to cover an area, or to perform some other task. For example, a route may be depicted as a set of line segments or curves overlaid on the map, and some embodiments may indicate a current location of the robot with an icon overlaid on one of the line segments with an animated sequence that depicts the robot moving along the line segments. In some embodiments, the future movements of the robot or other activities of the robot may be depicted in the user interface. For example, the user interface may indicate which room or other area the robot is currently covering and which room or other area the robot is going to cover next in a current work sequence. The state of such areas may be indicated with a distinct visual attribute of the area, its text label, or its perimeters, like color, shade, blinking outlines, and the like. In some embodiments, a sequence with which the robot is currently programmed to cover various areas may be visually indicated with a continuum of such visual attributes, for instance, ranging across the spectrum from red to blue (or dark grey to light) indicating sequence with which subsequent areas are to be covered.") wherein the operation sequence comprises the plurality of motion zones arranged in sequence. (Paragraph 1162, "In some embodiments, to optimize division of zones of an environment, the processor may proceed through the following iteration for each zone of a sequence of zones, beginning with the first zone: expansion of the zone if neighbor cells are empty, movement of the robot to a point in the zone closest to the current position of the robot, addition of a new zone coinciding with the travel path of the robot from its current position to a point in the zone closest to the robot if the length of travel from its current position is significant, execution of a coverage pattern (e.g. boustrophedon) within the zone, and removal of any uncovered cells from the zone.")
Regarding claim 12, where all the limitations of claim 10 are discussed above, Ebrahimi further teaches:
12. (Previously Presented) The interactive method according to claim 10, wherein the controlling the robot to move to a position corresponding to the target position in the real environment in response to the operation of the user in the control interface … to a target position in the motion map comprises:
in response to a start position of a motion operation of the user in the control interface and the target position being respectively in different motion zones in the motion map, (Paragraph 0573, "In some embodiments, the processor of the robot recognizes rooms and separates them by different colors that may be seen on an application of a communication device. In some embodiments, the robot cleans an entire room before moving onto a next room. In some embodiments, the robot may use different cleaning strategies depending on the particular area being cleaned. In some embodiments, the robot may use different strategies based on each zone. For example, a robot vacuum may clean differently in each room. The application may display different shades in different areas of the map, representing different cleaning strategies. The processor of the robot may load different cleaning strategies depending on the room, zone, floor type, etc. Examples of cleaning strategies may include, for example, mopping for the kitchen, steam cleaning for the toilet, UV sterilization for the baby room, robust coverage under chairs and tables, and regular cleaning for the rest of the house. In UV mode, the robot may drive slow and may spend 30 minutes covering each square foot.") controlling the robot to move to a position corresponding to the target position in the real environment. (Paragraph 1422, "In some embodiments, the application may receive an input enacting an instruction for the robot to pause a current task; un-pause and continue the current task; start mopping or vacuuming; dock at the charging station; start cleaning; spot clean; navigate to a particular location and spot clean; navigate to a particular room and clean; execute back to back cleaning (continuous charging and cleaning cycle over multiple runs, such as coverage all or some areas twice); navigate to a particular location; skip a current room; and move or rotate in a particular direction.")
Ebrahimi does not specifically teach that the user's operation is performed on the icon representing the robot. However, Lee, in the same field of endeavor, teaches:
… moving the identifier of the robot … (Paragraph 0069, “When the device selected upon receiving the input is movable and the input interface 16 receives the location to which the selected device is to be moved, the controller 13 may generate a control signal for moving the selected device to the received location. By receiving the location to which the device is moved, which is movable or needs to be moved such as a robot cleaner (e.g., a robot vacuum cleaner) and an air cleaner (e.g., an air purifier), the device can be moved. In addition to the location to which the device is to be moved, the movement path may be input by a user via dragging (e.g., a tap and drag gesture) or the like.” as well as Paragraphs 0094-0095, “Referring to FIG. 11, the setting zone of a cleaning robot may be set to coincide with the location of the cleaning robot. An arbitrary area may be designated as the setting zone as the cleaning robot is a moving device.
When the input interface 19 receives a touch input 102 for selecting the setting zone 101 of the cleaning robot from the user, the operation of the cleaning robot may be executed or interrupted. When information is received from the cleaning robot, the information 103 may be displayed on the display 12. The information may include status information or measurement information. For example, information such as progress or completion of cleaning, battery status, filter status information and operation error may be displayed. In addition, it is possible to set a target point 105 by changing the direction of the cleaning robot, by receiving a location to move, or by receiving a moving route. Further, information for locating the cleaning robot may be displayed on the display 12. In doing so, it is possible to accurately locate and guide the robot by using object recognition. Even if the cleaning robot is hidden behind another object and not visible in the image, the location of the cleaning robot may be determined by location tracking.” and Paragraphs 0044-0045, “A user may make an input by touching the screen with a finger, which hereinafter is referred to as a touch input. The touch input may include a tap gesture of a user touching the screen with a single finger for a short period of time, a long press gesture (also known as a tap-and-hold gesture) of a user touching the screen with a single finger for a long period of time (e.g., longer than a threshold time duration) and a multi-touch gesture of a user touching the screen with two or more fingers simultaneously.
When the user's touch input received by the input interface 19 falls within a setting zone associated with an indoor device in the image displayed on the display 12, the controller 13 generates a control signal for controlling the device. The setting zone refers to an area (e.g., a hotspot) on the display 12 designated by the user for controlling a specific device that is mapped to the area. The location of the setting zone of an indoor device to be controlled on the screen may be either the same as or different from the location of the indoor device on the screen.” This demonstrates that the user may apply a variety of gestures to a “hotspot”/”setting zone” representing a device which is controllable and performing control of said device based on the gesture input by the user. Please further see Paragraph 0052, “The location of the setting zone within the image may be changed (e.g., updated) according to the change of the image. The change in the image may include a change in the angle of view of the image or a change in the location of a device within a given angle of view. The angle of view of the image capturing device may be changed according to panning, tilting and zooming (PTZ) control of the image capturing device or a device to be controlled may be movable like a cleaning robot (e.g., a robot vacuum cleaner) or the shape may vary, as well as when the image capturing device captures an image with the same angle of view or a device to be controlled is stationary. As a result, there may be a change in the image. When the setting zone is set to a location on the image where the device is recognized, the location of the device in the image changes according to the change of the image, and accordingly the setting zone may be changed in position or shape according to the change of the image. If the device moves out of frame from the image due to a change in the image, the setting zone may be moved to a predetermined location for the device. For example, the predetermined location may be a top-left corner or a top-right corner. Alternatively, the display 12 may display areas other than the area of the displayed image. At this time, the setting zone of the device which has moved out of frame from the image may be moved to an area other than the previously designated area of the image. Alternatively, a mini-map for the entire area or a partial area of the capturing range of the image capturing device may be displayed, such that the setting zone may be moved on the mini-map in order to continue to display the setting zone of the device which has moved out of frame from the current image.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system as taught by Ebrahimi with the ability to select the robot and drag the icon via the user interface so as to control the robot as taught by Lee. This would allow the user to engage with the system quickly to control operation and “to monitor the statuses of the indoor devices in real time without requiring the user to navigate through different submenus for different devices” (Lee, Paragraph 0007).
Regarding claim 13, Ebrahimi teaches:
13. (Currently Amended) An electronic device, the electronic device establishes a connection with a robot in advance, the electronic device comprising:
a processor; and
a, memory configured to store executable instructions for the processor, (Paragraph 0238, “Some embodiments may provide a robot including communication, mobility, actuation, and processing elements. In some embodiments, the robot may include, but is not limited to include, one or more of a casing, a chassis including a set of wheels, a motor to drive the wheels, a receiver that acquires signals transmitted from, for example, a transmitting beacon, a transmitter for transmitting signals, a processor, a memory storing instructions that when executed by the processor effectuates robotic operations, a controller, a plurality of sensors (e.g., tactile sensor, obstacle sensor, temperature sensor, imaging sensor, light detection and ranging (LIDAR) sensor, camera, depth sensor, time-of-flight (TOF) sensor, TSSP sensor, optical tracking sensor, sonar sensor, ultrasound sensor, laser sensor, light emitting diode (LED) sensor, etc.), network or wireless communications, radio frequency (RF) communications, power management such as a rechargeable battery, solar panels, or fuel, and one or more clock or synchronizing devices. In some cases, the robot may include communication means such as Wi-Fi, Worldwide Interoperability for Microwave Access (WiMax), WiMax mobile, wireless, cellular, Bluetooth, RF, etc. In some cases, the robot may support the use of a 360 degrees LIDAR and a depth camera with limited field of view. In some cases, the robot may support proprioceptive sensors (e.g., independently or in fusion), odometry devices, optical tracking sensors, smart phone inertial measurement units (IMU), and gyroscopes.”)
wherein the processor is configured to execute the executable instructions to:
display a control interface of the robot, (Paragraph 0006, "generating, in a first operational session and after finishing an undocking routine, by the processor of the robot, a first iteration of a map of the workspace based on the LIDAR data, wherein the first iteration of the map is a bird-eye's view of at least a portion of the workspace; generating, by the processor of the robot, additional iterations of the map based on newly captured LIDAR data and newly captured movement data obtained as the robot performs coverage and traverses into new and undiscovered areas, wherein: successive iterations of the map are larger in size due to an addition of newly discovered areas; newly captured LIDAR data comprises data corresponding with perimeters and objects that overlap with previously captured LIDAR data and data corresponding with perimeters that were not visible from a previous position of the robot from which the previously captured LIDAR data was obtained; and the newly captured LIDAR data is integrated into a previous iteration of the map to generate a larger map of the workspace, wherein areas of overlap are discounted them from the larger map; identifying, by the processor of the robot, a room in the map based on at least a portion of any of the captured images, the LIDAR data, and the movement data; actuating, by the processor of the robot, the robot to drive along a trajectory that follows along a planned path by providing pulses to one or more electric motors of wheels of the robot; and localizing, by the processor of the robot, the robot within an iteration of the map by estimating a position of the robot based on the movement data, slippage, and sensor errors; wherein: the robot performs coverage and finds new and undiscovered areas until determining, by the processor, all areas of the workspace are discovered and included in the map based on at least all the newly captured LIDAR data overlapping with the previously captured LIDAR data and the closure of all gaps the map; the map is transmitted to an application of a communication device previously paired with the robot; and the application is configured to display the map on a screen of the communication device.") wherein the control interface comprises a motion map and an identifier of the robot, and the identifier of the robot is a graphical icon or avatar in the motion map for identifying a position of the robot in a real environment (Paragraph 0320, "In some cases, the pose of the robot may be shown within a map displayed on a screen of a communication device." As well as Paragraph 0984, "In some embodiments, the processor may manipulate the map by cleaning up the map for navigation purposes or aesthetics purposes (e.g., displaying the map to a user). For example, FIG. 76A illustrates a perimeter 3600 of an environment that may not be aesthetically pleasing to a user. FIG. 76B illustrates an alternative version of the map illustrated in FIG. 
76A wherein the perimeter 3601 may be more aesthetically pleasing to the user.", and Paragraph 01421, "In some embodiments, the application of the communication device may display the spatial representation of the environment as its being built and after completion; a movement path of the robot; a current position of the robot; a current position of a charging station of the robot; robot status; a current quantity of total area cleaned; a total area cleaned after completion of a task; a battery level; a current cleaning duration; an estimated total cleaning duration required to complete a task; an estimated total battery power required to complete a task, a time of completion of a task; obstacles within the spatial representation including object type of the obstacle and percent confidence of the object type; obstacles within the spatial representation including obstacles with unidentified object type; issues requiring user attention within the spatial representation; a fluid flow rate for different areas within the spatial representation; a notification that the robot has reached a particular location; cleaning history; user manual; maintenance information; lifetime of components; and firmware information.". Please also see Figure 32 and Paragraphs 0936 and 0942) … and
in response to an operation of the user … , control the robot to perform corresponding functions in the real environment corresponding to the motion map;
wherein the processor is further configured to execute the executable instructions to:
…
control the robot to move in a direction corresponding to the at least one direction control key in the real environment corresponding to the motion map. (Paragraph 0473, "The robot may be pushed by a human operator along a path during which sensors of the robot observe the environment, including landmark objects, such that they may learn the path and execute it autonomously in later work sessions. In future work sessions, the processor may understand a location of the robot and determine a next move of the robot upon sensing the presence of the object. The human operator may alternatively use an application of a communication device to draw the path of the robot in a displayed map. In some embodiments, upon detecting one or more particular visual words, such as the features defining the indentation pattern of object, the robot may autonomously execute one or more instructions. In embodiments, the robot may be manually set to react in various ways for different visual words or may be trained using a neural network that observes human behaviors while the robot is pushed around by the human. In embodiments, planned paths of the robot may almost be the same as a path a human would traverse and actual trajectories of the robot are deemed as acceptable. As the robot passes by landmarks, such as the object with unique indentation pattern, the processor of the robot may develop a reinforced sense of where the robot is expected to be located upon observing each landmark and where the robot is supposed to go. In some embodiments, the processor may be further refined by the operator training the robot digitally (e.g., via an application). The spatial representation of the environment (e.g., 2D, 3D, 3D+RGB, etc.) may be shown to the user using an application (e.g., using a mobile device or computer) and the user may use the application to draw lines that represent where the user wants the robot to drive." Also see paragraph 0794)
Ebrahimi does not specifically teach selecting an icon/visual representation of the robot on the interface in order to control the robot or triggering a remote control interface by selecting the robot. However, Lee, in the same field of endeavor of robotic control, teaches:
… and enabling a user to operate on the identifier; … on the identifier of the robot in the control interface, … the operation of the user on the identifier of the robot in the control interface, … (Paragraph 0069, “When the device selected upon receiving the input is movable and the input interface 16 receives the location to which the selected device is to be moved, the controller 13 may generate a control signal for moving the selected device to the received location. By receiving the location to which the device is moved, which is movable or needs to be moved such as a robot cleaner (e.g., a robot vacuum cleaner) and an air cleaner (e.g., an air purifier), the device can be moved. In addition to the location to which the device is to be moved, the movement path may be input by a user via dragging (e.g., a tap and drag gesture) or the like.” as well as Paragraphs 0094-0095, “Referring to FIG. 11, the setting zone of a cleaning robot may be set to coincide with the location of the cleaning robot. An arbitrary area may be designated as the setting zone as the cleaning robot is a moving device.
When the input interface 19 receives a touch input 102 for selecting the setting zone 101 of the cleaning robot from the user, the operation of the cleaning robot may be executed or interrupted. When information is received from the cleaning robot, the information 103 may be displayed on the display 12. The information may include status information or measurement information. For example, information such as progress or completion of cleaning, battery status, filter status information and operation error may be displayed. In addition, it is possible to set a target point 105 by changing the direction of the cleaning robot, by receiving a location to move, or by receiving a moving route. Further, information for locating the cleaning robot may be displayed on the display 12. In doing so, it is possible to accurately locate and guide the robot by using object recognition. Even if the cleaning robot is hidden behind another object and not visible in the image, the location of the cleaning robot may be determined by location tracking.” and Paragraphs 0044-0045, “A user may make an input by touching the screen with a finger, which hereinafter is referred to as a touch input. The touch input may include a tap gesture of a user touching the screen with a single finger for a short period of time, a long press gesture (also known as a tap-and-hold gesture) of a user touching the screen with a single finger for a long period of time (e.g., longer than a threshold time duration) and a multi-touch gesture of a user touching the screen with two or more fingers simultaneously.
When the user's touch input received by the input interface 19 falls within a setting zone associated with an indoor device in the image displayed on the display 12, the controller 13 generates a control signal for controlling the device. The setting zone refers to an area (e.g., a hotspot) on the display 12 designated by the user for controlling a specific device that is mapped to the area. The location of the setting zone of an indoor device to be controlled on the screen may be either the same as or different from the location of the indoor device on the screen.” This demonstrates that the user may apply a variety of gestures to a “hotspot”/”setting zone” representing a device which is controllable and performing control of said device based on the gesture input by the user.)
However, Duffley, in the same field of endeavor of robotics, teaches:
in response to … display a control mode selection interface, wherein the control mode selection interface is configured with a remote control option; (Paragraph 0111, “According to further embodiments, or according to the invention, and with reference to FIGS. 15-18, an application is provided on a mobile device 300 (which may be, for example, the local user terminal 142 having a touchscreen HMI) to provide additional functionality as described below. FIG. 15 shows an exemplary home screen 500 provided by the application to enable control and monitoring of the robot 200. The home screen 500 includes a control area 501 (the active input area of the touchscreen display of the device 300) and therein user manipulable control or interface elements in the form of a cleaning initiator button 512, a scheduling button 514, a cleaning strategy toggle button 516 (which toggles alternatingly between “QUICK” and “STANDARD” (not shown) status indicators when actuated), a dock recall button 520, a robot locator button 522, and a drive button 524. The home screen 500 may further display a robot identification 526 (e.g., a name (“Bruce”) assigned to the robot 200 by the user) as well as one or more operational messages 528 indicating a status of the robot 200 and/or other data.”)
in response to a click operation of the user on the remote control option, display a remote control interface, (Paragraph 0119, “When actuated, the drive button 524 will initiate a robot motive control screen (not shown) including user manipulable control elements (e.g., a virtual joystick or control pad) that the user can use to remotely control the movement of the robot 200 about the living space.”) wherein the remote control interface comprises a direction control area configured with at least one direction control key; and
in response to a click operation of the user on the at least one direction control key, (Paragraph 0181, “The HMI 370 may include, in addition to or in place of the touchscreen 372, any other suitable input device(s) including, for example, a touch activated or touch sensitive device, a joystick, a keyboard/keypad, a dial, a directional key or keys, and/or a pointing device (such as a mouse, trackball, touch pad, etc.).”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system as taught by Ebrahimi with the ability to select a visual representation of the robot via the user interface as taught by Lee and further with the ability to engage a remote control screen so as to control the movement of the robot as taught by Duffley. This would allow the user to engage with the system quickly to control operation and “to monitor the statuses of the indoor devices in real time without requiring the user to navigate through different submenus for different devices” (Lee, Paragraph 0007), and would further allow a user to drive the robot out of a location where it has become stuck without needing to physically find and move the robot.
Regarding claim 20, Ebrahimi further teaches:
20. (Currently Amended) A non-transitory computer-readable storage medium storing a computer program, wherein the non-transitory computer-readable storage medium storing is applied to a terminal device that establishes a connection with a robot in advance, and an interactive method is implemented when the computer program is executed by a processor, (Paragraph 0240, "In some embodiments, the robot may be wheeled (e.g., rigidly fixed, suspended fixed, steerable, suspended steerable, caster, or suspended caster), legged, or tank tracked. In some embodiments, the wheels, legs, tracks, etc. of the robot may be controlled individually or controlled in pairs (e.g., like cars) or in groups of other sizes, such as three or four as in omnidirectional wheels. In some embodiments, the robot may use differential-drive wherein two fixed wheels have a common axis of rotation and angular velocities of the two wheels are equal and opposite such that the robot may rotate on the spot. In some embodiments, the robot may include a terminal device such as those on computers, mobile phones, tablets, or smart wearable devices." And Paragraph 0238, “Some embodiments may provide a robot including communication, mobility, actuation, and processing elements. In some embodiments, the robot may include, but is not limited to include, one or more of a casing, a chassis including a set of wheels, a motor to drive the wheels, a receiver that acquires signals transmitted from, for example, a transmitting beacon, a transmitter for transmitting signals, a processor, a memory storing instructions that when executed by the processor effectuates robotic operations, a controller, a plurality of sensors (e.g., tactile sensor, obstacle sensor, temperature sensor, imaging sensor, light detection and ranging (LIDAR) sensor, camera, depth sensor, time-of-flight (TOF) sensor, TSSP sensor, optical tracking sensor, sonar sensor, ultrasound sensor, laser sensor, light emitting diode (LED) sensor, etc.), network or wireless communications, radio frequency (RF) communications, power management such as a rechargeable battery, solar panels, or fuel, and one or more clock or synchronizing devices. In some cases, the robot may include communication means such as Wi-Fi, Worldwide Interoperability for Microwave Access (WiMax), WiMax mobile, wireless, cellular, Bluetooth, RF, etc. In some cases, the robot may support the use of a 360 degrees LIDAR and a depth camera with limited field of view. In some cases, the robot may support proprioceptive sensors (e.g., independently or in fusion), odometry devices, optical tracking sensors, smart phone inertial measurement units (IMU), and gyroscopes.”) the interactive method comprising:
displaying a control interface of the robot, (Paragraph 0006, "generating, in a first operational session and after finishing an undocking routine, by the processor of the robot, a first iteration of a map of the workspace based on the LIDAR data, wherein the first iteration of the map is a bird-eye's view of at least a portion of the workspace; generating, by the processor of the robot, additional iterations of the map based on newly captured LIDAR data and newly captured movement data obtained as the robot performs coverage and traverses into new and undiscovered areas, wherein: successive iterations of the map are larger in size due to an addition of newly discovered areas; newly captured LIDAR data comprises data corresponding with perimeters and objects that overlap with previously captured LIDAR data and data corresponding with perimeters that were not visible from a previous position of the robot from which the previously captured LIDAR data was obtained; and the newly captured LIDAR data is integrated into a previous iteration of the map to generate a larger map of the workspace, wherein areas of overlap are discounted them from the larger map; identifying, by the processor of the robot, a room in the map based on at least a portion of any of the captured images, the LIDAR data, and the movement data; actuating, by the processor of the robot, the robot to drive along a trajectory that follows along a planned path by providing pulses to one or more electric motors of wheels of the robot; and localizing, by the processor of the robot, the robot within an iteration of the map by estimating a position of the robot based on the movement data, slippage, and sensor errors; wherein: the robot performs coverage and finds new and undiscovered areas until determining, by the processor, all areas of the workspace are discovered and included in the map based on at least all the newly captured LIDAR data overlapping with the previously captured LIDAR data and the closure of all gaps the map; the map is transmitted to an application of a communication device previously paired with the robot; and the application is configured to display the map on a screen of the communication device.") wherein the control interface comprises a motion map and an identifier of the robot, and the identifier of the robot is a graphical icon or avatar in the motion map for identifying a position of the robot in a real environment (Paragraph 0320, "In some cases, the pose of the robot may be shown within a map displayed on a screen of a communication device." As well as Paragraph 0984, "In some embodiments, the processor may manipulate the map by cleaning up the map for navigation purposes or aesthetics purposes (e.g., displaying the map to a user). For example, FIG. 76A illustrates a perimeter 3600 of an environment that may not be aesthetically pleasing to a user. FIG. 76B illustrates an alternative version of the map illustrated in FIG. 
76A wherein the perimeter 3601 may be more aesthetically pleasing to the user.", and Paragraph 01421, "In some embodiments, the application of the communication device may display the spatial representation of the environment as its being built and after completion; a movement path of the robot; a current position of the robot; a current position of a charging station of the robot; robot status; a current quantity of total area cleaned; a total area cleaned after completion of a task; a battery level; a current cleaning duration; an estimated total cleaning duration required to complete a task; an estimated total battery power required to complete a task, a time of completion of a task; obstacles within the spatial representation including object type of the obstacle and percent confidence of the object type; obstacles within the spatial representation including obstacles with unidentified object type; issues requiring user attention within the spatial representation; a fluid flow rate for different areas within the spatial representation; a notification that the robot has reached a particular location; cleaning history; user manual; maintenance information; lifetime of components; and firmware information.". Please also see Figure 32 and Paragraphs 0936 and 0942) … and
in response to an operation of the user … , controlling the robot to perform corresponding functions in the real environment corresponding to the motion map;
wherein the processor is further configured to execute the executable instructions to;
…
control the robot to move in a direction corresponding to the at least one direction control key in the real environment corresponding to the motion map. (Paragraph 0473, "The robot may be pushed by a human operator along a path during which sensors of the robot observe the environment, including landmark objects, such that they may learn the path and execute it autonomously in later work sessions. In future work sessions, the processor may understand a location of the robot and determine a next move of the robot upon sensing the presence of the object. The human operator may alternatively use an application of a communication device to draw the path of the robot in a displayed map. In some embodiments, upon detecting one or more particular visual words, such as the features defining the indentation pattern of object, the robot may autonomously execute one or more instructions. In embodiments, the robot may be manually set to react in various ways for different visual words or may be trained using a neural network that observes human behaviors while the robot is pushed around by the human. In embodiments, planned paths of the robot may almost be the same as a path a human would traverse and actual trajectories of the robot are deemed as acceptable. As the robot passes by landmarks, such as the object with unique indentation pattern, the processor of the robot may develop a reinforced sense of where the robot is expected to be located upon observing each landmark and where the robot is supposed to go. In some embodiments, the processor may be further refined by the operator training the robot digitally (e.g., via an application). The spatial representation of the environment (e.g., 2D, 3D, 3D+RGB, etc.) may be shown to the user using an application (e.g., using a mobile device or computer) and the user may use the application to draw lines that represent where the user wants the robot to drive." Also see paragraph 0794)
Ebrahimi does not specifically teach selecting an icon/visual representation of the robot on the interface in order to control the robot or triggering a remote control interface by selecting the robot. However, Lee, in the same field of endeavor of robotic control, teaches:
… and enabling a user to operate on the identifier; … on the identifier of the robot in the control interface, … the operation of the user on the identifier of the robot in the control interface, … (Paragraph 0069, “When the device selected upon receiving the input is movable and the input interface 16 receives the location to which the selected device is to be moved, the controller 13 may generate a control signal for moving the selected device to the received location. By receiving the location to which the device is moved, which is movable or needs to be moved such as a robot cleaner (e.g., a robot vacuum cleaner) and an air cleaner (e.g., an air purifier), the device can be moved. In addition to the location to which the device is to be moved, the movement path may be input by a user via dragging (e.g., a tap and drag gesture) or the like.” as well as Paragraphs 0094-0095, “Referring to FIG. 11, the setting zone of a cleaning robot may be set to coincide with the location of the cleaning robot. An arbitrary area may be designated as the setting zone as the cleaning robot is a moving device.
When the input interface 19 receives a touch input 102 for selecting the setting zone 101 of the cleaning robot from the user, the operation of the cleaning robot may be executed or interrupted. When information is received from the cleaning robot, the information 103 may be displayed on the display 12. The information may include status information or measurement information. For example, information such as progress or completion of cleaning, battery status, filter status information and operation error may be displayed. In addition, it is possible to set a target point 105 by changing the direction of the cleaning robot, by receiving a location to move, or by receiving a moving route. Further, information for locating the cleaning robot may be displayed on the display 12. In doing so, it is possible to accurately locate and guide the robot by using object recognition. Even if the cleaning robot is hidden behind another object and not visible in the image, the location of the cleaning robot may be determined by location tracking.” and Paragraphs 0044-0045, “A user may make an input by touching the screen with a finger, which hereinafter is referred to as a touch input. The touch input may include a tap gesture of a user touching the screen with a single finger for a short period of time, a long press gesture (also known as a tap-and-hold gesture) of a user touching the screen with a single finger for a long period of time (e.g., longer than a threshold time duration) and a multi-touch gesture of a user touching the screen with two or more fingers simultaneously.
When the user's touch input received by the input interface 19 falls within a setting zone associated with an indoor device in the image displayed on the display 12, the controller 13 generates a control signal for controlling the device. The setting zone refers to an area (e.g., a hotspot) on the display 12 designated by the user for controlling a specific device that is mapped to the area. The location of the setting zone of an indoor device to be controlled on the screen may be either the same as or different from the location of the indoor device on the screen.” This demonstrates that the user may apply a variety of gestures to a “hotspot”/”setting zone” representing a device which is controllable and performing control of said device based on the gesture input by the user.)
However, Duffley, in the same field of endeavor of robotics, teaches:
in response to … display a control mode selection interface, wherein the control mode selection interface is configured with a remote control option; (Paragraph 0111, “According to further embodiments, or according to the invention, and with reference to FIGS. 15-18, an application is provided on a mobile device 300 (which may be, for example, the local user terminal 142 having a touchscreen HMI) to provide additional functionality as described below. FIG. 15 shows an exemplary home screen 500 provided by the application to enable control and monitoring of the robot 200. The home screen 500 includes a control area 501 (the active input area of the touchscreen display of the device 300) and therein user manipulable control or interface elements in the form of a cleaning initiator button 512, a scheduling button 514, a cleaning strategy toggle button 516 (which toggles alternatingly between “QUICK” and “STANDARD” (not shown) status indicators when actuated), a dock recall button 520, a robot locator button 522, and a drive button 524. The home screen 500 may further display a robot identification 526 (e.g., a name (“Bruce”) assigned to the robot 200 by the user) as well as one or more operational messages 528 indicating a status of the robot 200 and/or other data.”)
in response to a click operation of the user on the remote control option, display a remote control interface, (Paragraph 0119, “When actuated, the drive button 524 will initiate a robot motive control screen (not shown) including user manipulable control elements (e.g., a virtual joystick or control pad) that the user can use to remotely control the movement of the robot 200 about the living space.”) wherein the remote control interface comprises a direction control area configured with at least one direction control key; and
in response to a click operation of the user on the at least one direction control key, (Paragraph 0181, “The HMI 370 may include, in addition to or in place of the touchscreen 372, any other suitable input device(s) including, for example, a touch activated or touch sensitive device, a joystick, a keyboard/keypad, a dial, a directional key or keys, and/or a pointing device (such as a mouse, trackball, touch pad, etc.).”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system as taught by Ebrahimi with the ability to select a visual representation of the robot via the user interface as taught by Lee and further with the ability to engage a remote control screen so as to control the movement of the robot as taught by Duffley. This would allow the user to engage with the system quickly to control operation and “to monitor the statuses of the indoor devices in real time without requiring the user to navigate through different submenus for different devices” (Lee, Paragraph 0007), and would further allow a user to drive the robot out of a location where it has become stuck without needing to physically find and move the robot.
Regarding claim 22, where all the limitations of claim 1 are discussed above, Ebrahimi does not specifically teach using a dragging operation to control the robot. However, Lee, in the same field of endeavor of robotic control, teaches:
22. (New) The interactive method according to claim 1, wherein the operation of the user on the identifier of the robot comprises at least one of the following: a click operation on the identifier of the robot, a long press operation on the identifier of the robot, a slide operation on the identifier of the robot, or a dragging operation on the identifier of the robot.
(Paragraph 0069, “When the device selected upon receiving the input is movable and the input interface 16 receives the location to which the selected device is to be moved, the controller 13 may generate a control signal for moving the selected device to the received location. By receiving the location to which the device is moved, which is movable or needs to be moved such as a robot cleaner (e.g., a robot vacuum cleaner) and an air cleaner (e.g., an air purifier), the device can be moved. In addition to the location to which the device is to be moved, the movement path may be input by a user via dragging (e.g., a tap and drag gesture) or the like.” as well as Paragraphs 0094-0095, “Referring to FIG. 11, the setting zone of a cleaning robot may be set to coincide with the location of the cleaning robot. An arbitrary area may be designated as the setting zone as the cleaning robot is a moving device.
When the input interface 19 receives a touch input 102 for selecting the setting zone 101 of the cleaning robot from the user, the operation of the cleaning robot may be executed or interrupted. When information is received from the cleaning robot, the information 103 may be displayed on the display 12. The information may include status information or measurement information. For example, information such as progress or completion of cleaning, battery status, filter status information and operation error may be displayed. In addition, it is possible to set a target point 105 by changing the direction of the cleaning robot, by receiving a location to move, or by receiving a moving route. Further, information for locating the cleaning robot may be displayed on the display 12. In doing so, it is possible to accurately locate and guide the robot by using object recognition. Even if the cleaning robot is hidden behind another object and not visible in the image, the location of the cleaning robot may be determined by location tracking.” and Paragraphs 0044-0045, “A user may make an input by touching the screen with a finger, which hereinafter is referred to as a touch input. The touch input may include a tap gesture of a user touching the screen with a single finger for a short period of time, a long press gesture (also known as a tap-and-hold gesture) of a user touching the screen with a single finger for a long period of time (e.g., longer than a threshold time duration) and a multi-touch gesture of a user touching the screen with two or more fingers simultaneously.
When the user's touch input received by the input interface 19 falls within a setting zone associated with an indoor device in the image displayed on the display 12, the controller 13 generates a control signal for controlling the device. The setting zone refers to an area (e.g., a hotspot) on the display 12 designated by the user for controlling a specific device that is mapped to the area. The location of the setting zone of an indoor device to be controlled on the screen may be either the same as or different from the location of the indoor device on the screen.” This demonstrates that the user may apply a variety of gestures to a “hotspot”/”setting zone” representing a device which is controllable and performing control of said device based on the gesture input by the user.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system as taught by Ebrahimi with the ability to input control instructions using a dragging action and to select a visual representation of the robot via the user interface so as to control the robot as taught by Lee. This would allow for the user to engage with the system quickly to control operation and “to monitor the statuses of the indoor devices in real time without requiring the user to navigate through different submenus for different devices” (Lee, Paragraph 0007).
Regarding claim 23, where all the limitations of claim 1 are discussed above, Ebrahimi does not specifically discuss the remote control interface including a device search option and a recharge key. However, Duffley, in the same field of endeavor of robotics, teaches:
23. (New) The interactive method according to claim 1, wherein the remote control interface is further configured with a device search option and a recharge key. (Paragraphs 0116-0117, “The dock recall button 520 and the robot locator button 522 collectively form a physical recall control group. The physical recall group has an immediate recall control state (by actuating the dock recall button 520) and a remote audible locator control state (by actuating the robot locator button 522). When activated, the dock recall button 520 will cause the device 300 to command the robot 200 to return to the dock 140.
[0117] When activated, the robot locator button 522 will cause the device 300 to command the robot 200 to emit an audible signal (e.g., beeping from an audio transducer or speaker 274B; FIG. 3). The user can use the audible signal to locate the robot 200.” As well as Paragraph 0038, “The robot dock 140 may include or be connected to a power supply and include a charger operative to charge a battery of the mobile robot 200 when the robot 200 is effectively docked at the robot dock 140. The dock 140 may include a receptacle to empty debris from the robot 200. In some embodiments, or in the invention, the dock 140 is connected (wired or wirelessly) to the private network 160 to enable or facilitate transmission of data from the robot 200 to the private network 160 and/or from the private network 160 to the robot 200.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic system as taught by Ebrahimi with the remote control interface including the ability to “find” the robot and to return the robot to its dock for charging as taught by Duffley. This would allow the user to efficiently control the operation of the robot without the need to physically approach the robot.
Claim(s) 4-6, 16-19 and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi in view of Lee and Duffley and in further view of Cheuvront et al. (US 20210260773 A1), hereinafter Cheuvront.
Regarding claim 4, where all the limitations of claim 1 are discussed above, Ebrahimi does not specifically teach the robot performing an alert in response to a user operation. However, Cheuvront, in the same field of endeavor of robotic control, teaches:
4. (Currently Amended) The interactive method according to claim [[3,]] 1 wherein the control mode selection interface is further provided with a device search option; and
the interactive method further comprises:
in response to an operation of the user on the device search option, controlling the robot to perform a preset reminding action. (Paragraph 0205, "In some examples, the user 100 provides (806) a vocal request for a notification of the present location of the mobile robot 300. To determine the present location of the mobile robot 300, the remote computing system 200 checks the robot map and the mobile robot's estimated position within the robot map. The remote computing system 200 then causes the audio media device 400 to emit an audible notification indicating a location of the mobile robot 300. In some implementations, the audio media device 400 emits a notification that indicates a location of the mobile robot 300 relative to the user 100 or the audio media device 400. The audible notification, for example, indicates that the mobile robot 300 is located in a cardinal direction relative to the audio media device 400. In some cases, the remote computing system 200 identifies a room where the mobile robot 300 is located using the constructed robot map or acoustic map and then causes the name of the room to be indicated in the audible notification. In some implementations, if the mobile robot's location is unknown, the voice command initiates a process to determine the mobile robot's location. The voice command, for example, causes the audio emission system 312 of the mobile robot 300 to emit an acoustic signal to be received by the audio media device 400. The audio media device 400 then determines the location of the mobile robot 300 based on the characteristics of the acoustic signal as received by the microphone unit 402 of the audio media device 400. In some cases, to notify the user 100 of the present location of the mobile robot 300, the voice command causes the mobile robot 300 to emit an acoustic signal or periodically emit acoustic signals, e.g., every 1 to 5 seconds, to help the user 100 find the location of the mobile robot 300 within the home 10.")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic control system as taught by Ebrahimi with the ability to cause the robot to emit an alert to assist a user in finding the device as taught by Cheuvront. While Ebrahimi teaches an alert/notification system, they do not specifically disclose the ability to utilize this notification system when a user is searching for the robot. It would be obvious to incorporate the ability for the robot to emit a notification to assist a user in finding the robot as taught by Cheuvront. This would ensure the user is able to easily locate the robot within the environment.
Regarding claim 5, where all the limitations of claim 1 are discussed above, Ebrahimi does not specifically teach the robot performing an alert in response to a user operation. However, Cheuvront, in the same field of endeavor of robotic control, teaches:
5. (Currently Amended) The interactive method according to claim [[2]] 1, wherein the remote control interface is further provided with a device search option;
the interactive method further comprises:
in response to an operation of the user on the device search option, controlling the robot to perform a preset reminding action. (Paragraph 0205, "In some examples, the user 100 provides (806) a vocal request for a notification of the present location of the mobile robot 300. To determine the present location of the mobile robot 300, the remote computing system 200 checks the robot map and the mobile robot's estimated position within the robot map. The remote computing system 200 then causes the audio media device 400 to emit an audible notification indicating a location of the mobile robot 300. In some implementations, the audio media device 400 emits a notification that indicates a location of the mobile robot 300 relative to the user 100 or the audio media device 400. The audible notification, for example, indicates that the mobile robot 300 is located in a cardinal direction relative to the audio media device 400. In some cases, the remote computing system 200 identifies a room where the mobile robot 300 is located using the constructed robot map or acoustic map and then causes the name of the room to be indicated in the audible notification. In some implementations, if the mobile robot's location is unknown, the voice command initiates a process to determine the mobile robot's location. The voice command, for example, causes the audio emission system 312 of the mobile robot 300 to emit an acoustic signal to be received by the audio media device 400. The audio media device 400 then determines the location of the mobile robot 300 based on the characteristics of the acoustic signal as received by the microphone unit 402 of the audio media device 400. In some cases, to notify the user 100 of the present location of the mobile robot 300, the voice command causes the mobile robot 300 to emit an acoustic signal or periodically emit acoustic signals, e.g., every 1 to 5 seconds, to help the user 100 find the location of the mobile robot 300 within the home 10.")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic control system as taught by Ebrahimi with the ability to cause the robot to emit an alert to assist a user in finding the device as taught by Cheuvront. While Ebrahimi teaches an alert/notification system, they do not specifically disclose the ability to utilize this notification system when a user is searching for the robot. It would be obvious to incorporate the ability for the robot to emit a notification to assist a user in finding the robot as taught by Cheuvront. This would ensure the user is able to easily locate the robot within the environment.
Regarding claim 6, where all the limitations of claim 1 are discussed above, Ebrahimi does not specifically teach the robot performing an alert in response to a user operation or the selection of an icon representing the robot. However, Cheuvront, in the same field of endeavor of robotic control, teaches:
6. (Currently Amended) The interactive method according to claim 1, wherein the controlling the robot to perform corresponding functions in the real environment corresponding to the motion map in response to an operation of the user … further comprises:
in response to the operation of the user … , controlling the robot to perform a preset reminding action. (Paragraph 0205, "In some examples, the user 100 provides (806) a vocal request for a notification of the present location of the mobile robot 300. To determine the present location of the mobile robot 300, the remote computing system 200 checks the robot map and the mobile robot's estimated position within the robot map. The remote computing system 200 then causes the audio media device 400 to emit an audible notification indicating a location of the mobile robot 300. In some implementations, the audio media device 400 emits a notification that indicates a location of the mobile robot 300 relative to the user 100 or the audio media device 400. The audible notification, for example, indicates that the mobile robot 300 is located in a cardinal direction relative to the audio media device 400. In some cases, the remote computing system 200 identifies a room where the mobile robot 300 is located using the constructed robot map or acoustic map and then causes the name of the room to be indicated in the audible notification. In some implementations, if the mobile robot's location is unknown, the voice command initiates a process to determine the mobile robot's location. The voice command, for example, causes the audio emission system 312 of the mobile robot 300 to emit an acoustic signal to be received by the audio media device 400. The audio media device 400 then determines the location of the mobile robot 300 based on the characteristics of the acoustic signal as received by the microphone unit 402 of the audio media device 400. In some cases, to notify the user 100 of the present location of the mobile robot 300, the voice command causes the mobile robot 300 to emit an acoustic signal or periodically emit acoustic signals, e.g., every 1 to 5 seconds, to help the user 100 find the location of the mobile robot 300 within the home 10.")
However, Lee, in the same field of endeavor, teaches:
… on the identifier of the robot in the control interface … on the identifier of the robot in the control interface … (Paragraph 0069, “When the device selected upon receiving the input is movable and the input interface 16 receives the location to which the selected device is to be moved, the controller 13 may generate a control signal for moving the selected device to the received location. By receiving the location to which the device is moved, which is movable or needs to be moved such as a robot cleaner (e.g., a robot vacuum cleaner) and an air cleaner (e.g., an air purifier), the device can be moved. In addition to the location to which the device is to be moved, the movement path may be input by a user via dragging (e.g., a tap and drag gesture) or the like.” as well as Paragraphs 0094-0095, “Referring to FIG. 11, the setting zone of a cleaning robot may be set to coincide with the location of the cleaning robot. An arbitrary area may be designated as the setting zone as the cleaning robot is a moving device.
When the input interface 19 receives a touch input 102 for selecting the setting zone 101 of the cleaning robot from the user, the operation of the cleaning robot may be executed or interrupted. When information is received from the cleaning robot, the information 103 may be displayed on the display 12. The information may include status information or measurement information. For example, information such as progress or completion of cleaning, battery status, filter status information and operation error may be displayed. In addition, it is possible to set a target point 105 by changing the direction of the cleaning robot, by receiving a location to move, or by receiving a moving route. Further, information for locating the cleaning robot may be displayed on the display 12. In doing so, it is possible to accurately locate and guide the robot by using object recognition. Even if the cleaning robot is hidden behind another object and not visible in the image, the location of the cleaning robot may be determined by location tracking.” and Paragraphs 0044-0045, “A user may make an input by touching the screen with a finger, which hereinafter is referred to as a touch input. The touch input may include a tap gesture of a user touching the screen with a single finger for a short period of time, a long press gesture (also known as a tap-and-hold gesture) of a user touching the screen with a single finger for a long period of time (e.g., longer than a threshold time duration) and a multi-touch gesture of a user touching the screen with two or more fingers simultaneously.
When the user's touch input received by the input interface 19 falls within a setting zone associated with an indoor device in the image displayed on the display 12, the controller 13 generates a control signal for controlling the device. The setting zone refers to an area (e.g., a hotspot) on the display 12 designated by the user for controlling a specific device that is mapped to the area. The location of the setting zone of an indoor device to be controlled on the screen may be either the same as or different from the location of the indoor device on the screen.” This demonstrates that the user may apply a variety of gestures to a “hotspot”/”setting zone” representing a device which is controllable and performing control of said device based on the gesture input by the user.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic control system as taught by Ebrahimi with the ability to cause the robot to emit an alert to assist a user in finding the device as taught by Cheuvront and with the ability to select the robot and drag the icon via the user interface so as to control the robot as taught by Lee. While Ebrahimi teaches an alert/notification system, they do not specifically disclose the ability to utilize this notification system when a user selects the robot on the interface. It would be obvious to incorporate the ability for the robot to emit a notification to assist a user in identifying the robot as taught by Cheuvront. This would ensure the user is able to easily locate the robot within the environment, especially when a plurality of robots are present. Incorporating the ability to select the robot icon and perform operations such as dragging would allow the user to engage with the system quickly to control operation and “to monitor the statuses of the indoor devices in real time without requiring the user to navigate through different submenus for different devices” (Lee, Paragraph 0007).
Regarding claim 16, where all the limitations of claim 13 are discussed above, Ebrahimi does not specifically teach the robot performing an alert in response to a user operation. However, Cheuvront, in the same field of endeavor of robotic control, teaches:
16. (Currently Amended) The electronic device according to claim [[15]] 13, wherein the control mode selection interface is further provided with a device search option, and the processor is further configured to execute the executable instructions to:
in response to an operation of the user on the device search option, control the robot to perform a preset reminding action. (Paragraph 0205, "In some examples, the user 100 provides (806) a vocal request for a notification of the present location of the mobile robot 300. To determine the present location of the mobile robot 300, the remote computing system 200 checks the robot map and the mobile robot's estimated position within the robot map. The remote computing system 200 then causes the audio media device 400 to emit an audible notification indicating a location of the mobile robot 300. In some implementations, the audio media device 400 emits a notification that indicates a location of the mobile robot 300 relative to the user 100 or the audio media device 400. The audible notification, for example, indicates that the mobile robot 300 is located in a cardinal direction relative to the audio media device 400. In some cases, the remote computing system 200 identifies a room where the mobile robot 300 is located using the constructed robot map or acoustic map and then causes the name of the room to be indicated in the audible notification. In some implementations, if the mobile robot's location is unknown, the voice command initiates a process to determine the mobile robot's location. The voice command, for example, causes the audio emission system 312 of the mobile robot 300 to emit an acoustic signal to be received by the audio media device 400. The audio media device 400 then determines the location of the mobile robot 300 based on the characteristics of the acoustic signal as received by the microphone unit 402 of the audio media device 400. In some cases, to notify the user 100 of the present location of the mobile robot 300, the voice command causes the mobile robot 300 to emit an acoustic signal or periodically emit acoustic signals, e.g., every 1 to 5 seconds, to help the user 100 find the location of the mobile robot 300 within the home 10.")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic control system as taught by Ebrahimi with the ability to cause the robot to emit an alert to assist a user in finding the device as taught by Cheuvront. While Ebrahimi teaches an alert/notification system, they do not specifically disclose the ability to utilize this notification system when a user is searching for the robot. It would be obvious to incorporate the ability for the robot to emit a notification to assist a user in finding the robot as taught by Cheuvront. This would ensure the user is able to easily locate the robot within the environment.
Regarding claim 17, where all the limitations of claim 13 are discussed above, Ebrahimi does not specifically teach the robot performing an alert in response to a user operation. However, Cheuvront, in the same field of endeavor of robotic control, teaches:
17. (Currently Amended) The electronic device according to claim [[14]] 13, wherein the remote control interface is further provided with a device search option, and the processor is further configured to execute the executable instructions to:
in response to an operation of the user on the device search option, control the robot to perform a preset reminding action. (Paragraph 0205, "In some examples, the user 100 provides (806) a vocal request for a notification of the present location of the mobile robot 300. To determine the present location of the mobile robot 300, the remote computing system 200 checks the robot map and the mobile robot's estimated position within the robot map. The remote computing system 200 then causes the audio media device 400 to emit an audible notification indicating a location of the mobile robot 300. In some implementations, the audio media device 400 emits a notification that indicates a location of the mobile robot 300 relative to the user 100 or the audio media device 400. The audible notification, for example, indicates that the mobile robot 300 is located in a cardinal direction relative to the audio media device 400. In some cases, the remote computing system 200 identifies a room where the mobile robot 300 is located using the constructed robot map or acoustic map and then causes the name of the room to be indicated in the audible notification. In some implementations, if the mobile robot's location is unknown, the voice command initiates a process to determine the mobile robot's location. The voice command, for example, causes the audio emission system 312 of the mobile robot 300 to emit an acoustic signal to be received by the audio media device 400. The audio media device 400 then determines the location of the mobile robot 300 based on the characteristics of the acoustic signal as received by the microphone unit 402 of the audio media device 400. In some cases, to notify the user 100 of the present location of the mobile robot 300, the voice command causes the mobile robot 300 to emit an acoustic signal or periodically emit acoustic signals, e.g., every 1 to 5 seconds, to help the user 100 find the location of the mobile robot 300 within the home 10.")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic control system as taught by Ebrahimi with the ability to cause the robot to emit an alert to assist a user in finding the device as taught by Cheuvront. While Ebrahimi teaches an alert/notification system, they do not specifically disclose the ability to utilize this notification system when a user is searching for the robot. It would be obvious to incorporate the ability for the robot to emit a notification to assist a user in finding the robot as taught by Cheuvront. This would ensure the user is able to easily locate the robot within the environment.
Regarding claim 18, where all the limitations of claim 13 are discussed above, Ebrahimi does not specifically teach the robot performing an alert in response to a user operation or the selection of an icon representing the robot. However, Cheuvront, in the same field of endeavor of robotic control, teaches:
18. (Currently Amended) The electronic device according to claim 13, wherein the processor is further configured to execute the executable instructions to:
in response to the operation of the user … , control the robot to perform a preset reminding action. (Paragraph 0205, "In some examples, the user 100 provides (806) a vocal request for a notification of the present location of the mobile robot 300. To determine the present location of the mobile robot 300, the remote computing system 200 checks the robot map and the mobile robot's estimated position within the robot map. The remote computing system 200 then causes the audio media device 400 to emit an audible notification indicating a location of the mobile robot 300. In some implementations, the audio media device 400 emits a notification that indicates a location of the mobile robot 300 relative to the user 100 or the audio media device 400. The audible notification, for example, indicates that the mobile robot 300 is located in a cardinal direction relative to the audio media device 400. In some cases, the remote computing system 200 identifies a room where the mobile robot 300 is located using the constructed robot map or acoustic map and then causes the name of the room to be indicated in the audible notification. In some implementations, if the mobile robot's location is unknown, the voice command initiates a process to determine the mobile robot's location. The voice command, for example, causes the audio emission system 312 of the mobile robot 300 to emit an acoustic signal to be received by the audio media device 400. The audio media device 400 then determines the location of the mobile robot 300 based on the characteristics of the acoustic signal as received by the microphone unit 402 of the audio media device 400. In some cases, to notify the user 100 of the present location of the mobile robot 300, the voice command causes the mobile robot 300 to emit an acoustic signal or periodically emit acoustic signals, e.g., every 1 to 5 seconds, to help the user 100 find the location of the mobile robot 300 within the home 10.")
However, Lee, in the same field of endeavor, teaches:
… on the identifier of the robot in the control interface … (Paragraph 0069, “When the device selected upon receiving the input is movable and the input interface 16 receives the location to which the selected device is to be moved, the controller 13 may generate a control signal for moving the selected device to the received location. By receiving the location to which the device is moved, which is movable or needs to be moved such as a robot cleaner (e.g., a robot vacuum cleaner) and an air cleaner (e.g., an air purifier), the device can be moved. In addition to the location to which the device is to be moved, the movement path may be input by a user via dragging (e.g., a tap and drag gesture) or the like.” as well as Paragraphs 0094-0095, “Referring to FIG. 11, the setting zone of a cleaning robot may be set to coincide with the location of the cleaning robot. An arbitrary area may be designated as the setting zone as the cleaning robot is a moving device.
When the input interface 19 receives a touch input 102 for selecting the setting zone 101 of the cleaning robot from the user, the operation of the cleaning robot may be executed or interrupted. When information is received from the cleaning robot, the information 103 may be displayed on the display 12. The information may include status information or measurement information. For example, information such as progress or completion of cleaning, battery status, filter status information and operation error may be displayed. In addition, it is possible to set a target point 105 by changing the direction of the cleaning robot, by receiving a location to move, or by receiving a moving route. Further, information for locating the cleaning robot may be displayed on the display 12. In doing so, it is possible to accurately locate and guide the robot by using object recognition. Even if the cleaning robot is hidden behind another object and not visible in the image, the location of the cleaning robot may be determined by location tracking.” and Paragraphs 0044-0045, “A user may make an input by touching the screen with a finger, which hereinafter is referred to as a touch input. The touch input may include a tap gesture of a user touching the screen with a single finger for a short period of time, a long press gesture (also known as a tap-and-hold gesture) of a user touching the screen with a single finger for a long period of time (e.g., longer than a threshold time duration) and a multi-touch gesture of a user touching the screen with two or more fingers simultaneously.
When the user's touch input received by the input interface 19 falls within a setting zone associated with an indoor device in the image displayed on the display 12, the controller 13 generates a control signal for controlling the device. The setting zone refers to an area (e.g., a hotspot) on the display 12 designated by the user for controlling a specific device that is mapped to the area. The location of the setting zone of an indoor device to be controlled on the screen may be either the same as or different from the location of the indoor device on the screen.” This demonstrates that the user may apply a variety of gestures to a “hotspot”/”setting zone” representing a device which is controllable and performing control of said device based on the gesture input by the user.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic control system as taught by Ebrahimi with the ability to cause the robot to emit an alert to assist a user in finding the device as taught by Cheuvront and with the ability to select the robot and drag the icon via the user interface so as to control the robot as taught by Lee. While Ebrahimi teaches an alert/notification system, they do not specifically disclose the ability to utilize this notification system when a user selects the robot on the interface. It would be obvious to incorporate the ability for the robot to emit a notification to assist a user in identifying the robot as taught by Cheuvront. This would ensure the user is able to easily locate the robot within the environment, especially when a plurality of robots are present. Incorporating the ability to select the robot icon and perform operations such as dragging would allow the user to engage with the system quickly to control operation and “to monitor the statuses of the indoor devices in real time without requiring the user to navigate through different submenus for different devices” (Lee, Paragraph 0007).
Regarding claim 19, where all the limitations of claim 16 are discussed above, Ebrahimi does not specifically teach the preset reminding action being audio playback, vibration, or light flicker. However, Cheuvront, in the same field of endeavor of robotic control, teaches:
19. (Currently Amended) The electronic device according to claim 16, wherein the processor is further configured to execute the executable instructions to:
control the robot to perform at least one preset reminding action selected from the group consisting of:
audio playback, vibration, and light flicker. (Paragraph 0205, "In some examples, the user 100 provides (806) a vocal request for a notification of the present location of the mobile robot 300. To determine the present location of the mobile robot 300, the remote computing system 200 checks the robot map and the mobile robot's estimated position within the robot map. The remote computing system 200 then causes the audio media device 400 to emit an audible notification indicating a location of the mobile robot 300. In some implementations, the audio media device 400 emits a notification that indicates a location of the mobile robot 300 relative to the user 100 or the audio media device 400. The audible notification, for example, indicates that the mobile robot 300 is located in a cardinal direction relative to the audio media device 400. In some cases, the remote computing system 200 identifies a room where the mobile robot 300 is located using the constructed robot map or acoustic map and then causes the name of the room to be indicated in the audible notification. In some implementations, if the mobile robot's location is unknown, the voice command initiates a process to determine the mobile robot's location. The voice command, for example, causes the audio emission system 312 of the mobile robot 300 to emit an acoustic signal to be received by the audio media device 400. The audio media device 400 then determines the location of the mobile robot 300 based on the characteristics of the acoustic signal as received by the microphone unit 402 of the audio media device 400. In some cases, to notify the user 100 of the present location of the mobile robot 300, the voice command causes the mobile robot 300 to emit an acoustic signal or periodically emit acoustic signals, e.g., every 1 to 5 seconds, to help the user 100 find the location of the mobile robot 300 within the home 10.")
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic control system as taught by Ebrahimi with the ability to cause the robot to emit an alert to assist a user in finding the device as taught by Cheuvront. While Ebrahimi teaches an alert/notification system, they do not specifically disclose the ability to utilize this notification system when a user is searching for the robot. It would be obvious to incorporate the ability for the robot to emit a notification to assist a user in finding the robot as taught by Cheuvront. This would ensure the user is able to easily locate the robot within the environment, especially when a plurality of robots are present.
Regarding claim 21, where all the limitations of claim 1 are discussed above, Ebrahimi further teaches:
21. (Currently Amended) The interactive method according to claim 1, wherein
(Paragraph 0579, "In some embodiments, the processor of the robot detects rooms in real time. In some embodiments, the processor predicts a room within which the robot is in based on a comparison between real time data collected and map data. For example, the processor may detect a particular room upon identifying a particular feature known to be present within the particular room. In some embodiments, the processor of the robot uses room detection to perform work in one room at a time. In some embodiments, the processor determines a logical segmentation of rooms based on any of sensor data and user input received by the application designating rooms in the map. In some embodiments, rooms segmented by the processor or the user using the application are different shapes and sizes and are not limited to being a rectangular shape.") … controlling the robot to perform corresponding functions in the real environment corresponding to the motion map (Paragraph 0473, "The robot may be pushed by a human operator along a path during which sensors of the robot observe the environment, including landmark objects, such that they may learn the path and execute it autonomously in later work sessions. In future work sessions, the processor may understand a location of the robot and determine a next move of the robot upon sensing the presence of the object. The human operator may alternatively use an application of a communication device to draw the path of the robot in a displayed map. In some embodiments, upon detecting one or more particular visual words, such as the features defining the indentation pattern of object, the robot may autonomously execute one or more instructions. In embodiments, the robot may be manually set to react in various ways for different visual words or may be trained using a neural network that observes human behaviors while the robot is pushed around by the human. In embodiments, planned paths of the robot may almost be the same as a path a human would traverse and actual trajectories of the robot are deemed as acceptable. As the robot passes by landmarks, such as the object with unique indentation pattern, the processor of the robot may develop a reinforced sense of where the robot is expected to be located upon observing each landmark and where the robot is supposed to go. In some embodiments, the processor may be further refined by the operator training the robot digitally (e.g., via an application). The spatial representation of the environment (e.g., 2D, 3D, 3D+RGB, etc.) may be shown to the user using an application (e.g., using a mobile device or computer) and the user may use the application to draw lines that represent where the user wants the robot to drive." Also see paragraph 0794) further comprises at least one of the following:
changing a motion zone of the robot, in response to inputting a movement operation of the user (Paragraph 0771, "In some embodiments, a user may interact with the robot using different gestures and interaction types. For example, a user may gently kick or taps the robot twice (or another number of time) to skip a current room and move onto a next room or end a current scheduled cleaning round.") … and …
Ebrahimi does not specifically teach allowing a user to interact directly with the visual representation of the robot to trigger an action, or causing the robot to indicate its location so that the user can find it. However, Cheuvront, in the same field of endeavor of robotic control, teaches:
… triggering the robot to perform a reminding action in order to find the robot, in response to a direct operation of the user (Paragraph 0111, "A user 100 communicates to the mobile robot 300 a particular room identifier (e.g., kitchen) associated with those recognizable objects. During a mission, when the mobile robot 300 recognizes these objects, it communicates its location to the user by causing emission of an audible alert, e.g., by requesting that the AMD 400 or the mobile computing device 202 produce an audible alert, or causing a visual alert to issue, e.g., by displaying a text notification on the mobile computing device 202 indicating the associated stored room identifier.") …
However, Lee, in the same field of endeavor of robotic control, teaches:
… in response to the operation of the user on the identifier of the robot in the control interface, … on the identifier of the robot; … on the identifier of the robot. (Paragraph 0069, “When the device selected upon receiving the input is movable and the input interface 16 receives the location to which the selected device is to be moved, the controller 13 may generate a control signal for moving the selected device to the received location. By receiving the location to which the device is moved, which is movable or needs to be moved such as a robot cleaner (e.g., a robot vacuum cleaner) and an air cleaner (e.g., an air purifier), the device can be moved. In addition to the location to which the device is to be moved, the movement path may be input by a user via dragging (e.g., a tap and drag gesture) or the like.” as well as Paragraphs 0094-0095, “Referring to FIG. 11, the setting zone of a cleaning robot may be set to coincide with the location of the cleaning robot. An arbitrary area may be designated as the setting zone as the cleaning robot is a moving device.
When the input interface 19 receives a touch input 102 for selecting the setting zone 101 of the cleaning robot from the user, the operation of the cleaning robot may be executed or interrupted. When information is received from the cleaning robot, the information 103 may be displayed on the display 12. The information may include status information or measurement information. For example, information such as progress or completion of cleaning, battery status, filter status information and operation error may be displayed. In addition, it is possible to set a target point 105 by changing the direction of the cleaning robot, by receiving a location to move, or by receiving a moving route. Further, information for locating the cleaning robot may be displayed on the display 12. In doing so, it is possible to accurately locate and guide the robot by using object recognition. Even if the cleaning robot is hidden behind another object and not visible in the image, the location of the cleaning robot may be determined by location tracking.” and Paragraphs 0044-0045, “A user may make an input by touching the screen with a finger, which hereinafter is referred to as a touch input. The touch input may include a tap gesture of a user touching the screen with a single finger for a short period of time, a long press gesture (also known as a tap-and-hold gesture) of a user touching the screen with a single finger for a long period of time (e.g., longer than a threshold time duration) and a multi-touch gesture of a user touching the screen with two or more fingers simultaneously.
When the user's touch input received by the input interface 19 falls within a setting zone associated with an indoor device in the image displayed on the display 12, the controller 13 generates a control signal for controlling the device. The setting zone refers to an area (e.g., a hotspot) on the display 12 designated by the user for controlling a specific device that is mapped to the area. The location of the setting zone of an indoor device to be controlled on the screen may be either the same as or different from the location of the indoor device on the screen.” This demonstrates that the user may apply a variety of gestures to a “hotspot”/”setting zone” representing a controllable device, and that control of that device is performed based on the gesture input by the user.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the robotic control system as taught by Ebrahimi with the ability to cause the robot to emit an alert to assist a user in finding the device as taught by Cheuvront, as well as with the ability to interact with the visual representation (hotspot/setting zone) of the robot to trigger an action as taught by Lee. While Ebrahimi teaches an alert/notification system, it does not specifically disclose utilizing this notification system when a user selects the robot on the interface. It would have been obvious to incorporate the ability for the robot to emit a notification to assist a user in identifying the robot as taught by Cheuvront. This would ensure the user is able to easily locate the robot within the environment, particularly where a plurality of robots is present. Incorporating the ability to interact with the visual representation via the user interface would allow the user to easily trigger actions remotely.
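For illustration only, the following Python sketch shows one hypothetical way touch gestures falling within a device's setting zone (hotspot) could be routed to control signals, consistent in spirit with the tap, long-press, and drag interactions discussed above; the zone layout, gesture names, and command formats are assumptions of this illustration rather than teachings of Lee or Cheuvront.

# Illustrative sketch only: routes touch gestures that land in a device's
# "setting zone" (hotspot) to control commands. Zone geometry, gesture names,
# and command formats are assumptions, not drawn from the cited references.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class SettingZone:
    device_id: str
    x: int
    y: int
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)


def dispatch_gesture(zones: List[SettingZone], gesture: str,
                     start: Tuple[int, int],
                     end: Optional[Tuple[int, int]] = None) -> Optional[Dict]:
    """Return a control command for the zone the gesture started in, if any."""
    zone = next((z for z in zones if z.contains(*start)), None)
    if zone is None:
        return None  # Gesture fell outside every setting zone.
    if gesture == "tap":
        return {"device": zone.device_id, "command": "toggle_operation"}
    if gesture == "long_press":
        return {"device": zone.device_id, "command": "reminding_action"}
    if gesture == "drag" and end is not None:
        return {"device": zone.device_id, "command": "set_target_point",
                "target_px": end}
    return None


if __name__ == "__main__":
    zones = [SettingZone("cleaning_robot", x=100, y=200, width=64, height=64)]
    print(dispatch_gesture(zones, "tap", start=(120, 220)))
    print(dispatch_gesture(zones, "drag", start=(120, 220), end=(400, 500)))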
Conclusion
The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested of the Applicant, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP § 2123.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
/H.J.K./Examiner, Art Unit 3657
/ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657