Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/16/2026 has been entered.
Response to Amendment
This action is in response to the amendments and remarks filed on 01/16/2026. Claims 1-12 and 15-21 are considered in this Office action. Claims 13-14 are cancelled. Claims 1, 19, and 21 have been amended. Claims 1-12 and 15-21 are pending examination. Applicant’s new claim limitations necessitated new grounds of rejection; therefore, claims 1-12 and 15-21 are rejected.
Response to Arguments
Applicant presents the following arguments regarding the previous Office action:
Jones fails to disclose real-time monitoring of user position on which to base dynamic and continuous adjustments of the Do Not Disturb area.
Applicant’s argument A, with respect to the amended independent claims, has been fully considered but is moot in light of the new grounds of rejection set forth below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 8, 11, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Jones et al. (US10878294B2) in view of Mateus et al. (Efficient and robust Pedestrian Detection using Deep Learning for Human-Aware Navigation).
Regarding claim 1, Jones discloses, a work tool system comprising a work tool and a server, the server comprising a controller and a communication interface and the work tool comprising a controller and a communication interface, wherein the server is configured to: receive movement indications for a user through the communication interface; (Jones, Description, Paragraph 41, in the example of FIG. 3, the home 300 includes linked devices 328A and 328B. In some implementations, each of the linked devices 328A and 328B includes, e.g., sensors suitable for performing one or more of monitoring the home 300, monitoring occupants of the home 300, and monitoring operations of the mobile cleaning robot 102); determine a movement pattern based on the movement indications; determine a Do Not Disturb area suitable for the movement pattern; and to transmit information on the Do Not Disturb area to the work tool through the communication interface; (Jones, Summary, Paragraph 39, the mobile cleaning robot can include a communication module that is configured to communicate with at least one other device in the environment that collects information about the environment, in which the learning module can be configured to establish the model of the environment also based on information provided from the at least one other device) … (the mobile cleaning robot can include a learning module configured to determine foot traffic pattern in the environment based on the images captured by the at least one camera over a period of time. The control module can be configured to schedule a cleaning task to be performed at an area taking into account of the foot traffic pattern at the area. 
The control module can be configured to schedule a cleaning task to be performed at the area during a time period when there is less foot traffic at the area as compared to other time periods); and wherein the work tool is configured to: receive information on the Do Not Disturb area; control the work tool so that the Do Not Disturb area is not violated (Jones, Summary, Paragraph 42, the control module can be configured to, upon analyzing the calendar and determining that a party or gathering is to be held at a particular time of a particular day, schedule the cleaning tasks to be completed before the start time of the party or gathering) … (Description, Paragraph 54, the control module 110 is configured to, upon recognizing that a pet (e.g., a dog or cat) is located at a first location, control the mobile robot to keep a distance from the pet and navigate to a second location away from the first location, and perform a cleaning task at the second location to avoid disturbing the pet), wherein the work tool system is a robotic work tool system and the work tool is a robotic work tool, (Abstract, a mobile cleaning robot includes a cleaning head configured to clean a floor surface in an environment) … (Summary Paragraph 1, the description features a system for enabling a mobile robot to be aware of its surroundings and perform tasks taking into account of the characteristics of its surroundings), and wherein the robotic work tool is configured to control the robotic work tool so that the Do Not Disturb area is not violated, by controlling the navigation of the robotic work tool so that the Do Not Disturb area is not entered (Summary, Paragraph 9, an option to select a no-touch mode for the location or a region encompassing the location, in which the no-touch mode indicates that the robot is controlled to not bump into any object above ground level at the location or the region encompassing the location). 
However, Jones does not explicitly disclose, a camera arranged on the work tool that tracks the movement indications of the user in real-time and wherein the Do Not Disturb area is continually adjusted based on the movement indications of the user tracked in real-time.
Nevertheless, Mateus who is in the same field of endeavor of work robot human-aware navigation discloses, a camera arranged on the work tool (1. Introduction, we address the robot navigation in the presence of humans, resorting to multi cameras (static outside and/or onboard cameras) for the vision-based person tracking system), that tracks the movement indications of the user in real-time (1.1.2. Human-Aware Navigation, constraints concerning social rules (e.g., navigate on the right side of narrow passages) and low-level human navigation behavior (e.g., face direction of movement) are also taken into account), and wherein the Do Not Disturb area is continually adjusted based on the movement indications of the user (1.1.2. Human-Aware Navigation, prediction cost function which, by increasing the cost in front of a moving human, decreases the probability of the robot entering that area), tracked in real-time (4.2. Performance evaluation on real scenarios, it is possible to obtain approximately 10 FPS for each of the datasets (as shown in Table 2, field “Threshold”), which is suited for real-time applications).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine elements of the disclosures of Jones and Mateus. This would serve to enhance the system by providing additional cameras to collect more information about the environment, increasing the precision of the estimation of a user's position. This in turn allows robot navigation to be seamless, minimizing the risk of collision and increasing user satisfaction.
Justification for combining the disclosures of Jones and Mateus comes not only from the state of the art but also from Mateus (1. Introduction, it is useful to have other external sensors, which can add more information about the environment, not only in terms of coverage space, but also in terms of precision on the estimation of the person’s position).
Regarding claim 2, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. Additionally, Jones discloses, the Do Not Disturb area is dynamic and adapts to the movements of the user (Jones, Description, Paragraph 25, the mobile robot 102 includes a learning module 126 that is configured to learn about patterns in the environment, such as foot traffic in a home. For example, the learning module 126 can be configured to store certain parameter values over time and perform statistical analyses of the stored parameter values to detect patterns in the data. The learning module 126 may store counts of human presence at each grid point on a map for each time period of the day for each day of the week. By analyzing the stored data, the learning module 126 can determine, e.g., for a given time during a given day of the week, which grid points on the map have higher or lower foot traffic. The learning module 126 can determine, e.g., for a given room in the house, which periods of time have less or no foot traffic).
Regarding claim 8, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. Additionally, Jones discloses the Do Not Disturb area comprises a first sub-area and a second sub-area (Jones, Summary, Paragraph 9, the notice can include a plurality of user-selectable options including at least one of (i) an option for the robot to go to other locations and return to the location after a preset amount of time, (ii) an option to perform an extended cleaning task at the location on a next cleaning session, (iii) an option to move the impermanent barrier, or (iv) an option to select a no-touch mode for the location or a region encompassing the location, in which the no-touch mode indicates that the robot is controlled to not bump into any object above ground level at the location or the region encompassing the location).
Regarding claim 11, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. Additionally, Jones discloses the work tool is a lighting tool (Jones, Description, Paragraph 44, Lines 15-20, in certain implementations, the wireless links permit communication with one or more devices including, but not limited to smart light bulbs, thermostats, garage door openers, door locks, remote controls, televisions, security systems, security cameras, smoke detectors, video game consoles, other robotic systems, or other communication enabled sensing and/or actuation devices or appliances).
Regarding claim 21, Jones discloses a work tool system comprising a work tool and a server (Description, Paragraph 36, the remote computing system 204 can include cloud server computers that are accessed through the Internet), the server comprising a controller and a communication interface (Description, Paragraph 36, in some examples, the controller for each of the mobile cleaning robots 102, 103, the linked devices 328A, 328B, and other devices may initiate and maintain wireless links directly with one another) … (Description, Paragraph 37, the mobile robot 202 communicates with the remote computing system 204. It is understood that in some implementations, the mobile robot 202 can also communicate with the mobile computing device 104), and the work tool comprising a controller and a communication interface (Description, the controllers for each of the mobile robots 202, 103, the linked devices 328A, 328B, and other devices may initiate and maintain wireless links for communication with the remote computing system 204) … (Summary, Paragraph 41, the mobile cleaning robot includes a storage device configured to store a map of the environment, a cleaning head, and a wireless communication module configured to communicate with an external computer), wherein the server is configured to: receive movement indications for a user through the communication interface (Summary, Paragraph 17, a learning module configured to determine foot traffic pattern in the environment based on the images captured by the at least one camera over a period of time) … (Summary, Paragraph 38, the mobile cleaning robot can include a communication module that is configured to communicate with at least one other device in the environment that collects information about the environment, in which the learning module can be configured to establish the model of the environment also based on information provided from the at least one other device); determine a movement pattern based on the movement indications (Summary,
Paragraph 17, the mobile cleaning robot can include a learning module configured to determine foot traffic pattern in the environment based on the images captured by the at least one camera over a period of time); determine a Do Not Disturb area suitable for the movement pattern (Summary, Paragraph 19, the control module can be configured to schedule a cleaning task to be performed at the area during a time period when there is less foot traffic at the area as compared to other time periods); and to transmit information on the Do Not Disturb area to the work tool through the communication interface (Summary, Paragraph 41, the mobile cleaning robot includes a control module configured to: use the wireless communication module to communicate with the computer and access the calendar to identify events that affect foot traffic in the environment; determine a schedule for cleaning tasks in the environment taking into account of timing of events that affect the foot traffic in the environment; and control the mobile cleaning robot to navigate in the environment using the map and perform the cleaning tasks according to the schedule) and wherein the work tool is configured to: receive information on the Do Not Disturb area; control the work tool so that the Do Not Disturb area is not violated (Summary, Paragraph 9, an option to select a no-touch mode for the location or a region encompassing the location, in which the no-touch mode indicates that the robot is controlled to not bump into any object above ground level at the location or the region encompassing the location), wherein a user positioning determining device separate from the work tool and the server determines a position of the user to define the movement indications (Summary, Paragraph 39, a camera located remotely from the robot that can capture images of the environment, or a smart lighting fixture having information about when the lighting fixture is turned on).
However, Jones does not explicitly disclose, a camera arranged on the work tool that tracks the movement indications of the user in real-time and wherein the Do Not Disturb area is continually adjusted based on the movement indications of the user tracked in real-time.
Nevertheless, Mateus who is in the same field of endeavor of work robot human-aware navigation discloses, a camera arranged on the work tool (1. Introduction, we address the robot navigation in the presence of humans, resorting to multi cameras (static outside and/or onboard cameras) for the vision-based person tracking system), that tracks the movement indications of the user in real-time (1.1.2. Human-Aware Navigation, constraints concerning social rules (e.g., navigate on the right side of narrow passages) and low-level human navigation behavior (e.g., face direction of movement) are also taken into account), and wherein the Do Not Disturb area is continually adjusted based on the movement indications of the user (1.1.2. Human-Aware Navigation, prediction cost function which, by increasing the cost in front of a moving human, decreases the probability of the robot entering that area), tracked in real-time (4.2. Performance evaluation on real scenarios, it is possible to obtain approximately 10 FPS for each of the datasets (as shown in Table 2, field “Threshold”), which is suited for real-time applications).
Claims 3-5, 7, 9, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Jones et al. (US10878294B2) in view of Mateus et al. (Efficient and robust Pedestrian Detection using Deep Learning for Human-Aware Navigation), further in view of O'Sullivan et al. (US9776323B2).
Regarding claim 3, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. However, O’Sullivan who is in the same field of endeavor of human intention classification and tracking discloses, the server being further configured to determine the Do Not Disturb area based on the speed of the user (O'Sullivan, Detailed Description, Paragraph 9, Lines 6-11, briefly, entities 110 such as humans sharing the robot workspace 104 can be identified, their present location and orientation determined, and their trajectory tracked by the equipment 120. This data can be stored in memory 150 of the robot 130 as shown at 152, with each record 152 typically including an entity ID along with the entity's tracked trajectory 154 (which may include the entity's current location, speed of travel, and orientation relative to the robot 130 in the space 104)).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the combination of Jones and Mateus with O’Sullivan’s disclosure. This would serve to enhance the system by providing a means to analyze user behaviors in real time to determine their trajectory and intention. This would allow the system to navigate while avoiding users as they traverse an environment, enabling a seamless, interruption-free experience.
Justification for combining the combination of Jones and Mateus with O’Sullivan comes not only from the state of the art but also from O’Sullivan (O’Sullivan, Final Paragraph, numerous changes in the combination and arrangement of parts can be resorted to by those skilled in the art without departing from the spirit and scope of the invention, as claimed).
Regarding claim 4, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. Additionally, O’Sullivan discloses, the server is further configured to determine the Do Not Disturb area based on the direction of travel of the user (O'Sullivan, Detailed Description, Paragraph 9, Lines 6-11, briefly, entities 110 such as humans sharing the robot workspace 104 can be identified, their present location and orientation determined, and their trajectory tracked by the equipment 120. This data can be stored in memory 150 of the robot 130 as shown at 152, with each record 152 typically including an entity ID along with the entity's tracked trajectory 154 (which may include the entity's current location, speed of travel, and orientation relative to the robot 130 in the space 104)).
Regarding claim 5, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. Additionally, O’Sullivan discloses the server is further configured to determine the Do Not Disturb area based on the identity of the user (O'Sullivan, Detailed Description, Paragraph 9, Lines 6-11, briefly, entities 110 such as humans sharing the robot workspace 104 can be identified, their present location and orientation determined, and their trajectory tracked by the equipment 120. This data can be stored in memory 150 of the robot 130 as shown at 152, with each record 152 typically including an entity ID along with the entity's tracked trajectory 154 (which may include the entity's current location, speed of travel, and orientation relative to the robot 130 in the space 104)).
Regarding claim 7, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. Furthermore, O’Sullivan discloses, the server being further configured to receive several movement indications for the user, determine at least one usual movement pattern based on the received movement indications for the plurality of users and to match the received movement indications for the user to the at least one usual movement pattern (O'Sullivan, Detailed Description, Paragraph 26, Lines 7-12, the Gaussian progress regression of module 520 is used to compute the probabilistic distribution of each individual human trajectory p(Qi). This regression is solely computed from the past trajectory and does not directly integrate the human intentions for the robot interactions. Therefore, pHI(QALL), which is a joint probabilistic distribution of all trajectories, is computed using the classification results of human trajectories). The justification and reasoning for combining these disclosures is the same reasoning and justification given in regard to claim 3.
Regarding claim 9, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. Furthermore, O’Sullivan discloses, the server is further configured to receive movement indications for a second user and to determine a Do Not Disturb area based on the movement indications for the user and the movement indications for the second user (O'Sullivan, Summary, Paragraph 6, Lines 4-9, it may generate a predicted trajectory in the workspace for each of the mobile entities using trajectory regression. Further, the motion planner may then perform the updating of the trajectory based on the repulsive potentials and the predicted trajectories for each of the tracked mobile entities (e.g., humans or other sentient beings) in the workspace (or in a radius around the present location of the robot)). The justification and reasoning for combining these disclosures is the same reasoning and justification given in regard to claim 3.
Regarding claim 17, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. Furthermore, O’Sullivan discloses, the user being human, (O'Sullivan, Summary, Paragraph 6, Lines 4-9, it may generate a predicted trajectory in the workspace for each of the mobile entities using trajectory regression. Further, the motion planner may then perform the updating of the trajectory based on the repulsive potentials and the predicted trajectories for each of the tracked mobile entities (e.g., humans or other sentient beings) in the workspace (or in a radius around the present location of the robot)). The justification and reasoning for combining these disclosures is the same reasoning and justification given in regard to claim 3.
Regarding claim 18 Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. Furthermore, O’Sullivan discloses the user is a non-human outside the control of the work tool system (O'Sullivan, Summary, Paragraph 6, Lines 4-9, it may generate a predicted trajectory in the workspace for each of the mobile entities using trajectory regression. Further, the motion planner may then perform the updating of the trajectory based on the repulsive potentials and the predicted trajectories for each of the tracked mobile entities (e.g., humans or other sentient beings) in the workspace (or in a radius around the present location of the robot)). The justification and reasoning for combining these disclosures is the same reasoning and justification given in regard to claim 3.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Jones et al. (US10878294B2), in view of Mateus et al. (Efficient and robust Pedestrian Detection using Deep Learning for Human-Aware Navigation), further in view of Zhong et al. (US20190251366A1).
Regarding claim 6, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. However, Zhong who is in the same field of endeavor of trajectory prediction in humans discloses, the server being further configured to receive movement indications for a plurality of users, determine at least one common movement pattern based on the received movement indications for the plurality of users and to match the received movement indications for the user to the at least one common movement pattern (Zhong, 0014, applying at least one machine learning or artificial intelligence technique to automatically learn an informative representation of location trajectory data for each object; identifying and analyzing individual and group activities in the scene based on the trajectory data).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the combination of Jones and Mateus with Zhong. This would serve to enhance the system by reducing the computational load, clustering groups of subjects’ trajectories to predict a shared movement path, as opposed to making individual predictions for every subject in an environment.
Further justification for combining the combination of Jones and Mateus with Zhong comes not only from the state of the art but also from Zhong (Zhong, 0095, although the above principles have been described with reference to certain specific examples, various modifications thereof will be apparent to those skilled in the art as outlined in the appended claims).
Claims 10, 12, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Jones et al. (US10878294B2), in view of Mateus et al. (Efficient and robust Pedestrian Detection using Deep Learning for Human-Aware Navigation), further in view of Albinger et al. (WO2016108104A1).
Regarding claim 10, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. However, Albinger who is in the same field of endeavor of outdoor work tool systems discloses, the work tool being a watering tool (Albinger, 0049, Lines 1-10, once the system 10 has scheduled the event on the user's calendar, block 44, it checks, block 46, to see if other equipment is installed on the property, such as a sprinkler system, that may need controlled. The system 10 then interfaces with the equipment and sets parameters, block 47, for the equipment to make sure that the conditions for performing the task are acceptable. For example, in the case of mowing a lawn, if the property has a sprinkler system, the system 10 will interface with the sprinkler system and temporarily adjust the sprinkler system's schedule to prevent the sprinkler system from watering the lawn prior to (for example 24 hours before) or during the scheduled time that the lawn is to be mowed).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the combination of Jones and Mateus with Albinger. This would serve to enhance the system by broadening the range of work tools associated with the system. Instead of being defined only by a robotic lawnmower, the system can also be connected with a plurality of work tools, enabling a user to control many aspects of tool work.
Further justification for combining the combination of Jones and Mateus with Albinger comes not only from the state of the art but also from Albinger (Albinger, 0029, Lines 12-16, the system 10 is configured to interact with other control systems such as sprinkler systems 25 to provide a control mechanism to further reinforce the preferred time for use of the power machine 17. For example, the system 10 may intermittently change the program used for a sprinkler system such that the sprinklers do not turn on and wet the grass during a mowing day).
Regarding claim 12, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. Furthermore, Albinger discloses, the work tool being a fan tool (Albinger, 0002, nonlimiting examples of outdoor power machines include lawn mowers, snow blowers, chain saws, blowers). The justification and reasoning for combining these disclosures is the same reasoning and justification given in regard to claim 10.
Regarding claim 16, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. Additionally, Albinger discloses, the robotic work tool being a robotic lawnmower (Albinger, 0002, nonlimiting examples of outdoor power machines include lawn mowers). The justification for combining these disclosures is the same given in claim 10.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Jones et al. (US10878294B2), in view of Mateus et al. (Efficient and robust Pedestrian Detection using Deep Learning for Human-Aware Navigation), further in view of Uehigashi (US20050188493A1).
Regarding claim 15, Jones and Mateus disclose, the work tool system according to claim 1 as discussed supra. Additionally, Uehigashi who is in the same field of endeavor of robotic work machines discloses, the robotic work tool being configured to control the robotic work tool so that the Do Not Disturb area is not violated, by controlling the robotic work tool so that the Do Not Disturb area is entered in a stealth mode (Uehigashi, Abstract, so long as a user merely places the transmitter 11 nearby, noise generated from the main body can be reduced when the self-propelling cleaner 1 approaches the user, so that the user's action taken at this time can not be interfered by noise generated from the self-propelling cleaner 1).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the combination of Jones and Mateus with Uehigashi. This would serve to enhance the system by not disturbing a nearby user with excessive noise, switching to a silent mode when in close proximity to said user, thereby enhancing the user’s experience.
Justification for combining the combination of Jones and Mateus with Uehigashi comes not only from the state of the art but also from Uehigashi (Uehigashi, 0040, Lines 1-3, although the present invention has been shown and described with reference to a specific preferred embodiment, various changes and modifications will be apparent to those skilled in the art from the teachings herein).
Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over O'Sullivan et al. (US9776323B2), in view of Mateus et al. (Efficient and robust Pedestrian Detection using Deep Learning for Human-Aware Navigation), further in view of Uehigashi (US20050188493A1).
Regarding claim 19, O'Sullivan discloses, a method for use in a work tool system comprising a work tool and a server, wherein the method comprises in the server: receiving movement indications for a user (O'Sullivan, Detailed Description, Paragraph 9, Lines 6-11, briefly, entities 110 such as humans sharing the robot workspace 104 can be identified, their present location and orientation determined, and their trajectory tracked by the equipment 120. This data can be stored in memory 150 of the robot 130 as shown at 152, with each record 152 typically including an entity ID along with the entity's tracked trajectory 154 (which may include the entity's current location, speed of travel, and orientation relative to the robot 130 in the space 104)); determining a movement pattern based on the movement indications (O'Sullivan, Detailed Description, Paragraph 26, Lines 7-12, the Gaussian progress regression of module 520 is used to compute the probabilistic distribution of each individual human trajectory p(Qi). This regression is solely computed from the past trajectory and does not directly integrate the human intentions for the robot interactions. Therefore, pHI(QALL), which is a joint probabilistic distribution of all trajectories, is computed using the classification results of human trajectories);
determining a Do Not Disturb area suitable for the movement pattern; and transmitting information on the Do Not Disturb area to the work tool; and wherein the method comprises in the work tool: receiving information on the Do Not Disturb area; and controlling the work tool so that the Do Not Disturb area is not violated (O'Sullivan, Detailed Description, Paragraph 11, Lines 10-20, an entity 110 may be within a predefined radius about the robot 130, and the robot controller 140 may signal 180 the controller 190, and it may process this information and send control signals 192 or a human operator may use the controller 190 to generate the signals 192 to modify operation of the drive system 138 (e.g., to stop the robot 130 at its current location in the workspace 104 or to modify the trajectory 176 to move away from the obstacle or entity 110). Additionally, as discussed below, the warning signals 180 may be generated by the robot controller 140 to indicate that the robot 130 is approaching or is nearby to an entity 110 that has been determined to likely behave in a manner that will block (including interacting with) the robot 130. The control signals 192 can then be generated to avoid this entity). However, O'Sullivan does not explicitly disclose, not violating the Do Not Disturb area is not entering the Do Not Disturb area and operating in a stealth mode responsive to the work tool being within a predetermined distance from the Do Not Disturb area.
Nevertheless, Uehigashi, who is in the same field of endeavor of robotic work machines, discloses, not violating the Do Not Disturb area is not entering the Do Not Disturb area and operating in a stealth mode responsive to the work tool being within a predetermined distance from the Do Not Disturb area (0026, when it is determined at step s4 that the receiver 6 receives radio wave of the predetermined frequency, the self-propelling cleaner 1 switches the operation mode to the silent mode) … (0038, since the intensity of radio wave transmitted from the transmitter can be adjusted, the size of an area where the main body can receive radio wave transmitted from the transmitter, that is, the size of an area where the main body is operated in the silent mode irrespective of the current mode being selected can be adjusted easily).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine elements of O'Sullivan's and Uehigashi's disclosures. This would serve to enhance the system by switching to a silent mode when in close proximity to a nearby user, thereby not disturbing that user with excessive noise and enhancing the user's experience.
Justification for combining O'Sullivan's and Uehigashi's disclosures comes not only from the state of the art but also from Uehigashi itself (Uehigashi, 0040, Lines 1-3, although the present invention has been shown and described with reference to a specific preferred embodiment, various changes and modifications will be apparent to those skilled in the art from the teachings herein).
However, even the combination of O'Sullivan and Uehigashi still does not explicitly disclose, a camera arranged on the work tool that tracks the movement indications of the user in real-time and wherein the Do Not Disturb area is continually adjusted based on the movement indications of the user tracked in real-time.
Nevertheless, Mateus, who is in the same field of endeavor of work robot human-aware navigation, discloses, a camera arranged on the work tool (1. Introduction, we address the robot navigation in the presence of humans, resorting to multi cameras (static outside and/or onboard cameras) for the vision-based person tracking system), that tracks the movement indications of the user in real-time (1.1.2. Human-Aware Navigation, constraints concerning social rules (e.g., navigate on the right side of narrow passages) and low-level human navigation behavior (e.g., face direction of movement) are also taken into account), and wherein the Do Not Disturb area is continually adjusted based on the movement indications of the user (1.1.2. Human-Aware Navigation, prediction cost function which, by increasing the cost in front of a moving human, decreases the probability of the robot entering that area), tracked in real-time (4.2. Performance evaluation on real scenarios, it is possible to obtain approximately 10 FPS for each of the datasets (as shown in Table 2, field “Threshold”), which is suited for real-time applications).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the combination of O'Sullivan and Uehigashi with Mateus. This would serve to enhance the system by providing additional cameras that collect more information about the environment, increasing the precision of the estimation of the user's position. This, in turn, allows robot navigation to be seamless, minimizing the risk of collision and increasing user satisfaction.
Justification for combining the combination of O'Sullivan and Uehigashi with Mateus comes not only from the state of the art but also from Mateus itself (1. Introduction, it is useful to have other external sensors, which can add more information about the environment, not only in terms of coverage space, but also in terms of precision on the estimation of the person’s position).
Regarding claim 20, O'Sullivan, Uehigashi, and Mateus disclose, a computer-readable medium comprising computer-readable instructions according to claim 19 as discussed supra. Additionally, O’Sullivan discloses, when loaded into and executed by a controller enables the controller to execute the method according to claim 19 (O'Sullivan, Detailed Description, Paragraph 11, Lines 1-5, the I/O devices 134 typically include wireless transceivers for communicating with the entity tracking sensors 120 to receive the entity data (or tracking data) 124 and store this information in the memory 150 as shown at 152 along with the tracked trajectory).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANE E DOUGLAS whose telephone number is (703)756-1417. The examiner can normally be reached Monday - Friday 7:30AM - 5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace can be reached on (571) 272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.E.D./Examiner, Art Unit 3665
/CHRISTIAN CHACE/Supervisory Patent Examiner, Art Unit 3665