DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office Action is in response to the application filed on 02/06/2024 and the amendment filed on 02/07/2024.
Claims 4, 5, 7, 8, 10, 12, 14, 15, 18, 25, 26 and 28 have been amended. Further, claim 6 is incorporated into claim 5, claim 13 is incorporated into claim 12, and claims 23-24 are incorporated into claim 15. Claims 6, 9, 13, 21-24, 27 and 29 have been canceled.
Claims 1-5, 7-8, 10-12, 14-20, 25-26 and 28 are presently pending and are examined in this first action on the merits (FAOM).
Priority
The Examiner acknowledges Applicant's claim to priority based on PCT Application No. PCT/CN2022/109522, filed 08/01/2022, with a foreign priority date of 08/17/2021.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 02/06/2024 and 02/12/2025 have been considered by the Examiner.
Specification Objections
The specification at [0207] recites "The laser sensor may be, for example, the DTOF sensor, or a Laser Direct Structuring (LDS) sensor, etc." Laser Direct Structuring is a manufacturing process, not a laser-ranging process. Appropriate correction is required.
Claim Objections
Claim 7 is objected to because of the following informality: the claim recites "determining a distance between the target objects in the adjacent two DTOF scatter plots," yet the claim otherwise refers to the robot following a single target object. For purposes of examination, the Examiner is treating the limitation as "determining a distance between the target object in the adjacent two DTOF scatter plots." Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Hyun Kim et al. US20130218395 ("Kim") in view of Ali Ebrahimi Afrouzi et al. US20210089040 ("Afrouzi'040").
As per Claim 1,
Kim discloses,
An autonomous mobile robot control method, comprising:
determining a sound source direction according to a voice signal from a user; (see at least [0012] an analyzer configured to recognize a direction of the call signal based on the input call signal determining moving objects around the autonomous mobile robot, [0035] call signal may include a human voice signal, and [0036] the call direction may use a method for localizing a sound source using 4 channel microphone.)
determining, from the moving objects, a target object located in the sound source direction; (see at least [0012] receive and analyze video information regarding the direction; an estimator configured to estimate a position of a signal source of the call signal using the sound source localization and the detected person's shape, and [0037] when a person calls the autonomous mobile apparatus using the human voice, the autonomous mobile apparatus detects the sound location, recognizes the human voice, moves around a person, queries the signal source (a caller) using the human language for recognizing a call, and analyzes human response to the query, thereby accurately recognizing the caller)
determining a working area according to the target object (see at least [0039] The position of the signal source may be generally estimated based on a direction and a distance. The direction of the signal source may be estimated by the receiving direction of the call signal, and the position of the signal source may be estimated by calculating the distance using the call sound of the call signal and/or the video information, and [0039] the moving path may also be set by detecting obstacles between the autonomous mobile apparatus and the signal source by using the received video signal)
moving to the working area, (see at least [0012] navigation controller configured to generate the moving command to the estimated position of the signal source to generate the navigation module, and wherein the analyzer recognizes the subject of the signal source by using the camera sensor module after movement according to the moving command)
Kim does not disclose,
executing a task within the working area
Afrouzi’040 teaches,
executing a task within the working area (see at least [0599] the processor may detect a person identified as its owner and in response may execute the commands provided by the person)
Thus, Kim discloses a mobile apparatus that recognizes direction of a call signal and generates a moving command to move to the position of the work area (signal source) and Afrouzi’040 teaches an autonomous robot that executes command provided by a person.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the invention as disclosed by Kim with the use of facial or voice recognition to identify and execute the command provided by the owner as taught by Afrouzi'040, with a reasonable expectation of success, to operate autonomously or with minimal (or less than fully manual) input and/or external control within an environment ([0004]).
As per Claim 4,
Kim discloses,
The method according to claim 1, wherein the determining, from the moving objects, a target object located in the sound source direction comprises determining, from the moving objects, a moving object that makes a motion on a foot and is located in the sound source direction to obtain the target object (see at least [0011] recognizing a direction of a call signal based on the call signal; receiving and analyzing video information regarding the direction; estimating a position of a signal source of the call signal using the sound source localization and the detected person's shape; generating a moving command to the position of the signal source; and recognizing the subject of the signal source after movement according to the moving command, and [0047] analyzer 221 that recognizes the direction of the call signal based on the input call signal and receives and analyzes the video information regarding the direction, the estimator 223 that estimates the position of the signal source of the call signal)
As per Claim 28,
Kim discloses,
An autonomous mobile robot comprising a processor, a memory, and a computer program stored on the memory and capable of running on the processor, wherein when the processor executes the computer program, the autonomous mobile robot is caused to perform the method as claimed in claim 1 (see at least Fig. 2, Fig. 3, Fig. 4, and [0026] terms presented as processor, control, or a concept similar thereto are not construed as exclusively including hardware having ability executing software and are to be construed as implicitly including digital signal processor (DSP) hardware and ROM, RAM, and non-volatile memory for storing software. Widely known other hardware may also be included.)
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Kim and Afrouzi'040 as applied to Claim 1 above, and further in view of Anurag Jakhotia et al. US20220095871A1 ("Jakhotia").
As per Claim 2,
Kim does not disclose,
The method according to claim 1, wherein the determining moving objects around the autonomous mobile robot comprises:
obtaining a plurality of simultaneous localization and mapping (SLAM) maps and a plurality of direct time-of-flight (DTOF) scatter plots, SLAM maps among the plurality of SLAM maps corresponding one-to-one with DTOF scatter plots among the plurality of DTOF scatter plots;
for each of the plurality of DTOF scatter plots, filtering out, from the DTOF scatter plot according to the corresponding SLAM map, pixels representing static objects to obtain a set of dynamic points; and
determining the moving objects around the autonomous mobile robot according to the set of dynamic points of adjacent two of the plurality of DTOF scatter plots.
Afrouzi’040 teaches,
The method according to claim 1, wherein the determining moving objects around the autonomous mobile robot comprises (see at least [0256] FIG. 2A illustrates an example of a robot including sensor windows 100 behind which sensors are positioned, sensors 101 (e.g., camera, laser emitter, TOF sensor, IR sensors, range finders, LIDAR, depth cameras, etc.))
obtaining a plurality of simultaneous localization and mapping (SLAM) maps and a plurality of direct time-of-flight (DTOF) scatter plots, SLAM maps among the plurality of SLAM maps corresponding one-to-one with DTOF scatter plots among the plurality of DTOF scatter plots (see at least [0286] The pose and maps portion 1103 may include a coverage tracker 1104, a pose estimator 1105, SLAM 1106, and a SLAM updater 1107. The pose estimator 1105 may include an Extended Kalman Filter (EKF) that uses odometry, IMU, and LIDAR data. SLAM 1106 may build a map based on scan matching. The pose estimator 1105 and SLAM 1106 may pass information to one another in a feedback loop, [0340] The TOF local map is overlaid on the global map illustrated in FIGS. 49A-49C. The TOF sensors may be used to determine short range distances to obstacles, and [0871] examples of SLAM and AR integration. FIG. 268A illustrates an autonomous vehicle 11100 with a scanning device (e.g., 360 degrees LIDAR) 11101 scanning the environment. Each time the scanning device 11101 scans the same area accuracy of that area within the map increases. Overlapping scans may be collected during a same or separate work session and are not required to be collected continuously)
for each of the plurality of DTOF scatter plots, filtering out, from the DTOF scatter plot according to the corresponding SLAM map, pixels representing static objects to obtain a set of dynamic points (see at least [0327] data points corresponding to a moving object captured in one or two frames overlapping with several other frames captured without the moving object may be assigned a low weight as they likely do not fall within the adjustment range and are not consistent with data points collected in other overlapping frames and would likely be rejected for having low assigned weight)
Thus, Kim discloses a mobile apparatus that recognizes direction of a call signal and generates a moving command to move to the position of the work area (signal source) and Afrouzi’040 teaches methods such as mapping, localization, object recognition, and path planning using SLAM and TOF.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the invention as disclosed by Kim with mapping, localization and path planning as taught by Afrouzi'040, with a reasonable expectation of success, to operate autonomously or with minimal (or less than fully manual) input and/or external control within an environment ([0004]).
Jakhotia teaches,
for each of the plurality of DTOF scatter plots, filtering out, from the DTOF scatter plot according to the corresponding SLAM map, pixels representing static objects to obtain a set of dynamic points (see at least [0297] With reference to FIG. 5A, an initial map 400A (representing map n) of an operating environment 401, in which the operating environment 401 has been divided into an array of cells 402, [0303] At 512, the probabilities of the cells can be updated to reflect detected changes within the operating environment. For example, the probability for cells occupying a detected object which has not moved can be increased to a value (e.g., 0.8) representing an increased likelihood that the object occupying that cell is a static feature. Cells having an assigned value above a determined threshold (e.g., 0.8, 0.9, etc.) can be determined to be occupied by a static feature, while all other objects detected by the unit 300 in subsequent operations within the operating environment can be deemed dynamic features, and [0307] The camera may provide 2D or 3D data. 3D data may also be determined from a LiDAR or the like providing information relating to the structure or shape of the object, from which the classification may also be performed)
determining the moving objects around the autonomous mobile robot according to the set of dynamic points of adjacent two of the plurality of DTOF scatter plots (see at least [0237] "dynamic feature" (e.g., objects, features and/or structures that are likely to move within the operating environment, particularly over the span of two or more navigations within the operating environment), [0303] On subsequent runs, at 510, the unit 300 can compare a current map (e.g., map 400B) with a previous map (e.g., map 400A) to determine if any of the detected objects have moved, and [0368] determining a position of the robot unit vis-à-vis the recognized static feature(s) based on the weight allocated to the pertaining recognized feature(s))
Thus, Kim discloses a mobile apparatus that recognizes direction of a call signal and generates a moving command to move to the position of the work area (signal source) and Jakhotia teaches determining location of objects and classifying them as static and dynamic objects for operating a navigation system for navigating a robot unit in a scene or venue, the scene/venue comprising a number of static features and a number of dynamic features.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the invention as disclosed by Kim with detection of moving objects as taught by Jakhotia, with a reasonable expectation of success, to produce a map of the operating environment, including relative positions of any detected static features and/or dynamic features within the operating environment, as well as the position of the robotic platform within the operating environment ([0246]).
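For clarity of the record only, the filtering step recited in claim 2 can be sketched in Python. This sketch is illustrative, is not drawn from any cited reference, and all names, data structures, and values in it are hypothetical.

```python
# Illustrative only: filtering static pixels out of a DTOF scatter plot
# using the corresponding SLAM map, per the language of claim 2.
# All names and data here are hypothetical.

def filter_dynamic_points(scatter_points, slam_static_cells):
    """Return the set of dynamic points: scatter points whose grid cell
    is not marked as static structure in the SLAM map."""
    return {p for p in scatter_points if p not in slam_static_cells}

# SLAM map cells known to contain static structure (e.g., walls).
slam_static_cells = {(0, 0), (0, 1), (5, 5)}
# DTOF scatter plot returns, expressed as grid cells.
scatter = {(0, 0), (0, 1), (3, 3), (3, 4)}

dynamic = filter_dynamic_points(scatter, slam_static_cells)
```

On this toy data, only the scatter points that do not coincide with static SLAM cells survive as the set of dynamic points.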
As per Claim 3,
Kim does not disclose,
The method according to claim 2, wherein the determining the moving objects around the autonomous mobile robot according to the set of dynamic points of adjacent two of the plurality of DTOF scatter plots comprises:
determining a first subset from a first set of dynamic points of a first DTOF scatter plot;
determining whether there is a second subset in a second set of dynamic points of a second DTOF scatter plot, wherein a distance between a first location indicated by the first subset and a second location indicated by the second subset is greater than a preset distance, and a differential between number of the pixels in the first subset and number of the pixels in the second subset is less than a preset differential,
the first DTOF scatter plot and the second DTOF scatter plot are any two adjacent DTOF scatter plots among the plurality of DTOF scatter plots; and when the second subset exists in the second set of dynamic points, determining that the first subset and the second subset represent a same object and the object is a moving object.
Afrouzi’040 teaches,
The method according to claim 2, wherein the determining the moving objects around the autonomous mobile robot according to the set of dynamic points of adjacent two of the plurality of DTOF scatter plots comprises (see at least [0340] FIG. 51 illustrates an example of a local TOF map 4800 that is generated in simulation using data collected by TOF sensors located on robot 4801)
Jakhotia further teaches,
determining a first subset from a first set of dynamic points of a first DTOF scatter plot (see at least Fig. 6, [0302] On an initial run, at 508, each of the cells in which a detected object was associated can be assigned an initial probability (e.g., 0.5) indicating a level of uncertainty as to whether the detected object represents a static object or a dynamic object)
determining whether there is a second subset in a second set of dynamic points of a second DTOF scatter plot, wherein a distance between a first location indicated by the first subset and a second location indicated by the second subset is greater than a preset distance, and a differential between number of the pixels in the first subset and number of the pixels in the second subset is less than a preset differential (see at least [0302] Cells where no object has been detected can be assigned a low probability (e.g., 0.01), indicating that later detected objects occupying that cell should initially be presumed to be a dynamic object)
the first DTOF scatter plot and the second DTOF scatter plot are any two adjacent DTOF scatter plots among the plurality of DTOF scatter plots; and when the second subset exists in the second set of dynamic points, determining that the first subset and the second subset represent a same object and the object is a moving object (see at least [0251] the individual cells designated as being occupied by a detected object are assigned a first initial probability, and individual cells designated as being unoccupied are assigned a second initial probability, wherein the second initial probability is lower than the first initial probability, thereby indicating that an object later detected in the operating environment corresponding to the individual cells designated as being unoccupied is more likely to be a dynamic feature and/or an estimated distance to a detected feature, and [0303] On subsequent runs, at 510, the unit 300 can compare a current map (e.g., map 400B) with a previous map (e.g., map 400A) to determine if any of the detected objects have moved)
Thus, Kim discloses a mobile apparatus that recognizes direction of a call signal and generates a moving command to move to the position of the work area (signal source) and Jakhotia teaches determining location of objects and classifying them as static and dynamic objects for operating a navigation system for navigating a robot unit in a scene or venue, the scene/venue comprising a number of static features and a number of dynamic features.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the invention as disclosed by Kim with detection of moving objects as taught by Jakhotia, with a reasonable expectation of success, to produce a map of the operating environment, including relative positions of any detected static features and/or dynamic features within the operating environment, as well as the position of the robotic platform within the operating environment ([0246]).
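For clarity of the record only, the matching test of claim 3 (centroid displacement greater than a preset distance, pixel-count difference less than a preset differential) can be sketched as follows. The sketch is illustrative and not taken from any cited reference; the names and thresholds are hypothetical.

```python
# Illustrative only: deciding that two dynamic-point subsets from adjacent
# DTOF scatter plots represent the same moving object, per claim 3.
# All names and thresholds here are hypothetical.
import math

def centroid(points):
    """Mean location of a subset of pixels."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def is_same_moving_object(first, second, preset_distance, preset_differential):
    """True when the subsets moved more than preset_distance apart while
    keeping a similar pixel count (difference below preset_differential)."""
    dist = math.dist(centroid(first), centroid(second))
    return dist > preset_distance and abs(len(first) - len(second)) < preset_differential

first = {(1, 1), (1, 2), (2, 1)}   # subset in the first scatter plot
second = {(6, 1), (6, 2), (7, 1)}  # similar-sized subset, displaced by 5 cells
moving = is_same_moving_object(first, second, preset_distance=2.0, preset_differential=2)
```

With the toy values above, the displacement exceeds the preset distance while the pixel counts match, so the subsets are treated as the same moving object.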
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Kim and Afrouzi'040 as applied to Claim 1 above, and further in view of Moejammad Dzaky Fauzan Maas et al. US20210191405 ("Maas").
As per Claim 5,
Kim discloses,
The method according to claim 1, wherein the determining a working area according to the target object comprises: moving to a location having the preset distance from the target object (see at least Fig. 5, [0030] estimating a position of the subject of the call and moving to the corresponding position, and [0075] When the position of the user is estimated, the robot moves to the estimated position (S506)).
Kim does not disclose,
determining the working area according to an initial location of the target object when the target object does not undergo displacement;
after moving to the location having the preset distance from the target object, controlling the autonomous mobile robot to follow the target object to move when the target object undergoes displacement; and
when the target object stops moving, determining the working area according to a location where the target object stops moving.
Maas teaches,
determining the working area according to an initial location of the target object when the target object does not undergo displacement (see at least [0087] the autonomous robots may be trained and programmed to perform certain tasks; a user can mention the room name, and the robot may translate the voice into text and find a matching room name within the building; the robot then may start to give guidance to the room/place the user wanted)
after moving to the location having the preset distance from the target object, controlling the autonomous mobile robot to follow the target object to move when the target object undergoes displacement (see at least Fig. 2, [0066] advanced intelligent remote sensing system may continuously process and learn information fed into robots to provide robots an autonomous navigation for maintaining the position of the robot with the selected target object, [0069] maintaining control a position of the robot with the target object through various action recommendations to perform certain autonomous tasks such as tracking, following, and/or guiding the target object, [0084] The autonomous robot may detect a moving object and may be able to follow the moving object, [0097] The robot may start detecting the moving object and maintain the distance with the moving object, and [0092] Moving Object Tracking (MOT) which includes Moving Object Detection (MOD) and Moving Object Prediction (MOP), and Maintain Moving Object to provide recommendation path for the autonomous robot to maintain the target object of the autonomous robot)
when the target object stops moving, determining the working area according to a location where the target object stops moving (see at least [0074] a user may set a main task for the autonomous robot via an available user interface (following, guiding, etc.), and [0083] the presence of an autonomous robot that has the capabilities to nurse, taking care of an elderly person, act as a butler, or simply just a companion to the family member will be essential)
Thus, Kim discloses a mobile apparatus that recognizes direction of a call signal and generates a moving command to move to the position of the work area (signal source) and Maas teaches localizing a position of the device on the map based on the sensor data, determining a position of a moving object on the map based on the sensor data and changing position of the device based on changed position of a moving object.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the invention as disclosed by Kim with detecting and following moving objects as taught by Maas, with a reasonable expectation of success, to predict the trajectory of a movement of the target object while generating a trajectory path and maintaining control of a position of the mobile robot with the target object ([0064]).
Claims 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Kim, Afrouzi'040 and Maas as applied to Claim 5 above, and further in view of Ali Ebrahimi Afrouzi et al. US12099357 ("Afrouzi'357").
As per Claim 7,
Kim does not disclose,
The method according to claim 5, wherein the controlling the autonomous mobile robot to follow the target object to move comprises: determining whether the target object appears in both adjacent two DTOF scatter plots
when the target object appears in both the adjacent two DTOF scatter plots, determining a distance between the target objects in the adjacent two DTOF scatter plots; and
adjusting a speed according to the distance to follow the target object to move.
Maas teaches,
adjusting a speed according to the distance to follow the target object to move (see at least [0061] build a robot predictive controller to enable tracking a moving object, including maintaining the distance with the target object in a dynamic environment, [0066] The advanced intelligent remote sensing system may continuously process and learn information fed into robots to provide robots an autonomous navigation for maintaining the position of the robot with the selected target object, [0128] In step 2208, the robot may build a trajectory path to move and control a speed of the robot based on the ability of movement, the speed of the robot, and the distance between the current position of the robot and the next position of the robot. The Maintain Moving Object module will enable the robot to execute the movement recommendation, and [0129] the robot may set its next position to avoid losing the target object based on internal properties of the robot (position, initial and maximum speed, etc.), properties of the target object (position, trajectory, etc.), properties of other objects (position, trajectory, etc.), blind spot area, and other environment condition)
Thus, Kim discloses a mobile apparatus that recognizes direction of a call signal and generates a moving command to move to the position of the work area (signal source) and Maas teaches localizing a position of the device on the map based on the sensor data, determining a position of a moving object on the map based on the sensor data and changing position of the device based on changed position of a moving object.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the invention as disclosed by Kim with detecting and following moving objects as taught by Maas, with a reasonable expectation of success, to predict the trajectory of a movement of the target object while generating a trajectory path and maintaining control of a position of the mobile robot with the target object ([0064]).
Afrouzi’040 teaches,
The method according to claim 5, wherein the controlling the autonomous mobile robot to follow the target object to move comprises: determining whether the target object appears in both adjacent two DTOF scatter plots ([0340] FIG. 51 illustrates an example of a local TOF map 4800 that is generated in simulation using data collected by TOF sensors located on robot 4801, and [0340] a white line between the center of robot 4801 and the center of the obstacle that triggered the TOF is inferred free space. The white line is also the estimated TOF sensor distance from the center of robot 4801 to the obstacle. White areas 4803 come and go as obstacles move in and out of the fields of view of TOF sensors)
Thus, Kim discloses a mobile apparatus that recognizes direction of a call signal and generates a moving command to move to the position of the work area (signal source) and Afrouzi’040 teaches methods such as mapping, localization, object recognition, and path planning using DTOF method.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the invention as disclosed by Kim with mapping, localization and path planning using DTOF as taught by Afrouzi'040, with a reasonable expectation of success, to operate autonomously or with minimal (or less than fully manual) input and/or external control within an environment ([0004]).
Afrouzi’357 teaches,
when the target object appears in both the adjacent two DTOF scatter plots, determining a distance between the target objects in the adjacent two DTOF scatter plots (see at least [31] the image captured is a depth image, the depth image being any image containing data which may be related to the distance from the camera to objects captured in the image (e.g., pixel brightness, intensity, and color, time for light to reflect and return back to sensor, depth vector, etc.), and [36] the same objects in the captured images may be identified based on distances to objects in the captured images and the movement of the robot in between captured images and/or the position and orientation of the robot at the time the images were captured)
Thus, Kim discloses a mobile apparatus that recognizes direction of a call signal and generates a moving command to move to the position of the work area (signal source) and Afrouzi’357 teaches identifying, with the processor of the robot, elements in a captured image that match elements in at least one previously captured image of the user.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the invention as disclosed by Kim with detecting and matching target objects as taught by Afrouzi'357, with a reasonable expectation of success, to predict the trajectory of a movement of the target object while generating a trajectory path and maintaining control of a position of the mobile robot with the target object ([0064]).
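For clarity of the record only, the speed-adjustment limitation of claim 7 can be sketched as a simple proportional rule. This sketch is illustrative; the controller form and all constants below are hypothetical and are not taken from Maas or any other cited reference.

```python
# Illustrative only: adjusting follow speed according to the measured
# distance to the target object, per claim 7. The proportional rule,
# gap, gain, and speed cap are hypothetical.

def follow_speed(distance, desired_gap=1.0, gain=0.5, max_speed=1.2):
    """Speed up when the target is farther than the desired gap,
    slow down (or stop) when it is closer, capped at max_speed."""
    speed = gain * (distance - desired_gap)
    return max(0.0, min(speed, max_speed))

slow = follow_speed(1.5)  # slightly beyond the desired gap: slow follow
fast = follow_speed(5.0)  # far ahead: capped at max_speed
stop = follow_speed(0.8)  # closer than the desired gap: stop
```

Under this toy rule, the robot's speed grows with the measured distance until the cap is reached and falls to zero when the target is within the desired gap.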
As per Claim 8,
Kim discloses, in part,
when the skeleton diagram is incomplete, avoiding or surmounting an obstacle between the autonomous mobile device and the target object until the autonomous mobile robot maintains the follow state to follow the target object to move after the AI camera captures the complete skeleton diagram ([0039] the moving path may also be set by detecting obstacles between the autonomous mobile apparatus and the signal source by using the received video signal, [0069] The perception system recognizes whether there is a person therearound (person shape detection), who the person is (user recognition), and the like, from the image information transferred from the camera sensor module. In addition, the perception system recognizes whether obstacles are present in front thereof by the distance information obtained from the ultrasonic sensor)
Kim does not disclose,
The method according to claim 5, wherein the controlling the autonomous mobile robot to follow the target object to move comprises:
capturing a skeleton diagram of the target object by means of an artificial intelligence (AI) camera
when the skeleton diagram is complete, maintaining a follow state to follow the target object to move; and
when the skeleton diagram is incomplete, avoiding or surmounting an obstacle between the autonomous mobile device and the target object until the autonomous mobile robot maintains the follow state to follow the target object to move after the AI camera captures the complete skeleton diagram
Afrouzi’357 teaches,
The method according to claim 5, wherein the controlling the autonomous mobile robot to follow the target object to move comprises:
capturing a skeleton diagram of the target object by means of an artificial intelligence (AI) camera (see at least [58] The processor may identify the same object 701 within the image based on identifying similar features as those identified in the image of FIG. 7B. FIG. 7D illustrates the movement 702 of the object 701. The processor may determine that the object 701 is a person based on trajectory and/or the speed of movement of the object 701 (e.g., by determining total movement of the object between the images captured in FIGS. 7B and 7C and the time between when the images in FIGS. 7B and 7C were taken), [62] the processor executes facial recognition based on unique depth patterns of a face. For instance, a face of a person may have a unique depth pattern when observed. FIG. 11A illustrates a face of a person 1100. FIG. 11B illustrates unique features 1101 identified by the processor that may be used in identifying the person 1100, and [64] distance measurements and image data may be used to extract features used to identify different objects. For example, FIGS. 17A-17C illustrate a person 1700 moving within an environment 1701 and corresponding depth readings 1702 from perspective 1703 appearing as a line)
when the skeleton diagram is complete, maintaining a follow state to follow the target object to move; (see at least [61] the processor of the robot uses the facial recognition and post-facial recognition (e.g., actions taken after facial identification) methods described in U.S. patent application Ser. No. 16/920,328, the entire contents of which are hereby incorporated by reference. In some embodiments, the processor may use sensor data to identify people and/or pets based on features of the people and/or animals extracted from the sensor data (e.g., features of a person extracted from images of the person captured by a camera of the robot), and [275] the robot may follow a user around the environment when not executing an intended function (e.g., cleaning) such that the user may relay commands from any location within the environment).
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Afrouzi’357 teaches identifying, with the processor of the robot, elements in a captured image that match elements in at least one previously captured image of the user, as well as actions taken after facial identification.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with detecting and matching target objects as taught by Afrouzi’357 with a reasonable expectation of success, to predict the trajectory of a movement of the target object while generating a trajectory path and maintaining control of a position of the mobile robot with the target object (0064).
Maas teaches,
when the skeleton diagram is incomplete, avoiding or surmounting an obstacle between the autonomous mobile device and the target object until the autonomous mobile robot maintains the follow state to follow the target object to move after the AI camera captures the complete skeleton diagram (see at least Fig. 9, [0014] changing of the position of the device may include moving the device based on the second position of the moving object being blocked by the at least one obstacle, and [0130] FIG. 24 illustrates a movement of a robot when a target object is blocked by other objects while the other objects are moving. When the target object's trajectory prediction possibly enters a blind spot area, this method will give recommendation to the robot to move to a certain point so the robot may have a better perception for maintaining the target position).
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Maas teaches localizing a position of the device on the map based on the sensor data, determining a position of a moving object on the map based on the sensor data, and changing the position of the device based on the changed position of the moving object.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with detecting and following moving objects as taught by Maas with a reasonable expectation of success, to predict the trajectory of a movement of the target object while generating a trajectory path and maintaining control of a position of the mobile robot with the target object (0064).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kim and Afrouzi’040 as applied to Claim 1 above, and further in view of Maas and Afrouzi’357.
As per Claim 10,
Kim does not disclose,
The method according to any one of claim 1, further comprising:
determining whether the target object is missing;
when the target object is missing, determining a search range according to coordinates of a location where the target object is last seen; searching for the target object within the search range; and
getting into a summon waiting state when the target object is not searched out.
Maas teaches,
determining whether the target object is missing (see at least [0099] In step 912, the system may determine if there exists any blind spot. The blind spot may indicate areas that sensors of the robot may not reach because of obstacles. The obstacles may include either stationary objects or moving objects)
when the target object is missing, determining a search range according to coordinates of a location where the target object is last seen; searching for the target object within the search range; (see at least [0100] In step 914, when there exists a blind spot (‘Y’ in step 912), the system may generate a path of the robot, [0108] In the first step, an autonomous robot may scan the environment using a rangefinder sensor and build a map, and [0112] FIG. 15 discloses a Multiple Object Tracking (MOT) module of the disclosure, which includes two main components: Moving Object Detection component, hereinafter referred as MOD, and Moving Object Prediction component, hereinafter referred as MOP. Multiple Object Tracking, hereinafter referred as MOT, takes distance data from a rangefinder sensor to calculate object position and object momentum. The MOT uses the MOD module which uses Recurrent Neural Network (RNN) to determine an object position. The object position is used by the MOP to determine object momentum (speed and direction))
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Maas teaches localizing a position of the device on the map based on the sensor data, determining a position of a moving object on the map based on the sensor data, and changing the position of the device based on the changed position of the moving object.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with detecting and following moving objects as taught by Maas with a reasonable expectation of success, to predict the trajectory of a movement of the target object while generating a trajectory path and maintaining control of a position of the mobile robot with the target object (0064).
Afrouzi’357 teaches,
getting into a summon waiting state when the target object is not searched out (see at least [310] the application may receive an input enacting an instruction for the robot to start, stop, or pause a current task)
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Afrouzi’357 teaches identifying, with the processor of the robot, elements in a captured image that match elements in at least one previously captured image of the user, as well as actions taken after facial identification.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with detecting and matching target objects as taught by Afrouzi’357 with a reasonable expectation of success, to predict the trajectory of a movement of the target object while generating a trajectory path and maintaining control of a position of the mobile robot with the target object (0064).
Claims 11-12, 14-20 and 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Kim and Afrouzi’040 as applied to Claim 1 above, and further in view of Afrouzi’357 and Jonghoon Chae US 20200005795 (“Chae”).
As per Claim 11,
Kim discloses,
The method according to claim 1, further comprising:
collecting a first voice signal (see at least [0012] a sensor module configured to sense a call signal, and [0035] the call signal may include a human voice signal. However, the call signal is not limited thereto.)
waking up a voice control function of the autonomous mobile robot when the first voice signal matches a wake-up command of the autonomous mobile robot (see at least [0036] recognizing (S101) may include recognizing, by an analyzer 221, a voice signal. In the case of recognizing the voice signal, a call sound and a call direction can be recognized, [0037] The recognizing of the call sound (S109) may include generating, by the analyzer 221, a signal corresponding to the recognized voice signal)
Kim does not disclose,
waking up a voice control function of the autonomous mobile robot when the first voice signal matches a wake-up command of the autonomous mobile robot
collecting a second voice signal in a wake-up state of the voice control function;
determining at least two working areas according to the second voice signal; and
executing a task indicated by the second voice signal for the at least two working areas sequentially.
Chae teaches,
waking up a voice control function of the autonomous mobile robot when the first voice signal matches a wake-up command of the autonomous mobile robot (see at least [0008] recognizing a voice based on artificial intelligence that extract a template from a voice of a user to recognize the user even when a boundary between a wake-up voice and a command voice is unclear, and [0152] When the user utters the wake-up voice after the voice registration for each user is completed, the input unit 250 may authenticate the user based on voice information of the corresponding wake-up voice via a user distinguishing function of the processor 260a).
collecting a second voice signal in a wake-up state of the voice control function (see at least [0161] The processor 260a may generate and store parameter values for voiceprint analysis such as a frequency bandwidth, an amplitude spectrum, and the like of the voice signal of the user's wake-up voice and the unstructured natural language command voice signal, and [0161] when there is an input of a service request voice of the user, the processor 260a may compare voice parameter values in the service request voice with the pre-stored parameter values to perform an authentication procedure using the context independent speaker authentication method).
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Chae teaches a method for recognizing a voice based on artificial intelligence that extracts a template from a voice of a user to recognize the user even when the boundary between a wake-up voice and a command voice is unclear.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with recognition of wake-up voice and command voice as taught by Chae with a reasonable expectation of success, to wake up the electronic device based on the determination result (0010).
Afrouzi’357 teaches,
determining at least two working areas according to the second voice signal (see at least [93] setting an order of operating in different areas of the environment)
executing a task indicated by the second voice signal for the at least two working areas sequentially (see at least [93] controlling the robot includes setting a schedule for operating in different areas of the environment)
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Afrouzi’357 teaches identifying different work areas/zones for the robot to operate in.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with zone-based operation taught by Afrouzi’357 with a reasonable expectation of success, to execute at least one of: using the robot, controlling the robot, modifying settings of the robot, adding or deleting or modifying access of users to the robot, actuation of the robot, programming the robot, and assigning a task to the robot (93).
As per Claim 12,
Kim does not disclose,
The method according to claim 11, wherein the executing a task indicated by the second voice signal for the at least two working areas sequentially comprises:
determining a sequential order when each of the at least two working areas appears in the second voice signal; and
executing the task indicated by the second voice signal for the at least two working areas sequentially according to the sequential order;
or determining a distance between the autonomous mobile robot and each of the at least two working areas;
sorting the at least two working areas in order of distance from nearest to farthest to obtain a queue; and
executing the task indicated by the second voice signal for the at least two working areas sequentially according to the queue.
Afrouzi’357 teaches,
The method according to claim 11, wherein the executing a task indicated by the second voice signal for the at least two working areas sequentially comprises:
determining a sequential order when each of the at least two working areas appears in the second voice signal (see at least [93] setting an order of operating in different areas of the environment)
executing the task indicated by the second voice signal for the at least two working areas sequentially according to the sequential order (see at least [231] the processor of the robot is taught the same path or different paths multiple times in the same area. In some embodiments, the processor of the robot is taught one or more paths for one or more different areas (e.g., kitchen, bathroom, bedroom, etc.) and paths to navigate between one or more areas. Over time, as the processor learns more and more paths, the processor becomes more efficient at covering areas or navigating between two areas or locations, and [241] the processor executes the following iteration for each zone of a sequence of zones, beginning with the first zone to optimize division of zones: expansion of the zone if neighbor cells are empty, movement of the robot to a point in the zone closest to the current position of the robot, addition of a new zone coinciding with the movement path of the robot from its current position to a point in the zone closest to the robot if the length of travel from its current position is significant, execution of a movement path within the zone, and removal of any uncovered cells from the zone.)
or determining a distance between the autonomous mobile robot and each of the at least two working areas; sorting the at least two working areas in order of distance from nearest to farthest to obtain a queue (see at least [241] the processor determines a sequence of the zones among a plurality of candidate sequences based on an effect of the sequence on a cost of a cost function that is based on travel distance of the robot through the sequence. In some embodiments, the robot traverses the zones in the determined sequence. In some embodiments, the cost function is based on other variables, such as actual surface coverage, repeat coverage, and total coverage time, and [242] the processor determines optimal division of the environment by minimizing a cost function. In some embodiments, the cost function depends on distance travelled between zones, coverage, and coverage time. In some embodiments, the cost function is minimized by removing, adding, shrinking, expanding, moving and switching the order of coverage of the zones).
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Afrouzi’357 teaches identifying different work areas/zones for the robot to operate in.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with zone-based operation taught by Afrouzi’357 with a reasonable expectation of success, to optimize division of zones (241).
As per Claim 14,
Kim does not disclose,
The method according to claim 11, wherein the executing a task indicated by the second voice signal for the at least two working areas sequentially comprises:
after completing the task for one of the at least two working areas, stop executing the task before moving to a next working area
Afrouzi’357 teaches,
The method according to claim 11, wherein the executing a task indicated by the second voice signal for the at least two working areas sequentially comprises:
after completing the task for one of the at least two working areas, stop executing the task before moving to a next working area (see at least [310] the application may receive an input enacting an instruction for the robot to start, stop, or pause a current task; start mopping or vacuuming; dock at the charging station; start cleaning; spot clean; navigate to a particular location; drive along a particular path; and move or rotate in a particular direction, and [313] the application may receive an input enacting an instruction for the robot to pause a current task; start mopping or vacuuming; dock at the charging station; start cleaning; spot clean; navigate to a particular location; and move or rotate in a particular direction)
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Afrouzi’357 teaches identifying different work areas/zones for the robot to operate in.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with zone-based operation taught by Afrouzi’357 with a reasonable expectation of success, to optimize division of zones (241).
As per Claim 15,
Kim does not disclose,
The method according to claim 11, wherein the determining at least two working areas according to the second voice signal comprises:
determining an area classification according to the second voice signal
determining the at least two working areas from a set of areas corresponding to an environmental map according to the area classification
or when the second voice signal indicates a task forbidden area, determining the task forbidden area from the environmental map, and
determining the at least two working areas from an area other than the task forbidden area;
or, determining an area classification according to the second voice signal; and collecting an image; and
determining, from the image, an area corresponding to the area classification to obtain the at least two working areas.
Afrouzi’357 teaches,
The method according to claim 11, wherein the determining at least two working areas according to the second voice signal comprises:
determining an area classification according to the second voice signal (see at least [93] controlling the robot includes setting a schedule for operating in different areas of the environment, and [307] user may choose the order of covering or operating in the areas of the environment using the user interface. In some embodiments, the user may choose areas to be excluded using the user interface. In some embodiments, the user may adjust or create a coverage path of the robot using the user interface)
determining the at least two working areas from a set of areas corresponding to an environmental map according to the area classification (see at least [57] In some embodiments, regions wherein object are consistently encountered or observed may be classified by the processor as high object density areas and may be marked as such in the map of the environment, [57] the processor may attempt to alter its path to avoid high object density areas or to cover high object density areas at the end of a work session, and [93] controlling the robot includes setting a schedule for operating in different areas of the environment; setting a no-entry zone; setting a no-operating zone; creating a virtual wall; setting an order of operating in different areas of the environment; creating or modifying or deleting a path of the robot; moving the robot in a particular direction; setting a driving speed of the robot; setting a volume of the robot; setting a voice type of the robot; commanding the robot to purchase an item; and commanding the robot to play a media).
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Afrouzi’357 teaches identifying different work areas/zones for the robot to operate in.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with zone-based operation taught by Afrouzi’357 with a reasonable expectation of success, to optimize division of zones (241).
As per Claim 16,
Kim does not disclose,
The method according to claim 15, wherein the determining the at least two working areas from an environmental map according to the area classification comprises:
when the area classification indicates a target object, determining, from the environmental map by centering on the target object, an area including the target object to obtain the at least two working areas.
Afrouzi’357 teaches,
The method according to claim 15, wherein the determining the at least two working areas from an environmental map according to the area classification comprises:
when the area classification indicates a target object, determining, from the environmental map by centering on the target object, an area including the target object to obtain the at least two working areas (See at least [56] images of the environment captured by a camera of the robot may be used by the processor to identify objects observed, extract features of the objects observed (e.g., shapes, colors, size, angles, etc.), and determine the type of objects observed based on the extracted features, [56] types of objects surrounding a robot (e.g., television, home assistant, radio, coffee grinder, vacuum cleaner, treadmill, cat, dog, human users, etc.) may be determined based on features extracted, [93] setting an order of operating in different areas of the environment, and [243] processor determines an order score for each node to determine order of coverage based on the distance between the boundary node of interest and the closest boundary node in the next zone to be covered, the distance between the closest boundary nodes between the current zone and the next zone to be covered, and the distance between the furthest boundary nodes between the current zone and the next zone to be covered).
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Afrouzi’357 teaches identifying different work areas/zones/environments for the robot to operate in.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with zone-based operation taught by Afrouzi’357 with a reasonable expectation of success, to optimize division of zones (241).
As per Claim 17,
Kim does not disclose,
The method according to claim 15, wherein before the determining at least two working areas according to the second voice signal, the method further comprises:
dividing the environmental map into a plurality of working areas according to the environmental map, location information of an object in the environmental map,
or location information of a gate in the environmental map, to obtain the set of areas;
updating an identifier of each working area in the set of areas; and
sending update information to a voice recognition server such that the voice recognition server updates the identifier of each working area.
Afrouzi’357 teaches,
The method according to claim 15, wherein before the determining at least two working areas according to the second voice signal, the method further comprises:
dividing the environmental map into a plurality of working areas according to the environmental map, location information of an object in the environmental map (see at least [8] map the environment, localize the robot, determine division of the environment into zones, [47] In some embodiments, the map is further processed to identify rooms and other segments. Examples of methods for dividing an environment into zones are described in U.S. patent application Ser. Nos. 14/817,952, 16/198,393, 16/599,169, and 15/619,449, the entire contents of which are hereby incorporated by reference. In some embodiments, a new map is constructed at each use, or an extant map is updated based on newly acquired data, and [231] the processor of the robot is taught the same path or different paths multiple times in the same area. In some embodiments, the processor of the robot is taught one or more paths for one or more different areas (e.g., kitchen, bathroom, bedroom, etc.) and paths to navigate between one or more areas. Over time, as the processor learns more and more paths, the processor becomes more efficient at covering areas or navigating between two areas or locations)
or location information of a gate in the environmental map, to obtain the set of areas (see at least [40] processor determines if the gap is a doorway using door detection methods described in U.S. Patent Application No. U.S. patent application Ser. Nos. 15/614,284, 16/851,614, and 16/163,541, the entire contents of which is hereby incorporated by reference. In some embodiments, the processor may mark the location of doorways within a map of the environment. In some embodiments, the processor uses doorways to segment the environment into two or more subareas).
updating an identifier of each working area in the set of areas (see at least [229] the processor divides the environment into zones and then dynamically adjusts a movement path within each of those zones based on sensed attributes of the environment)
sending update information to a voice recognition server such that the voice recognition server updates the identifier of each working area (see at least [306] the user may select areas within the map of the environment displayed on the screen using their finger or providing verbal instructions, or in some embodiments, an input device, such as a cursor, pointer, stylus, mouse, button or buttons, or other input methods. In some embodiments, the user may label different areas of the environment using the user interface of the application. In some embodiments, the user may use the user interface to select any size area (e.g., the selected area may be comprised of a small portion of the environment or could encompass the entire environment) or zone within a map displayed on a screen of the communication device and the desired settings for the selected area)
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Afrouzi’357 teaches identifying different work areas/zones/environments for the robot to operate in.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with zone-based operation taught by Afrouzi’357 with a reasonable expectation of success, to optimize division of zones (241).
As per Claim 25,
Kim does not disclose,
The method according to claim 15, wherein the executing a task indicated by the second voice signal for the at least two working areas sequentially comprises:
determining an operation mode according to the area classification; and
executing the task for the at least two working areas sequentially according to the operation mode.
Afrouzi’357 teaches,
The method according to claim 15, wherein the executing a task indicated by the second voice signal for the at least two working areas sequentially comprises:
determining an operation mode according to the area classification (see at least [200] the processor uses localization to control the behavior of the robot in different areas, where for instance, certain functions or settings are desired for different environments. These functions or settings may be triggered once the processor has localized the robot against the environment. For example, it may be desirable to run the motor at a higher speed when moving over rough surfaces, such as soft flooring as opposed to hardwood, wherein localization against floor type or against a room may trigger the motor speed, and [307] the user may choose different robot cleaning settings for different areas within the environment or may schedule particular robot cleaning settings at specific times using the user interface. In some embodiments, the user may choose the order of covering or operating in the areas of the environment using the user interface)
executing the task for the at least two working areas sequentially according to the operation mode (see at least [269] the processor adjusts a path, operational schedule (e.g., time when various designated areas are worked upon), and the like based on sensory data. Examples of environmental characteristics include floor type, obstacle density, room or area type, level of debris accumulation, level of user activity, time of user activity, etc.)
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Afrouzi’357 teaches identifying different work areas/zones/environments for the robot to operate in.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with zone based operation taught by Afrouzi’357 with a reasonable expectation of success, to optimize division of zones (241).
As per Claim 18,
Kim does not disclose,
The method according to claim 11, wherein the collecting a second voice signal in a wake-up state of the voice control function comprises:
controlling the autonomous mobile robot to switch from a first operating state to a second operating state when the first voice signal matches the wake-up command of the autonomous mobile robot,
wherein volume of sound produced by the autonomous mobile robot in the second operating state is smaller than volume of sound produced in the first operating state,
and, the wake-up command is configured for waking up the voice control function of the autonomous mobile robot; and collecting the second voice signal in the second operating state.
Chae teaches,
controlling the autonomous mobile robot to switch from a first operating state to a second operating state when the first voice signal matches the wake-up command of the autonomous mobile robot, (see at least [0008] provide a device and a method for recognizing a voice based on artificial intelligence that extract a template from a voice of a user to recognize the user even when a boundary between a wake-up voice and a command voice is unclear, and [0159] the processor 260a may distinguish the user using the spectrum of the voice signal and may selectively perform context dependent speaker identification using a keyword of the wake-up voice and context independent speaker identification based on an unstructured natural language command voice as a method for identifying the wake-up voice specific to the user)
and, the wake-up command is configured for waking up the voice control function of the autonomous mobile robot (see at least [0016] receive a first voice signal and compare the first voice signal with the template to determine whether the first voice signal matches the first user, and to determine whether to wake-up the electronic device based on the determination result)
and collecting the second voice signal in the second operating state (see at least [0023] the electronic device is automatically controlled in response to a voice signal associated to operation of the electronic device).
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Chae teaches a method for recognizing a voice based on artificial intelligence that extracts a template from a voice of a user to recognize the user even when a boundary between a wake-up voice and a command voice is unclear.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with the recognition of a wake-up voice and a command voice taught by Chae, with a reasonable expectation of success, to wake up the electronic device based on the determination result (0010).
Afrouzi’040 teaches,
wherein volume of sound produced by the autonomous mobile robot in the second operating state is smaller than volume of sound produced in the first operating state (see at least [0634] In response to inferring the presence of users, the processor may reduce motor speed of components (e.g., impeller motor speed) to decrease noise disturbance, and [0721] the processor may use machine learning techniques to de-noise the voice input such that it may reach a quality desired for speech-to-text conversion. In some embodiments, the robot may constantly listen and monitor for audio input triggers that may instruct or initiate the robot to perform one or more actions)
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Afrouzi’040 teaches methods such as mapping, localization, object recognition, and path planning.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with mapping, localization and path planning as taught by Afrouzi’040, with a reasonable expectation of success, to operate autonomously or with minimal (or less than fully manual) input and/or external control within an environment (0004).
As per Claim 19,
Kim discloses,
The method according to claim 18, further comprising:
when the first voice signal matches the wake-up command of the autonomous mobile robot, determining a sound source location of the first voice signal; (see at least Fig. 5, S503, [0037] when a person calls the autonomous mobile apparatus using the human voice, the autonomous mobile apparatus detects the sound location, recognizes the human voice, moves around a person)
controlling the autonomous mobile robot to switch from a first posture to a second posture according to the sound source location, wherein a distance between a microphone and the sound source location when the autonomous mobile robot is in the second posture is less than a distance between the microphone and the sound source location when the autonomous mobile robot is in the first posture, and the microphone is a microphone arranged on the autonomous mobile robot (see at least Fig. 5, S504, [0070] The behavior subsystem 430 manages various unit behaviors of a robot and executes a requested unit behavior at the time of request in a task execution module. The behavior includes a behavior (sound reacting behavior) turning a user's head to a sound direction by responding to a call sound of a user, a behavior (autonomous traveling behavior) moving to a designated position while avoiding obstacles, a behavior (user search) searching the surrounding)
Kim does not explicitly disclose,
The method according to claim 18, further comprising:
when the first voice signal matches the wake-up command of the autonomous mobile robot, determining a sound source location of the first voice signal; and
controlling the autonomous mobile robot to switch from a first posture to a second posture according to the sound source location, wherein a distance between a microphone and the sound source location when the autonomous mobile robot is in the second posture is less than a distance between the microphone and the sound source location when the autonomous mobile robot is in the first posture, and the microphone is a microphone arranged on the autonomous mobile robot.
Chae teaches,
The method according to claim 18, further comprising: when the first voice signal matches the wake-up command of the autonomous mobile robot, determining a sound source location of the first voice signal (see at least Fig. 7, [0014] extracting a spectrum for the first voice signal; and comparing the spectrum with the template specific to the first user to calculate a distance, [0015] the calculating of the distance may further include: comparing a length of the distance with a predetermined error range; and outputting a wake-up signal or a non-wake-up signal to control the electronic device when the length of the distance is greater than the predetermined error range, [0174] When the input unit 250 receives the voice signal of the user, the processor 260a may recognize whether a voice input corresponding to the input voice signal is the wake-up voice and compare the input voice with the template to measure the degree of match, that is, distance therebetween and transmit the degree of match to the output unit 270. Then, the output unit 270 may determine whether to operate the electronic device 100a based on the wake-up signal or the non-wake-up signal)
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Chae teaches a method for recognizing a voice based on artificial intelligence that extracts a template from a voice of a user to recognize the user even when a boundary between a wake-up voice and a command voice is unclear.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with the recognition of a wake-up voice and a command voice taught by Chae, with a reasonable expectation of success, to wake up the electronic device based on the determination result (0010).
Afrouzi’040 teaches,
controlling the autonomous mobile robot to switch from a first posture to a second posture according to the sound source location, wherein a distance between a microphone and the sound source location when the autonomous mobile robot is in the second posture is less than a distance between the microphone and the sound source location when the autonomous mobile robot is in the first posture, and the microphone is a microphone arranged on the autonomous mobile robot (see at least [0721] the robot may turn towards the direction from which a voice input originated for a better user-friendly interaction, as humans generally face each other when interacting. In some embodiments, there may be multiple devices including a microphone within a same environment. In some embodiments, the processor may continuously monitor microphones (local or remote) for audio inputs that may have originated from the vicinity of the robot)
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Afrouzi’040 teaches methods such as mapping, localization, object recognition, and path planning.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with mapping, localization and path planning as taught by Afrouzi’040, with a reasonable expectation of success, to operate autonomously or with minimal (or less than fully manual) input and/or external control within an environment (0004).
As per Claim 20,
Kim discloses,
The method according to claim 19, wherein the controlling the autonomous mobile robot to switch from a first posture to a second posture according to the sound source location comprises:
determining a rotation angle according to the sound source location, the rotation angle being configured for indicating an angle that the autonomous mobile robot needs to rotate when switching from the first posture to the second posture (see at least [0057] the intelligent mobile robot recognizes a call sound and a call direction of a user to detect a person's shape, estimates a position of a user using the sound source localization and the detected person's shape, moves to a substantially estimated position, and then, again searches a person that is a caller through the user recognition, and [0074] When the user (S501) calls the mobile robot (S502), the mobile robot receives the user's voice and recognizes whether a call is present using the received voice information (S503). The camera moves (or, rotates, directs) to the call direction based on the call sound and the sound direction for a call (S504). The camera may be mounted at a head or other components of the robot)
Kim does not disclose,
classifying the rotation angle into a first angle and a second angle; and
rotating at a first speed within the first angle, and rotating at a second speed within the second angle, the first speed being greater than the second speed.
Afrouzi’357 teaches,
classifying the rotation angle into a first angle and a second angle (see at least [18] the robot may use differential-drive wherein two fixed wheels have a common axis of rotation and angular velocities of the two wheels are equal and opposite such that the robot may rotate on the spot, [31] The area of overlap between two consecutive fields of view correlates with the angular movement of the camera (relative to a static frame of reference of a room, for example) from one field of view to the next field of view. By ensuring the frame rate of the camera is fast enough to capture more than one frame of readings in the time it takes the camera to rotate the width of the frame, there is always overlap between the readings taken within two consecutive fields of view, and [31] In some embodiments, the processor infers the angular disposition of the Robot from the size of the area of overlap and uses the angular disposition)
rotating at a first speed within the first angle, and rotating at a second speed within the second angle, the first speed being greater than the second speed ([18] the robot may use differential-drive wherein two fixed wheels have a common axis of rotation and angular velocities of the two wheels are equal and opposite such that the robot may rotate on the spot, [22] The coverage tracker 104 may receive information from the pose estimator 105, SLAM 106, and SLAM updated 107 that it may use in tracking coverage. In one embodiment, the coverage tracker 104 may run at 2.4 Hz. In other indoor embodiments, the coverage tracker may run at between 1-50 Hz. For outdoor robots, the frequency may increase depending on the speed of the robot and the speed of data collection. A person in the art would be able to calculate the frequency of data collection, data usage, and data transmission to control system. The control system 101 may receive information from the pose and maps portion 103 that may be used for navigation decisions. Further details of a robot system that may be used is described in U.S. patent application Ser. No. 16/920,328, the entire contents of which is hereby incorporated by reference and, [310] move or rotate in a particular direction).
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Afrouzi’357 teaches identifying different work areas/zones/environments for the robot to operate in.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with the zone-based operation taught by Afrouzi’357, with a reasonable expectation of success, to optimize division of zones (241).
As per Claim 26,
Kim does not disclose,
The method according to claim 11, wherein after the executing a task indicated by the second voice signal for the at least two working areas sequentially, the method further comprises:
determining whether the task has been executed for an initial area, the initial area being an area where the autonomous mobile robot collects the second voice signal; and returning to the initial area to execute the task when the task has not been executed for the initial area
Afrouzi’357 teaches,
The method according to claim 11, wherein after the executing a task indicated by the second voice signal for the at least two working areas sequentially, the method further comprises:
determining whether the task has been executed for an initial area, the initial area being an area where the autonomous mobile robot collects the second voice signal; and returning to the initial area to execute the task when the task has not been executed for the initial area (see at least [227] In some embodiments, if the robot enters a work area, the robot may be commanded to leave the work area. In some embodiments, the robot may attempt to return to the work area for operations at a later time, [227] the robot may alter a schedule it has set for recurring services based on commands received to vacate an area. In some embodiments, a command may be set for the robot to vacate an area but to return at an unspecified future time. In some embodiments, a command may be set for the robot to vacate an area but to return at a specified predetermined time, [271] providing the robot with current progress of the task and a map of the area such that it may complete the task)
Thus, Kim discloses a mobile apparatus that recognizes the direction of a call signal and generates a moving command to move to the position of the work area (signal source), and Afrouzi’357 teaches identifying different work areas/zones/environments for the robot to operate in.
As a result, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the inventions as disclosed by Kim with the zone-based operation taught by Afrouzi’357, with a reasonable expectation of success, to optimize division of zones (241).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicants should take note of the prior art in the PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASHUTOSH PANDE whose telephone number is (571) 272-6269. The examiner can normally be reached Monday-Friday, 9:00 AM-5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fadey Jabr, can be reached at 571-272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.P./Examiner, Art Unit 3668
/Thomas Ingram/Primary Examiner, Art Unit 3668