DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
• This action is in reply to the amendments filed on 01/21/2026 for Application No. 18/351,484.
• Claims 1, 3, and 5–18 are currently pending and have been examined. Claims 1, 3, 10, 12, 17, and 18 have been amended. Claims 2 and 4 have been cancelled.
• This action is made NON-FINAL.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/21/2026 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 6–11, and 14–16 are rejected under 35 U.S.C. 103 as being unpatentable over Afrouzi et al. (US 11274929 B1) in view of Park et al. (US 20210331315 A1).
Regarding claim 1, Afrouzi teaches a robot control method for building a complete map (Afrouzi: Col. 2, lines 20 – 30 “Some aspects include a method for mapping and covering a workspace, including: capturing, with at least one sensor of a robot, first data indicative of the position of the robot in relation to objects within the workspace and second data indicative of movement of the robot; recognizing, with a processor of the robot, a first area of the workspace based on observing at least one of: a first part of the first data and a first part of the second data; generating, with the processor of the robot, at least part of a map of the workspace based on at least one of: the first part of the first data and the first part of the second data.”,
Supplemental Note: Afrouzi teaches a method of creating a map of the environment using the robot's sensors)
before a robot performs a task, comprising: (Afrouzi: Col. 1, line 65 – Col. 2, line 6: “To operate autonomously or to operate with minimal (or less than fully manual) input and/or external control within a working environment, mapping, localization, and path planning methods may be such that robotic devices may autonomously create a map of the working environment, subsequently use the map for navigation, and devise intelligent path plans and task plans for efficient navigation and task completion.”)
performing, by at least one processer, preliminary map construction to obtain an initial map, wherein the initial map comprises unknown regions; (Afrouzi: Col. 59, lines 32 – 45: “In some embodiments, the processor may determine an amount of time for building the map. In some embodiments, an Internet of Things (IoT) subsystem may create and/or send a binary map to the cloud and an application of a communication device. In some embodiments, the IoT subsystem may store unknown points within the map. In some embodiments, the binary maps may be an object with methods and characteristics such as capacity, raw size, etc. having data types such as a byte. In some embodiments, a binary map may include the number of obstacles. In some embodiments, the map may be analyzed to find doors within the room. In some embodiments, the time of analysis may be determined. In some embodiments, the global map may be provided in ASCII format.” )
selecting, by at least one processer, each candidate navigation point from the initial map of the robot, wherein each candidate navigation point corresponds to an unknown region; (Afrouzi: Col. 59, lines 55 – 60: “In some embodiments, the map may be pushed to the cloud after completion of coverage wherein the robot has examined every area within the map by visiting each area implementing any required corrections to the map. In some embodiments, the map may be provided after a few runs to provide an accurate representation of the environment.”,
Supplemental Note: an incomplete map can be sent to the robot where it can complete the coverage of the map)
determining, by at least one processer, a target navigation point from each candidate navigation point according to a relative positional relationship between each candidate navigation point and the robot; (Afrouzi: Col. 54, line 64 – Col. 55, line 2: “the processor identifies gaps in the map (e.g., due to areas blind to a sensor or a range of a sensor). In some embodiments, the processor may actuate the robot to move towards and investigates the gap, collecting observations and mapping new areas by adding new observations to the map until the gap is closed.”; Col. 31, lines 30 – 38: “gaps in the plotted boundary of the enclosure may be identified by one or more processors of the robot and further explored by one or more processors of the robot directing the camera until a complete (or more complete) closed loop boundary of the enclosure is plotted. In some embodiments, beacons are not required and the methods and apparatuses work with minimal or reduced processing power in comparison to traditional methods, which is not to suggest that any other described feature is required.”)
comprising:
determining, by the at least one processer, an access status between each candidate navigation point and the robot, respectively; (Afrouzi: Col. 60, lines 13 – 18: “The robot may, for example, use the map to autonomously navigate the environment during operation, e.g., accessing the map to determine that a candidate route is blocked by an obstacle denoted in the map, to select a path with a path planning algorithm from a current point to a target point, or the like.”; Col. 58, lines 59 – 66: “the processor may use a supervised machine learning algorithm to identify features of openings and walls. A training set of, for example, depth data may be used by the processor to teach the classifier common features or patterns in the data corresponding with openings and walls such that the processor may identify walls and openings in walls with some probability distribution.”,
Supplemental Note: the robot is able to travel along target points and identify whether it is detecting a wall or an opening. This is interpreted as determining an access status)
… controlling, by at least one processer, the robot to move to the target navigation point, so as to explore an unknown region corresponding to the target navigation point, thereby building the complete map (Afrouzi: Col. 54, line 64 – Col. 55, line 2: “the processor identifies gaps in the map (e.g., due to areas blind to a sensor or a range of a sensor). In some embodiments, the processor may actuate the robot to move towards and investigates the gap, collecting observations and mapping new areas by adding new observations to the map until the gap is closed.”; Col. 55, lines 13 – 38: “Issues related to incorrect perimeter prediction may be eradicated with thorough inspection of the environment and training. For example, data from a second type of sensor may be used to validate a first map constructed based on data collected by a first type of sensor. In some embodiments, additional information discovered by multiple sensors may be included in multiple layers or different layers or in the same layer. In some embodiments, a training period of the robot may include the robot inspecting the environment various times with the same sensor or with a second (or more) type of sensor. In some embodiments, the training period may occur over one session (e.g., during an initial setup of the robot) or multiple sessions. In some embodiments, a user may instruct the robot to enter training at any point. In some embodiments, the processor of the robot may transmit the map to the cloud for validation and further machine learning processing. For example, the map may be processed on the cloud to identify rooms within the map. In some embodiments, the map including various information may be constructed into a graphic object and presented to the user (e.g., via an application of a communication device). In some embodiments, the map may not be presented to the user until it has been fully inspected multiple times and has high accuracy. In some embodiments, the processor disables a main brush and/or a side brush of the robot when in training mode or when searching and navigating to a charging station.”,
Supplemental Note: based on a gap in the map data, the robot can travel to that location to fill in the data. It should be noted that the system registers unknown areas as gaps as well as doorways).
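For illustration only, the following is a minimal sketch of the overall exploration loop recited by claim 1 as mapped above: perform preliminary mapping, select candidate navigation points for the unknown regions, choose a target from the positional relationship, navigate, and repeat until no unknown regions remain. All names (robot, run_preliminary_mapping, find_candidates, choose_target, navigate_to) are hypothetical and are not taken from Afrouzi or from the claims.

```python
# Hypothetical sketch of the claimed exploration loop; the robot object
# and the helper callables are assumptions, not Afrouzi's disclosed API.
def build_complete_map(robot, find_candidates, choose_target):
    robot.run_preliminary_mapping()      # initial map with unknown regions
    while True:
        # One candidate navigation point per unknown-region boundary.
        candidates = find_candidates(robot.map)
        if not candidates:
            break                        # no unknown regions: map is complete
        # Target chosen per the relative positional relationship
        # (access status and distance) between each candidate and the robot.
        target = choose_target(candidates, robot.pose)
        robot.navigate_to(target)        # explore the corresponding region
    return robot.map
```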
In sum, Afrouzi teaches a robot control method for building a complete map before a robot performs a task, comprising: performing, by at least one processer, preliminary map construction to obtain an initial map, wherein the initial map comprises unknown regions; selecting, by at least one processer, each candidate navigation point from the initial map of the robot, wherein each candidate navigation point corresponds to an unknown region; determining, by at least one processer, a target navigation point from each candidate navigation point according to a relative positional relationship between each candidate navigation point and the robot; comprising: determining, by the at least one processer, an access status between each candidate navigation point and the robot, respectively; controlling, by at least one processer, the robot to move to the target navigation point, so as to explore an unknown region corresponding to the target navigation point, thereby building the complete map. Afrouzi however does not fully teach calculating, by the at least one processer, a distance between each candidate navigation point and the robot, respectively; and determining, by the at least one processer, the target navigation point from each candidate navigation point according to the access status and the distance between each candidate navigation point and the robot, further comprising: determining, by the at least one processer, a first weight of each candidate navigation point respectively according to the access status between each candidate navigation point and the robot; determining, by the at least one processer, a second weight of each candidate navigation point respectively according to the distance between each candidate navigation point and the robot; calculating, by the at least one processer, a comprehensive weight of each candidate navigation point respectively according to the first weight and the second weight of each candidate navigation point; and determining, by the at least one processer, a candidate navigation point with a highest comprehensive weight as the target navigation point.
Park teaches calculating, by the at least one processer, a distance between each candidate navigation point and the robot, respectively; and determining, by the at least one processer, the target navigation point from each candidate navigation point according to the access status and the distance between each candidate navigation point and the robot, further comprising: determining, by the at least one processer, a first weight of each candidate navigation point respectively according to the access status between each candidate navigation point and the robot; determining, by the at least one processer, a second weight of each candidate navigation point respectively according to the distance between each candidate navigation point and the robot; calculating, by the at least one processer, a comprehensive weight of each candidate navigation point respectively according to the first weight and the second weight of each candidate navigation point; and determining, by the at least one processer, a candidate navigation point with a highest comprehensive weight as the target navigation point; and (Park: Paragraph 0113: “Accordingly, as illustrated in FIG. 4, a waypoint, through which the robot is required pass to arrive at a destination, may be stored in the storage 730, and the robot may move while continuously changing a detailed plan for a path considering the position of a sensed (recognized) obstacle between waypoints.”; Paragraph 0115: “The robot goes through W1, W2, W3, W4, and W5 while moving from a start point (START) to a destination point (END). The waypoints may be classified as essential waypoints and non-essential waypoints, and the essential waypoints and non-essential waypoints may be stored in the storage 730.”; Paragraph 0121: “When the controller confirms that the robot may not approach to the first waypoint, but a distance between the current position of the robot and the coordinate of the first waypoint is longer than the preset reference distance, the robot may search for a path with no obstacle between the first waypoint and the robot while moving around the first waypoint, or when the second waypoint, through which the robot passes next, takes higher priority over the first waypoint, the controller may store the state of non-arrival at the first waypoint, and may generate a navigation route to the second waypoint (S106).”; Paragraph 0123: “Further, when a range of waypoints (effective area range) is fixed, the robot may wander around a waypoint in the case in which there are lots of obstacles in the effective area range. Furthermore, when an effective area range is too wide, the robot is highly likely to navigate away from a waypoint. Accordingly, the above-describe effective area range may be changed while the robot is navigating. Additionally, the robot may choose to move to the next waypoint on the basis of the current position of the robot and the location of the waypoint.”,
Supplemental Note: the robot is able to travel to various waypoints. If the distance to one waypoint is longer than expected (the waypoint being surrounded by obstacles), the robot determines to move to the next waypoint. The access status is the determination of whether the robot is able to reach the waypoint given the obstacles, and of whether the distance to reach the waypoint is too large, in which case the robot moves to the next waypoint. These parameters of distance to a waypoint and access to a waypoint are interpreted as weights; both are evaluated in determining which waypoint the robot is to travel to. The multiple waypoints and their positions in relation to the robot are shown below in Table A)
[media_image1.png: greyscale image reproducing Park, Table 1]
Table A - Park; Table 1
… wherein the relative positional relationship comprises the access status and the distance (Park: Paragraph 0123: “Further, when a range of waypoints (effective area range) is fixed, the robot may wander around a waypoint in the case in which there are lots of obstacles in the effective area range. Furthermore, when an effective area range is too wide, the robot is highly likely to navigate away from a waypoint. Accordingly, the above-describe effective area range may be changed while the robot is navigating. Additionally, the robot may choose to move to the next waypoint on the basis of the current position of the robot and the location of the waypoint.”).
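To make the claimed weighting concrete, the following is a minimal sketch in which a first weight is derived from the access status, a second weight from the distance, and a comprehensive weight selects the target navigation point. The particular weight values and the linear combination are assumptions for illustration; neither Park nor Afrouzi discloses these numbers.

```python
import math

BLOCKED, DIRECT = 0, 1  # hypothetical access-status labels

def comprehensive_weight(candidate, robot_xy, alpha=0.5):
    """Combine an access-status weight and a distance weight."""
    # First weight: favor candidates in the directly communicated status.
    w_access = 1.0 if candidate["access"] == DIRECT else 0.2
    # Second weight: favor nearby candidates (inverse of distance).
    w_dist = 1.0 / (1.0 + math.dist(candidate["xy"], robot_xy))
    # Comprehensive weight: an assumed weighted sum of the two weights.
    return alpha * w_access + (1.0 - alpha) * w_dist

def select_target(candidates, robot_xy):
    # The candidate with the highest comprehensive weight becomes the
    # target navigation point, per the claim language.
    return max(candidates, key=lambda c: comprehensive_weight(c, robot_xy))
```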
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Afrouzi with the teachings of Park with a reasonable expectation of success. Both Afrouzi and Park teach autonomous robots which are able to sense and map out their surroundings. Both robots have the ability to independently navigate while avoiding obstacles, and Park teaches a method of evaluating distances to waypoints and their access status depending on how many obstacles are around a particular waypoint. This evaluation is used to determine which of the waypoints the robot is to travel to. It would have been obvious for one of ordinary skill in the art to try to implement this with the cleaning robot of Afrouzi. For example, if the cleaning robot is traveling to multiple waypoints and a user is in the way of one of the waypoints (named “W1”), the robot is able to determine whether the distance to W1 is too large and whether the robot has access to W1 with the user in the way. Based on this evaluation, the robot can determine whether to travel to W1 or to another waypoint (named “W2”). This increases the efficiency with which the robot cleans the room, as it can prioritize areas with shorter distances and a higher access status, and it increases the safety of the user and the robot by reducing the chance that, for example, the user accidentally steps on the robot as it cleans near them.
Regarding claim 3, Afrouzi, as modified, teaches wherein the determining the access status between each candidate navigation point and the robot, respectively comprises:
constructing, by the at least one processer, a connecting line between a current candidate navigation point and the robot, wherein the current candidate navigation point is any candidate navigation point; (Afrouzi: Col. 38, line 42 – Col. 39, line 14: “FIGS. 33A-33D illustrate an example of a boustrophedon movement pattern of the robot. In FIG. 33A robot 3300 begins near wall 3301, docked at its charging or base station 3302. Robot 3300 rotates 360 degrees in its initial position to attempt to map environment 3303, however, areas 3304 are not observed by the sensors of robot 3300 as the areas surrounding robot 3300 are too close, and the areas at the far end of environment 3303 are too far to be observed. Minimum and maximum detection distances may be, for example, 30 and 400 centimeters, respectively. Instead, in FIG. 33B, robot 3300 initially moves backwards in direction 3305 away from charging or base station 3302 by some distance 3306 where areas 3307 are observed. Distance 3306 is not particularly large, it may be 40 centimeters, for example. In FIG. 33C, robot 3300 then rotates 180 degrees in direction 3308 resulting in observed areas 3307 expanding. Areas immediately to either side of robot 3300 are too close to be observed by the sensors while one side is also unseen, the unseen side depending on the direction of rotation. In FIG. 33D, robot 3300 then moves in forward direction 3309 by some distance 3310, observed areas 3307 expanding further as robot 3300 explores undiscovered areas. The processor of robot 3300 determines distance 3310 by which robot 3300 travels forward by detection of an obstacle, such as wall 3311 or furniture or distance 3310 is predetermined. In FIG. 33E, robot 3300 then rotates another 180 degrees in direction 3308. In FIG. 33F, robot 3300 moves by some distance 3312 in forward direction 3313 observing remaining undiscovered areas. The processor determines distance 3312 by which the robot 3300 travels forward by detection of an obstacle, such as wall 3301 or furniture or distance 3312 is predetermined. The back and forth movement described is repeated wherein robot 3300 makes two 180 degree turns separated by some distance, such that movement of robot 3300 is a boustrophedon pattern, travelling back and forth across the environment while mapping. In other embodiments, the direction of rotations may be opposite to what is illustrated in this exemplary embodiment.”,
Supplemental Note: the robot is interpreted to construct a line in the direction of travel by some distance when performing the boustrophedon movement pattern. In this example, the robot first travels a predetermined distance 3306 and then distances 3310 and 3312 while approaching an undiscovered area and/or obstacle. This creates lines from the robot's current location to another point determined by the traveled distances and the objects identified by the sensors)
determining, by the at least one processer, whether the connecting line passes through a long-side obstacle in the map;
when the connecting line passes through the long-side obstacle in the map, determining that the current candidate navigation point and the robot are in a blocked status; and (Afrouzi: Col. 38, line 65 – Col. 39, line 2: “The processor of robot 3300 determines distance 3310 by which robot 3300 travels forward by detection of an obstacle, such as wall 3311 or furniture or distance 3310 is predetermined. In FIG. 33E, robot 3300 then rotates another 180 degrees in direction 3308.”; Col. 37, lines 13 – 23: “The robot, in some embodiments, then moves in a forward direction (defined as the direction in which the sensor points, e.g., the centerline of the field of view of the sensor) by some first distance allowing the sensors to observe surroundings areas within the detection range as the robot moves. The processor, in some embodiments, determines the first forward distance of the robot by detection of an obstacle by a sensor, such as a wall or furniture, e.g., by making contact with a contact sensor or by bringing the obstacle closer than the maximum detection distance of the robot's sensor for mapping.”; Col. 37, lines 32 – 40: “In some embodiments, the processor may determine the second forward travel distance by detection of an obstacle by a sensor, such moving until a wall or furniture is within range of the sensor. In some embodiments, the second forward travel distance is predetermined or dynamically determined in the manner described above. In doing so, the sensors observe any remaining undiscovered areas from the first forward distance travelled across the environment as the robot returns back in the opposite direction.”,
Supplemental Note: the vehicle is able to identify a wall if it is within the movement path of the robot. When detecting a wall, the robot is able to identify that the path is blocked, as the robot cannot travel past that obstacle)
when the connecting line does not pass through the long-side obstacle in the map, determining that the current candidate navigation point and the robot are in a directly communicated status (Afrouzi: Col. 37, line 58 – Col. 38, line 2: “in some embodiments, the robot is at one end of the environment, facing towards the open space. From here, the robot moves in a first forward direction (from the perspective of the robot as defined above) by some distance then rotates 90 degrees in a clockwise direction. The processor determines the first forward distance by which the robot travels forward by detection of an obstacle by a sensor, such as a wall or furniture. In some embodiments, the first forward distance is predetermined (e.g., and measured by another sensor, like an odometer or by integrating signals from an inertial measurement unit).”,
Supplemental Note: the travel path of the robot is towards an open space, thus a location in which the first forward direction does not intersect an obstacle. This is interpreted as a directly communicated status).
Regarding claim 6, Afrouzi, as modified, teaches wherein the selecting each candidate navigation point from a map of a robot comprises:
identifying, by the at least one processer, boundary pixel points in the map, wherein the boundary pixel points are known region pixel points adjacent to unknown region pixel points; (Afrouzi: Col. 2, lines 20 – 46: “Some aspects include a method for mapping and covering a workspace, including: capturing, with at least one sensor of a robot, first data indicative of the position of the robot in relation to objects within the workspace and second data indicative of movement of the robot; recognizing, with a processor of the robot, a first area of the workspace based on observing at least one of: a first part of the first data and a first part of the second data; generating, with the processor of the robot, at least part of a map of the workspace based on at least one of: the first part of the first data and the first part of the second data; generating, with the processor of the robot, a first movement path covering at least part of the first recognized area of the workspace; actuating, with the processor of the robot, the robot to move along the first movement path; recognizing, with the processor of the robot, a second area of the workspace based on observing at least one of: a second part of the first data and a second part of the second data; updating, with the processor of the robot, the at least the part of the map of the workspace based on at least one of: the second part of the first data and the second part of the second data; generating, with the processor of the robot, a second movement path covering at least part of the second recognized area of the workspace; and actuating, with the processor of the robot, the robot to move along the second movement path, wherein actuating the robot to move along at least one of the first movement path and the second movement path comprises at least a repetitive iteration”; Col. 29, lines 32 – 38: “In some embodiments, a modified RANSAC approach is used where any two points, one from each data set, are connected by a line. A boundary is defined with respect to either side of the line. Any points from either data set beyond the boundary are considered outliers and are excluded. The process is repeated using another two points. The process is intended to remove outliers to achieve a higher probability of being the true distance to the perceived wall.”; Col. 20, lines 2 – 11: “In some embodiments, the processor may generate or update a map using captured images of the environment. In some embodiments, a captured image may be processed prior to using the image in generating or updating the map. In some embodiments, processing may include replacing readings corresponding to each pixel with averages of the readings corresponding to neighboring pixels. FIG. 18 illustrates an example of replacing a reading 1800 corresponding with a pixel with an average of the readings 1801 of corresponding neighboring pixels 1802.”,
Supplemental Note: the robot is able to make movement paths from recognized areas to unrecognized areas to update the map. The boundaries are shown on the map, which can be updated by pixel values)
clustering, by the at least one processer, the boundary pixel points to obtain each boundary line; and (Afrouzi: Col. 26, lines 44 – 46: “Some embodiments may then determine the centroid of each cluster in the spatial dimensions of an output depth vector for constructing floor plan maps.”,
Supplemental Note: the floor plan maps, which are interpreted as containing the boundary lines, are created from the centroid of each cluster in the captured spatial dimensions)
selecting, by the at least one processer, a midpoint of each boundary line as each candidate navigation point (Afrouzi: Col. 37, lines 12 – 17: “The robot, in some embodiments, then moves in a forward direction (defined as the direction in which the sensor points, e.g., the centerline of the field of view of the sensor) by some first distance allowing the sensors to observe surroundings areas within the detection range as the robot moves.”,
Supplemental Note: the vehicle creates its recognized boundaries from the acquired sensor data; thus, movement of the robot along the centerline of the sensor's field of view is interpreted as meeting the claim limitation).
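The claim-6 steps correspond to a standard frontier-extraction pattern. The sketch below, using assumed cell labels and a simple 4-connected clustering that Afrouzi does not disclose, identifies boundary pixels (known cells adjacent to unknown cells), clusters them into boundary lines, and returns each line's midpoint as a candidate navigation point.

```python
from collections import deque

UNKNOWN, FREE = -1, 0  # hypothetical occupancy-grid cell labels

def boundary_pixels(grid):
    """Known free cells with at least one unknown 4-neighbor."""
    h, w = len(grid), len(grid[0])
    pts = set()
    for y in range(h):
        for x in range(w):
            if grid[y][x] != FREE:
                continue
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == UNKNOWN:
                    pts.add((x, y))
                    break
    return pts

def cluster_midpoints(pts):
    """Flood-fill adjacent boundary pixels into lines; return midpoints."""
    remaining, midpoints = set(pts), []
    while remaining:
        seed = remaining.pop()
        cluster, queue = [seed], deque([seed])
        while queue:
            x, y = queue.popleft()
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in remaining:
                    remaining.discard(nb)
                    cluster.append(nb)
                    queue.append(nb)
        cluster.sort()
        midpoints.append(cluster[len(cluster) // 2])  # candidate point
    return midpoints
```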
Regarding claim 7, Afrouzi, as modified, teaches wherein, before the selecting each candidate navigation point from a map of a robot, the robot control method further comprises: identifying, by the at least one processer, a gap region in the map, wherein the gap region is a region whose entrance width is smaller than a preset width threshold; and (Afrouzi: Col. 56, lines 45 – 51: “In some embodiments, the processor may use a threshold to determine whether the data points considered indicate an opening in the wall when, for example, the error exceeds some threshold value. In some embodiments, the processor may use an adaptive threshold wherein the values below the threshold may be considered to be a wall.”)
removing, by the at least one processer, the gap region from the map (Afrouzi: Col. 54, line 64 – Col. 55, line 2: “In some embodiments, the processor identifies gaps in the map (e.g., due to areas blind to a sensor or a range of a sensor). In some embodiments, the processor may actuate the robot to move towards and investigates the gap, collecting observations and mapping new areas by adding new observations to the map until the gap is closed”; Col. 55, lines 40 – 56: “In some embodiments, a gap in the perimeters of the environment may be due to an opening in the wall (e.g., a doorway or an opening between two separate areas). In some embodiments, exploration of the undiscovered areas within which the gap is identified may lead to the discovery of a room, a hallway, or any other separate area. In some embodiments, identified gaps that are found to be, for example, an opening in the wall may be used in separating areas into smaller subareas. For example, the opening in the wall between two rooms may be used to segment the area into two subareas, where each room is a single subarea. This may be expanded to any number of rooms. In some embodiments, the processor of the robot may provide a unique tag to each subarea and may use the unique tag to order the subareas for coverage by the robot, choose different work functions for different subareas, add restrictions to subareas, set cleaning schedules for different subareas, and the like.”).
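For the claim-7 limitation, the following minimal sketch assumes each detected entrance is already represented as a list of grid cells and assumes the resolution and threshold values shown; this encoding is illustrative and is not disclosed by Afrouzi.

```python
RESOLUTION_M = 0.05    # assumed meters per grid cell
MIN_ENTRANCE_M = 0.35  # assumed preset width threshold
OCCUPIED = 1           # hypothetical cell label

def remove_gap_regions(grid, openings):
    """openings: list of cell lists, each tracing one detected entrance."""
    for cells in openings:
        width_m = len(cells) * RESOLUTION_M
        if width_m < MIN_ENTRANCE_M:
            # Entrance narrower than the preset threshold: seal it so the
            # gap region behind it drops out of candidate selection (the
            # "removing the gap region from the map" step).
            for x, y in cells:
                grid[y][x] = OCCUPIED
    return grid
```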
Regarding claim 8, Afrouzi, as modified, teaches a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores a computer program that, when executed by a processor, (Afrouzi: Claim 21: “A robot, comprising: a drive motor configured to actuate movement of the robot; at least one sensor coupled to the robot; a processor onboard the robot and configured to communicate with the sensor and the drive motor; and memory storing instructions that when executed by the processor cause the robot to effectuate operations comprising:”)
executes steps of the robot control method according to claim 1 (Afrouzi: Col. 8, lines 7 – 24: “In some embodiments, a robot may include, but is not limited to include, one or more of a casing, a chassis including a set of wheels, a motor to drive the wheels, a receiver that acquires signals transmitted from, for example, a transmitting beacon, a transmitter for transmitting signals, a processor, a memory, a controller, tactile sensors, obstacle sensors, network or wireless communications, radio frequency communications, power management such as a rechargeable battery or solar panels or fuel, one or more clock or synchronizing devices, temperature sensors, imaging sensors, and at least one cleaning tool (e.g., impeller, brush, mop, scrubber, steam mop, polishing pad, UV sterilizer, etc.). The processor may, for example, receive and process data from internal or external sensors, execute commands based on data received, control motors such as wheel motors, map the environment, localize the robot, determine division of the environment into zones, and determine movement paths.”).
Regarding claim 9, Afrouzi, as modified, teaches a robot, comprising a memory, a processor and a computer program stored in the memory and operable on the processor (Afrouzi: Claim 21: “A robot, comprising: a drive motor configured to actuate movement of the robot; at least one sensor coupled to the robot; a processor onboard the robot and configured to communicate with the sensor and the drive motor; and memory storing instructions that when executed by the processor cause the robot to effectuate operations comprising:”)
wherein, when the computer program is executed by the processor, the steps of the robot control method according to claim 1 are realized (Afrouzi: Col. 8, lines 7 – 24: “In some embodiments, a robot may include, but is not limited to include, one or more of a casing, a chassis including a set of wheels, a motor to drive the wheels, a receiver that acquires signals transmitted from, for example, a transmitting beacon, a transmitter for transmitting signals, a processor, a memory, a controller, tactile sensors, obstacle sensors, network or wireless communications, radio frequency communications, power management such as a rechargeable battery or solar panels or fuel, one or more clock or synchronizing devices, temperature sensors, imaging sensors, and at least one cleaning tool (e.g., impeller, brush, mop, scrubber, steam mop, polishing pad, UV sterilizer, etc.). The processor may, for example, receive and process data from internal or external sensors, execute commands based on data received, control motors such as wheel motors, map the environment, localize the robot, determine division of the environment into zones, and determine movement paths.”; Col. 161, lines 27 – 30: “In some embodiments, boot up time of the robot may be reduced or performance may be improved by using a higher frequency CPU. In some instances, an increase in frequency of the processor may decrease runtime for all programs.”).
Regarding claim 10, Afrouzi teaches wherein, before the selecting each candidate navigation point from a map of a robot, the robot control method further comprises:
identifying a gap region in the map, wherein the gap region is a region whose entrance width is smaller than a preset width threshold; and (Afrouzi: Col. 56, lines 45 – 51: “In some embodiments, the processor may use a threshold to determine whether the data points considered indicate an opening in the wall when, for example, the error exceeds some threshold value. In some embodiments, the processor may use an adaptive threshold wherein the values below the threshold may be considered to be a wall.”)
removing the gap region from the map (Afrouzi: Col. 54, line 64 – Col. 55, line 2: “In some embodiments, the processor identifies gaps in the map (e.g., due to areas blind to a sensor or a range of a sensor). In some embodiments, the processor may actuate the robot to move towards and investigates the gap, collecting observations and mapping new areas by adding new observations to the map until the gap is closed”; Col. 55, lines 40 – 56: “In some embodiments, a gap in the perimeters of the environment may be due to an opening in the wall (e.g., a doorway or an opening between two separate areas). In some embodiments, exploration of the undiscovered areas within which the gap is identified may lead to the discovery of a room, a hallway, or any other separate area. In some embodiments, identified gaps that are found to be, for example, an opening in the wall may be used in separating areas into smaller subareas. For example, the opening in the wall between two rooms may be used to segment the area into two subareas, where each room is a single subarea. This may be expanded to any number of rooms. In some embodiments, the processor of the robot may provide a unique tag to each subarea and may use the unique tag to order the subareas for coverage by the robot, choose different work functions for different subareas, add restrictions to subareas, set cleaning schedules for different subareas, and the like.”).
Regarding claim 11, Afrouzi teaches wherein, before the selecting each candidate navigation point from a map of a robot, the robot control method further comprises:
identifying a gap region in the map, wherein the gap region is a region whose entrance width is smaller than a preset width threshold; and (Afrouzi: Col. 56, lines 45 – 51: “In some embodiments, the processor may use a threshold to determine whether the data points considered indicate an opening in the wall when, for example, the error exceeds some threshold value. In some embodiments, the processor may use an adaptive threshold wherein the values below the threshold may be considered to be a wall.”)
removing the gap region from the map (Afrouzi: Col. 54, line 64 – Col. 55, line 2: “In some embodiments, the processor identifies gaps in the map (e.g., due to areas blind to a sensor or a range of a sensor). In some embodiments, the processor may actuate the robot to move towards and investigates the gap, collecting observations and mapping new areas by adding new observations to the map until the gap is closed”; Col. 55, lines 40 – 56: “In some embodiments, a gap in the perimeters of the environment may be due to an opening in the wall (e.g., a doorway or an opening between two separate areas). In some embodiments, exploration of the undiscovered areas within which the gap is identified may lead to the discovery of a room, a hallway, or any other separate area. In some embodiments, identified gaps that are found to be, for example, an opening in the wall may be used in separating areas into smaller subareas. For example, the opening in the wall between two rooms may be used to segment the area into two subareas, where each room is a single subarea. This may be expanded to any number of rooms. In some embodiments, the processor of the robot may provide a unique tag to each subarea and may use the unique tag to order the subareas for coverage by the robot, choose different work functions for different subareas, add restrictions to subareas, set cleaning schedules for different subareas, and the like.”).
Regarding claim 14, Afrouzi, as modified, teaches wherein, before the selecting each candidate navigation point from a map of a robot, the robot control method further comprises:
identifying a gap region in the map, wherein the gap region is a region whose entrance width is smaller than a preset width threshold; and (Afrouzi: Col. 56, lines 45 – 51: “In some embodiments, the processor may use a threshold to determine whether the data points considered indicate an opening in the wall when, for example, the error exceeds some threshold value. In some embodiments, the processor may use an adaptive threshold wherein the values below the threshold may be considered to be a wall.”)
removing the gap region from the map (Afrouzi: Col. 54, line 64 – Col. 55, line 2: “In some embodiments, the processor identifies gaps in the map (e.g., due to areas blind to a sensor or a range of a sensor). In some embodiments, the processor may actuate the robot to move towards and investigates the gap, collecting observations and mapping new areas by adding new observations to the map until the gap is closed”; Col. 55, lines 40 – 56: “In some embodiments, a gap in the perimeters of the environment may be due to an opening in the wall (e.g., a doorway or an opening between two separate areas). In some embodiments, exploration of the undiscovered areas within which the gap is identified may lead to the discovery of a room, a hallway, or any other separate area. In some embodiments, identified gaps that are found to be, for example, an opening in the wall may be used in separating areas into smaller subareas. For example, the opening in the wall between two rooms may be used to segment the area into two subareas, where each room is a single subarea. This may be expanded to any number of rooms. In some embodiments, the processor of the robot may provide a unique tag to each subarea and may use the unique tag to order the subareas for coverage by the robot, choose different work functions for different subareas, add restrictions to subareas, set cleaning schedules for different subareas, and the like.”).
Regarding claim 15, Afrouzi, as modified, teaches wherein the long-side obstacle is an obstacle whose projected length on a vertical line of the connecting line between the candidate navigation point and the robot is greater than a preset length threshold, wherein the long-side obstacle comprises walls (Afrouzi: “In some embodiments, the map may be a state space with possible values for x, y, z. In some embodiments, a value of x and y may be a point on a Cartesian plane on which the robot drives and the value of z may be a height of obstacles or depth of cliffs. In some embodiments, the map may include additional dimensions (e.g., debris accumulation, floor type, obstacles, cliffs, stalls, etc.). For example, FIG. 17 illustrates an example of a map that represents a driving surface with vertical undulations (e.g., indicated by measurements in x-, y-, and z-directions).”; Col. 30, lines 62 – 65: “In some embodiments, maps may be three dimensional maps, e.g., indicating the position of walls, furniture, doors, and the like in a room being mapped.”,
Supplemental Note: the robot is able to identify and map out objects in a 3D map indicating the position of the walls; thus, it is able to identify walls).
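The claim-15 definition reduces to a projection test. The sketch below, assuming the obstacle is represented by two segment endpoints and assuming the threshold value shown, projects the obstacle onto the perpendicular of the robot-to-candidate connecting line and compares the projected length against the threshold.

```python
import math

LENGTH_THRESHOLD_M = 0.8  # assumed preset length threshold

def is_long_side_obstacle(robot, candidate, seg_a, seg_b):
    """True if seg_a->seg_b projects longer than the threshold onto the
    perpendicular of the robot-to-candidate connecting line."""
    # Unit vector along the connecting line (robot -> candidate).
    dx, dy = candidate[0] - robot[0], candidate[1] - robot[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm
    # Perpendicular of the connecting line ("vertical line" in the claim).
    px, py = -uy, ux
    # Projected length of the obstacle segment on that perpendicular.
    ex, ey = seg_b[0] - seg_a[0], seg_b[1] - seg_a[1]
    return abs(ex * px + ey * py) > LENGTH_THRESHOLD_M
```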
Regarding claim 16, Afrouzi, as modified, teaches wherein the access status comprises the blocked status (Afrouzi: Col. 38, line 65 – Col. 39, line 2: “The processor of robot 3300 determines distance 3310 by which robot 3300 travels forward by detection of an obstacle, such as wall 3311 or furniture or distance 3310 is predetermined. In FIG. 33E, robot 3300 then rotates another 180 degrees in direction 3308.”; Col. 37, lines 13 – 23: “The robot, in some embodiments, then moves in a forward direction (defined as the direction in which the sensor points, e.g., the centerline of the field of view of the sensor) by some first distance allowing the sensors to observe surroundings areas within the detection range as the robot moves. The processor, in some embodiments, determines the first forward distance of the robot by detection of an obstacle by a sensor, such as a wall or furniture, e.g., by making contact with a contact sensor or by bringing the obstacle closer than the maximum detection distance of the robot's sensor for mapping.”; Col. 37, lines 32 – 40: “In some embodiments, the processor may determine the second forward travel distance by detection of an obstacle by a sensor, such moving until a wall or furniture is within range of the sensor. In some embodiments, the second forward travel distance is predetermined or dynamically determined in the manner described above. In doing so, the sensors observe any remaining undiscovered areas from the first forward distance travelled across the environment as the robot returns back in the opposite direction.”,
Supplemental Note: the robot is able to determine that a wall is an obstacle which it cannot pass through, which it then maps out. This is interpreted as a blocked status, as the map is updated to identify the boundaries within which the robot can travel, constrained by the walls and other obstacles)
and the directly communicated status (Afrouzi: Col. 37, line 58 – Col. 38, line 2: “in some embodiments, the robot is at one end of the environment, facing towards the open space. From here, the robot moves in a first forward direction (from the perspective of the robot as defined above) by some distance then rotates 90 degrees in a clockwise direction. The processor determines the first forward distance by which the robot travels forward by detection of an obstacle by a sensor, such as a wall or furniture. In some embodiments, the first forward distance is predetermined (e.g., and measured by another sensor, like an odometer or by integrating signals from an inertial measurement unit).”,
Supplemental Note: the travel path of the robot is towards an open space, thus a location in which the first forward direction does not intersect an obstacle. This is interpreted as a directly communicated status).
Claim(s) 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Afrouzi et al. (US 11274929 B1) and Park et al. (US 20210331315 A1) as applied to claim 1 above, and further in view of Artes et al. (US 20200150655 A1).
Regarding claim 5, Afrouzi, as modified, does not teach wherein the controlling the robot to move to the target navigation point comprises: moving, by the at least one processer, the target navigation point in a direction away from a target boundary to obtain a corrected target navigation point, wherein the target boundary is an unknown region boundary corresponding to the target navigation point; and controlling, by at least one processer, the robot to move to the corrected target navigation point.
Artes teaches wherein the controlling the robot to move to the target navigation point comprises: moving, by the at least one processer, the target navigation point in a direction away from a target boundary to obtain a corrected target navigation point, wherein the target boundary is an unknown region boundary corresponding to the target navigation point; and controlling, by at least one processer, the robot to move to the corrected target navigation point (Artes: Paragraph 0005: “In the case of autonomous mobile robots that store and maintain a map of their area of deployment in order to use it during their subsequent deployment, a virtual exclusion region can be entered directly into the map. Such an exclusion region may be delineated, for example, by a virtual boundary over which the robot is prohibited from moving. The advantage provided by this purely virtual prohibited are is that no additional markings are needed in the environment of the robot.”; Paragraph 0009: “Further described is a method for controlling an autonomous mobile robot that is configured to independently navigate in an area of robot deployment using sensors and a map, wherein the map comprises at least one virtual boundary line with an orientation that allows to distinguish a first side and a second side of the boundary line. When navigating the robot moving over the boundary line in a first direction—coming from the first side of the boundary line—is avoided, whereas moving over the boundary line in a second direction—coming from the second side of the boundary line—is permitted.”; Paragraph 0039: “Thus, in order to prevent the robot 100 from entering the exclusion region S, the control unit 150 of the robot 100 can employ an obstacle avoidance strategy, also known as obstacle avoidance algorithm, which is configured to control the robot, based on the location of identified obstacles, to prevent the robot from colliding with these obstacles. The location of one or more exclusion regions can be determined based on the virtual exclusion region S stored in the map data. These locations can then be treated in the obstacle avoidance strategy in the same manner as a real obstacle at this location would be treated. Thus, in a simple and easy manner, the robot 100 is prevented from autonomously entering and/or travelling over a virtual exclusion region S. The following examples will illustrate this in greater detail:”; Paragraph 0116: “During the self-localization the robot can test, based on localization hypotheses (that is, on hypotheses based on the sensor and map data regarding the possible position of the robot, for example, a probability model) whether it is in or near a virtual exclusion region S. Based on this information, the robot can adapt its exploratory run for the global self-localization in order to reduce the risk (i.e. the probability) of unintentionally entering a virtual exclusion region. Localization hypotheses for (global) self-localization that are based on probability are known and will therefore not be discussed here in detail. Relevant to this example is the fact that, if the robot does not know its exact position during an exploratory run for self-localization, it can only determine probabilities for specific map positions. When doing so the robot can also test to determine with what probability it is located in an exclusion region. The robot can then adapt its current path in dependency on this probability. 
If, for example, the probability of the robot finding itself in an exclusion region S increases while it moves along a given path it can change its direction of movement until the probability once again decreases.”,
Supplemental Note: the robot has virtual lines which dictate boundaries within which the robot is able to travel. The robot is able to cross a boundary line, but only from one side. This is interpreted to teach the claim limitation, as any movement path outside of the boundary, unless entered from the approved side, is subject to adjustment to stay within the boundary).
Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Afrouzi with the teachings of Artes with a reasonable expectation of success. Afrouzi and Artes both teach cleaning robots able to autonomously travel throughout an area, equipped with sensors to map out their environment. Artes teaches the additional function of creating boundaries onto which the robot cannot travel; one with knowledge in the art would find it obvious to try to combine this function with the robot of Afrouzi to increase the usability of the robot. For example, if there are parts of a room that the cleaning robot is to avoid, the function taught by Artes allows the user to create a boundary that the robot cannot enter. This allows additional flexibility in controlling where the robot travels, mitigating, for example, the robot traveling in areas where it might become stuck or damage an object in the room.
Regarding claim 13, Afrouzi, as modified, teaches wherein, before the selecting each candidate navigation point from a map of a robot, the robot control method further comprises:
identifying a gap region in the map, wherein the gap region is a region whose entrance width is smaller than a preset width threshold; and (Afrouzi: Col. 56, lines 45 – 51: “In some embodiments, the processor may use a threshold to determine whether the data points considered indicate an opening in the wall when, for example, the error exceeds some threshold value. In some embodiments, the processor may use an adaptive threshold wherein the values below the threshold may be considered to be a wall.”)
removing the gap region from the map (Afrouzi: Col. 54, line 64 – Col. 55, line 2: “In some embodiments, the processor identifies gaps in the map (e.g., due to areas blind to a sensor or a range of a sensor). In some embodiments, the processor may actuate the robot to move towards and investigates the gap, collecting observations and mapping new areas by adding new observations to the map until the gap is closed”; Col. 55, lines 40 – 56: “In some embodiments, a gap in the perimeters of the environment may be due to an opening in the wall (e.g., a doorway or an opening between two separate areas). In some embodiments, exploration of the undiscovered areas within which the gap is identified may lead to the discovery of a room, a hallway, or any other separate area. In some embodiments, identified gaps that are found to be, for example, an opening in the wall may be used in separating areas into smaller subareas. For example, the opening in the wall between two rooms may be used to segment the area into two subareas, where each room is a single subarea. This may be expanded to any number of rooms. In some embodiments, the processor of the robot may provide a unique tag to each subarea and may use the unique tag to order the subareas for coverage by the robot, choose different work functions for different subareas, add restrictions to subareas, set cleaning schedules for different subareas, and the like.”).
Claim(s) 12, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Afrouzi et al. (US 11274929 B1) and Park et al. (US 20210331315 A1) as applied to claim 1 above, and further in view of Dong et al. (CN114442618A).
Regarding claim 12, Afrouzi, as modified, teaches wherein, before the selecting each candidate navigation point from a map of a robot, the robot control method further comprises:
identifying a gap region in the map, wherein the gap region is a region whose entrance width is smaller than a preset width threshold; and (Afrouzi: Col. 56, lines 45 – 51: “In some embodiments, the processor may use a threshold to determine whether the data points considered indicate an opening in the wall when, for example, the error exceeds some threshold value. In some embodiments, the processor may use an adaptive threshold wherein the values below the threshold may be considered to be a wall.”)
removing the gap region from the map (Afrouzi: Col. 54, line 64 – Col. 55, line 2: “In some embodiments, the processor identifies gaps in the map (e.g., due to areas blind to a sensor or a range of a sensor). In some embodiments, the processor may actuate the robot to move towards and investigates the gap, collecting observations and mapping new areas by adding new observations to the map until the gap is closed”; Col. 55, lines 40 – 56: “In some embodiments, a gap in the perimeters of the environment may be due to an opening in the wall (e.g., a doorway or an opening between two separate areas). In some embodiments, exploration of the undiscovered areas within which the gap is identified may lead to the discovery of a room, a hallway, or any other separate area. In some embodiments, identified gaps that are found to be, for example, an opening in the wall may be used in separating areas into smaller subareas. For example, the opening in the wall between two rooms may be used to segment the area into two subareas, where each room is a single subarea. This may be expanded to any number of rooms. In some embodiments, the processor of the robot may provide a unique tag to each subarea and may use the unique tag to order the subareas for coverage by the robot, choose different work functions for different subareas, add restrictions to subareas, set cleaning schedules for different subareas, and the like.”).
Regarding claim 17, Afrouzi, as modified, teaches wherein when an exploration path between the candidate navigation point and the robot comprises bypassing long-side obstacles, the access status between the candidate navigation point and the robot is set as the blocked status; and (Afrouzi: Col. 38, line 65 – Col. 39, line 2: “The processor of robot 3300 determines distance 3310 by which robot 3300 travels forward by detection of an obstacle, such as wall 3311 or furniture or distance 3310 is predetermined. In FIG. 33E, robot 3300 then rotates another 180 degrees in direction 3308.”; Col. 37, lines 13 – 23: “The robot, in some embodiments, then moves in a forward direction (defined as the direction in which the sensor points, e.g., the centerline of the field of view of the sensor) by some first distance allowing the sensors to observe surroundings areas within the detection range as the robot moves. The processor, in some embodiments, determines the first forward distance of the robot by detection of an obstacle by a sensor, such as a wall or furniture, e.g., by making contact with a contact sensor or by bringing the obstacle closer than the maximum detection distance of the robot's sensor for mapping.”; Col. 37, lines 32 – 40: “In some embodiments, the processor may determine the second forward travel distance by detection of an obstacle by a sensor, such moving until a wall or furniture is within range of the sensor. In some embodiments, the second forward travel distance is predetermined or dynamically determined in the manner described above. In doing so, the sensors observe any remaining undiscovered areas from the first forward distance travelled across the environment as the robot returns back in the opposite direction.”,
Supplemental Note: the vehicle is able to identify a wall if it is in within the movement path of the robot. When detecting a wall, the robot is able to identify that the path is blocked as the travel path of the robot cannot travel past that obstacle)
when an exploration path between the candidate navigation point and the robot does not comprise the bypassing long-side obstacles, the access status between the candidate navigation point and the robot is set as the directly communicated status, (Afrouzi: Col. 37, line 58 – Col. 38, line 2: “in some embodiments, the robot is at one end of the environment, facing towards the open space. From here, the robot moves in a first forward direction (from the perspective of the robot as defined above) by some distance then rotates 90 degrees in a clockwise direction. The processor determines the first forward distance by which the robot travels forward by detection of an obstacle by a sensor, such as a wall or furniture. In some embodiments, the first forward distance is predetermined (e.g., and measured by another sensor, like an odometer or by integrating signals from an inertial measurement unit).”,
Supplemental Note: the travel path of the robot is toward an open space, i.e., a location in which the first forward direction does not intersect an obstacle).
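For illustration only, the blocked versus directly communicated determination described in this mapping can be modeled as a straight-line visibility test over an occupancy grid. This minimal Python sketch is a hypothetical model, not code from Afrouzi; all names are assumptions:

    # Illustrative sketch only. A candidate navigation point is
    # "directly communicated" if the straight segment from the robot
    # to the point crosses no occupied cell; otherwise the path must
    # bypass an obstacle and the point is "blocked".

    def access_status(grid, robot, candidate):
        (r0, c0), (r1, c1) = robot, candidate
        steps = max(abs(r1 - r0), abs(c1 - c0))
        for i in range(1, steps):
            t = i / steps
            r = round(r0 + t * (r1 - r0))
            c = round(c0 + t * (c1 - c0))
            if grid[r][c] == 1:          # occupied cell on the segment
                return "blocked"
        return "directly_communicated"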
In sum, Afrouzi teaches wherein when an exploration path between the candidate navigation point and the robot comprises bypassing long-side obstacles, the access status between the candidate navigation point and the robot is set as the blocked status; and when an exploration path between the candidate navigation point and the robot does not comprise the bypassing long-side obstacles, the access status between the candidate navigation point and the robot is set as the directly communicated status. Afrouzi, however, does not teach wherein compared with the candidate navigation point that is in the blocked status related to the robot, the candidate navigation point that is in the directly communicated status related to the robot is assigned with a higher first weight to perform priority exploration.
Dong teaches wherein compared with the candidate navigation point that is in the blocked status related to the robot, the candidate navigation point that is in the directly communicated status related to the robot is assigned with a higher first weight to perform priority exploration (Dong: Lines 88 – 115: “Preferably, the specific process of the step 4 is as follows: 1) Taking the current position of the mobile robot as the center, a circle with a radius of 20 grid lengths is equally divided into 12 sectors, the sector number is represented by s, s=0,1,...,12; then there are a total of 12 candidate travel directions between the current position of the robot and the stage target point; 2) The mobile robot scans the above 12 sectors, and assigns each grid in the sector a probability value P representing the characteristics of the obstacles it contains; 3) For each sector, calculate the density of obstacles D(s) in all grids covered by it:
D(s) = Σ_{p(i,j) ∈ s} P_ij
Among them, P_ij represents the obstacle feature probability value P contained in the grid p(i,j) whose coordinates are (i,j); the threshold TD is set, and when D(s) < TD, the sector s is selected as a candidate area; 4) Search for the most suitable moving direction V in all candidate regions, that is, the V sector with the smallest cost function as follows:
w(V) = μ1·Diff(V, Vtar) + μ2·Diff(V, Vcur)
Among them, w(V) represents the cost function of the V sector, Diff(V, Vtar) represents the angular difference between the moving direction V and the direction of the stage target point, and Diff(V, Vcur) represents the angular difference between the moving direction V and the robot's current traveling direction; the coefficients μ1 and μ2 both represent the weight ratio, and μ1 + μ2 = 1; 5) The mobile robot moves one step along the most suitable moving direction V, returns to 2.1, and repeats the above process until it reaches the stage target point.”,
Supplemental Note: the cost function is applied when an obstacle lies in the path of the robot and multiple candidate navigation points are available. The robot is further able to recognize whether an obstacle persists along those paths and selects the path with the smallest angular difference whose moving direction does not contact the obstacle).
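For illustration only, the sector-based selection step quoted above can be sketched directly from the quoted equations. In the following minimal Python sketch, the names sector_densities, best_direction, and the numeric values for TD, μ1, and μ2 are assumptions; grid scanning is abstracted away, so each of the 12 sector headings is assumed to already carry its summed obstacle probability D(s):

    import math

    # Keep sectors with obstacle density D(s) < TD as candidates, then
    # pick the direction V minimizing
    #     w(V) = mu1 * Diff(V, Vtar) + mu2 * Diff(V, Vcur),
    # with mu1 + mu2 = 1, per the quoted passage of Dong.

    def angle_diff(a, b):
        """Smallest absolute angular difference between two headings."""
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    def best_direction(sector_densities, v_tar, v_cur,
                       td=2.0, mu1=0.6, mu2=0.4):
        candidates = [v for v, d in sector_densities.items() if d < td]
        if not candidates:
            return None  # every sector too cluttered; the robot must replan
        return min(candidates,
                   key=lambda v: mu1 * angle_diff(v, v_tar)
                               + mu2 * angle_diff(v, v_cur))

    # Example: 12 sectors of 30 degrees, target at heading 0, robot
    # currently heading 0.5 rad; the least-cost unobstructed sector wins.
    sectors = {i * math.pi / 6: d
               for i, d in enumerate([0.1, 3.0, 0.2, 0.4, 5.0, 0.3,
                                      0.2, 0.1, 4.2, 0.6, 0.2, 0.1])}
    print(best_direction(sectors, v_tar=0.0, v_cur=0.5))  # prints 0.0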
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Afrouzi with the teachings of Dong, with a reasonable expectation of success. As discussed in claim 4, both Afrouzi and Dong teach autonomous cleaning robots that are able to sense and map their surroundings. Both robots are able to navigate independently while avoiding obstacles, and Dong teaches utilizing a cost function within an obstacle avoidance algorithm. One of ordinary skill in the art would find this method of Dong to be a simple substitution for the obstacle avoidance technique of Afrouzi, or the use of a known technique (a cost-function obstacle avoidance algorithm) to improve similar devices. For example, Dong teaches that the robot avoids an obstacle based on the shortest turning angle and movement distance needed to get around it, so implementing this technique in the robot of Afrouzi is a simple substitution for the existing obstacle avoidance technique. It may also improve the existing avoidance method: selecting the shortest path around an obstacle may increase the battery life of the robot, allowing it to perform longer cleaning tasks.
Regarding claim 18, Afrouzi, as modified, does not teach wherein a higher second weight is assigned to a candidate navigation point that is closer to the robot, and a lower second weight is assigned to a candidate navigation point that is farther from the robot.
Dong teaches wherein a higher second weight is assigned to a candidate navigation point that is closer to the robot, and a lower second weight is assigned to a candidate navigation point that is farther from the robot (Dong: Lines 88 – 115: “Preferably, the specific process of the step 4 is as follows: 1) Taking the current position of the mobile robot as the center, a circle with a radius of 20 grid lengths is equally divided into 12 sectors, the sector number is represented by s, s=0,1,...,12; then there are a total of 12 candidate travel directions between the current position of the robot and the stage target point; 2) The mobile robot scans the above 12 sectors, and assigns each grid in the sector a probability value P representing the characteristics of the obstacles it contains; 3) For each sector, calculate the density of obstacles D(s) in all grids covered by it:
D(s) = Σ_{p(i,j) ∈ s} P_ij
Among them, P_ij represents the obstacle feature probability value P contained in the grid p(i,j) whose coordinates are (i,j); the threshold TD is set, and when D(s) < TD, the sector s is selected as a candidate area; 4) Search for the most suitable moving direction V in all candidate regions, that is, the V sector with the smallest cost function as follows:
w(V) = μ1·Diff(V, Vtar) + μ2·Diff(V, Vcur)
Among them, w(V) represents the cost function of the V sector, Diff(V, Vtar) represents the angular difference between the moving direction V and the direction of the stage target point, and Diff(V, Vcur) represents the angular difference between the moving direction V and the robot's current traveling direction; the coefficients μ1 and μ2 both represent the weight ratio, and μ1 + μ2 = 1; 5) The mobile robot moves one step along the most suitable moving direction V, returns to 2.1, and repeats the above process until it reaches the stage target point.”,
Supplemental Note: the cost function is applied when an obstacle lies in the path of the robot and multiple candidate navigation points are available. The robot is further able to recognize whether an obstacle persists along those paths and selects the path with the smallest angular difference whose moving direction does not contact the obstacle).
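For illustration only, the distance-weighted prioritization recited in claim 18 can be modeled with a weight that decays with distance, so that nearer candidate navigation points are explored first. This minimal Python sketch is hypothetical and is not code from either reference; all names are assumptions:

    # Hypothetical illustration: a higher second weight is assigned to a
    # candidate navigation point closer to the robot, and a lower weight
    # to one farther away.

    def second_weight(robot, candidate):
        dr = candidate[0] - robot[0]
        dc = candidate[1] - robot[1]
        distance = (dr * dr + dc * dc) ** 0.5
        return 1.0 / (1.0 + distance)    # weight decays with distance

    candidates = [(2, 3), (10, 1), (4, 4)]
    ranked = sorted(candidates, key=lambda p: second_weight((0, 0), p),
                    reverse=True)
    print(ranked)  # nearest candidate first: [(2, 3), (4, 4), (10, 1)]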
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the invention disclosed by Afrouzi with the teachings of Dong, with a reasonable expectation of success. As discussed in claim 4, both Afrouzi and Dong teach autonomous cleaning robots that are able to sense and map their surroundings. Both robots are able to navigate independently while avoiding obstacles, and Dong teaches utilizing a cost function within an obstacle avoidance algorithm. One of ordinary skill in the art would find this method of Dong to be a simple substitution for the obstacle avoidance technique of Afrouzi, or the use of a known technique (a cost-function obstacle avoidance algorithm) to improve similar devices. For example, Dong teaches that the robot avoids an obstacle based on the shortest turning angle and movement distance needed to get around it, so implementing this technique in the robot of Afrouzi is a simple substitution for the existing obstacle avoidance technique. It may also improve the existing avoidance method: selecting the shortest path around an obstacle may increase the battery life of the robot, allowing it to perform longer cleaning tasks.
Response to Arguments
Applicant’s arguments, see section Discussion of Claim Rejections under 35 U.S.C. 102 and 103 of the REMARKS, filed 01/21/2026, with respect to the 35 U.S.C. 102 and 103 prior art rejections of claims 1 – 18, have been fully considered and are persuasive. Applicant argues that the prior art of Dong does not teach the amended claim limitations of claim 1. Examiner agrees, and Dong is not used to reject the amended claim limitations of claim 1. However, upon further consideration, a new ground(s) of rejection is made in view of Park et al. (US 20210331315 A1). Please see section Claim Rejections - 35 USC § 103 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVAM SHARMA whose telephone number is (703)756-1726. The examiner can normally be reached Monday-Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Bishop can be reached at 571-270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIVAM SHARMA/ Examiner, Art Unit 3665
/Erin D Bishop/ Supervisory Patent Examiner, Art Unit 3665