DETAILED ACTION
This is a first action on the merits. Claims 1-20, filed 06/25/2024, are pending and are being examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 06/25/2024 has been received. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Specification
The disclosure is objected to because of the following informalities:
[0010] and [0033]: Recite “simplized”, which is not a standard English word. Examiner suggests “simplified”.
[0029]: Recites “Furthermore, the sensor subunit 133 may further include other sensor for detecting the environment in which the mobile machine 100 is located, for example, an ultrasonic sensor and an infrared (IR) sensor”. Examiner suggests “sensors”.
[0033]: Recites “The global map Mg is inflated by assigning he cost value of each cell of the global map Mg is according to the inflation”. Examiner suggests changing “he” to “the” and removing the second “is”. Appropriate correction is required.
Claim Objections
Claim 1 is objected to because of the following informality: claim 1 recites the limitation “the costmap” (i.e., line 7). Since only one costmap is introduced in the claims (the global costmap), for examination purposes, every recitation of “the costmap” is interpreted as referring to “the global costmap”.
Claims 7, 8, and 14 contain the same informality with “the costmap”. Examiner suggests changing “the costmap” to “the global costmap” for consistency across the claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 7-9, and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (KR-20220058079-A) in view of Afrouzi et al. (US-20240310851-A1), hereinafter referred to as Park and Afrouzi, respectively.
Regarding claim 1, Park teaches a method for navigating a mobile machine (FIG. 3 autonomous driving robot 100) having a plurality of sensors (FIG. 3 two information collection means 110, 120), comprising:
obtaining a global map ([0015] The method comprises: receiving a global map indicating a drivable area and a non-drivable area of a space in which the robot is driving);
inflating the global map; and ([0015] …generating a global cost map by setting an inflation radius for the object in the global map based on object class information for each object in which an inflation radius is set for each object and object information on the non-drivable area)
in response to receiving a navigation task of the mobile machine ([0007] A step of receiving a destination location in a space in which the robot is driving…):
creating a global costmap for the mobile machine by performing a costmap creation process ([0042] In the embodiment of the present invention, a cost map is generated based on a global map and a local map in an autonomous driving robot as an example);
planning, based on the costmap, a path for navigating the mobile machine; and ([0027] …a self-driving robot that creates a path optimized for an environment using a hierarchical cost map according to an embodiment of the present invention)
navigating the mobile machine using the planned path ([0035] The robot determines the space without objects based on the cost map, sets a path to the destination, and drives);
wherein, the costmap creation process includes: receiving, from each of the sensors of the mobile machine, sensor data ([0048] And the robot (100) is equipped with two information collection means (110, 120) to generate a local map and a global map and to obtain object information);
[…] creating a local map […] ([0050] The local map generated by the robot (100) can be divided into a static object local map and a dynamic object local map depending on whether the object moves);
[…] inflating the local map; and ([0054] The robot (100) creates a global map, a static object local map, and a dynamic object local map, and then sets an inflation radius for each object displayed in each map to create a global cost map and an object cost map (e.g., a static object cost map and a dynamic object cost map));
creating the costmap for the mobile machine by fusing the inflated global map and the inflated local map ([0060] The robot (100) overlaps the generated cost maps to create a single hierarchical cost map; [0068] When a cost map is created for each object in this way, the robot (100) overlaps the cost maps to create a hierarchical cost map, which is a layered cost map, as shown in (d) of FIG. 4).
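For illustration only, the process mapped above (inflating a global map and a local map, then fusing them into a single costmap) can be sketched as follows. The grid values, the Chebyshev inflation radius, and the cell-wise maximum fusion rule are assumptions made for the sketch; they are not drawn from Park's disclosure.

```python
# Illustrative costmap inflation and fusion (assumed conventions, not Park's code).
# Grids are lists of lists; 100 = lethal obstacle cost, 0 = free space.

def inflate(grid, radius=1, inflated_cost=50):
    """Assign a lower, non-lethal cost to every cell within `radius`
    (Chebyshev distance) of an obstacle, leaving lethal cells untouched."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 100:
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols and out[rr][cc] < inflated_cost:
                            out[rr][cc] = inflated_cost
    return out

def fuse(map_a, map_b):
    """Fuse two inflated maps cell-wise by taking the maximum cost, so an
    obstacle present in either map survives in the combined costmap."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(map_a, map_b)]

global_map = [[0, 0,   0, 0],
              [0, 100, 0, 0],
              [0, 0,   0, 0]]
local_map  = [[0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 100]]

costmap = fuse(inflate(global_map), inflate(local_map))
```

A planner would then treat cells with cost 100 as impassable and penalize inflated cells, which is one common way the fused costmap described above could be consumed.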
Park does not explicitly teach: creating, based on the received sensor data from each of the sensors, a plurality of local sensor layers each corresponding to the received sensor data from each of the sensors; and creating a local map by integrating all the local sensor layers.
However, Afrouzi teaches creating, based on the received sensor data from each of the sensors, a plurality of local sensor layers each corresponding to the received sensor data from each of the sensors ([0943] In some embodiments, the global and local map may be updated with sensor events, such as bumper events, TSSP sensor events, safety events, TOF sensor events, edge events, etc; [0946] In some embodiments, more than one sensor providing various perceptions may be used to improve understanding of the environment and accuracy of the map; [1009] In some embodiments, a layer of a map may be a map generated based solely on the observations of a particular sensor type. For example, a map may include three layers and each layer may be a map generated based solely on the observations of a particular sensor type).
creating a local map by integrating all the local sensor layers ([1009] In some embodiments, maps of various layers may be superimposed vertically or horizontally, deterministically or probabilistically, and locally or globally. In some embodiments, a map may be horizontally filled with data from one (or one class of) sensor and vertically filled using data from a different sensor (or class of sensor); [1011] In some embodiments, the processor executes a series of procedures to generate layers of a map used to construct the map from stored values in memory).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify the local map as taught in Park to incorporate the teachings of Afrouzi to include creating, based on the received sensor data from each of the sensors, a plurality of local sensor layers each corresponding to the received sensor data from each of the sensors; and creating a local map by integrating all the local sensor layers, with a reasonable expectation of success since the use of sensor layers would have achieved the benefit of “improving understanding of the environment and accuracy of the map” (Afrouzi [0946]) and “avoiding blind spots” (Afrouzi [1008]).
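The per-sensor layering attributed to Afrouzi above can be illustrated with a minimal sketch. The sensor names and the set-union integration rule are hypothetical choices for the example, not Afrouzi's implementation.

```python
# Hypothetical sketch: one layer per sensor, integrated into one local map.
# Sensor names ("lidar", "camera") and the union rule are illustrative only.

def make_layer(detections):
    """A layer is simply the set of grid cells one sensor reports as occupied."""
    return set(detections)

def integrate_layers(*layers):
    """Integrate all sensor layers into a single local map: a cell is
    occupied if any sensor's layer marks it occupied (set union)."""
    local_map = set()
    for layer in layers:
        local_map |= layer
    return local_map

lidar_layer  = make_layer([(2, 3), (2, 4)])
camera_layer = make_layer([(2, 4), (5, 1)])

local_map = integrate_layers(lidar_layer, camera_layer)
```

Combining independent layers this way is one reading of Afrouzi's stated benefit of avoiding blind spots: an obstacle missed by one sensor can still enter the integrated map through another layer.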
Regarding claim 7, Park, as modified, teaches the method of claim 1.
Park also teaches wherein the costmap is a map having a plurality of cells each with a cost value with respect to obstacles (FIG. 1 typical cost map);
wherein planning, based on the created costmap, the path for navigating the mobile machine comprises: planning, according to the costs in the created costmap, the path for navigating the mobile machine while avoiding the obstacles ([0035] The robot determines the space without objects based on the cost map, sets a path to the destination, and drives).
Regarding claim 8, Park teaches a method for planning a path for navigating a mobile machine (FIG. 3 autonomous driving robot 100) having a plurality of sensors (FIG. 3 two information collection means 110, 120), comprising:
receiving, from each of the sensors of the mobile machine, sensor data ([0048] And the robot (100) is equipped with two information collection means (110, 120) to generate a local map and a global map and to obtain object information);
[…] inflating the local map ([0054] The robot (100) creates a global map, a static object local map, and a dynamic object local map, and then sets an inflation radius for each object displayed in each map to create a global cost map and an object cost map (e.g., a static object cost map and a dynamic object cost map));
creating a global costmap for the mobile machine by fusing an inflated global map and the inflated local map ([0060] The robot (100) overlaps the generated cost maps to create a single hierarchical cost map; [0068] When a cost map is created for each object in this way, the robot (100) overlaps the cost maps to create a hierarchical cost map, which is a layered cost map, as shown in (d) of FIG. 4);
planning, according to the costmap, the path for navigating the mobile machine; and ([0027] …a self-driving robot that creates a path optimized for an environment using a hierarchical cost map according to an embodiment of the present invention)
providing the planned path to the mobile machine for navigating the mobile machine using the planned path ([0035] The robot determines the space without objects based on the cost map, sets a path to the destination, and drives).
Park does not explicitly teach: creating, based on the received sensor data from each of the sensors, a plurality of local sensor layers each corresponding to the received sensor data from each of the sensors; and creating a local map by integrating all the local sensor layers.
However, Afrouzi teaches creating, based on the received sensor data from each of the sensors, a plurality of local sensor layers each corresponding to the received sensor data from each of the sensors ([0943] In some embodiments, the global and local map may be updated with sensor events, such as bumper events, TSSP sensor events, safety events, TOF sensor events, edge events, etc; [0946] In some embodiments, more than one sensor providing various perceptions may be used to improve understanding of the environment and accuracy of the map; [1009] In some embodiments, a layer of a map may be a map generated based solely on the observations of a particular sensor type. For example, a map may include three layers and each layer may be a map generated based solely on the observations of a particular sensor type).
creating a local map by integrating all the local sensor layers ([1009] In some embodiments, maps of various layers may be superimposed vertically or horizontally, deterministically or probabilistically, and locally or globally. In some embodiments, a map may be horizontally filled with data from one (or one class of) sensor and vertically filled using data from a different sensor (or class of sensor); [1011] In some embodiments, the processor executes a series of procedures to generate layers of a map used to construct the map from stored values in memory).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify the local map as taught in Park to incorporate the teachings of Afrouzi to include creating, based on the received sensor data from each of the sensors, a plurality of local sensor layers each corresponding to the received sensor data from each of the sensors; and creating a local map by integrating all the local sensor layers, with a reasonable expectation of success since the use of sensor layers would have achieved the benefit of “improving understanding of the environment and accuracy of the map” (Afrouzi [0946]) and “avoiding blind spots” (Afrouzi [1008]).
Regarding claim 9, Park, as modified, teaches the method of claim 8.
Park also teaches wherein the method is performed in response to receiving a navigation task of the mobile machine ([0007] A step of receiving a destination location in a space in which the robot is driving…).
Regarding claim 14, Park teaches a mobile machine (FIG. 3 autonomous driving robot 100), comprising:
one or more sensors (FIG. 3 two information collection means 110, 120);
one or more processors; and (FIG. 10 processor 310)
one or more memories storing a costmap module configured to be executed by the one or more processors, wherein the costmap module comprises a layer manager and an inflation manager, and the costmap module comprises instructions to (FIG. 10 storage device 370):
receive, from each of the sensors of the mobile machine, sensor data ([0048] And the robot (100) is equipped with two information collection means (110, 120) to generate a local map and a global map and to obtain object information);
create, using the layer manager based on the received sensor data from each of the sensors, a plurality of local sensor layers each corresponding to the received sensor data from each of the sensors;
create, using the layer manager, a local map […] ([0050] The local map generated by the robot (100) can be divided into a static object local map and a dynamic object local map depending on whether the object moves);
inflate, using the inflation manager, the local map ([0054] The robot (100) creates a global map, a static object local map, and a dynamic object local map, and then sets an inflation radius for each object displayed in each map to create a global cost map and an object cost map (e.g., a static object cost map and a dynamic object cost map));
create a global costmap for the mobile machine by fusing an inflated global map and the inflated local map ([0060] The robot (100) overlaps the generated cost maps to create a single hierarchical cost map; [0068] When a cost map is created for each object in this way, the robot (100) overlaps the cost maps to create a hierarchical cost map, which is a layered cost map, as shown in (d) of FIG. 4);
plan, according to the costmap, a path for navigating the mobile machine; and ([0027] …a self-driving robot that creates a path optimized for an environment using a hierarchical cost map according to an embodiment of the present invention)
provide the planned path to the mobile machine for navigating the mobile machine using the planned path ([0035] The robot determines the space without objects based on the cost map, sets a path to the destination, and drives).
Park does not explicitly teach: create, using the layer manager based on the received sensor data from each of the sensors, a plurality of local sensor layers each corresponding to the received sensor data from each of the sensors; create, using the layer manager, a local map by integrating all the created local sensor layers; inflate, using the inflation manager, the local map.
However, Afrouzi teaches create, using the layer manager based on the received sensor data from each of the sensors, a plurality of local sensor layers each corresponding to the received sensor data from each of the sensors ([0943] In some embodiments, the global and local map may be updated with sensor events, such as bumper events, TSSP sensor events, safety events, TOF sensor events, edge events, etc; [0946] In some embodiments, more than one sensor providing various perceptions may be used to improve understanding of the environment and accuracy of the map; [1009] In some embodiments, a layer of a map may be a map generated based solely on the observations of a particular sensor type. For example, a map may include three layers and each layer may be a map generated based solely on the observations of a particular sensor type).
create, using the layer manager, a local map by integrating all the created local sensor layers; inflate, using the inflation manager, the local map ([1009] In some embodiments, maps of various layers may be superimposed vertically or horizontally, deterministically or probabilistically, and locally or globally. In some embodiments, a map may be horizontally filled with data from one (or one class of) sensor and vertically filled using data from a different sensor (or class of sensor); [1011] In some embodiments, the processor executes a series of procedures to generate layers of a map used to construct the map from stored values in memory).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify the local map as taught in Park to incorporate the teachings of Afrouzi to include create, using the layer manager based on the received sensor data from each of the sensors, a plurality of local sensor layers each corresponding to the received sensor data from each of the sensors; create, using the layer manager, a local map by integrating all the created local sensor layers; inflate, using the inflation manager, the local map, with a reasonable expectation of success since the use of sensor layers would have achieved the benefit of “improving understanding of the environment and accuracy of the map” (Afrouzi [0946]) and “avoiding blind spots” (Afrouzi [1008]).
Regarding claim 15, Park, as modified, teaches the mobile machine of claim 14.
Park also teaches wherein the costmap module is triggered to execute by the one or more processors in response to receiving a navigation task of the mobile machine ([0007] A step of receiving a destination location in a space in which the robot is driving…).
Regarding claim 16, Park, as modified, teaches the mobile machine of claim 14.
Park also teaches wherein the costmap module further comprises instructions to: obtain the global map ([0015] The method comprises: receiving a global map indicating a drivable area and a non-drivable area of a space in which the robot is driving);
inflate the global map; and ([0015] …generating a global cost map by setting an inflation radius for the object in the global map based on object class information for each object in which an inflation radius is set for each object and object information on the non-drivable area)
navigate the mobile machine using the planned path ([0035] The robot determines the space without objects based on the cost map, sets a path to the destination, and drives).
Claims 2, 4, 10, 12, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Afrouzi, and further in view of Suthar et al. (US-20240419183-A1), hereinafter referred to as Suthar.
Regarding claim 2, Park, as modified, teaches the method of claim 1.
Park also teaches further comprising: obtaining, based on the received sensor data from each of the sensors, dynamic obstacle information; and recording the obtained dynamic obstacle information of obstacles in the local map that corresponds to the received sensor data at a current time frame ([0050] The local map generated by the robot (100) can be divided into a static object local map and a dynamic object local map depending on whether the object moves; [0051] And when the robot recognizes objects such as people or animals moving in the area where it is driving, it reflects the location and identification information of the objects in the dynamic object local map)
and […] a pose of the mobile machine at the current time frame ([0050] And the robot (100) can determine its own location based on the local map; [0100] And the coordinates of the object are converted into a coordinate system so that they can be used in the SLAM system of the robot (100)).
Park, as modified, does not explicitly teach “transform coordinates” representing a pose of the mobile machine at the current time frame.
However, Suthar teaches “transform coordinates” representing a pose of the mobile machine at the current time frame ([0064] The 2D point registrations may be added to a pose-graph, which may be a graphical representation of the odometry poses at each lidar measurement and their relationships, where nodes represent the poses and edges represent the spatial constraints (i.e., 2D transformations between the poses of corresponding nodes); [0065] As subsequent scans and readings arrive at the main pipeline 306, a number “N” of the poses (nodes) in the pose-graph may be used together in a chain to look for loop closure).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify Park, as modified, to incorporate the teachings of Suthar to include transform coordinates representing a pose of the mobile machine at the current time frame, with a reasonable expectation of success to allow for accurate pose estimation and odometry pose corrections (Suthar [0070]).
Regarding claim 4, Park, as modified, teaches the method of claim 2.
Park, as modified, also teaches wherein creating the local map by integrating all the local sensor layers comprises: creating the local map by integrating all the local sensor layers (see the rejection of claim 1, citing Afrouzi [1009] and [1011], which teach construction of the local map based on the sensor layers).
Park does not explicitly teach also integrating the recorded dynamic obstacle information corresponding to the received sensor data at previous time frames before the current time frame and the transform coordinates of the previous time frames.
However, Suthar teaches also integrating the recorded dynamic obstacle information corresponding to the received sensor data at previous time frames before the current time frame ([0091] Similarly, the local occupancy module may filter the point cloud with real-world coordinates and semantic label classes, keeping the semantically relevant points needed for local path planning and obstacle avoidance; [0180] global/local map update 5614 data may be stored as observations, current robot state, current object state, and sensor data 5616. The observations, current robot state, current object state, and sensor data 5616 may be used by the robotic control system 5500 of the robot 5300 in determining navigation paths and task strategies)
and the transform coordinates of the previous time frames ([0064] The 2D point registrations may be added to a pose-graph, which may be a graphical representation of the odometry poses at each lidar measurement and their relationships, where nodes represent the poses and edges represent the spatial constraints (i.e., 2D transformations between the poses of corresponding nodes); [0065] As subsequent scans and readings arrive at the main pipeline 306, a number “N” of the poses (nodes) in the pose-graph may be used together in a chain to look for loop closure).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify Park, as modified, to incorporate the teachings of Suthar to include integrating the recorded dynamic obstacle information corresponding to the received sensor data at previous time frames before the current time frame and the transform coordinates of the previous time frames, with a reasonable expectation of success to allow for accurate pose estimation and odometry pose corrections (Suthar [0070]).
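The pose-graph transforms attributed to Suthar above amount to expressing a previously recorded obstacle in the current robot frame via the poses of the two frames. The following sketch uses standard 2D rigid-body transforms; the pose and obstacle values are made-up numbers and the frame conventions are assumptions for the example, not Suthar's implementation.

```python
import math

# Illustrative 2D rigid transform of an obstacle recorded at a previous
# time frame into the current robot frame. Poses are (x, y, theta) in the
# world frame; all numeric values are invented for the sketch.

def to_world(pose, point):
    """Transform a point from the robot frame at `pose` into the world frame."""
    x, y, th = pose
    px, py = point
    return (x + px * math.cos(th) - py * math.sin(th),
            y + px * math.sin(th) + py * math.cos(th))

def to_robot(pose, point):
    """Transform a world-frame point into the robot frame at `pose`."""
    x, y, th = pose
    dx, dy = point[0] - x, point[1] - y
    return (dx * math.cos(th) + dy * math.sin(th),
            -dx * math.sin(th) + dy * math.cos(th))

prev_pose    = (0.0, 0.0, 0.0)          # pose when the obstacle was recorded
current_pose = (1.0, 0.0, math.pi / 2)  # pose at the current time frame

obstacle_prev = (2.0, 0.0)              # obstacle in the previous robot frame
obstacle_now  = to_robot(current_pose, to_world(prev_pose, obstacle_prev))
```

Chaining `to_world` and `to_robot` through each stored pose is one way recorded dynamic obstacle information from earlier frames could be re-expressed consistently as the robot moves.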
Regarding claim 10, Park, as modified, teaches the method of claim 8.
Park also teaches further comprising: obtaining, based on the received sensor data from each of the sensors, dynamic obstacle information; and recording the obtained dynamic obstacle information of obstacles in the local map that corresponds to the received sensor data at a current time frame ([0050] The local map generated by the robot (100) can be divided into a static object local map and a dynamic object local map depending on whether the object moves; [0051] And when the robot recognizes objects such as people or animals moving in the area where it is driving, it reflects the location and identification information of the objects in the dynamic object local map)
and […] a pose of the mobile machine at the current time frame ([0050] And the robot (100) can determine its own location based on the local map; [0100] And the coordinates of the object are converted into a coordinate system so that they can be used in the SLAM system of the robot (100)).
Park, as modified, does not explicitly teach “transform coordinates” representing a pose of the mobile machine at the current time frame.
However, Suthar teaches “transform coordinates” representing a pose of the mobile machine at the current time frame ([0064] The 2D point registrations may be added to a pose-graph, which may be a graphical representation of the odometry poses at each lidar measurement and their relationships, where nodes represent the poses and edges represent the spatial constraints (i.e., 2D transformations between the poses of corresponding nodes); [0065] As subsequent scans and readings arrive at the main pipeline 306, a number “N” of the poses (nodes) in the pose-graph may be used together in a chain to look for loop closure).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify Park, as modified, to incorporate the teachings of Suthar to include transform coordinates representing a pose of the mobile machine at the current time frame, with a reasonable expectation of success to allow for accurate pose estimation and odometry pose corrections (Suthar [0070]).
Regarding claim 12, Park, as modified, teaches the method of claim 10.
Park, as modified, also teaches wherein creating the local map by integrating all the local sensor layers comprises: creating the local map by integrating all the local sensor layers (see the rejection of claim 8, citing Afrouzi [1009] and [1011], which teach construction of the local map based on the sensor layers).
Park does not explicitly teach also integrating the recorded dynamic obstacle information corresponding to the received sensor data at previous time frames before the current time frame and the transform coordinates of the previous time frames.
However, Suthar teaches also integrating the recorded dynamic obstacle information corresponding to the received sensor data at previous time frames before the current time frame ([0091] Similarly, the local occupancy module may filter the point cloud with real-world coordinates and semantic label classes, keeping the semantically relevant points needed for local path planning and obstacle avoidance; [0180] global/local map update 5614 data may be stored as observations, current robot state, current object state, and sensor data 5616. The observations, current robot state, current object state, and sensor data 5616 may be used by the robotic control system 5500 of the robot 5300 in determining navigation paths and task strategies)
and the transform coordinates of the previous time frames ([0064] The 2D point registrations may be added to a pose-graph, which may be a graphical representation of the odometry poses at each lidar measurement and their relationships, where nodes represent the poses and edges represent the spatial constraints (i.e., 2D transformations between the poses of corresponding nodes); [0065] As subsequent scans and readings arrive at the main pipeline 306, a number “N” of the poses (nodes) in the pose-graph may be used together in a chain to look for loop closure).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify Park, as modified, to incorporate the teachings of Suthar to include integrating the recorded dynamic obstacle information corresponding to the received sensor data at previous time frames before the current time frame and the transform coordinates of the previous time frames, with a reasonable expectation of success to allow for accurate pose estimation and odometry pose corrections (Suthar [0070]).
Regarding claim 17, Park, as modified, teaches the mobile machine of claim 14.
Park also teaches wherein the costmap module further comprises a memory manager, and further comprises instructions to: obtain, based on the received sensor data from each of the sensors, dynamic obstacle information; and record, using the memory manager, the obtained dynamic obstacle information of obstacles in the local map that corresponds to the received sensor data at a current time frame ([0050] The local map generated by the robot (100) can be divided into a static object local map and a dynamic object local map depending on whether the object moves; [0051] And when the robot recognizes objects such as people or animals moving in the area where it is driving, it reflects the location and identification information of the objects in the dynamic object local map)
and […] a pose of the mobile machine at the current time frame ([0050] And the robot (100) can determine its own location based on the local map; [0100] And the coordinates of the object are converted into a coordinate system so that they can be used in the SLAM system of the robot (100)).
Park, as modified, does not explicitly teach “transform coordinates” representing a pose of the mobile machine at the current time frame.
However, Suthar teaches “transform coordinates” representing a pose of the mobile machine at the current time frame ([0064] The 2D point registrations may be added to a pose-graph, which may be a graphical representation of the odometry poses at each lidar measurement and their relationships, where nodes represent the poses and edges represent the spatial constraints (i.e., 2D transformations between the poses of corresponding nodes); [0065] As subsequent scans and readings arrive at the main pipeline 306, a number “N” of the poses (nodes) in the pose-graph may be used together in a chain to look for loop closure).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify Park, as modified, to incorporate the teachings of Suthar to include transform coordinates representing a pose of the mobile machine at the current time frame, with a reasonable expectation of success to allow for accurate pose estimation and odometry pose corrections (Suthar [0070]).
Regarding claim 19, Park, as modified, teaches the mobile machine of claim 17.
Park, as modified, also teaches wherein creating the local map by integrating all the local sensor layers comprises: creating, using the layer manager, the local map by integrating all the local sensor layers (see the rejection of claim 14, citing Afrouzi [01009], [01011], which teach the construction of the local map based on the sensor layers).
Park, as modified, does not explicitly teach also integrating the recorded dynamic obstacle information corresponding to the received sensor data at previous time frames before the current time frame and the transform coordinates of the previous time frames.
However, Suthar teaches also integrating the recorded dynamic obstacle information corresponding to the received sensor data at previous time frames before the current time frame ([0091] Similarly, the local occupancy module may filter the point cloud with real-world coordinates and semantic label classes, keeping the semantically relevant points needed for local path planning and obstacle avoidance; [0180] global/local map update 5614 data may be stored as observations, current robot state, current object state, and sensor data 5616. The observations, current robot state, current object state, and sensor data 5616 may be used by the robotic control system 5500 of the robot 5300 in determining navigation paths and task strategies)
and the transform coordinates of the previous time frames ([0064] The 2D point registrations may be added to a pose-graph, which may be a graphical representation of the odometry poses at each lidar measurement and their relationships, where nodes represent the poses and edges represent the spatial constraints (i.e., 2D transformations between the poses of corresponding nodes); [0065] As subsequent scans and readings arrive at the main pipeline 306, a number “N” of the poses (nodes) in the pose-graph may be used together in a chain to look for loop closure).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify Park, as modified, to incorporate the teachings of Suthar to include integrating the recorded dynamic obstacle information corresponding to the received sensor data at previous time frames before the current time frame and the transform coordinates of the previous time frames, with a reasonable expectation of success to allow for accurate pose estimation and odometry pose corrections (Suthar [0070]).
Claims 3, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Park, in view of Afrouzi, in further view of Takahashi et al. (US-20200241554-A1), hereinafter referred to as Takahashi.
Regarding claim 3, Park, as modified, teaches the method of claim 2.
Park, as modified, does not explicitly teach further comprising: discarding the recorded dynamic obstacle information of the obstacles that is beyond at least one of a field of view of the mobile machine and a specified number of previous time frames before the current time frame.
However, Takahashi teaches further comprising: discarding the recorded dynamic obstacle information of the obstacles that is beyond at least one of a field of view of the mobile machine and a specified number of previous time frames before the current time frame ([0085] The system integrates output of the distance sensor 110 into the environmental map 500, determines whether to erase or hold information in accordance with a distance from the mobile robot 100 and elapsed time, and updates the environmental map 500. As illustrated in FIG. 8, the processing performed in the system includes five steps of trimming of the outside of a certain distance range (Step S10), map integration (Step S12), update of a visible region (Step S14), update of an invisible region (Step S16), and composition and output of a map (Step S18)).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify Park, as modified, to incorporate the teachings of Takahashi to include further comprising: discarding the recorded dynamic obstacle information of the obstacles that is beyond at least one of a field of view of the mobile machine and a specified number of previous time frames before the current time frame, with a reasonable expectation of success since doing so would have achieved the benefit of releasing memory/storage, as “information that has been acquired earlier has low accuracy, and unnecessarily compresses the storage area” (Takahashi [0086]).
Regarding claim 11, Park, as modified, teaches the method of claim 10.
Park, as modified, does not explicitly teach further comprising: discarding the recorded dynamic obstacle information of the obstacles that is beyond at least one of a field of view of the mobile machine and a specified number of previous time frames before the current time frame.
However, Takahashi teaches further comprising: discarding the recorded dynamic obstacle information of the obstacles that is beyond at least one of a field of view of the mobile machine and a specified number of previous time frames before the current time frame ([0085] The system integrates output of the distance sensor 110 into the environmental map 500, determines whether to erase or hold information in accordance with a distance from the mobile robot 100 and elapsed time, and updates the environmental map 500. As illustrated in FIG. 8, the processing performed in the system includes five steps of trimming of the outside of a certain distance range (Step S10), map integration (Step S12), update of a visible region (Step S14), update of an invisible region (Step S16), and composition and output of a map (Step S18)).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify Park, as modified, to incorporate the teachings of Takahashi to include further comprising: discarding the recorded dynamic obstacle information of the obstacles that is beyond at least one of a field of view of the mobile machine and a specified number of previous time frames before the current time frame, with a reasonable expectation of success since doing so would have achieved the benefit of releasing memory/storage, as “information that has been acquired earlier has low accuracy, and unnecessarily compresses the storage area” (Takahashi [0086]).
Regarding claim 18, Park, as modified, teaches the mobile machine of claim 17.
Park, as modified, does not explicitly teach wherein the costmap module further comprises instructions to: discard, using the memory manager, the recorded dynamic obstacle information of the obstacles that is beyond at least one of a field of view of the mobile machine and a specified number of previous time frames before the current time frame.
However, Takahashi teaches wherein the costmap module further comprises instructions to: discard, using the memory manager, the recorded dynamic obstacle information of the obstacles that is beyond at least one of a field of view of the mobile machine and a specified number of previous time frames before the current time frame ([0085] The system integrates output of the distance sensor 110 into the environmental map 500, determines whether to erase or hold information in accordance with a distance from the mobile robot 100 and elapsed time, and updates the environmental map 500. As illustrated in FIG. 8, the processing performed in the system includes five steps of trimming of the outside of a certain distance range (Step S10), map integration (Step S12), update of a visible region (Step S14), update of an invisible region (Step S16), and composition and output of a map (Step S18)).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify Park, as modified, to incorporate the teachings of Takahashi to include wherein the costmap module further comprises instructions to: discard, using the memory manager, the recorded dynamic obstacle information of the obstacles that is beyond at least one of a field of view of the mobile machine and a specified number of previous time frames before the current time frame, with a reasonable expectation of success since doing so would have achieved the benefit of releasing memory/storage, as “information that has been acquired earlier has low accuracy, and unnecessarily compresses the storage area” (Takahashi [0086]).
Claims 5, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Park, in view of Afrouzi, in further view of Liu et al. (CN-118189934-A), hereinafter referred to as Liu.
Regarding claim 5, Park, as modified, teaches the method of claim 1.
Park, as modified, does not explicitly teach further comprising: adjusting a size of the local map according to a velocity of the mobile machine so that the size is proportional to the velocity.
However, Liu teaches further comprising: adjusting a size of the local map according to a velocity of the mobile machine so that the size is proportional to the velocity ([0032] The size of the robot's perception space is determined by the size of the map area updated each time the 2D map is updated. The size of the map area updated each time is determined by the robot's task. The robot's travel speed when performing the task is positively correlated with the size of the map area updated each time. That is, the greater the travel speed, the larger the size of the map area updated each time).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify the size of the local map taught in Park, as modified, to incorporate the teachings of Liu to include further comprising: adjusting a size of the local map according to a velocity of the mobile machine so that the size is proportional to the velocity, with a reasonable expectation of success since “this ensures that the update range of the 2D map matches the robot's travel speed, thereby improving the accuracy of path planning” (Liu [0032]).
Regarding claim 13, Park, as modified, teaches the method of claim 8.
Park, as modified, does not explicitly teach further comprising: adjusting a size of the local map according to a velocity of the mobile machine so that the size is proportional to the velocity.
However, Liu teaches further comprising: adjusting a size of the local map according to a velocity of the mobile machine so that the size is proportional to the velocity ([0032] The size of the robot's perception space is determined by the size of the map area updated each time the 2D map is updated. The size of the map area updated each time is determined by the robot's task. The robot's travel speed when performing the task is positively correlated with the size of the map area updated each time. That is, the greater the travel speed, the larger the size of the map area updated each time).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify the size of the local map taught in Park, as modified, to incorporate the teachings of Liu to include further comprising: adjusting a size of the local map according to a velocity of the mobile machine so that the size is proportional to the velocity, with a reasonable expectation of success since “this ensures that the update range of the 2D map matches the robot's travel speed, thereby improving the accuracy of path planning” (Liu [0032]).
Regarding claim 20, Park, as modified, teaches the mobile machine of claim 14.
Park, as modified, does not explicitly teach wherein the costmap module further comprises instructions to: adjust a size of the local map according to a velocity of the mobile machine so that the size is proportional to the velocity.
However, Liu teaches wherein the costmap module further comprises instructions to: adjust a size of the local map according to a velocity of the mobile machine so that the size is proportional to the velocity ([0032] The size of the robot's perception space is determined by the size of the map area updated each time the 2D map is updated. The size of the map area updated each time is determined by the robot's task. The robot's travel speed when performing the task is positively correlated with the size of the map area updated each time. That is, the greater the travel speed, the larger the size of the map area updated each time).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to modify the size of the local map taught in Park, as modified, to incorporate the teachings of Liu to include wherein the costmap module further comprises instructions to: adjust a size of the local map according to a velocity of the mobile machine so that the size is proportional to the velocity, with a reasonable expectation of success since “this ensures that the update range of the 2D map matches the robot's travel speed, thereby improving the accuracy of path planning” (Liu [0032]).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Park, in view of Afrouzi, in further view of Shen et al. (CN-113741438-A), hereinafter referred to as Shen.
Regarding claim 6, Park, as modified, teaches the method of claim 1.
Park also teaches wherein the global map is a static map corresponding to a facility, and obtaining the global map comprises: obtaining, based on the static map, static obstacle information; and creating, based on the obtained static obstacle information, the global map ([0045] Here, the global map is a map initially created by the robot (100) as it drives throughout the space. The global map is expressed as a display means that can distinguish between areas where the robot (100) can drive and areas where it cannot drive (e.g., walls, pillars, etc.) within the space).
Park does not explicitly teach the global map is “pre-built”.
However, Shen teaches the interchangeability of a robot receiving a pre-built global map or a robot generating a global map before the robot performs a task ([0037] In step S100, a static cost map corresponding to the preset space is generated and maintained. For example, when creating the initial static cost map, the robot can be driven to inspect a preset space determined by the user and pre-build a basic static map according to the 3D laser SLAM (Simultaneous Localization and Mapping) algorithm, or obtain the basic static map from the network or input by the user through an application. The basic static map built by the application can be displayed intuitively on a smart device).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the present claimed invention to substitute the pre-built global map taught in Shen for the global map generated by the robot taught in Park, as modified, because it has been held that the substitution of one known element for another would have been obvious if the substitution yielded predictable results to one of ordinary skill in the art at the time of the invention. In this case, both global maps would have had the predictable result of mapping the environment.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US-20240168480-A1: Varadarajan discloses autonomous mapping by a mobile robot.
US-20250207940-A1: Chan discloses incorporation of historical scans when generating a map.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVIN SEOL whose telephone number is (571) 272-6488. The examiner can normally be reached on Monday-Friday 9:00 a.m. to 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jelani Smith can be reached on (571) 270-3969. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVIN SEOL/Examiner, Art Unit 3662