Prosecution Insights
Last updated: April 19, 2026
Application No. 18/393,728

ENHANCED NAVIGATION, LOCALIZATION, AND PATH PLANNING FOR AUTONOMOUS ROBOTS

Status: Non-Final OA (§103)
Filed: Dec 22, 2023
Examiner: STIEBRITZ, NOAH WILLIAM
Art Unit: 3658
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Intel Corporation
OA Round: 3 (Non-Final)
Grant Probability: 67% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 6m
Grant Probability With Interview: 51%

Examiner Intelligence

Career Allow Rate: 67%, above average (12 granted / 18 resolved; +14.7% vs TC avg)
Interview Lift: -15.6% (minimal; based on resolved cases with interview)
Typical Timeline: 2y 6m avg prosecution; 44 applications currently pending
Career History: 62 total applications across all art units

Statute-Specific Performance

§101: 18.6% (-21.4% vs TC avg)
§103: 61.7% (+21.7% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)

Tech Center averages are estimates; based on career data from 18 resolved cases.

Office Action

§103
DETAILED ACTION

This is a non-final Office Action on the merits in response to communications filed by Applicant on January 15th, 2026. Claims 1-20 are currently pending and examined below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendments to the Claims, filed on December 29th, 2025, have been entered. Claims 1, 10, and 19 are currently amended and pending, claim 4 is as previously presented and pending, and claims 2-3, 5-9, 11-18, and 20 are original, unamended, and pending.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2.
Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 1-7 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2022/0276654 A1 ("Lee") in view of CN 113721608 A ("Zhang") in further view of US 12103540 B2 ("Slobodyanyuk") in further view of US 10304237 B2 ("Sequeira"). Regarding claim 1, Lee teaches a device comprising a processor configured to (Lee: ¶ 0039, “ Further, the control logic of the present disclosure may be embodied as non-transitory computer readable media on a computer readable medium containing executable program instructions executed by a processor, controller or the like. Examples of computer readable media include, but are not limited to, ROM, RAM, compact disc (CD)-ROMs, magnetic tapes, floppy disks, flash drives, smart cards and optical data storage devices. The computer readable medium can also be distributed in network coupled computer systems so that the computer readable media is stored and executed in a distributed fashion, e.g., by a telematics server or a Controller Area Network (CAN).”): obtain an occupancy grid associated with an environment around a robot (Lee: Figure 2, ¶ 0051, “As an example of a cost map in which a cost is allocated to each area by the cost map generation module 100, as shown in FIG. 2, the highest cost of 10 is allocated to an area occupied by an obstacle, and a decreased cost is allocated as it is further away from the area occupied by the obstacle.”. 
One of ordinary skill in the art would see that the map in Figure 2 and the cited passage is clearly a type of occupancy map.), wherein the occupancy grid comprises grid points of potential destinations for the robot (Lee: Figure 2, ¶ 0047, “In this case, the information receiver 110 may receive map information on a space to which the mobile robot is to move, and obstacle information transmitted from a sensor device 12 provided in a mobile robot 10 and obtain a space for generation of a cost map and data on an avoidance target.”); determine, for each grid point of the grid points of potential destinations, a weight for the grid point based on a distance to the grid point from a predefined reference point (Lee: ¶ 0069, “In this case, the cost function factor determiner 310 may select various cost function factors or a combination of at least two or more cost function factors according to a driving condition requested during the movement of the mobile robot to reach the destination.”, ¶ 0070, “Accordingly, the cost function factor determiner 310 may select and determine, as a cost function factor for path selection, one of various cost function factors, such as a shortest distance, minimum cumulative rotation, a direction coincidence with the previous path, a direction coincidence with a direction pointed by the robot, a direction coincidence with an immediately-previous driving signal of the robot driver, cost map stability, cost map congestion, or the like, or a combination of at least two or more cost function factors.”. As can clearly be seen from the cited passages, the system is configured to use at least two factors when determining the cost, one of which is the distance between the robot and the points.)
and based on a directional deviation to the grid point, wherein the directional deviation comprises an angular difference between a current heading of the robot and an angular direction from the predefined reference point toward the grid point (Lee: ¶ 0070, “Accordingly, the cost function factor determiner 310 may select and determine, as a cost function factor for path selection, one of various cost function factors, such as a shortest distance, minimum cumulative rotation, a direction coincidence with the previous path, a direction coincidence with a direction pointed by the robot, a direction coincidence with an immediately-previous driving signal of the robot driver, cost map stability, cost map congestion, or the like, or a combination of at least two or more cost function factors.”. As can be seen from the cited passage the angular deviation between the travel direction of the robot and the point can be determined in various ways, such as determining if the point is coincident with the travel direction of the robot or by determining the rotation required to travel to the point so as to minimize the total rotation.); wherein the weight is based on a distance (Lee: ¶ 0070, “Accordingly, the cost function factor determiner 310 may select and determine, as a cost function factor for path selection, one of various cost function factors, such as a shortest distance, minimum cumulative rotation, a direction coincidence with the previous path, a direction coincidence with a direction pointed by the robot, a direction coincidence with an immediately-previous driving signal of the robot driver, cost map stability, cost map congestion, or the like, or a combination of at least two or more cost function factors.”. 
As can be seen, the weight can be based on the distance of the path.); select, based on the weight, a target point from among the grid points (Lee: ¶ 0068, “In addition, the optimal path selection module 300 may include a cost function factor determiner 310 that determines a cost function factor for selecting a path from among the “n” multiple paths generated by the multi-path generation module, and a path selector 320 that selects a path to a destination according to the cost function factor determined by the cost function factor determiner 310 among the “n” multiple paths, as an optimal path to be followed by the mobile robot.”. As can be seen from the cited passage, the system selects the path with the lowest cost, wherein the cost is determined by applying factors to different criteria. One of ordinary skill in the art would see that the factors for each point are naturally considered through this method.); generate a movement instruction associated with moving the robot toward the target point (Lee: ¶ 0107, “The path control step (S400) may include generating a control signal for allowing the mobile robot to turn left or right, or drive straight such that the mobile robot reaches a destination along an optimal path selected by the optimal path selection step and transmitting the control signal to a robot driver provided in the mobile robot.”); and control the robot to move from location to location toward the target point according to the movement instruction and the occupancy grid (Lee: ¶ 0095, “The multi-path generation step (S200) may include a multi-waypoint detection process (S210) of detecting, as "n" multiple waypoints, areas located on the maximum straight line distance without collision with an obstacle in "n" directions (n is an integer greater than or equal to 2) centered on the mobile robot located on the cost map and a multi-path planning process (S220) of planning "n" multiple paths to reach a destination by avoiding the obstacle while passing 
through each of the multiple waypoints as an initial waypoint.”, ¶ 0102, “In addition, the optimal path selection step (S300) may include a cost function factor determination process (S310) of determining a cost function factor for selecting a path from among the "n" multiple paths generated by the multi-path generation module, and a path selection process (S320) of selecting a path to a destination according to the cost function factor determined by the cost function factor determiner 310 among the "n" multiple paths, as an optimal path to be followed by the mobile robot.”, ¶ 0107, “The path control step (S400) may include generating a control signal for allowing the mobile robot to turn left or right, or drive straight such that the mobile robot reaches a destination along an optimal path selected by the optimal path selection step and transmitting the control signal to a robot driver provided in the mobile robot.”. The cited passages clearly teach that the robot is configured to move from waypoint to waypoint along its path according to the generated instruction. Furthermore, the cited passages clearly show that this path is generated based on an occupancy map.). Lee does not teach improve the occupancy grid by combining a current scan of the environment around the robot with at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the current scan; wherein the weight is based on a comparison of the distance to a predefined distance criterion, and wherein the predefined distance criterion is based on a maximum scan range of a sensor for scanning the environment. 
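The claimed weighting scheme that Lee is mapped against (a per-grid-point weight combining distance from a predefined reference point with the angular deviation between the robot's heading and the direction toward the point, then selecting the lowest-weight target) can be sketched as follows. This is an illustrative reconstruction of the claim language only, not code from the application or from Lee; the linear combination and the factors `w_dist` and `w_ang` are assumptions.

```python
import math

def grid_point_weight(point, reference, heading, w_dist=1.0, w_ang=1.0):
    """Weight a candidate grid point by its distance from a reference
    point and by the angular deviation between the robot's current
    heading and the direction toward the point (lower is better)."""
    dx, dy = point[0] - reference[0], point[1] - reference[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx)  # direction from reference toward the point
    # angular difference normalized into [0, pi]
    deviation = abs((bearing - heading + math.pi) % (2 * math.pi) - math.pi)
    return w_dist * distance + w_ang * deviation

def select_target(grid_points, reference, heading):
    """Select the grid point with the lowest combined weight as the target."""
    return min(grid_points,
               key=lambda p: grid_point_weight(p, reference, heading))
```

A point one meter directly ahead would beat a point of equal distance behind the robot, since the latter carries a full pi radians of deviation.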
Zhang, in the same field of endeavor, teaches wherein the weight is based on a comparison of the distance to a predefined distance criterion (Zhang: ¶ 0005, “In a first aspect, the present invention provides a robot local path planning method, comprising the following steps: obtaining the global path of the robot from the initial position to the target position; the global path is the set of the global path point; sampling the global path point to obtain the global sampling point; the each global sampling point is translated at least two times in the direction vertical to the global path; each translation is one time; then obtaining a group of local sampling point; wherein the translation distance is in the preset translation distance threshold range; performing curve fitting to each group of local sampling point respectively so as to obtain a plurality of initial local path; obtaining the obstacle information, and evaluating each of the initial local path according to the obstacle information, and filter a local path.”, As can be seen from the cited passage, only the points that fall within the preset translation range of the robot are used to obtain a path.). 
Lee teaches a device comprising a processor configured to: obtain an occupancy grid associated with an environment around a robot, wherein the occupancy grid comprises grid points of potential destinations for the robot; determine, for each grid point of the grid points of potential destinations, a weight for the grid point based on a distance to the grid point from a predefined reference point and based on a directional deviation to the grid point, wherein the directional deviation comprises an angular difference between a current heading of the robot and an angular direction from the predefined reference point toward the grid point; wherein the weight is based on a distance; select, based on the weight, a target point from among the grid points; generate a movement instruction associated with moving the robot toward the target point; and control the robot to move from location to location toward the target point according to the movement instruction and the occupancy grid. Lee does not teach wherein the weight is based on a comparison of the distance to a predefined distance criterion. Zhang teaches wherein the weight is based on a comparison of the distance to a predefined distance criterion. A person of ordinary skill in the art would have had the technological capabilities required to have modified the device taught in Lee with wherein the weight is based on a comparison of the distance to a predefined distance criterion taught in Zhang. Furthermore, the device taught in Lee is already configured to determine the weight based on a distance, so modifying the device such that the weight is based on a comparison to a predefined criterion would not change or introduce new functionality. No inventive effort would have been required. The combination would have yielded the predictable result of a device wherein the weight is based on a comparison of the distance to a predefined distance criterion.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the device taught in Lee with wherein the weight is based on a comparison of the distance to a predefined distance criterion taught in Zhang with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results. Lee in view of Zhang does not teach improve the occupancy grid by combining a current scan of the environment around the robot with at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the current scan; wherein the predefined distance criterion is based on a maximum scan range of a sensor for scanning the environment. Slobodyanyuk, in the same field of endeavor, teaches wherein the predefined distance criterion is based on a maximum scan range of a sensor for scanning the environment (Slobodyanyuk: Column 10 lines 1-20, “For example, device 200 may include means for receiving point data associated with a cell of an occupancy grid for controlling a vehicle; means for determining, based on the point data, a characteristic of the cell that is associated with an occupancy probability of the cell, wherein the occupancy probability is determined according to a first technique based on the point data; and means for configuring, based on the characteristic, the occupancy probability for the cell, within the occupancy grid, according to a second technique; or the like.”, Column 16 lines 23-33, “As shown by reference number 345, the ECU 112 generates one or more occupancy grid frames. An occupancy grid frame may be generated based at least in part on the occupancy grid.
In some aspects, the one or more occupancy grid frames might be generated based on converting the occupancy probability of a cell to a first value or a second value. The first value may indicate that the cell corresponds to an area on the map at which an object is located (e.g., an occupied cell). The second value may indicate that the cell corresponds to an area on the map at which an object is not located (e.g., an unoccupied cell).”, Column 16 lines 45-49, “As shown by reference number 350, the ECU 112 (e.g., the controller 320) controls the vehicle 110 according to the occupancy grid. The ECU 112 may identify free space that the vehicle 110 may occupy (e.g., travel through) based at least in part on the occupancy grid.”, Column 20 lines 53-58, “In a fourth aspect, alone or in combination with one or more of the first through third aspects, process 600 involves determining the occupancy probability for a frame associated with the point data and reducing the occupancy probability for the cell when the cell is no longer within range of the scanner.”, Column 21 lines 9-14, “In an eighth aspect, alone or in combination with one or more of the first through seventh aspects, process 600 includes performing, based on the configured occupancy probability, an action associated with controlling the vehicle using the occupancy grid with the occupancy probability for the cell configured according to the second technique.”. The cited passages teach a method of controlling a vehicle using an occupancy grid of the vehicle’s environment. The method is configured to determine the probability that a cell in the grid is occupied and controls the vehicle based on the probability that the cell is occupied. This probability, as shown in Column 20 lines 53-58, can be determined based on maximum range of the scanner. 
One of ordinary skill in the art would recognize that, because the probabilities are used to determine the actions of a vehicle, they perform a similar function to a weight, as those cells with a lower probability are more likely to get chosen than those with a higher probability. Therefore, the cited passages clearly teach wherein the predefined distance criterion is based on a maximum scan range of a sensor for scanning the environment). Lee in view of Zhang teaches a device for controlling a robot wherein the weight is based on a comparison of the distance to a predefined distance criterion. Lee in view of Zhang does not teach wherein the predefined distance criterion is based on a maximum scan range of a sensor for scanning the environment. Slobodyanyuk teaches wherein the predefined distance criterion is based on a maximum scan range of a sensor for scanning the environment. A person of ordinary skill in the art would have had the technological capabilities required to have modified the device taught in Lee in view of Zhang with wherein the predefined distance criterion is based on a maximum scan range of a sensor for scanning the environment taught in Slobodyanyuk. Furthermore, the device taught in Lee in view of Zhang is already configured to determine the weight based on a comparison to a predefined distance criterion, so modifying the device with the criterion taught in Slobodyanyuk would only require a simple substitution of one known criterion for another. Additionally, a person of ordinary skill in the art would have had knowledge of the maximum range of sensors and would have had the capabilities required to implement the consideration of the maximum range of a sensor. Such a modification would not have changed or introduced new functionality. No inventive effort would have been required. The combination would have required the simple substitution of one known criterion for another.
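As a concrete illustration of the criterion at issue in this combination, a distance criterion tied to a sensor's maximum scan range might look like the following sketch. The normalization and the infinite penalty for out-of-range points are illustrative assumptions, not limitations taught by Slobodyanyuk.

```python
def distance_criterion_weight(distance, max_scan_range):
    """Compare a candidate point's distance against a criterion derived
    from the sensor's maximum scan range: a point the sensor cannot
    observe cannot be confirmed free, so it is penalized such that it is
    never preferred over an in-range candidate."""
    if distance > max_scan_range:
        return float("inf")           # beyond sensor range: unverifiable
    return distance / max_scan_range  # in-range: normalized weight in [0, 1]
```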
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the device taught in Lee in view of Zhang with wherein the predefined distance criterion is based on a maximum scan range of a sensor for scanning the environment taught in Slobodyanyuk with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have required the simple substitution of one known criterion for another. Lee in view of Zhang in further view of Slobodyanyuk does not teach improve the occupancy grid by combining a current scan of the environment around the robot with at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the current scan; Sequeira, in the same field of endeavor, teaches improve the occupancy grid by combining a current scan of the environment around the robot with at least one prior scan of the environment (Sequeira: Column 2 line 66 – Column 3 line 31, “To achieve this object, the present invention proposes, in a first aspect, a method for constructing a 3D reference map of an environment useable in (a method for) real-time mapping, localization and/or change analysis, comprising the following steps: acquiring (3D) scanner data of the environment with a mobile real-time laser range scanner at a rate of at least 5 frames (i.e.
5 point clouds), preferably at least 10 frames per second, constructing, using the (3D) scanner data for each of a plurality of poses of the laser range scanner, each pose having an associated point cloud defined by the scanner data, a map presentation, the map presentation having a data structure configured for random sample access thereto in constant time, fast nearest neighbor search and scalability over large areas, and building, using the map presentation, the 3D reference map for the environment using a 3D Simultaneous Localization And Mapping (3D SLAM) framework, said building comprising using an odometer module estimating a current pose of the laser range scanner for each point cloud based on the registration of the (last) point cloud to the local map presentation, using a local trajectory optimization module refining the trajectory of a (sub)set of point clouds in order to minimize the drift in the local map presentation, and performing offline a global trajectory optimization by reconstructing an entire map of the environment (preferably by using the entire set of point clouds) taking advantage of (or taking into account) loop closures, thereby forming said 3D reference map.”, Column 9 lines 32-34, “The odometer is typically performed in real-time.”, Column 10 lines 9-21, “In one preferred aspect, the present invention proposes an optimization method or framework for Simultaneous Localization And Mapping (SLAM) that properly models the acquisition process in a scanning-while-moving scenario. Each measurement is correctly reprojected in the map reference frame by considering a continuous time trajectory which is defined as the linear interpolation of a discrete set of control poses in SE E3. The invention also proposes a particularly efficient data structure that makes use of a hybrid sparse voxelized representation, allowing large map management. 
Thanks to this the inventors were also able to perform global optimization over trajectories, resetting the accumulated drift when loops are performed.”, Column 13 lines 32-37, “A preferred optimization framework of the invention is composed by two consecutive modules: an odometer that estimates the pose of each cloud given a map and a local trajectory optimizer that refines the trajectory of a set of clouds. Both modules employ the map data structure as described herein to handle the growing map.”, Column 13 lines 45-58, “The input of such a framework is a set of 3D point clouds {C_i} produced with the data streamed by the sensor (in case of a Velodyne scanner, the point cloud is generated after a complete revolution of the sensor). Each point cloud C_i is composed by a set of points P={p_j}, j=1 ... N_P, a set of relative timestamps T={t_j} and a set of normal unit vectors N={n_j}. Relative timestamps are assigned such that the first point produced has timestamp 0 and the last one has 1. Normal unit vectors may be estimated with the unconstrained least square formulation proposed in H. Badino, D. Huber, Y. Park, and T. Kanade, "Fast and accurate computation of surface normals from range images," in ICRA, 2011, taking advantage of box filtering on the point cloud grid structure.”. One of ordinary skill in the art would recognize that the cited passages describe a process by which a 3D map of an environment of a robot is updated. The cited passages clearly describe that the prior scan of the environment (i.e. the scan corresponding to relative timestamp 0) is position adjusted using odometer data to update a current scan of the environment (i.e.
the scan data corresponding to relative timestamp 1).), wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the current scan (Sequeira: Column 13 lines 60-64, “Initially, one needs to produce a first estimate of the sensor's trajectory by recovering the pose of each point cloud. Since the sensor is moving, one considers as representative pose of the cloud the sensor pose when the last point is received.”, Column 13 line 65 – Column 14 line 2, “One performs a point-plane ICP between a subset of points of the last received cloud and the map. Like in F. Moosmann and C. Stiller, "Velodyne SLAM," in IVS, 2011, the selected points of the cloud are unwarped by considering the last estimated motion before performing the registration.”, Column 14 lines 34-41, “This module takes as input the list of clouds with their associated poses RCodo and performs a trajectory refinement by employing a local window approach. When the distance between the first and the last pose in the list is larger than a threshold, cloud poses are optimized and a new list RCREF={[C,, I'*REF]}, i=l: Ne is produced with the refined pose and the input' clouds. Notice that this step properly integrates the unwarping in the optimization.”, Column 15 lines 37-56, “Once the optimization is terminated, the list RCREF can be updated with the optimized poses. Then, the entire set of points and normals of the clouds are converted into world coordinates according to Equation 3 and then added to the map M. At this stage one takes advantage of the efficient strategy to update the local map described in section A.2.: before adding points, one firstly deletes from the map all points that are further than a given radius from the last trajectory pose and then one adds the transformed clouds from RCREF. Once the map is updated a new kd-tree is created on the resulting points to allow subsequent nearest neighbor searches. 
The list RCodo is cleared and the odometer guess for the next cloud registration is updated according to the last two poses of RCREF. The proposed formulation represents an adherent description of the real sensor model, which acquires points while moving: point transformations in world coordinates involve both initial and final poses of each cloud. Moreover, the estimation of each pose (apart the first and the last) is directly influenced by two clouds.”. The cited passages clearly show that the prior scan (i.e. the scan corresponding to relative timestamp 0) is position adjusted based on the odometer data. If the position adjusted prior scan differs from the current scan (i.e. the scan corresponding to relative timestamp 1) above a threshold, the poses of each point in the point cloud of the two scans are optimized. These optimized poses are then used to update the 3D map of the area. One of ordinary skill in the art would clearly see that this teaches a method of improving the occupancy grid by combining a current scan of the environment around the robot with at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the current scan. Please see Columns 14-15 for the equations used to derive the updated map based on the position adjusted scan data.). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the device taught in Lee in view of Zhang in further view of Slobodyanyuk with improve the occupancy grid by combining a current scan of the environment around the robot with at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the current scan taught in Sequeira with a reasonable expectation of success.
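The position adjustment attributed to Sequeira (re-expressing a prior scan in the current scan's frame using odometry, then fusing the two scans into one grid) can be sketched in two dimensions as follows. The planar rigid transform and the simple list fusion are illustrative assumptions for clarity; Sequeira's actual method operates on 3D point clouds with continuous-time trajectory optimization.

```python
import math

def forward_project(prior_scan, delta_pose):
    """Re-express a prior scan in the current scan's sensor frame.
    delta_pose = (dx, dy, dtheta) is the robot motion (e.g. from
    odometry) between the prior scan and the current scan."""
    dx, dy, dth = delta_pose
    c, s = math.cos(-dth), math.sin(-dth)
    adjusted = []
    for x, y in prior_scan:
        # translate into the current robot position, then undo the
        # heading change accumulated since the prior scan
        tx, ty = x - dx, y - dy
        adjusted.append((c * tx - s * ty, s * tx + c * ty))
    return adjusted

def fuse(current_scan, adjusted_prior):
    """Combine the current scan with the position-adjusted prior scan."""
    return list(current_scan) + adjusted_prior
```

For example, a prior point one meter ahead of a robot that has since driven one meter forward maps to the robot's current origin, where the current scan would also observe it.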
One of ordinary skill in the art would have been motivated to make this modification because such a method of updating the map of an environment allows for the minimization and correction of drift in the map representation of the region. One of ordinary skill in the art would recognize that minimizing drift in a map provides a more accurate map (Sequeira: Column 2 line 66 – Column 3 line 31, “using a local trajectory optimization module refining the trajectory of a (sub)set of point clouds in order to minimize the drift in the local map presentation”, Column 18 line 43 – Column 19 line 7, “The inventors compared their system with the publicly available Velodyne SLAM [F. Moosmarm and C. Stiller, "Velodyne SLAM," in IVS, 2011] that also performs a motion compensation on the acquired point clouds. To compare the two systems the inventors measured drift accumulated using the outdoor car dataset. Since the same location is revisited multiple times, they estimated drift by registering the generated initial local map with the one generated at each subsequent passage. The translation and orientation components of the registration transform aligning the current local map to the initial one indicate how much drift has been accumulated. One of the salient characteristics of [F. Moosmann and C. Stiller, "Velodyne SLAM," in IVS, 2011] is the presence of a map refinement strategy ( called adaption) based on a set of heuristic tests that positively influence the trajectory estimation. Since the present system is focused on the optimization strategy by proper modeling the problem, the inventors disabled this feature in the original work to focus the analysis on the trajectory estimation. Results after each loop are shown in Table I. It can be noticed that one accumulates less drift than the original work. Moreover the present system is a natural formulation of the problem that requires less configuration parameters than the heuristic strategies of the Velodyne SLAM. 
Performance of the present system is superior to the Velodyne SLAM system both in terms of execution time and in the ability of maintaining a global map of the environment, while in the original work only a local map is maintained. The ability of using the global map has been confirmed, in case of the use of loop closure and the global optimization technique to correct the drift accumulated in the first loop and the use of the global map for the next loops.”). Regarding claim 2, Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira teaches wherein the processor is further configured to determine the distance based on a travel path to the grid point from a current position of the robot (¶ 0070, “Accordingly, the cost function factor determiner 310 may select and determine, as a cost function factor for path selection, one of various cost function factors, such as a shortest distance, minimum cumulative rotation, a direction coincidence with the previous path, a direction coincidence with a direction pointed by the robot, a direction coincidence with an immediately-previous driving signal of the robot driver, cost map stability, cost map congestion, or the like, or a combination of at least two or more cost function factors.”, ¶ 0074 “First, as shown in FIG. 5, when a driving condition requested from the mobile robot is to minimize a moving distance of the robot, the cost function factor determiner 310 may determine the shortest distance as a cost function factor and the path selector 320 may select path 1 having the shortest distance to the destination among the eight multiple paths generated by the multi-path generation module 200 as the optimal path.”. As can be seen from the cited passage, the distance is clearly based on the path to be traveled.). 
Regarding claim 3, Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira teaches wherein the processor is further configured to determine the travel path based on an occupancy characterization associated with each grid point in the occupancy grid along the travel path (Lee: Figure 2, ¶ 0051, “As an example of a cost map in which a cost is allocated to each area by the cost map generation module 100, as shown in FIG. 2, the highest cost of 10 is allocated to an area occupied by an obstacle, and a decreased cost is allocated as it is further away from the area occupied by the obstacle.”). Regarding claim 4, Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira teaches wherein the weight is based on a comparison of the distance to the predefined distance criterion (Zhang: ¶ 0005, “In a first aspect, the present invention provides a robot local path planning method, comprising the following steps: obtaining the global path of the robot from the initial position to the target position; the global path is the set of the global path point; sampling the global path point to obtain the global sampling point; the each global sampling point is translated at least two times in the direction vertical to the global path; each translation is one time; then obtaining a group of local sampling point; wherein the translation distance is in the preset translation distance threshold range; performing curve fitting to each group of local sampling point respectively so as to obtain a plurality of initial local path; obtaining the obstacle information, and evaluating each of the initial local path according to the obstacle information, and filter a local path.”. As can be seen from the cited passage, only the points that fall within the preset translation range of the robot are used to obtain a path.), wherein the predefined distance criterion is based on a physical dimension of the robot (Zhang: ¶ 0012, “In one embodiment, the 
preset translation distance threshold range is robot width to the robot width plus 0.5 m.”). Regarding claim 5, Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira teaches wherein the processor is further configured to determine the weight based on a first weighting factor associated with the distance (Lee: ¶ 0070, “Accordingly, the cost function factor determiner 310 may select and determine, as a cost function factor for path selection, one of various cost function factors, such as a shortest distance, minimum cumulative rotation, a direction coincidence with the previous path, a direction coincidence with a direction pointed by the robot, a direction coincidence with an immediately-previous driving signal of the robot driver, cost map stability, cost map congestion, or the like, or a combination of at least two or more cost function factors.”. As can be seen, the weight can be based on the distance of the path.) and based on a second weighting factor associated with the directional deviation (Lee: ¶ 0070, “Accordingly, the cost function factor determiner 310 may select and determine, as a cost function factor for path selection, one of various cost function factors, such as a shortest distance, minimum cumulative rotation, a direction coincidence with the previous path, a direction coincidence with a direction pointed by the robot, a direction coincidence with an immediately-previous driving signal of the robot driver, cost map stability, cost map congestion, or the like, or a combination of at least two or more cost function factors.”. 
As can be seen from the cited passage, the angular deviation between the travel direction of the robot and the point can be determined in various ways, such as determining if the point is coincident with the travel direction of the robot or by determining the rotation required to travel to the point so as to minimize the total rotation.), wherein the second weighting factor is based on the comparison of the distance to the predefined distance criterion (Zhang: ¶ 0005, “In a first aspect, the present invention provides a robot local path planning method, comprising the following steps: obtaining the global path of the robot from the initial position to the target position; the global path is the set of the global path point; sampling the global path point to obtain the global sampling point; the each global sampling point is translated at least two times in the direction vertical to the global path; each translation is one time; then obtaining a group of local sampling point; wherein the translation distance is in the preset translation distance threshold range; performing curve fitting to each group of local sampling point respectively so as to obtain a plurality of initial local path; obtaining the obstacle information, and evaluating each of the initial local path according to the obstacle information, and filter a local path.”. As can be seen from the cited passage, only the points that fall within the preset translation range of the robot are used to obtain a path.). 
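The claim 4/5 combination, a weight built from a distance factor and a directional-deviation factor gated by a distance criterion tied to the robot's width, can be sketched as follows. This is an illustrative composite of the cited teachings, not the claimed formula; the function name, weighting coefficients, and gating rule are hypothetical (the width + 0.5 m threshold mirrors Zhang's ¶ 0012):

```python
import math

def grid_point_weight(robot_pos, robot_heading, point, robot_width,
                      w_distance=1.0, w_deviation=0.5):
    """Weight a candidate grid point by travel distance and angular deviation.

    The deviation factor is applied only when the point lies within the
    distance criterion derived from the robot's width (robot width + 0.5 m,
    echoing Zhang's preset translation threshold) -- an assumed gating rule
    for illustration only.
    """
    dx, dy = point[0] - robot_pos[0], point[1] - robot_pos[1]
    distance = math.hypot(dx, dy)
    # Signed angular difference between heading and direction to the point,
    # wrapped into (-pi, pi], then taken as a magnitude.
    deviation = abs((math.atan2(dy, dx) - robot_heading + math.pi)
                    % (2 * math.pi) - math.pi)
    threshold = robot_width + 0.5
    if distance <= threshold:
        return w_distance * distance + w_deviation * deviation
    return w_distance * distance  # deviation factor dropped outside the criterion

# Point straight ahead, within the criterion: only the distance term contributes.
w = grid_point_weight((0.0, 0.0), 0.0, (1.0, 0.0), robot_width=0.6)
```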
Regarding claim 6, Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira teaches wherein the processor is further configured to include a grid point in the grid points of potential destinations based on whether the grid point is a predefined distance to a boundary defined by unexplored points in the occupancy grid in relation to explored points in the occupancy grid or based on whether the grid point is a predefined distance to an occupied point in the occupancy grid (Lee: Figure 2, ¶ 0050, “In addition, the cost allocator 120 may allocate a relatively high cost as an area resulting from division of the space is closer to the obstacle, and a relatively low cost as an area resulting from division of the space is further away from the obstacle, and then generate a cost map by storing a cost value thereof along with map information and obstacle information. Accordingly, the determination as to which area of areas is to be included in a path and which area of areas is to be avoided may be achieved merely by comparing the costs allocated to the areas with each other.”, ¶ 0051, “As an example of a cost map in which a cost is allocated to each area by the cost map generation module 100, as shown in FIG. 2, the highest cost of 10 is allocated to an area occupied by an obstacle, and a decreased cost is allocated as it is further away from the area occupied by the obstacle.”, ¶ 0052, “Accordingly, in FIG. 2, a cost map is generated in such a way that the cost linearly decreases as the distance from the area occupied by the obstacle increases and the lowest cost of 0 is allocated to the farthest area.”. As can be seen from the cited passages, the cost decreases linearly out to a set distance from the obstacle.). 
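The linearly decaying cost map of Lee's FIG. 2 and ¶¶ 0050–0052 (cost 10 on obstacle cells, falling linearly to 0 with distance) can be illustrated with a small sketch; the grid dimensions, obstacle coordinates, and falloff rate below are hypothetical:

```python
import math

def build_cost_map(width, height, obstacles, max_cost=10.0, falloff=1.0):
    """Build a cost map in the spirit of Lee's FIG. 2: the highest cost is
    allocated to obstacle cells, and cost decreases linearly with distance
    to the nearest obstacle, bottoming out at 0."""
    cost = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Brute-force nearest-obstacle distance (fine for a small grid)
            d = min(math.hypot(x - ox, y - oy) for (ox, oy) in obstacles)
            cost[y][x] = max(0.0, max_cost - falloff * d)
    return cost

# Single obstacle at the center of a 5x5 grid
cmap = build_cost_map(5, 5, obstacles=[(2, 2)])
```

Path selection then reduces to comparing allocated costs, as the cited ¶ 0050 explains: cells with lower cost are preferred and high-cost cells near obstacles are avoided.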
Regarding claim 7, Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira teaches wherein the processor is further configured to determine the weight for each grid point based on whether the grid point is reachable by the robot (Lee: ¶ 0060, “In addition, the multi-waypoint detector 210 may set, as an area where there is no collision with an obstacle, an area allocated a cost of no greater than 8, which is lower than a cost of 10 indicating that a cost allocated to each area represents an area occupied by an obstacle on the cost map and is lower than a cost of 9 indicating that the cost allocated to each area represents a nearest area to the obstacle on the cost map, to avoid an area with collision with the obstacle. In this case, the multi-waypoint detector 210 may change and set a cost for determining an area without collision with obstacles to a value lower than or higher than 8.”, ¶ 0061, “Accordingly, as shown in FIG. 3, the multi-waypoint detector 210 may detect, as multiple waypoints, areas located at the maximum distance, which are reached by passing through only areas with a cost of 8 or less on the cost map in eight directions within a certain radius from the mobile robot (indicated by a large circle around the mobile robot in FIG. 3) as shown in FIG. 3.”, ¶ 0062, “FIG. 3 shows that waypoint 1 to waypoint 8 are detected in the eight directions, and each waypoint detected as described above may mean an initial waypoint to which the mobile robot is able to move from a current position in each direction without colliding with an obstacle.”. As can be seen from the cited passages, the grid points the robot uses when determining its travel path are only those that the robot can reach without colliding with an object.). Claim(s) 8 is/are rejected under 35 U.S.C. 
103 as being unpatentable over US 2022/0276654 A1 ("Lee") in view of CN 113721608 A ("Zhang") in further view of US 12103540 B2 ("Slobodyanyuk") in further view of US 10304237 B2 ("Sequeira") in further view of US 2020/0003901 A1 ("Shroff"). Regarding claim 8, Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira teaches wherein the occupancy grid is associated with a first resolution defined by a first dimension of the grid points (Lee: ¶ 0045, “The cost map generation module 100 may include an information receiver 110 that receives and stores map information in which a space in which the mobile robot is movable is divided into a plurality of areas and information on an obstacle located in the space, and a cost allocator 120 that differentially allocates a cost according to how far each area resulting from division of the space is from an obstacle and then generates a cost map by matching the costs to the areas respectively.”, ¶ 0047, “In this case, the information receiver 110 may receive map information on a space to which the mobile robot is to move, and obstacle information transmitted from a sensor device 12 provided in a mobile robot 10 and obtain a space for generation of a cost map and data on an avoidance target.”), Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira does not teach wherein the processor is further configured to: align a current location of the robot to a sampling point of sensor data indicative of the environment; and resample the occupancy grid into a coarse occupancy grid associated with a second resolution defined by a second dimension of resampled grid points, wherein the second dimension is larger than the first dimension. 
Shroff, in the same field of endeavor, teaches wherein the processor is further configured to: align a current location of the robot to a sampling point of sensor data indicative of the environment (Shroff: ¶ 0028, “For example, the vehicle 106 can compare sensor data with map data (e.g., the map data 114) to determine a location of the vehicle 106 in the environment.”); and resample the occupancy grid into a coarse occupancy grid associated with a second resolution defined by a second dimension of resampled grid points (Shroff: ¶ 0029, “At operation 122, the process can include loading, based at least in part on a second distance between a third location associated with the vehicle and the second location associated with the region, a second map at a second resolution.”, ¶ 0044, “In some examples, the one or more maps 228 can include the resolution component 230 and/or the semantic component 232. In some examples, the resolution component 230 can store, for each region of an environment, multiple maps associated with different resolutions or levels of detail. As illustrated in FIG. 1, the region 112 may be associated with the first map data 114 (e.g., a low-resolution map) and the second map data 138 (e.g., a high-resolution map). Of course, the resolution component 230 can store any number of maps associated with a region, and is not limited to one, two, or even three different resolution maps. That is, the resolution component 230 can store, for a particular region in an environment, low-resolution map data, medium-resolution map data, high-resolution map data, highest-resolution map data, and the like.”), wherein the second dimension is larger than the first dimension (Shroff: ¶ 0044, “In some examples, the one or more maps 228 can include the resolution component 230 and/or the semantic component 232. 
In some examples, the resolution component 230 can store, for each region of an environment, multiple maps associated with different resolutions or levels of detail. As illustrated in FIG. 1, the region 112 may be associated with the first map data 114 (e.g., a low-resolution map) and the second map data 138 (e.g., a high-resolution map). Of course, the resolution component 230 can store any number of maps associated with a region, and is not limited to one, two, or even three different resolution maps. That is, the resolution component 230 can store, for a particular region in an environment, low-resolution map data, medium-resolution map data, high-resolution map data, highest-resolution map data, and the like.”). The only difference between the prior art and the claimed invention is that the prior art does not combine the device with the method of resampling the occupancy map to a coarse resolution associated with a second dimension larger than the first. A person of ordinary skill in the art would have had the technological capabilities required to have combined the device taught in Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira with the method of resampling the occupancy map to a coarse resolution associated with a second dimension larger than the first taught in Shroff. Furthermore, even though Shroff teaches going from a lower resolution map to a higher resolution map, one of ordinary skill in the art would easily be able to modify the method to perform the reverse. Additionally, the device taught in Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira already teaches the use of an occupancy map with a first resolution, so modifying the device to be able to resample the map to one with a second resolution would not change or introduce new functionality. No inventive effort would have been required. 
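The claim 8 limitation of resampling a fine occupancy grid into a coarse grid whose cells have a larger dimension can be sketched as a simple block downsampling. The function name and the "occupied if any covered fine cell is occupied" aggregation rule below are assumptions chosen for illustration (a conservative choice); neither is taken from Shroff or the other references:

```python
def resample_to_coarse(grid, factor):
    """Downsample a fine occupancy grid (0 = free, 1 = occupied) into a
    coarse grid whose cells are `factor` times larger in each dimension.
    A coarse cell is marked occupied if any fine cell it covers is occupied."""
    h, w = len(grid), len(grid[0])
    coarse = []
    for cy in range(0, h, factor):
        row = []
        for cx in range(0, w, factor):
            block = [grid[y][x]
                     for y in range(cy, min(cy + factor, h))
                     for x in range(cx, min(cx + factor, w))]
            row.append(1 if any(block) else 0)
        coarse.append(row)
    return coarse

fine = [
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 0, 0, 0],
]
coarse = resample_to_coarse(fine, 2)  # 4x4 fine grid -> 2x2 coarse grid
```

The coarse cell dimension is `factor` times the fine cell dimension, i.e., the second dimension is larger than the first, matching the direction of resampling recited in the claim.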
The combination would have yielded the predictable result of a device that can resample an occupancy map with a first resolution to one with a second resolution. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the device taught in Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira with the method of resampling the occupancy map to a coarse resolution associated with a second dimension larger than the first taught in Shroff with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because it would have yielded predictable results. Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2022/0276654 A1 ("Lee") in view of CN 113721608 A ("Zhang") in further view of US 12103540 B2 ("Slobodyanyuk") in further view of US 10304237 B2 ("Sequeira") in further view of US 9908432 B2 ("Park"). Regarding claim 9, Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira does not teach wherein the processor is further configured to, if a next waypoint along the travel path from a current position of the robot has a non-traversable occupancy characterization, provide a reverse instruction to the robot, wherein the reverse instruction indicates that the robot is to reverse along a previously traveled path used to arrive at the current position, wherein the reverse instruction indicates that the robot is to reverse along the traveled path without regard to an occupancy characterization of the grid points in the occupancy grid that are along the previously traveled path. 
Park, in the same field of endeavor, teaches wherein the processor is further configured to, if a next waypoint along the travel path from a current position of the robot has a non-traversable occupancy characterization, provide a reverse instruction to the robot (Park: Column 18 lines 31-37, “When a determination result in S326 is that the robot cleaner 1 is not lifted and thus stuck, the controller 220 drives the second driving motor 160 to decrease an overall height of the robot cleaner 1, and the robot cleaner 1 travels in a reverse direction to escape from being stuck (S340). Subsequently, the controller 220 proceeds to S330 and then performs subsequent operations.”. One of ordinary skill in the art would see that if the robot gets stuck, the current position is non-traversable), wherein the reverse instruction indicates that the robot is to reverse along a previously traveled path used to arrive at the current position (Park: Column 18 lines 31-37, “When a determination result in S326 is that the robot cleaner 1 is not lifted and thus stuck, the controller 220 drives the second driving motor 160 to decrease an overall height of the robot cleaner 1, and the robot cleaner 1 travels in a reverse direction to escape from being stuck (S340). Subsequently, the controller 220 proceeds to S330 and then performs subsequent operations.”. 
One of ordinary skill in the art would see that, because the robot is described as travelling in a reverse direction, the robot follows the path used to arrive at the location.), wherein the reverse instruction indicates that the robot is to reverse along the traveled path without regard to an occupancy characterization of the grid points in the occupancy grid that are along the previously traveled path (Park: Figure 17B, Column 18 lines 31-37, “When a determination result in S326 is that the robot cleaner 1 is not lifted and thus stuck, the controller 220 drives the second driving motor 160 to decrease an overall height of the robot cleaner 1, and the robot cleaner 1 travels in a reverse direction to escape from being stuck (S340). Subsequently, the controller 220 proceeds to S330 and then performs subsequent operations.”. As can be seen from the cited figure and passage, the robot is configured to simply reverse until such a time that it is no longer determined to be stuck. Such a method obviously does not consider the occupancy status of locations along this reverse route.). The only difference between the claimed invention and the prior art is that the prior art does not combine the device with the method of reversing the robot when it enters a grid point that is non-traversable. A person of ordinary skill in the art would have had the technological capabilities required to modify the device taught in Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira with the method of reversing the robot when it enters a grid point that is non-traversable taught in Park. Furthermore, the device in Lee is already configured to determine the occupancy (and therefore traversability) of each grid point as well as a travel path, so modifying the device to simply reverse in the event it unintentionally enters a non-traversable grid point would not change or introduce new functionality. No inventive effort would have been required. 
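The claim 9 behavior, retracing the previously traveled path in reverse without consulting the occupancy characterization of the cells along it, can be sketched with a simple pose history. The class and method names are hypothetical; this is an illustration of the recited logic, not Park's implementation:

```python
class Backtracker:
    """Records the robot's traveled path and, when the next waypoint is
    non-traversable, replays that path newest-first as a reverse instruction.
    Occupancy of the retraced cells is deliberately not consulted."""

    def __init__(self):
        self.history = []

    def record(self, pose):
        """Append each pose as the robot travels."""
        self.history.append(pose)

    def reverse_instruction(self):
        """Waypoints to retrace, excluding the current (last) pose,
        ordered from most recent back toward the start."""
        return list(reversed(self.history[:-1]))

bt = Backtracker()
for pose in [(0, 0), (1, 0), (2, 0)]:
    bt.record(pose)
route = bt.reverse_instruction()  # retrace: (1, 0) then (0, 0)
```

Because the retraced cells were traversable when the robot first crossed them, skipping the occupancy check on the reverse leg is the predictable-result rationale the rejection relies on.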
The combination would have yielded the predictable result of a device capable of reversing when entering a non-traversable grid point. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the device taught in Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira with the method of reversing the robot when it enters a grid point that is non-traversable taught in Park with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because it would have yielded predictable results. Claim(s) 10-12 and 14-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 9286810 B2 ("Eade") in view of US 10304237 B2 ("Sequeira") in further view of US 20240168494 A1 ("Hebbar"). Regarding claim 10, Eade teaches a device comprising a processor configured to (Eade: Column 14 lines 5-11, “The control 108 can include hardware, such as microprocessors, memory, etc., can include firmware, can include software, can include network communication equipment, and the like. In one embodiment, the control 108 uses dedicated hardware, such as single-board computers, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), and the like.”): obtain an expected position of a robot in relation to a prior position of the robot within an environment (Eade: Column 18 lines 19-30, “Inputs to the VSLAM system 600 include raw pose data 610 from one or more dead reckoning sensors 614 and also include visual data 612 from one or more cameras or other visual sensors 616. It will be understood that a dead reckoning sensor 614, such as an optical wheel encoder, can communicate with the VSLAM system 600 via a dead reckoning interface 618, such as via a driver or via a hardware abstraction layer. 
The raw pose data 610 can correspond to distance traveled, to velocity, to acceleration, and the like, and can depend on the type of dead reckoning sensor used. Outputs from the VSLAM system 600 can include one or more poses and maps 620.”, Column 19 lines 11-23, “The SLAM module 604 receives the raw pose data 610 from the dead reckoning interface 618. It will be understood that the nature of the raw pose data 610 can vary according to the type of dead reckoning sensor 614 and the type of output specified by the dead reckoning interface 618. Examples of the raw pose data 610 can include distance measurements, velocity measurements, and acceleration measurements. The dead reckoning data is used by the SLAM module 604 to estimate course and distance traveled from a prior pose.”); obtain prior scan data of the environment at the prior position and a previous key point within the prior scan data (Eade: Column 19 lines 24-32, “Other inputs to the SLAM module 604 include visual localization data from the Visual Front End 602 and/or the optional Pre-Filter module 622. As a robot with VSLAM travels in an environment, the robot observes visual landmarks. When a new visual landmark is encountered, the SLAM module 604 can store the robot's global reference frame location for the particles in the SLAM database 608. For example, the robot's pose can be estimated from a previous location and the course and distance traveled from a last known pose.”); obtain current scan data of the environment at a current position of the robot (Eade: Column 28 lines 47-55, “FIG. 8 shows the landmark recognition process implemented in certain embodiments. Particularly, FIG. 8's flow chart elaborates on state 2702 of the flowchart of FIG. 7, i.e. upon the landmark matching process implemented in certain embodiments. This process may be executed for each captured image during operation. In these embodiments, features may be matched using a coarse to fine strategy. 
This reduces processing effort while attempting to maximize the number of matched features found.”); combine the current scan data with the transformed scan data as combined scan data (Eade: Column 28 lines 56-67, “The process begins at state 2002 where the system looks up features in the query image from a global database to find candidate landmarks. One will understand the global database to comprise a database containing features from all the previously identified landmarks.”, Column 28 lines 56-67, “Consequently, to look up features in a global database indicates finding features in the global database that are visually similar to the ones from the query image. Visual similarity can be determined by a number of well-known methods. For example, SIFT descriptors may be used as described above.”. One of ordinary skill in the art would see that the query image (the current scan data) and images from the global database are used to create a set of candidate landmarks.); identify an observed key point based on the combined scan data (Eade: Column 28 lines 56-67, “The process begins at state 2002 where the system looks up features in the query image from a global database to find candidate landmarks. One will understand the global database to comprise a database containing features from all the previously identified landmarks.”, Column 28 lines 56-67, “Consequently, to look up features in a global database indicates finding features in the global database that are visually similar to the ones from the query image. Visual similarity can be determined by a number of well-known methods. For example, SIFT descriptors may be used as described above.”, Column 29 lines 1-3, “These features may then be used to establish a ranking by visual similarity in state 2004. From this ranking, promising landmark candidates may be selected in state 2006”. 
One of ordinary skill in the art would see that a combined data set is created based on visual similarity to the query image.); determine a correlation between the observed key point and the previous key point (Eade: Column 29 lines 4-25, “more granular matching is then performed against the set of features in each candidate landmark using a local database. Here, one will understand a "local" database to refer to look up among features from a current landmark. The system then iterates through all the candidates, performing local matching against each candidate in 2008-2014, potentially yielding observations. Local matching may comprise looking up the query image features in a database that only contains features from the current candidate landmark.”); control the robot to move from location to location toward a target point based on the estimated actual position, the combined scan data, and the improved occupancy grid (Eade: Column 13 line 64 – Column 14 line 4, “In response to the image data 106, the control 108 can provide control signals to the motors 110, 112 that control the movement of the robot 100. For example, the control 108 can provide control signals to instruct the robot to move forward, to stop, to move backward, to turn, to rotate about a vertical axis, and the like. When the robot rotates around a vertical axis, such as the exemplary vertical axis 118 shown in FIG. 1, this rotation is referred to as "yaw."”, Column 19 lines 11-23, “The SLAM module 604 receives the raw pose data 610 from the dead reckoning interface 618. It will be understood that the nature of the raw pose data 610 can vary according to the type of dead reckoning sensor 614 and the type of output specified by the dead reckoning interface 618. Examples of the raw pose data 610 can include distance measurements, velocity measurements, and acceleration measurements. 
The dead reckoning data is used by the SLAM module 604 to estimate course and distance traveled from a prior pose.”, Column 19 lines 24-32, “Other inputs to the SLAM module 604 include visual localization data from the Visual Front End 602 and/or the optional Pre-Filter module 622. As a robot with VSLAM travels in an environment, the robot observes visual landmarks. When a new visual landmark is encountered, the SLAM module 604 can store the robot's global reference frame location for the particles in the SLAM database 608. For example, the robot's pose can be estimated from a previous location and the course and distance traveled from a last known pose.”. The cited passages clearly show that the robot is controlled to move from location to location based on the combined scan data and estimated position.). Eade does not teach translate, based on a difference between the prior position and the expected position, the prior scan data into transformed scan data; obtain an improved occupancy grid based on the combined scan data and at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the combined scan data; wherein the estimated actual position is based on a maximum scan range of a sensor for scanning the environment. Sequeira, in the same field of endeavor, teaches translate, based on a difference between the prior position and the expected position, the prior scan data into transformed scan data (Sequeira: Column 25 lines 26-32, “The pose tracking component deals with computing the local motion of the sensor as it moves around the environment. Knowing the previous estimated pose, when a new acquisition is received, the inventors perform a local registration between the map and the observed points. From the resulting transformation, the implicit motion is inferred and applied to the previously estimated pose.”. 
As can be seen from the cited passage, the system uses the prior estimated pose of the robot and the observed points (equivalent here to the scan data) to update the prior pose.). obtain an improved occupancy grid based on the combined scan data and at least one prior scan of the environment (Sequeira: Column 2 line 66 – Column 3 line 31, “To achieve this object, the present invention proposes, in a first aspect, a method for constructing a 3D reference map of an environment useable in (a method for) real-time mapping, localization and/or change analysis, comprising the following steps: acquiring (3D) scanner data of the environment with a mobile real-time laser range scanner at a rate of at least 5 frames (i.e. 5 point clouds), preferably at least 10 frames per second, constructing, using the (3D) scanner data for each of a plurality of poses of the laser range scanner, each pose having an associated point cloud defined by the scanner data, a map presentation, the map presentation having a data structure configured for random sample access thereto in constant time, fast nearest neighbor search and scalability over large areas, and building, using the map presentation, the 3D reference map for the environment using a 3D Simultaneous Localization And Mapping (3D SLAM) framework, said building comprising using an odometer module estimating a current pose of the laser range scanner for each point cloud based on the registration of the (last) point cloud to the local map presentation, using a local trajectory optimization module refining the trajectory of a (sub)set of point clouds in order to minimize the drift in the local map presentation, and performing offline a global trajectory optimization by reconstructing an entire map of the environment (preferably by using the entire set of point clouds) taking advantage of (or taking into account) loop closures, thereby forming said 3D reference map.”, Column 9 lines 32-34, “The odometer is typically performed in 
real-time.”, Column 10 lines 9-21, “In one preferred aspect, the present invention proposes an optimization method or framework for Simultaneous Localization And Mapping (SLAM) that properly models the acquisition process in a scanning-while-moving scenario. Each measurement is correctly reprojected in the map reference frame by considering a continuous time trajectory which is defined as the linear interpolation of a discrete set of control poses in SE(3). The invention also proposes a particularly efficient data structure that makes use of a hybrid sparse voxelized representation, allowing large map management. Thanks to this the inventors were also able to perform global optimization over trajectories, resetting the accumulated drift when loops are performed.”, Column 13 lines 32-37, “A preferred optimization framework of the invention is composed by two consecutive modules: an odometer that estimates the pose of each cloud given a map and a local trajectory optimizer that refines the trajectory of a set of clouds. Both modules employ the map data structure as described herein to handle the growing map.”, Column 13 lines 45-58, “The input of such a framework is a set of 3D point clouds {C_i} produced with the data streamed by the sensor (in case of a Velodyne scanner, the point cloud is generated after a complete revolution of the sensor). Each point cloud C_i is composed by a set of points P={p_j}, j=1 ... N_P, a set of relative timestamps T={t_j} and a set of normal unit vectors N={n_j}. Relative timestamps are assigned such that the first point produced has timestamp 0 and the last one has 1. Normal unit vectors may be estimated with the unconstrained least square formulation proposed in H. Badino, D. Huber, Y. Park, and T. Kanade, "Fast and accurate computation of surface normals from range images," in ICRA, 2011, taking advantage of box filtering on the point cloud grid structure.”. 
One of ordinary skill in the art would recognize that the cited passages describe a process by which a 3D map of an environment of a robot is updated. The cited passages clearly describe that the prior scan of the environment (i.e. the scan corresponding to relative timestamp 0) is position-adjusted using odometer data to update a current scan of the environment (i.e. the scan data corresponding to relative timestamp 1).), wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the combined scan data (Sequeira: Column 13 lines 60-64, “Initially, one needs to produce a first estimate of the sensor's trajectory by recovering the pose of each point cloud. Since the sensor is moving, one considers as representative pose of the cloud the sensor pose when the last point is received.”, Column 13 line 65 – Column 14 line 2, “One performs a point-plane ICP between a subset of points of the last received cloud and the map. Like in F. Moosmann and C. Stiller, "Velodyne SLAM," in IVS, 2011, the selected points of the cloud are unwarped by considering the last estimated motion before performing the registration.”, Column 14 lines 34-41, “This module takes as input the list of clouds with their associated poses RCodo and performs a trajectory refinement by employing a local window approach. When the distance between the first and the last pose in the list is larger than a threshold, cloud poses are optimized and a new list RCREF={[Ci, Γi,REF]}, i=1:NC is produced with the refined poses and the input clouds. Notice that this step properly integrates the unwarping in the optimization.”, Column 15 lines 37-56, “Once the optimization is terminated, the list RCREF can be updated with the optimized poses. Then, the entire set of points and normals of the clouds are converted into world coordinates according to Equation 3 and then added to the map M. 
At this stage one takes advantage of the efficient strategy to update the local map described in section A.2.: before adding points, one firstly deletes from the map all points that are further than a given radius from the last trajectory pose and then one adds the transformed clouds from RCREF. Once the map is updated a new kd-tree is created on the resulting points to allow subsequent nearest neighbor searches. The list RCodo is cleared and the odometer guess for the next cloud registration is updated according to the last two poses of RCREF. The proposed formulation represents an adherent description of the real sensor model, which acquires points while moving: point transformations in world coordinates involve both initial and final poses of each cloud. Moreover, the estimation of each pose (apart the first and the last) is directly influenced by two clouds.”. The cited passages clearly show that the prior scan (i.e. the scan corresponding to relative timestamp 0) is position-adjusted based on the odometer data. If the position-adjusted prior scan differs from the current scan (i.e. the scan corresponding to relative timestamp 1) above a threshold, the poses of each point in the point cloud of the two scans are optimized. These optimized poses are then used to update the 3D map of the area. One of ordinary skill in the art would clearly see that this teaches a method of improving the occupancy grid by combining a current scan of the environment around the robot with at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the current scan. Please see Columns 14-15 for the equations used to derive the updated map based on the position-adjusted scan data.). 
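For illustration only — this sketch is not drawn from any cited reference — the "position-adjusted forward" operation discussed above amounts to re-expressing the prior scan in the current scan's reference frame using the odometry delta between the two poses. The function name and the planar (x, y, θ) simplification are assumptions made for the sketch:

```python
import math

def adjust_prior_scan(prior_points, odom_delta):
    """Re-express a prior 2D scan in the current scan's frame.

    prior_points: list of (x, y) points in the prior pose's frame.
    odom_delta: (dx, dy, dtheta) motion from the prior pose to the
    current pose, expressed in the prior pose's frame.
    Returns the same physical points in the current pose's frame
    (i.e. the inverse of the odometry transform is applied).
    """
    dx, dy, dtheta = odom_delta
    cos_t, sin_t = math.cos(dtheta), math.sin(dtheta)
    adjusted = []
    for x, y in prior_points:
        # Subtract the translation, then rotate by -dtheta.
        tx, ty = x - dx, y - dy
        adjusted.append((cos_t * tx + sin_t * ty,
                         -sin_t * tx + cos_t * ty))
    return adjusted
```

A point one meter ahead of the prior pose, after the robot drives one meter forward, lands at the robot's current origin — the adjusted prior scan can then be overlaid on the current scan to update the occupancy grid.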
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the device taught in Eade with obtaining an improved occupancy grid based on the combined scan data and at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the combined scan data, as taught in Sequeira, with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because such a method of updating the map of an environment allows for the minimization and correction of drift in the map representation of the region. One of ordinary skill in the art would recognize that minimizing drift in a map provides a more accurate map (Sequeira: Column 2 line 66 – Column 3 line 31, “using a local trajectory optimization module refining the trajectory of a (sub)set of point clouds in order to minimize the drift in the local map presentation”, Column 18 line 43 – Column 19 line 7, “The inventors compared their system with the publicly available Velodyne SLAM [F. Moosmann and C. Stiller, "Velodyne SLAM," in IVS, 2011] that also performs a motion compensation on the acquired point clouds. To compare the two systems the inventors measured drift accumulated using the outdoor car dataset. Since the same location is revisited multiple times, they estimated drift by registering the generated initial local map with the one generated at each subsequent passage. The translation and orientation components of the registration transform aligning the current local map to the initial one indicate how much drift has been accumulated. One of the salient characteristics of [F. Moosmann and C. 
Stiller, "Velodyne SLAM," in IVS, 2011] is the presence of a map refinement strategy ( called adaption) based on a set of heuristic tests that positively influence the trajectory estimation. Since the present system is focused on the optimization strategy by proper modeling the problem, the inventors disabled this feature in the original work to focus the analysis on the trajectory estimation. Results after each loop are shown in Table I. It can be noticed that one accumulates less drift than the original work. Moreover the present system is a natural formulation of the problem that requires less configuration parameters than the heuristic strategies of the Velodyne SLAM. Performance of the present system is superior to the Velodyne SLAM system both in terms of execution time and in the ability of maintaining a global map of the environment, while in the original work only a local map is maintained. The ability of using the global map has been confirmed, in case of the use of loop closure and the global optimization technique to correct the drift accumulated in the first loop and the use of the global map for the next loops.”). Eade in view of Sequeira does not teach wherein the estimated actual position is based on a maximum scan range of a sensor for scanning the environment. Hebbar, in the same field of endeavor, teaches wherein the estimated actual position is based on a maximum scan range of a sensor for scanning the environment (Hebbar: ¶ 0068, “Next, the self-location estimator 13 estimates the self-location of the moving apparatus 1 on the basis of the analysis result of the distance calculator 14 and the pattern analyzer 15 and updates the self-location on the map (Step S107). 
Specifically, the self-location estimator 13 estimates the self-location in the Y-direction on the basis of the calculation result of the distance calculator 14 in Step S105 and estimates the self-location in the X-direction on the basis of the analysis result of the pattern analyzer 15 in Step S106, thereby estimating the self-location of the moving apparatus 1 on the stage 100. The self-location estimator 13 then updates the self-location on the map.”, ¶ 0072, “FIG. 15 illustrates a parameter used to determine the thickness T. The thickness T of the pattern PAT is determined by a minimum width Wmin of the pattern PAT, a level (a height H) of the ranging sensor 12, and a maximum ranging range Rmax of the ranging sensor 12.”, ¶ 0077, “In the example in FIG. 17, a space between the structures S1, S2 is the minimum width Wmin of the pattern PAT. In this case, the thickness T of the pattern PAT can be represented by the following expression.”, ¶ 0079, “In this manner, the moving apparatus 1 includes the ranger 11 that detects a distance to an end part (the stage edge 102) of the stage 100 and detects the pattern PAT located on the stage 100 with infrared light and the self-location estimator 13 that estimates the self-location on the basis of the detection result of the ranger 11. This makes it possible for the moving apparatus 1 to estimate the self-location on the stage 100.”. The cited passages clearly teach a method of localizing a robot in an environment. The method estimates the robot's location based on a distance calculation and by analyzing a pattern. In the analysis of the pattern, the thickness of the pattern is determined, in part, based on the maximum range of the scanner. This thickness is used in the analysis of the pattern and is further used to estimate the robot's position. Therefore, the cited passages clearly teach wherein the estimated actual position is based on a maximum scan range of a sensor for scanning the environment). 
Eade in view of Sequeira teaches a device configured to determine the estimated actual position of the robot. Eade in view of Sequeira does not teach wherein the estimated actual position is based on a maximum scan range of a sensor for scanning the environment. Hebbar teaches wherein the estimated actual position is based on a maximum scan range of a sensor for scanning the environment. A person of ordinary skill in the art would have had the technological capabilities required to have modified the device taught in Eade in view of Sequeira with wherein the estimated actual position is based on a maximum scan range of a sensor for scanning the environment taught in Hebbar. Furthermore, the device taught in Eade in view of Sequeira is already configured to estimate its location using a laser scanner. Additionally, a person of ordinary skill in the art would have known and been familiar with the maximum scan range of various sensors and would have had the technological capabilities required to have incorporated such information into the device taught in Eade in view of Sequeira. Such modifications would not have changed or introduced new functionality. No inventive effort would have been required. The combination would have yielded the predictable result of a device determining the estimated actual position of the robot wherein the estimated actual position is based on a maximum scan range of a sensor for scanning the environment. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the device taught in Eade in view of Sequeira with wherein the estimated actual position is based on a maximum scan range of a sensor for scanning the environment taught in Hebbar with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results. 
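As a purely illustrative sketch of a position estimate that depends on a sensor's maximum scan range — a generic example, not Hebbar's actual expression (the record quotes ¶ 0077 but does not reproduce the formula itself) — a localization step might discard range returns beyond the rated maximum Rmax before estimating position. The function name and the averaging scheme are assumptions:

```python
def estimate_distance_to_edge(ranges, r_max):
    """Estimate the distance to a boundary from a set of range returns.

    Hypothetical helper: returns beyond the sensor's rated maximum
    range r_max (or non-positive returns) are treated as unreliable
    and discarded; the estimate is the mean of the surviving returns.
    """
    valid = [r for r in ranges if 0.0 < r <= r_max]
    if not valid:
        return None  # no trustworthy returns; no position estimate
    return sum(valid) / len(valid)
```

Because r_max gates which returns enter the estimate, the resulting position is, in this sketch, "based on a maximum scan range of a sensor" in the sense the examiner maps to the claim.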
Regarding claim 11, Eade in view of Sequeira in further view of Hebbar teaches wherein the processor is configured to determine the expected position based on an odometry change from the prior position to the current position (Eade: Column 19 lines 11-23, “The SLAM module 604 receives the raw pose data 610 from the dead reckoning interface 618. It will be understood that the nature of the raw pose data 610 can vary according to the type of dead reckoning sensor 614 and the type of output specified by the dead reckoning interface 618. Examples of the raw pose data 610 can include distance measurements, velocity measurements, and acceleration measurements. The dead reckoning data is used by the SLAM module 604 to estimate course and distance traveled from a prior pose.”). Regarding claim 12, Eade in view of Sequeira in further view of Hebbar teaches wherein the processor is configured to determine the expected position based on a planned trajectory of the robot with respect to the prior position (Sequeira: Column 13 lines 32-37, “A preferred optimization framework of the invention is composed by two consecutive modules: an odometer that estimates the pose of each cloud given a map and a local trajectory optimizer that refines the trajectory of a set of clouds. Both modules employ the map data structure as described herein to handle the growing map.”, Column 13 lines 60-64, “Initially, one needs to produce a first estimate of the sensor's trajectory by recovering the pose of each point cloud. Since the sensor is moving, one considers as representative pose of the cloud the sensor pose when the last point is received.”, Column 15 lines 37-56, “Once the optimization is terminated, the list RCREF can be updated with the optimized poses. Then, the entire set of points and normals of the clouds are converted into world coordinates according to Equation 3 and then added to the map M. 
At this stage one takes advantage of the efficient strategy to update the local map described in section A.2.: before adding points, one firstly deletes from the map all points that are further than a given radius from the last trajectory pose and then one adds the transformed clouds from RCREF. Once the map is updated a new kd-tree is created on the resulting points to allow subsequent nearest neighbor searches. The list RCodo is cleared and the odometer guess for the next cloud registration is updated according to the last two poses of RCREF. The proposed formulation represents an adherent description of the real sensor model, which acquires points while moving: point transformations in world coordinates involve both initial and final poses of each cloud. Moreover, the estimation of each pose (apart the first and the last) is directly influenced by two clouds.”. As can be seen from the cited passage, the system is configured to update and optimize the estimated position based on the trajectory, prior poses, and odometry data.). Regarding claim 14, Eade in view of Sequeira in further view of Hebbar teaches wherein the processor configured to translate the prior scan data into transformed scan data comprises the processor configured to, based on the difference between the prior position and the expected position, translate coordinate points of the prior scan data that are based on the prior position into new coordinate points of the transformed scan data that are based on the expected position (Sequeira: Column 25 lines 26-32, “The pose tracking component deals with computing the local motion of the sensor as it moves around the environment. Knowing the previous estimated pose, when a new acquisition is received, the inventors perform a local registration between the map and the observed points. From the resulting transformation, the implicit motion is inferred and applied to the previously estimated pose.”. 
As can be seen from the cited passage, the system uses the prior estimated pose of the robot and the observed points (equivalent here to the scan data) to update the prior pose. Furthermore, one of ordinary skill in the art would see that the pose comprises the coordinates of the robot.). Regarding claim 15, Eade in view of Sequeira in further view of Hebbar teaches wherein the processor is configured to obtain a plurality of prior scan data sets (Eade: Column 19 lines 24-32, “Other inputs to the SLAM module 604 include visual localization data from the Visual Front End 602 and/or the optional Pre-Filter module 622. As a robot with VSLAM travels in an environment, the robot observes visual landmarks. When a new visual landmark is encountered, the SLAM module 604 can store the robot's global reference frame location for the particles in the SLAM database 608.”, Column 28 lines 56-67, “The process begins at state 2002 where the system looks up features in the query image from a global database to find candidate landmarks. One will understand the global database to comprise a database containing features from all the previously identified landmarks.”, Column 28 lines 56-67, “Consequently, to look up features in a global database indicates finding features in the global database that are visually similar to the ones from the query image. Visual similarity can be determined by a number of well-known methods. For example, SIFT descriptors may be used as described above.”. One of ordinary skill in the art would see that the system is configured to gather a plurality of images of landmarks taken at previous positions.), wherein each set of the prior scan data sets is at one of a plurality of prior positions of the robot (Eade: Column 19 lines 24-32, Column 28 lines 56-67. 
One of ordinary skill in the art would see that the images of the landmarks used for comparison were taken at previous locations of the robot.), wherein one of the prior scan data sets comprises the prior scan data and one of the plurality of prior positions comprises the prior position (Eade: Column 19 lines 24-32, Column 28 lines 56-67. One of ordinary skill in the art would see that, if the landmark is not new, then the landmark would be included in at least one of the prior images and positions.). Regarding claim 16, Eade in view of Sequeira in further view of Hebbar teaches wherein the processor is further configured to identify a plurality of previous key points at the prior position (Eade: Column 28 lines 56-67, “The process begins at state 2002 where the system looks up features in the query image from a global database to find candidate landmarks. One will understand the global database to comprise a database containing features from all the previously identified landmarks.”, Column 28 lines 56-67, “Consequently, to look up features in a global database indicates finding features in the global database that are visually similar to the ones from the query image. Visual similarity can be determined by a number of well-known methods. For example, SIFT descriptors may be used as described above.”. The system is clearly configured to gather images of landmarks that contain features.), wherein the previous key point comprises one of the plurality of previous key points (Eade: Column 28 lines 56-67, “The process begins at state 2002 where the system looks up features in the query image from a global database to find candidate landmarks. 
One will understand the global database to comprise a database containing features from all the previously identified landmarks.”, Column 28 lines 56-67, “Consequently, to look up features in a global database indicates finding features in the global database that are visually similar to the ones from the query image. Visual similarity can be determined by a number of well-known methods. For example, SIFT descriptors may be used as described above.”. The system is clearly configured to gather several potential landmark images, which would naturally include a plurality of features.), wherein the processor is further configured to identify a plurality of observed key points based on the combined scan data wherein the observed key point is one of the plurality of observed key points (Eade: Column 28 lines 56-67, “The process begins at state 2002 where the system looks up features in the query image from a global database to find candidate landmarks. One will understand the global database to comprise a database containing features from all the previously identified landmarks.”. The query image (which is the current captured image of a landmark) has its features identified as shown in the cited passage.), wherein the processor is further configured to determine the correlation between the observed key points and the previous key points (Eade: Column 29 lines 4-25, “A more granular matching is then performed against the set of features in each candidate landmark using a local database. Here, one will understand a "local" database to refer to look up among features from a current landmark. The system then iterates through all the candidates, performing local matching against each candidate in 2008-2014, potentially yielding observations. Local matching may comprise looking up the query image features in a database that only contains features from the current candidate landmark.”. 
As can be seen from the cited passage, the landmark features are correlated based on visual similarity.). Regarding claim 17, Eade in view of Sequeira in further view of Hebbar teaches wherein the processor is configured to determine the correlation between the observed key point and the previous key point based on the combined scan data (Eade: Column 29 lines 4-25, “A more granular matching is then performed against the set of features in each candidate landmark using a local database. Here, one will understand a "local" database to refer to look up among features from a current landmark. The system then iterates through all the candidates, performing local matching against each candidate in 2008-2014, potentially yielding observations. Local matching may comprise looking up the query image features in a database that only contains features from the current candidate landmark.”. As can be seen from the cited passage, the landmark features are correlated based on visual similarity.). Claim(s) 13 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 9286810 B2 ("Eade") in view of US 10304237 B2 ("Sequeira") in further view of US 20240168494 A1 ("Hebbar") in further view of US 12327391 B2 ("Hehn"). Regarding claim 13, Eade in view of Sequeira in further view of Hebbar does not teach wherein the processor is configured to add an injected noise pattern to the prior scan data in a region of the prior scan data that positionally overlaps with the current scan data. Hehn, in the same field of endeavor, teaches wherein the processor is configured to add an injected noise pattern to the prior scan data in a region of the prior scan data that positionally overlaps with the current scan data (Hehn: Column 15 lines 20-32, “where θ is a certain distribution such that E[x]=xk and cov(x,x)=Pk. The feature φk is interpreted as a measurement of the projection of landmark zj, affected by zero-mean measurement noise. 
The function h(x) is part of the camera model, i.e. it models the 3D to 2D projection of landmarks to features performed by the camera.”. As can be seen from the cited passage, zero-mean noise is added to the scan data of the landmark feature.). The only difference between the prior art and the claimed invention is that the prior art does not combine the device and method of injecting noise into the prior scan data into a single reference. A person of ordinary skill in the art would have had the technological capabilities required to have combined the device taught in Eade in view of Sequeira in further view of Hebbar with the method of injecting noise into the prior scan data taught in Hehn. Furthermore, the device taught in Eade in view of Sequeira in further view of Hebbar already teaches that the prior scan data and current scan data positionally overlap (This is shown by the fact that the device taught attempts to match the features of the landmark in the current image to those in prior images. One of ordinary skill in the art would see that if the device successfully finds a match, the positions would at least partially overlap.), so modifying the device such that it injects noise into the prior scan data would not change or introduce new functionality and would only require a basic change to the algorithm consisting of simply adding noise to the scan data. No inventive effort would have been required. The combination would have yielded the predictable result of a device that injects noise into the prior scan data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the device taught in Eade in view of Sequeira in further view of Hebbar with the method of injecting noise into the prior scan data taught in Hehn with a reasonable expectation of success. 
One of ordinary skill in the art would have been motivated to make this modification because it would have yielded predictable results. Regarding claim 18, Eade in view of Sequeira in further view of Hebbar does not teach wherein the processor is configured to determine the correlation between the observed key point and the previous key point based on a simultaneous localization and mapping (SLAM) equation solver algorithm that uses the combined scan data as an input to the SLAM equation solver. Hehn, in the same field of endeavor, teaches wherein the processor is configured to determine the correlation between the observed key point and the previous key point based on a simultaneous localization and mapping (SLAM) equation solver algorithm that uses the combined scan data as an input to the SLAM equation solver (Hehn: Column 15 lines 48-67 and the equations disclosed in the cited passage, “Following the same line of probabilistic reasoning from the previous section, the aim with the definition of the update function is to find a way of updating the expected value xk and covariance Pk of x that maximally exploits the knowledge of φk being the projection of landmark zj. It can be shown that, under certain assumptions, the optimal update of the expected value is:”. One of ordinary skill in the art would see the system of equations disclosed are, by definition, the Kalman Filter, a common estimator used to facilitate SLAM. Additionally, one of ordinary skill in the art would see that the Kalman filter naturally takes into account the correlation between the observed and previous key points. This is because the Kalman filter builds a covariance matrix regarding the key points and uses the prior information to update the covariance of the current information.). 
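The update equations Hehn discloses are not reproduced in the record; as a generic illustration of the Kalman-style measurement update the examiner identifies — a scalar sketch under assumed names, not Hehn's actual multivariate formulation — the prior estimate and the new measurement are fused by a gain that weights their relative uncertainties:

```python
def kalman_update(x_est, p_est, z, r):
    """One scalar Kalman measurement update.

    x_est, p_est: prior state estimate and its variance.
    z, r: new measurement and its noise variance.
    The gain k weights the innovation (z - x_est) by the relative
    confidence in the prior versus the measurement, and the updated
    variance is always smaller than the prior variance.
    """
    k = p_est / (p_est + r)          # Kalman gain
    x_new = x_est + k * (z - x_est)  # updated estimate
    p_new = (1.0 - k) * p_est        # reduced uncertainty
    return x_new, p_new
```

With equal prior and measurement variances, the update lands halfway between the prior estimate and the measurement and halves the variance, which is the behavior the examiner relies on when characterizing the filter as correlating prior and observed information.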
The only difference between the prior art and the claimed invention is that the prior art does not combine the device and the method of using a SLAM equation solver to determine the correlation between the prior and observed key points. A person of ordinary skill in the art would have had the technological capabilities required to have modified the device taught in Eade in view of Sequeira in further view of Hebbar with the method of using a SLAM equation solver to determine the correlation between the prior and observed key points taught in Hehn. Furthermore, the Kalman filter taught in Hehn is a well-known estimator that is commonly used in SLAM applications, so one of ordinary skill in the art would have easily been able to have modified the SLAM device taught in Eade in view of Sequeira in further view of Hebbar without requiring inventive effort. Additionally, using one SLAM solver over another would not change or introduce new functionality. The combination would have yielded the predictable result of a device that uses a SLAM equation solver to determine the correlation between the prior and observed key points. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the device taught in Eade in view of Sequeira in further view of Hebbar with the method of using a SLAM equation solver to determine the correlation between the prior and observed key points taught in Hehn with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results. Claim(s) 19 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 12292291 B2 ("Iken") in view of US 9286810 B2 ("Eade") in further view of US 8271132 B2 ("Nielsen") in further view of US 10304237 B2 ("Sequeira"). 
Regarding claim 19, Iken teaches a device comprising a processor configured to: determine a first movement vector based on an odometry change from a prior position of a robot or based on a planned/expected movement of the robot from the prior position (Iken: Column 4 lines 30-33, “Furthermore, in a method step 2, odometric measured variables for the own odometry of the vehicle are captured when traveling along the route. Method step 1 and method step 2 are typically carried out simultaneously here.”, Column 5 lines 1-20, “In a method step 4, a correction of the own odometry of the vehicle then takes place on the basis of the evaluation. For this purpose, a vehicle-specific drift is calculated using an error model based on the highly accurate localization data determined, the odometric measured variables captured for the own odometry of the vehicle, further vehicle parameters such as the current vehicle speed and/or acceleration, and current direction of travel (left/right bend or traveling in a straight line) and, if applicable, the current influences from outside the vehicle.”. The cited passage clearly shows that a movement vector is gathered from the odometry data and the traveling direction.); determine a second movement vector based on a localization algorithm with respect to a current sensor scan in relation to the prior position (Iken: Column 3 lines 27-29, “The highly accurate localization data are for example determined by means of a digital area map and a sensory detection of landmarks marked on the digital area map.”, Column 5 lines 1-10, “In a method step 4, a correction of the own odometry of the vehicle then takes place on the basis of the evaluation. 
For this purpose, a vehicle-specific drift is calculated using an error model based on the highly accurate localization data determined, the odometric measured variables captured for the own odometry of the vehicle, further vehicle parameters such as the current vehicle speed and/or acceleration, and current direction of travel (left/right bend or traveling in a straight line) and, if applicable, the current influences from outside the vehicle.”. The cited passage clearly shows that a movement vector (the position data from the localization data and the traveling direction) is determined through a localization algorithm using a map and landmark features.); determine an error vector between the first movement vector and the second movement vector (Iken: Column 5 lines 1-10, “In a method step 4, a correction of the own odometry of the vehicle then takes place on the basis of the evaluation. For this purpose, a vehicle-specific drift is calculated using an error model based on the highly accurate localization data determined, the odometric measured variables captured for the own odometry of the vehicle, further vehicle parameters such as the current vehicle speed and/or acceleration, and current direction of travel (left/right bend or traveling in a straight line) and, if applicable, the current influences from outside the vehicle.”. 
The cited passage clearly teaches determining the error between the data gathered through localization and odometry.); Iken does not teach obtain an improved occupancy grid by combining a current scan of an environment around the robot with at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the current scan; determine a mitigation strategy based on whether the error vector satisfies a predetermined error criterion; and generate an instruction to control the robot based on the mitigation strategy; and control the robot to move from location to location toward a target point based on the instruction to control the robot and at least one updated sensor scan, wherein to control the robot to move from location to location toward a target point further comprises to perform a retreat operation by following, in reverse, waypoints that were used to arrive at a current position until a safe retreat condition is met, wherein a safety margin is disabled during performance of the retreat operation. Eade, in the same field of endeavor, teaches determine a mitigation strategy based on whether the error vector satisfies a predetermined error criterion (Eade: Column 9 lines 10-65, “In some embodiments removing at least one edge of said plurality of edges present in the graph comprises: i) generating a current graph mean; ii) generating an error estimate for each edge, the error estimate being based at least in part on the difference between the pose of each of the plurality of edges present in the graph and the current graph mean; iii) identifying at least one edge having a lowest error estimate; and iv) removing the at least one identified edge from the graph.”. 
As can be seen from the cited passage, the transformations between pose/landmark nodes (the edges) are checked against a predetermined error criterion.); and generate an instruction to control the robot based on the mitigation strategy (Eade: Column 9 lines 10-65. One of ordinary skill in the art would see that the mitigation instruction is the removal of the erroneous information from the graph generated through SLAM.); and control the robot to move from location to location toward a target point based on the instruction to control the robot and at least one updated sensor scan (Eade: Column 13 line 64 – Column 14 line 4, “In response to the image data 106, the control 108 can provide control signals to the motors 110, 112 that control the movement of the robot 100. For example, the control 108 can provide control signals to instruct the robot to move forward, to stop, to move backward, to turn, to rotate about a vertical axis, and the like. When the robot rotates around a vertical axis, such as the exemplary vertical axis 118 shown in FIG. 1, this rotation is referred to as "yaw."”, Column 19 lines 11-23, “The SLAM module 604 receives the raw pose data 610 from the dead reckoning interface 618. It will be understood that the nature of the raw pose data 610 can vary according to the type of dead reckoning sensor 614 and the type of output specified by the dead reckoning interface 618. Examples of the raw pose data 610 can include distance measurements, velocity measurements, and acceleration measurements. The dead reckoning data is used by the SLAM module 604 to estimate course and distance traveled from a prior pose.”, Column 19 lines 24-32, “Other inputs to the SLAM module 604 include visual localization data from the Visual Front End 602 and/or the optional Pre-Filter module 622. As a robot with VSLAM travels in an environment, the robot observes visual landmarks. 
When a new visual landmark is encountered, the SLAM module 604 can store the robot's global reference frame location for the particles in the SLAM database 608. For example, the robot's pose can be estimated from a previous location and the course and distance traveled from a last known pose.”. The cited passages clearly show that the robot is controlled to move from location to location based on the updated scan data and control instruction.). Iken teaches a device comprising a processor configured to: determine a first movement vector based on an odometry change from a prior position of a robot or based on a planned/expected movement of the robot from the prior position; determine a second movement vector based on a localization algorithm with respect to a current sensor scan in relation to the prior position; determine an error vector between the first movement vector and the second movement vector. Iken does not teach determine a mitigation strategy based on whether the error vector satisfies a predetermined error criterion; and generate an instruction to control the robot based on the mitigation strategy; and control the robot to move from location to location toward a target point based on the instruction to control the robot and at least one updated sensor scan. Eade teaches determine a mitigation strategy based on whether the error vector satisfies a predetermined error criterion; and generate an instruction to control the robot based on the mitigation strategy; and control the robot to move from location to location toward a target point based on the instruction to control the robot and at least one updated sensor scan. 
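Purely as an illustrative sketch (not part of the record, the claim language, or the cited references), the claimed error-vector check could be expressed as follows; the function names, the threshold value, and the strategy labels are hypothetical:

```python
import math

def error_vector(odometry_vec, localization_vec):
    """Difference between the odometry-based (first) and
    localization-based (second) movement vectors."""
    return tuple(a - b for a, b in zip(odometry_vec, localization_vec))

def choose_mitigation(err, threshold=0.5):
    """Select a mitigation strategy based on whether the error vector
    satisfies a predetermined error criterion (magnitude <= threshold)."""
    magnitude = math.hypot(*err)
    if magnitude <= threshold:
        return "continue"      # criterion satisfied: proceed as planned
    return "relocalize"        # criterion violated: e.g. reset localization

# first vector from wheel odometry, second from scan-based localization
v_odo = (1.00, 0.00)
v_loc = (0.90, 0.05)
strategy = choose_mitigation(error_vector(v_odo, v_loc), threshold=0.5)
```

The mitigation label would then drive the instruction that controls the robot's next movement.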
A person of ordinary skill in the art would have had the technological capabilities required to have modified the device taught in Iken with determine a mitigation strategy based on whether the error vector satisfies a predetermined error criterion; and generate an instruction to control the robot based on the mitigation strategy; and control the robot to move from location to location toward a target point based on the instruction to control the robot and at least one updated sensor scan taught in Eade. Furthermore, the device taught in Iken is already used in the control of the movement of a vehicle. The device is also configured to correct the odometry data it gathers, which one of ordinary skill in the art would recognize is a basic mitigation strategy. Additionally, adding a step of comparing the error to a threshold would be well within the capabilities of a person of ordinary skill in the art. Therefore, the device in Iken is readily modifiable with the methods taught in Eade. Such modification would not change or introduce new functionality. No inventive effort would have been required. The combination would have yielded the predictable result of a device configured to: determine a mitigation strategy based on whether the error vector satisfies a predetermined error criterion; and generate an instruction to control the robot based on the mitigation strategy; and control the robot to move from location to location toward a target point based on the instruction to control the robot and at least one updated sensor scan. 
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the device taught in Iken with determine a mitigation strategy based on whether the error vector satisfies a predetermined error criterion; and generate an instruction to control the robot based on the mitigation strategy; and control the robot to move from location to location toward a target point based on the instruction to control the robot and at least one updated sensor scan taught in Eade with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results. Iken in view of Eade does not teach obtain an improved occupancy grid by combining a current scan of an environment around the robot with at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the current scan; wherein to control the robot to move from location to location toward a target point further comprises to perform a retreat operation by following, in reverse, waypoints that were used to arrive at a current position until a safe retreat condition is met, wherein a safety margin is disabled during performance of the retreat operation. Nielsen, in the same field of endeavor, teaches wherein to control the robot to move from location to location toward a target point further comprises to perform a retreat operation by following, in reverse, waypoints that were used to arrive at a current position until a safe retreat condition is met (Nielsen: Column lines, “The get-unstuck behavior 700 begins at decision block 710 by determining if the current path is blocked. This blocked situation may be defined as obstacle present in front, on the front-right side, and on the front-left side. 
If the path is blocked, control transfers to operation block 740, which is explained below. For an example, and using the range definitions defined above under the description of the range attribute, a blocked path may be defined by the Boolean equation:”, Column lines, “Wherein: (robot→forward_thresh) is a predetermined threshold parameter, that may be robot specific, to define a safety distance, or maneuverability distance, away from the robot.”, Column lines, “Once the determination has been made that the robot may be stuck, operation block 740 begins the process of attempting to get unstuck. Operation block 740 performs a back-out behavior. This back-out behavior causes the robot to backup from its present position while following the contours of obstacles near the rear sides of the robot. In general, the back-out behavior uses range sensing (e.g., from laser, sonar, infrared, or combinations thereof) of nearby obstacles near the rear sides to determine distance to the obstacles and provide assistance in following the contours of the obstacles. However, the back-out behavior may also include many robot attributes, including perception, position, bounding shape, and motion, to enable the robot to turn and back up while continuously responding to nearby obstacles. Using this fusion of attributes, the back-out behavior doesn't merely back the robot up, but rather allows the robot to closely follow the contours of whatever obstacles are around the robot.”. The cited passages teach a method by which a robot can get out of a position in which it is stuck. The method comprises determining if a robot’s path is obstructed, determining if the robot can rotate, then travelling backwards until such a time that a predetermined safety distance is met. 
One of ordinary skill in the art would recognize that this teaches causing a robot to reverse along its path until a safety condition is met.), wherein a safety margin is disabled during performance of the retreat operation (Nielsen: Column lines, “After the desired speed variable is set by operation block 610, 614, or 616, decision block 618 tests to see if anything is within the event horizon. This test may be based on the robot's physical dimensions, including protrusions from the robot such as an arm, relative to the robot's current speed. As an example using an arm extension, something inside the event horizon may be determined by the equation: Min_Front_Range<1.0+Arm_Extension+(1.75*Abs(Current_Velocity))”, Column lines, “Wherein: (robot→forward_thresh) is a predetermined threshold parameter, that may be robot specific, to define a safety distance, or maneuverability distance, away from the robot.”. The cited passages teach the safety threshold used in the control of the robot. The threshold used in the control of the robot when the robot is getting itself unstuck is defined separately from the safety threshold used to maintain a safe distance from obstacles. One of ordinary skill in the art would see that the safety distance used in the normal control of the robot to avoid obstacles is not used when the robot is attempting to get itself unstuck and that these two thresholds are different. Therefore, the cited passages clearly show that a safety margin is disabled during performance of the retreat operation). Iken in view of Eade teaches a device for controlling a robot. 
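As illustration only (not drawn from the claims or from Nielsen), the claimed retreat operation — reversing along stored waypoints until a safe-retreat condition is met, with the safety margin disabled for the duration — could be sketched as below; the data structures and the safety predicate are hypothetical:

```python
def retreat(waypoints, is_safe, robot_state):
    """Follow, in reverse, the waypoints used to arrive at the current
    position until the safe-retreat condition is met."""
    robot_state["safety_margin"] = False   # margin disabled during the retreat
    visited = []
    for wp in reversed(waypoints):
        visited.append(wp)                 # drive back to this waypoint
        if is_safe(wp):                    # safe-retreat condition met: stop
            break
    robot_state["safety_margin"] = True    # margin restored afterwards
    return visited

# waypoints recorded on the way in; "safe" here means x-coordinate <= 1
path = retreat([(0, 0), (1, 0), (2, 0), (2, 1)],
               lambda wp: wp[0] <= 1,
               {"safety_margin": True})
```

The sketch retraces the stored path rather than planning a new one, which is the distinction the claim draws against a contour-following back-out behavior.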
Iken in view of Eade does not teach wherein to control the robot to move from location to location toward a target point further comprises to perform a retreat operation by following, in reverse, waypoints that were used to arrive at a current position until a safe retreat condition is met, wherein a safety margin is disabled during performance of the retreat operation. Nielsen teaches wherein to control the robot to move from location to location toward a target point further comprises to perform a retreat operation by following, in reverse, waypoints that were used to arrive at a current position until a safe retreat condition is met, wherein a safety margin is disabled during performance of the retreat operation. A person of ordinary skill in the art would have had the technological capabilities required to have combined the device taught in Iken in view of Eade with wherein to control the robot to move from location to location toward a target point further comprises to perform a retreat operation by following, in reverse, waypoints that were used to arrive at a current position until a safe retreat condition is met, wherein a safety margin is disabled during performance of the retreat operation taught in Nielsen. Furthermore, the device taught in Iken in view of Eade is already configured to cause a robot to travel along a determined path. Causing said robot to travel in the reverse direction along its path is a simple modification that would be well within the technological capabilities of a person of ordinary skill in the art. Additionally, the device taught in Iken in view of Eade is configured to control the robot based on a comparison to a threshold, so modifying the device to control the robot to reverse along its path until a safety condition is met (such as a safety distance) as taught in Nielsen would be a simple modification that would only require the addition of a known algorithm. Such modification would not change or introduce new functionality. 
No inventive effort would have been required. The combination would have yielded the predictable result of a device comprising: wherein to control the robot to move from location to location toward a target point further comprises to perform a retreat operation by following, in reverse, waypoints that were used to arrive at a current position until a safe retreat condition is met, wherein a safety margin is disabled during performance of the retreat operation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the device taught in Iken in view of Eade with wherein to control the robot to move from location to location toward a target point further comprises to perform a retreat operation by following, in reverse, waypoints that were used to arrive at a current position until a safe retreat condition is met, wherein a safety margin is disabled during performance of the retreat operation taught in Nielsen with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because the combination would have yielded predictable results. Iken in view of Eade in further view of Nielsen does not teach obtain an improved occupancy grid by combining a current scan of an environment around the robot with at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the current scan. 
Sequeira, in the same field of endeavor, teaches obtain an improved occupancy grid by combining a current scan of an environment around the robot with at least one prior scan of the environment (Sequeira: Column 2 line 66 – Column 3 line 31, “To achieve this object, the present invention proposes, in a first aspect, a method for constructing a 3D reference map of an environment useable in (a method for) real-time mapping, localization and/or change analysis, comprising the following steps: acquiring (3D) scanner data of the environment with a mobile real-time laser range scanner at a rate of at least 5 frames (i.e. 5 point clouds), preferably at least 10 frames per second, constructing, using the (3D) scanner data for each of a plurality of poses of the laser range scanner, each pose having an associated point cloud defined by the scanner data, a map presentation, the map presentation having a data structure configured for random sample access thereto in constant time, fast nearest neighbor search and scalability over large areas, and building, using the map presentation, the 3D reference map for the environment using a 3D Simultaneous Localization And Mapping (3D SLAM) framework, said building comprising using an odometer module estimating a current pose of the laser range scanner for each point cloud based on the registration of the (last) point cloud to the local map presentation, using a local trajectory optimization module refining the trajectory of a (sub)set of point clouds in order to minimize the drift in the local map presentation, and performing offline a global trajectory optimization by reconstructing an entire map of the environment (preferably by using the entire set of point clouds) taking advantage of (or taking into account) loop closures, thereby forming said 3D reference map.”, Column 9 lines 32-34, “The odometer is typically performed in real-time.”, Column 10 lines 9-21, “In one preferred aspect, the present invention proposes an optimization method 
or framework for Simultaneous Localization And Mapping (SLAM) that properly models the acquisition process in a scanning-while-moving scenario. Each measurement is correctly reprojected in the map reference frame by considering a continuous time trajectory which is defined as the linear interpolation of a discrete set of control poses in SE(3). The invention also proposes a particularly efficient data structure that makes use of a hybrid sparse voxelized representation, allowing large map management. Thanks to this the inventors were also able to perform global optimization over trajectories, resetting the accumulated drift when loops are performed.”, Column 13 lines 32-37, “A preferred optimization framework of the invention is composed by two consecutive modules: an odometer that estimates the pose of each cloud given a map and a local trajectory optimizer that refines the trajectory of a set of clouds. Both modules employ the map data structure as described herein to handle the growing map.”, Column 13 lines 45-58, “The input of such a framework is a set of 3D point clouds {C_i} produced with the data streamed by the sensor (in case of a Velodyne scanner, the point cloud is generated after a complete revolution of the sensor). Each point cloud C_i is composed by a set of points P={p_j}, j=1 ... N_P, a set of relative timestamps T={t_j} and a set of normal unit vectors N={n_j}. Relative timestamps are assigned such that the first point produced has timestamp 0 and the last one has 1. Normal unit vectors may be estimated with the unconstrained least square formulation proposed in H. Badino, D. Huber, Y. Park, and T. Kanade, "Fast and accurate computation of surface normals from range images," in ICRA, 2011, taking advantage of box filtering on the point cloud grid structure.”. One of ordinary skill in the art would recognize that the cited passages describe a process by which a 3D map of an environment of a robot is updated. 
The cited passages clearly describe that the prior scan of the environment (i.e. the scan corresponding to relative timestamp 0) is position adjusted using odometer data to update a current scan of the environment (i.e. the scan data corresponding to relative timestamp 1).), wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the current scan (Sequeira: Column 13 lines 60-64, “Initially, one needs to produce a first estimate of the sensor's trajectory by recovering the pose of each point cloud. Since the sensor is moving, one considers as representative pose of the cloud the sensor pose when the last point is received.”, Column 13 line 65 – Column 14 line 2, “One performs a point-plane ICP between a subset of points of the last received cloud and the map. Like in F. Moosmann and C. Stiller, "Velodyne SLAM," in IVS, 2011, the selected points of the cloud are unwarped by considering the last estimated motion before performing the registration.”, Column 14 lines 34-41, “This module takes as input the list of clouds with their associated poses RCodo and performs a trajectory refinement by employing a local window approach. When the distance between the first and the last pose in the list is larger than a threshold, cloud poses are optimized and a new list RCREF={[C,, I'*REF]}, i=1: Ne is produced with the refined pose and the input clouds. Notice that this step properly integrates the unwarping in the optimization.”, Column 15 lines 37-56, “Once the optimization is terminated, the list RCREF can be updated with the optimized poses. Then, the entire set of points and normals of the clouds are converted into world coordinates according to Equation 3 and then added to the map M. 
At this stage one takes advantage of the efficient strategy to update the local map described in section A.2.: before adding points, one firstly deletes from the map all points that are further than a given radius from the last trajectory pose and then one adds the transformed clouds from RCREF. Once the map is updated a new kd-tree is created on the resulting points to allow subsequent nearest neighbor searches. The list RCodo is cleared and the odometer guess for the next cloud registration is updated according to the last two poses of RCREF. The proposed formulation represents an adherent description of the real sensor model, which acquires points while moving: point transformations in world coordinates involve both initial and final poses of each cloud. Moreover, the estimation of each pose (apart the first and the last) is directly influenced by two clouds.”. The cited passages clearly show that the prior scan (i.e. the scan corresponding to relative timestamp 0) is position adjusted based on the odometer data. If the position adjusted prior scan differs from the current scan (i.e. the scan corresponding to relative timestamp 1) above a threshold, the poses of each point in the point cloud of the two scans are optimized. These optimized poses are then used to update the 3D map of the area. One of ordinary skill in the art would clearly see that this teaches a method of improving the occupancy grid by combining a current scan of the environment around the robot with at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the current scan. Please see Columns 14-15 for the equations used to derive the updated map based on the position adjusted scan data.). 
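Purely as an illustrative, translation-only sketch (the names are hypothetical and the grids are simplified to sets of occupied cells), combining a current scan with a prior scan that has been position-adjusted forward into the current frame might look like:

```python
def shift_scan(scan, delta):
    """Position-adjust a prior scan forward by the pose change (dx, dy) so
    that its points are expressed in the frame of the current scan."""
    dx, dy = delta
    return {(x - dx, y - dy) for (x, y) in scan}

def combine_scans(current_scan, prior_scan, delta):
    """Improved occupancy set: union of the current scan with the
    position-adjusted prior scan."""
    return current_scan | shift_scan(prior_scan, delta)

# robot moved +1 in x since the prior scan; the prior obstacle at (2, 0)
# therefore appears at (1, 0) in the current frame
grid = combine_scans({(1, 0), (3, 0)}, {(2, 0)}, (1, 0))
```

A full implementation would of course also handle rotation and occupancy probabilities, which this sketch deliberately omits.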
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the device taught in Iken in view of Eade in further view of Nielsen with obtain an improved occupancy grid by combining a current scan of an environment around the robot with at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the current scan taught in Sequeira with a reasonable expectation of success. One of ordinary skill in the art would have been motivated to make this modification because such a method of updating the map of an environment allows for the minimization and correction of drift in the map representation of the region. One of ordinary skill in the art would recognize that minimizing drift in a map provides a more accurate map (Sequeira: Column 2 line 66 – Column 3 line 31, “using a local trajectory optimization module refining the trajectory of a (sub)set of point clouds in order to minimize the drift in the local map presentation”, Column 18 line 43 – Column 19 line 7, “The inventors compared their system with the publicly available Velodyne SLAM [F. Moosmann and C. Stiller, "Velodyne SLAM," in IVS, 2011] that also performs a motion compensation on the acquired point clouds. To compare the two systems the inventors measured drift accumulated using the outdoor car dataset. Since the same location is revisited multiple times, they estimated drift by registering the generated initial local map with the one generated at each subsequent passage. The translation and orientation components of the registration transform aligning the current local map to the initial one indicate how much drift has been accumulated. One of the salient characteristics of [F. Moosmann and C. 
Stiller, "Velodyne SLAM," in IVS, 2011] is the presence of a map refinement strategy (called adaption) based on a set of heuristic tests that positively influence the trajectory estimation. Since the present system is focused on the optimization strategy by proper modeling the problem, the inventors disabled this feature in the original work to focus the analysis on the trajectory estimation. Results after each loop are shown in Table I. It can be noticed that one accumulates less drift than the original work. Moreover the present system is a natural formulation of the problem that requires less configuration parameters than the heuristic strategies of the Velodyne SLAM. Performance of the present system is superior to the Velodyne SLAM system both in terms of execution time and in the ability of maintaining a global map of the environment, while in the original work only a local map is maintained. The ability of using the global map has been confirmed, in case of the use of loop closure and the global optimization technique to correct the drift accumulated in the first loop and the use of the global map for the next loops.”). Regarding claim 20, Iken in view of Eade in view of Nielsen in further view of Sequeira teaches wherein the mitigation strategy comprises: a reset of the localization algorithm; or a return of the robot to the prior position or a previous position with an associated error vector that satisfies the predetermined error criterion (Eade: Column 9 lines 10-65, “In some embodiments removing at least one edge of said plurality of edges present in the graph comprises: i) generating a current graph mean; ii) generating an error estimate for each edge, the error estimate being based at least in part on the difference between the pose of each of the plurality of edges present in the graph and the current graph mean; iii) identifying at least one edge having a lowest error estimate; and iv) removing the at least one identified edge from the graph.”. 
As can be seen from the cited passage, the removal of the erroneous edges constitutes a partial reset of the localization algorithm.). Response to Arguments Applicant’s arguments with respect to claim(s) 1, specifically regarding that the combination of Lee in view of Zhang in further view of Slobodyanyuk does not teach the limitations of the amended claim 1, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant’s arguments with respect to claim(s) 19, specifically regarding that the combination of Iken in view of Eade in further view of Nielsen does not teach the limitations of the amended claim 19, have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant's arguments filed December 29th, 2025 have been fully considered but they are not persuasive. Regarding Applicant’s arguments on Pages 9-10, Applicant argues that the combination of prior art used in the rejection of claim 10 does not teach the amended limitations of claim 10. Specifically on Page 10 of Applicant’s arguments, Applicant argues that the primary reference Eade does not teach the limitation “obtain an improved occupancy grid based on the combined scan data and at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the combined scan data”. The Examiner respectfully disagrees. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. 
See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). As stated above in the 35 U.S.C. § 103 rejection of independent claim 10, Eade was not solely relied upon. Eade teaches a device comprising a processor configured to (Eade: Column 14 lines 5-11): obtain an expected position of a robot in relation to a prior position of the robot within an environment (Eade: Column 18 lines 19-30, Column 19 lines 11-23); obtain prior scan data of the environment at the prior position and a previous key point within the prior scan data (Eade: Column 19 lines 24-32); obtain current scan data of the environment at a current position of the robot (Eade: Column 28 lines 47-55); combine the current scan data with the transformed scan data as combined scan data (Eade: Column 28 lines 56-67); identify an observed key point based on the combined scan data (Eade: Column 28 lines 56-67, Column 29 lines 1-3); determine a correlation between the observed key point and the previous key point (Eade: Column 29 lines 4-25); control the robot to move from location to location toward a target point based on the estimated actual position and the combined scan data (Eade: Column 13 line 64 – Column 14 line 4, Column 19 lines 11-23, Column 19 lines 24-32). The secondary reference Sequeira teaches transform, based on a difference between the prior position and the expected position, the prior scan data into transformed scan data (Sequeira: Column 25 lines 26-32). 
obtain an improved occupancy grid based on the combined scan data and at least one prior scan of the environment (Sequeira: Column 2 line 66 – Column 3 line 31, Column 9 lines 32-34, Column 10 lines 9-21, Column 13 lines 32-37, Column 13 lines 45-58), wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the combined scan data (Sequeira: Column 13 lines 60-64, Column 13 line 65 – Column 14 line 2, Column 14 lines 34-41, Column 15 lines 37-56). The secondary reference Hebbar teaches wherein the estimated actual position is based on a maximum scan range of a sensor for scanning the environment (Hebbar: ¶ 0068, ¶ 0072, ¶ 0077, ¶ 0079). As can be clearly seen from the cited passages, Sequeira teaches a method wherein a prior scan of an environment (i.e. the scan corresponding to the relative timestamp 0) is position adjusted using odometer data. This position adjusted scan data is then compared to the current scan data (i.e. the scan corresponding to the relative timestamp 1). If the difference between the position adjusted scan data and the current scan data differs above a threshold, the position adjusted scan data and the current scan data are used to optimize the pose of each point in the point cloud of the current scan data. The map of the environment is then updated using these optimized points of the point cloud. One of ordinary skill in the art would recognize that Sequeira clearly teaches the limitation “obtain an improved occupancy grid based on the combined scan data and at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the combined scan data”. 
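As an illustrative, translation-only sketch of the timestamp-based motion compensation described above (the names are hypothetical; real implementations interpolate full SE(3) poses), each point can be reprojected using the capture pose interpolated at its relative timestamp:

```python
def interpolate_pose(pose0, pose1, t):
    """Linear interpolation between the pose at relative timestamp 0
    (start of the sweep) and at relative timestamp 1 (end of the sweep)."""
    return tuple(a + t * (b - a) for a, b in zip(pose0, pose1))

def unwarp(points_with_t, pose0, pose1):
    """Express each sensor-relative point in the end-of-sweep frame using
    the capture pose interpolated at its relative timestamp."""
    end_x, end_y = pose1
    out = []
    for (x, y), t in points_with_t:
        px, py = interpolate_pose(pose0, pose1, t)
        # shift the point by the motion remaining after its capture time
        out.append((x + px - end_x, y + py - end_y))
    return out

# sensor moved from (0, 0) to (2, 0) during the sweep; a point captured
# halfway through is shifted by the remaining motion
compensated = unwarp([((1, 0), 0.5), ((1, 0), 1.0)], (0, 0), (2, 0))
```

The unwarped points can then be registered against the map as a single, motion-consistent scan.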
Specifically on Page 9 of Applicant’s arguments, Applicant argues that Sequeira fails to teach the limitation “obtain an improved occupancy grid based on the combined scan data and at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the combined scan data”. The Examiner respectfully disagrees. As can be seen in the 35 U.S.C. § 103 rejection of the independent claims 1, 10, and 19, Sequeira teaches a method wherein a prior scan of an environment (i.e. the scan corresponding to the relative timestamp 0) is position adjusted using odometer data. This position adjusted scan data is then compared to the current scan data (i.e. the scan corresponding to the relative timestamp 1). If the difference between the position adjusted scan data and the current scan data differs above a threshold, the position adjusted scan data and the current scan data are used to optimize the pose of each point in the point cloud of the current scan data. The map of the environment is then updated using these optimized points of the point cloud. Additionally, one of ordinary skill in the art would recognize that the scan data corresponding to the relative timestamp 0 is clearly the prior scan data and that the scan data corresponding to the relative timestamp 1 is clearly the scan data corresponding to the current time and position of the robot. Therefore, one of ordinary skill in the art would recognize that Sequeira clearly teaches the limitation “obtain an improved occupancy grid based on the combined scan data and at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the combined scan data”. Therefore, for the reasons stated above and in the 35 U.S.C. 
§ 103 rejection section, the combination of Lee in view of Zhang in further view of Slobodyanyuk in further view of Sequeira for the rejection of independent claim 1, the combination of Eade in view of Sequeira in further view of Hebbar for the rejection of independent claim 10, and the combination of Iken in view of Eade in further view of Nielsen in further view of Sequeira for the rejection of independent claim 19, teaches the limitation “obtain an improved occupancy grid based on the combined scan data and at least one prior scan of the environment, wherein the at least one prior scan is position-adjusted forward, to the same time and position of the current scan, to correspond to the combined scan data”. Therefore, the 35 U.S.C. § 103 rejections of claims 1, 10, and 19 are maintained. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to Noah W Stiebritz whose telephone number is (571) 272-3414. The examiner can normally be reached Monday through Friday, 7-5 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado, can be reached at (571) 270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /N.W.S./Examiner, Art Unit 3658 /Ramon A. Mercado/Supervisory Patent Examiner, Art Unit 3658

Prosecution Timeline

Dec 22, 2023
Application Filed
Jun 30, 2025
Non-Final Rejection — §103
Sep 17, 2025
Applicant Interview (Telephonic)
Sep 17, 2025
Examiner Interview Summary
Oct 01, 2025
Response Filed
Oct 20, 2025
Final Rejection — §103
Nov 03, 2025
Interview Requested
Dec 17, 2025
Examiner Interview Summary
Dec 29, 2025
Response after Non-Final Action
Jan 15, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Mar 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602063
LOAD HANDLING SYSTEM AND LOAD HANDLING METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12575900
Steerable Eversion Robot System and Method of Operating the Steerable Eversion Robot System
2y 5m to grant Granted Mar 17, 2026
Patent 12552043
METHOD FOR CONTROLLING ROBOTIC ARM, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Feb 17, 2026
Patent 12472640
CONTROL METHOD AND SYSTEM FOR ARTICLE TRANSPORTATION BASED ON MOBILE ROBOT
2y 5m to grant Granted Nov 18, 2025
Patent 12467759
VEHICLE WITH SWITCHABLE FORWARD AND BACKWARD CONFIGURATIONS, CONTROL METHOD, AND CONTROL PROGRAM
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
51%
With Interview (-15.6%)
2y 6m
Median Time to Grant
High
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
