Prosecution Insights
Last updated: April 19, 2026
Application No. 18/331,466

ENABLING MOBILE ROBOTS FOR AUTONOMOUS MISSIONS

Status: Non-Final Office Action (§103)
Filed: Jun 08, 2023
Examiner: CHANDRASIRI, UPUL PRIYADARSHAN
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: BOARD OF SUPERVISORS OF LOUISIANA STATE UNIVERSITY AND AGRICULTURAL AND MECHANICAL COLLEGE
OA Round: 3 (Non-Final)

Grant Probability: 20% (At Risk)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: -9%

Examiner Intelligence

This examiner grants only 20% of cases.

Career Allow Rate: 20% (2 granted / 10 resolved; -32.0% vs TC avg)
Interview Lift: -28.6% (minimal lift; based on resolved cases with interview)
Typical Timeline: 2y 5m avg prosecution
Career History: 46 total applications across all art units; 36 currently pending

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 18.9% (-21.1% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 10 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/22/2026 has been entered.

Response to Amendment

The amendment filed 01/22/2026 has been entered. Claims 1, 7, 10, 11, and 20 are amended. Claims 1-20 are pending and are rejected as detailed below. This action is final as necessitated by amendment.

Claim 1 Is Patentable Under 35 U.S.C. §112(a) and §112(b)

The amendment to claim 1 is entered. Therefore, the 35 U.S.C. §112(a) and §112(b) rejections of claim 1 have been withdrawn.

Examiner note: It appears Applicant has resubmitted the amendment to claim 7 that was filed on 09/05/2025. Accordingly, current claim 7 shows "the next desired destination". Since the same amendment was present in the claim 7 filed on 09/05/2025 and no new amendments are introduced, current claim 7 should have been a clean copy. Furthermore, the status identifier for claim 7 should have been "Previously Presented" rather than "Currently Amended", since no new amendments are present in current claim 7. To advance compact prosecution, the Examiner has examined the application on the merits.
Response to Arguments

Claims 1, 3-5, and 7-9 Are Patentable Over the Prior Art Because the Prior Art Fails to Teach or Suggest Creating a Location-Based Map by Scaling a Digital Background Image Based on a Dimension of a Represented Space

Applicant argues that Huval is not directed to the technical problem addressed by the present application, since "the human annotator selects a set of (e.g., two or more) pixels or points in the LIDAR frame," in which "the annotation portal" performs calculations and "overlays a lane marker label defined by this curvilinear line over the LIDAR frame and intersecting pixels or points selected by the human annotator," as described in paragraph [0046]. In contrast, claim 1 recites "prior to the mobile robot navigating the physical area, creating, via the computing device, a location-based map based at least in part on a digital background image of a floor layout plan, a local map, or a site layout plan, wherein the location-based map is created by scaling to the digital background image, the location-based map being created by calculating a horizontal or a vertical scale based at least in part on a dimension of one space represented in the digital background image and a corresponding measurement performed via a user interface." (Emphasis added.) Therefore, Applicant contends, Huval does not disclose or suggest at least these elements of claim 1.

Applicant also argues that Yamauchi in view of Dasler does not show or suggest these claim elements, nor does the Office Action rely on Yamauchi in view of Dasler to allege that these claim elements are shown or suggested. Since Yamauchi in view of Dasler and in further view of Huval fail to disclose the above-recited elements individually, it follows that the combination of the cited references fails to disclose or suggest all of the elements of claim 1. For at least these reasons, Applicant respectfully requests that the rejection of claim 1 be withdrawn.
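For context on the disputed limitation, the claimed scaling step reduces to a simple ratio: a known real-world dimension of one space shown in the floor plan, divided by the corresponding measurement drawn on the image through a user interface, yields the map scale, which then converts pixel coordinates to world coordinates. A minimal sketch, with all function and variable names hypothetical (none are drawn from the application or the cited references):

```python
# Hypothetical sketch of the claimed map-scaling step. A known real-world
# dimension of one space in the floor plan, together with the corresponding
# measurement made on the image via a user interface, yields a
# meters-per-pixel scale for the location-based map.

def compute_scale(real_dimension_m: float, measured_pixels: float) -> float:
    """Meters per pixel, from one known dimension and its on-screen measurement."""
    if measured_pixels <= 0:
        raise ValueError("measurement must be positive")
    return real_dimension_m / measured_pixels

def pixel_to_world(px: float, py: float, scale: float) -> tuple[float, float]:
    """Convert a pixel coordinate on the background image to world coordinates."""
    return (px * scale, py * scale)

# Example: a 10 m corridor measured as 400 px on the floor plan image.
scale = compute_scale(10.0, 400.0)          # about 0.025 m per pixel
x, y = pixel_to_world(200.0, 80.0, scale)   # world position of a map point
print(scale, x, y)
```

The same scale could be computed horizontally or vertically, matching the claim's "horizontal or a vertical scale" language; an anisotropic image would need one of each.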
In addition, Applicant respectfully requests that the rejections of claims 3-5 and 7-9 be withdrawn as depending from claim 1. Claims 3-5 and 7-9 may also be patentable for the additional elements that they recite.

Applicant's arguments, as amended herein, with respect to the rejection of claim 1 under 35 U.S.C. §103 have been fully considered but are not persuasive. More specifically, the combination of Yamauchi, Dasler, and Huval teaches "prior to the mobile robot navigating the physical area, creating, via the computing device, a location-based map based at least in part on a digital background image of a floor layout plan, a local map, or a site layout plan, wherein the location-based map is created by scaling to the digital background image, the location-based map being created by calculating a horizontal or a vertical scale based at least in part on a dimension of one space represented in the digital background image and a corresponding measurement performed via a user interface." In particular, the amendments to claim 1 are addressed in the instant Office action with respect to the teachings of Yamauchi, Dasler, and Huval.

Furthermore, per MPEP 2145(IV), one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. Where a rejection of a claim is based on two or more references, a reply that is limited to what a subset of the applied references teaches or fails to teach, or that fails to address the combined teaching of the applied references, may be considered an argument that attacks the references individually. Applicant's reply fails to address the combined teaching of the applied references and instead argues only that each reference individually does not teach all of the claim limitations.
Claims 11, 13-15, 17, 18, and 20

Applicant also argues that amended independent claims 11 and 20 are allowable for at least the same reasons as those discussed above with regard to claim 1, and that the corresponding dependent claims 13-15, 17, and 18 are allowable by reason of depending from an allowable claim 11. Applicant's arguments, as amended herein, with respect to the rejections of claims 11, 13-15, 17, 18, and 20 have been fully considered but are not persuasive, as the combination of Yamauchi, Dasler, and Huval teaches the amended claims 11 and 20.

Claims 2, 10, and 12 Are Patentable over Yamauchi in view of Dasler et al., in view of Huval, and in view of Keivan

Applicant argues that the addition of Keivan fails to cure the deficiencies of the rejections of claims 1 and 11, as discussed above. Accordingly, Applicant respectfully requests that the rejections of claims 2, 10, and 12 be withdrawn for at least the reason that claims 2, 10, and 12 depend from claims 1 or 11. Applicant's arguments, as amended herein, with respect to the rejections of claims 2, 10, and 12 have been fully considered but are not persuasive, as the combination of Yamauchi, Dasler, and Huval teaches the amended claims 1 and 11.

Claims 6, 16, and 19 Are Patentable over Yamauchi in view of Dasler et al., in view of Huval, and in view of Beth et al.

Applicant argues that the addition of Beth fails to cure the deficiencies of the rejections of claims 1 and 11, as discussed above. Accordingly, Applicant respectfully requests that the rejection of claims 6, 16, and 19 be withdrawn for at least the reason that claims 6, 16, and 19 depend from claims 1 and 11. Applicant's arguments, as amended herein, with respect to the rejections of claims 6, 16, and 19 have been fully considered but are not persuasive, as the combination of Yamauchi, Dasler, and Huval teaches the amended claims 1 and 11.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-5, 7-9, 11, 13-15, 17-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yamauchi (US 20220390950 A1), and further in view of Dasler (US 20220196405 A1) and Huval (US 20190243372 A1).

Regarding claim 1, Yamauchi teaches (Currently Amended) A method of executing an autonomous mission for a mobile robot (Yamauchi, at least one para. 0004; “An aspect of the present disclosure provides a computer-implemented method that when executed by data processing hardware causes the data processing hardware to perform operations.
The operations include receiving a navigation route including a set of waypoints including a first waypoint and a second waypoint, generating a local obstacle map based on sensor data captured by a mobile robot, determining that the mobile robot is unable to execute a movement instruction along a path between the first waypoint and the second waypoint due to an obstacle obstructing the path, identifying a third waypoint based on the local obstacle map, and generating an alternative path to navigate the mobile robot to the third waypoint to avoid the obstacle.”), comprising:

providing, via a computing device associated with a mobile robot (Yamauchi, at least one para. 0099; “FIG. 4 is a schematic view of an example computing device 400 that may be used to implement the systems (e.g., the robot 100, the sensor system 130, the computing system 140, the remote system 160, the control system 170, the perception system 180, and/or the navigation system 200) and methods.”), an application (Yamauchi, at least one para. 0106; “These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language.”) for generating an autonomous mission for a plurality of autonomous mobile robots (Yamauchi, at least one para. 0069; “By gathering an understanding of the environment 10, the robot 100 may later move about the environment 10 (e.g., autonomously, semi-autonomously, or with assisted operation by a user) using the information or a derivative thereof gathered from the initial mapping process.”), the autonomous mission comprising a plurality of destination locations (Yamauchi, at least one para. 0065; “To illustrate, FIG. 1A depicts a navigation route 202 that includes three locations shown as waypoints 212 (e.g., shown as waypoints 212a, 212b, and 212c).”) for the mobile robot to visit in a physical area and at least one task to be performed at each of the plurality of destination locations (Yamauchi, at least one para. 0079; “In some configurations, the generator 210 receives a task or mission and generates a route 202 as a sequence of waypoints 212 that will achieve that task or mission. For instance, for a mission to inspect different locations on a pipeline, the generator 210 generates a route 202 that includes waypoints 212 that coincide with the inspection locations”); (Yamauchi, at least one para. 0035; “The sensor data (e.g., image data or point cloud data) gathered during this initial mapping process allows the robot to construct a map of the environment.”) (Yamauchi, at least one para. 0069; “The navigation generator 210 (also referred to as the generator 210) is configured to construct a topological map 204 and to generate the navigation route 202 from the topological map 204. To generate the topological map 204, the navigation system 200 and, more particularly, the generator 210 records locations within an environment 10 that has been traversed or is being traversed by the robot 100 as waypoints 212.”);

instructing, via the computing device, a mobile robot to self-navigate from a current location to the plurality of destination locations along the planned path (Yamauchi, at least one para. 0080; “The route executor 220 (also referred to as the executor 220) is configured to receive and to execute the navigation route 202. To execute the navigation route 202, the executor 220 may coordinate with other systems of the robot 100 to control the locomotion-based structures of the robot 100 (e.g., the legs) to drive the robot 100 along the route 202 through the sequence of waypoints 212.”);

and instructing, via the computing device, the mobile robot to automatically perform the at least one task based at least in part on the mobile robot arriving at one of the plurality of destination locations (Yamauchi, at least one para. 0048; “As the sensor system 130 gathers sensor data 134, a computing system 140 stores, processes, and/or communicates the sensor data 134 to various systems of the robot 100 (e.g., the computing system 140, the control system 170, the perception system 180, and/or the navigation system 200). In order to perform computing tasks related to the sensor data 134, the computing system 140 of the robot 100 (which is schematically depicted in FIG. 1A and can be implemented in any suitable location(s), including internal to the robot 100) includes data processing hardware 142 and memory hardware 144. The data processing hardware 142 is configured to execute instructions stored in the memory hardware 144 to perform computing tasks related to activities (e.g., movement and/or movement-based activities) for the robot 100.”).
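The claim 1 elements mapped to Yamauchi above amount to a route of destination waypoints, each paired with at least one task performed on arrival. A minimal sketch of that mission structure, with all names hypothetical rather than drawn from Yamauchi or the claims:

```python
# Illustrative-only sketch of the claimed mission flow: a route is a sequence
# of destination waypoints, each carrying tasks the robot performs on arrival
# (cf. Yamauchi's waypoints 212 and route executor 220). All names here are
# hypothetical and not taken from the reference or the application.

from dataclasses import dataclass, field

@dataclass
class Waypoint:
    name: str
    x: float
    y: float
    tasks: list = field(default_factory=list)  # tasks to run on arrival

def execute_mission(route: list) -> list:
    """Visit each waypoint in order and 'perform' its tasks; return an action log."""
    log = []
    for wp in route:
        log.append(f"navigate to {wp.name} ({wp.x}, {wp.y})")
        for task in wp.tasks:
            log.append(f"at {wp.name}: {task}")
    return log

route = [
    Waypoint("dock", 0.0, 0.0),
    Waypoint("valve-3", 4.5, 2.0, tasks=["inspect", "photograph"]),
]
for line in execute_mission(route):
    print(line)
```

A real executor would of course plan motion between waypoints and react to obstacles; the sketch only shows the destination-plus-task data shape the claim recites.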
Even though Yamauchi teaches creating the location-based map, Yamauchi does not appear to explicitly teach: prior to the mobile robot navigating the physical area, creating a location-based map based at least in part on a digital background image of a floor layout plan, a local map, or a site layout plan, wherein the location-based map is created by scaling to the digital background image, the location-based map being created by calculating a horizontal or a vertical scale based at least in part on a dimension of one space represented in the digital background image and a corresponding measurement performed via a user interface; and prior to the mobile robot navigating the physical area, determining a planned path.

However, Dasler, in the same field of endeavor (Dasler, at least one para. 0004; “Techniques and systems are described for generating indications of traversable paths. In an example, a computing device implements a navigation system to receive map data describing a map of a physical environment that includes a destination, locations of display devices, and relative orientations of the display devices in the physical environment.”), teaches prior to the mobile robot navigating the physical area, creating a location-based map (Dasler, at least one para. 0082-83; “Map data is received describing a map of a physical environment that includes a destination, locations of display devices in the physical environment, and relative orientations of the display devices in the physical environment (block 402). The computing device 102 implements the navigation module 122 to receive the map data in one example. A navigation graph is formed (block 404) by representing the destination and the locations of the display devices as nodes of the navigation graph and connecting the nodes with edges that indicate traversable path segments in the physical environment. Request data is received, via a network, describing a request for navigation to the destination and a source of the request (block 406).”, wherein block 402 (receiving map data) and block 404 (representing the navigation path within the physical area) take place before block 406, which requests navigation to the destination); based at least in part on a digital background image of a floor layout plan, a local map, or a site layout plan, wherein the location-based map is created by scaling to the digital background image, the location-based map being created by calculating a horizontal or a vertical scale based at least in part on a dimension of one space represented in the digital background image and (Dasler, at least one para. 0055; “The navigation module 122 is capable of generating and/or receiving the map data 124 in a variety of different ways. In an example, the navigation module 122 generates the map data 124 using an existing map of the physical environment 106 which approximates dimensions and scale of the physical environment 106. For example, if a digital image depicting the physical environment 106 such as a digital photograph of the physical environment 106 is available, then navigation module 122 generates the map data 124 using the digital image. If a map of the physical environment 106 is available such as a map commonly posted in a building, then the navigation module 122 generates the map data 124 using this map or a digital photograph of the map. In one example, the navigation module 122 receives the map data 124 which describes an annotated map of the physical environment 106.”); and prior to the mobile robot navigating the physical area, determining a planned path (Dasler, at least one para. 0082-83, as quoted above, wherein blocks 402 and 404 take place before block 406).

Yamauchi and Dasler are both considered analogous to the claimed invention because both are in the same field as the claimed invention: generating a navigational path for a mobile robot. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the initial mapping process of Yamauchi with the digital image and completion of the mapping process prior to the navigation process of Dasler. One of ordinary skill in the art would have been motivated to make this modification in order to expedite the initial mapping process and reduce its cost (Dasler; 0003).

The combination of Yamauchi and Dasler does not appear to explicitly teach a corresponding measurement performed via a user interface. However, Huval, in the same field of endeavor (Huval, at least one para. 0003; “This invention relates generally to the field of autonomous vehicles and more specifically to a new and useful method for calculating nominal vehicle paths for lanes within a geographic region in the field of autonomous vehicles.”), teaches a corresponding measurement performed via a user interface (Huval, at least one para. 0046; “In one implementation, the human annotator selects a set of (e.g., two or more) pixels or points in the LIDAR frame. The annotation portal then: calculates a smooth curvilinear line that extends between these pixels or points; stores geospatial coordinates and georeferenced orientations of vertices of the curvilinear line; calculates georeferenced orientations of tangent lines representing this curvilinear line, such as at each vertex; and overlays a lane marker label defined by this curvilinear line over the LIDAR frame and intersecting pixels or points selected by the human annotator.”).

The combination of Yamauchi, Dasler, and Huval is considered analogous to the claimed invention because Yamauchi, Dasler, and Huval are in the same field as the claimed invention: generating a navigational path for a mobile robot. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the mapping process of Yamauchi with the teaching of Huval. One of ordinary skill in the art would have been motivated to make this modification in order to selectively add, delete, and/or move a plurality of destination locations to easily recalculate the planned path (Huval; 0046).

Regarding claim 3, Yamauchi teaches (Original) The method of claim 1, wherein the location-based map comprises a plurality of points of interest (POIs) and a plurality of connecting edges, wherein each of the plurality of connecting edges indicates a path between two POIs (Yamauchi, at least one para. 0072; “When recording each waypoint 212, the generator 210 generally associates a waypoint edge 214 (also referred to as an edge 214) with a respective waypoint 212 such that the topological map 204 produced by the generator 210 includes both waypoints 212 and their respective edges 214.
An edge 214 is configured to indicate how one waypoint 212 (e.g., a first waypoint 210a) is related to another waypoint 212 (e.g., a second waypoint 212b).”).

Regarding claim 4, Dasler teaches (Original) The method of claim 3, wherein the location-based map comprises a spatial reference that indicates a direction for the mobile robot to be sent to and respective tasks to be completed (Dasler, at least one para. 0044; “As used herein, the term “map of a physical environment” refers to a representation of a physical environment that depicts physical features of the physical environment and relationships between the physical features. By way of example, the relationships include spatial relationships between the physical features of the physical environment.”).

Regarding claim 5, Yamauchi teaches (Original) The method of claim 3, wherein each of the plurality of POIs (Yamauchi, at least one para. 0065; “To illustrate, FIG. 1A depicts a navigation route 202 that includes three locations shown as waypoints 212 (e.g., shown as waypoints 212a, 212b, and 212c).”) is represented by a set of coordinates based at least in part on a respective location of the POIs in the digital image and an embedded scale for the digital image (Yamauchi, at least one para. 0030; “Another form of a map representation is referred to as a metric map. Metric maps are derived from a more precise mapping framework when compared to topological maps. Metric maps represent locations of objects (e.g., obstacles) in an environment based on precise geometric coordinates (e.g., two-dimensional coordinates or three-dimensional coordinates).”).

Regarding claim 7, Yamauchi teaches (Currently Amended) The method of claim 1, wherein instructing the mobile robot to self-navigate further comprises: executing, a feature of the application, via the computing device, to divide the planned path into a plurality of segments between the current location and a next destination location (Yamauchi, at least one para. 0057; “the no step map 182b is partitioned into a grid of cells where each cell represents a particular area in the environment 10 about the robot 100. For instance, each cell is a three centimeter square. For ease of explanation, each cell exists within an X-Y plane within the environment 10. When the perception system 180 generates the no-step map 182b, the perception system 180 may generate a Boolean value map where the Boolean value map identifies no step regions and step regions. A no step region refers to a region of one or more cells where an obstacle exists while a step region refers to a region of one or more cells where an obstacle is not perceived to exist.”, wherein the grid of cells is identified as the plurality of segments), wherein the feature comprises determining at least: a trajectory strategy, via the computing device, to enable the mobile robot to rotate itself based on a current orientation and maintain a proper orientation to move forward to a next desired destination (Yamauchi, at least one para. 0054; “The path generator 174 determines obstacles within the environment 10 about the robot 100 based on the sensor data 134. The path generator 174 communicates the obstacles to the step locator 176 such that the step locator 176 may identify foot placements for legs 120 of the robot 100 (e.g., locations to place the distal ends 124 of the legs 120 of the robot 100). The step locator 176 generates the foot placements (i.e., locations where the robot 100 should step) using inputs from the perception system 180 (e.g., map(s) 182). The body planner 178, much like the step locator 176, receives inputs from the perception system 180 (e.g., map(s) 182). Generally speaking, the body planner 178 is configured to adjust dynamics of the body 110 of the robot 100 (e.g., rotation, such as pitch or yaw and/or height) to successfully move about the environment 10.”); and a navigation strategy, via the computing device, to enable the mobile robot to move from a current destination to the next desired destination (Yamauchi, at least one para. 0054, as quoted above for the trajectory strategy).

Regarding claim 8, Yamauchi teaches (Original) The method of claim 7, wherein instructing the mobile robot to self-navigate further causes the mobile robot to at least: identify an obstacle in proximity on the planned path using a sensor of the mobile robot (Yamauchi, at least one para. 0086; “In some configurations, when an edge 214 is blocked by an unforeseeable object 20, the executor 220 resorts to other maps that are available from the systems of the robot 100. In some examples, the executor 220 uses or generates a local obstacle map 222 from current sensor data 134 captured by the sensor system 130 of the robot 100.
Here, the local obstacle map 222 may refer to a more detailed map of the environment 10 than the topological map 204, but only for a local area surrounding the robot 100 (e.g., a three meter by three meter square area).”); and determine an alternative path for the mobile robot to avoid the obstacle (Yamauchi, at least one para. 0087; “In some configurations, the local obstacle map 222 includes an occupancy grid where each cell within the grid designates whether an obstacle is present in that cell or not. The executor 220 may then generate the alternative path 206 using the unoccupied cells of the occupancy grid in combination with the positions of the untraveled waypoints 212U.”).

Regarding claim 9, Yamauchi teaches (Original) The method of claim 1, wherein instructing the mobile robot to automatically perform the at least one task further comprising: accessing a task library that includes a plurality of code blocks or a plurality of submodules for execution by the mobile robot (Yamauchi, at least one para. 0048; “In order to perform computing tasks related to the sensor data 134, the computing system 140 of the robot 100 (which is schematically depicted in FIG. 1A and can be implemented in any suitable location(s), including internal to the robot 100) includes data processing hardware 142 and memory hardware 144. The data processing hardware 142 is configured to execute instructions stored in the memory hardware 144 to perform computing tasks related to activities (e.g., movement and/or movement-based activities) for the robot 100.”, wherein the memory hardware 144 is seen as the task library.) for a respective task with the mobile robot (Yamauchi, at least one para. 0079; “For instance, for a mission to inspect different locations on a pipeline, the generator 210 generates a route 202 that includes waypoints 212 that coincide with the inspection locations. In the example shown in FIG. 2A, the generator 210 generates a route 202 with a sequence of waypoints 212 that include nine waypoints 212a-i and their corresponding edges 214a-h. FIG. 2A illustrates each waypoint 212 of the route 202 in a double circle while recorded waypoints 212 that are not part of the route 202 only have a single circle. The generator 210 then communicates the route 202 to the route executor 220.”).

Regarding claim 11, Yamauchi teaches (Currently Amended) A system for managing autonomous mobile robots (Yamauchi, at least one para. 0005; “Another aspect of the present disclosure provides a mobile robot including a locomotion structure and a navigation system configured to control the locomotion structure to coordinate movement of the mobile robot.”), comprising: at least one computing device (Yamauchi, at least one para. 0099; “FIG. 4 is a schematic view of an example computing device 400 that may be used to implement the systems (e.g., the robot 100, the sensor system 130, the computing system 140, the remote system 160, the control system 170, the perception system 180, and/or the navigation system 200) and methods.”) that comprises a processor and memory (Yamauchi, at least one para. 0100; “The computing device 400 includes a processor 410 (e.g., data processing hardware 142, 162), memory 420 (e.g., memory hardware 144, 164), a storage device 430, a high-speed interface/controller 440 connecting to the memory 420 and high-speed expansion ports 450, and a low speed interface/controller 460 connecting to a low speed bus 470 and a storage device 430.”); and an application executable in the at least one computing device (Yamauchi, at least one para. 0106; “These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language.”) that, when executed by the processor, causes the at least one computing device to at least: receive a request (Yamauchi, at least one para. 0079; “In some configurations, the generator 210 receives a task or mission and generates a route 202 as a sequence of waypoints 212 that will achieve that task or mission.”) to generate an autonomous mission for a mobile robot (Yamauchi, at least one para. 0069; “By gathering an understanding of the environment 10, the robot 100 may later move about the environment 10 (e.g., autonomously, semi-autonomously, or with assisted operation by a user) using the information or a derivative thereof gathered from the initial mapping process.”); (Yamauchi, at least one para. 0035; “The sensor data (e.g., image data or point cloud data) gathered during this initial mapping process allows the robot to construct a map of the environment.”) (Yamauchi, at least one para. 0069; “The navigation generator 210 (also referred to as the generator 210) is configured to construct a topological map 204 and to generate the navigation route 202 from the topological map 204. To generate the topological map 204, the navigation system 200 and, more particularly, the generator 210 records locations within an environment 10 that has been traversed or is being traversed by the robot 100 as waypoints 212.”) based at least in part on a first entry of the plurality of destination locations for the physical area (Yamauchi, at least one para. 0080; “The route executor 220 (also referred to as the executor 220) is configured to receive and to execute the navigation route 202.
To execute the navigation route 202, the executor 220 may coordinate with other systems of the robot 100 to control the locomotion-based structures of the robot 100 (e.g., the legs) to drive the robot 100 along the route 202 through the sequence of waypoints 212.”) and a second entry of at least one task to be performed at a respective destination of the plurality of destination locations (Yamauchi, at least one para. 0079; “In some configurations, the generator 210 receives a task or mission and generates a route 202 as a sequence of waypoints 212 that will achieve that task or mission. For instance, for a mission to inspect different locations on a pipeline, the generator 210 generates a route 202 that includes waypoints 212 that coincide with the inspection locations.”) from the user interface (Yamauchi, at least one para. 0100; “The processor 410 can process instructions for execution within the computing device 400, including instructions stored in the memory 420 or on the storage device 430 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 480 coupled to high speed interface 440.”); and instruct the mobile robot to self-navigate along the planned path of the autonomous mission and to perform the at least one task at the respective destination (Yamauchi, at least one para. 0048; “As the sensor system 130 gathers sensor data 134, a computing system 140 stores, processes, and/or communicates the sensor data 134 to various systems of the robot 100 (e.g., the computing system 140, the control system 170, the perception system 180, and/or the navigation system 200). In order to perform computing tasks related to the sensor data 134, the computing system 140 of the robot 100 (which is schematically depicted in FIG. 1A and can be implemented in any suitable location(s), including internal to the robot 100) includes data processing hardware 142 and memory hardware 144. 
The data processing hardware 142 is configured to execute instructions stored in the memory hardware 144 to perform computing tasks related to activities (e.g., movement and/or movement-based activities) for the robot 100.”).

Even though Yamauchi teaches creating the location-based map, Yamauchi does not appear to explicitly teach prior to the mobile robot navigating the physical area, generating a location-based map based at least in part on a digital image of the physical area, wherein the location-based map is generated by scaling the digital image of the physical area based at least in part on a dimension of one space represented in the digital image and a corresponding measurement performed via a user interface; prior to the mobile robot navigating the physical area, determining a planned path. However, Dasler in the same field of endeavor (Dasler, at least one para. 0004; “Techniques and systems are described for generating indications of traversable paths. In an example, a computing device implements a navigation system to receive map data describing a map of a physical environment that includes a destination, locations of display devices, and relative orientations of the display devices in the physical environment.”) teaches prior to the mobile robot navigating the physical area, generating a location-based map (Dasler, at least one para. 0082-83; “Map data is received describing a map of a physical environment that includes a destination, locations of display devices in the physical environment, and relative orientations of the display devices in the physical environment (block 402). The computing device 102 implements the navigation module 122 to receive the map data in one example. A navigation graph is formed (block 404) by representing the destination and the locations of the display devices as nodes of the navigation graph and connecting the nodes with edges that indicate traversable path segments in the physical environment.
Request data is received, via a network, describing a request for navigation to the destination and a source of the request (block 406).”, wherein block 402 (receiving map data) and block 404 (representing the navigation path within the physical area) take place before block 406 that requests for navigation to the destination) based at least in part on a digital image of the physical area, wherein the location-based map is generated by scaling the digital image of the physical area based at least in part on a dimension of one space represented in the digital image and (Dasler, at least one para. 0055; “The navigation module 122 is capable of generating and/or receiving the map data 124 in a variety of different ways. In an example, the navigation module 122 generates the map data 124 using an existing map of the physical environment 106 which approximates dimensions and scale of the physical environment 106. For example, if a digital image depicting the physical environment 106 such as a digital photograph of the physical environment 106 is available, then navigation module 122 generates the map data 124 using the digital image. If a map of the physical environment 106 is available such as a map commonly posted in a building, then the navigation module 122 generates the map data 124 using this map or a digital photograph of the map. In one example, the navigation module 122 receives the map data 124 which describes an annotated map of the physical environment 106.”); prior to the mobile robot navigating the physical area, determining a planned path (Dasler, at least one para. 0082-83; “Map data is received describing a map of a physical environment that includes a destination, locations of display devices in the physical environment, and relative orientations of the display devices in the physical environment (block 402). The computing device 102 implements the navigation module 122 to receive the map data in one example.
A navigation graph is formed (block 404) by representing the destination and the locations of the display devices as nodes of the navigation graph and connecting the nodes with edges that indicate traversable path segments in the physical environment. Request data is received, via a network, describing a request for navigation to the destination and a source of the request (block 406).”, wherein block 402 (receiving map data) and block 404 (representing the navigation path within the physical area) take place before block 406 that requests for navigation to the destination). Yamauchi and Dasler are both considered to be analogous to the claimed invention because both are in the same field of generating a navigational path for a mobile robot as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the initial mapping process of Yamauchi with the digital image and completion of the mapping process prior to the navigation process of Dasler. One of ordinary skill in the art would have been motivated to make this modification in order to expedite the initial mapping process and reduce the cost of the initial mapping process. (Dasler; 0003).

The combination of Yamauchi and Dasler does not appear to explicitly teach a corresponding measurement performed via a user interface. However, Huval in the same field of endeavor (Huval, at least one para. 0003; “This invention relates generally to the field of autonomous vehicles and more specifically to a new and useful method for calculating nominal vehicle paths for lanes within a geographic region in the field of autonomous vehicles.”) teaches a corresponding measurement performed via a user interface (Huval, at least one para. 0046; “In one implementation, the human annotator selects a set of (e.g., two or more) pixels or points in the LIDAR frame.
The annotation portal then: calculates a smooth curvilinear line that extends between these pixels or points; stores geospatial coordinates and georeferenced orientations of vertices of the curvilinear line; calculates georeferenced orientations of tangent lines representing this curvilinear line, such as at each vertex; and overlays a lane marker label defined by this curvilinear line over the LIDAR frame and intersecting pixels or points selected by the human annotator.”). The combination of Yamauchi, Dasler, and Huval is considered to be analogous to the claimed invention because Yamauchi, Dasler, and Huval are in the same field of generating a navigational path for a mobile robot as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the mapping process of Yamauchi with the teaching of Huval. One of ordinary skill in the art would have been motivated to make this modification in order to selectively add, delete, and/or move a plurality of destination locations to easily recalculate the planned path. (Huval; 0046).

Regarding claim 13, Yamauchi teaches (Original) The system of claim 11, wherein the location-based map comprises a plurality of points of interest (POIs) and a plurality of connecting edges, wherein each of the plurality of connecting edges indicates a path between two POIs (Yamauchi, at least one para. 0072; “When recording each waypoint 212, the generator 210 generally associates a waypoint edge 214 (also referred to as an edge 214) with a respective waypoint 212 such that the topological map 204 produced by the generator 210 includes both waypoints 212 and their respective edges 214. An edge 214 is configured to indicate how one waypoint 212 (e.g., a first waypoint 210a) is related to another waypoint 212 (e.g., a second waypoint 212b).”).
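For illustration only, the waypoint-and-edge structure described in the passage quoted for claim 13 (POIs connected by edges that indicate traversable paths, from which a route is generated) can be sketched as a small graph search. All class and waypoint names below are hypothetical and are not drawn from the record or the cited references.

```python
from collections import deque

# Minimal sketch of a topological map: waypoints (POIs) as nodes,
# edges as traversable connections between them.
class TopologicalMap:
    def __init__(self):
        self.edges = {}  # waypoint -> set of connected waypoints

    def add_edge(self, a, b):
        # Edges are bidirectional: each indicates a path between two POIs.
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def route(self, start, goal):
        """Breadth-first search for a sequence of waypoints from start to goal."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges.get(path[-1], ()):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(path + [nxt])
        return None  # no traversable route exists

m = TopologicalMap()
for a, b in [("212a", "212b"), ("212b", "212c"), ("212c", "212d")]:
    m.add_edge(a, b)
print(m.route("212a", "212d"))  # ['212a', '212b', '212c', '212d']
```

The route emerges as a sequence of waypoints joined by recorded edges, which is the sense in which the quoted generator 210 produces a route from the topological map.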
Regarding claim 14, Dasler teaches (Currently Amended) The system of claim 13, wherein the location-based map comprises a spatial reference (Dasler, at least one para. 0044; “As used herein, the term “map of a physical environment” refers to a representation of a physical environment that depicts physical features of the physical environment and relationships between the physical features. By way of example, the relationships include spatial relationships between the physical features of the physical environment.”) that indicates a relative location for each of the POIs and distances between them in a universal M-Map coordinate system (Dasler, at least one para. 0056; “Regardless of a manner in which the navigation module 122 determines locations of the display devices 110-114 and the destination 118 in the physical environment 106, the navigation module 122 generates the map data 124 as describing relative orientations of the display devices 110-114 in the physical environment 106.”). Regarding claim 15, Yamauchi teaches (Original) The system of claim 13, wherein the plurality of POIs (Yamauchi, at least one para. 0065; “To illustrate, FIG. 1A depicts a navigation route 202 that includes three locations shown as waypoints 212 (e.g., shown as waypoints 212a, 212b, and 212c).”) is represented by a set of coordinates based at least in part on a respective location of the POIs (Yamauchi, at least one para. 0030; “Another form of a map representation is referred to as a metric map. Metric maps are derived from a more precise mapping framework when compared to topological maps. Metric maps represent locations of objects (e.g., obstacles) in an environment based on precise geometric coordinates (e.g., two-dimensional coordinates or three-dimensional coordinates).”). 
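For illustration only, the metric-map idea quoted for claims 14 and 15 (each POI represented by a set of coordinates in a shared reference frame, with relative locations and distances between them) can be sketched as follows; the coordinate values and POI labels are hypothetical.

```python
import math

# Sketch of a metric map: each POI has coordinates in one shared frame,
# so relative locations and pairwise distances follow directly.
pois = {"212a": (0.0, 0.0), "212b": (3.0, 4.0), "212c": (6.0, 8.0)}

def distance(p, q):
    """Euclidean distance between two POIs in the shared coordinate system."""
    (x1, y1), (x2, y2) = pois[p], pois[q]
    return math.hypot(x2 - x1, y2 - y1)

print(distance("212a", "212b"))  # 5.0
```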
Regarding claim 17, Yamauchi teaches (Original) The system of claim 11, wherein instructing the mobile robot to self-navigate further causes the application, when executed by the processor, causes the at least one computing device to at least: divide the planned path into a plurality of segments between a current location of the mobile robot and a next destination location (Yamauchi, at least one para. 0057; “the no step map 182b is partitioned into a grid of cells where each cell represents a particular area in the environment 10 about the robot 100. For instance, each cell is a three centimeter square. For ease of explanation, each cell exists within an X-Y plane within the environment 10. When the perception system 180 generates the no-step map 182b, the perception system 180 may generate a Boolean value map where the Boolean value map identifies no step regions and step regions. A no step region refers to a region of one or more cells where an obstacle exists while a step region refers to a region of one or more cells where an obstacle is not perceived to exist.”, wherein the grid of cells is identified as the plurality of segments).

Regarding claim 18, Yamauchi teaches (Previously Presented) The system of claim 17, wherein instructing the mobile robot to self-navigate further causes the application, when executed by the processor, causes the at least one computing device to at least: generate a trajectory strategy to enable the mobile robot to rotate itself based on a current orientation and maintain a proper orientation to move forward to a next desired destination (Yamauchi, at least one para. 0054; “The path generator 174 determines obstacles within the environment 10 about the robot 100 based on the sensor data 134. The path generator 174 communicates the obstacles to the step locator 176 such that the step locator 176 may identify foot placements for legs 120 of the robot 100 (e.g., locations to place the distal ends 124 of the legs 120 of the robot 100).
The step locator 176 generates the foot placements (i.e., locations where the robot 100 should step) using inputs from the perception system 180 (e.g., map(s) 182). The body planner 178, much like the step locator 176, receives inputs from the perception system 180 (e.g., map(s) 182). Generally speaking, the body planner 178 is configured to adjust dynamics of the body 110 of the robot 100 (e.g., rotation, such as pitch or yaw and/or height) to successfully move about the environment 10.”); and generate a navigation strategy to enable the mobile robot to move from a current destination to the next desired destination (Yamauchi, at least one para. 0054; “The path generator 174 determines obstacles within the environment 10 about the robot 100 based on the sensor data 134. The path generator 174 communicates the obstacles to the step locator 176 such that the step locator 176 may identify foot placements for legs 120 of the robot 100 (e.g., locations to place the distal ends 124 of the legs 120 of the robot 100). The step locator 176 generates the foot placements (i.e., locations where the robot 100 should step) using inputs from the perception system 180 (e.g., map(s) 182). The body planner 178, much like the step locator 176, receives inputs from the perception system 180 (e.g., map(s) 182). Generally speaking, the body planner 178 is configured to adjust dynamics of the body 110 of the robot 100 (e.g., rotation, such as pitch or yaw and/or height) to successfully move about the environment 10.”). Regarding claim 20, Yamauchi teaches (Currently Amended) A mobile robot system for executing an autonomous mission (Yamauchi, at least one para. 0005; “Another aspect of the present disclosure provides a mobile robot including a locomotion structure and a navigation system configured to control the locomotion structure to coordinate movement of the mobile robot.”), comprising: a mobile robot (Yamauchi, at least one para. 0039; “Referring to example of FIGS. 
1A and 1B, the robot 100 includes a body 110 with one or more locomotion-based structures (locomotion structures) such as legs 120a-d coupled to the body 110 that enable the robot 100 to move within the environment 10.”); at least one computing device (Yamauchi, at least one para. 0099; “FIG. 4 is a schematic view of an example computing device 400 that may be used to implement the systems (e.g., the robot 100, the sensor system 130, the computing system 140, the remote system 160, the control system 170, the perception system 180, and/or the navigation system 200) and methods.”) that comprises a processor and memory (Yamauchi, at least one para. 0100; “The computing device 400 includes a processor 410 (e.g., data processing hardware 142, 162), memory 420 (e.g., memory hardware 144, 164), a storage device 430, a high-speed interface/controller 440 connecting to the memory 420 and high-speed expansion ports 450, and a low speed interface/controller 460 connecting to a low speed bus 470 and a storage device 430.”); an application executable in the at least one computing device (Yamauchi, at least one para. 0106; “These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language.”) that, when executed by the processor, causes the at least one computing device to at least: receive an autonomous mission (Yamauchi, at least one para. 0079; “In some configurations, the generator 210 receives a task or mission and generates a route 202 as a sequence of waypoints 212 that will achieve that task or mission.”) from a remote computing device (Yamauchi, at least one para.
0099; “The computing device 400 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers.”, wherein the laptop can be seen as a remote computing device), the autonomous mission (Yamauchi, at least one para. 0069; “By gathering an understanding of the environment 10, the robot 100 may later move about the environment 10 (e.g., autonomously, semi-autonomously, or with assisted operation by a user) using the information or a derivative thereof gathered from the initial mapping process.”) comprising a plurality of destination locations on a location-based map (Yamauchi, at least one para. 0065; “To illustrate, FIG. 1A depicts a navigation route 202 that includes three locations shown as waypoints 212 (e.g., shown as waypoints 212a, 212b, and 212c).”) and at least one task assigned to be performed at a respective destination location (Yamauchi, at least one para. 0079; “In some configurations, the generator 210 receives a task or mission and generates a route 202 as a sequence of waypoints 212 that will achieve that task or mission. For instance, for a mission to inspect different locations on a pipeline, the generator 210 generates a route 202 that includes waypoints 212 that coincide with the inspection locations.”) (Yamauchi, at least one para. 0069; “The navigation generator 210 (also referred to as the navigation generator 210) is configured to construct a topological map 204 and to generate the navigation route 202 from the topological map 204. To generate the topological map 204, the navigation system 200 and, more particularly, the generator 210 records locations within an environment 10 that has been traversed or is being traversed by the robot 100 as waypoints 212.”), wherein the travel sequence can comprise a current location of the mobile robot as an initial set of coordinates (Yamauchi, at least one para.
0079; “In some implementations, the generator 210 generates the route 202 based on receiving a destination location and a starting location for the robot 100. For instance, the generator 210 matches the starting location with a nearest waypoint 212 and similarly matches the destination location with a nearest waypoint 212.”); navigate, the mobile robot, from the current location to a next destination of the plurality of destination locations in the location-based map (Yamauchi, at least one para. 0066; “In the example of FIG. 1A, while moving along a first portion of the route 202 (e.g., shown as a first edge 214, 214a) from a first location (e.g., shown as a first waypoint 212a) to a second location (e.g., shown as a second waypoint 212b)”); and execute the at least one task at the respective destination location (Yamauchi, at least one para. 0048; “As the sensor system 130 gathers sensor data 134, a computing system 140 stores, processes, and/or communicates the sensor data 134 to various systems of the robot 100 (e.g., the computing system 140, the control system 170, the perception system 180, and/or the navigation system 200). In order to perform computing tasks related to the sensor data 134, the computing system 140 of the robot 100 (which is schematically depicted in FIG. 1A and can be implemented in any suitable location(s), including internal to the robot 100) includes data processing hardware 142 and memory hardware 144. The data processing hardware 142 is configured to execute instructions stored in the memory hardware 144 to perform computing tasks related to activities (e.g., movement and/or movement-based activities) for the robot 100.”). 
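For illustration only, the claim 20 mapping above (navigate from a current location through a travel sequence of destination locations and execute the assigned task at each) can be sketched as a simple mission loop. The function and argument names are hypothetical, and the navigation callback is a stand-in for whatever self-navigation the robot actually performs.

```python
# Sketch of mission execution: starting from a current location, visit
# each destination in the assigned travel sequence and run the task
# bound to that destination. Purely illustrative.
def run_mission(current, sequence, tasks, navigate):
    log = []
    for dest in sequence:
        navigate(current, dest)          # self-navigate to the next destination
        tasks.get(dest, lambda: None)()  # execute the task assigned there, if any
        log.append(dest)
        current = dest                   # the destination becomes the new current location
    return log

visited_tasks = []
order = run_mission(
    "dock",
    ["room_a", "room_b"],
    {"room_a": lambda: visited_tasks.append("inspect room_a")},
    lambda cur, dest: None,  # stand-in for the robot's navigation system
)
print(order)  # ['room_a', 'room_b']
```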
Even though Yamauchi teaches creating the location-based map, Yamauchi does not appear to explicitly teach the location-based map comprising a digital map of a physical area, the plurality of destination locations being associated with a coordinate on the digital map based at least in part on a dimension of one space represented in the digital map and a corresponding measurement performed from a user interface; prior to the mobile navigating the physical area, assign a travel sequence of the mobile robot. However, Dasler in the same field of endeavor (Dasler, at least one para. 0004; “Techniques and systems are described for generating indications of traversable paths. In an example, a computing device implements a navigation system to receive map data describing a map of a physical environment that includes a destination, locations of display devices, and relative orientations of the display devices in the physical environment.”) teaches the location-based map comprising a digital map of a physical area, the plurality of destination locations being associated with a coordinate on the digital map based at least in part on a dimension of one space represented in the digital map and (Dasler, at least one para. 0055; “The navigation module 122 is capable of generating and/or receiving the map data 124 in a variety of different ways. In an example, the navigation module 122 generates the map data 124 using an existing map of the physical environment 106 which approximates dimensions and scale of the physical environment 106. For example, if a digital image depicting the physical environment 106 such as a digital photograph of the physical environment 106 is available, then navigation module 122 generates the map data 124 using the digital image. If a map of the physical environment 106 is available such as a map commonly posted in a building, then the navigation module 122 generates the map data 124 using this map or a digital photograph of the map.
In one example, the navigation module 122 receives the map data 124 which describes an annotated map of the physical environment 106.”); prior to the mobile navigating the physical area, assign a travel sequence of the mobile robot (Dasler, at least one para. 0082-83; “Map data is received describing a map of a physical environment that includes a destination, locations of display devices in the physical environment, and relative orientations of the display devices in the physical environment (block 402). The computing device 102 implements the navigation module 122 to receive the map data in one example. A navigation graph is formed (block 404) by representing the destination and the locations of the display devices as nodes of the navigation graph and connecting the nodes with edges that indicate traversable path segments in the physical environment. Request data is received, via a network, describing a request for navigation to the destination and a source of the request (block 406).”, wherein block 402 (receiving map data) and block 404 (representing the navigation path within the physical area) take place before block 406 that requests for navigation to the destination). Yamauchi and Dasler are both considered to be analogous to the claimed invention because both are in the same field of generating a navigational path for a mobile robot as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the initial mapping process of Yamauchi with the digital image and completion of the mapping process prior to the navigation process of Dasler. One of ordinary skill in the art would have been motivated to make this modification in order to expedite the initial mapping process and reduce the cost of the initial mapping process. (Dasler; 0003).
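For illustration only, the scaling limitation discussed above (a location-based map generated by scaling a digital image of the physical area using one known dimension and a corresponding measurement) amounts to fixing a meters-per-pixel factor from a single measured space. The numbers and function names below are hypothetical.

```python
# Sketch of the scaling idea: one known real-world dimension (e.g., a
# measured room width) fixes a meters-per-pixel factor, which converts
# any pixel coordinate on the digital image into map coordinates.
def scale_factor(pixel_length, measured_meters):
    """Meters per pixel, from one space of known size on the image."""
    return measured_meters / pixel_length

def to_map_coords(pixel_xy, factor):
    """Convert an image pixel coordinate to scaled map coordinates."""
    x, y = pixel_xy
    return (x * factor, y * factor)

f = scale_factor(200, 10.0)          # a 200 px wall measured as 10 m
print(to_map_coords((400, 100), f))  # (20.0, 5.0)
```

Every destination location marked on the digital image then inherits a coordinate in the scaled map through the same factor.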
The combination of Yamauchi and Dasler does not appear to explicitly teach a corresponding measurement performed from a user interface. However, Huval in the same field of endeavor (Huval, at least one para. 0003; “This invention relates generally to the field of autonomous vehicles and more specifically to a new and useful method for calculating nominal vehicle paths for lanes within a geographic region in the field of autonomous vehicles.”) teaches a corresponding measurement performed from a user interface (Huval, at least one para. 0046; “In one implementation, the human annotator selects a set of (e.g., two or more) pixels or points in the LIDAR frame. The annotation portal then: calculates a smooth curvilinear line that extends between these pixels or points; stores geospatial coordinates and georeferenced orientations of vertices of the curvilinear line; calculates georeferenced orientations of tangent lines representing this curvilinear line, such as at each vertex; and overlays a lane marker label defined by this curvilinear line over the LIDAR frame and intersecting pixels or points selected by the human annotator.”). The combination of Yamauchi, Dasler, and Huval is considered to be analogous to the claimed invention because Yamauchi, Dasler, and Huval are in the same field of generating a navigational path for a mobile robot as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the mapping process of Yamauchi with the teaching of Huval. One of ordinary skill in the art would have been motivated to make this modification in order to selectively add, delete, and/or move a plurality of destination locations to easily recalculate the planned path. (Huval; 0046).

Claim(s) 2, 10, and 12 are rejected under 35 U.S.C.
103 as being unpatentable over Yamauchi (US 20220390950 A1), Dasler (US 20220196405 A1), and Huval (US 20190243372 A1), and further in view of Keivan (US 20190179329 A1).

Regarding claim 2, Yamauchi teaches (Original) The method of claim 1, further comprising: executing, via the computing device, a software agent to manage the mobile robot on the autonomous mission (Yamauchi, at least one para. 0052; “A given controller 172 may control the robot 100 by controlling movement about one or more joints J of the robot 100. In some configurations, the given controller 172 is software with programming logic that controls at least one joint J or a motor M which operates, or is coupled to, a joint J.”), wherein the software agent comprises monitoring and reporting a progress and status of the autonomous mission. The combination of Yamauchi, Dasler, and Huval does not appear to explicitly teach wherein the software agent comprises monitoring and reporting a progress and status of the autonomous mission. However, Keivan in the same field of endeavor (Keivan, at least one para. 0019; “The method can further include identifying a plurality of destinations for the first autonomous cart based on the 2D traversability map. A set of maneuvers can be evaluated for the first autonomous cart to arrive at each of the plurality of destinations. A feasible path is calculated based on the evaluation of the set of maneuvers. A planned path is generated based at least in part on the feasible path. A smoothed trajectory for the first autonomous cart can be obtained based on the planned path.”) teaches wherein the software agent comprises monitoring (Keivan, at least one para.
0077; “It also contains a map-viewing and map-editing interface, so that facility operator or owner (e.g., 668) may define ‘routes’ for the carts 600 to take, monitor the carts' 600 progress along routes, and perform other maintenance activities such as sending a cart to an area and powering it off for maintenance, initiating software upgrades, etc.”) and reporting a progress and status of the autonomous mission (Keivan, at least one para. 0076; “Carts 600 receive a set of commands from the server 660, execute them, and report status and progress back to the server 660.”). The combination of Yamauchi, Dasler, Huval, and Keivan is considered to be analogous to the claimed invention because all of them are in the same field of generating a navigational path for a mobile robot as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the autonomous mission of Yamauchi with monitoring and reporting features of Keivan. One of ordinary skill in the art would have been motivated to make this modification in order to perform maintenance activities. (Keivan; 0077).

Regarding claim 10, Yamauchi teaches (Currently Amended) The method of claim 1, wherein the application comprises: an application programming interface (API) for creating and managing a plurality of autonomous missions for a respective mobile robot (Yamauchi, at least one para. 0106; “These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language.”); the user interface that is used to generate an instruction to the API for creating and managing the plurality of autonomous missions (Yamauchi, at least one para.
0100; “The processor 410 can process instructions for execution within the computing device 400, including instructions stored in the memory 420 or on the storage device 430 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 480 coupled to high speed interface 440.”); and a software agent (Yamauchi, at least one para. 0052; “A given controller 172 may control the robot 100 by controlling movement about one or more joints J of the robot 100. In some configurations, the given controller 172 is software with programming logic that controls at least one joint J or a motor M which operates, or is coupled to, a joint J.”) to execute and monitor the mobile robot on the autonomous mission.

The combination of Yamauchi, Dasler, and Huval does not appear to explicitly teach to execute and monitor the mobile robot on the autonomous mission. However, Keivan, in the same field of endeavor (Keivan, at least one para. 0019; “The method can further include identifying a plurality of destinations for the first autonomous cart based on the 2D traversability map. A set of maneuvers can be evaluated for the first autonomous cart to arrive at each of the plurality of destinations. A feasible path is calculated based on the evaluation of the set of maneuvers. A planned path is generated based at least in part on the feasible path. A smoothed trajectory for the first autonomous cart can be obtained based on the planned path.”), teaches to execute and monitor the mobile robot on the autonomous mission (Keivan, at least one para. 0077; “It also contains a map-viewing and map-editing interface, so that facility operator or owner (e.g., 668) may define ‘routes’ for the carts 600 to take, monitor the carts' 600 progress along routes, and perform other maintenance activities such as sending a cart to an area and powering it off for maintenance, initiating software upgrades, etc.”).
The combination of Yamauchi, Dasler, Huval, and Keivan is considered to be analogous to the claimed invention because all of them are in the same field as generating a navigational path for a mobile robot as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the autonomous mission of Yamauchi with the monitoring and reporting features of Keivan. One of ordinary skill in the art would have been motivated to make this modification in order to perform maintenance activities. (Keivan; 0077).

Regarding claim 12, Yamauchi teaches (Original) The system of claim 11, wherein the application, when executed by the processor, causes the at least one computing device to at least (Yamauchi, at least one para. 0106; “These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language.”): update the user interface to display a status of the mobile robot along the planned path of the autonomous mission.

The combination of Yamauchi, Dasler, and Huval does not appear to explicitly teach update the user interface to display a status of the mobile robot along the planned path of the autonomous mission. However, Keivan, in the same field of endeavor (Keivan, at least one para. 0019; “The method can further include identifying a plurality of destinations for the first autonomous cart based on the 2D traversability map. A set of maneuvers can be evaluated for the first autonomous cart to arrive at each of the plurality of destinations. A feasible path is calculated based on the evaluation of the set of maneuvers. A planned path is generated based at least in part on the feasible path.
A smoothed trajectory for the first autonomous cart can be obtained based on the planned path.”), teaches update the user interface to display a status of the mobile robot along the planned path of the autonomous mission (Keivan, at least one para. 0085; “FIG. 8 shows a snapshot of the web interface 874 for controlling an autonomous cart (e.g., autonomous cart 100 in FIG. 1A). FIG. 8 shows a top-down view of the world, where gray represents passable area, and white represents impassible area (obstacles.) This is a visualization of a “costmap,” which is constructed by the carts and used in path planning and path execution to gauge traversability of the environment.”) and (Keivan, at least one para. 0089; “In addition to a featureful maintenance-style interface, simpler dedicated-use interfaces are also provided/supported by the server. For instance, FIG. 9 shows an interface 976 used on phone and tablet form-factor hardware. This interface allows a user to release a cart from a workstation waypoint with the touch of a finger.”).

The combination of Yamauchi, Dasler, Huval, and Keivan is considered to be analogous to the claimed invention because all of them are in the same field as generating a navigational path for a mobile robot as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the application of Yamauchi with the teaching of Keivan. One of ordinary skill in the art would have been motivated to make this modification in order to gauge the traversability of the robot with respect to the environment. (Keivan; 0085).

Claim(s) 6, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Yamauchi (US 20220390950 A1), Dasler (US 20220196405 A1), and Huval (US 20190243372 A1), and further in view of Beth (US 20220156665 A1).

Regarding claim 6, Yamauchi teaches (Original) The method of claim 1 (Yamauchi, at least one para.
0004; “An aspect of the present disclosure provides a computer-implemented method that when executed by data processing hardware causes the data processing hardware to perform operations. The operations include receiving a navigation route including a set of waypoints including a first waypoint and a second waypoint, generating a local obstacle map based on sensor data captured by a mobile robot, determining that the mobile robot is unable to execute a movement instruction along a path between the first waypoint and the second waypoint due to an obstacle obstructing the path, identifying a third waypoint based on the local obstacle map, and generating an alternative path to navigate the mobile robot to the third waypoint to avoid the obstacle.”), further comprising: simulating, via the computing device, the autonomous mission on a computer prior to instructing the mobile robot to the mission.

The combination of Yamauchi, Dasler, and Huval does not appear to explicitly teach simulating, via the computing device, the autonomous mission on a computer prior to instructing the mobile robot to the mission. However, Beth, in the same field of endeavor (Beth, at least one para. 0045; “Referring now to FIG. 1, an example system/platform 100 may integrate various software modules for representing and controlling manned and unmanned vehicles and provide them to a user such that they are available for mission planning, building, simulation or execution.”), teaches simulating, via the computing device, the autonomous mission on a computer prior to instructing the mobile robot to the mission (Beth, at least one para. 0057; “In other aspects, the platform 100 may be utilized to simulate a mission, for example providing one or more simulated vehicle(s) as digital twins for use in mission simulation.”).
The combination of Yamauchi, Dasler, Huval, and Beth is considered to be analogous to the claimed invention because all of them are in the same field as generating a navigational path for a mobile robot as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the autonomous mission of Yamauchi with the teaching of Beth. One of ordinary skill in the art would have been motivated to make this modification in order to determine the feasibility of the mission in terms of vehicle choice, routing, traffic deconfliction protocols, etc. (Beth; 0061).

Regarding claim 16, Yamauchi teaches (Original) The system of claim 11, wherein the application, when executed by the processor (Yamauchi, at least one para. 0106; “These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language.”), causes the at least one computing device to at least: simulate and animate a selected mission prior to instructing the mobile robot to execute the selected mission.

The combination of Yamauchi, Dasler, and Huval does not appear to explicitly teach causes the at least one computing device to at least: simulate and animate a selected mission prior to instructing the mobile robot to execute the selected mission. However, Beth, in the same field of endeavor (Beth, at least one para. 0045; “Referring now to FIG.
1, an example system/platform 100 may integrate various software modules for representing and controlling manned and unmanned vehicles and provide them to a user such that they are available for mission planning, building, simulation or execution.”), teaches causes the at least one computing device to at least: simulate and animate a selected mission prior to instructing the mobile robot to execute the selected mission (Beth, at least one para. 0057; “In other aspects, the platform 100 may be utilized to simulate a mission, for example providing one or more simulated vehicle(s) as digital twins for use in mission simulation.”).

The combination of Yamauchi, Dasler, Huval, and Beth is considered to be analogous to the claimed invention because all of them are in the same field as generating a navigational path for a mobile robot as the claimed invention. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the autonomous mission of Yamauchi with the teaching of Beth. One of ordinary skill in the art would have been motivated to make this modification in order to determine the feasibility of the mission in terms of vehicle choice, routing, traffic deconfliction protocols, etc. (Beth; 0061).

Regarding claim 19, Yamauchi teaches (Previously Presented) The system of claim 11, wherein instructing the mobile robot to self-navigate further causes the application, when executed by the processor (Yamauchi, at least one para. 0048; “As the sensor system 130 gathers sensor data 134, a computing system 140 stores, processes, and/or communicates the sensor data 134 to various systems of the robot 100 (e.g., the computing system 140, the control system 170, the perception system 180, and/or the navigation system 200). In order to perform computing tasks related to the sensor data 134, the computing system 140 of the robot 100 (which is schematically depicted in FIG.
1A and can be implemented in any suitable location(s), including internal to the robot 100) includes data processing hardware 142 and memory hardware 144. The data processing hardware 142 is configured to execute instructions stored in the memory hardware 144 to perform computing tasks related to activities (e.g., movement and/or movement-based activities) for the robot 100.”), causes the at least one computing device to at least: identify a robot type selected, via the user interface, for instructing to self-navigate the planned path.

The combination of Yamauchi, Dasler, and Huval does not appear to explicitly teach causes the at least one computing device to at least: identify a robot type selected, via the user interface, for instructing to self-navigate the planned path. However, Beth, in the same field of endeavor (Beth, at least one para. 0045; “Referring now to FIG. 1, an example system/platform 100 may integrate various software modules for representing and controlling manned and unmanned vehicles and provide them to a user such that they are available for mission planning, building, simulation or execution.”), teaches causes the at least one computing device to at least: identify a robot type selected, via the user interface, for instructing to self-navigate the planned path (Beth, at least one para. 0160; “the workflow may include and/or be based at least in part on one or more vehicle type and associated tasks. In certain aspects, the one or more devices include the client device and/or the one or more manned or unmanned vehicles.”).

The combination of Yamauchi, Dasler, Huval, and Beth is considered to be analogous to the claimed invention because all of them are in the same field as generating a navigational path for a mobile robot as the claimed invention.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the self-navigation of the mobile robot of Yamauchi with the teaching of Beth. One of ordinary skill in the art would have been motivated to make this modification so that the one or more manned or unmanned vehicles can be tracked with respect to performance (Beth; 0160).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to UPUL P CHANDRASIRI whose telephone number is (703)756-5823. The examiner can normally be reached M-F 8:30 am to 5 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christian Chace, can be reached at 571-272-4190. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/U.P.C./Examiner, Art Unit 3665

Prosecution Timeline

Jun 08, 2023
Application Filed
May 01, 2025
Non-Final Rejection — §103
Jul 31, 2025
Applicant Interview (Telephonic)
Jul 31, 2025
Examiner Interview Summary
Sep 05, 2025
Response Filed
Oct 31, 2025
Final Rejection — §103
Jan 22, 2026
Request for Continued Examination
Feb 18, 2026
Response after Non-Final Action
Feb 24, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12391240
VEHICLE DRIVING ASSIST DEVICE
2y 5m to grant Granted Aug 19, 2025
Patent 12325421
Method for Holding a Two-Track Motor Vehicle
2y 5m to grant Granted Jun 10, 2025
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
20%
Grant Probability
-9%
With Interview (-28.6%)
2y 5m
Median Time to Grant
High
PTA Risk
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
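The projections above reduce to simple ratios, and a short sketch can make the arithmetic concrete. This is a minimal Python illustration, not the tool's actual code: the function names `allow_rate` and `interview_lift` are hypothetical, and the 15%/21% with/without-interview split is an assumed example chosen only to reproduce the -28.6% lift shown; the only inputs taken from the page itself are "2 granted / 10 resolved".

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that were granted."""
    return granted / resolved


def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative change in allow rate for cases with an interview
    (negative means interviews correlated with fewer grants)."""
    return (rate_with - rate_without) / rate_without


# 2 granted out of 10 resolved -> the 20% grant probability shown above.
print(f"{allow_rate(2, 10):.0%}")            # 20%

# Hypothetical split: 15% allow rate with an interview vs. 21% without,
# which works out to the -28.6% interview lift reported in the panel.
print(f"{interview_lift(0.15, 0.21):+.1%}")  # -28.6%
```

Note that a "lift" computed this way is relative (a change in the rate divided by the baseline rate), which is why a -9 percentage-point style figure and a -28.6% figure can describe the same underlying data.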
