Prosecution Insights
Last updated: April 19, 2026
Application No. 18/979,094

ROBOT LOCALIZATION USING DATA WITH VARIABLE DATA TYPES

Non-Final OA (§102, §103)
Filed
Dec 12, 2024
Examiner
HALL, HANA VICTORIA
Art Unit
3664
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Boston Dynamics Inc.
OA Round
1 (Non-Final)
100%
Grant Probability
Favorable
1-2
OA Rounds
3y 0m
To Grant
99%
With Interview

Examiner Intelligence

Grants 100% — above average
100%
Career Allow Rate
1 granted / 1 resolved
+48.0% vs TC avg
Strong +100% interview lift
+100.0%
Interview Lift
resolved cases with interview
Typical timeline
3y 0m
Avg Prosecution
31 currently pending
Career history
32
Total Applications
across all art units

Statute-Specific Performance

§101
25.9%
-14.1% vs TC avg
§103
46.7%
+6.7% vs TC avg
§102
9.6%
-30.4% vs TC avg
§112
17.8%
-22.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 1 resolved case

Office Action

§102 §103
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This communication is in response to application No. 18/979,094 filed on December 16, 2025. Claims 1-25 are currently pending and have been examined. Claims 1-25 have been rejected as follows.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on February 4, 2025 is being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 1 is rejected under 35 U.S.C. 102 as being unpatentable over Ebrahimi (US 20240310851 A1).
Regarding claim 1, Ebrahimi teaches A method comprising: obtaining, by data processing hardware of a legged robot, satellite-based position data representing a set of positions of the legged robot within a site of the legged robot; (see at least [0794, 0795, 0244]; "In some embodiments, the robot may include sensors to detect or sense objects, acceleration, angular and linear movement, temperature, humidity, water, pollution, particles in the air, supplied power, proximity, external motion, device motion, sound signals, ultrasound signals, light signals, fire, smoke, carbon monoxide, global-positioning-satellite (GPS) signals…In some embodiments, the robot may be wheeled (e.g., rigidly fixed, suspended fixed, steerable, suspended steerable, caster, or suspended caster), legged…The SLAM updated may estimate the pose of the robot") Ebrahimi describes a legged robot using satellite-based position data that could represent a set of positions of the legged robot within a site.

generating, by the data processing hardware, composite data reflecting the satellite-based position data and at least one of odometry data or point cloud data, (see at least [0794, 1426]; "In some embodiments, the robot may include sensors to detect or sense objects, acceleration, angular and linear movement, temperature, humidity, water, pollution, particles in the air, supplied power, proximity, external motion, device motion, sound signals, ultrasound signals, light signals, fire, smoke, carbon monoxide, global-positioning-satellite (GPS) signals…In one device, an MCU may gather all sensor information, perform pre-processing, and present the pre-processed data to another MCU that uses the obtained data to run a navigation system. For example, a robot includes one MCU for collecting information from cameras, performing visual odometry, and sending pre-processed information to another MCU that uses at least some of the information to execute a navigation subsystem") Ebrahimi describes composite data being generated based on the satellite-based position data and at least odometry data.

wherein generating the composite data comprises associating each of the set of positions of the legged robot with at least one of a portion of the odometry data or a portion of the point cloud data; (see at least [1426, 0244]; "For example, a robot includes one MCU for collecting information from cameras, performing visual odometry, and sending pre-processed information to another MCU that uses at least some of the information to execute a navigation subsystem…The SLAM updated may estimate the pose of the robot") Ebrahimi describes generating composite data associated with data gathered, such as odometry data.

instructing, by the data processing hardware, the legged robot to perform a localization based on the composite data; and (see at least [0732]; "For example, while a robot obtains input data during a first run, a graph of constraints is obtained using information from encoders, Inertial Measurement Units (IMUs), cameras, etc. In some embodiments, during run time of a first run, data from encoders, IMUs, etc. may be integrated and the relating equations solved by the processor. In some embodiments, vision data, being more computationally intensive, may be integrated after the first run when the robot is charging or not working. In a second run, the robot may have a more complete map and the processor may localize the robot and verify and/or improve the map in future runs.") Ebrahimi describes instructing the robot to perform localization based on the generated composite data.

instructing, by the data processing hardware, the legged robot to perform an action based on the localization. (see at least [0971]; "The robot follows along the rails and the processor uses SLAM methods to avoid objects, such as humans.") Ebrahimi describes an action from the data processing hardware as avoiding objects.

Regarding claim 2, Ebrahimi teaches The method of claim 1, wherein generating the composite data comprises merging the satellite-based position data and the at least one of the odometry data or the point cloud data. (see at least [0467]; "For a car, GPS information may be bundled with images, wheel odometer data, steering angle data, etc.")

Regarding claim 3, Ebrahimi teaches The method of claim 1, further comprising: determining one or more values associated with the point cloud data are less than or equal to one or more reliability thresholds, (see at least [0633]; "In some embodiments, accurate and more confident readings of a line laser at each time stamp are kept and while readings with less confidence are retired.") wherein the composite data reflects the satellite-based position data and the odometry data, (see at least [0467]; "For a car, GPS information may be bundled with images, wheel odometer data, steering angle data, etc.") wherein generating the composite data is based on determining the one or more values are less than or equal to the one or more reliability thresholds. (see at least [0633]; "In some embodiments, accurate and more confident readings of a line laser at each time stamp are kept and while readings with less confidence are retired.")

Regarding claim 5, Ebrahimi teaches The method of claim 1, further comprising: filtering at least a portion of the satellite-based position data from the composite data based on at least one of the odometry data, a number of satellites associated with the satellite-based position data, or an uncertainty associated with the satellite-based position data.
(see at least [0345]; "or displacements, data may be gathered from one or more of GPS data, IMU data, LIDAR data, radar data, sonar data, TOF data (single point or multipoint), optical tracker data, odometer data,")

Regarding claim 6, Ebrahimi teaches The method of claim 1, wherein the satellite-based position data comprises first satellite-based position data, (see at least [0471]; "For example, a satellite may generate a point cloud above a jungle area at a first time point t.sub.i") the method further comprising: identifying a map, (see at least [0898]; "In some embodiments, the constructed map is stored in memory for future use. In other embodiments, a map of the environment is constructed at each use. In some embodiments, once the map is constructed, the processor determines a path for the robot to follow, such as by using the entire constructed map, waypoints, or endpoints, etc.") wherein the map comprises one or more waypoints and (see at least [0898]; "In some embodiments, the constructed map is stored in memory for future use. In other embodiments, a map of the environment is constructed at each use. In some embodiments, once the map is constructed, the processor determines a path for the robot to follow, such as by using the entire constructed map, waypoints, or endpoints, etc.") one or more edges, (see at least [0298]; "Using a convolutional network, a subset of the map may be saved in memory requirements (e.g., edges).") wherein the one or more waypoints are associated with second satellite-based position data, (see at least [0471, 0898]; "For example, a satellite may generate a point cloud above a jungle area at a first time point t.sub.i. As the satellite moves and gathers more data points, the processor separates the sparse points that reach ground level from the dense points that reach the tops of trees…In some embodiments, once the map is constructed, the processor determines a path for the robot to follow, such as by using the entire constructed map, waypoints, or endpoints, etc.") wherein instructing the legged robot to perform the localization is further based on the map. (see at least [0344]; "Mapping and localization algorithms may transmit and receive data from one another and transmit data to the path planning algorithm. Mapping, localization/re-localization, and path planning algorithms may transmit and receive data back and forth with the controller that commands the robot to start and stop by moving the wheels of the robot.")

Regarding claim 7, Ebrahimi teaches The method of claim 1, wherein the composite data further comprises at least one of ground plane data, step location data, fiducial data, loop closure data, or a user annotation, (see at least [0344]; "The data warehouse, the real-time classifier, the real-time feature extractor, the filter (for noise removal), the loop closure, and the object distance calculator may transmit data to, for example, mapping, localization/re-localization, and path planning algorithms.") the method further comprising: identifying a map comprising one or more waypoints (see at least [0898]; "In some embodiments, the constructed map is stored in memory for future use. In other embodiments, a map of the environment is constructed at each use. In some embodiments, once the map is constructed, the processor determines a path for the robot to follow, such as by using the entire constructed map, waypoints, or endpoints, etc.") and one or more edges, (see at least [0298]; "Using a convolutional network, a subset of the map may be saved in memory requirements (e.g., edges).") wherein the one or more waypoints are associated with the composite data, and (see at least [0898]; "In some embodiments, the constructed map is stored in memory for future use. In other embodiments, a map of the environment is constructed at each use. In some embodiments, once the map is constructed, the processor determines a path for the robot to follow, such as by using the entire constructed map, waypoints, or endpoints, etc.") wherein instructing the legged robot to perform the localization is further based on the map. (see at least [0344]; "Mapping and localization algorithms may transmit and receive data from one another and transmit data to the path planning algorithm. Mapping, localization/re-localization, and path planning algorithms may transmit and receive data back and forth with the controller that commands the robot to start and stop by moving the wheels of the robot.")

Regarding claim 12, Ebrahimi teaches The method of claim 1, further comprising: generating a user interface, (see at least [0690]; "In some embodiments, a user may use an application of a communication device (e.g., mobile device, laptop, tablet, smart watch, remote, etc.) and/or a graphical user interface (GUI) of the robot to access a map of the environment and select areas the robot is to avoid.
") wherein the user interface comprises the composite data overlaid on a representation of the site; and (see at least [0692]; " In some embodiments, the user interface may include inputs by which the user adjusts or corrects the map perimeters displayed on the screen or applies one or more of the various options to the perimeter line using their finger or by providing verbal instructions, or in some embodiments, an input device, such as a cursor, pointer, stylus, mouse, button or buttons, or other input methods may serve as a user-interface element by which input is received. ") instructing display of the user interface. (see at least [0692]; " In some embodiments, the application may receive a variety of inputs indicating commands using a user interface of the application (e.g., a native application) displayed on the screen of the communication device. ") Regarding claim 13, Ebrahimi teaches The method of claim 1, further comprising: generating a user interface, (see at least [0690]; "In some embodiments, a user may use an application of a communication device (e.g., mobile device, laptop, tablet, smart watch, remote, etc.) and/or a graphical user interface (GUI) of the robot to access a map of the environment and select areas the robot is to avoid. ") wherein the user interface comprises the composite data overlaid on a representation of the site; (see at least [0070]; "In some embodiments, the user interface may indicate in the map a path the robot is about to take (e.g., according to a routing algorithm) between two points, to cover an area, or to perform some other task. For example, a route may be depicted as a set of line segments or curves overlaid on the map,") instructing display of the user interface; (see at least [0699]; " the robot may report information about the states to the application via a wireless network, and the application may update the user interface on the communication device to display the updated information. 
") receiving input via the user interface; and (see at least [0692]; "In some embodiments, the application may receive a variety of inputs indicating commands using a user interface of the application (e.g., a native application) displayed on the screen of the communication device. ") instructing the legged robot to navigate to a location within the site based on the input. (see at least [0691] ;"In some embodiments, the user may use the application to manually control the robot (e.g., manually driving the robot or instructing the robot to navigate to a particular location).") Regarding claim 14, Ebrahimi teaches The method of claim 1, further comprising: generating a user interface, wherein the user interface comprises the composite data overlaid on a representation of the site; (see at least [0690]; "In some embodiments, a user may use an application of a communication device (e.g., mobile device, laptop, tablet, smart watch, remote, etc.) and/or a graphical user interface (GUI) of the robot to access a map of the environment and select areas the robot is to avoid. ") instructing display of the user interface; (see at least [0699]; " the robot may report information about the states to the application via a wireless network, and the application may update the user interface on the communication device to display the updated information. ") receiving input via the user interface; and (see at least [0692]; "In some embodiments, the application may receive a variety of inputs indicating commands using a user interface of the application (e.g., a native application) displayed on the screen of the communication device. ") updating at least one of the composite data or the satellite-based position data based on the input. (see at least [0692]; "In some embodiments, a user interface may receive commands to make adjustments to settings of the robot and any of its structures or components. 
In some embodiments, the application of the communication device sends the updated map and settings to the processor of the robot using a wireless communication channel, such as Wi-Fi or Bluetooth.") Regarding claim 15, Ebrahimi teaches The method of claim 1, further comprising: generating a user interface, (see at least [0690]; "In some embodiments, a user may use an application of a communication device (e.g., mobile device, laptop, tablet, smart watch, remote, etc.) and/or a graphical user interface (GUI) of the robot to access a map of the environment and select areas the robot is to avoid. ") wherein the user interface comprises the composite data overlaid on a representation of the site; and (see at least [0070]; "In some embodiments, the user interface may indicate in the map a path the robot is about to take (e.g., according to a routing algorithm) between two points, to cover an area, or to perform some other task. For example, a route may be depicted as a set of line segments or curves overlaid on the map,") updating the user interface in real time to provide a live representation of a position of the legged robot within the site. (see at least [1448]; "The new location of the robot may be communicated to the user and the user may provide incremental adjustments. In some embodiments, the adjustments and spatial updates are in real time. ") Regarding claim 16, Ebrahimi teaches The method of claim 1, further comprising: automatically performing loop closure generation based on the satellite-based position data. (see at least [0434]; "In some embodiments, the processor compares newly collected data against data previously captured and used in forming previous maps. 
Upon finding a match, the processor merges the newly collected data with the previously captured data to close the loop of the map.") Regarding claim 17, Ebrahimi teaches The method of claim 1, further comprising: identifying a relationship between at least one of the site or at least a portion of the composite data and a physical coordinate system; (see at least [0448]; "The information captured by the magnetic field sensor, whether real time, or historical, may be used by the processor to localize the robot in a six-dimensional coordinate system. ") generating a user interface, (see at least [0690]; "In some embodiments, a user may use an application of a communication device (e.g., mobile device, laptop, tablet, smart watch, remote, etc.) and/or a graphical user interface (GUI) of the robot to access a map of the environment and select areas the robot is to avoid. ") wherein the user interface indicates the relationship; and (see at least [0484]; "The graph may be one dimensional (serial) or arranged such that the objects maintain relations with K-nearest neighbour objects. In sequential runs, as more data is collected by sensors of the robot or as the data are labelled by the user") instructing display of the user interface. (see at least [0699]; " the robot may report information about the states to the application via a wireless network, and the application may update the user interface on the communication device to display the updated information. 
") Regarding claim 18, Ebrahimi teaches A system comprising: data processing hardware; and (see a least [1196]; "In some embodiments, the robot includes an image sensor (e.g., camera) to provide an input image and an object identification and data processing unit,") memory in communication with the data processing hardware, the memory storing instructions that when executed on the data processing hardware cause the data processing hardware to: (see at least [0238]; "a memory storing instructions that when executed by the processor effectuates robotic operations,") obtain satellite-based position data representing a set of positions of a legged robot within a site of the legged robot; (see at least [0309, 308, 0345]; "As the robot moves within the environment and this information is fed into the network, a direction of movement and location of the robot emerges…In some circumstances, displacement may roughly be known but accuracy may be needed. For instance, an old position may be known, displacement may be somewhat known, and it may be desired to predict a new location of the robot. The processor may use deep bundling (i.e., the related known information) to approximate the unknown.… For displacements, data may be gathered from one or more of GPS data, ") generate composite data reflecting the satellite-based position data and at least one of odometry data or point cloud data, (see at least [0794, 1426]; " In some embodiments, the robot may include sensors to detect or sense objects, acceleration, angular and linear movement, temperature, humidity, water, pollution, particles in the air, supplied power, proximity, external motion, device motion, sound signals, ultrasound signals, light signals, fire, smoke, carbon monoxide, global-positioning-satellite (GPS) signals…In one device, an MCU may gather all sensor information, perform pre-processing, and present the pre-processed data to another MCU that uses the obtained data to run a navigation system. 
For example, a robot includes one MCU for collecting information from cameras, performing visual odometry, and sending pre-processed information to another MCU that uses at least some of the information to execute a navigation subsystem") Ebrahimi describes composite data being generated based on the satellite based position data and at least odometry data. wherein generating the composite data comprises associating each of the set of positions of the legged robot with at least one of a portion of the odometry data or a portion of the point cloud data; (see at least [0345, 0244]; "In some circumstances, displacement may roughly be known but accuracy may be needed. For instance, an old position may be known, displacement may be somewhat known, and it may be desired to predict a new location of the robot. The processor may use deep bundling (i.e., the related known information) to approximate the unknown…For displacements, data may be gathered from one or more of GPS data, IMU data, LIDAR data, radar data, sonar data, TOF data (single point or multipoint), optical tracker data, odometer data, …The SLAM updated may estimate the pose of the robot") instruct the legged robot to perform a localization based on the composite data; and (see at least [0732]; " For example, while a robot obtains input data during a first run, a graph of constraints is obtained using information from encoders, Inertial Measurement Units (IMUs), cameras, etc. In some embodiments, during run time of a first run, data from encoders, IMUs, etc. may be integrated and the relating equations solved by the processor. In some embodiments, vision data, being more computationally intensive, may be integrated after the first run when the robot is charging or not working. In a second run, the robot may have a more complete map and the processor may localize the robot and verify and/or improve the map in future runs. 
") Ebrahimi describes instructing the robot to perform localization based on the generated composite data. instruct the legged robot to perform an action based on the localization. (see at least [0377]; "In some embodiments, the processor immediately determines the location of the robot or actuates the robot to only execute actions that are safe until the processor is aware of the location of the robot.") Regarding claim 21, Ebrahimi teaches The system of claim 18, wherein the satellite-based position data comprises first satellite-based position data, (see at least [0471]; "For example, a satellite may generate a point cloud above a jungle area at a first time point t.sub.i") wherein execution of the instructions on the data processing hardware further causes the data processing hardware to: identify a map, wherein the map comprises one or more waypoints and one or more edges, (see at least [0898, 0298]; "In some embodiments, the constructed map is stored in memory for future use. In other embodiments, a map of the environment is constructed at each use. In some embodiments, once the map is constructed, the processor determines a path for the robot to follow, such as by using the entire constructed map, waypoints, or endpoints, etc…Using a convolutional network, a subset of the map may be saved in memory requirements (e.g., edges)."") wherein the one or more waypoints are associated with second satellite-based position data, (see at least [0471, 0898]; "For example, a satellite may generate a point cloud above a jungle area at a first time point t.sub.i. 
As the satellite moves and gathers more data points, the processor separates the sparse points that reach ground level from the dense points that reach the tops of trees…In some embodiments, once the map is constructed, the processor determines a path for the robot to follow, such as by using the entire constructed map, waypoints, or endpoints, etc.") wherein instructing the legged robot to perform the localization is further based on the map, (see at least [0344]; "Mapping and localization algorithms may transmit and receive data from one another and transmit data to the path planning algorithm. Mapping, localization/re-localization, and path planning algorithms may transmit and receive data back and forth with the controller that commands the robot to start and stop by moving the wheels of the robot. ") wherein the map is generated prior to the legged robot traversing the site. (see a least [0266]; "In some embodiments, compression is achieved when the universal map is created in advance for all instances of time and the localization of each car within the universal map is traced using time stamps.") Regarding claim 23, Ebrahimi teaches The robot of claim 22, wherein execution of the instructions on the data processing hardware further causes the data processing hardware to: obtain the satellite-based position data from at least one satellite-based position sensor, (see at least [0309, 308, 0345]; "As the robot moves within the environment and this information is fed into the network, a direction of movement and location of the robot emerges…In some circumstances, displacement may roughly be known but accuracy may be needed. For instance, an old position may be known, displacement may be somewhat known, and it may be desired to predict a new location of the robot. 
The processor may use deep bundling (i.e., the related known information) to approximate the unknown.… For displacements, data may be gathered from one or more of GPS data, ") wherein the at least one satellite-based position sensor is connected to the robot via a port. (see at least [239]; In some embodiments, the robot may include sensors to detect or sense objects, acceleration, angular and linear movement, temperature, humidity, water, pollution, particles in the air, supplied power, proximity, external motion, device motion, sound signals, ultrasound signals, light signals, fire, smoke, carbon monoxide, global-positioning-satellite (GPS)" Regarding claim 25, Ebrahimi teaches The robot of claim 22, wherein execution of the instructions on the data processing hardware further causes the data processing hardware to: filter at least a portion of the satellite-based position data from the composite data based on the odometry data.(see at least [0345]; "or displacements, data may be gathered from one or more of GPS data, IMU data, LIDAR data, radar data, sonar data, TOF data (single point or multipoint), optical tracker data, odometer data,") Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim 4 is rejected under 35 U.S.C 103 as being unpatentable over Ebrahimi (US 20240310851 A1) in view of Toshiki (WO 2007040100 A1). Regarding claim 4, Ebrahimi does not explicitly disclose The method of claim 1, wherein obtaining the satellite-based position data comprises obtaining the satellite-based position data from at least one satellite-based position sensor, wherein the at least one satellite-based position sensor is detachable from the legged robot. However, Toshiki teaches The method of claim 1, wherein obtaining the satellite-based position data comprises obtaining the satellite-based position data from at least one satellite-based position sensor, wherein the at least one satellite-based position sensor is detachable from the legged robot. (see at least [0007]; "A calculation step of calculating a current position of the moving object by the GPS device provided in the attachment / detachment portion; ") It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ebrahimi to incorporate teachings of Toshiki which teaches a sensor being removable from a robot in order to easily remove and replace a specific sensor or to disable certain capabilities if not needed or desired. 
Claims 8, 9, 10, 11, 19, and 20 are rejected under 35 U.S.C 103 as being unpatentable over Ebrahimi (US 20240310851 A1) in view of Fay (US 11268816 B2). Regarding claim 8, Ebrahimi discloses The method of claim 1, further comprising: identifying a map comprising one or more waypoints and (see at least [0898]; "In some embodiments, the constructed map is stored in memory for future use. In other embodiments, a map of the environment is constructed at each use. In some embodiments, once the map is constructed, the processor determines a path for the robot to follow, such as by using the entire constructed map, waypoints, or endpoints, etc.") one or more edges, (see at least [0298]; "Using a convolutional network, a subset of the map may be saved in memory requirements (e.g., edges).") wherein instructing the legged robot to perform the localization is further based on the map; and (see at least [0344]; "Mapping and localization algorithms may transmit and receive data from one another and transmit data to the path planning algorithm. Mapping, localization/re-localization, and path planning algorithms may transmit and receive data back and forth with the controller that commands the robot to start and stop by moving the wheels of the robot. ") However, Ebrahimi does not explicitly disclose wherein the one or more waypoints are associated with the composite data, and identifying a relationship between a first waypoint of the one or more waypoints and a second waypoint of the one or more waypoints based on the composite data. However, Fay teaches wherein the one or more waypoints are associated with the composite data, and (see at least [3]; " The method also includes generating, by the data processing hardware, at least one intermediate waypoint based on the image data. ") identifying a relationship between a first waypoint of the one or more waypoints and a second waypoint of the one or more waypoints based on the composite data. 
(see at least [3]; "The navigation route includes a series of high-level waypoints that begin at a starting location and end at a destination location and are based on high-level navigation data. ") It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ebrahimi to incorporate the teachings of Fay, which teaches associating waypoints with the composite data, in order to create a comprehensive map for the robot based on the gathered data. Regarding claim 9, Ebrahimi discloses The method of claim 1, further comprising: identifying a map comprising one or more waypoints and one or more edges, (see at least [0898, 0298]; "In some embodiments, the constructed map is stored in memory for future use. In other embodiments, a map of the environment is constructed at each use. In some embodiments, once the map is constructed, the processor determines a path for the robot to follow, such as by using the entire constructed map, waypoints, or endpoints, etc…Using a convolutional network, a subset of the map may be saved in memory requirements (e.g., edges).") wherein instructing the legged robot to perform the localization is further based on the map; and (see at least [0344]; "Mapping and localization algorithms may transmit and receive data from one another and transmit data to the path planning algorithm. Mapping, localization/re-localization, and path planning algorithms may transmit and receive data back and forth with the controller that commands the robot to start and stop by moving the wheels of the robot. ") Ebrahimi does not explicitly disclose wherein the one or more waypoints are associated with the composite data, and identifying a relationship between the one or more waypoints and the site based on the composite data.
However, Fay teaches wherein the one or more waypoints are associated with the composite data, and (see at least [3]; " The method also includes generating, by the data processing hardware, at least one intermediate waypoint based on the image data. ") identifying a relationship between the one or more waypoints and the site based on the composite data. (see at least [3]; "The navigation route includes a series of high-level waypoints that begin at a starting location and end at a destination location and are based on high-level navigation data. ") It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ebrahimi to incorporate the teachings of Fay, which teaches associating waypoints with the composite data, in order to create a comprehensive map for the robot based on the gathered data. Regarding claim 10, Ebrahimi discloses The method of claim 1, further comprising: identifying a map comprising one or more waypoints and one or more edges, (see at least [0898, 0298]; "In some embodiments, the constructed map is stored in memory for future use. In other embodiments, a map of the environment is constructed at each use. In some embodiments, once the map is constructed, the processor determines a path for the robot to follow, such as by using the entire constructed map, waypoints, or endpoints, etc…Using a convolutional network, a subset of the map may be saved in memory requirements (e.g., edges).") wherein instructing the legged robot to perform the localization is further based on the map; (see at least [0344]; "Mapping and localization algorithms may transmit and receive data from one another and transmit data to the path planning algorithm. Mapping, localization/re-localization, and path planning algorithms may transmit and receive data back and forth with the controller that commands the robot to start and stop by moving the wheels of the robot. 
") using an optimization problem; and (see at least [0363]; "Optimizing the model may be done by combining small faces (i.e., triangles) to larger faces using a given variation threshold.") using the optimization problem. (see at least [0363]; "Optimizing the model may be done by combining small faces (i.e., triangles) to larger faces using a given variation threshold.") Ebrahimi does not explicitly disclose wherein the one or more waypoints are associated with composite data, and identifying a first relationship between a first waypoint of the one or more waypoints and a second waypoint of the one or more waypoints based on the composite data; and identifying a second relationship between the one or more waypoints and the site based on the composite data. However, Fay teaches wherein the one or more waypoints are associated with composite data, and (see at least [3]; " The method also includes generating, by the data processing hardware, at least one intermediate waypoint based on the image data. ") identifying a first relationship between a first waypoint of the one or more waypoints and (see at least [3]; "The navigation route includes a series of high-level waypoints that begin at a starting location and end at a destination location and are based on high-level navigation data. ") a second waypoint of the one or more waypoints based on the composite data (see at least [3]; "The navigation route includes a series of high-level waypoints that begin at a starting location and end at a destination location and are based on high-level navigation data. ") and identifying a second relationship between the one or more waypoints and the site based on the composite data (see at least [3]; "The navigation route includes a series of high-level waypoints that begin at a starting location and end at a destination location and are based on high-level navigation data. 
") It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ebrahimi to incorporate the teachings of Fay, which teaches associating a first and second relationship between the waypoints and the site, in order to include a multitude of waypoints and other gathered data when creating a comprehensive map for navigation of the robot. Regarding claim 11, Ebrahimi discloses The method of claim 1, wherein the composite data further comprises at least one of ground plane data, step location data, fiducial data, loop closure data, or a user annotation, the method further comprising: (see at least [0344]; "The data warehouse, the real-time classifier, the real-time feature extractor, the filter (for noise removal), the loop closure, and the object distance calculator may transmit data to, for example, mapping, localization/re-localization, and path planning algorithms. ") identifying a map comprising one or more waypoints and one or more edges, (see at least [0898, 0298]; "In some embodiments, the constructed map is stored in memory for future use. In other embodiments, a map of the environment is constructed at each use. In some embodiments, once the map is constructed, the processor determines a path for the robot to follow, such as by using the entire constructed map, waypoints, or endpoints, etc…Using a convolutional network, a subset of the map may be saved in memory requirements (e.g., edges).") wherein instructing the legged robot to perform the localization is further based on the map; (see at least [0344]; "Mapping and localization algorithms may transmit and receive data from one another and transmit data to the path planning algorithm. Mapping, localization/re-localization, and path planning algorithms may transmit and receive data back and forth with the controller that commands the robot to start and stop by moving the wheels of the robot. 
") identifying a first relationship between a first waypoint of the one or more waypoints and a second waypoint of the one or more waypoints using an optimization problem; and (see at least [1167]; "The processor may then optimize entry/exit points for the chosen zones and order of zones") identifying a second relationship between the one or more waypoints and the site using the optimization problem, (see at least [1167]; "The processor may then optimize entry/exit points for the chosen zones and order of zones") wherein one or more variables of the optimization problem comprise one or more locations of the one or more waypoints, (see at least [0314]; "In embodiments, the act of learning, whether neural or atomic machine learning may be executed on various devices and in various locations in an individual manner or distributed between the various devices located at various locations…Concurrently, the robot may use reinforcement learning for a task such as its calibration, obstacle inflation, bump reduction, path optimization, etc. ") wherein one or more cost functions of the optimization problem are based on one or more of the satellite-based position data, the odometry data, the point cloud data, the ground plane data, the step location data, the fiducial data, the loop closure data, or the user annotation. (see at least [0481, 0344]; " In some embodiments, the neural network may reduce to a single neuron, in which case finding which universe is the current universe is achieved by simple reinforcement learning and optimization of a cost function…The data warehouse, the real-time classifier, the real-time feature extractor, the filter (for noise removal), the loop closure, and the object distance calculator may transmit data to, for example, mapping, localization/re-localization, and path planning algorithms. ") Ebrahimi does not explicitly disclose wherein the one or more waypoints are associated with the composite data. 
However, Fay teaches wherein the one or more waypoints are associated with the composite data, and (see at least [3]; " The method also includes generating, by the data processing hardware, at least one intermediate waypoint based on the image data. ") It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ebrahimi to incorporate the teachings of Fay, which teaches associating waypoints with the composite data, in order to create a comprehensive map for use in navigating the robot. Regarding claim 19, Ebrahimi discloses The system of claim 18, wherein execution of the instructions on the data processing hardware further causes the data processing hardware to: identify a map comprising one or more waypoints and one or more edges, (see at least [0898, 0298]; "In some embodiments, the constructed map is stored in memory for future use. In other embodiments, a map of the environment is constructed at each use. In some embodiments, once the map is constructed, the processor determines a path for the robot to follow, such as by using the entire constructed map, waypoints, or endpoints, etc…Using a convolutional network, a subset of the map may be saved in memory requirements (e.g., edges).") wherein the one or more waypoints are associated with the composite data, and (see at least [0898]; "In some embodiments, the constructed map is stored in memory for future use. In other embodiments, a map of the environment is constructed at each use. In some embodiments, once the map is constructed, the processor determines a path for the robot to follow, such as by using the entire constructed map, waypoints, or endpoints, etc.") wherein instructing the legged robot to perform the localization is further based on the map; (see at least [0344]; "Mapping and localization algorithms may transmit and receive data from one another and transmit data to the path planning algorithm. 
Mapping, localization/re-localization, and path planning algorithms may transmit and receive data back and forth with the controller that commands the robot to start and stop by moving the wheels of the robot. ") identify a second relationship between the one or more waypoints and the site based on the composite data using the optimization problem, (see at least [1167]; "The processor may then optimize entry/exit points for the chosen zones and order of zones") wherein one or more variables of the optimization problem comprise one or more locations of the one or more waypoints. (see at least [0363]; "In some embodiments, the process of generating a 3D model based on point cloud data captured with a LIDAR or other device (e.g., depth camera) comprises obtaining a point cloud, optimization, triangulation, and optimization (decimation). For instance, in a first step of the process, the cloud is optimized and duplicate or unwanted points are removed.") Ebrahimi does not explicitly disclose identify a first relationship between a first waypoint of the one or more waypoints and a second waypoint of the one or more waypoints based on the composite data using an optimization problem. However, Fay teaches identify a first relationship between a first waypoint of the one or more waypoints and a second waypoint of the one or more waypoints based on the composite data using an optimization problem; and (see at least [3]; "The navigation route includes a series of high-level waypoints that begin at a starting location and end at a destination location and are based on high-level navigation data. 
") It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ebrahimi to incorporate the teachings of Fay, which teaches associating a first and second relationship between the waypoints and the site, in order to include a multitude of waypoints and other gathered data when creating a comprehensive map for navigation of the robot. Regarding claim 20, Ebrahimi does not explicitly disclose The system of claim 18, wherein the odometry data is based on one or more steps of one or more legs of the legged robot. However, Fay teaches The system of claim 18, wherein the odometry data is based on one or more steps of one or more legs of the legged robot. (see at least [25]; " The high-level waypoints 210 and any added intermediate waypoints 310 of the navigation route 112 are passed to a low-level path generator 130 that, combined with the sensor data 17, generates a step plan 142 that plots each individual step of the robot 10 to navigate from the current location of the robot 10 to the next waypoint 210, 310. Using the step plan 142, the robot 10 maneuvers through the environment 8 by following the step plan 142 by placing the feet 19 or distal ends of the leg 12 on the ground surface 9 at the locations indicated by the step plan 142.") It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ebrahimi to incorporate the teachings of Fay, which teaches odometry data based on the steps of the legged robot, because the steps represent displacement of the robot and can therefore be used to track distance traveled or position changes.
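Claims 10, 11, and 19 recite identifying relationships among waypoints by solving an optimization problem whose variables include the waypoint locations and whose cost functions draw on the satellite-based position data and odometry data. Neither cited reference sets out the mathematics, but the general class of technique can be sketched in one dimension. Everything below — the function name, the numbers, and the plain gradient-descent solver — is an illustrative assumption, not content from the application or the prior art of record:

```python
def optimize_waypoints(n, gps_priors, odom_edges, iters=5000, lr=0.05):
    """Minimal 1-D pose-graph sketch.

    n           -- number of waypoints; their locations x[0..n-1] are
                   the optimization variables.
    gps_priors  -- list of (index, measured_position): absolute
                   satellite-based measurements of some waypoints.
    odom_edges  -- list of (i, j, displacement): odometry says
                   x[j] - x[i] should equal the measured displacement.

    Minimizes the sum of squared residuals of all measurements by
    plain gradient descent.
    """
    x = [0.0] * n
    for _ in range(iters):
        grad = [0.0] * n
        # Cost terms from satellite priors: (x[i] - z)^2
        for i, z in gps_priors:
            grad[i] += 2.0 * (x[i] - z)
        # Cost terms from odometry edges: ((x[j] - x[i]) - d)^2
        for i, j, d in odom_edges:
            r = (x[j] - x[i]) - d
            grad[j] += 2.0 * r
            grad[i] -= 2.0 * r
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x
```

For example, one GPS prior anchoring waypoint 0 at position 0.0 plus odometry edges of 1.0 and 2.0 between consecutive waypoints recovers locations near 0.0, 1.0, and 3.0. In practice such problems are posed as pose-graph SLAM over full robot poses and solved with Gauss-Newton or Levenberg-Marquardt; the sketch only shows how waypoint locations can serve as the variables and sensor measurements as the cost terms, as the claim language describes.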
Regarding claim 22, Ebrahimi discloses data processing hardware; and (see at least [1196]; "In some embodiments, the robot includes an image sensor (e.g., camera) to provide an input image and an object identification and data processing unit,") memory in communication with the data processing hardware, the memory storing instructions that when executed on the data processing hardware cause the data processing hardware to: (see at least [0238]; "a memory storing instructions that when executed by the processor effectuates robotic operations,") obtain satellite-based position data representing a set of positions of the robot within a site of the robot; (see at least [0309, 308, 0345, 0244]; "As the robot moves within the environment and this information is fed into the network, a direction of movement and location of the robot emerges…In some circumstances, displacement may roughly be known but accuracy may be needed. For instance, an old position may be known, displacement may be somewhat known, and it may be desired to predict a new location of the robot.
The processor may use deep bundling (i.e., the related known information) to approximate the unknown.… For displacements, data may be gathered from one or more of GPS data,…The SLAM updated may estimate the pose of the robot ") generate composite data reflecting the satellite-based position data and at least one of odometry data or point cloud data, (see at least [0794, 1426]; " In some embodiments, the robot may include sensors to detect or sense objects, acceleration, angular and linear movement, temperature, humidity, water, pollution, particles in the air, supplied power, proximity, external motion, device motion, sound signals, ultrasound signals, light signals, fire, smoke, carbon monoxide, global-positioning-satellite (GPS) signals…In one device, an MCU may gather all sensor information, perform pre-processing, and present the pre-processed data to another MCU that uses the obtained data to run a navigation system. For example, a robot includes one MCU for collecting information from cameras, performing visual odometry, and sending pre-processed information to another MCU that uses at least some of the information to execute a navigation subsystem") Ebrahimi describes composite data being generated based on the satellite-based position data and at least odometry data. wherein generating the composite data comprises associating each of the set of positions of the robot with at least one of a portion of the odometry data or a portion of the point cloud data; (see at least [0345, 0244]; "In some circumstances, displacement may roughly be known but accuracy may be needed. For instance, an old position may be known, displacement may be somewhat known, and it may be desired to predict a new location of the robot.
The processor may use deep bundling (i.e., the related known information) to approximate the unknown…For displacements, data may be gathered from one or more of GPS data, IMU data, LIDAR data, radar data, sonar data, TOF data (single point or multipoint), optical tracker data, odometer data,… The SLAM updated may estimate the pose of the robot") instruct the robot to perform a localization based on the composite data; and (see at least [0732]; " For example, while a robot obtains input data during a first run, a graph of constraints is obtained using information from encoders, Inertial Measurement Units (IMUs), cameras, etc. In some embodiments, during run time of a first run, data from encoders, IMUs, etc. may be integrated and the relating equations solved by the processor. In some embodiments, vision data, being more computationally intensive, may be integrated after the first run when the robot is charging or not working. In a second run, the robot may have a more complete map and the processor may localize the robot and verify and/or improve the map in future runs. ") Ebrahimi describes instructing the robot to perform localization based on the generated composite data. instruct the robot to perform an action based on the localization. (see at least [0377]; "In some embodiments, the processor immediately determines the location of the robot or actuates the robot to only execute actions that are safe until the processor is aware of the location of the robot.") Ebrahimi does not explicitly disclose A robot comprising: at least two legs; However, Fay teaches A robot comprising: at least two legs; (see at least [20]; " Referring to FIG. 
1, a robot or robotic device 10 includes a body 11 with two or more legs ") It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ebrahimi to incorporate the teachings of Fay, which teaches a robot comprising at least two legs, in order to encompass a wider variety of robot types. Regarding claim 24, Ebrahimi does not explicitly disclose The robot of claim 22, wherein the odometry data is based on one or more steps of one or more legs of the legged robot. However, Fay teaches The robot of claim 22, wherein the odometry data is based on one or more steps of one or more legs of the legged robot. (see at least [25]; " The high-level waypoints 210 and any added intermediate waypoints 310 of the navigation route 112 are passed to a low-level path generator 130 that, combined with the sensor data 17, generates a step plan 142 that plots each individual step of the robot 10 to navigate from the current location of the robot 10 to the next waypoint 210, 310. Using the step plan 142, the robot 10 maneuvers through the environment 8 by following the step plan 142 by placing the feet 19 or distal ends of the leg 12 on the ground surface 9 at the locations indicated by the step plan 142.") It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ebrahimi to incorporate the teachings of Fay, which teaches odometry data based on the steps of the legged robot, because the steps represent displacement of the robot and can therefore be used to track distance traveled or position changes. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to HANA VICTORIA HALL whose telephone number is (571) 272-5289. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rachid Bendidi, can be reached at (571) 272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /HANA VICTORIA HALL/Examiner, Art Unit 3664 /RACHID BENDIDI/Supervisory Patent Examiner, Art Unit 3664

Prosecution Timeline

Dec 12, 2024
Application Filed
Feb 21, 2026
Non-Final Rejection — §102, §103 (current)


Prosecution Projections

1-2
Expected OA Rounds
100%
Grant Probability
99%
With Interview (+100.0%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
