DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant has submitted the following:
Claims 1-2, 4-14, 16-19, and 21-22 are pending examination;
Claims 1, 10, 13, 18 are newly amended; and
Claims 3, 15, and 20 remain cancelled.
Response to Arguments
Applicant's arguments filed 09/10/2025 have been fully considered but they are not persuasive.
Applicant argues that the prior art does not teach the newly amended claim limitations of independent claims 1, 13, and 18. Specifically, Applicant argues that the prior art does not teach the limitations: processing “the LiDAR usage data to determine a LiDAR-based localization success rate at the one or more locations” and depicting “a visualization including a heat map indicating the LiDAR-based localization success rate at the one or more locations”.
Examiner respectfully disagrees. While previously cited Arditi, Belkin, and Wirola do not explicitly teach the newly amended claim limitations, said claim limitations are taught in the prior art. Newly cited Dudzik et al. (US 20210150279 A1) teaches an analogous method for localization from LiDAR data (Abstract; localization component 520), comprising:
processing, by the one or more processors (one or more processors 516), the LiDAR usage data to determine a LiDAR-based localization ([0077] lines 8-18, “the localization component 520 can use SLAM (simultaneous localization and mapping) or CLAMS (calibration, localization and mapping, simultaneously) to receive time-of-flight data, image data, lidar data, radar data, sonar data, IMU data, GPS data, wheel encoder data, or any combination thereof, and the like to accurately determine a location of the autonomous vehicle. In some instances, the localization component 520 can provide data to various components of the vehicle 502 to determine an initial position of an autonomous vehicle for generating a trajectory, as discussed herein.”) success rate at the one or more locations ([0078] lines 6-16, “the localization component 520 can provide functionality to determine an error associated with the local map, the three-dimensional map, and/or the one or more sensor system(s) 506. For example, the localization component 520 can determine a position error (e.g., drift error) associated with the vehicle 502. Over time in operation, errors may accumulate, resulting in errors in positioning and/or trajectory data. In some instances, the localization component 520 can determine the error based on, for example, the position error meeting or exceeding a threshold value.”). The determination of the error associated with the local map, as part of the localization, is the localization success rate; and
the map layer depicts a visualization including a heat map indicating the LiDAR-based localization success rate at the one or more locations ([0086] lines 7-12, “the prediction component 526 can measure a track of an object and generate a discretized prediction probability map, a heat map, a probability distribution, a discretized probability distribution, and/or a trajectory for the object based on observed and predicted behavior”; [0089] lines 27-35, “the map(s) 530 can be used in connection with the localization component 520, the perception component 522, the machine learned component 524, the prediction component 226, and/or the planning component 528 to determine a location of the vehicle 502, identify objects in an environment, generate prediction probabilit(ies) associated with objects and/or the vehicle 502, and/or generate routes and/or trajectories to navigate within an environment.”). The map layer (“maps”) is generated in connection with the localization success rate (“error associated with the local map”) and depicts a heat map (“heat map” and “probability map”) of the determined localization success rates.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 22 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 22 recites the limitation "the one or more clusters" in line 1. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 4-8, 11-14, and 16-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Arditi (US 20190147331 A1, previously cited) in view of Belkin et al. (Ilya Belkin, Alexander Abramenko, Dmitry Yudin, Real-Time Lidar-based Localization of Mobile Ground Robot, Procedia Computer Science, Volume 186, 2021, Pages 440-448, ISSN 1877-0509, previously cited) and Dudzik et al. (US 20210150279 A1).
Regarding claim 1, Arditi teaches A method comprising: determining, by one or more processors (Abstract; Fig. 10), Light Detection and Ranging (LiDAR) usage data (LiDAR data 411, [0025] lines 43-54, "the LiDAR data may also be associated with metadata 412 that provide contextual information relating to the LiDAR data 411 and the particular LiDAR used for capturing the LiDAR data 411. The metadata 412 may, e.g., include any combination of: the mounting location of the LiDAR in three-dimensional space relative to a reference point; the brand, model, and/or year of the LiDAR sensor; capabilities of the LiDAR (e.g., density and spacing information, laser type, sensor type); configurations (e.g., sensitivity and rotation speed); information pertaining to the LiDAR's mount (e.g., model, size, etc.); and any other pertinent data at the time the LiDAR data 411 was captured.") generated by one or more mobile devices (data-gathering vehicles 300A, 300B) at one or more locations ([0045] lines 4-7, “The process may begin at step 710, at which a computing system of an autonomous vehicle driving on a road may obtain sensor data at a particular location (e.g., corresponding to an x latitude and y longitude)”);
generating, by the one or more processors, a map layer of a geographic database based on the LiDAR usage data ([0018] lines 1-5, "FIG. 4 illustrates an embodiment of a machine-learning model architecture for transforming data received from different data-gathering vehicles into a common space 460 and using the transformed data to generate HD map data 480"); and
providing, by the one or more processors, the map layer as an output (Fig. 7, steps 770 and 780; Fig. 8, [0060] lines 4-6, “The driving/navigation module 860 may be configured to access a local HD map stored in an HD map data store 876.”). The access of the HD map is the providing of the map layer as an output.
Arditi does not teach the method, wherein the usage data relates to a usage of LiDAR-based localization;
processing, by the one or more processors, the LiDAR usage data to determine a LiDAR-based localization success rate at the one or more locations; and
wherein the map layer depicts a visualization including a heat map indicating the LiDAR-based localization success rate at the one or more locations.
Belkin teaches an analogous method of localization from LiDAR data (Abstract), wherein the usage data relates to usage of LiDAR-based localization (Fig. 1, lidar-based localization; page 444 “Localization module. Localization is performed via scan batch processing. Geometric features (corner and surface points) are extracted from the point cloud and matched with edges and planes on the map. The current lidar scan is aligned against the map by minimization of distance between features and their correspondences. This is done in a similar way to lidar-based odometry in the Map reconstructor module.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Arditi to include the LiDAR-based localization of Belkin because the use of LiDAR to localize a mobile device is a well-known technique in the art that would yield predictable results, and because localizing the device from its own LiDAR measurements would advantageously increase the accuracy of the localization by utilizing locally measured data.
Arditi in view of Belkin does not teach the method, processing, by the one or more processors, the LiDAR usage data to determine a LiDAR-based localization success rate at the one or more locations; and
wherein the map layer depicts a visualization including a heat map indicating the LiDAR-based localization success rate at the one or more locations.
Dudzik teaches an analogous method, comprising: processing, by the one or more processors (one or more processors 516), the LiDAR usage data to determine a LiDAR-based localization ([0077] lines 8-18, “the localization component 520 can use SLAM (simultaneous localization and mapping) or CLAMS (calibration, localization and mapping, simultaneously) to receive time-of-flight data, image data, lidar data, radar data, sonar data, IMU data, GPS data, wheel encoder data, or any combination thereof, and the like to accurately determine a location of the autonomous vehicle. In some instances, the localization component 520 can provide data to various components of the vehicle 502 to determine an initial position of an autonomous vehicle for generating a trajectory, as discussed herein.”) success rate at the one or more locations ([0078] lines 6-16, “the localization component 520 can provide functionality to determine an error associated with the local map, the three-dimensional map, and/or the one or more sensor system(s) 506. For example, the localization component 520 can determine a position error (e.g., drift error) associated with the vehicle 502. Over time in operation, errors may accumulate, resulting in errors in positioning and/or trajectory data. In some instances, the localization component 520 can determine the error based on, for example, the position error meeting or exceeding a threshold value.”). The determination of the error associated with the local map, as part of the localization, is the localization success rate; and
the map layer depicts a visualization including a heat map indicating the LiDAR-based localization success rate at the one or more locations ([0086] lines 7-12, “the prediction component 526 can measure a track of an object and generate a discretized prediction probability map, a heat map, a probability distribution, a discretized probability distribution, and/or a trajectory for the object based on observed and predicted behavior”; [0089] lines 27-35, “the map(s) 530 can be used in connection with the localization component 520, the perception component 522, the machine learned component 524, the prediction component 226, and/or the planning component 528 to determine a location of the vehicle 502, identify objects in an environment, generate prediction probabilit(ies) associated with objects and/or the vehicle 502, and/or generate routes and/or trajectories to navigate within an environment.”). The map layer (“maps”) is generated in connection with the localization success rate (“error associated with the local map”) and depicts a heat map (“heat map” and “probability map”) of the determined localization success rates.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Arditi in view of Belkin to include the localization success rate, and the generation of a heat map indicating said rate, of Dudzik because the visualization of localization success would yield predictable and advantageous results, including indicating to a user a confidence or error in the mapping process at a location.
Regarding claim 2, Arditi in view of Belkin and Dudzik teaches The method of claim 1, further comprising: processing the LiDAR usage data to determine one or more of: (i) a number of LiDAR scans that have occurred at the one or more locations, (ii) a duration of LiDAR scans that have occurred at the one or more locations, or (iii) a number of the one or more mobile devices that have performed a LiDAR scan at the one or more locations (Arditi: [0026] “In particular embodiments, the CNN 450 may also be configured to receive environmental data 430 that describe the environment in which the sensor data (e.g., camera data 401, LiDAR data 411, radar data 421, etc.) were obtained. Each data-gathering vehicle may be associated with its own environment data 430, which would be generally associated with any sensor data output by any sensor of the vehicle. The environmental data 430 may include, for example, data relating to the data-gathering vehicle, such as its make, model, year, dimensions (e.g., length, width, and height), weight, tire size, tire pressure, mount locations, speed and/or vibration of the vehicle during data captured, and any other pertinent information that may affect measurement results. The environmental data 430 may also include the time and date at which sensor data were obtained, since time and date may correlate with lighting conditions, which in turn may affect certain types of sensor measurements (e.g., cameras). In particular embodiments, the environmental data 430 or portions thereof may be included as the metadata of any of the specific sensor data types. 
For example, if lighting conditions could impact camera data 401, environmental data 430 relating to lighting (e.g., time and date, visibility condition, etc.) may be included as the camera's metadata 402. As another example, if precipitation could impact LiDAR data 411, environmental data 430 relating to precipitation may be included in the LiDAR's metadata 412."). One of ordinary skill in the art would recognize that the data relating to the data-gathering vehicle identifies individual devices, thereby identifying the number of devices for each scan, and that the time and date of each scan would thereby identify the number of unique scans at a location; and
including the one or more locations in the map layer based on determining one or more of: (i) that the number of LiDAR scans is greater than a threshold number of LiDAR scans, (ii) that the duration of LiDAR scans is greater than a threshold duration of LiDAR scans, or (iii) that the number of the one or more mobile devices is greater than a threshold number of mobile devices ([0037] lines 6-20, " A score representing the comparison may be generated and compared with a threshold requirement, the satisfaction of which may cause the training to terminate. Additionally or alternatively, training may be deemed complete when a threshold number of training samples have been processed and/or a threshold amount of training time has been expended. If the system determines that training is incomplete, the system may repeat another iteration of the training process, starting at, e.g., step 510 and have the machine-learning model learn from another training sample. On the other hand, if the system determines that training is complete, the trained model may then be used in operation to automatically process data from data-gathering vehicles and generate latent representations and/or map data."). The number of training samples, taken from LiDAR scans, is the number of LiDAR scans, and the threshold for completeness, following which map data is generated, is the threshold number of scans.
Regarding claim 4, Arditi in view of Belkin and Dudzik teaches The method of claim 1, further comprising: processing the LiDAR usage data to determine parameters including a date or time of LiDAR usage or scanning, a manufacturer of a LiDAR sensor used by the one or more devices, a model of the LiDAR sensor, a type of the LiDAR sensor, one or more capabilities or features of the LiDAR sensor, a manufacturer of the one or more devices, a model of the one or more devices, a type of the one or more devices, one or more capabilities or features of the one or more devices, a LiDAR-based localization success rate, a type of object detected by a LiDAR scan, a number of scans in a scanning session, a scanning orientation, a moving versus stationary LiDAR scan, contextual information of a scanning environment, or a combination thereof, wherein the map layer further associates one or more of the parameters with the one or more locations (Arditi: [0025] lines 43-54, "the LiDAR data may also be associated with metadata 412 that provide contextual information relating to the LiDAR data 411 and the particular LiDAR used for capturing the LiDAR data 411. The metadata 412 may, e.g., include any combination of: the mounting location of the LiDAR in three-dimensional space relative to a reference point; the brand, model, and/or year of the LiDAR sensor; capabilities of the LiDAR (e.g., density and spacing information, laser type, sensor type); configurations (e.g., sensitivity and rotation speed); information pertaining to the LiDAR's mount (e.g., model, size, etc.); and any other pertinent data at the time the LiDAR data 411 was captured."; [0026] lines 18-30, "The environmental data 430 may also include the time and date at which sensor data were obtained, since time and date may correlate with lighting conditions, which in turn may affect certain types of sensor measurements (e.g., cameras). 
In particular embodiments, the environmental data 430 or portions thereof may be included as the metadata of any of the specific sensor data types. For example, if lighting conditions could impact camera data 401, environmental data 430 relating to lighting (e.g., time and date, visibility condition, etc.) may be included as the camera's metadata 402. As another example, if precipitation could impact LiDAR data 411, environmental data 430 relating to precipitation may be included in the LiDAR's metadata 412.").
Regarding claim 5, Arditi in view of Belkin and Dudzik teaches the method of claim 1, further comprising: processing the output to generate a user interface depicting a representation of the map layer (Arditi: [0068] lines 1-13, "In particular embodiments, third-party system 970 may be a network-addressable computing system that may provide HD maps or host GPS maps, customer reviews, music or content, weather information, or any other suitable type of information. Third-party system 970 may generate, store, receive, and send relevant data, such as, for example, map data, customer review data from a customer review website, weather data, or any other suitable type of data. Third-party system 970 may be accessed by the other computing entities of the network environment either directly or via network 910. For example, user device 930 may access the third-party system 970 via network 910, or via transportation management system 960."). The accessibility of the map data from the user device constitutes the generation of a user interface depicting a representation of the map layer.
Regarding claim 6, Arditi in view of Belkin and Dudzik teaches The method of claim 5, wherein the user interface presents a user interface element to initiate a filtering of the map layer, and wherein the filtering is based on one or more of the parameters (Arditi: [0066] lines 27-32, "Particular embodiments may provide interfaces that enable a user device 930 (which may belong to a ride requestor or provider), a transportation management system 960, vehicle system 940, or a third-party system 970 to process, transform, manage, retrieve, modify, add, or delete the information stored in data store."; [0055] lines 1-5, “Additionally, the HD Map System 800 may include or have access to a data store containing in-operation data set 840 used for generating the HD map. The in-operation data set 840 may include any number of data instances, each associated with a particular geographic location.”). The ability to “process, transform, manage, retrieve, modify, add, or delete” the data for generating the HD map is the filtering based on parameters.
Regarding claim 7, Arditi in view of Belkin and Dudzik teaches The method of claim 1, further comprising: processing the output to recommend a LiDAR scan parameter for a subsequent mobile device to perform a LiDAR scan (Arditi: [0047] lines 15-21, "At step 745, if the comparison results in a determination that the detected object(s) exists or is known in the HD map (e.g., the confidence score in the object existing in the map is higher than a threshold), then the system may not perform any map-updating operation and return to obtaining sensor data (e.g., step 710)."). The return to obtaining sensor data is the recommending of a LiDAR scan parameter for a subsequent mobile device to perform a LiDAR scan.
Regarding claim 8, Arditi in view of Belkin and Dudzik teaches the method of claim 1, further comprising: determining a geographic area, an indoor space, or a combination thereof where camera usage is prohibited based on the map layer (Arditi: [0041] lines 4-14, "step 580, the system may associate the generated map data with the particular location in an HD map. For instance, if the generated map data is associated with sensor data gathered at a particular longitude and latitude, the corresponding location in the HD map may be associated with the generated map data. Thus, when the HD map is being used, the system using the HD map (e.g., an autonomous vehicle) may query for data associated with that longitudinal-latitudinal coordinate to retrieve the generated map data."; [0042] lines 16-19, "an HD map may be considered as complete when it has sufficient map data along a route to help an autonomous vehicle navigate from a source location to a destination location").
Even if Arditi in view of Belkin and Dudzik does not teach the method, wherein camera usage is prohibited, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the method of Arditi in view of Belkin and Dudzik where camera usage is prohibited because said limitation amounts to an intended use of the method. It has been held that a recitation with respect to the manner in which a claimed apparatus is intended to be employed does not differentiate the claimed apparatus from a prior art apparatus satisfying the claimed structural limitations. Ex parte Masham, 2 USPQ2d 1647 (Bd. Pat. App. & Inter. 1987). Arditi teaches motivation for utilizing the method with different sensor types, including LiDAR without the use of a camera ([0016] lines 1-16, "FIGS. 3A and 3B illustrate examples of potential differences between data-gathering vehicles. As represented by the sensors 320A and 320B of vehicles 300A and 300B, respectively, data-gather vehicles may have different types of sensors (e.g., LiDARs, cameras, radars, etc.), each with its unique strengths and weaknesses. Not only may the sensor types differ (e.g., LiDAR vs. camera vs. radar), the particular model, capability, age, condition, calibration, and/or configuration of each sensor may also differ from another sensor of the same type (as used herein, “sensor type” refers to a category of sensors designed for capturing a particular type of sensory information, such as LiDAR data from a LiDAR and images from a camera; differences in “sensor type” is therefore limited to such categorical differences and does not include differences in specific models, configurations, or setup of the same type of sensor).").
Regarding claim 11, Arditi in view of Belkin and Dudzik teaches the method of claim 1, further comprising: receiving the LiDAR usage data from the one or more mobile devices (Arditi: LiDAR data 411, data-gathering vehicle; Belkin: page 446 “Experimental results are obtained for own data from mobile ground robot ClearPath Husky with Velodyne HDL-32E lidar.”).
Regarding claim 12, Arditi in view of Belkin and Dudzik teaches the method of claim 1, further comprising: receiving LiDAR sensor data from the one or more mobile devices (Arditi: LiDAR data 411, data-gathering vehicle; Belkin: page 446 “Experimental results are obtained for own data from mobile ground robot ClearPath Husky with Velodyne HDL-32E lidar.”); and
processing the received LiDAR sensor data to determine the LiDAR usage data (Arditi: [0025] lines 43-54, "the LiDAR data may also be associated with metadata 412 that provide contextual information relating to the LiDAR data 411 and the particular LiDAR used for capturing the LiDAR data 411. The metadata 412 may, e.g., include any combination of: the mounting location of the LiDAR in three-dimensional space relative to a reference point; the brand, model, and/or year of the LiDAR sensor; capabilities of the LiDAR (e.g., density and spacing information, laser type, sensor type); configurations (e.g., sensitivity and rotation speed); information pertaining to the LiDAR's mount (e.g., model, size, etc.); and any other pertinent data at the time the LiDAR data 411 was captured.").
Regarding claim 13, Arditi teaches an apparatus (Fig. 9) comprising:
at least one processor (Fig. 10, processor 1002); and
at least one memory (memory 1004) including computer program code for one or more programs ([0081] lines 1-3, “In particular embodiments, processor 1002 includes hardware for executing instructions, such as those making up a computer program.”),
the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following (Figs. 4, 5, and 7),
determine Light Detection and Ranging (LiDAR) usage data (LiDAR data 411, [0025] lines 43-54, "the LiDAR data may also be associated with metadata 412 that provide contextual information relating to the LiDAR data 411 and the particular LiDAR used for capturing the LiDAR data 411. The metadata 412 may, e.g., include any combination of: the mounting location of the LiDAR in three-dimensional space relative to a reference point; the brand, model, and/or year of the LiDAR sensor; capabilities of the LiDAR (e.g., density and spacing information, laser type, sensor type); configurations (e.g., sensitivity and rotation speed); information pertaining to the LiDAR's mount (e.g., model, size, etc.); and any other pertinent data at the time the LiDAR data 411 was captured.") generated by one or more mobile devices (data-gathering vehicles 300A, 300B) at one or more locations ([0045] lines 4-7, “The process may begin at step 710, at which a computing system of an autonomous vehicle driving on a road may obtain sensor data at a particular location (e.g., corresponding to an x latitude and y longitude)”);
generate a map layer of a geographic database based on the LiDAR usage data ([0018] lines 1-5, "FIG. 4 illustrates an embodiment of a machine-learning model architecture for transforming data received from different data-gathering vehicles into a common space 460 and using the transformed data to generate HD map data 480"), wherein the map layer indicates one or more locations where a LiDAR device was used by the one or more mobile devices based on the LiDAR usage data ([0030] lines 20-22, "The resulting HD map data 480 would be associated with the particular location at which the sensor data were captured."); and
provide the map layer as an output (Fig. 7, steps 770 and 780.).
Even if Arditi does not explicitly teach that the usage data relates to an indoor usage for indoor positioning, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the method for indoor positioning. It has been held that a recitation with respect to the manner in which a claimed apparatus is intended to be employed does not differentiate the claimed apparatus from a prior art apparatus satisfying the claimed structural limitations. Ex parte Masham, 2 USPQ2d 1647 (Bd. Pat. App. & Inter. 1987).
Arditi does not teach the apparatus, wherein the usage data relates to a usage of LiDAR-based localization;
process the LiDAR usage data to determine a LiDAR-based localization success rate at the one or more locations; and
wherein the map layer depicts a visualization including a heat map indicating the LiDAR-based localization success rate at the one or more locations.
Belkin teaches an analogous apparatus for localization from LiDAR data (Abstract), wherein the usage data relates to usage of LiDAR-based localization (Fig. 1, lidar-based localization; page 444 “Localization module. Localization is performed via scan batch processing. Geometric features (corner and surface points) are extracted from the point cloud and matched with edges and planes on the map. The current lidar scan is aligned against the map by minimization of distance between features and their correspondences. This is done in a similar way to lidar-based odometry in the Map reconstructor module.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus of Arditi to include the LiDAR-based localization of Belkin because the use of LiDAR to localize a mobile device is a well-known technique in the art that would yield predictable results, and because localizing the device from its own LiDAR measurements would advantageously increase the accuracy of the localization by utilizing locally measured data.
Arditi in view of Belkin does not teach the apparatus comprising: process the LiDAR usage data to determine a LiDAR-based localization success rate at the one or more locations; and
wherein the map layer depicts a visualization including a heat map indicating the LiDAR-based localization success rate at the one or more locations.
Dudzik teaches an analogous apparatus (Fig. 5), comprising:
process the LiDAR usage data to determine a LiDAR-based localization ([0077] lines 8-18, “the localization component 520 can use SLAM (simultaneous localization and mapping) or CLAMS (calibration, localization and mapping, simultaneously) to receive time-of-flight data, image data, lidar data, radar data, sonar data, IMU data, GPS data, wheel encoder data, or any combination thereof, and the like to accurately determine a location of the autonomous vehicle. In some instances, the localization component 520 can provide data to various components of the vehicle 502 to determine an initial position of an autonomous vehicle for generating a trajectory, as discussed herein.”) success rate at the one or more locations ([0078] lines 6-16, “the localization component 520 can provide functionality to determine an error associated with the local map, the three-dimensional map, and/or the one or more sensor system(s) 506. For example, the localization component 520 can determine a position error (e.g., drift error) associated with the vehicle 502. Over time in operation, errors may accumulate, resulting in errors in positioning and/or trajectory data. In some instances, the localization component 520 can determine the error based on, for example, the position error meeting or exceeding a threshold value.”). The determination of the error associated with the local map, as part of the localization, is the localization success rate; and
the map layer depicts a visualization including a heat map indicating the LiDAR-based localization success rate at the one or more locations ([0086] lines 7-12, “the prediction component 526 can measure a track of an object and generate a discretized prediction probability map, a heat map, a probability distribution, a discretized probability distribution, and/or a trajectory for the object based on observed and predicted behavior”; [0089] lines 27-35, “the map(s) 530 can be used in connection with the localization component 520, the perception component 522, the machine learned component 524, the prediction component 226, and/or the planning component 528 to determine a location of the vehicle 502, identify objects in an environment, generate prediction probabilit(ies) associated with objects and/or the vehicle 502, and/or generate routes and/or trajectories to navigate within an environment.”). The map layer (“maps”) is generated in connection with the localization success rate (“error associated with the local map”) and depicts a heat map (“heat map” and “probability map”) of the determined localization success rates.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus of Arditi in view of Belkin to include the localization success rate, and the generation of a heat map indicating said rate, of Dudzik because the visualization of localization success would yield predictable and advantageous results, including indicating to a user a confidence or error in the mapping process at a location.
Regarding claim 14, Arditi in view of Belkin and Dudzik teaches The apparatus of claim 13, wherein the apparatus is caused to: process the LiDAR usage data to determine one or more of: (i) a number of LiDAR scans that have occurred at the one or more locations, (ii) a duration of LiDAR scans that have occurred at the one or more locations, or (iii) a number of the one or more mobile devices that have performed a LiDAR scan at the one or more locations (Arditi: [0026] “In particular embodiments, the CNN 450 may also be configured to receive environmental data 430 that describe the environment in which the sensor data (e.g., camera data 401, LiDAR data 411, radar data 421, etc.) were obtained. Each data-gathering vehicle may be associated with its own environment data 430, which would be generally associated with any sensor data output by any sensor of the vehicle. The environmental data 430 may include, for example, data relating to the data-gathering vehicle, such as its make, model, year, dimensions (e.g., length, width, and height), weight, tire size, tire pressure, mount locations, speed and/or vibration of the vehicle during data captured, and any other pertinent information that may affect measurement results. The environmental data 430 may also include the time and date at which sensor data were obtained, since time and date may correlate with lighting conditions, which in turn may affect certain types of sensor measurements (e.g., cameras). In particular embodiments, the environmental data 430 or portions thereof may be included as the metadata of any of the specific sensor data types.
For example, if lighting conditions could impact camera data 401, environmental data 430 relating to lighting (e.g., time and date, visibility condition, etc.) may be included as the camera's metadata 402. As another example, if precipitation could impact LiDAR data 411, environmental data 430 relating to precipitation may be included in the LiDAR's metadata 412."). One of ordinary skill in the art would recognize that the data relating to the data-gathering vehicle identifies individual devices, thereby identifying the number of devices for each scan, and that the time and date of each scan would thereby identify the number of unique scans at a location;
and include the one or more locations in the map layer based on determining one or more of: (i) that the number of LiDAR scans is greater than a threshold number of LiDAR scans, (ii) that the duration of LiDAR scans is greater than a threshold duration of LiDAR scans, or (iii) that the number of the one or more mobile devices is greater than a threshold number of mobile devices (Arditi [0037] lines 6-20, "A score representing the comparison may be generated and compared with a threshold requirement, the satisfaction of which may cause the training to terminate. Additionally or alternatively, training may be deemed complete when a threshold number of training samples have been processed and/or a threshold amount of training time has been expended. If the system determines that training is incomplete, the system may repeat another iteration of the training process, starting at, e.g., step 510 and have the machine-learning model learn from another training sample. On the other hand, if the system determines that training is complete, the trained model may then be used in operation to automatically process data from data-gathering vehicles and generate latent representations and/or map data."). The number of training samples, taken from LiDAR scans, is the number of LiDAR scans, and the threshold for completeness, following which map data is generated, is the threshold number of scans.
Regarding claim 16, Arditi in view of Belkin and Dudzik teaches The apparatus of claim 13, wherein the apparatus is caused to: process the LiDAR usage data to determine parameters including a date or time of LiDAR usage or scanning, a manufacturer, model, type, year, capabilities/features, a manufacturer of a LiDAR sensor used by the one or more devices, a model of the LiDAR sensor, a type of the LiDAR sensor, one or more capabilities or features of the LiDAR sensor, a manufacturer of the one or more devices, a model of the one or more devices, a type of the one or more devices, one or more capabilities or features of the one or more devices, a LiDAR-based localization success rate, a type of object detected by a LiDAR scan, a number of scans in a scanning session, a scanning orientation, a moving versus stationary LiDAR scan, contextual information of a scanning environment, or a combination thereof, wherein the map layer further associates one or more of the parameters with the one or more locations (Arditi: [0025] lines 43-54, "the LiDAR data may also be associated with metadata 412 that provide contextual information relating to the LiDAR data 411 and the particular LiDAR used for capturing the LiDAR data 411. The metadata 412 may, e.g., include any combination of: the mounting location of the LiDAR in three-dimensional space relative to a reference point; the brand, model, and/or year of the LiDAR sensor; capabilities of the LiDAR (e.g., density and spacing information, laser type, sensor type); configurations (e.g., sensitivity and rotation speed); information pertaining to the LiDAR's mount (e.g., model, size, etc.); and any other pertinent data at the time the LiDAR data 411 was captured."; [0026] lines 18-30, "The environmental data 430 may also include the time and date at which sensor data were obtained, since time and date may correlate with lighting conditions, which in turn may affect certain types of sensor measurements (e.g., cameras). 
In particular embodiments, the environmental data 430 or portions thereof may be included as the metadata of any of the specific sensor data types. For example, if lighting conditions could impact camera data 401, environmental data 430 relating to lighting (e.g., time and date, visibility condition, etc.) may be included as the camera's metadata 402. As another example, if precipitation could impact LiDAR data 411, environmental data 430 relating to precipitation may be included in the LiDAR's metadata 412.").
Regarding claim 17, Arditi in view of Belkin and Dudzik teaches The apparatus of claim 13, wherein the apparatus is caused to: process the output to generate a user interface depicting a representation of the map layer (Arditi: [0068] lines 1-13, "In particular embodiments, third-party system 970 may be a network-addressable computing system that may provide HD maps or host GPS maps, customer reviews, music or content, weather information, or any other suitable type of information. Third-party system 970 may generate, store, receive, and send relevant data, such as, for example, map data, customer review data from a customer review website, weather data, or any other suitable type of data. Third-party system 970 may be accessed by the other computing entities of the network environment either directly or via network 910. For example, user device 930 may access the third-party system 970 via network 910, or via transportation management system 960."). The map data being accessible from the user device constitutes the generation of a user interface depicting a representation of the map layer.
Regarding claim 18, Arditi teaches A non-transitory computer-readable storage medium (Fig. 10, storage 1006), carrying one or more sequences of one or more instructions which, when executed by one or more processors ([0081] lines 4-9, “to execute instructions, processor 1002 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 1004, or storage 1006; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 1004, or storage 1006.”), cause an apparatus (Fig. 9) to at least perform the following steps:
determining, by one or more processors (Abstract; Fig. 10), Light Detection and Ranging (LiDAR) usage data (LiDAR data 411, [0025] lines 43-54, "the LiDAR data may also be associated with metadata 412 that provide contextual information relating to the LiDAR data 411 and the particular LiDAR used for capturing the LiDAR data 411. The metadata 412 may, e.g., include any combination of: the mounting location of the LiDAR in three-dimensional space relative to a reference point; the brand, model, and/or year of the LiDAR sensor; capabilities of the LiDAR (e.g., density and spacing information, laser type, sensor type); configurations (e.g., sensitivity and rotation speed); information pertaining to the LiDAR's mount (e.g., model, size, etc.); and any other pertinent data at the time the LiDAR data 411 was captured.") generated by one or more mobile devices (data-gathering vehicles 300A, 300B) at one or more locations ([0045] lines 4-7, “The process may begin at step 710, at which a computing system of an autonomous vehicle driving on a road may obtain sensor data at a particular location (e.g., corresponding to an x latitude and y longitude)”);
generating, by the one or more processors, a map layer of a geographic database based on the LiDAR usage data ([0018] lines 1-5, "FIG. 4 illustrates an embodiment of a machine-learning model architecture for transforming data received from different data-gathering vehicles into a common space 460 and using the transformed data to generate HD map data 480"); and
providing, by the one or more processors, the map layer as an output (Fig. 7, steps 770 and 780; Fig. 8, [0060] lines 4-6, “The driving/navigation module 860 may be configured to access a local HD map stored in an HD map data store 876.”). The accessing of the HD map is the providing of the map layer as an output.
Arditi does not teach the instructions, wherein the usage data relates to a usage of LiDAR-based localization;
processing, by the one or more processors, the LiDAR usage data to determine a LiDAR-based localization success rate at the one or more locations; and
wherein the map layer depicts a visualization including a heat map indicating the LiDAR-based localization success rate at the one or more locations
Belkin teaches analogous instructions for localization from LiDAR data (Abstract), wherein the usage data relates to usage of LiDAR-based localization (Fig. 1, lidar-based localization; page 444 “Localization module. Localization is performed via scan batch processing. Geometric features (corner and surface points) are extracted from the point cloud and matched with edges and planes on the map. The current lidar scan is aligned against the map by minimization of distance between features and their correspondences. This is done in a similar way to lidar-based odometry in the Map reconstructor module.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the instructions of Arditi to include the LiDAR-based localization of Belkin because the use of LiDAR to localize a mobile device is a well-known technique in the art that would yield predictable results, and because localizing the device from its own LiDAR measurements would advantageously increase the accuracy of the localization by utilizing locally measured data.
Arditi in view of Belkin does not teach the instructions comprising: processing, by the one or more processors, the LiDAR usage data to determine a LiDAR-based localization success rate at the one or more locations; and
wherein the map layer depicts a visualization including a heat map indicating the LiDAR-based localization success rate at the one or more locations.
Dudzik teaches analogous instructions, comprising: processing, by the one or more processors (one or more processors 516), the LiDAR usage data to determine a LiDAR-based localization ([0077] lines 8-18, “the localization component 520 can use SLAM (simultaneous localization and mapping) or CLAMS (calibration, localization and mapping, simultaneously) to receive time-of-flight data, image data, lidar data, radar data, sonar data, IMU data, GPS data, wheel encoder data, or any combination thereof, and the like to accurately determine a location of the autonomous vehicle. In some instances, the localization component 520 can provide data to various components of the vehicle 502 to determine an initial position of an autonomous vehicle for generating a trajectory, as discussed herein.”) success rate at the one or more locations ([0078] lines 6-16, “the localization component 520 can provide functionality to determine an error associated with the local map, the three-dimensional map, and/or the one or more sensor system(s) 506. For example, the localization component 520 can determine a position error (e.g., drift error) associated with the vehicle 502. Over time in operation, errors may accumulate, resulting in errors in positioning and/or trajectory data. In some instances, the localization component 520 can determine the error based on, for example, the position error meeting or exceeding a threshold value.”). The determination of error associated with the local map, as part of the localization, is the localization success rate; and
the map layer depicts a visualization including a heat map indicating the LiDAR-based localization success rate at the one or more locations ([0086] lines 7-12, “the prediction component 526 can measure a track of an object and generate a discretized prediction probability map, a heat map, a probability distribution, a discretized probability distribution, and/or a trajectory for the object based on observed and predicted behavior”; [0089] lines 27-35, “the map(s) 530 can be used in connection with the localization component 520, the perception component 522, the machine learned component 524, the prediction component 226, and/or the planning component 528 to determine a location of the vehicle 502, identify objects in an environment, generate prediction probabilit(ies) associated with objects and/or the vehicle 502, and/or generate routes and/or trajectories to navigate within an environment.”). The map layer (“maps”) is generated in connection with the localization success rate (“error associated with the local map”) and depicts a heat map (“heat map” and “probability map”) of the determined localization success rates.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the instructions of Arditi in view of Belkin to include the localization success rate, and the generation of a heat map indicating said rate, of Dudzik because the visualization of localization success would yield predictable and advantageous results, including indicating to a user a confidence or error in the mapping process at a location.
Regarding claim 19, Arditi in view of Belkin and Dudzik teaches The non-transitory computer-readable storage medium of claim 18, wherein the apparatus is caused to further perform: processing the LiDAR usage data to determine one or more of: (i) a number of LiDAR scans that have occurred at the one or more locations, (ii) a duration of LiDAR scans that have occurred at the one or more locations, or (iii) a number of the one or more mobile devices that have performed a LiDAR scan at the one or more locations (Arditi: [0026] “In particular embodiments, the CNN 450 may also be configured to receive environmental data 430 that describe the environment in which the sensor data (e.g., camera data 401, LiDAR data 411, radar data 421, etc.) were obtained. Each data-gathering vehicle may be associated with its own environment data 430, which would be generally associated with any sensor data output by any sensor of the vehicle. The environmental data 430 may include, for example, data relating to the data-gathering vehicle, such as its make, model, year, dimensions (e.g., length, width, and height), weight, tire size, tire pressure, mount locations, speed and/or vibration of the vehicle during data captured, and any other pertinent information that may affect measurement results. The environmental data 430 may also include the time and date at which sensor data were obtained, since time and date may correlate with lighting conditions, which in turn may affect certain types of sensor measurements (e.g., cameras).
In particular embodiments, the environmental data 430 or portions thereof may be included as the metadata of any of the specific sensor data types. For example, if lighting conditions could impact camera data 401, environmental data 430 relating to lighting (e.g., time and date, visibility condition, etc.) may be included as the camera's metadata 402. As another example, if precipitation could impact LiDAR data 411, environmental data 430 relating to precipitation may be included in the LiDAR's metadata 412."). One of ordinary skill in the art would recognize that the data relating to the data-gathering vehicle identifies individual devices, thereby identifying the number of devices for each scan, and that the time and date of each scan would thereby identify the number of unique scans at a location; and
including the one or more locations in the map layer based on determining one or more of: (i) that the number of LiDAR scans is greater than a threshold number of LiDAR scans, (ii) that the duration of LiDAR scans is greater than a threshold duration of LiDAR scans, or (iii) that the number of the one or more mobile devices is greater than a threshold number of mobile devices (Arditi: [0037] lines 6-20, " A score representing the comparison may be generated and compared with a threshold requirement, the satisfaction of which may cause the training to terminate. Additionally or alternatively, training may be deemed complete when a threshold number of training samples have been processed and/or a threshold amount of training time has been expended. If the system determines that training is incomplete, the system may repeat another iteration of the training process, starting at, e.g., step 510 and have the machine-learning model learn from another training sample. On the other hand, if the system determines that training is complete, the trained model may then be used in operation to automatically process data from data-gathering vehicles and generate latent representations and/or map data."). The number of training samples, taken from LiDAR scans, is the number of LiDAR scans, and the threshold for completeness, following which map data is generated, is the threshold number of scans.
Regarding claim 21, Arditi in view of Belkin and Dudzik teaches The method of claim 1, wherein the one or more mobile devices comprise one or more mobile phones (Arditi: [0063] lines 11-12, “user device 930 may be a smartphone with LTE connection”) equipped with one or more respective LiDAR sensors (Arditi: [0063] lines 1-4, “The user device 930, transportation management system 960, autonomous vehicle 940, and third-party system 970 may be communicatively connected or co-located with each other in whole or in part.”; [0066] lines 27-32, “Particular embodiments may provide interfaces that enable a user device 930 (which may belong to a ride requestor or provider), a transportation management system 960, vehicle system 940, or a third-party system 970 to process, transform, manage, retrieve, modify, add, or delete the information stored in data store.”; [0069] lines 8-17, “through the network 910, the transportation management system 960 or third-party system 970 may receive sensor data, associated metadata, object classifications, and/or environmental data from the autonomous vehicles 940. In particular embodiments, the transportation management system 960 or third-party system 970 may also send autonomous vehicles 940 an HD map, map data associated with a particular location, latent representation of data associated with a particular location, and/or instructions for local map updates.”). One of ordinary skill in the art would recognize that a mobile phone (e.g., a “smartphone”) that is communicatively connected to a device with a LiDAR sensor, and thereby able to retrieve information from systems including that sensor, would be a mobile phone equipped with one or more respective LiDAR sensors.
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Arditi in view of Belkin and Dudzik as applied to claim 1 above, and further in view of Lin et al. (US 20190391800 A1).
Regarding claim 9, Arditi in view of Belkin and Dudzik teaches the method of claim 1.
Arditi in view of Belkin and Dudzik does not teach the method, further comprising: processing the output to generate a geofenced area associated with LiDAR usage.
Lin teaches an analogous method of analyzing location information from sensors (Abstract), further comprising: processing the output to generate a geofenced area associated with LiDAR usage ([0139] lines 1-6, "FIG. 5 illustrates a visualization for displaying a map that allows a DM to draw a geofence for software update distributions, according to an embodiment. In this example, dynamic and interactive visualization 500 is shown that includes a real-time location 502 of a mobility client on map 501."; [0101] lines 1-8, "In an embodiment, geofence crossings can be used to determine if a corporate policy has been violated. For example, drivers for a taxi service may be prohibited from traveling within or outside a certain geographic region enclosed by a geofence. If the mobility client is a self-driving vehicle, then Extended Data fields(s) 208f can include mobility client data specific to self-driving vehicles, such as LiDAR").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Arditi in view of Belkin and Dudzik to include the generation of a geofenced area of Lin because it would yield predictable and advantageous results of sectioning off an area of interest based on sensor data, thereby utilizing sensing and computational resources in a more directed way.
Regarding claim 10, Arditi in view of Belkin, Dudzik, and Lin teaches the method of claim 9, wherein the geofenced area indicates a geographic area for transitioning to LiDAR for localization (Lin: [0196] lines 14-26, "For self-driving vehicles, the predictive maintenance processes can predict when sensors (e.g., stereo camera, LiDAR, Radar, Sonar, accelerometers, gyroscopes) are failing or need to be recalibrated. The prediction maintenance processes can also be extended to predicting occurrences of events not related directly to vehicles, such as predicting a traffic jam based on sensor data collected from crowdsourcing a large number of vehicles that are traveling on a particular road or highway. Such predictions can be used to generate map tiles with traffic overlays (e.g., color coding routes), which can be sent to mobility clients when the mobility clients enter a specific geographic region or geofence (see, FIG. 5)").
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Arditi in view of Belkin and Dudzik as applied to claim 1 above, and further in view of Wirola et al. (US 9304970 B2, previously cited).
Regarding claim 22, Arditi in view of Belkin and Dudzik teaches The method of claim 1.
Arditi in view of Belkin and Dudzik does not teach the method, wherein the one or more clusters are depicted in the map layer using one or more graphics, one or more thumbnails, one or more images of one or more objects at the one or more locations, or a combination thereof.
Wirola teaches an analogous method wherein the one or more clusters (col 7 lines 3-13 “Electronic positioning/mapping may be based on “crowd-sourced” information that may comprise, for example, location-related information that is collected by users and that is voluntarily provided for positioning/mapping purposes. The crowd-sourced information may further be considered “sensor-surveyed” in that the information may be recorded by sensors in user apparatuses. At least some of the sensed information may then be stored in one or more databases as “extended fingerprints” that may comprise elements extracted from the sensed information that are usable for positioning/mapping.”) are depicted in the map layer using one or more graphics, one or more thumbnails, one or more images of one or more objects at the one or more locations, or a combination thereof (col 15 lines 29-36, “An example of how extended fingerprint information may be utilized is disclosed in FIG. 9. User 900 may be present somewhere in hallway 800. User 900 may be implementing positioning or mapping functionality on an apparatus. For example, the apparatus of user 900 may execute applications that provide textual location/coordinate information, that graphically display the location of the user within hallway 800 or point out a direction towards an objective, etc.”). The apparatus graphically displaying the location of the user within the hallway is at least depicting the map layer using one or more graphics or images of one or more objects at the location.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Arditi in view of Belkin and Dudzik to include the graphics of Wirola because it would yield predictable and advantageous results of displaying the map for one or more users, thereby enabling users to visually check for potential inaccuracies.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN BUTLER GEISS whose telephone number is (571)270-1248. The examiner can normally be reached Monday - Friday 7:30 am - 4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Catherine Rastovski can be reached at (571)270-0349. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.B.G./Examiner, Art Unit 2863
/Catherine T. Rastovski/Supervisory Primary Examiner, Art Unit 2863