DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in response to the amendments filed on 12/02/2025, in which claims 1, 5, 13-17, and 22 are amended, claims 8 and 19 are cancelled, and claim 23 is new. Claims 1-7, 9-18, and 20-23 are rejected.
Response to Arguments
Applicant’s amendments and arguments, see REMARKS, filed 12/02/2025, with respect to the rejection of claims 1-12 and 22-23 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the previous rejections have been withdrawn. However, a new ground of rejection over He et al. is presented below.
Applicant’s arguments with respect to the rejection of claims 13-21 under 35 U.S.C. § 103 have been fully considered but are not persuasive. However, Applicant has amended the claims to remove the limitations taught by the secondary reference Shiba. Therefore, a new rejection of claims 13-18 under 35 U.S.C. § 102 is presented below.
Applicant’s arguments with respect to claim(s) 1-12 and 22-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
With respect to the rejection of claim 13, Applicant argues that the prior art of record does not explicitly teach “determine one or more alignment parameters based at least on a cost space corresponding to a plurality of different potential poses of an ego-machine for a particular point in time that corresponds to capture of sensor data associated with the ego-machine, the plurality of potential poses differing from an initial pose determined for the ego-machine for the particular point in time, the cost space indicating different degrees of alignment between a subset of points selected from the sensor data based at least on one or more selection criteria individually corresponding to one or more target parameters of the subset of points, as aligned based at least on the plurality of potential poses, and map data associated with a geographical area, the one or more alignment parameters including one or more third pose parameters of the ego-machine.”
The Examiner disagrees with this assertion. Viswanathan discloses: “In one embodiment, the system 100 may start lateral searching the lateral positions of the vehicle 101 that are near or proximate to the middle or the mean of the possible poses (e.g., within the area 223 of FIG. 2C) before proceeding to solve for the vehicle heading.” (¶ [0038]) Here, the system is identifying a subset of points based on a selection criterion, e.g., near or proximate to the middle or mean, corresponding to possible poses within an area, i.e., corresponding to one or more target parameters of the subset of points.
For the above reasons, the Examiner finds the arguments directed towards claims 13-18 unpersuasive.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1, 2, 4, 5, 7, 9-12, 22, and 23 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by He et al. (US 2021/0158546 A1, “He”).
Regarding claim 1, He discloses an updated point cloud registration pipeline based on an ADMM algorithm for autonomous vehicles and teaches:
A method comprising: generating a pose space corresponding to a first point cloud generated using first sensor data selected from a first sensor data set captured using one or more sensors corresponding to an ego-machine, (At operation 901, the method 900 receives point clouds and corresponding poses of a region from a LIDAR sensor. The point clouds may represent images captured by the LIDAR sensor of a region corresponding to the initial poses (position, orientation) of the ADV – See at least ¶ [0083]) the first sensor data selected for inclusion in the first point cloud based at least on one or more criteria individually corresponding to generation of the first point cloud, the generating of the pose space further including: (At operation 903, the method 900 selects those point cloud poses identified as having a high confidence level during the data capture phase as anchor poses because the optimized poses of these poses after point cloud registration are not expected to change by more than a threshold from their initial poses – See at least ¶ [0084])
determining an initial pose for the ego-machine for a particular point in time based at least on the first point cloud; (At operation 907, the method 900 searches for related point cloud pairs or frame pairs in each partition. Each frame pair may include a non-anchor pose and an anchor pose. In one embodiment, operation 907 may identify frame pairs based on timestamps corresponding to the frame pairs, for example, when two poses have consecutive timestamps or timestamps that are within a time threshold. In one embodiment, operation 907 may identify frame pairs based on the positions corresponding to the frame pairs, for example, when the geometric distance between the two poses is within a distance threshold – See at least ¶ [0086])
determining a plurality of pose parameter sets for the particular point in time based at least on the initial pose, the determining of individual pose parameter sets of the plurality of pose parameter sets respectively including determining one or more hypothetical pose parameter values with respect to the initial pose and with respect to the particular point in time; (At operation 909, the method 900 selects points in the identified frame pairs in each partition. For each frame pair, operation 909 may select points from the non-anchor pose, i.e., a hypothetical pose parameter value, and corresponding points from the anchor pose to apply the ICP algorithm – See at least ¶ [0087])
determining, for the pose space, a cost space by performing a cost determination for the plurality of pose parameter sets with respect to second sensor data; and (At operation 911, the method 900 applies the ICP algorithm by solving the bundle adjustment equation updated with a regularity term in each partition for the selected points of frame pairs. The regularity term may be a measure of the sum of the geometric distance between the current estimate of the poses and the previous or the initial estimates of the poses of the frame pairs. By minimizing the cost function of the bundle adjustment equation that includes the regularity term for the poses, the ICP algorithm minimizes the differences between successive estimates of the poses and creates intermediate estimates from which the ICP … – See at least ¶ [0088])
causing performance of one or more autonomous driving operations based at least on an aligning of the first sensor data and the second sensor data that is based at least on the cost space. (An autonomous vehicle refers to a vehicle that can be configured in an autonomous mode in which the vehicle navigates through an environment with little or no input from a driver. Such an autonomous vehicle can include a sensor system having one or more sensors that are configured to detect information about the environment in which the vehicle operates. The vehicle and its associated controller(s) use the detected information to navigate through the environment. Autonomous vehicle 101 can operate in a manual mode, a full autonomous mode, or a partial autonomous mode – See at least ¶ [0028])
Regarding claim 2, He further teaches:
wherein the second sensor data is included in map data of a geographical area. (Point cloud registration during construction of the HD point cloud map estimates the LIDAR's GPS positions and poses used during the data capture phase to align point clouds of the area to be mapped. After alignment of the point cloud data, an HD 2D or 3D point cloud map may be constructed from the raw point cloud map. To reduce the computational complexity of the point cloud registration, the map area may be divided into smaller partitions or sub maps. Point cloud registration for the sub-map may be implemented in parallel on computation nodes of a computing cluster using the regional iterative closest point (ICP) algorithm – See at least ¶ [0021])
Regarding claim 4, He further teaches:
further comprising generating map data based on the aligning of the first sensor data and the second sensor data. (Point cloud registration during construction of the HD point cloud map estimates the LIDAR's GPS positions and poses used during the data capture phase to align point clouds of the area to be mapped. After alignment of the point cloud data, an HD 2D or 3D point cloud map may be constructed from the raw point cloud map – See at least ¶ [0021]; The vehicle and its associated controller(s) use the detected information to navigate through the environment. Autonomous vehicle 101 can operate in a manual mode, a full autonomous mode, or a partial autonomous mode – See at least ¶ [0028])
Regarding claims 5 and 23, He further teaches:
wherein the one or more criteria are based at least on one or more of:
a target number of data points for the first point cloud;
a signal strength threshold corresponding to the first point cloud;
a total number of data points included in the first sensor data set; (In one embodiment, block separation module 803 may partition the point clouds so that the maximum distance between frame pairs in a block does not exceed a threshold. In one embodiment, block separation module 803 may partition the point clouds so that the maximum number of point clouds in a block does not exceed a threshold since the complexity and the computational loading of the ICP algorithm is a function of the number of frame pairs – See at least ¶ [0074])
a target resolution of the first point cloud;
a target data size of the first point cloud;
one or more target map parameters of a map generated using the first point cloud;
one or more target localization parameters of a localization modality corresponding to the first point cloud;
a target spatial coverage of the first point cloud; or
a target angular coverage of the first point cloud.
Regarding claim 7, He further teaches:
wherein the at least one pose parameter set of the one or more pose parameter sets being determined based on an estimated geographical position of the ego-machine having the one or more sensors disposed thereon. (Point cloud registration during construction of the HD point cloud map estimates the LIDAR's GPS positions and poses used during the data capture phase to align point clouds of the area to be mapped. After alignment of the point cloud data, an HD 2D or 3D point cloud map may be constructed from the raw point cloud map – See at least ¶ [0021])
Regarding claim 9, He further teaches:
wherein aligning the first sensor data and the second sensor data is further based at least on one or more other previously determined cost spaces. (According to one embodiment, the regularity term added to the cost function for a pose may be a measure of the geometric distance between the current estimate of the pose and the previous or the initial estimate of the pose. By minimizing the cost function for the pose that includes the regularity term, the ICP algorithm minimizes the differences between successive estimates of the pose and creates intermediate estimates from which the ICP algorithm may restart if the solution is not satisfactory – See at least ¶ [0024])
Regarding claim 10, He further teaches:
wherein the cost determination is specific to a data type of the first sensor data and the second sensor data. (The ICP algorithm solves the bundle adjustment equation in each sub-map by minimizing a cost function associated with aligning the points of the non-anchor point cloud poses with reference to the corresponding points of anchor poses in each sub-map – See at least ¶ [0040])
Regarding claim 11, He further teaches:
wherein aligning the first sensor data and the second sensor data includes determining relative poses between the first sensor data and the second sensor data. (According to one embodiment, a method for point cloud registration may select point cloud poses that are characterized by higher confidence level during the data capture phase for use as reference poses. The selected poses, whose positions and orientations are fixed, are used as anchor poses for estimating and optimizing the positions and orientations of other point cloud poses during point cloud registration. Estimating and optimizing the poses of non-anchor poses with reference to the anchor poses reduces the number of decision variables to optimize during point cloud registration, reducing the memory requirement. In one embodiment, the method may use metrics such as the number of visible GPS satellites used to calculate the position of the pose, the standard deviation of the position, etc., to determine if the pose is an anchor pose – See at least ¶ [0022])
Regarding claim 12, He further teaches:
wherein determining the cost space includes determining one or more covariances for one or more individual pose parameter sets of the plurality of pose parameter sets. (The method also includes selecting fixed anchor poses from the initial poses. The method further includes separating the point clouds into a number of blocks or partitions. The method further includes identifying frame pairs from the point clouds in each block. Each frame pair includes a fixed anchor pose and a non-anchor pose. The method further includes identifying pairs of points from the point clouds of the frame pairs in each block. The method further includes optimizing the non-anchor poses with reference to the fixed anchor poses in each block based on the pairs of points of the frame pairs by constraining differences between the initial poses and the optimized poses of the non-anchor poses. The method further includes merging the optimized poses for the non-anchor poses from multiple blocks to generate optimized poses for the point clouds of the region by constraining differences between the initial poses and the optimized poses of the non-anchor poses in overlapping areas between the blocks – See at least ¶ [0026])
Regarding claim 22, He discloses an updated point cloud registration pipeline based on an ADMM algorithm for autonomous vehicles and teaches:
A system comprising: (An autonomous vehicle refers to a vehicle that can be configured in an autonomous mode in which the vehicle navigates through an environment with little or no input from a driver. Such an autonomous vehicle can include a sensor system having one or more sensors that are configured to detect information about the environment in which the vehicle operates. The vehicle and its associated controller(s) use the detected information to navigate through the environment. Autonomous vehicle 101 can operate in a manual mode, a full autonomous mode, or a partial autonomous mode – See at least ¶ [0028])
one or more processors to: (Some or all of the functions of autonomous vehicle 101 may be controlled or managed by perception and planning system 110 , especially when operating in an autonomous driving mode. Perception and planning system 110 includes the necessary hardware (e.g., processor(s), memory, storage) and software (e.g., operating system, planning and routing programs) to receive information from sensor system 115, control system 111, wireless communication system 112, and/or user interface system 113, process the received information, plan a route or path from a starting point to a destination point, and then drive vehicle 101 based on the planning and control information. Alternatively, perception and planning system 110 may be integrated with vehicle control system 111 – See at least ¶ [0035])
determine one or more alignment parameters based at least on a cost space corresponding to a pose space that includes a plurality of different potential poses of a machine for a particular point in time that corresponds to capture of sensor data associated with the ego-machine, (At operation 903, the method 900 selects those point cloud poses identified as having a high confidence level during the data capture phase as anchor poses because the optimized poses of these poses after point cloud registration are not expected to change by more than a threshold from their initial poses – See at least ¶ [0084]) the plurality of potential poses differing from an initial pose determined for the ego-machine for the particular point in time, the cost space indicating different degrees of alignment between a point cloud selected from the sensor data based at least on one or more criteria individually corresponding to one or more target parameters of the point cloud, as aligned based at least on the plurality of potential poses, and map data associated with a geographical area, the one or more alignment parameters including one or more third pose parameters of the ego-machine; (At operation 907, the method 900 searches for related point cloud pairs or frame pairs in each partition. Each frame pair may include a non-anchor pose and an anchor pose. In one embodiment, operation 907 may identify frame pairs based on timestamps corresponding to the frame pairs, for example, when two poses have consecutive timestamps or timestamps that are within a time threshold. In one embodiment, operation 907 may identify frame pairs based on the positions corresponding to the frame pairs, for example, when the geometric distance between the two poses is within a distance threshold – See at least ¶ [0086])
determine a set of pose parameters based at least on the one or more alignment parameters; and (At operation 911, the method 900 applies the ICP algorithm by solving the bundle adjustment equation updated with a regularity term in each partition for the selected points of frame pairs. The regularity term may be a measure of the sum of the geometric distance between the current estimate of the poses and the previous or the initial estimates of the poses of the frame pairs. By minimizing the cost function of the bundle adjustment equation that includes the regularity term for the poses, the ICP algorithm minimizes the differences between successive estimates of the poses and creates intermediate estimates from which the ICP … – See at least ¶ [0088])
causing performance of one or more control operations of the machine based at least on the set of pose parameters. (An autonomous vehicle refers to a vehicle that can be configured in an autonomous mode in which the vehicle navigates through an environment with little or no input from a driver. Such an autonomous vehicle can include a sensor system having one or more sensors that are configured to detect information about the environment in which the vehicle operates. The vehicle and its associated controller(s) use the detected information to navigate through the environment. Autonomous vehicle 101 can operate in a manual mode, a full autonomous mode, or a partial autonomous mode – See at least ¶ [0028])
Claim(s) 13-18 and 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Viswanathan et al. (US 2020/0174487 A1, “Viswanathan”).
Regarding claim 13, Viswanathan discloses a method and apparatus for estimating a localized position on a map and teaches:
A system comprising: (FIG. 1 is a diagram of a system capable of localizing a vehicle pose on a map, according to one embodiment – See at least ¶ [0029] and Fig. 1)
one or more processors to: (According to another embodiment, an apparatus for localizing a vehicle pose on a map, comprises at least one processor – See at least ¶ [0004])
determine one or more alignment parameters based at least on a cost space corresponding to a plurality of different potential poses of an ego-machine for a particular point in time that corresponds to capture of sensor data associated with the ego-machine, the plurality of potential poses differing from an initial pose determined for the ego-machine for the particular point in time (In one embodiment, the system 100 can identify all possible vehicle poses or positions based on the width of the lane 109, a width of a road, a width of the vehicle 101 or an average vehicle, or a combination thereof. In one instance, the possible vehicle poses may be based on a predicted or known lateral offset relative to a raw sensor reading (e.g., a GPS sensor 103). By way of example, a sensor reading (e.g., sensor readings 201-217) may typically be X distance from the actual location sensors 103 (e.g., GPS) – See at least ¶ [0037]) the cost space indicating different degrees of alignment between a subset of points selected from the sensor data based at least on one or more selection criteria individually corresponding to one or more target parameters of the subset of points, as aligned based at least on the plurality of potential poses, and map data associated with a geographical area, (In one embodiment, once all possible vehicle poses are known to the system 100 (e.g., lateral positions 201a-201m), the system 100 searches laterally over the poses to minimize an error, i.e., looks for the poses with the lowest cost, between the map 111 and the current set of sensor observations (e.g., sensor readings 201-217) and to obtain a best (and/or initial) lateral position of the vehicle 101… In one embodiment, the system 100 may start lateral searching the lateral positions of the vehicle 101 that are near or proximate to the middle or the mean of the possible poses (e.g., within the area 223 of FIG. 2C), i.e., a subset based at least on one or more selection criteria individually corresponding to one or more target parameters of the subset of points, before proceeding to solve for the vehicle heading. – See at least ¶ [0038]) the one or more alignment parameters including one or more third pose parameters of the ego-machine; (As shown in FIG. 2B, the potential poses, i.e., pose parameter sets, are different from each other and different from the initial pose. The potential poses are also for individual points in time, e.g., 201a-201m, and include one or more third pose parameters of the ego vehicle.)
determine a set of pose parameters based on the one or more alignment parameters; and (In one embodiment, once all possible vehicle poses are known to the system 100 (e.g., lateral positions 201a-201m), the system 100 searches laterally over the poses to minimize an error, i.e., alignment parameters, between the map 111 and the current set of sensor observations (e.g., sensor readings 201-217) and to obtain a best (and/or initial) lateral position of the vehicle 101 – See at least ¶ [0038])
causing performance of one or more control operations based at least on the set of pose parameters. (In step 507, the calculation module 407 determines a local optimum of the vehicle pose based on the selected lateral offset and the selected vehicle heading, wherein the vehicle pose is localized to the map based on the local optimum. The position and heading of the vehicle 101 are important because these localization components are required by the mapping platform 119 to provide the vehicle 101 with a proper steering angle and speed to ensure safe and stable travel on the lane 109 (i.e., centered within the lane) – See at least ¶ [0049])
Regarding claim 14, Viswanathan further teaches:
wherein: one or more of:
the set of pose parameters includes at least one pose parameter in common; or
the set of pose parameters includes at least one unique pose parameter. (the unknown values, parameters, and/or latent values are the localization components (e.g., lateral offsets, longitudinal offsets, and/or heading offsets from a true vehicle pose). Through the use of the EM framework, the system 100 can pick an arbitrary value for one set of unknowns (e.g., lateral offset or lateral position) and then use that value to estimate the second set of unknowns (e.g., vehicle heading, longitudinal offset or position). Thereafter, the system 100 uses the new value(s) to improve the estimate of the first set, and then keeps alternating between the two sets (i.e., iterating) until the respective values converge – See at least ¶ [0034])
Regarding claim 15, Viswanathan further teaches:
wherein the one or more target parameters of the subset of points are based at least on one or more of:
a target number of data points for the subset of points;
a signal strength threshold corresponding to the subset of points;
a total number of data points included in the sensor data;
a target resolution of the subset of points;
a target data size of the subset of points;
one or more target map parameters of a map generated using the subset of points;
one or more target localization parameters of a localization modality corresponding to the subset of points; (In one embodiment, once all possible vehicle poses are known to the system 100 (e.g., lateral positions 201a-201m), the system 100 searches laterally over the poses to minimize an error, i.e., looks for the poses with the lowest cost, between the map 111 and the current set of sensor observations (e.g., sensor readings 201-217) and to obtain a best (and/or initial) lateral position of the vehicle 101… In one embodiment, the system 100 may start lateral searching the lateral positions of the vehicle 101 that are near or proximate to the middle or the mean of the possible poses (e.g., within the area 223 of FIG. 2C), i.e., a subset, before proceeding to solve for the vehicle heading – See at least ¶ [0038])
a target spatial coverage of the subset of points; or
a target angular coverage of the subset of points.
Regarding claim 16, Viswanathan further teaches:
wherein the predicted position of the ego machine is based at least on the one or more of: one or more ego-motion parameters or one or more plane parameters. (during localization, the vehicle position and/or heading direction can be obtained from various sensors of the vehicle 101 – See at least ¶ [0029])
Regarding claim 17, Viswanathan further teaches:
wherein determining the one or more alignment parameters includes: obtaining a pose space, the pose space including a plurality of pose parameter sets, individual pose parameter sets of the plurality of pose parameter sets respectively including one or more hypothetical pose parameter values with respect to the subset of points such that the plurality of pose parameter sets respectively indicate the plurality of potential poses of the ego-machine; and (In one embodiment, once all possible vehicle poses are known to the system 100 (e.g., lateral positions 201a-201m), the system 100 searches laterally over the poses to minimize an error, i.e., looks for the poses with the lowest cost, between the map 111 and the current set of sensor observations (e.g., sensor readings 201-217) and to obtain a best (and/or initial) lateral position of the vehicle 101 – See at least ¶ [0038])
determining the cost space for the pose space, the determining of the cost space including performing a cost determination for the plurality of pose parameter sets, the cost determination being based at least on respective comparisons between the map data and the subset of points in which, for individual respective comparisons, the subset of points is oriented based at least on respective potential poses corresponding to individual pose parameter sets. (In step 507, the calculation module 407 determines a local optimum of the vehicle pose based on the selected lateral offset and the selected vehicle heading, wherein the vehicle pose is localized to the map based on the local optimum. The position and heading of the vehicle 101 are important because these localization components are required by the mapping platform 119 to provide the vehicle 101 with a proper steering angle and speed to ensure safe and stable travel on the lane 109 (i.e., centered within the lane) – See at least ¶ [0049])
Regarding claim 18, Viswanathan further teaches:
wherein the one or more hypothetical pose parameter values include one or more of:
a hypothetical geographical position of the ego-machine; or
a hypothetical orientation of the ego-machine. (In step 507, the calculation module 407 determines a local optimum of the vehicle pose based on the selected lateral offset and the selected vehicle heading, wherein the vehicle pose is localized to the map based on the local optimum. The position and heading of the vehicle 101 are important because these localization components are required by the mapping platform 119 to provide the vehicle 101 with a proper steering angle and speed to ensure safe and stable travel on the lane 109 (i.e., centered within the lane) – See at least ¶ [0049])
Regarding claim 21, Viswanathan further teaches:
wherein the system comprises one or more of:
a control system for an autonomous or semi-autonomous machine; (The position and heading of the vehicle 101 are important because these localization components are required by the mapping platform 119 to provide the vehicle 101 with a proper steering angle and speed to ensure safe and stable travel on the lane 109 (i.e., centered within the lane) – See at least ¶ [0049]. Examiner further notes that the invention is directed generally to the operation of autonomous vehicles.)
a perception system for an autonomous or semi-autonomous machine; (the sensor data module 401 may also receive an input (e.g., imagery data) from one or more other sensors 107 (e.g., a camera sensor, a LIDAR sensor, a RADAR sensor, etc.) associated with the vehicle 101 – See at least ¶ [0046])
a system for performing simulation operations;
a system for performing deep learning operations; (In one embodiment, the feature extraction process also comprises converting the feature data into a format suitable for input into the machine learning model 123. For example, the features or data items can be converted into an input vector or matrix for training by the machine learning model 123. Other examples of feature conversion can include but is not limited to: converting a text label to a Boolean flag; converting text labels to categorical labels; converting dates/times to a standardized format; normalizing or converting the extracted feature data into a common taxonomy or dictionary of terms; etc. – See at least ¶ [0061])
a system for generating synthetic data;
a system for generating multi-dimensional assets using a collaborative content platform;
a system implemented using an edge device;
a system implemented using a robot;
a system incorporating one or more virtual machines (VMs);
a system implemented at least partially in a data center; or (Accurately determining the vehicle 101's location map 111 enables planning of a route, both on fine and coarse scales. On a coarse scale, navigation maps (e.g., a digital map provided from a geographic database 117) allow a vehicle 101 to know what roads to use to reach a destination – See at least ¶ [0030])
a system implemented at least partially using cloud computing resources.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over He, as applied to claim 1, and in further view of Hansen et al. (US 2020/0333466 A1, “Hansen”).
Regarding claim 3, He does not explicitly teach wherein the one or more sensors that capture the first sensor data are disposed on a first ego-machine and the second sensor data is captured by one or more sensors disposed on a second ego-machine. However, Hansen discloses a ground intensity lidar localizer and teaches:
wherein the one or more sensors that capture the first sensor data are disposed on the first ego-machine and the second sensor data is captured by one or more sensors disposed on a second ego-machine. (To aid in navigating the environment, autonomous vehicles can also rely on preconstructed localization maps that contain detailed prior data. For example, the localization maps can encompass long stretches of highways, city road segments, and the like. In order to create and update these localization maps, the AV management system can use the sensor data that are collected and stored by a fleet of autonomous vehicles and/or human-driven vehicles – See at least ¶ [0035])
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the updated point cloud registration pipeline based on ADMM algorithm for autonomous vehicles of He to provide for the ground intensity lidar localization, as taught in Hansen, to improve techniques for capturing localization data in geometrically degenerate areas. (At Hansen ¶ [0009])
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over He et al. in view of Viswanathan.
Regarding claim 6, He does not explicitly teach wherein the one or more hypothetical pose parameters include one or more of: a hypothetical geographical position of the ego-machine; or a hypothetical orientation of the ego-machine. However, Viswanathan discloses a method and apparatus for estimating a localized position on a map and teaches:
wherein the one or more hypothetical pose parameters include one or more of:
a hypothetical geographical position of the ego-machine; or (In step 501, the sensor data module 401 receives an input specifying the vehicle pose (e.g., vehicle 101) with respect to a road lane (e.g., lane 109) of a map (e.g., a digital map 111) – See at least ¶ [0046])
a hypothetical orientation of the ego-machine. (By way of example, a vehicle pose may include a vehicle position and a vehicle heading, i.e., an orientation, relative to a lane or road (e.g., lane 109) – See at least ¶ [0046])
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the updated point cloud registration pipeline based on ADMM algorithm for autonomous vehicles of He to provide for the method and apparatus for estimating a localized position on a map, as taught in Viswanathan, for safe and stable autonomous driving. (At Viswanathan ¶ [0046])
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Viswanathan, as applied to claim 13, and in further view of Zhang et al. (US 2022/0185316 A1, “Zhang”).
Regarding claim 20, Viswanathan does not explicitly teach wherein the sensor data includes RADAR data and the map data includes RADAR map data. However, Zhang discloses change detection criteria for updating sensor-based reference maps and teaches:
wherein the sensor data includes RADAR data and the map data includes RADAR map data. (When the sensor data 112 includes radar data, differences can be identified in some features that are unique to radar, which if exploited enable more accurate identifications of change detections in a radar layer of the map 114 – See at least ¶ [0073])
In summary, Viswanathan discloses RADAR sensors producing RADAR data and map data that may use the RADAR data to localize the vehicle. Viswanathan does not explicitly teach that the map data includes RADAR map data. However, Zhang discloses change detection criteria for updating sensor-based reference maps and teaches a RADAR data layer in the map, which is compared to the RADAR sensor data from the vehicle.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to have modified the method and apparatus for estimating a localized position on a map of Viswanathan to provide for the change detection criteria for updating sensor-based reference maps, as taught in Zhang, to enable better real-time awareness to aid in control and improve driving-safety. (At Zhang ¶ [0016])
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHASE L COOLEY whose telephone number is (303)297-4355. The examiner can normally be reached Monday-Thursday 7-5 MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad can be reached on 571-270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.L.C./Examiner, Art Unit 3662
/ANISS CHAD/Supervisory Patent Examiner, Art Unit 3662