Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 03/17/2026 have been fully considered but they are not persuasive.
Regarding the placement of the LiDARs on the bumper: while the field of view (FOV) changes with LiDAR placement, and certain placements may have greater or more limited FOVs than others, this does not fundamentally change the operation of a LiDAR. At a high level of generality, the placement of LiDARs on various portions of a vehicle, around its whole body, is well-understood, routine, and conventional (WURC). As such, the applicant’s arguments concerning the LiDARs being specifically bumper-mounted are not persuasive.
As further evidence that placement of LiDARs on the bumper is generally equivalent to placement of LiDARs elsewhere (i.e., that the rearrangement-of-parts rationale of In re Japikse applies), please see:
US 20160282468 A1, [0034]: LiDAR mounted on bumpers or on any other side of the vehicle
US 20170349174 A1, [0028]: one or more LiDARs are implemented on various parts of the vehicle, including on the bumper
US 20200064483 A1, [0171]
The citing of additional or new references as evidence for a previous action’s assertion that something is WURC in the field is not generally considered to be a new ground of rejection (MPEP 1207.03(a)).
Additionally, while Di Cicco does disclose roof-mounted LiDARs, it also explicitly discloses body-mounted LiDARs, e.g. [0038]: “As described in more detail below with respect to FIGS. 13-19, a vehicle (such as an autonomous vehicle) can have multiple LiDAR devices mounted at different locations of the vehicle. Data from the LiDAR devices can be merged in order to take advantage of this redundancy.” Later, [0137] teaches that “mounted at different locations on the vehicle” is not meant to be limited to locations on the roof: “In an embodiment, the polar voxels 1702a-d are updated based on one or more metrics associated with the respective polar voxel. One type of metric is the time at which the polar voxel was last updated. In an embodiment, the polar voxels 1702a-d are updated in a staggered fashion in which some of the voxels are updated at a different time than the other voxels are updated. ... In an embodiment, some other voxel metrics include timestamp of the last update for a particular polar voxel, the number of LiDARs that are updating a particular given polar voxel, covariance of the spread of points in a particular polar voxel, and the ratio of the number of LiDARs attached to the roof of the vehicles to the number of LiDARs attached to the body of the vehicle.”
As such, the examiner is not persuaded that Di Cicco is directed to roof-mounted LiDARs in particular. While Di Cicco does contain figures showing roof-mounted LiDARs, when the specification is referenced it is clear that the roof mounting is merely illustrative: the specification repeatedly uses general language for the mounting locations of the LiDARs (e.g. [0038]), so the assertion that Di Cicco is limited only to roof-mounted LiDARs is not supported, and other sections of Di Cicco disclose body-mounted LiDARs (e.g. [0137]). Di Cicco is therefore directed to LiDARs mounted anywhere on a vehicle and the harmonizing/fusion of their point clouds, not just to LiDARs mounted on the roof.
Regarding the arguments concerning He (i.e., the clustering of points for detection and to reduce load on the embedded board): this limitation as recited in the claims is understood to have patentable weight only insofar as “to reduce a number of points” and “to reduce load on an embedded board” would inherently modify the clustering algorithm compared to a different reasoning/rationale for clustering. In the present case, both of these purposes for clustering are natural results of clustering. Clustering data points inherently reduces the number of points, and as cited in He the clustering is done as part of the object detection. As to the “reduce load” limitation, this is an inherent quality which flows naturally from the clustering: when the number of points analyzed is reduced (clustered), fewer determinations subsequently need to be performed across the resulting clusters for a given detection algorithm, and the load on the processor is thereby reduced, as illustrated in the sketch below.
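For illustration only (a minimal sketch of the inherency point above, not drawn from He or any other cited reference; the function name, voxel size, and data are hypothetical), clustering a point cloud necessarily reduces the number of points a downstream detection step must iterate over, and with it the processor load:

```python
import numpy as np

def voxel_cluster(points: np.ndarray, voxel_size: float = 0.2) -> np.ndarray:
    """Cluster a point cloud on a voxel grid: each occupied voxel is
    represented by the centroid of the points falling inside it."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n = inverse.max() + 1
    counts = np.bincount(inverse, minlength=n)
    return np.stack(
        [np.bincount(inverse, weights=points[:, d], minlength=n) / counts
         for d in range(points.shape[1])], axis=1)

raw = np.random.rand(100_000, 3) * 50.0   # simulated dense LiDAR scan
clusters = voxel_cluster(raw)
# Any per-point detection step now runs over len(clusters) << len(raw)
# inputs, so the load on the embedded board falls as a natural
# consequence of the clustering itself, not of any extra algorithm.
print(len(raw), "->", len(clusters))
```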
The arguments concerning Hoelscher (i.e., the filtering of clusters with fewer than a predetermined number of points) are likewise not persuasive.
When read in light of page 14 of the applicant’s specification, the filtering (of clusters/data points) so that only “reliable” data points are left is understood to include the filtering out of small temporary objects (e.g. leaves, paper) such that only the important/static objects (tree trunks, signs, etc.) remain. While Hoelscher is directed to fallen-object detection, the underlying purpose of the filtering is the same: Hoelscher is removing small temporary objects (or, more specifically, the clusters of points which likely correspond to such objects); as cited in [0075] of Hoelscher, the filtering is performed to remove noise data points and points corresponding to temporary/unimportant objects such as dust. Further, in Di Cicco the object detection/recognition is a step of the map creation, e.g. [0089]: “FIG. 5 shows an example of inputs 502a-d (e.g., sensors 121 shown in FIG. 1) and outputs 504a-d (e.g., sensor data) that is used by the perception module 402 (FIG. 4). One input 502a is a LiDAR (Light Detection and Ranging) system (e.g., LiDAR 123 shown in FIG. 1). LiDAR is a technology that uses light (e.g., bursts of light such as infrared light) to obtain data about physical objects in its line of sight. A LiDAR system produces LiDAR data as output 504a. For example, LiDAR data is collections of 3D or 2D points (also known as a point clouds) that are used to construct a representation of the environment 190.” As such, the distinction that Hoelscher is filtering as part of an object detection algorithm/step rather than to create a map is not persuasive, because the step of (reliable) object detection is part of creating a map using LiDAR data; i.e., to create a LiDAR map one must first determine the various objects (and their locations relative to the sensor and each other) that a LiDAR detects, and through that collection of detected objects a map is created. A sketch of the shared filtering principle appears below.
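For illustration only (a minimal sketch, not drawn from Hoelscher or the applicant’s specification; the threshold value and names are hypothetical), dropping clusters below a minimum point count removes discontinuous noise and small transient returns while keeping only reliable points:

```python
import numpy as np

def filter_small_clusters(clusters: list, min_points: int = 5) -> list:
    """Keep only clusters holding at least min_points points; tiny
    clusters (noise, dust, drifting leaves/paper) are not trusted and
    are not registered, leaving only reliable, persistent structure."""
    return [c for c in clusters if len(c) >= min_points]

# e.g. reliable = filter_small_clusters(extracted_clusters)  # hypothetical input
```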
Therefore, the grounds of rejection are maintained; updated grounds reflecting the applicant’s amendments appear below.
Claim Rejections - 35 USC § 103
Claim(s) 1-2, 6-9, and 12-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20200249353 A1 (“MERGING DATA FROM MULTIPLE LIDAR DEVICES”, Di Cicco et al.) in view of In re Japikse, further in view of US 20210323572 A1 (He et al., “A POINT CLOUDS REGISTRATION SYSTEM FOR AUTONOMOUS VEHICLES”), and further in view of US 20230182314 A1 (Hoelscher et al., “METHODS AND APPARATUSES FOR DROPPED OBJECT DETECTION”).
Regarding Claim 1, Di Cicco teaches “a system comprising: a first LiDAR and a second LiDAR” ([0139]: “The processor receives 1802 first LiDAR point cloud information from a first LiDAR device and second LiDAR point cloud information from a second LiDAR device. In an embodiment, the LiDAR devices are the LiDAR devices 1302, 1304 shown in FIG. 13, and the LiDAR point cloud information is the point clouds 1622, 1624 shown in FIG. 16.”); “a LiDAR data merge unit receiving data from the first LiDAR and the second LiDAR, aligning LiDAR times through time synchronization, and then converting the data into a point cloud type and merging the data” ([0133]: “FIG. 16 shows components of a system used to generate a consolidated point cloud 1600. Each LiDAR device 1602, 1612 has a processor 1604, 1614 (e.g., microprocessor, microcontroller) each of which is configured with a respective starting angle 1606, 1616 and frequency 1608, 1618. In use, the LiDAR devices 1602, 1612 generates point clouds 1622, 1624 that are received by a processor 1626 (e.g., an implementation of or component of the perception module 402 shown in FIG. 4). The points of the point clouds 1622, 1624 are associated with timestamp data 1628, 1630. The processor 1626 uses the starting angles 1606, 1616, frequencies 1608, 1618, and timestamp data 1628, 1630 to generate the consolidated point cloud 1600.” The LiDARs’ data outputs are aligned/synchronized via their timestamps and a merged/consolidated point cloud is created; see also the sketch below.); and “an electronic control unit (ECU) providing inertial data of the vehicle for correcting the data merged in the LiDAR data merge unit; and an the LiDAR data merge unit by using the inertial data of the vehicle received from the ECU to obtain LiDAR odometry for estimating a movement of the vehicle” ([0093]: “In some embodiments, outputs 504a-d are combined using a sensor fusion technique. Thus, either the individual outputs 504a-d are provided to other systems of the AV 100 (e.g., provided to a planning module 404 as shown in FIG. 4), or the combined output can be provided to the other systems, either in the form of a single combined output or multiple combined outputs of the same type (e.g., using the same combination technique or combining the same outputs or both) or different types type (e.g., using different respective combination techniques or combining different respective outputs or both) In some embodiments, an early fusion technique is used. An early fusion technique is characterized by combining outputs before one or more data processing steps are applied to the combined output. In some embodiments, a late fusion technique is used. A late fusion technique is characterized by combining outputs after one or more data processing steps are applied to the individual outputs.” This teaches that outputs are fused (merged) into a single output; earlier, [0089] establishes that the outputs 504a-d correspond to the inputs 502a-d, which come from the sensor(s) 121, and per [0060] these sensors include an IMU used to infer the vehicle’s position/state. See also [0087]: “The planning module 404 also receives data representing the AV position 418 from the localization module 408. The localization module 408 determines the AV position by using data from the sensors 121 and data from the database module 410 (e.g., a geographic data) to calculate a position.” The localization module uses the sensors 121 (i.e. LiDARs and IMU) to determine (extract) location, and [0085] teaches the extracting (planning) of a traveling route within the route based on the data from the perception module (i.e. LiDAR data and IMU data).). Additionally, Di Cicco teaches that its various modules are implemented via processors ([0059]).
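For illustration only (a minimal sketch, not drawn from Di Cicco; the clock-offset handling and window value are hypothetical simplifications of the common-clock scheme Di Cicco describes in [0133]), two LiDAR outputs can be time-aligned via their timestamps and merged into one consolidated cloud:

```python
import numpy as np

def merge_scans(cloud_a, stamps_a, cloud_b, stamps_b, offset_b, window=0.05):
    """Shift LiDAR B's timestamps onto LiDAR A's time base, keep points
    captured within the same time window, and stack them into one
    consolidated point cloud."""
    stamps_b = stamps_b + offset_b                    # time synchronization
    t0 = max(stamps_a.min(), stamps_b.min())
    in_a = (stamps_a >= t0) & (stamps_a < t0 + window)
    in_b = (stamps_b >= t0) & (stamps_b < t0 + window)
    return np.vstack([cloud_a[in_a], cloud_b[in_b]])
```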
Di Cicco, however, does not teach that the LiDARs are implemented on the bumper (“using a bumper-mounted dual LiDAR”), or that the vehicle control and navigation (LiDAR data processing and recognition) is implemented in a SLAM system; it teaches the localization portion (the “L”) as set forth above, but not necessarily the mapping (the “M”) (i.e. “to output data for map creation and location recognition” and “generating a 3D map of a road on which the vehicle travels, and extracting a location and a traveling route of the vehicle inside a road”).
Regarding the first difference, where the LiDARs are positioned: Di Cicco does teach that the LiDARs can generally be mounted anywhere on the vehicle ([0038] + [0115]: “Each of the two LiDAR devices 1302, 1304 is positioned at a different location on the AV 1300. In an embodiment, one of the devices 1302 is attached (e.g., welded, affixed, or mounted) at one position 1312, and the other device 1304 is attached at another position 1314. While some attachment techniques (e.g., welding) are semi-permanent and are unlikely to change during the life of the AV 1300, other attachment techniques (e.g., magnetic attachment) enable the LiDAR devices 1302, 1304 to be removed (e.g., for maintenance or replacement) or moved to a different position at a different time.”); however, it does not explicitly teach two LiDARs mounted onto the bumper of the vehicle.
However, as Di Cicco teaches generally any placement of the LiDARs on the vehicle, and at the level of generality currently recited regarding the placement of the LiDARs and their functioning, the specific implementation of the LiDARs onto the bumper is unpatentable as a simple rearrangement of parts as set forth by In re Japikse.
Currently, the specific placement (on a bumper of a vehicle) as opposed to elsewhere (e.g. on the roof of the vehicle) does not change the underlying principles of operation of the LiDAR. Per Di Cicco [0116]-[0118], the specific placement of the LiDARs only changes their (positional) relationship to the common reference point relative to which the LiDAR data/point clouds are merged, something already envisioned by Di Cicco and within the capabilities of one of ordinary skill in the art to account for (see the sketch below). Thus the difference of implementing the LiDARs on the bumper is one of placement only, not of function; the specific placement of the LiDARs onto the bumpers of Di Cicco does not change the underlying principles of operation of Di Cicco, and this difference in the LiDARs’ locations is therefore unpatentable as a simple matter of design choice not affecting the operation of the device.
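For illustration only (a minimal sketch, not drawn from Di Cicco; the pose variables are hypothetical), moving a LiDAR from roof to bumper changes only its rigid-body pose (R, t) relative to the common reference point, not the mathematics applied to its points:

```python
import numpy as np

def to_vehicle_frame(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Express sensor-frame points in the vehicle frame, where (R, t) is
    the sensor's mounting rotation and translation relative to a common
    reference point on the vehicle. A roof mount and a bumper mount
    differ only in the values of R and t."""
    return points @ R.T + t
```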
As modified to have its LiDARs on the bumper of the vehicle, Di Cicco would still not teach the generation of a three-dimensional map while also localizing the vehicle (i.e. Di Cicco does not teach the “S” and “AM” portions of “SLAM”).
He teaches an autonomous vehicle control system (He Abstract + [0001] teach that the system is for autonomous vehicles) which includes an onboard SLAM system that utilizes LiDAR point cloud data and IMU data to create a three-dimensional map of the area the vehicle is traveling through (He [0081]: IMU and LiDAR (point cloud) data are used to generate an HD map, which per [0004] is known to be in 3D). He additionally teaches “wherein the LiDAR odometry is obtained using the point cloud data of the first LiDAR and the second LiDAR, and the odometry is calculated through matching between scans using features detected in LiDAR scans” (He [0108]: “Based on the extracted segments, segment based registration process 1103 utilizes a limited set of points from each segment type of the frame and applies an optimization algorithm such as ICP (as part of algorithms/models 313 of FIG. 9) to find matches from the same segment types from the previous immediate frame in the buffer. If initial frame is the only frame, e.g., the first frame, the initial frame can be established as the reference frame and the corresponding pose for the initial frame can be a reference pose.” The current frame’s features are matched to the previous (reference) frame’s features; see the ICP sketch below.), and “clustering of the received point cloud is performed in order to reduce the number of point clouds used for detection and minimize a load in an embedded board” (He [0107]: “Referring to FIG. 11A, in one embodiment, pipeline 1100 receives initial frame 1101 (e.g., a first LIDAR point cloud frame) from a LIDAR sensor of an ADV. Segments extraction process 1102 then extracts segments from the received frame. Note, segments refer to clusters of point clouds, super points, or salient image regions (or voxels), i.e., regions corresponding to individual surfaces, objects, contour, or natural parts of objects. These segments or super point objects may be extracted using structural information of objects detected in the frame. In one embodiment, the segments are categorized into segment types. Example segments or objects types may be cylindrical objects, planar patch objects, or any geometrically identifiable objects that may have peculiar geometric and spatial attributes. Point clouds characteristics can be used for segment extraction, such as intensity, texture, or proximity of objects represented by the point clouds. Segments extraction process may apply a number of algorithms (as part of algorithms/models 313 of FIG. 9) to the point clouds to extract the segments. For example, segments extraction process may apply an image segmentation algorithm, such as an edge detection, dual clustering, region growth, or watershed transformation algorithm, etc. to extract segments from the LIDAR frame” He teaches clustering the point clouds into segments; implicitly, by segmenting the point cloud, the processing load is reduced.)
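For illustration only (a minimal numpy sketch of one point-to-point ICP iteration, the class of optimization algorithm He names in [0108]; this is not He’s implementation, and the brute-force matching is a simplification):

```python
import numpy as np

def icp_step(src: np.ndarray, dst: np.ndarray):
    """One ICP iteration: match each source point to its nearest
    destination point, then solve (Kabsch/SVD) for the rigid transform
    (R, t) best aligning the matches. Accumulated over consecutive
    scans, these transforms yield the LiDAR odometry."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]           # nearest-neighbour pairs
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```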
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to modify Di Cicco to include the SLAM (map generation in addition to the localization) as taught by He et al. One would be motivated to implement SLAM (the mapping portions thereof) into Di Cicco’s localization to allow the vehicle to navigate through new areas for which the high-definition maps of Di Cicco do not currently exist. This motivation/improvement of being able to travel through new (unmapped) areas is taught in He ([0007]: “In a first aspect, embodiments of the disclosure provide a computer-implemented method to register point clouds for autonomous driving vehicles (ADV), the method including: receiving a plurality of point clouds and corresponding poses from ADVs equipped with LIDAR sensors capturing point clouds of a navigable area to be mapped, wherein the point clouds correspond to a first coordinate system; partitioning the point clouds and the corresponding poses into one or more loop partitions based on navigable loop information captured by the point clouds; for each of the loop partitions, applying an optimization model to point clouds corresponding to the loop partition to register the point clouds, including transforming the point clouds from the first coordinate system to a second coordinate system; and merging the one or more loop partitions together using a pose graph algorithm, wherein the merged partitions of point clouds are utilized to perceive a driving environment surrounding the ADV.” i.e., from “to be mapped” it is known that the area does not yet have an HD map; thus the system allows for new HD maps to be created and for the vehicle to enter new areas not previously mapped).
As modified, Di Cicco (Di Cicco + He) still does not teach “wherein, in clustering, a cluster with less than a set number of points is not trusted and not registered, and through this process, a discontinuous noise point is filtered out and only a reliable point is left.”
Hoelscher teaches a LiDAR point cloud clustering system/method in which, “in clustering, a cluster with less than a set number of points is not trusted and not registered, and through this process, a discontinuous noise point is filtered out and only a reliable point is left” (Hoelscher [0076]: “The output of act 630 can be a set of point clusters from the filtered distance-based point cloud. Process 600 then proceeds to act 640, where the point clusters are further processed to determine which point clusters may correspond to a possible dropped object and which point clusters likely do not correspond to a dropped object. …. For instance, point clusters having fewer than a threshold number of points (e.g., less than 10 points, less than 5 points, less than 2 points) may be removed from the set of point clusters corresponding to possible dropped objects. … It should also be appreciated that actions 630 and 640 may operate together, such that when a point cluster is formed in act 630, one or more criteria may be applied in act 640 to the formed point cluster to assess whether it should be retained or removed from the set of point clusters.” This teaches that the clustering of data points includes identifying (and subsequently removing) clusters which contain fewer than a threshold number of points.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to modify Di Cicco to include the removal of clusters (segments of He) which contain a number of LiDAR points below a threshold, as taught by Hoelscher. One would be motivated to implement this filtering to remove detections/segments which correspond to sensor noise and/or represent transient objects which should be ignored (dust, fog, rain, etc.) (Hoelscher [0076]: ”…For instance, if the shape of the point cluster remains consistent over a certain period of time (e.g., 5 seconds), the point cluster may be determined to likely correspond to an object rather than something that is transient such as noise or dust. It should also be appreciated that actions 630 and 640 may operate together, such that when a point cluster is formed in act 630, one or more criteria may be applied in act 640 to the formed point cluster to assess whether it should be retained or removed from the set of point clusters.”).
Regarding Claim 2, modified Di Cicco teaches “wherein raw data of each of the first LiDAR and the second LiDAR is expressed as one integrated coordinate through point cloud merge, and a relative position difference between sensors is obtained in an integrated coordinate system and applied to align all point clouds with the sensors in a corrected coordinate system as an origin.” (Di Cicco [0127]: “As indicated above, if each LiDAR device 1302, 1304 generates its own point cloud, the two point clouds 1316 and 1318 are consolidated, e.g., for use by the perception module 402 (FIG. 4) according to the techniques described below. A consolidated point cloud includes the points from both point clouds 1316 and 1318. In an embodiment, the two point clouds 1316 and 1318 are consolidated or merged or amalgamated or blended together as soon as each of the LiDAR device 1302 and 1304 start generating the point clouds 1316 and 1318 respectively. In an embodiment, the two point clouds 1316 and 1318 are merged after the LiDAR devices 1302 and 1304 finish generating point clouds 1316 and 1318 respectively. One technique for consolidating the point clouds is to normalize the coordinates of each of the points to a common point of reference, e.g., a particular location 1320 on the AV 1300. In this manner, the consolidated point cloud (sometimes referred to as a merged point cloud) is defined using the particular location 1320 as the origin (e.g., coordinates 0,0,0 on Cartesian x-y-z axes, or an origin defined using polar coordinates as described in more detail below). The points from the two point clouds 1316 and 1318 are translated to the consolidated point cloud by normalizing the coordinates to the common origin. In other words, if one point from one cloud and another point from the other cloud were detected at approximately the same location in the environment 190, their coordinates are changed so that they both have approximately the same coordinates and thus occupy approximately the same location in the consolidated point cloud.” The sensors’ individual point clouds are aligned based on the relative positions of the sensors to an origin/reference point; see the sketch below.)
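For illustration only (a minimal sketch of the common-origin consolidation Di Cicco describes in [0127]; the extrinsic values are hypothetical), each sensor’s cloud is transformed by its own mounting pose so that both clouds share one origin:

```python
import numpy as np

# Hypothetical mounting poses of two bumper LiDARs relative to a common
# reference point on the vehicle (the integrated coordinate origin).
R_left,  t_left  = np.eye(3), np.array([2.0,  0.4, 0.3])
R_right, t_right = np.eye(3), np.array([2.0, -0.4, 0.3])

def consolidate(cloud_left: np.ndarray, cloud_right: np.ndarray) -> np.ndarray:
    """Normalize both clouds to the shared origin so that points the two
    sensors saw at the same physical spot receive (approximately) the
    same coordinates in the merged cloud."""
    return np.vstack([cloud_left  @ R_left.T  + t_left,
                      cloud_right @ R_right.T + t_right])
```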
Regarding Claim 6, modified Di Cicco teaches “The SLAM system of claim 4, wherein, after clustering the point cloud data, smoothness of each point is calculated and divided into edge and planar to extract features, a scan area is divided into a set number of sub-areas and edge and planar extraction is performed for each area to uniformly extract the features, and thereafter, correspondence of the features between two consecutive scans is calculated to obtain the lidar odometry.” (He [0109]: “In one embodiment, features extraction process 1104 extracts features or features representations from each segment of the current frame, and loop detection process 1105 compares the extracted features to features of previous frames. Features extraction is a dimensionality reduction process, where an initial set segment is reduced to a group of features for processing, while still accurately and completely describing the original segments. Examples of features include smoothness, linearity, and continuity of points for a segment (e.g., patterns). If the comparison provides a number of matching features above a predetermined threshold (e.g., quantity or percentage), then a loop closure is detected. Here, features are compared instead of segments because objects may be blocked or partially visible in a current field of view, different from a previous field of view. In one embodiment, features include eigenvalue based features. Eigen value based features can include linearity, planarity, scattering, omnivariance (e.g., characteristics of a volumetric point distribution), etc. features.” He teaches that each segment (cluster) (set sub-area) has features extracted for comparison with previous frames; per He [0107], LiDAR frames use image-based feature extractors, which in view of He [0145] (plane extraction) + He [0086] (edge extraction) includes both edge and planar features of the point cloud. See also the smoothness sketch below.)
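For illustration only (a minimal sketch of a LOAM-style smoothness measure of the kind the claim recites; this formula is not taken from He, which names smoothness as a feature in [0109] without giving an equation):

```python
import numpy as np

def smoothness(scan: np.ndarray, k: int = 5) -> np.ndarray:
    """Per-point smoothness over an ordered scan line: compare each point
    with its k neighbours on each side. Low values indicate planar
    surfaces; high values indicate edges."""
    n = len(scan)
    c = np.full(n, np.nan)
    for i in range(k, n - k):
        diff = 2 * k * scan[i] - scan[i - k:i].sum(0) - scan[i + 1:i + k + 1].sum(0)
        c[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan[i]))
    return c
# Points whose c exceeds an upper threshold would be labelled edge
# features, points below a lower threshold planar features, per sub-area
# of the scan, to keep the extracted features uniform.
```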
Regarding Claim 7, modified Di Cicco teaches “The SLAM system of claim 6, wherein the Lidar odometry is obtained by calculating a transform matrix between the features having correspondence, and at this time, in order to solve the transform matrix as an optimization problem, optimization is performed with edge correspondence and planar correspondence as costs.” (He [0108]: “Based on the extracted segments, segment based registration process 1103 utilizes a limited set of points from each segment type of the frame and applies an optimization algorithm such as ICP (as part of algorithms/models 313 of FIG. 9) to find matches from the same segment types from the previous immediate frame in the buffer. If initial frame is the only frame, e.g., the first frame, the initial frame can be established as the reference frame and the corresponding pose for the initial frame can be a reference pose.” This teaches optimization to find correspondence (matching) of features between a current frame and the previous/reference frame, which, per the earlier discussion of the ICP algorithm, includes a transformation matrix between potential point cloud pairs (He [0088]-[0089]). See the residual sketch below.)
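For illustration only (a minimal sketch of edge and planar correspondence costs of the kind such an optimization would minimize; these residuals are standard geometry, not quoted from He):

```python
import numpy as np

def edge_residual(p, a, b):
    """Point-to-line distance: cost of an edge correspondence between a
    transformed feature point p and the edge line through points a, b."""
    return np.linalg.norm(np.cross(p - a, p - b)) / np.linalg.norm(b - a)

def planar_residual(p, a, b, c):
    """Point-to-plane distance: cost of a planar correspondence between
    p and the plane spanned by points a, b, c."""
    n = np.cross(b - a, c - a)
    return abs(np.dot(p - a, n)) / np.linalg.norm(n)
# The transform matrix is the minimizer of the summed edge and planar
# residuals (solved, e.g., by Gauss-Newton or Levenberg-Marquardt).
```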
Regarding Claim 8, modified Di Cicco teaches “in the optimization process, a change of a z-axis in the Lidar odometry of the vehicle and a roll and pitch are measured through matching between the scans measured by LiDAR” (He [0114]: “FIG. 11B is a flow chart illustrating an example of a loop closure localization according to one embodiment. Operations 1110 can be performed by loop detection process 1105 of FIG. 11A. Referring to FIG. 11B, when a loop closure is detected 1111, target map or segments map generation process 1112 can generate a target map based on previously extract segments for the frames of the loop. Segments map or target map can be a database, a struct or, a class object storing a list of segments of the frames. The segments can be stored as a number of points corresponding to the segments. FIG. 13 illustrates an example of a target map according to one embodiment. Referring back to FIG. 11B, based on the target map and previous registration results performed by process 1104 of FIG. 11A, process 1113 updates the initial pose by searching a best candidate pose for the initial pose. In one embodiment, based on the updated initial pose, process 1113 applies an iterative method to the initial pose and the registration results to determine a transformation to be further applied to the registration results to reduce a drift caused by SLAM. In this case, the SLAM drift can be reduced because the loop closure provides a second indication for the position and orientation of the initial pose. An example iterative method can be random sample consensus (RANSAC). RANSAC is an iterative method to fit a model from a set of observed data that contains outlier data points, when outlier data points should be accorded no influence on the model to be fitted. Once, the points clouds are registered, an HD point clouds map can be generated using the registration results.” This teaches pose optimization of the LiDAR matching (i.e. roll and pitch) in the HD map (i.e. the x, y, and z axes/planes).), and “when calculating a movement of the vehicle in x and y directions on the road, a route estimation value is provided using inertial measurement unit (IMU) data of the vehicle to complement the Lidar odometry calculation, and data on longitudinal acceleration, lateral acceleration, and yaw rate are output from the ECU of the vehicle, based on which T.x, T.y, and Theta.yaw, which are x, y-axis movement and yaw rotation of the vehicle, are corrected” (He [0081]: “FIG. 5 is a block diagram illustrating an example of an HD map generation system according to one embodiment. HD map generation system 500 illustrates an overview for HD map generation. HD map generation system 500 may be part of HD map generation engine 125 of FIG. 1. Referring to FIG. 5, in one embodiment, HD map generation system 500 includes point cloud registration subsystem 501 and HD map generation subsystem 502. Point cloud registration subsystem 501 can receive an IMU signal, a GPS signal, and LIDAR images 503 (e.g., from IMU 213, GPS unit 212, and LIDAR unit 215 respectively) as inputs and generates HD poses 504 (or aligns the poses for the LIDAR images 503) based on the received inputs. HD map generation subsystem 502 can then receive LIDAR images 503 and HD poses 504 as inputs and generate HD map 505 based on the inputs.” This teaches that the IMU is used in addition to the LiDAR in the point cloud registration (i.e. part of the optimization in He [0088]) as part of localization; the use of both (IMU and LiDAR) teaches a fusion technique, i.e. correcting one sensor’s (the LiDAR’s) estimates/outputs using the other (the IMU). A sketch of such a correction appears below.)
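For illustration only (a minimal sketch of complementing LiDAR odometry increments with ECU inertial data; the blending weight and the simplified IMU prediction, which omits the prior-velocity term, are hypothetical):

```python
def correct_pose(T_x, T_y, theta_yaw, ax, ay, yaw_rate, dt, alpha=0.9):
    """Blend the scan-matching increments (T.x, T.y, Theta.yaw) with
    increments predicted from longitudinal/lateral acceleration and yaw
    rate reported by the ECU; alpha weights the LiDAR estimate."""
    imu_dx = 0.5 * ax * dt ** 2        # simplified: velocity term omitted
    imu_dy = 0.5 * ay * dt ** 2
    imu_dyaw = yaw_rate * dt
    return (alpha * T_x + (1 - alpha) * imu_dx,
            alpha * T_y + (1 - alpha) * imu_dy,
            alpha * theta_yaw + (1 - alpha) * imu_dyaw)
```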
Regarding Claim 9, it is roughly a method equivalent of the SLAM system of claim 1; it has the same grounds of rejection, combination, and motivation for combination as claim 1 for the equivalent limitations. Claim 9 has the additional limitation of “to the obtained odometry by performing SLAM using acceleration data output in a CAN format from an electronic control unit (ECU) inside the vehicle in order to increase precision of odometry and reduce the time required for calculation.” As modified in claim 1, the SLAM system of He is implemented into Di Cicco; He teaches the use of acceleration data for the HD map (LiDAR odometry data) ([0057]), and He teaches that the system/sensors of the vehicle are connected via a CAN bus (i.e. communicate in a CAN format) (He [0056]: “Components 110-115 may be communicatively coupled to each other via an interconnect, a bus, a network, or a combination thereof. For example, components 110-115 may be communicatively coupled to each other via a controller area network (CAN) bus. A CAN bus is a vehicle bus standard designed to allow microcontrollers and devices to communicate with each other in applications without a host computer. It is a message-based protocol, designed originally for multiplex electrical wiring within automobiles, but is also used in many other contexts.”). A sketch of consuming such CAN data appears below.
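For illustration only (a minimal sketch of decoding a CAN frame carrying the acceleration and yaw-rate signals; CAN payload layouts are vendor-specific, so this 8-byte layout and its scale factors are assumptions):

```python
import struct

def decode_ecu_frame(payload: bytes):
    """Decode a hypothetical 8-byte CAN payload holding longitudinal
    acceleration, lateral acceleration, and yaw rate as signed 16-bit
    fields (one field unused), with example scale factors."""
    ax_raw, ay_raw, yaw_raw, _ = struct.unpack("<hhhh", payload)
    return ax_raw * 0.01, ay_raw * 0.01, yaw_raw * 0.001  # m/s^2, m/s^2, rad/s
```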
Claims 12-14 are method equivalents of system claims 6-8 above; they have the same grounds of rejection as their respective system equivalents.
Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over modified Di Cicco as applied to claim 1 above, and further in view of WO 2020142924 A1, “COMMUNICATION METHOD FOR LASER RADAR, LASER RADAR, AND HOST DEVICE”, Long et al.
Regarding Claim 3, modified Di Cicco teaches “The SLAM system of claim 1, wherein the raw data generated by the first LiDAR and the second LiDAR data of each of the two LiDARs is stored in a buffer, and after times of the LiDARs are aligned through time synchronization, the data is converted into a point cloud type and merged.” (Di Cicco [0127]: “As indicated above, if each LiDAR device 1302, 1304 generates its own point cloud, the two point clouds 1316 and 1318 are consolidated, e.g., for use by the perception module 402 (FIG. 4) according to the techniques described below. A consolidated point cloud includes the points from both point clouds 1316 and 1318. In an embodiment, the two point clouds 1316 and 1318 are consolidated or merged or amalgamated or blended together as soon as each of the LiDAR device 1302 and 1304 start generating the point clouds 1316 and 1318 respectively. In an embodiment, the two point clouds 1316 and 1318 are merged after the LiDAR devices 1302 and 1304 finish generating point clouds 1316 and 1318 respectively.” Here, the merging/consolidating after the LiDAR scans complete teaches the storing of the LiDARs’ raw data in a buffer until the scan is finished. Di Cicco [0133]: “FIG. 16 shows components of a system used to generate a consolidated point cloud 1600. Each LiDAR device 1602, 1612 has a processor 1604, 1614 (e.g., microprocessor, microcontroller) each of which is configured with a respective starting angle 1606, 1616 and frequency 1608, 1618. In use, the LiDAR devices 1602, 1612 generates point clouds 1622, 1624 that are received by a processor 1626 (e.g., an implementation of or component of the perception module 402 shown in FIG. 4). The points of the point clouds 1622, 1624 are associated with timestamp data 1628, 1630. The processor 1626 uses the starting angles 1606, 1616, frequencies 1608, 1618, and timestamp data 1628, 1630 to generate the consolidated point cloud 1600. In an embodiment, the LiDAR devices 1602, 1612 are synchronized, e.g., operate according to a common time reference and/or have synchronized clocks. In an embodiment, the processors 1604, 1614 share a common clock 1632 so that their timestamps are generated from a common reference point. In other words, a point having timestamp of t=x generated by one of the LiDAR devices will have been detected at the same time as a point having a timestamp of t=x generated by the other synchronized LiDAR device(s). In an embodiment, the processor 1626 configures the starting angles 1606, 1616 and/or the frequencies 1608, 1618.” The LiDARs are synchronized based on their timestamps to a common reference time/point and merged into a consolidated point cloud.)
Modified Di Cicco, however, does not explicitly teach that the LiDAR data is communicated in a UDP format.
Long et al. teaches that UDP is a known/common communication format for LiDAR data transfer (Long, Background (paragraph [02]): “At present, the common communication method of lidar is Ethernet connection, and uses User Datagram Protocol (User Datagram Protocol, UDP) for communication.”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the application, to modify Di Cicco to communicate the LiDAR data in a UDP format as taught by Long. One would be motivated to implement the UDP format as it is a data-efficient format (Long, Background (paragraph [02]): “…UDP is an efficient but unreliable communication method, and the sender of the data cannot know the data being sent. Whether it was received normally.”).
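For illustration only (a minimal sketch of receiving LiDAR data over UDP as Long describes; the port number and packet size are assumptions, and the decode step is a hypothetical helper):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2368))               # port assumed for illustration
for _ in range(10):
    packet, sender = sock.recvfrom(2048)   # one datagram per burst of returns
    # decode_packet(packet) would parse ranges/azimuths per the sensor's
    # datasheet (hypothetical helper, not defined here). UDP delivers the
    # stream efficiently but without delivery guarantees, per Long.
```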
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: DE 102022111240 A1.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH MICHAEL DUNNE whose telephone number is (571)270-7392. The examiner can normally be reached Mon-Thurs 8:30-6:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Navid Z Mehdizadeh can be reached at (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KENNETH M DUNNE/Primary Examiner, Art Unit 3669