




We provide customers with an autonomous-driving data collection system, covering sensor installation and calibration, data collection, data synchronization, data annotation, data injection, and data playback.



1x spinning LIDAR:
20Hz capture frequency
32 channels
360° Horizontal FOV, +10° to -30° Vertical FOV
80 m to 100 m range, usable returns up to 70 m, ±2 cm accuracy
Up to ~1.39 Million Points per Second
5x long-range RADAR sensors:
77GHz
13Hz capture frequency
Independently measures distance and velocity in one cycle using frequency-modulated continuous wave (FMCW) modulation
Up to 250m distance
Velocity accuracy of ±0.1 km/h
6x camera:
12Hz capture frequency
1/1.8'' CMOS sensor of 1600x1200 resolution
Bayer8 format for 1 byte per pixel encoding
1600x900 ROI is cropped from the original resolution to reduce processing and transmission bandwidth
Auto exposure with exposure time limited to a maximum of 20 ms
Images are unpacked to BGR format and compressed to JPEG
See camera orientation and overlap in the figure below.
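As a rough illustration of why the ROI crop and the 1-byte-per-pixel Bayer8 encoding matter for bandwidth, the per-frame payload sizes implied by the figures above can be computed directly (a back-of-the-envelope sketch, not part of the pipeline itself):

```python
# Per-frame payload sizes implied by the camera specs above.
# Bayer8 stores 1 byte per pixel; unpacked BGR stores 3 bytes per pixel.

def frame_bytes(width, height, bytes_per_pixel):
    """Raw frame size in bytes for the given resolution and encoding."""
    return width * height * bytes_per_pixel

full_bayer = frame_bytes(1600, 1200, 1)  # full sensor readout, Bayer8
roi_bayer = frame_bytes(1600, 900, 1)    # cropped 1600x900 ROI, Bayer8
roi_bgr = frame_bytes(1600, 900, 3)      # after unpacking to BGR

# The ROI crop alone removes 25% of the pixels before transmission.
savings = 1 - roi_bayer / full_bayer

print(full_bayer, roi_bayer, roi_bgr, savings)
```

JPEG compression then shrinks the unpacked BGR frame further for storage; the exact ratio depends on scene content and quality settings.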

To achieve a high quality multi-sensor dataset, it is essential to calibrate the extrinsics and intrinsics of every sensor. We express extrinsic coordinates relative to the ego frame, i.e. the midpoint of the rear vehicle axle. The most relevant steps are described below:
LIDAR extrinsics:
We use a laser liner to accurately measure the relative location of the LIDAR to the ego frame.
Camera extrinsics:
We place a cube-shaped calibration target in front of the camera and LIDAR sensors. The calibration target consists of three orthogonal planes with known patterns. After detecting the patterns we compute the transformation matrix from camera to LIDAR by aligning the planes of the calibration target. Given the LIDAR to ego frame transformation computed above, we can then compute the camera to ego frame transformation and the resulting extrinsic parameters.
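The chaining step above reduces to composing two homogeneous transforms. A minimal sketch with NumPy, using hypothetical calibration values (the rotations and translations below are placeholders, not measured parameters):

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical calibration results:
# T_lidar_ego: LIDAR pose in the ego frame (from the laser-liner measurement).
# T_cam_lidar: camera pose in the LIDAR frame (from cube-target alignment).
T_lidar_ego = make_transform(np.eye(3), np.array([0.9, 0.0, 1.8]))
T_cam_lidar = make_transform(np.eye(3), np.array([0.1, 0.3, -0.4]))

# Chaining the two gives the camera extrinsics in the ego frame.
T_cam_ego = T_lidar_ego @ T_cam_lidar
print(T_cam_ego[:3, 3])  # camera position in ego coordinates
```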
RADAR extrinsics:
We mount the radar in a horizontal position. Then we collect radar measurements by driving in an urban environment. After filtering radar returns for moving objects, we calibrate the yaw angle using a brute force approach to minimize the compensated range rates for static objects.
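The brute-force yaw search can be sketched as follows. For a static object, the Doppler range rate should equal the negated ego speed projected onto the detection direction; the search picks the yaw offset that minimizes the residual. The detection format and grid bounds here are assumptions for illustration:

```python
import math

def yaw_residual(yaw, detections, ego_speed):
    """Sum of squared compensated range rates for static detections,
    given a candidate radar yaw offset (radians)."""
    err = 0.0
    for azimuth, range_rate in detections:
        expected = -ego_speed * math.cos(azimuth + yaw)
        err += (range_rate - expected) ** 2
    return err

def calibrate_yaw(detections, ego_speed, step=0.001, span=0.2):
    """Brute-force search over yaw candidates in [-span, span]."""
    candidates = [i * step - span for i in range(int(2 * span / step) + 1)]
    return min(candidates, key=lambda y: yaw_residual(y, detections, ego_speed))

# Synthetic static returns generated with a true yaw offset of 0.05 rad.
true_yaw, speed = 0.05, 10.0
dets = [(a, -speed * math.cos(a + true_yaw)) for a in (-0.5, -0.2, 0.1, 0.4)]
print(calibrate_yaw(dets, speed))  # recovers ~0.05
```

In practice the residual is accumulated over many frames of filtered urban driving data rather than a single synthetic set.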
Camera intrinsic calibration:
We use a calibration target board with a known set of patterns to infer the intrinsic and distortion parameters of the camera.
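The parameters this step fits are those of the standard pinhole model with radial distortion. A minimal sketch of applying such parameters (the fitting itself is typically done with a routine like OpenCV's calibrateCamera; the focal length and distortion values below are hypothetical):

```python
import numpy as np

def project(points_cam, fx, fy, cx, cy, k1, k2):
    """Project camera-frame 3D points through a pinhole model with
    two radial distortion coefficients (the parameters calibration fits)."""
    x = points_cam[:, 0] / points_cam[:, 2]
    y = points_cam[:, 1] / points_cam[:, 2]
    r2 = x * x + y * y
    d = 1 + k1 * r2 + k2 * r2 * r2  # radial distortion factor
    u = fx * x * d + cx
    v = fy * y * d + cy
    return np.stack([u, v], axis=1)

# Hypothetical intrinsics for a 1600x900 image; a point on the optical
# axis lands at the principal point (cx, cy).
pts = np.array([[0.0, 0.0, 10.0], [1.0, -0.5, 5.0]])
print(project(pts, 1266.0, 1266.0, 800.0, 450.0, -0.1, 0.01))
```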
In order to achieve good cross-modality data alignment between the LIDAR and the cameras, the exposure of a camera is triggered when the top LIDAR sweeps across the center of the camera's FOV. The timestamp of the image is the exposure trigger time, and the timestamp of the LIDAR scan is the time when the full rotation of the current LIDAR frame completes. Given that the camera's exposure time is nearly instantaneous, this method generally yields good data alignment. Note that the cameras run at 12Hz while the LIDAR runs at 20Hz. The 12 camera exposures are spread as evenly as possible across the 20 LIDAR scans, so not all LIDAR scans have a corresponding camera frame. Reducing the frame rate of the cameras to 12Hz helps to reduce the compute, bandwidth and storage requirements of the perception system.
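The uneven 12 Hz / 20 Hz pairing can be sketched by matching each camera exposure to its nearest LIDAR scan by timestamp; over one second, 12 of the 20 scans get a camera frame and 8 do not. The greedy nearest-timestamp matcher and the 25 ms tolerance below are illustrative assumptions, not the production pairing logic:

```python
def pair_frames(lidar_ts, camera_ts, max_offset=0.025):
    """Pair each camera exposure with the nearest LIDAR scan timestamp;
    scans without an exposure within max_offset seconds stay unpaired."""
    pairs = {}
    for c in camera_ts:
        nearest = min(lidar_ts, key=lambda l: abs(l - c))
        if abs(nearest - c) <= max_offset:
            pairs[nearest] = c
    return pairs

# One second of data: 20 LIDAR scans at 20 Hz, 12 exposures at 12 Hz.
lidar = [i / 20 for i in range(20)]
camera = [i / 12 for i in range(12)]
matches = pair_frames(lidar, camera)
print(len(matches), 20 - len(matches))  # paired scans vs. camera-less scans
```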
