
1 Introduction

Point cloud processing is becoming increasingly important for autonomous driving due to the strong improvement of automotive Lidar sensors in recent years. The sensors of suppliers are capable of delivering 3D points of the surrounding environment in real-time. The advantage is a direct measurement of the distance to surrounding objects [1]. This allows us to develop object detection algorithms for autonomous driving that accurately estimate the position and the heading of different objects in 3D [2,3,4,5,6,7,8,9]. Compared to images, Lidar point clouds are sparse, with a varying density distributed over the whole measurement area. Those points are unordered, they interact locally and can, for the most part, not be analyzed in isolation. Point cloud processing should therefore be invariant to basic transformations [10, 11].

Fig. 1. Complex-YOLO is a very efficient model that directly operates on Lidar-only birds-eye-view RGB-maps to estimate and localize accurate 3D multiclass bounding boxes. The upper part of the figure shows a birds-eye-view based on a Velodyne HDL64 point cloud (Geiger et al. [1]) as well as the predicted objects. The lower part outlines the re-projection of the 3D boxes into image space. Note: Complex-YOLO needs no camera image as input, it is Lidar based only.

In general, object detection and classification based on deep learning is a well-known task and widely established for 2D bounding box regression on images [12,13,14,15,16,17,18,19,20,21]. The research focus has mainly been a trade-off between accuracy and efficiency. With regard to automated driving, efficiency is much more important. Therefore, the best object detectors use region proposal networks (RPN) [3, 15, 22] or a similar grid-based RPN approach [13]. Those networks are extremely efficient, accurate and even capable of running on dedicated hardware or embedded devices. Object detection on point clouds is still rare, but increasingly important. Such applications need to be capable of predicting 3D bounding boxes. Currently, there exist mainly three different approaches using deep learning [3]:

  1. Direct point cloud processing using Multi-Layer-Perceptrons [5, 10, 11, 23, 24]

  2. Translation of point clouds into voxels or image stacks by using Convolutional Neural Networks (CNN) [2,3,4, 6, 8, 9, 25, 26]

  3. Combined fusion approaches [2, 7]

1.1 Related Work

Recently, Frustum-based Networks [5] have shown high performance on the KITTI benchmark suite. The model is ranked in second place both for 3D object detection and for birds-eye-view detection on cars, pedestrians and cyclists. It is the only approach that directly processes the point cloud using PointNet [10], without applying CNNs on the Lidar data or creating voxels. However, it needs pre-processing and therefore has to use the camera sensor as well. Based on another CNN operating on the calibrated camera image, it uses those detections to reduce the global point cloud to frustum-based sub-clouds. This approach has two drawbacks: (i) the model's accuracy strongly depends on the camera image and its associated CNN, so it is not possible to apply the approach to Lidar data only; (ii) the overall pipeline has to run two deep learning models consecutively, which results in a higher inference time and lower efficiency. The referenced model runs at a too low frame rate of approximately 7 fps on an NVIDIA GTX 1080i GPU [1].

In contrast, Zhou et al. [3] proposed a model that operates only on Lidar data. In this regard, it is the best ranked model on KITTI for 3D and birds-eye-view detection using Lidar data only. The basic idea is end-to-end learning that operates on grid cells without using hand-crafted features. The features inside each grid cell are learned during training using a PointNet approach [10]. A CNN built on top predicts the 3D bounding boxes. Despite the high accuracy, the model only reaches a low inference speed of about 4 fps on a TitanX GPU [3].

Another highly ranked approach is reported by Chen et al. (MV3D) [2]. The basic idea is the projection of Lidar point clouds into voxel-based RGB-maps using hand-crafted features, like point density, maximum height and a representative point intensity [9]. To achieve highly accurate results, they use a multi-view approach based on a Lidar birds-eye-view map, a Lidar based front-view map and a camera based front-view image. This fusion results in a high processing time of only 4 fps on an NVIDIA GTX 1080i GPU. Another drawback is the need for a secondary sensor input (camera).

1.2 Contribution

To our surprise, no approach has achieved real-time efficiency in terms of autonomous driving so far. Hence, we introduce the first slim and accurate model that is capable of running faster than 50 fps on an NVIDIA TitanX GPU. We use the multi-view idea (MV3D) [2] for point cloud pre-processing and feature extraction. However, we neglect the multi-view fusion and generate one single birds-eye-view RGB-map (see Fig. 1) that is based on Lidar only, to ensure efficiency. On top, we present Complex-YOLO, a 3D version of YOLOv2, which is one of the fastest state-of-the-art image object detectors [13]. Complex-YOLO is supported by our specific E-RPN that estimates the orientation of objects coded by an imaginary and a real part for each box. The idea is to have a closed mathematical space without singularities for accurate angle generalization. Our model is capable of predicting exact 3D boxes with localization and an exact heading of the objects in real-time, even if the object is covered by only a few points (e.g. pedestrians). Therefore, we designed special anchor boxes. Further, it is capable of predicting all eight KITTI classes by using only Lidar input data. We evaluated our model on the KITTI benchmark suite. In terms of accuracy, we achieved on-par results for cars, pedestrians and cyclists; in terms of efficiency, we outperform current leaders by a factor of at least five. The main contributions of this paper are:

  1. This work introduces Complex-YOLO by using a new E-RPN for reliable angle regression for 3D box estimation.

  2. We present real-time performance with high accuracy evaluated on the KITTI benchmark suite by being more than five times faster than the current leading models.

  3. We estimate an exact heading of each 3D box supported by the E-RPN that enables the prediction of the trajectory of surrounding objects.

  4. Compared to other Lidar based methods (e.g. [3]) our model efficiently estimates all classes simultaneously in one forward path.

2 Complex-YOLO

This section describes the grid based pre-processing of the point clouds, the specific network architecture, the derived loss function for training and our efficiency design to ensure real-time performance.

2.1 Point Cloud Preprocessing

The 3D point cloud of a single frame, acquired by a Velodyne HDL64 laser scanner [1], is converted into a single birds-eye-view RGB-map, covering an area of 80 m \(\times \) 40 m (see Fig. 4) directly in front of the origin of the sensor. Inspired by Chen et al. (MV3D) [2], the RGB-map is encoded by height, intensity and density. The size of the grid map is defined by \(n=1024\) and \(m=512\). Therefore, we projected and discretized the 3D point cloud into a 2D grid with a resolution of about g = 8 cm. Compared to MV3D, we slightly decreased the cell size to achieve fewer quantization errors, accompanied by a higher input resolution. For efficiency and performance reasons, we use only one height map instead of multiple. Consequently, all three feature channels (\(z_r, z_g, z_b\) with \(z_{r,g,b} \in \mathbb {R}^{m \times n}\)) are calculated for the whole point cloud \(\mathcal {P} \in \mathbb {R}^3\) inside the covered area \(\varOmega \). We consider the Velodyne to be at the origin of \(\mathcal {P}_\varOmega \) and define:

$$\begin{aligned} \mathcal {P}_{\varOmega } = \lbrace \mathcal {P}= [x,y,z]^T | x\in [0,40m], y \in [-40m,40m], z \in [-2m, 1.25m] \rbrace \end{aligned}$$
(1)

We choose \(z \in [-2m, 1.25m]\), considering the Lidar z position of 1.73 m [1], in order to cover an area above the ground of about 3 m height, expecting trucks to be the tallest objects. With the aid of the calibration [1], we define a mapping function \(\mathcal {S}_j = f_{\mathcal {P}\mathcal {S}}(\mathcal {P}_{\varOmega i}, g)\) with \( \mathcal {S} \in \mathbb {R}^{m\times n}\) that maps each point with index i into a specific grid cell \(\mathcal {S}_j\) of our RGB-map. A set describes all points mapped into a specific grid cell:

$$\begin{aligned} \mathcal {P}_{\varOmega i\rightarrow j} =\lbrace \mathcal {P}_{\varOmega i} =[x,y,z]^T | \mathcal {S}_j = f_{\mathcal {P}\mathcal {S}}(\mathcal {P}_{\varOmega i}, g)\rbrace \end{aligned}$$
(2)

Hence, we can calculate the channels of each pixel, considering the Velodyne intensity as \(I(\mathcal {P}_\varOmega )\):

$$\begin{aligned} \begin{aligned} z_g(\mathcal {S}_j)&= \max (\mathcal {P}_{\varOmega i\rightarrow j} \cdot [0,0,1]^T) \\ z_b(\mathcal {S}_j)&= \max (I(\mathcal {P}_{\varOmega i\rightarrow j}))\\ z_r(\mathcal {S}_j)&= \min \left( 1.0,\log (N+1) / 64\right) \quad N=|\mathcal {P}_{\varOmega i\rightarrow j}| \end{aligned} \end{aligned}$$
(3)

Here, N describes the number of points mapped from \(\mathcal {P}_{\varOmega i}\) to \(\mathcal {S}_j\), and g is the parameter for the grid cell size. Hence, \(z_g\) encodes the maximum height, \(z_b\) the maximum intensity and \(z_r\) the normalized density of all points mapped into \(\mathcal {S}_j\) (see Fig. 2).
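To make the mapping of Eqs. (1)-(3) concrete, the following Python sketch builds the three feature channels with a straightforward loop over the points. The function and variable names, the exact cell-index rounding, the strict upper bounds and the handling of empty cells are assumptions for illustration, not details taken from the reference implementation.

```python
# Minimal sketch of the birds-eye-view RGB-map encoding (Eqs. (1)-(3)).
import numpy as np

def birdseye_rgb_map(points, intensity, n=1024, m=512,
                     x_range=(0.0, 40.0), y_range=(-40.0, 40.0),
                     z_range=(-2.0, 1.25)):
    """points: (N, 3) Velodyne x, y, z in metres; intensity: (N,) reflectance."""
    # Restrict to the covered area Omega, Eq. (1); strict upper bounds keep indices valid.
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]) &
            (points[:, 2] >= z_range[0]) & (points[:, 2] <= z_range[1]))
    pts, inten = points[mask], intensity[mask]

    # Grid S in R^{m x n}: m = 512 bins along x (0..40 m), n = 1024 bins along
    # y (-40..40 m), i.e. a cell size of roughly g = 8 cm (mapping f_PS).
    row = ((pts[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * (m - 1)).astype(int)
    col = ((pts[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * (n - 1)).astype(int)

    z_g = np.full((m, n), z_range[0])  # max height; empty cells keep the lower bound
    z_b = np.zeros((m, n))             # max intensity
    cnt = np.zeros((m, n))             # number of points N per cell
    for r, c, z, i in zip(row, col, pts[:, 2], inten):
        z_g[r, c] = max(z_g[r, c], z)
        z_b[r, c] = max(z_b[r, c], i)
        cnt[r, c] += 1

    z_r = np.minimum(1.0, np.log(cnt + 1) / 64.0)  # normalized density, Eq. (3)
    # Any further normalization of height/intensity to [0, 1] is left open here.
    return np.stack([z_r, z_g, z_b], axis=-1)      # (m, n, 3) RGB-map
```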

Fig. 2. Complex-YOLO pipeline. We present a slim pipeline for fast and accurate 3D box estimation on point clouds. The RGB-map is fed into the CNN (see Table 1). The E-RPN grid runs simultaneously on the last feature map and predicts five boxes per grid cell. Each box prediction is composed of the regression parameters t (see Fig. 3) and object scores p with a general probability \(p_0\) and n class scores \(p_1 ... p_n\).

2.2 Architecture

The Complex-YOLO network takes a birds-eye-view RGB-map (see Sect. 2.1) as input. It uses a simplified YOLOv2 [13] CNN architecture (see Table 1), extended by a complex angle regression and E-RPN, to detect accurate multi-class oriented 3D objects while still operating in real-time.

Euler-Region-Proposal. Our E-RPN parses the 3D position \(b_{x,y}\), the object dimensions (width \(b_{w}\) and length \(b_{l}\)), as well as a probability \(p_{0}\), class scores \(p_{1}...p_{n}\) and finally the orientation \(b_{\phi }\) from the incoming feature map. In order to get a proper orientation, we modified the commonly used Grid-RPN approach by adding a complex angle \(arg(|z|e^{ib_{\phi }})\) to it:

$$\begin{aligned} \begin{aligned} b_x&= \sigma (t_x) + c_x \\ b_y&= \sigma (t_y) + c_y \\ b_w&= p_w e^{t_w} \\ b_l&= p_l e^{t_l} \\ b_\phi&= arg(|z|e^{ib_{\phi }}) = \arctan _2(t_{im},t_{re}) \end{aligned} \end{aligned}$$
(4)

With the help of this extension, the E-RPN estimates accurate object orientations based on an imaginary and a real fraction directly embedded into the network. For each grid cell (32 \(\times \) 16, see Table 1) we predict five objects, including a probability score and class scores, resulting in 75 features per cell, visualized in Fig. 2.
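As an illustration of Eq. (4), a minimal decoding routine for a single box prediction could look as follows. The layout of the 15 values per box (6 regression parameters plus \(p_0\) and 8 class scores, giving the 75 features per cell) follows the description above; the function name and the sigmoid on the scores are assumptions borrowed from YOLOv2, not confirmed details of the released model.

```python
# Sketch of decoding one E-RPN box prediction according to Eq. (4).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_box(t, cx, cy, p_w, p_l):
    """t = (t_x, t_y, t_w, t_l, t_im, t_re, p0, p1, ..., p8) for one box;
    (cx, cy) is the grid cell index, (p_w, p_l) the anchor prior."""
    t = np.asarray(t, dtype=float)
    t_x, t_y, t_w, t_l, t_im, t_re = t[:6]
    b_x   = sigmoid(t_x) + cx           # cell offset -> absolute grid position
    b_y   = sigmoid(t_y) + cy
    b_w   = p_w * np.exp(t_w)           # scale the anchor prior
    b_l   = p_l * np.exp(t_l)
    b_phi = np.arctan2(t_im, t_re)      # orientation from the complex fraction
    scores = sigmoid(t[6:])             # objectness p0 and class scores (activation assumed)
    return b_x, b_y, b_w, b_l, b_phi, scores
```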

Table 1. Complex-YOLO design. Our final model has 18 convolutional and 5 maxpool layers, as well as 3 intermediate layers for feature reorganization.

Anchor Box Design. The YOLOv2 object detector [13] predicts five boxes per grid cell. All are initialized with beneficial priors, i.e. anchor boxes, for better convergence during training. Due to the angle regression, the degrees of freedom, i.e. the number of possible priors, increase, but we did not enlarge the number of predictions for efficiency reasons. Hence, we defined only three different sizes and two angle directions as priors, based on the distribution of boxes within the KITTI dataset: (i) vehicle size (heading up); (ii) vehicle size (heading down); (iii) cyclist size (heading up); (iv) cyclist size (heading down); (v) pedestrian size (heading left).
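For illustration, the structure of these five priors could be organized as below. The concrete prior widths/lengths are fitted to the KITTI box distribution and therefore omitted here, and the mapping of "up/down/left" to angle values is our assumption.

```python
# Illustrative layout of the five anchor priors (three sizes, two heading directions).
import math

ANCHOR_PRIORS = [
    # (size prior, heading prior in radians); p_w / p_l values intentionally omitted
    ("vehicle",    0.0),          # vehicle size, heading up
    ("vehicle",    math.pi),      # vehicle size, heading down
    ("cyclist",    0.0),          # cyclist size, heading up
    ("cyclist",    math.pi),      # cyclist size, heading down
    ("pedestrian", math.pi / 2),  # pedestrian size, heading left
]
```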

Complex Angle Regression. The orientation angle of each object, \(b_{\phi }\), can be computed from the responsible regression parameters \(t_{im}\) and \(t_{re}\), which correspond to the phase of a complex number, similar to [27]. The angle is given simply by \(\arctan _2(t_{im},t_{re})\). On the one hand, this avoids singularities; on the other hand, it results in a closed mathematical space, which has an advantageous impact on the generalization of the model. We can link our regression parameters directly into the loss function (7).
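A small numeric example (purely didactic, not part of the model) shows why the complex encoding avoids the wrap-around singularity of a direct angle regression: two nearly identical headings on opposite sides of the \(\pm \pi \) cut produce a huge squared error on the raw angle, but a small error on the normalized imaginary/real parts.

```python
# Wrap-around behaviour of a direct angle loss vs. the complex encoding.
import numpy as np

gt, pred = np.pi - 0.05, -np.pi + 0.05   # nearly identical headings across the +/- pi cut

direct_error  = (pred - gt) ** 2         # ~38.2, although the boxes almost coincide
complex_error = ((np.sin(pred) - np.sin(gt)) ** 2 +
                 (np.cos(pred) - np.cos(gt)) ** 2)   # ~0.01, as expected

print(direct_error, complex_error)
```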

Fig. 3. 3D bounding box regression. We predict oriented 3D bounding boxes based on the regression parameters of YOLOv2 [13], as well as a complex angle for box orientation. The transition from 2D to 3D is done by a predefined height based on each class.

2.3 Loss Function

Our network optimization loss function \(\mathcal {L}\) is based on the concepts from YOLO [12] and YOLOv2 [13], which define \(\mathcal {L}_{\text {Yolo}}\) as the sum of squared errors using the introduced multi-part loss. We extend this approach by an Euler regression part \(\mathcal {L}_{\text {Euler}}\) to make use of complex numbers, which form a closed mathematical space for angle comparisons. This avoids the singularities that are common for single-angle estimations:

$$\begin{aligned} \mathcal {L} = \mathcal {L}_{\text {Yolo}} + \mathcal {L}_{\text {Euler}} \end{aligned}$$
(5)

The Euler regression part of the loss function is defined with the aid of the Euler-Region-Proposal (see Fig. 3). Assuming that the difference between the complex numbers of prediction and ground truth, i.e. \(|z|e^{\mathrm {i} b_\phi }\) and \(|\hat{z}|e^{\mathrm {i} \hat{b}_\phi }\) is always located on the unit circle with \(|z|=1\) and \(|\hat{z}|=1\), we minimize the absolute value of the squared error to get a real loss:

$$\begin{aligned} \mathcal {L}_{\text {Euler}}&= \lambda _{coord}\sum _{i=0}^{S^2}\sum _{j=0}^B\mathbbm {1}_{ij}^{obj}\left| (e^{\mathrm {i} b_\phi } - e^{\mathrm {i} \hat{b}_\phi })^2\right| \end{aligned}$$
(6)
$$\begin{aligned}&= \lambda _{coord}\sum _{i=0}^{S^2}\sum _{j=0}^B\mathbbm {1}_{ij}^{obj} \left[ (t_{im} - \hat{t}_{im})^2 + ( t_{re}-\hat{t}_{re})^2 \right] \end{aligned}$$
(7)

where \(\lambda _{coord}\) is a scaling factor to ensure stable convergence in early phases and \(\mathbbm {1}_{ij}^{obj}\) denotes that the jth bounding box predictor in cell i has the highest intersection over union (IoU) compared to the ground truth for that prediction. Furthermore, the comparison between the predicted box \(P_j\) and the ground truth G with IoU \(\frac{P_j \cap G}{P_j \cup G}\), where \({P_j \cap G = \{x : x \in P_j \wedge x \in G\}}\) and \({P_j \cup G = \{x : x \in P_j \vee x \in G\}}\), is adjusted to handle rotated boxes as well. This is realized via the intersection and union of two 2D polygon geometries, generated from the corresponding box parameters \(b_x\), \(b_y\), \(b_w\), \(b_l\) and \(b_{\phi }\).
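A minimal sketch of these two ingredients is shown below, assuming a per-box formulation (the sums over cells and the \(\mathbbm {1}_{ij}^{obj}\) mask of Eq. (7) are omitted) and using shapely for the polygon intersection. The helper names and the value of \(\lambda _{coord}\) are our assumptions; the original implementation may differ.

```python
# Sketch of the Euler regression loss (Eq. (7)) and a rotated-box IoU
# via 2D polygon intersection/union.
import numpy as np
from shapely.geometry import Polygon

def euler_loss(t_im, t_re, t_im_gt, t_re_gt, lambda_coord=5.0):
    """Squared error on the imaginary and real fraction for one responsible box;
    lambda_coord = 5.0 is borrowed from YOLO and only an assumption here."""
    return lambda_coord * ((t_im - t_im_gt) ** 2 + (t_re - t_re_gt) ** 2)

def box_polygon(b_x, b_y, b_w, b_l, b_phi):
    """Corner polygon of an oriented box from the regression parameters."""
    dx, dy = b_l / 2.0, b_w / 2.0
    corners = np.array([[ dx,  dy], [ dx, -dy], [-dx, -dy], [-dx,  dy]])
    rot = np.array([[np.cos(b_phi), -np.sin(b_phi)],
                    [np.sin(b_phi),  np.cos(b_phi)]])
    return Polygon(corners @ rot.T + np.array([b_x, b_y]))

def rotated_iou(box_a, box_b):
    """IoU of two oriented boxes, each given as (b_x, b_y, b_w, b_l, b_phi)."""
    pa, pb = box_polygon(*box_a), box_polygon(*box_b)
    inter = pa.intersection(pb).area
    return inter / (pa.area + pb.area - inter)
```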

2.4 Efficiency Design

The main advantage of the used network design is the prediction of all bounding boxes in one inference pass. The E-RPN is part of the network and uses the output of the last convolutional layer to predict all bounding boxes. Hence, we only have one network, which can be trained in an end-to-end manner without specific training approaches. Due to this, our model has a lower runtime than other models that generate region proposals in a sliding-window manner [22] and predict offsets and the class for every proposal (e.g. Faster R-CNN [15]). In Fig. 5 we compare our architecture with some of the leading models on the KITTI benchmark. Our approach achieves a considerably higher frame rate while still keeping a comparable mAP (mean Average Precision). The frame rates were taken directly from the respective papers, and all models were tested on a Titan X or Titan Xp. We tested our model on a Titan X and an NVIDIA TX2 board to emphasize the real-time capability (see Fig. 5).

3 Training and Experiments

We evaluated Complex-YOLO on the challenging KITTI object detection benchmark [1], which is divided into three subcategories: 2D, 3D and birds-eye-view object detection for cars, pedestrians and cyclists. Each class is evaluated based on three difficulty levels (easy, moderate and hard) considering the object size, distance, occlusion and truncation. This public dataset provides 7,481 samples for training, including annotated ground truth, and 7,518 test samples with point clouds taken from a Velodyne laser scanner, for which the annotation data is private. Note that we focused on birds-eye-view detection and did not run the 2D object detection benchmark, since our input is Lidar only.

3.1 Training Details

We trained our model from scratch via stochastic gradient descent with a weight decay of 0.0005 and a momentum of 0.9. Our implementation is based on a modified version of the Darknet neural network framework [28]. First, we applied our pre-processing (see Sect. 2.1) to generate the birds-eye-view RGB-maps from the Velodyne samples. Following the principles from [2, 3, 29], we subdivided the training set with publicly available ground truth, but used ratios of 85% for training and 15% for validation, because we trained from scratch and aimed for a model that is capable of multi-class predictions. In contrast, e.g. VoxelNet [3] modified and optimized the model for different classes. We suffered from the available ground truth data, because it was originally intended for camera-based detection. The class distribution with more than 75% Car, less than 4% Cyclist and less than 15% Pedestrian is disadvantageous. Also, more than 90% of all annotated objects are facing towards the recording car or have a similar orientation. Additionally, Fig. 4 shows a 2D histogram of spatial object locations from the birds-eye-view perspective, where dense points indicate more objects at exactly this position. This results in two blind spots in our birds-eye-view map. Nevertheless, we saw surprisingly good results on the validation set and on other recorded, unlabeled KITTI sequences covering several scenarios, like urban, highway or inner city.

For the first epochs, we started with a small learning rate to ensure convergence. After some epochs, we scaled the learning rate up and continued to gradually decrease it for up to 1,000 epochs. Due to the fine-grained requirements of a birds-eye-view approach, slight changes in the predicted features have a strong impact on the resulting box predictions. We used batch normalization for regularization and a linear activation \(f(x) = x\) for the last layer of our CNN; all other layers use the leaky rectified linear activation:

$$\begin{aligned} f(x) = \left\{ \begin{array}{ll} x, &{} x > 0\\ 0.1x, &{} \text {otherwise}\end{array} \right. \end{aligned}$$
(8)
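For readers who want to reproduce the setup in a modern framework, the optimizer settings and the warm-up/decay schedule described above could be sketched as follows. The original training used a modified Darknet [28]; the concrete learning-rate values and epoch boundaries below are placeholders, since the paper does not state them.

```python
# Hypothetical PyTorch re-implementation of the training configuration.
import torch

def make_optimizer(model):
    # SGD with momentum 0.9 and weight decay 0.0005 as stated above;
    # the base learning rate is a placeholder.
    return torch.optim.SGD(model.parameters(), lr=1e-4,
                           momentum=0.9, weight_decay=0.0005)

def learning_rate(epoch, base_lr=1e-4, peak_lr=1e-3, warmup=10, total=1000):
    """Warm up to a larger rate, then decay gradually for up to 1,000 epochs."""
    if epoch < warmup:
        return base_lr + (peak_lr - base_lr) * epoch / warmup
    return peak_lr * (1.0 - (epoch - warmup) / (total - warmup))

# The last CNN layer uses a linear activation f(x) = x; all other layers use the
# leaky ReLU of Eq. (8), i.e. torch.nn.LeakyReLU(negative_slope=0.1).
```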
Fig. 4. Spatial ground truth distribution. The figure outlines the size of the birds-eye-view area with a sample detection on the left. The right side shows a 2D spatial histogram of the annotated boxes in [1]. The distribution outlines the horizontal field of view of the camera used for annotation and the resulting blind spots in our map.

Fig. 5. Performance comparison. This plot shows the mAP in relation to the runtime (fps). All models were tested on an Nvidia Titan X or Titan Xp. Complex-YOLO achieves accurate results while being five times faster than the most effective competitor on the KITTI benchmark [1]. We compared against the five leading models and also measured our network on a dedicated embedded platform (TX2) with reasonable efficiency (4 fps). Complex-YOLO is the first model for real-time 3D object detection.

3.2 Evaluation on KITTI

We adapted our experimental setup to follow the official KITTI evaluation protocol, where the IoU thresholds are 0.7 for the class Car and 0.5 for the classes Pedestrian and Cyclist. Detections that are not visible on the image plane are filtered, because the ground truth is only available for objects that also appear on the image plane of the recording camera [1] (see Fig. 4). We used the average precision (AP) metric to compare the results. Note that we ignore a small number of objects that are outside our birds-eye-view map boundaries, i.e. more than 40 m in front of the sensor, in order to keep the input dimensions as small as possible for efficiency reasons.
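A minimal filter implementing the two criteria above might look like this. The field names and the image width are illustrative assumptions; in practice the image-plane visibility check is done by projecting the box with the KITTI calibration.

```python
# Sketch of the evaluation filtering: keep only objects that are visible on the
# camera image plane and lie within the 40 m birds-eye-view range.
def keep_for_evaluation(obj, image_width=1242):
    """obj is assumed to carry projected image coordinates and the Lidar x position."""
    in_image = 0 <= obj["x_img_min"] and obj["x_img_max"] <= image_width
    in_bev   = 0.0 <= obj["x_lidar"] <= 40.0
    return in_image and in_bev
```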

Table 2. Performance comparison for birds-eye-view detection. APs (in %) for our experimental setup compared to current leading methods. Note that our method is validated on our split validation dataset, whereas all others are validated on the official KITTI test set.
Table 3. Performance comparison for 3D object detection. APs (in %) for our experimental setup compared to current leading methods. Note that our method is validated on our split validation dataset, whereas all others are validated on the official KITTI test set.
Fig. 6. Visualization of Complex-YOLO results. Note that the predictions are exclusively based on birds-eye-view images generated from point clouds. The re-projection into camera space is for illustrative purposes only.

Birds-Eye-View. Our evaluation results for the birds-eye-view detection are presented in Table 2. This benchmark uses the bounding box overlap for comparison. For a better overview and to rank the results, comparable current leading methods are listed as well, although they are evaluated on the official KITTI test set. Complex-YOLO consistently outperforms all competitors in terms of runtime and efficiency, while still managing to achieve comparable accuracy. With a runtime of about 0.02 s on a Titan X GPU, we are 5 times faster than AVOD [7], even considering their usage of a more powerful GPU (Titan Xp). Compared to VoxelNet [3], which is also based on Lidar only, we are more than 10 times faster, and MV3D [2], the slowest competitor, takes 18 times as long (Fig. 6).

3D Object Detection. Table 3 shows our results for the 3D bounding box overlap. Since we do not estimate the height information directly with regression, we ran this benchmark with a fixed spatial height location extracted from the ground truth, similar to MV3D [2]. Additionally, as mentioned, we simply injected a predefined height for every object based on its class, calculated as the mean over all ground truth objects of that class. This reduces the precision for all classes, but it confirms the good results measured on the birds-eye-view benchmark.
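Conceptually, this 2D-to-3D lift reduces to attaching a class-wise constant height and vertical position to each predicted birds-eye-view box, e.g. as in the sketch below. The numeric values are placeholders for illustration, not the actual class means computed from the KITTI ground truth.

```python
# Sketch of lifting a birds-eye-view box to 3D with class-wise fixed height values.
CLASS_HEIGHT   = {"Car": 1.5, "Pedestrian": 1.8, "Cyclist": 1.7}   # metres, placeholders
CLASS_Z_CENTER = {"Car": -1.0, "Pedestrian": -0.9, "Cyclist": -0.9}  # placeholders

def lift_to_3d(box2d, cls):
    """box2d = (b_x, b_y, b_w, b_l, b_phi) in metres/radians -> 3D box parameters."""
    b_x, b_y, b_w, b_l, b_phi = box2d
    return (b_x, b_y, CLASS_Z_CENTER[cls], b_w, b_l, CLASS_HEIGHT[cls], b_phi)
```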

4 Conclusion

In this paper we present the first real-time efficient deep learning model for 3D object detection on Lidar-based point clouds. We highlight our state-of-the-art results in terms of accuracy (see Fig. 5) on the KITTI benchmark suite with an outstanding efficiency of more than 50 fps (NVIDIA Titan X). We do not need additional sensors, e.g. a camera, like most of the leading approaches. This breakthrough is achieved by the introduction of the new E-RPN, an Euler regression approach for estimating orientations with the aid of complex numbers. The closed mathematical space without singularities allows robust angle prediction.

Our approach is able to detect objects of multiple classes (e.g. cars, vans, pedestrians, cyclists, trucks, trams, sitting pedestrians, misc) simultaneously in one forward path. This novelty enables deployment for real usage in self-driving cars and clearly differentiates our model from others. We show the real-time capability even on a dedicated embedded platform (NVIDIA TX2, 4 fps). In future work, we plan to add height information to the regression, enabling truly independent 3D object detection in space, and to use spatio-temporal dependencies within the point cloud pre-processing for better class distinction and improved accuracy.