%0 Journal Article
%A Barea Navarro, Rafael
%A Bergasa Pascual, Luis Miguel
%A Romera Carmena, Eduardo
%A López Guillén, María Elena
%A Pérez Gil, Óscar
%A Tradacete Ágreda, Miguel
%A López, Joaquín
%T Integrating state-of-the-art CNNs for multi-sensor 3D vehicle detection in real autonomous driving environments
%D 2019
%U http://hdl.handle.net/10017/45108
%X This paper presents two new approaches to detecting surrounding vehicles in 3D urban driving scenes and their corresponding Bird's Eye View (BEV). The proposals integrate two state-of-the-art Convolutional Neural Networks (CNNs), namely YOLOv3 and Mask R-CNN, into a framework presented by the authors in [1] for 3D vehicle detection, fusing semantic image segmentation with the LIDAR point cloud. Our proposals take advantage of multimodal fusion, geometric constraints, and pre-trained modules inside our framework. The methods have been tested on the KITTI object detection benchmark, and a comparison is presented. Experiments show that the new approaches improve on the baseline and are on par with other competitive state-of-the-art proposals, while being the only ones that do not apply an end-to-end learning process. In this way, they remove the need to train on a specific dataset and show good generalization to any domain, a key point for self-driving systems. Finally, we have tested our best proposal from KITTI in our own driving environment, without any adaptation, obtaining results suitable for our autonomous driving application.
%K Electrónica
%K Electronics
%~ Biblioteca Universidad de Alcala