RT info:eu-repo/semantics/conferenceObject
T1 Integrating state-of-the-art CNNs for multi-sensor 3D vehicle detection in real autonomous driving environments
A1 Barea Navarro, Rafael
A1 Bergasa Pascual, Luis Miguel
A1 Romera Carmena, Eduardo
A1 López Guillén, María Elena
A1 Pérez Gil, Óscar
A1 Tradacete Ágreda, Miguel
A1 López, Joaquín
K1 Electrónica
K1 Electronics
AB This paper presents two new approaches to detect surrounding vehicles in 3D urban driving scenes and their corresponding Bird's Eye View (BEV). The proposals integrate two state-of-the-art Convolutional Neural Networks (CNNs), namely YOLOv3 and Mask-RCNN, into a framework presented by the authors in [1] for 3D vehicle detection fusing semantic image segmentation and LIDAR point cloud. Our proposals take advantage of multimodal fusion, geometrical constraints, and pre-trained modules inside our framework. The methods have been tested using the KITTI object detection benchmark, and a comparison is presented. Experiments show that the new approaches improve results with respect to the baseline and are on par with other competitive state-of-the-art proposals, being the only ones that do not apply an end-to-end learning process. In this way, they remove the need to train on a specific dataset and show a good capability of generalization to any domain, a key point for self-driving systems. Finally, we have tested our best proposal in KITTI in our own driving environment, without any adaptation, obtaining results suitable for our autonomous driving application.
PB IEEE
SN 978-1-5386-7025-5
YR 2019
FD 2019-10
LK http://hdl.handle.net/10017/45108
UL http://hdl.handle.net/10017/45108
LA eng
NO 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27-30 Oct. 2019.
NO Ministerio de Economía y Competitividad
DS MINDS@UW
RD 29-Mar-2024