
dc.contributor.author: Barea Navarro, Rafael
dc.contributor.author: Bergasa Pascual, Luis Miguel
dc.contributor.author: Romera Carmena, Eduardo
dc.contributor.author: López Guillén, María Elena
dc.contributor.author: Pérez Gil, Óscar
dc.contributor.author: Tradacete Ágreda, Miguel
dc.contributor.author: López, Joaquín
dc.date.accessioned: 2020-11-16T16:58:46Z
dc.date.available: 2020-11-16T16:58:46Z
dc.date.issued: 2019-10
dc.identifier.bibliographicCitation: Barea, R., Bergasa, L. M., Romera, E., López Guillén, E., Pérez, O., Tradacete, M. & López, J. 2019, "Integrating state-of-the-art CNNs for multi-sensor 3D vehicle detection in real autonomous driving environments", in 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 2019, pp. 1425-1431
dc.identifier.isbn: 978-1-5386-7025-5
dc.identifier.uri: http://hdl.handle.net/10017/45108
dc.description: 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27-30 Oct. 2019.
dc.description.abstract: This paper presents two new approaches to detecting surrounding vehicles in 3D urban driving scenes and their corresponding Bird's Eye View (BEV). The proposals integrate two state-of-the-art Convolutional Neural Networks (CNNs), YOLOv3 and Mask R-CNN, into a framework presented by the authors in [1] for 3D vehicle detection that fuses semantic image segmentation and the LIDAR point cloud. Our proposals take advantage of multimodal fusion, geometric constraints, and pre-trained modules inside our framework. The methods have been tested on the KITTI object detection benchmark and a comparison is presented. Experiments show that the new approaches improve results with respect to the baseline and are on par with other competitive state-of-the-art proposals, while being the only ones that do not apply an end-to-end learning process. In this way, they remove the need to train on a specific dataset and show a good capability of generalization to any domain, a key point for self-driving systems. Finally, we have tested our best-performing proposal on KITTI in our own driving environment, without any adaptation, obtaining results suitable for our autonomous driving application.
dc.description.sponsorship: Ministerio de Economía y Competitividad
dc.description.sponsorship: Comunidad de Madrid
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: IEEE
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights: © 2019 IEEE
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title: Integrating state-of-the-art CNNs for multi-sensor 3D vehicle detection in real autonomous driving environments
dc.type: info:eu-repo/semantics/conferenceObject
dc.subject.eciencia: Electrónica
dc.subject.eciencia: Electronics
dc.contributor.affiliation: Universidad de Alcalá. Departamento de Electrónica
dc.relation.publisherversion: https://doi.org/10.1109/ITSC.2019.8916973
dc.type.version: info:eu-repo/semantics/acceptedVersion
dc.identifier.doi: 10.1109/ITSC.2019.8916973
dc.relation.projectID: info:eu-repo/grantAgreement/MINECO//TRA2015-70501-C2-1-R/ES/VEHICULO INTELIGENTE PARA PERSONAS MAYORES/
dc.relation.projectID: info:eu-repo/grantAgreement/MINECO//TRA2015-70501-C2-2-R/ES/SMARTELDERLYCAR. CONTROL Y PLANIFICACION DE RUTAS/
dc.relation.projectID: info:eu-repo/grantAgreement/CAM//P2018%2FNMT-4331/ES/Madrid Robotics Digital Innovation Hub/RoboCity2030-DIH-CM
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.identifier.publicationtitle: 2019 IEEE Intelligent Transportation Systems Conference (ITSC)
dc.identifier.publicationfirstpage: 1425
dc.identifier.publicationlastpage: 1431

