Show simple item record

dc.contributor.author                  Yang, Kailun
dc.contributor.author                  Hu, Xinxin
dc.contributor.author                  Bergasa Pascual, Luis Miguel
dc.contributor.author                  Romera Carmena, Eduardo
dc.contributor.author                  Wang, Kaiwei
dc.date.accessioned                    2020-05-22T12:38:23Z
dc.date.issued                         2019-09-12
dc.identifier.bibliographicCitation    K. Yang, X. Hu, L. M. Bergasa, E. Romera and K. Wang, "PASS: Panoramic Annular Semantic Segmentation," IEEE Transactions on Intelligent Transportation Systems, 2019.
dc.identifier.issn                     1524-9050
dc.identifier.uri                      http://hdl.handle.net/10017/42859
dc.description.abstract                Pixel-wise semantic segmentation is capable of unifying most driving scene perception tasks, and has enabled striking progress in the context of navigation assistance, where sensing of the entire surroundings is vital. However, current mainstream semantic segmenters are predominantly benchmarked against datasets featuring a narrow Field of View (FoV), and many vision-based intelligent vehicles use only a forward-facing camera. In this paper, we propose a Panoramic Annular Semantic Segmentation (PASS) framework to perceive the whole surroundings based on a compact panoramic annular lens system and an online panorama-unfolding process. To facilitate the training of PASS models, we leverage conventional FoV imaging datasets, bypassing the effort entailed in creating fully dense panoramic annotations. To consistently exploit the rich contextual cues in the unfolded panorama, we adapt our real-time ERF-PSPNet to predict semantically meaningful feature maps in different segments and fuse them to fulfill panoramic scene parsing. The innovation lies in the network adaptation that enables smooth and seamless segmentation, combined with an extended set of heterogeneous data augmentations to attain robustness in panoramic imagery. A comprehensive variety of experiments demonstrates the effectiveness for real-world surrounding perception in a single PASS, while the adaptation proposal is exceptionally positive for state-of-the-art efficient networks.  en
dc.description.sponsorship             Ministerio de Economía y Competitividad  es_ES
dc.description.sponsorship             Comunidad de Madrid  es_ES
dc.format.mimetype                     application/pdf  en
dc.language.iso                        eng  en
dc.publisher                           IEEE
dc.rights                              Attribution-NonCommercial-NoDerivatives 4.0 International  *
dc.rights                              © 2019 IEEE
dc.rights.uri                          http://creativecommons.org/licenses/by-nc-nd/4.0/  *
dc.subject                             Semantics  en
dc.subject                             Cameras  en
dc.subject                             Image segmentation  en
dc.subject                             Sensors  en
dc.subject                             Navigation  en
dc.subject                             Task analysis  en
dc.subject                             Benchmark testing  en
dc.subject                             Intelligent vehicles  en
dc.subject                             Scene parsing  en
dc.subject                             Semantic segmentation  en
dc.subject                             Scene understanding  en
dc.subject                             Panoramic annular images  en
dc.title                               PASS: Panoramic Annular Semantic Segmentation  en
dc.type                                info:eu-repo/semantics/article  en
dc.subject.eciencia                    Electrónica  es_ES
dc.subject.eciencia                    Electronics  en
dc.contributor.affiliation             Universidad de Alcalá. Departamento de Electrónica  es_ES
dc.relation.publisherversion           https://doi.org/10.1109/TITS.2019.2938965
dc.type.version                        info:eu-repo/semantics/acceptedVersion  en
dc.identifier.doi                      10.1109/TITS.2019.2938965
dc.relation.projectID                  info:eu-repo/grantAgreement/MINECO//TRA2015-70501-C2-1-R/ES/VEHICULO INTELIGENTE PARA PERSONAS MAYORES/  en
dc.relation.projectID                  info:eu-repo/grantAgreement/CAM//P2018%2FNMT-4331/ES/Madrid Robotics Digital Innovation Hub/RoboCity2030-DIH-CM  en
dc.date.embargoEndDate                 2020-09-12
dc.rights.accessRights                 info:eu-repo/semantics/openAccess  en
dc.identifier.publicationtitle         IEEE Transactions on Intelligent Transportation Systems  en
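
The abstract above names two concrete steps that a short sketch can make tangible: unfolding the annular image into a rectangular panorama, and predicting feature maps over panorama segments that are then fused. Below is a minimal illustration of both steps; all parameters are illustrative assumptions (the lens center, radii, output size, and the predict_fn placeholder standing in for a segmenter such as ERF-PSPNet are not taken from the paper).

# Minimal sketch of the two PASS stages named in the abstract.
# All parameters here are illustrative assumptions, not values from the paper.
import numpy as np
import cv2

def unfold_annular(img, cx, cy, r_inner, r_outer, out_w=2048, out_h=400):
    """Unwrap an annular image around center (cx, cy) into a rectangular panorama."""
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)       # one azimuth per output column
    radius = np.linspace(r_outer, r_inner, out_h)                      # one radius per output row
    map_x = (cx + np.outer(radius, np.cos(theta))).astype(np.float32)  # source x-coordinate per pixel
    map_y = (cy + np.outer(radius, np.sin(theta))).astype(np.float32)  # source y-coordinate per pixel
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def segment_panorama(panorama, predict_fn, n_segments=4):
    """Predict feature maps on horizontal segments and fuse them back together."""
    segments = np.array_split(panorama, n_segments, axis=1)  # split the panorama along its width
    feature_maps = [predict_fn(seg) for seg in segments]     # per-segment inference (placeholder for the segmenter)
    return np.concatenate(feature_maps, axis=1)              # fuse segment outputs into one panoramic map

Splitting along the width keeps each segment close to a conventional FoV, which matches the abstract's point about leveraging conventional FoV imaging datasets for training before fusing per-segment predictions into a panoramic result.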

