
dc.contributor.author: Bergasa Pascual, Luis Miguel
dc.contributor.author: Romera Carmena, Eduardo
dc.contributor.author: Wang, Kaiwei
dc.contributor.author: Yang, Kailun
dc.date.accessioned: 2020-06-10T09:11:55Z
dc.date.available: 2020-06-10T09:11:55Z
dc.date.issued: 2019-04
dc.identifier.bibliographicCitation: Yang, K., Bergasa, L. M., Romera, E. & Wang, K. 2019, "Robustifying semantic cognition of traversability across wearable RGB-depth cameras", Applied Optics, vol. 58, no. 12, pp. 3141-3155
dc.identifier.issn: 1559-128X
dc.identifier.uri: http://hdl.handle.net/10017/43167
dc.description.abstract: Semantic segmentation represents a promising means to unify different detection tasks, especially pixel-wise traversability perception, the fundamental enabler in robotic vision systems aiding upper-level navigational applications. However, major research efforts are being put into earning marginal accuracy increments on semantic segmentation benchmarks, without ensuring the robustness of real-time segmenters to be deployed in assistive cognition systems for the visually impaired. In this paper, we conduct a comparative study across four perception systems, including a pair of commercial smart glasses, a customized wearable prototype, and two portable RGB-Depth (RGB-D) cameras that are being integrated into the next generation of navigation assistance devices. More concretely, we analyze the gap between the concepts of "accuracy" and "robustness" in critical traversability-related semantic scene understanding. A cluster of efficient deep architectures is proposed, built using spatial factorizations, hierarchical dilations, and pyramidal representations. Based on these architectures, this research demonstrates the augmented robustness of semantically traversable area parsing against variations in environmental conditions across diverse RGB-D observations, and against sensorial factors such as illumination, imaging quality, field of view, and detectable depth range.
dc.description.sponsorship: Ministerio de Economía y Competitividad
dc.description.sponsorship: Comunidad de Madrid
dc.description.sponsorship: Dirección General de Tráfico
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: OSA
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights: © 2019 Optical Society of America
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title: Robustifying semantic cognition of traversability across wearable RGB-depth cameras
dc.type: info:eu-repo/semantics/article
dc.subject.eciencia: Electrónica
dc.subject.eciencia: Electronics
dc.contributor.affiliation: Universidad de Alcalá. Departamento de Electrónica
dc.relation.publisherversion: https://doi.org/10.1364/AO.58.003141
dc.type.version: info:eu-repo/semantics/acceptedVersion
dc.identifier.doi: 10.1364/AO.58.003141
dc.relation.projectID: info:eu-repo/grantAgreement/MINECO//TRA2015-70501-C2-1-R/ES/VEHICULO INTELIGENTE PARA PERSONAS MAYORES/
dc.relation.projectID: info:eu-repo/grantAgreement/DGT//SPIP2017-02305
dc.relation.projectID: info:eu-repo/grantAgreement/CAM//S2013%2FMIT-2748/ES/ROBOTICA APLICADA A LA MEJORA DE LA CALIDAD DE VIDA DE LOS CIUDADANOS, FASE III/RoboCity2030-III-CM
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.identifier.publicationtitle: Applied Optics
dc.identifier.publicationvolume: 58
dc.identifier.publicationissue: 12
dc.identifier.publicationfirstpage: 3141
dc.identifier.publicationlastpage: 3155

