Robustifying semantic cognition of traversability across wearable RGB-depth cameras
Identifiers
Permanent link (URI): http://hdl.handle.net/10017/43167
DOI: 10.1364/AO.58.003141
ISSN: 1559-128X
Publisher
OSA
Publication date
2019-04
Sponsors
Ministerio de Economía y Competitividad
Comunidad de Madrid
Dirección General de Tráfico
Bibliographic citation
Yang, K., Bergasa, L. M., Romera, E. & Wang, K. 2019, "Robustifying semantic cognition of traversability across wearable RGB-depth cameras", Applied Optics, vol. 58, no. 12, pp. 3141-3155
Projects
info:eu-repo/grantAgreement/MINECO//TRA2015-70501-C2-1-R/ES/VEHICULO INTELIGENTE PARA PERSONAS MAYORES/
info:eu-repo/grantAgreement/DGT//SPIP2017-02305
info:eu-repo/grantAgreement/CAM//S2013%2FMIT-2748/ES/ROBOTICA APLICADA A LA MEJORA DE LA CALIDAD DE VIDA DE LOS CIUDADANOS, FASE III/RoboCity2030-III-CM
Document type
info:eu-repo/semantics/article
Version
info:eu-repo/semantics/acceptedVersion
Publisher's version
https://doi.org/10.1364/AO.58.003141
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International
© 2019 Optical Society of America
Access rights
info:eu-repo/semantics/openAccess
Abstract
Semantic segmentation represents a promising means of unifying different detection tasks, especially pixel-wise traversability perception, the fundamental enabler of robotic vision systems that support upper-level navigational applications. However, major research efforts are being devoted to earning marginal accuracy increments on semantic segmentation benchmarks, without ensuring the robustness of real-time segmenters to be deployed in assistive cognition systems for the visually impaired. In this paper, we conduct a comparative study across four perception systems, including a pair of commercial smart glasses, a customized wearable prototype, and two portable RGB-depth (RGB-D) cameras that are being integrated into the next generation of navigation assistance devices. More concretely, we analyze the gap between the concepts of "accuracy" and "robustness" on critical traversability-related semantic scene understanding. A cluster of efficient deep architectures is proposed, built using spatial factorizations, hierarchical dilations, and pyramidal representations. Based on these architectures, this research demonstrates the augmented robustness of semantic traversable-area parsing against variations in environmental conditions across diverse RGB-D observations, and against sensorial factors such as illumination, imaging quality, field of view, and detectable depth range.
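The spatial factorizations and dilations mentioned in the abstract are standard efficiency techniques, which a minimal NumPy sketch can illustrate (this is a generic illustration of the techniques, not the paper's actual architecture): a separable 3×3 kernel can be factorized into a 3×1 pass followed by a 1×3 pass (6 weights instead of 9), and dilating a kernel enlarges its receptive field with no extra parameters.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid' 2D cross-correlation (no padding, stride 1)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def dilate_kernel(kernel, rate):
    """Insert rate-1 zeros between taps: a 3x3 kernel at rate r
    covers a (2r+1)x(2r+1) neighborhood with the same 9 weights."""
    kh, kw = kernel.shape
    out = np.zeros(((kh - 1) * rate + 1, (kw - 1) * rate + 1))
    out[::rate, ::rate] = kernel
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))

# Spatial factorization: a separable 3x3 kernel (outer product of two
# hypothetical 1D filters) equals a vertical 3x1 pass then a horizontal
# 1x3 pass, so the two results match exactly.
v = np.array([1.0, 2.0, 1.0])
h = np.array([-1.0, 0.0, 1.0])
full = conv2d_valid(img, np.outer(v, h))                            # 9 weights
factored = conv2d_valid(conv2d_valid(img, v[:, None]), h[None, :])  # 3+3 weights

# Dilation: the same 3x3 weights at rate 2 see a 5x5 neighborhood.
dilated = dilate_kernel(np.outer(v, h), 2)
```

Stacking such factorized and dilated blocks at increasing rates is the general recipe behind efficient real-time segmenters: parameter count stays small while the receptive field grows.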
Files in this item
| File | Size | Format |
|---|---|---|
| Robustifying_Yang_Appl_Optics_ ... | 3.877 MB | |
Collections
- ELECTRON - Artículos [242]
- ROBESAFE - Artículos [37]