%0 Journal Article
%A Bergasa Pascual, Luis Miguel
%A Romera Carmena, Eduardo
%A Wang, Kaiwei
%A Yang, Kailun
%T Robustifying semantic perception of traversability across wearable RGB-depth cameras
%D 2019
%@ 1559-128X
%U http://hdl.handle.net/10017/43167
%X Semantic segmentation represents a promising means to unify different detection tasks, especially pixelwise traversability perception, the fundamental enabler in robotic vision systems aiding upper-level navigational applications. However, major research efforts are devoted to earning marginal accuracy increments on semantic segmentation benchmarks, without assuring the robustness of real-time segmenters to be deployed in assistive cognition systems for the visually impaired. In this paper, we present a comparative study across four perception systems, including a pair of commercial smart glasses, a customized wearable prototype and two portable RGB-Depth (RGB-D) cameras that are being integrated into the next generation of navigation assistance devices. More concretely, we analyze the gap between the concepts of “accuracy” and “robustness” for critical traversability-related semantic scene understanding. A cluster of efficient deep architectures is proposed, built using spatial factorizations, hierarchical dilations and pyramidal representations. Based on these architectures, this research demonstrates the augmented robustness of semantically traversable area parsing against variations in environmental conditions across diverse RGB-D observations, as well as sensorial factors such as illumination, imaging quality, field of view and detectable depth range.
%K Electrónica
%K Electronics
%~ Biblioteca Universidad de Alcala