Unifying terrain awareness for the visually impaired through real-time semantic segmentation
Authors
Yang, Kailun; Wang, Kaiwei; Bergasa Pascual, Luis Miguel; Romera Carmena, Eduardo; Hu, Weijian; [et al.]
Identifiers
Permanent link (URI): http://hdl.handle.net/10017/43213
DOI: 10.3390/s18051506
ISSN: 1424-8220
Publisher
MDPI
Publication date
2018-05-10
Sponsors
Ministerio de Economía y Competitividad
Comunidad de Madrid
Dirección General de Tráfico
Bibliographic citation
Yang, K., Wang, K., Bergasa, L.M., Romera, E., Hu, W., Sun, D., Sun, J., Cheng, R., Chen, T. & López, E. 2018, "Unifying terrain awareness for the visually impaired through real-time semantic segmentation", Sensors, vol. 18, no. 5, 1506
Keywords
Navigation assistance
Semantic segmentation
Traversability awareness
Obstacle avoidance
RGB-D sensor
Visually-impaired people
Projects
info:eu-repo/grantAgreement/MINECO//TRA2015-70501-C2-1-R/ES/VEHICULO INTELIGENTE PARA PERSONAS MAYORES/
info:eu-repo/grantAgreement/DGT//SPIP2017-02305
info:eu-repo/grantAgreement/CAM//S2013%2FMIT-2748/ES/ROBOTICA APLICADA A LA MEJORA DE LA CALIDAD DE VIDA DE LOS CIUDADANOS, FASE III/RoboCity2030-III-CM
Document type
info:eu-repo/semantics/article
Version
info:eu-repo/semantics/publishedVersion
Publisher's version
https://doi.org/10.3390/s18051506
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International
Access rights
info:eu-repo/semantics/openAccess
Abstract
Navigational assistance aims to help visually-impaired people move through the environment safely and independently. The task is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies relying on monocular detectors or depth sensors have emerged over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases latency and strains computational resources. In this paper, we propose using pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework.
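To make the unification idea concrete, the following is a minimal illustrative sketch (not the paper's actual pipeline) of how a per-pixel semantic label map could be combined with an aligned depth map to derive coarse navigation cues: a traversability estimate for the area ahead and a short-range obstacle flag. All class indices, the `navigation_cues` function, and the 2 m range threshold are assumptions for illustration only.

```python
import numpy as np

# Hypothetical class indices for an illustrative label map.
TRAVERSABLE = {0, 1}   # e.g. road, sidewalk

def navigation_cues(seg, depth, obstacle_range_m=2.0):
    """Derive coarse assistive cues from a per-pixel class map `seg`
    (H x W, integer labels) and an aligned depth map `depth` (H x W, metres).

    Returns the fraction of the lower half of the image (the region in
    front of the user) that is traversable, and a flag raised when any
    non-traversable pixel lies closer than `obstacle_range_m`.
    """
    h = seg.shape[0]
    lower_seg = seg[h // 2:]
    lower_depth = depth[h // 2:]

    traversable = np.isin(lower_seg, list(TRAVERSABLE))
    frac_traversable = float(traversable.mean())

    # Short-range obstacle: pixel is close AND not of a traversable class.
    close = lower_depth < obstacle_range_m
    obstacle_near = bool(np.any(close & ~traversable))
    return frac_traversable, obstacle_near
```

In a real system the label map would come from an efficient segmentation network running on each RGB frame, and the depth map from the RGB-D sensor; the cues would then drive acoustic or haptic feedback.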
Files in this item
| Files | Size | Format |
|---|---|---|
| Unifying_Yang_Sensors_2018.pdf | 12.42 MB | |
Collections
- ELECTRON - Artículos [243]
- ROBESAFE - Artículos [37]