Unifying terrain awareness for the visually impaired through real-time semantic segmentation
Authors
Yang, Kailun; Wang, Kaiwei; Bergasa Pascual, Luis Miguel
Identifiers
Permanent link (URI): http://hdl.handle.net/10017/43213
DOI: 10.3390/s18051506
ISSN: 1424-8220
Publisher
MDPI
Date
2018-05-10
Funders
Ministerio de Economía y Competitividad
Comunidad de Madrid
Dirección General de Tráfico
Bibliographic citation
Yang, K., Wang, K., Bergasa, L.M., Romera, E., Hu, W., Sun, D., Sun, J., Cheng, R., Chen, T. & López, E. 2018, "Unifying terrain awareness for the visually impaired through real-time semantic segmentation", Sensors, vol. 18, no. 5, 1506
Keywords
Navigation assistance
Semantic segmentation
Traversability awareness
Obstacle avoidance
RGB-D sensor
Visually-impaired people
Project
info:eu-repo/grantAgreement/MINECO//TRA2015-70501-C2-1-R/ES/VEHICULO INTELIGENTE PARA PERSONAS MAYORES/
info:eu-repo/grantAgreement/DGT//SPIP2017-02305
info:eu-repo/grantAgreement/CAM//S2013%2FMIT-2748/ES/ROBOTICA APLICADA A LA MEJORA DE LA CALIDAD DE VIDA DE LOS CIUDADANOS, FASE III/RoboCity2030-III-CM
Document type
info:eu-repo/semantics/article
Version
info:eu-repo/semantics/publishedVersion
Publisher's version
https://doi.org/10.3390/s18051506
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International
Access rights
info:eu-repo/semantics/openAccess
Abstract
Navigational assistance aims to help visually-impaired people move through their environment safely and independently. This task is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have emerged over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have greatly improved the mobility of visually-impaired people. However, running all detectors jointly increases latency and burdens the computational resources. In this paper, we propose using pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates competitive accuracy against state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework.
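The unification idea in the abstract, one segmentation pass feeding every assistive task instead of several separate detectors, can be sketched as a post-processing step that routes per-pixel class labels into assistive feedback channels. The class names and their grouping below are illustrative assumptions for this sketch, not the paper's exact taxonomy:

```python
# Hypothetical post-processing for a unified perception pipeline: a single
# per-pixel semantic label map is routed into the assistive channels the
# abstract mentions. Class names and groupings are illustrative only.

TRAVERSABLE = {"road", "sidewalk"}      # safe walking surfaces
TERRAIN_HAZARDS = {"stairs", "water"}   # terrain-level warnings
DYNAMIC = {"pedestrian", "vehicle"}     # fast-approaching obstacles

def summarize(segmentation):
    """Count pixels per assistive channel from a 2-D label map."""
    counts = {"traversable": 0, "hazard": 0, "dynamic": 0, "other": 0}
    for row in segmentation:
        for label in row:
            if label in TRAVERSABLE:
                counts["traversable"] += 1
            elif label in TERRAIN_HAZARDS:
                counts["hazard"] += 1
            elif label in DYNAMIC:
                counts["dynamic"] += 1
            else:
                counts["other"] += 1
    return counts

# Toy 2x3 label map standing in for a segmented frame; every channel is
# derived from the one segmentation result rather than separate detectors.
seg = [["road", "road", "pedestrian"],
       ["sidewalk", "water", "road"]]
print(summarize(seg))
```

The design point is that the expensive step (the deep network's forward pass) runs once per frame; the cheap label-grouping above is all that distinguishes the terrain-awareness and obstacle-avoidance outputs.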
Files in this item
Files | Size | Format
---|---|---
Unifying_Yang_Sensors_2018.pdf | 12.42Mb | PDF
Collections
- ELECTRON - Artículos [166]
- ROBESAFE - Artículos [37]