Show simple item record

dc.contributor.author: Yang, Kailun
dc.contributor.author: Wang, Kaiwei
dc.contributor.author: Bergasa Pascual, Luis Miguel
dc.contributor.author: Romera Carmena, Eduardo
dc.contributor.author: Hu, Weijian
dc.contributor.author: Sun, Dongming
dc.contributor.author: Sun, Junwei
dc.contributor.author: Cheng, Ruiqi
dc.contributor.author: Chen, Tianxue
dc.contributor.author: López Guillén, María Elena
dc.date.accessioned: 2020-06-12T10:39:31Z
dc.date.available: 2020-06-12T10:39:31Z
dc.date.issued: 2018-05-10
dc.identifier.bibliographicCitation: Yang, K., Wang, K., Bergasa, L.M., Romera, E., Hu, W., Sun, D., Sun, J., Cheng, R., Chen, T. & López, E. 2018, "Unifying terrain awareness for the visually impaired through real-time semantic segmentation", Sensors, 18(5), 1506
dc.identifier.issn: 1424-8220
dc.identifier.uri: http://hdl.handle.net/10017/43213
dc.description.abstract: Navigational assistance aims to help visually-impaired people move through the environment safely and independently. The task is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies using monocular detectors or depth sensors have sprung up over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have greatly improved the mobility of visually-impaired people. However, running all detectors jointly increases latency and burdens computational resources. In this paper, we propose using pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates qualified accuracy over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework.
dc.description.sponsorship: Ministerio de Economía y Competitividad
dc.description.sponsorship: Comunidad de Madrid
dc.description.sponsorship: Dirección General de Tráfico
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: MDPI
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Navigation assistance
dc.subject: Semantic segmentation
dc.subject: Traversability awareness
dc.subject: Obstacle avoidance
dc.subject: RGB-D sensor
dc.subject: Visually-impaired people
dc.title: Unifying terrain awareness for the visually impaired through real-time semantic segmentation
dc.type: info:eu-repo/semantics/article
dc.subject.eciencia: Electrónica
dc.subject.eciencia: Electronics
dc.contributor.affiliation: Universidad de Alcalá. Departamento de Electrónica
dc.relation.publisherversion: https://doi.org/10.3390/s18051506
dc.type.version: info:eu-repo/semantics/publishedVersion
dc.identifier.doi: 10.3390/s18051506
dc.relation.projectID: info:eu-repo/grantAgreement/MINECO//TRA2015-70501-C2-1-R/ES/VEHICULO INTELIGENTE PARA PERSONAS MAYORES/
dc.relation.projectID: info:eu-repo/grantAgreement/DGT//SPIP2017-02305
dc.relation.projectID: info:eu-repo/grantAgreement/CAM//S2013%2FMIT-2748/ES/ROBOTICA APLICADA A LA MEJORA DE LA CALIDAD DE VIDA DE LOS CIUDADANOS, FASE III/RoboCity2030-III-CM
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.identifier.publicationtitle: Sensors


This item appears in the following Collection(s)

This item is subject to a Creative Commons license: Attribution-NonCommercial-NoDerivatives 4.0 International.