How to localize humanoids with a single camera?
Authors
Fernández Alcantarilla, Pablo; Stasse, Olivier; Druon, Sebastien; Bergasa Pascual, Luis Miguel; Dellaert, Frank
Identifiers
Permanent link (URI): http://hdl.handle.net/10017/43393
DOI: 10.1007/s10514-012-9312-1
ISSN: 0929-5593
Publisher
Springer
Date
2013-01
Funders
Ministerio de Ciencia e Innovación
Comunidad de Madrid
Bibliographic citation
Alcantarilla, P.F., Stasse, O., Druon, S., Bergasa, L.M. & Dellaert, F. 2013, "How to localize humanoids with a single camera?", Autonomous Robots, vol. 34, no. 1-2, pp. 47-71
Keywords
Vision-based localization
Visibility prediction
Humanoid robots
Locally weighted learning
Bundle adjustment
Project
info:eu-repo/grantAgreement/MICINN//TRA2011-29001-C04-01/ES/EVALUACION DE DISTRACCIONES EN CONDUCTORES DEBIDAS A TECNOLOGIAS EMBARCADAS EN EL VEHICULO USANDO FOCALIZACION DE LA MIRADA/
info:eu-repo/grantAgreement/CAM//S-0505%2FDPI%2F000176
Document type
info:eu-repo/semantics/article
Version
info:eu-repo/semantics/acceptedVersion
Publisher's version
https://doi.org/10.1007/s10514-012-9312-1
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International
© 2013 Springer Nature
Access rights
info:eu-repo/semantics/openAccess
Abstract
In this paper, we propose a real-time vision-based localization approach for humanoid robots using a single camera as the only sensor. In order to obtain an accurate localization of the robot, we first build an accurate 3D map of the environment. In the map computation process, we use stereo visual SLAM techniques based on non-linear least squares optimization methods (bundle adjustment). Once we have computed a 3D reconstruction of the environment, which comprises a set of camera poses (keyframes) and a list of 3D points, we learn the visibility of the 3D points by exploiting all the geometric relationships between the camera poses and 3D map points involved in the reconstruction. Finally, we use the prior 3D map and the learned visibility prediction for monocular vision-based localization. Our algorithm is very efficient, easy to implement, and more robust and accurate than existing approaches. By means of visibility prediction, we predict for a query pose only the highly visible 3D points, thus tremendously speeding up the data association between 3D map points and perceived 2D features in the image. In this way, we can solve the Perspective-n-Point (PnP) problem very efficiently, providing robust and fast vision-based localization. We demonstrate the robustness and accuracy of our approach through several vision-based localization experiments with the HRP-2 humanoid robot.
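The localization step described in the abstract matches 3D map points (pre-filtered by visibility prediction) to perceived 2D image features and solves the Perspective-n-Point problem. The paper itself uses a robust, efficient PnP solver; the sketch below only illustrates the core idea with a minimal linear PnP (Direct Linear Transform) on synthetic, noise-free data — the map points, intrinsics, and pose here are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

def dlt_pnp(pts3d, pts2d):
    """Minimal linear PnP via DLT: recover the 3x4 projection
    matrix P (up to scale) from 2D-3D correspondences by solving
    A p = 0 with the SVD (null-space of the stacked constraints)."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xh = np.array([X, Y, Z, 1.0])
        # Each correspondence contributes two linear constraints on P.
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

# Synthetic 3D map points in front of the camera -- stand-ins for the
# highly visible subset that visibility prediction would select.
rng = np.random.default_rng(1)
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))

# Ground-truth projection: assumed intrinsics K and camera pose [R | t].
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.1], [-0.05], [0.2]])])
P_true = K @ Rt

# Project the map points to obtain the "perceived" 2D features.
Xh = np.hstack([pts3d, np.ones((len(pts3d), 1))])
proj = Xh @ P_true.T
pts2d = proj[:, :2] / proj[:, 2:3]

# Estimate the projection matrix from the correspondences and check
# how well it reprojects the map points.
P_est = dlt_pnp(pts3d, pts2d)
reproj = Xh @ P_est.T
pts2d_est = reproj[:, :2] / reproj[:, 2:3]
err = np.max(np.abs(pts2d_est - pts2d))
print(f"max reprojection error: {err:.2e} px")
```

With exact correspondences the DLT recovers the projection to numerical precision; in practice, as the abstract notes, the matching must be made robust (e.g. RANSAC over putative matches), which is where restricting the search to predicted-visible points pays off.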
Files in this item
| Files | Size | Format |
|---|---|---|
| How_Alcantarilla_Auton_Robot_2 ... | 1.768Mb | |
Collections
- ELECTRON - Artículos [245]
- ROBESAFE - Artículos [37]