ROBESAFE - Ponencias y comunicaciones, etc
http://hdl.handle.net/10017/42855
2024-03-29T07:56:46Z

Simulación de vehículos autónomos usando V-REP bajo ROS
http://hdl.handle.net/10017/49050
Otero Moreira, Cándido; Paz Domonte, Enrique; Sanz Dominguez, Rafael; López Fernández, Joaquín; Barea Navarro, Rafael; Romera Carmena, Eduardo; Molinos Vicente, Eduardo José; Arroyo Contera, Roberto; Bergasa Pascual, Luis Miguel; López Guillén, María Elena
This paper presents the main features of the simulation environment being used to develop several autonomous driving algorithms. These developments are part of a vehicle autonomous driving project, named SmartElderlyCar, within the Spanish National Research Plan, carried out by the University of Alcalá (UAH) and the University of Vigo (UVIGO). A commercial vehicle has been successfully simulated in V-REP, controlled by nodes developed under ROS, on the external campus of the UAH, and it has been driven along the lanes, following the center line by means of a path-tracking algorithm.
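The abstract mentions a path-tracking algorithm that follows the lane center line but does not specify which one; a minimal pure-pursuit sketch, a common choice for this kind of center-line following, is shown below. All names and parameter values here are illustrative assumptions, not taken from the paper.

```python
import math

def pure_pursuit_steering(pose, path, lookahead=3.0, wheelbase=2.5):
    """Steering angle that drives the vehicle toward a point on the
    center line `lookahead` metres ahead. pose = (x, y, yaw)."""
    x, y, yaw = pose
    # pick the nearest path point at least `lookahead` metres away
    target = min(
        (p for p in path if math.hypot(p[0] - x, p[1] - y) >= lookahead),
        key=lambda p: math.hypot(p[0] - x, p[1] - y),
        default=path[-1],
    )
    # bearing to the target expressed in the vehicle frame
    alpha = math.atan2(target[1] - y, target[0] - x) - yaw
    # classic pure-pursuit law: delta = atan(2 L sin(alpha) / Ld)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# straight center line along +x, vehicle at origin facing +x -> zero steering
path = [(float(i), 0.0) for i in range(10)]
print(round(pure_pursuit_steering((0.0, 0.0, 0.0), path), 6))  # -> 0.0
```

In a ROS setup such as the one described, a node would feed the current pose from localization into a controller like this and publish the resulting steering command at a fixed rate.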
XXXVIII Jornadas de Automática, Gijón, 6-8 September 2017
2017-01-01T00:00:00Z

Autonomous vehicle control in CARLA Challenge
http://hdl.handle.net/10017/45428
Egido Sierra, Javier del; Pérez Gil, Óscar; Bergasa Pascual, Luis Miguel
The introduction of Autonomous Vehicles (AVs) in a realistic urban environment is an
ambitious objective. AV validation on real scenarios involving actual objects such as cars or
pedestrians in a wide range of traffic cases would escalate the cost and could generate
hazardous situations. Consequently, autonomous driving simulators are quickly evolving to
cover the gap towards a fully autonomous driving architecture validation. The most used 3D simulators in the self-driving car field are V-REP (Rohmer, 2013) and Gazebo (Koenig and Howard, 2004), owing to their easy integration with the ROS (Quigley, 2009) platform, which increases interoperability with other systems. Those simulators provide accurate motion information (more appropriate for simpler scenes, such as robotic arms) but neither a realistic appearance nor real-time operation, so they cannot recreate complex traffic scenes. The CARLA (Dosovitskiy, 2017) open-source AV simulator is designed to train and validate control and perception algorithms in complex traffic scenarios with hyper-realistic environments. CARLA makes it easy to modify the on-board sensors, such as cameras or LiDAR, the weather conditions and the traffic scene, in order to reproduce specific traffic cases. In Summer 2019, CARLA launched its driving challenge, allowing everyone to test their own control techniques under the same traffic scenarios and scoring their performance against the traffic rules. In this paper, the RobeSafe research group's approach is explained, detailing the vehicle motion control and object detection adapted from Smart Elderly Car (Gómez-Huélamo, 2019) that led the group to 4th place in the Track 3 challenge, where an HD Map, waypoints and environmental sensor data (LiDAR, RGB cameras and GPS) were provided.
Congreso Campus FIT 2020, 24-26 June 2020, online
2020-06-01T00:00:00Z

Can we PASS beyond the Field of View? Panoramic Annular Semantic Segmentation for real-world surrounding perception
http://hdl.handle.net/10017/45410
Yang, Kailun; Hu, Xinxin; Bergasa Pascual, Luis Miguel; Romera Carmena, Eduardo; Huang, Xiao; Sun, Dongming; Wang, Kaiwei
Pixel-wise semantic segmentation unifies distinct
scene perception tasks in a coherent way, and has catalyzed
notable progress in autonomous and assisted navigation, where
a whole surrounding perception is vital. However, current mainstream semantic segmenters are normally benchmarked against datasets with narrow Field of View (FoV), and most vision-based navigation systems use only a forward-view camera. In this paper, we propose a Panoramic Annular Semantic Segmentation (PASS) framework to perceive the entire surrounding based on a compact panoramic annular lens system and an online panorama unfolding process. To facilitate the training of PASS models, we leverage conventional FoV imaging datasets, bypassing the effort entailed to create dense panoramic annotations. To consistently exploit the rich contextual cues in the unfolded panorama, we adapt our real-time ERF-PSPNet to predict semantically meaningful feature maps in different segments and fuse them to fulfill smooth and seamless panoramic scene parsing. Beyond the enlarged FoV, we extend focal-length-related and style-transfer-based data augmentations to robustify the semantic segmenter against distortions and blurs in panoramic imagery. A comprehensive variety of experiments demonstrates the qualified robustness of our proposal for real-world surrounding understanding.
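The "online panorama unfolding" the abstract refers to is, at its core, a polar-to-Cartesian resampling of the annular image. A minimal NumPy sketch of that idea follows; the function name, the nearest-neighbour sampling and the ring radii are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def unfold_annular(img, center, r_in, r_out, out_w=720):
    """Map a panoramic annular image (H, W[, C]) to an unfolded
    panorama of shape (r_out - r_in, out_w) by polar resampling."""
    cy, cx = center
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radius = np.arange(r_in, r_out)
    # polar grid (radius x angle) -> source pixel coordinates
    ys = (cy + radius[:, None] * np.sin(theta[None, :])).round().astype(int)
    xs = (cx + radius[:, None] * np.cos(theta[None, :])).round().astype(int)
    ys = np.clip(ys, 0, img.shape[0] - 1)
    xs = np.clip(xs, 0, img.shape[1] - 1)
    # nearest-neighbour lookup; each output row is one ring of the annulus
    return img[ys, xs]

annular = np.zeros((200, 200), dtype=np.uint8)
pano = unfold_annular(annular, center=(100, 100), r_in=30, r_out=90)
print(pano.shape)  # -> (60, 720)
```

The unfolded strip can then be split into segments and fed to a conventional FoV-trained segmenter, which is the setting the PASS framework exploits.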
2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9-12 June 2019
2019-08-01T00:00:00Z

Real-Time Bird's Eye View Multi-Object Tracking system based on Fast Encoders for object detection
http://hdl.handle.net/10017/45389
Gómez Huélamo, Carlos; Egido Sierra, Javier del; Bergasa Pascual, Luis Miguel; Barea Navarro, Rafael; Ocaña Miguel, Manuel; Arango Vargas, Juan Felipe; Gutiérrez Moreno, Rodrigo
This paper presents a Real-Time Bird’s Eye View
Multi Object Tracking (MOT) system pipeline for an Autonomous Electric car, based on Fast Encoders for object
detection and a combination of Hungarian algorithm and
Bird’s Eye View (BEV) Kalman Filter, respectively used for
data association and state estimation. The system is able to
analyze 360 degrees around the ego-vehicle as well as estimate
the future trajectories of the environment objects, being the
essential input for other layers of a self-driving architecture,
such as the control or decision-making. First, our system pipeline is described, merging the concepts of online and real-time DATMO (Detection and Tracking of Multiple Objects), ROS (Robot Operating System) and Docker to enhance the
integration of the proposed MOT system in fully-autonomous
driving architectures. Second, the system pipeline is validated
using the recently proposed KITTI-3DMOT evaluation tool that
demonstrates the full strength of 3D localization and tracking
of a MOT system. Finally, a comparison of our proposal with
other state-of-the-art approaches is carried out in terms of
performance by using the mainstream metrics used on MOT
benchmarks and the recently proposed integral MOT metrics,
evaluating the performance of the tracking system over all
detection thresholds.
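The abstract pairs the Hungarian algorithm (data association) with a BEV Kalman filter (state estimation); the association step can be sketched as below, assuming a simple Euclidean cost between predicted track positions and new detections. Function name, gating threshold and array layout are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, max_dist=3.0):
    """Match predicted BEV track positions (N, 2) to detections (M, 2)
    with the Hungarian algorithm on Euclidean distance; pairs farther
    apart than max_dist are rejected (gating)."""
    if len(tracks) == 0 or len(detections) == 0:
        return [], list(range(len(tracks))), list(range(len(detections)))
    # pairwise Euclidean cost matrix, shape (N, M)
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_dets = [j for j in range(len(detections)) if j not in matched_d]
    return matches, unmatched_tracks, unmatched_dets

tracks = np.array([[0.0, 0.0], [10.0, 0.0]])
dets = np.array([[10.5, 0.2], [0.3, -0.1]])
m, ut, ud = associate(tracks, dets)
print(sorted(m))  # -> [(0, 1), (1, 0)]
```

In a full pipeline, matched detections would update their track's Kalman filter, unmatched tracks would coast on the filter prediction, and unmatched detections would spawn new tracks.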
2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), September 20-23, 2020, Rhodes, Greece. Virtual Conference.
2020-12-24T00:00:00Z