Contact: Andrea Cherubini
Creation Date: March 2010
In recent research, autonomous vehicle navigation has often been carried out by processing visual information. This approach is well suited to urban environments, where tall buildings can disturb satellite reception and degrade GPS localization, while offering numerous useful visual features. We present our recent improvements to the existing Lagadic navigation framework, where a topological path is represented as a series of images.
Navigation is divided into subtasks, each consisting of reaching the next database image. We have contributed to the control scheme by designing new models for the visual features [1, 2], by proposing a varying reference in the feedback loop [3], and by considering obstacle avoidance [4]. These works are detailed below.
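The subtask structure can be summarized by a simple loop that servoes toward the current key image and switches to the next one once it is reached. The sketch below is only illustrative: the function names (grab_features, control, send_velocity) and the image-error threshold are hypothetical placeholders, not the actual framework API.

```python
import numpy as np

def similarity_error(current_feats, key_feats):
    """Mean image-plane distance between matched features (smaller = closer)."""
    return float(np.mean(np.linalg.norm(current_feats - key_feats, axis=1)))

def navigate(grab_features, key_image_features, control, send_velocity, tol=5.0):
    """Follow the topological path by reaching each key image in turn."""
    for key_feats in key_image_features:               # one subtask per key image
        while True:
            cur_feats = grab_features(key_feats)       # features matched against the key image
            if similarity_error(cur_feats, key_feats) < tol:
                break                                  # key image reached: switch to the next one
            v, omega = control(cur_feats, key_feats)   # visual feedback law (see works below)
            send_velocity(v, omega)                    # forward and steering commands
```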
Our vehicle uses a monocular camera, and the path is represented as a series of reference images. Since the robot is equipped with only one camera, it is difficult to guarantee vehicle pose accuracy during navigation. The main contribution of [1] is the evaluation and comparison (both in the image space and in the 3D pose space) of six appearance-based controllers (one pose-based and five image-based) for replaying the reference path. Experimental results are presented both in a simulated environment and on a real robot. The experiments show that the two image Jacobian controllers that exploit epipolar geometry to estimate feature depth outperform the four other controllers, in both the pose and the image space. We also show that image Jacobian controllers using uniform feature depths are effective alternatives whenever sensor calibration or depth estimation is inaccurate. Further details, including videos of the experiments, can be found here.
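To make the role of feature depth concrete, here is a generic image-based control law v = -λ L̂⁺ (s - s*) for point features, where depth Z only enters through the estimated interaction matrix. This is a hedged sketch of the general technique, not the exact controllers compared in [1]; the numerical values are arbitrary.

```python
import numpy as np

def interaction_matrix(points, Z):
    """Stacked 2x6 interaction matrices of normalized image points at depths Z."""
    rows = []
    for (x, y), z in zip(points, Z):
        rows.append([-1.0 / z, 0.0, x / z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / z, y / z, 1.0 + y * y, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(s, s_star, Z_hat, gain=0.5):
    """Camera velocity screw from the visual error; Z_hat may be estimated
    (e.g. via epipolar geometry) or simply set to a uniform value."""
    e = (s - s_star).reshape(-1)
    L_hat = interaction_matrix(s, Z_hat)
    return -gain * np.linalg.pinv(L_hat) @ e

# Four matched points, once with estimated depths, once with a uniform depth.
s      = np.array([[0.10, 0.05], [-0.12, 0.04], [0.08, -0.06], [-0.09, -0.07]])
s_star = np.array([[0.05, 0.05], [-0.05, 0.05], [0.05, -0.05], [-0.05, -0.05]])
v_est  = ibvs_velocity(s, s_star, Z_hat=[3.2, 2.8, 3.5, 3.0])
v_uni  = ibvs_velocity(s, s_star, Z_hat=[3.0] * 4)
print(v_est, v_uni)
```

The only difference between the depth-estimating and uniform-depth variants is the Z_hat argument, which is exactly where calibration and depth-estimation errors enter the loop.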
A catadioptric vision system combines a camera and a mirror to achieve a wide field of view. This type of vision system has many potential applications in mobile robotics. In [2], we design a robust image-based control scheme using a catadioptric vision system mounted on a mobile robot. We exploit the fact that decoupling the features contributes to the robustness of a control method. More precisely, from the image of a point, we derive a minimal and decoupled set of features measurable on any catadioptric vision system. Using this minimal set, a classical control method is proved to be robust in the presence of point range errors. Finally, experimental results with a coarsely calibrated mobile robot validate the robustness of the new decoupled scheme. Further details, including videos of the experiments, can be found here.
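The robustness benefit of decoupling can be illustrated with a schematic calculation (this is a generic argument, not the specific features derived in [2]): with a minimal set of features the classical law inverts the estimated interaction matrix, and if both the true and estimated matrices are diagonal with the range entering only as positive factors, a range error merely rescales the convergence rate.

```latex
% Schematic robustness argument for a decoupled, minimal feature set
% (an illustration of the general principle, not the exact features of [2]).
\[
  \mathbf{v}_c = -\lambda\,\widehat{\mathbf{L}}^{-1}\mathbf{e},
  \qquad
  \dot{\mathbf{e}} = \mathbf{L}\,\mathbf{v}_c
                   = -\lambda\,\mathbf{L}\,\widehat{\mathbf{L}}^{-1}\mathbf{e}.
\]
% If the true and estimated interaction matrices are diagonal and the point
% range rho only appears through positive factors,
\[
  \mathbf{L} = \mathrm{diag}\big(\alpha_1(\rho),\dots,\alpha_m(\rho)\big),
  \qquad
  \widehat{\mathbf{L}} = \mathrm{diag}\big(\alpha_1(\hat\rho),\dots,\alpha_m(\hat\rho)\big),
  \qquad \alpha_i > 0,
\]
% then each error component obeys
\[
  \dot{e}_i = -\lambda\,\frac{\alpha_i(\rho)}{\alpha_i(\hat\rho)}\,e_i ,
\]
% so a wrong range estimate only rescales the exponential decay and cannot
% destabilize the loop.
```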
In [3], we present a controller for visual navigation that uses a time-independent varying reference in the feedback law. The navigation framework relies on a monocular camera, and the path is represented as a series of key images. The varying reference is determined using a vector field derived from the previous and next key images. Results in a simulated environment, as well as on a real robot, show the advantages of the varying reference over a fixed one, both in the image and in the 3D state space. This clip shows a video of the experiments carried out using the varying reference.
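As a rough illustration of the idea, the reference fed to the feedback law can slide between the features of the previous and next key images according to the current progress, instead of jumping directly to the next key image. The sketch below is only indicative: the actual vector field used in [3] may be defined differently, and all names are hypothetical.

```python
import numpy as np

def varying_reference(s_current, s_prev_key, s_next_key, step=0.1):
    """Place the reference slightly ahead of the current features, on the
    segment joining the previous and next key-image features."""
    seg = s_next_key - s_prev_key
    seg_len2 = float(np.sum(seg * seg))
    # Project the current features onto the segment to measure progress in [0, 1].
    progress = float(np.clip(np.sum((s_current - s_prev_key) * seg) / seg_len2, 0.0, 1.0))
    target = min(progress + step, 1.0)
    return s_prev_key + target * seg

# Example usage with two matched point features.
s_prev = np.array([[0.00, 0.00], [0.10, 0.00]])
s_next = np.array([[0.20, 0.10], [0.30, 0.10]])
s_cur  = np.array([[0.05, 0.02], [0.15, 0.03]])
s_ref  = varying_reference(s_cur, s_prev, s_next)   # reference fed to the feedback law
```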
In [4], we propose a general framework for robot task execution with simultaneous obstacle avoidance. Kinematic redundancy guarantees that obstacle avoidance and the primary task are independent, and the primary task can be purely sensor-based. The problem is solved both in an obstacle-free and in a dangerous context, and the control law is smoothed in the intermediate situations. The control scheme is validated in a series of simulations and real outdoor experiments. Simulations carried out within Webots are shown in this clip. Real outdoor experiments are shown in this clip.
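A common way to realize this kind of scheme is to execute the primary task through the task Jacobian, let obstacle avoidance act in its null space, and blend the obstacle-free and dangerous behaviours with a smooth activation function. The sketch below follows that general redundancy-based pattern under these assumptions; the exact control law and activation function of [4] may differ.

```python
import numpy as np

def activation(distance, d_safe=2.0, d_danger=0.5):
    """Rises smoothly from 0 (obstacle-free) to 1 (dangerous) as obstacles get close."""
    if distance >= d_safe:
        return 0.0
    if distance <= d_danger:
        return 1.0
    x = (d_safe - distance) / (d_safe - d_danger)
    return 3 * x**2 - 2 * x**3            # smoothstep: C1-continuous blending

def control(J, e_dot_star, z_avoid, obstacle_distance, gain=1.0):
    """q_dot = gain * J^+ e_dot* + alpha * (I - J^+ J) z_avoid."""
    J_pinv = np.linalg.pinv(J)
    n = J.shape[1]
    null_proj = np.eye(n) - J_pinv @ J    # motions invisible to the primary task
    alpha = activation(obstacle_distance)
    return gain * J_pinv @ e_dot_star + alpha * null_proj @ z_avoid
```

Because the avoidance term is projected into the null space of the primary task Jacobian, it cannot perturb the sensor-based task, which is the independence property mentioned above.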
The work presented on this site was funded in part by the ANR CityVIP project.