I have started my post-doc with Oussama Khatib in the Computer Science Department of Stanford University. You can consult my new home page here.
My PhD defense took place on December 12, 2006. The defense committee was composed of Albert BENVENISTE (directeur de recherche, IRISA, Rennes), Etienne DOMBRE (directeur de recherche, LIRMM, Montpellier), Abderrahmane KHEDDAR (directeur de recherche, JRL, Tsukuba, Japan), Jean-Paul LAUMOND (directeur de recherche, LAAS, Toulouse), José SANTOS VICTOR (associate professor, ISR/IST, Lisbon, Portugal), Luigi VILLANI (associate professor, University of Napoli, Italy) and François CHAUMETTE (directeur de recherche, IRISA, Rennes).
My thesis is available here (in French):
The slides of the presentation are available as a PDF file (800 KB), in PowerPoint format (6.7 MB), or as a zipped PowerPoint archive including the videos (24 MB).
Since September 2003, I have been working as a PhD student in the Lagadic team at IRISA. My adviser is François Chaumette.
I am a former student of the Ecole Normale Superieure de Cachan (2003-2004).
In June 2003, I obtained a Master's degree (DEA) in computer science, specialized in Image-Vision-Robotics, from the Institut National Polytechnique de Grenoble (INPG).
I also graduated with a Master of Engineering from the Ecole Nationale Superieure d'Informatique et Mathematiques Appliquees de Grenoble (ENSIMAG) in 2003.
Complete list (with PostScript or PDF files if available)
My research activities are concerned with visual servoing, and more specifically the integration of visual-servoing schemes into real robot applications. This research topic is at the intersection of the fields of robotics, automatic control, and computer vision.
Basically, visual servoing techniques consist in using the data provided by one or several cameras to control the motion of a dynamic system. Such systems are usually robot arms, mobile robots, or even virtual robots such as animated virtual humanoids. These techniques are very efficient for numerous types of robotic problems, providing accurate positioning, good robustness to noise, stability, etc. However, the control can become difficult or even erratic when the displacement is large. Moreover, the control law only considers the link between the camera and the tracked visual features. To integrate the servo into a real, complex robotic system, the control should also make sure that it avoids undesirable configurations such as joint limits, obstacles, or visual occlusions.
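As a rough illustration of the basic principle, a classical image-based visual servoing law computes the camera velocity from the error between the current and desired visual features. The sketch below (in Python, assuming a known interaction matrix L) is only a generic textbook example, not the specific controllers discussed on this page.

    import numpy as np

    def ibvs_velocity(s, s_star, L, gain=0.5):
        """Classical image-based visual servoing: v = -gain * L^+ (s - s*).

        s, s_star : current and desired feature vectors (e.g. image-point coordinates)
        L         : interaction matrix linking feature motion to camera velocity
        Returns the 6-DOF camera velocity screw (vx, vy, vz, wx, wy, wz).
        """
        error = s - s_star
        return -gain * np.linalg.pinv(L) @ error

Applied at each iteration of the servo loop, this law makes the feature error decrease exponentially as long as the interaction matrix is well estimated.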
The first works addressing these problems are based on path-planning methods. However, such a solution requires a lot of knowledge about the robot and its environment, and is thus less reactive to changes of the goal, the environment, or the constraints. Our research work is mainly focused on a reactive way to solve this problem. In this sense, our work is closer to switching or sequencing control laws. Two approaches have been developed in parallel.
The first approach proposes a high-level execution controller that enables or disables some parts of the global control law to ensure global convergence and obstacle avoidance. Far from any obstacle, the robot moves according to a full task constraining all its degrees of freedom. When it comes closer to a configuration to avoid, the higher-level controller removes one constraint; the freed DOF can then be used to avoid the obstacle. The constraint is activated again once the obstacle has been avoided. The controller also ensures global convergence by activating a partial path-following task when a local minimum is reached. The robot finally accomplishes the global task by sequencing full-constraining tasks, redundant tasks plus avoidance criteria, and path-following phases. This approach has been experimentally validated on the six-DOF manipulator robot Afma-6.
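The supervisory logic of this first approach can be summarized schematically as follows. The sketch below is only a reading of the paragraph above: the objects full_task, partial_task, avoidance and path_follower, and their predicates, are hypothetical placeholders, not the actual implementation running on the Afma-6.

    def sequencing_controller(robot, full_task, partial_task, avoidance, path_follower):
        # Schematic supervisory loop: enable or disable parts of the control law
        # depending on the configurations to avoid and on local minima.
        while not full_task.is_completed(robot):
            if robot.near_undesired_configuration():   # joint limit, obstacle or occlusion
                # one constraint is removed; the freed DOF serves the avoidance criterion
                velocity = partial_task.control(robot) + avoidance.control(robot)
            elif full_task.at_local_minimum(robot):
                # escape the local minimum with a partial path-following phase
                velocity = path_follower.control(robot)
            else:
                # far from any obstacle: full task constraining all DOF
                velocity = full_task.control(robot)
            robot.apply(velocity)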
The second approach proposes several modifications of the classical task-function approach to free up some motion directly at the lowest level. A first work, called directional redundancy, proposes a new reading of the redundancy formalism (used to build a control law that realizes a main task while simultaneously taking supplementary constraints into account). The general idea is to enable the motions produced by the secondary control law that help the main task to complete faster. The main advantage is to enhance the performance of the secondary task by enlarging the number of available DOF. A second, more recent work increases the available DOF by enlarging the convergence area: instead of reaching a specific point, the robot can stop anywhere in a region around the goal, defined by a fixed threshold on the error. The convergence is then faster, and all the motions inside the convergence area are free for any secondary objective. This approach, called qualitative visual servoing, will be used as a first step when sequencing several tasks.
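For reference, the classical redundancy formalism that directional redundancy builds upon writes the joint velocity as the main-task term plus a secondary motion projected into the null space of the main task. The sketch below shows that classical baseline only; the direction-dependent projector introduced by directional redundancy is not reproduced here.

    import numpy as np

    def redundancy_control(J1, e1, q2_dot, gain=0.5):
        """Classical redundancy formalism: the secondary motion q2_dot is
        projected into the null space of the main task so that it cannot
        disturb the main error e1.

        J1     : Jacobian of the main task
        e1     : main task error
        q2_dot : joint velocity asked by the secondary objective (avoidance, etc.)
        """
        J1_pinv = np.linalg.pinv(J1)
        P1 = np.eye(J1.shape[1]) - J1_pinv @ J1      # projector onto ker(J1)
        return -gain * J1_pinv @ e1 + P1 @ q2_dot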
All these works have been experimentally validated on the robotic platform of the laboratory, a six-DOF eye-in-hand robot. A first attempt to adapt and validate our approach on more complex robots has been realized in Lisbon, Portugal, with the IST/ISR laboratory. A comparative study has been realized with a humanoid torso (a six-DOF arm plus a four-DOF head). Because the system is difficult to calibrate, this robot required an adaptive learning process during sequencing to complete the task.
The first part of my current research is to demonstrate that the task-sequencing formalism is an appropriate solution to control highly redundant robots realizing complex, composed tasks in real environments. This research is part of a collaboration with the Joint Robotics Laboratory, Tsukuba, Japan. The experimental platform is the humanoid robot HRP-2, a highly redundant humanoid mobile robot. It has several sensors for closed-loop control, including cameras. Two arms with hands enable it to realize complex tasks such as object grasping. Complex executions such as door opening have already been realized with this robot. The current solution uses a 3D-reconstruction system to map the robot environment, and then an open-loop motion generator to drive the robot to its goal. Although this system provides a complete solution, it could be greatly improved in terms of robustness to detection noise and to environment changes by using a reactive closed-loop control such as task sequencing. We thus propose to adapt and implement the task-sequencing formalism on the humanoid robot HRP-2 for a grasping-while-walking task. Videos: External view (4.9M, MPEG-1 format) - Embedded-camera view (5.5M, MPEG-1 format)
Grasping a ball while walking
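As an illustration of how several prioritized tasks (for instance gaze, grasp and posture) can be composed on a highly redundant robot, the sketch below implements the standard recursive null-space prioritization; it is a generic formulation given only as an example, not the exact scheme running on HRP-2.

    import numpy as np

    def prioritized_tasks(jacobians, errors, gain=0.5):
        """Recursive prioritized composition of several tasks: each task is
        resolved in the null space of all higher-priority ones.

        jacobians : list of task Jacobians, ordered from highest to lowest priority
        errors    : list of corresponding task errors
        """
        n = jacobians[0].shape[1]              # number of robot DOF
        q_dot = np.zeros(n)
        P = np.eye(n)                          # projector onto the remaining free space
        for J, e in zip(jacobians, errors):
            JP = J @ P
            q_dot = q_dot + np.linalg.pinv(JP) @ (-gain * e - J @ q_dot)
            P = P - np.linalg.pinv(JP) @ JP
        return q_dot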
Another subject of current research concerns the control of mobile robots. Most existing control laws for mobile-robot stabilization (i.e. point-to-point control) use time-varying feedback. In these control schemes, the time variable is explicitly used to modify the control behavior and thus overcome the non-holonomic constraint. In that sense, the modification of the control behavior by the time variable can be seen as a sequencing of several tasks indexed by time. Rather than using the time variable as an open-loop selector of the sequence of tasks to apply, we propose to use a control structure similar to the one already used for manipulator robots to stabilize a mobile robot. Some experiments are currently being carried out on the mobile-robot platform Cycab, a car-like robot with double steering axes and an embedded camera mounted on a pan-tilt unit.
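To make the non-holonomic constraint concrete, the sketch below integrates the kinematics of a simple unicycle-type robot (the Cycab itself has a more complex double-steering geometry): whatever the inputs, the robot cannot generate a lateral velocity, which is why smooth time-invariant feedback cannot stabilize it at a point (Brockett's condition) and time-varying feedback is used instead.

    import numpy as np

    def unicycle_step(state, v, omega, dt=0.01):
        """One integration step of a unicycle-type mobile robot.

        state : (x, y, theta)
        v     : forward speed
        omega : rotation speed
        The non-holonomic constraint appears in the fact that no choice of
        (v, omega) can produce a velocity perpendicular to the heading.
        """
        x, y, theta = state
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        theta += omega * dt
        return np.array([x, y, theta])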