J. Pagès, C. Collewet, F. Chaumette, J. Salvi. An approach to visual servoing based on coded light. In IEEE Int. Conf. on Robotics and Automation, ICRA'2006, Pages 4118-4123, Orlando, Florida, May 2006.
Copyright notice:
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.
Positioning a robot with respect to objects by using data provided by a camera is a well-known technique called visual servoing. In order to perform a task, the object must exhibit visual features which can be extracted from different points of view. Visual servoing is thus object-dependent, as it relies on the object's appearance. Therefore, the positioning task cannot be performed in the presence of non-textured objects or objects for which extracting visual features is too complex or too costly. This paper proposes a solution to this limitation, which is inherent to current visual servoing techniques. Our proposal is based on the coded structured light approach as a reliable and fast way to solve the correspondence problem. In this case, a coded light pattern is projected, providing robust visual features independently of the object's appearance.
@InProceedings{Pages06a,
Author = {Pagès, J. and Collewet, C. and Chaumette, F. and Salvi, J.},
Title = {An approach to visual servoing based on coded light},
BookTitle = {IEEE Int. Conf. on Robotics and Automation, ICRA'2006},
Pages = {4118--4123},
Address = {Orlando, Florida},
Month = {May},
Year = {2006}
}