G. Flandin, F. Chaumette. Visual Data Fusion: Application to Objects Localization and Exploration. Research Report IRISA, No 1394, April 2001.
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.
Visual sensors provide only uncertain and partial knowledge of a scene. In this report, we present a scene knowledge representation that makes the integration and fusion of new, uncertain, and partial sensor measurements possible. It is based on a mixture of stochastic and set-membership models. We consider that, for a large class of applications, an approximate representation is sufficient to build a preliminary map of the scene. Our approximation mainly relies on ellipsoidal calculus, by means of a normal assumption for stochastic laws and outer or inner ellipsoidal bounding for uniform laws. With these approximations, we coarsely model objects by their enclosing ellipsoid. We then build an efficient estimation process that integrates visual data online in order to refine the location and approximate shape of the objects. Based on this estimation scheme, we compute optimal exploratory motions for the camera online.
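The Gaussian side of the fusion scheme described above can be illustrated with a standard information-filter update, in which two independent normal estimates of the same object location are combined. This is a generic sketch under the abstract's normal assumption, not the report's exact algorithm; all names and values are illustrative.

```python
import numpy as np

def fuse_gaussian(mu1, cov1, mu2, cov2):
    """Fuse two independent Gaussian estimates of the same point
    (information-filter form): the fused information matrix is the
    sum of the individual ones. Generic illustration, not the
    authors' specific estimation process."""
    info1 = np.linalg.inv(cov1)
    info2 = np.linalg.inv(cov2)
    cov = np.linalg.inv(info1 + info2)
    mu = cov @ (info1 @ mu1 + info2 @ mu2)
    return mu, cov

# Two uncertain 2-D observations of the same object center
mu_a, cov_a = np.array([1.0, 0.0]), np.diag([0.5, 0.2])
mu_b, cov_b = np.array([1.2, 0.1]), np.diag([0.1, 0.4])
mu_f, cov_f = fuse_gaussian(mu_a, cov_a, mu_b, cov_b)
# The fused covariance is smaller than either input covariance,
# reflecting the reduced uncertainty after integrating both measures.
```

Each new measurement shrinks the uncertainty ellipsoid (the level set of the fused covariance), which is what makes the online refinement of object location and shape possible.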
@TechReport{Flandin01a,
Author = {Flandin, G. and Chaumette, F.},
Title = {Visual Data Fusion: Application to Objects Localization and Exploration},
Number = {1394},
Institution = {IRISA},
Month = {April},
Year = {2001}
}
This document was translated automatically from BibTeX by bib2html (© 2003 Eric Marchand, INRIA, Vista Project).