Contact: Éric Marchand, Nicolas Courty
Creation date: May 2000
This demonstration presents an original solution to the camera control problem in a virtual environment. Our objective is to present a general framework that allows the automatic control of a camera in a dynamic environment. The proposed method is based on the image-based control, or visual servoing, approach: it consists of positioning a camera according to the information perceived in the image. This is thus a very intuitive approach to animation. To be able to react automatically to modifications of the environment, we also considered the introduction of constraints into the control. This approach is thus well suited to highly reactive contexts (virtual reality, video games).
There are many problems associated with the management of a camera in a virtual environment. It is not only necessary to carry out a visual task (often a focusing task or, more generally, a positioning task) efficiently, but also to react in an appropriate and efficient way to modifications of this environment. We chose to use techniques widely considered in the robotic vision community. The basic tool that we considered is visual servoing, which consists of positioning a camera according to the information perceived in the image. This image-based control constitutes the first novelty of our approach: the task is specified in a 2D space, while the resulting camera trajectories are in a 3D space. It is thus a very intuitive approach to animation, since the camera is controlled according to what one wishes to observe in the resulting image sequence.
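The classic image-based control law behind this kind of approach can be sketched as follows (a minimal numpy sketch of the standard visual servoing scheme for point features, not the authors' exact implementation; the function names are illustrative):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z.
    Its rows relate the point's image velocity to the camera velocity
    screw (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def visual_servo_step(points, desired, depths, gain=0.5):
    """One iteration of the image-based control law v = -gain * L^+ (s - s*),
    stacking one 2x6 interaction block per observed point."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    error = (np.asarray(points) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

Each iteration returns a 6-d.o.f. camera velocity that makes the observed features converge toward their desired image positions; when the features are at their goal, the camera stops.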
However, this is not the only advantage of this method. Indeed, contrary to previous work [Gleicher and Witkin, Siggraph 92], we did not limit ourselves to positioning tasks wrt. virtual points in static environments. In many applications (such as video games) it is necessary to react to modifications of the environment, to the trajectories of mobile objects, etc. We thus considered the introduction of constraints into camera control. Thanks to the redundancy formalism, the secondary task (which reflects the constraints on the system) does not have any effect on the visual task. To show the validity of our approach, we have proposed and implemented solutions to various classic problems, from simple tracking tasks to more complex tasks such as occlusion or obstacle avoidance, or positioning wrt. the lit aspects of an object (in order to ensure a good ``photograph''). The approach has real qualities, and the very encouraging results obtained suggest that the use of visual control for computer animation is a promising technique. Its main drawback is a direct counterpart of its principal quality: the control is carried out in the image, which implies a loss of control over the 3D camera trajectory. This 3D trajectory is computed automatically to ensure the visual and secondary tasks but is not controlled by the animator. For this reason, one can undoubtedly see a wider interest in the use of these techniques within real-time reactive applications.
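The redundancy formalism mentioned above can be sketched in a few lines: the gradient of the secondary cost is projected onto the null space of the interaction matrix, so that the secondary motion produces no motion of the visual features (a generic sketch under these assumptions; names are illustrative):

```python
import numpy as np

def servo_with_secondary(L, error, grad_h, gain=0.5):
    """Combine the visual task with a secondary task using the redundancy
    formalism: grad_h, the gradient of the secondary cost h, is projected
    onto the null space of the interaction matrix L, so the secondary
    motion has no effect on the image features."""
    L_pinv = np.linalg.pinv(L)
    projector = np.eye(L.shape[1]) - L_pinv @ L  # null-space projector
    return -gain * L_pinv @ error - projector @ grad_h
```

Because the projector annihilates any component of the secondary motion that would move the features, the visual task (e.g. keeping the target centered) keeps converging regardless of the constraint being pursued.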
The goal of this experiment is to see a rectangle at a given position in the image. We consider a painting by Monet as the object of interest and want to see it centered in the screen.
camera view (click for the complete animation)
bird's eye view (click for the complete animation)
In this example, we applied the proposed methodology to a navigation task in a complex environment. The target to be followed is moving in a museum-like environment. This ``museum'' has two rooms linked by stairs. The goal of the experiment is to keep the target in view (i.e., to avoid occlusions) while considering on-line the modifications of the environment (i.e., other moving objects). In this example, we consider a focusing task wrt. an image-centered virtual sphere. This task constrains 3 d.o.f. of the virtual camera. The following figure shows the camera trajectories for the various strategies applied while the target and the camera move in the first room of the environment. Obstacles appear in yellow. The target trajectory is represented as a red dotted line, while the trajectory of another moving object is represented as a blue dotted line. The red camera trajectory corresponds to the simplest strategy: simply focus on the object. As nothing is done to account for the environment, occlusions and then collisions with the environment occur. The blue trajectory only considers the avoidance of occlusions by static objects; as a consequence, the occlusion by the moving object occurs. The green trajectory considers the avoidance of occlusions by both static and moving objects.
Museum walkthrough: camera trajectories for various strategies
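Occlusion avoidance of this kind is typically driven by a scalar cost that grows as an occluder's projection approaches the target's projection in the image; its gradient then serves as the secondary task. A minimal sketch (the cost shape and names are illustrative assumptions, not the authors' exact formulation):

```python
import numpy as np

def occlusion_cost(target_px, occluder_px, eps=1e-9):
    """Scalar cost that grows as the projection of an occluder
    approaches the projection of the target in the image."""
    d2 = float(np.sum((np.asarray(target_px, dtype=float)
                       - np.asarray(occluder_px, dtype=float)) ** 2))
    return 1.0 / (d2 + eps)

def total_occlusion_cost(target_px, occluder_list):
    """Sum the cost over every potential occluder, static or moving,
    so that all of them repel the camera's line of sight."""
    return sum(occlusion_cost(target_px, o) for o in occluder_list)
```

Restricting the sum to static objects reproduces the ``blue trajectory'' behaviour above, while including the moving objects as well corresponds to the ``green trajectory'' strategy.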
The next figure shows the views acquired by the camera when no specific strategy is used to avoid occlusions of the target or collisions with obstacles. This leads to multiple occlusions of the target and multiple collisions with the environment.
The control strategy now considers the presence of obstacles. This time, the target always remains in the field of view, at its desired position in the image. Collisions with the walls and occlusions of the target are correctly avoided. Note that the environment is not flat, and neither the target nor the camera moves within a plane (the target goes down the stairs). The tracking and avoidance processes perform well despite the fact that the target moves in 3D.
In the bird's eye view, the yellow volume (associated with the camera-target pair) corresponds to the bounding volume used to predict occlusions.
In this experiment the considered task is the same, but the target is moving within a narrow corridor and turning right. The task cannot be achieved if the distance between the camera and the target remains constant. If the camera is to keep the target in view, an occlusion avoidance process has to be performed. The problem is that the motion computed to avoid the occlusion moves the camera toward the wall, so an obstacle avoidance process is also necessary. The resulting control law automatically produces a motion that moves the camera away from the wall and reduces the camera-target distance.
camera view (click for the complete animation)
bird's eye view (click for the complete animation)
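The corridor behaviour can be read as one secondary cost obtained by summing an occlusion term and an obstacle term, whose gradient with respect to the camera pose is then projected into the null space of the visual task. A generic sketch (the cost functions, weights, and names are assumptions for illustration):

```python
import numpy as np

def numeric_gradient(cost, pose, eps=1e-6):
    """Central finite-difference gradient of a scalar cost w.r.t. the
    6-d.o.f. camera pose; the result can serve as the secondary task."""
    g = np.zeros_like(pose)
    for i in range(len(pose)):
        dp = np.zeros_like(pose)
        dp[i] = eps
        g[i] = (cost(pose + dp) - cost(pose - dp)) / (2.0 * eps)
    return g

def combined_cost(pose, occlusion, obstacle, w_occ=1.0, w_obs=1.0):
    """Weighted sum of the occlusion and obstacle costs: minimizing it
    trades keeping the line of sight clear against staying away from
    the wall, which is what shortens the camera-target distance here."""
    return w_occ * occlusion(pose) + w_obs * obstacle(pose)
```

With both terms active, descending the combined gradient yields the reported motion: away from the wall and closer to the target, while the primary visual task keeps the target at its desired image position.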
We consider a model of the Venus de Milo. Our goal is to control the light position so as to correctly light the statue. In this experiment we first consider a static camera and a moving light. Then, once a minimum of the cost function is reached, we impose a motion on the camera. The light must then move in order to maintain correct lighting of the statue.
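The light placement can be viewed as gradient descent on a lighting-quality cost evaluated at the light position; when the camera then moves, re-running the descent keeps the statue well lit. A generic sketch (the cost function is a placeholder, not the authors' actual lighting criterion):

```python
import numpy as np

def descend_light(cost, light_pos, step=0.1, iters=100):
    """Move the light down the gradient of a lighting-quality cost
    until a minimum is reached. The cost is treated as a black box
    and differentiated by central finite differences."""
    p = np.asarray(light_pos, dtype=float)
    for _ in range(iters):
        g = np.zeros_like(p)
        for i in range(len(p)):
            dp = np.zeros_like(p)
            dp[i] = 1e-5
            g[i] = (cost(p + dp) - cost(p - dp)) / 2e-5
        p = p - step * g
    return p
```

Calling this again after each camera displacement mimics the second phase of the experiment, where the light keeps adapting to the moving viewpoint.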
E. Marchand, N. Courty. Controlling a camera in a virtual environment. The Visual Computer Journal, 18(1):1-19, February 2002.
E. Marchand, N. Courty. Image-based virtual camera motion strategies. In Proc. of Graphics Interface, GI 2000, Montreal, Quebec, May 2000.
E. Marchand, N. Courty. Visual servoing in computer animation. What you want to see is really what you get! IRISA Research Report No. 1310, March 2000 (extended version of the Graphics Interface paper).
N. Courty, E. Marchand. Navigation et contrôle d'une caméra dans un environnement virtuel : une approche référencée image. 7èmes journées de l'Association Française d'Informatique Graphique, AFIG'99, Reims, November 1999.