Direction des Relations Européennes et Internationales (DREI)
ASSOCIATED TEAM "Bird" (2008 selection)
INRIA Projects: Bunraku / IPARLA
Foreign Partner: State Key Lab CAD&CG, Zhejiang University
INRIA Research Centers: Rennes / Bordeaux Sud-Ouest
Country: China

 | French coordinators | | Foreign coordinator
Last name, first name | Donikian, Stéphane | Guitton, Pascal | Peng, Qunsheng
Grade/position | CR1 | Professor | Professor
Affiliation | IRISA/INRIA Rennes | LaBRI/INRIA Futurs | State Key Lab CAD&CG, Zhejiang University
Mail address | Campus de Beaulieu, 35042 Rennes Cedex | Université Bordeaux 1 | Zijingang Campus, Hangzhou, 310027, China
URL | | |
Phone | 02 99 84 72 57 | 05 40 00 69 18 | (0086) 571-88206681
Fax | 02 99 84 71 71 | 05 40 00 66 69 | (0086) 571-88206680
Abstract of the proposal
Interactions between Real and Virtual Worlds
The main purpose of this collaboration is to provide new tools for managing the interaction between real and virtual worlds. We first want to ease the interaction between real users and virtual worlds during modelling and collaborative tasks. Concerning the generation of virtual worlds, we will focus not on fully automatic solutions but on semi-automatic ones that take human decisions into account. This integration of the user is essential to provide intuitive processes and better immersion. Based on the different interfaces between the virtual and real worlds (from a simple stylus to a set of cameras), we have to capture motions and gestures accurately and to interpret human intentions, in order to correctly integrate these actions and intentions. Motion interpretation is also crucial for collaborative tasks between real and virtual humans, because understanding human intention is required to provide correct responses. For modelling, this would result in more intuitive solutions, since they will be based on natural abilities. For animation, this would result in a tighter integration of the virtual and real worlds, with real-time editing and control of virtual humans. Understanding the content of a representation of the real world, such as a video, is then required to augment real scenes with dynamic virtual content. To achieve this goal, we want to work on the realism of virtual humans, the coherency between the acquired geometry and motion of the real world and those of the virtual one, the close integration of the two worlds during rendering, and the accuracy of the modelling and editing process.
Qunsheng Peng was born in 1947 and received his undergraduate degree in Automation. Dr. Peng's research interests include computer simulation and animation, scientific data visualization, realistic image synthesis, and geometric modeling. Over the past years, he has published more than 200 papers in academic journals and conferences on shading models, real-time rendering, curved-surface modeling, and infrared image synthesis.
The CAD & CG State Key Lab and the BUNRAKU (formerly SIAMES) and IPARLA project-teams were partners in the STIC-Asie project on Virtual Reality (2004-2006), supported by CNRS, INRIA and the French Ministry of Foreign Affairs. The project's objective was to develop a research network on Virtual Reality including collaborations between Asian and French teams, and it was an opportunity for mutual discovery between the different teams. Pr. Qunsheng Peng and Anatole Lécuyer attended the second workshop of the STIC-Asie project together in Strasbourg in December 2005. Pr. Zhigeng Pan and Stéphane Donikian attended the third workshop together in Tokyo in 2006. These first exchanges were reinforced during other conferences in the field of computer animation where researchers from both teams met: CASA 2005, CASA 2006, SCA 2006 and CGI.
In parallel, Stéphane Donikian and Pascal Guitton made a study trip in November 2005 on behalf of the French Ministry of Foreign Affairs, in order to visit Computer Graphics laboratories in the vicinity of Shanghai; one of the visited teams was the CAD & CG Lab. Following this visit, Stéphane Donikian and Pascal Guitton on the French side, and Pr. Q. Peng (State Key Lab) and X. Denshen (NUST) on the Chinese side, organized a Sino-French seminar in June 2006, with the support of INRIA and the French Consulate in Shanghai (http://www.irisa.fr/prive/donikian/SFS06), in order to extend scientific relationships between the two countries. Following this seminar, new collaborations were initiated between French and Chinese teams.
On the one hand, Pascal Guitton, Patrick Reuter and Xavier Granier made a visit from the 17th to the 24th of March 2007, with the French Consulate's support, in order to specify the scientific content of a possible collaboration. The topics of 3D modeling and Non-Photorealistic Rendering, together with their adaptation to mobile devices, raised great interest. A joint paper on intuitive modeling by sketching is in preparation and will be finalized during the visit of Dr. Xavier Granier at the end of October 2007 and the stay of Dr. Hongxin Zhang at the end of November 2007.
On the other hand, Stéphane Donikian obtained funding for several visits from the French Consulate in Shanghai. First, Franck Multon and Stéphane Donikian visited the State Key Lab from the 9th to the 16th of October 2007.
Two publications illustrate the first results of our collaboration:
[Pronost07] N. Pronost, F. Multon, Q. Li, W. Geng, R. Kulpa, G. Dumont. Techniques d'animation pour gérer les interactions entre un combattant virtuel et un sujet réel. Proceedings of the congress of the French Association for Virtual Reality (AFRV), October 2007.
[Pronost08] N. Pronost, F. Multon, Q. Li, W. Geng, R. Kulpa, G. Dumont. Interactive animation of autonomous characters: application to virtual kung-fu fighting. Submitted to IEEE-VR 2008.
No other current collaboration exists between INRIA and the State Key Lab CAD&CG of Zhejiang University.
Funding from the French Consulate in Shanghai allowed us to start a collaboration with the CAD & CG lab. The first exchanges have already raised several complementary research topics and potential publications. An associated team would allow the development of longer-term research projects.
The proposed associated team already gathers two INRIA projects from two different centers, which will naturally result in closer collaborations between them. Furthermore, with this associated team, INRIA can extend its collaboration with China.
The IPARLA and Bunraku projects were also partners on a proposal for a CNRS-JST collaboration program.
The CAD & CG State Key Lab is one of the ten best State Key Labs in China.
Motion capture is now widely used for animating virtual humans. However, processing motion capture data (generally based on the Cartesian positions of external markers) in order to obtain plausible animations for virtual humans is very complex and raises many problems. One of the main issues is to adapt the motion of the actor to the anthropometric sizes of the virtual human, because they are generally different. This "motion retargeting" process has been widely studied in computer animation by solving geometric constraints, such as placing the feet on the ground without sliding [Gleicher98]. More generally, reusing motion capture data involves adapting the trajectories in order to take specific constraints into account, such as touching an object or adapting the motion to non-flat grounds. Both motion retargeting and this latter adaptation are generally performed by solving geometric constraints.
In some cases, dynamics is also an important issue for adapting motion to complex situations. For example, adapting a gait to external perturbations (such as pushes or sloped terrain) implies dealing with physics. This has mainly been done to calculate motions after a hit or a punch occurs [Zordan05]: a passive simulation based on a dynamic model computes an immediate reaction to the external perturbation, and the system then searches a database for a motion that is compatible with the current state of the system (such as receiving a hit from the right side).
All these processes require a lot of computation time and generally require complete knowledge of the constraints in advance. Indeed, solving constraints at given times raises the problem of obtaining continuous motions, leading to iterative methods. As a consequence, these methods are not suitable for real-time animation and do not allow real-time interactions between virtual and real humans. A solution consists in precomputing several situations and storing the results in a database; querying this database for motions can then be performed in real time.
Motion graphs [Kovar02] have been introduced to organize motion capture data into graphs whose nodes are poses and whose links represent all the possible transitions. Hence, two poses are linked if they are close to each other.
Several works have been carried out using such motion graphs to deal with various situations, such as displacing a virtual boxer that has to punch a target specified by the user [Lee04]. However, taking various targets into account leads to huge databases composed of thousands of motion clips. Precomputation is very long and relies on data that are adapted to one given skeleton and limited to the recorded targets.
Several recent works have proposed to search a database for a motion that satisfies a set of constraints or descriptions. For example, motion templates were defined to associate a synthetic description with a clip [Muller05]. It is then possible to query the database for the motions that best correspond to a given description [Muller06]. For example, a user can query for motions that involve a fast motion of the arm in the forward direction; the resulting subset of motions may contain both throws and punches, which both correspond to this description. However, the resulting motions must still be adapted in order to accurately satisfy the requirements of the animation: the size of the skeleton and the constraints imposed on some body parts.
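As an illustration of this kind of description-based retrieval, here is a minimal sketch (hypothetical names and data layout, not the [Muller05, Muller06] implementation) that ranks clips by how many of their frames satisfy a boolean feature query:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    name: str
    features: list[dict]  # one boolean feature vector per frame

def match_score(clip: Clip, query: dict) -> float:
    """Fraction of frames satisfying every (feature, value) pair of the query."""
    ok = sum(all(f.get(k) == v for k, v in query.items()) for f in clip.features)
    return ok / len(clip.features)

def retrieve(database: list[Clip], query: dict, top_k: int = 5) -> list[Clip]:
    """Return the clips that best correspond to the description."""
    return sorted(database, key=lambda c: match_score(c, query), reverse=True)[:top_k]

# A query such as {"right_arm_fast_forward": True} may return both throws
# and punches, which then still need geometric adaptation.
```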
The main challenge here is to combine the above motion retrieval approaches with algorithms capable of modifying the resulting sequence to accurately deal with various situations. This is the main topic addressed in this proposal.
INRIA has solid experience in motion retargeting and adaptation to kinematic constraints. First, we have defined a morphology-independent representation of motion that allows retargeting clips very rapidly to new characters [Kulpa05a]. Instead of storing joint angles, this representation mixes Cartesian and orientation data. These data are divided by the actor's anthropometric sizes, leading to dimensionless data that can easily be scaled to the dimensions of new characters. Based on this representation, we have also designed algorithms to solve kinematic and kinetic (control of the center-of-mass position) constraints in real time for hundreds of characters at 30 Hz on a common computer [Kulpa05b].
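A minimal sketch of the normalization idea behind this representation (illustrative only, with made-up segment lengths; this is not the MKM or [Kulpa05a] code):

```python
import numpy as np

def normalize(trajectory: np.ndarray, actor_limb_length: float) -> np.ndarray:
    """trajectory: (frames, 3) Cartesian positions of a limb extremity relative
    to its root joint; dividing by the performer's segment length yields
    dimensionless data."""
    return trajectory / actor_limb_length

def retarget(dimensionless: np.ndarray, target_limb_length: float) -> np.ndarray:
    """Rescale the dimensionless trajectory to a new character's morphology."""
    return dimensionless * target_limb_length

wrist = np.random.rand(120, 3)                    # a captured wrist trajectory
adapted = retarget(normalize(wrist, 0.62), 0.54)  # shorter-armed character
```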
In real-time animation, characters generally do not use a unique motion and have to compose several different actions, such as displacing, grasping, kicking, manipulating tools… As a consequence, an animation framework should be able to blend several motions together. We have proposed a new method to synchronize motions [Menardais04a] and to blend them in a real-time framework. The user only has to specify the weight associated with each motion at each time step and let the system recalculate these weights to ensure coherence between the motions [Menardais04b].
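The sketch below illustrates this weight-recalculation principle (a simplified stand-in, not the actual [Menardais04b] algorithm): user weights are masked by motion compatibility and renormalized before blending synchronized poses.

```python
import numpy as np

def renormalize(weights: np.ndarray, active: np.ndarray) -> np.ndarray:
    """weights: (n_motions,) user-specified weights at one time step;
    active: boolean mask of the motions that are currently compatible."""
    w = np.clip(weights, 0.0, None) * active
    total = w.sum()
    return w / total if total > 0 else active / max(active.sum(), 1)

def blend(poses: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """poses: (n_motions, n_dofs) synchronized poses; returns their weighted mix."""
    return weights @ poses
```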
All the above works have been gathered into a common animation engine named MKM, for "Management of Kinematic Motions" (www.irisa.fr/siames/MKM), which has been evaluated and used by several companies in video games and for animating workers in virtual plants [Multon07]. However, the link between the behavioral model and MKM generally has to be totally redesigned depending on the application. One of the problems is the automatic selection of motions in the database before MKM tries to blend them and adapt the result to the situation. Motion retrieval has been explored in the State Key Lab CAD&CG of Zhejiang University.
In parallel with these methods based on kinematic constraint solving, we have also developed a biomechanical model of the human body and a method to verify whether a modified motion is physically valid or not [Pronost06T]. The problem is then to guide the modifications applied to the motion so that it satisfies physical laws.
In order to make motion capture widely available, motion capture data needs to be made reusable. This means that we may create the needed motions by reusing pre-recorded motion capture data [Geng03]. Furthermore, with the increased availability of motion capture data and motion editing techniques, there is currently a trend toward creating the required motion by piecing together example motions from a database [Kovar02]. This alternative approach provides a relatively cheap and time-saving way to quickly obtain high-quality motion data for animating creatures and characters.
A motion database is the basis for motion reuse. The major weakness of motion capture data is its lack of structure and adaptability.
A typical strategy for organizing motion data is based on directed graphs. Rose et al. employ "verb graphs", in which the nodes represent the verbs and the arcs represent transitions between verbs [Rose98]. The verb graph acts as the glue to assemble verbs (defined as sets of example motions) and their adverbs into a runtime data structure providing seamless transitions from verb to verb for the simulated figures within an interactive runtime system. Arikan & Forsyth present a similar framework that generates human motions by cutting and pasting motion capture data [Arikan02]. The collection of motion sequences is represented as a directed graph: each frame is a node, and there is an edge from every frame to every frame that could follow it in an acceptable splice; the nodes (frames) belonging to the same motion sequence are then collapsed together. Kovar et al. construct a directed graph called a motion graph that encapsulates the connections among the clips of the database [Kovar02]. In a motion graph, edges contain either pieces of original motion data or automatically generated transitions, so all edges correspond to clips of motion; nodes serve as choice points connecting these clips, i.e., each outgoing edge is potentially the successor of any incoming edge. New motion can be generated simply by building walks on the graph.
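A minimal motion-graph sketch in the spirit of [Kovar02] (deliberately simplified: a plain Euclidean pose distance replaces the original point-cloud metric, and transitions are not blended):

```python
import random
import numpy as np

def build_graph(poses: np.ndarray, threshold: float) -> dict[int, list[int]]:
    """poses: (n, n_dofs). Keep the original frame-to-frame continuity and
    add a transition edge i -> j whenever pose j is close enough to pose i."""
    n = len(poses)
    graph = {i: ([i + 1] if i + 1 < n else []) for i in range(n)}
    for i in range(n):
        for j in range(n):
            if j != i + 1 and np.linalg.norm(poses[i] - poses[j]) < threshold:
                graph[i].append(j)
    return graph

def random_walk(graph: dict[int, list[int]], start: int, length: int) -> list[int]:
    """New motion = a walk on the graph, here chosen at random."""
    path = [start]
    while len(path) < length and graph[path[-1]]:
        path.append(random.choice(graph[path[-1]]))
    return path
```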
Yu et al., at the State Key Lab, implemented a framework that allows the user to retrieve motions via Labanotation [Yu05]. For each motion clip in the library, a corresponding Labanotation sequence is generated as an additional motion property, as shown in Figure 1.1.
Figure 1.1: Example of editing based on motion retrieval. The upper part of the figure shows the query Laban sequence and its corresponding motion; the lower part shows the resulting matched motion clip from motion retrieval and its corresponding Laban sequence.
Sketch drawing is an intuitive and comprehensive means of conveying movement ideas in character animation. Davis et al. provide a simple sketching interface for articulated-figure animation: the user draws the skeleton on top of the 2D sketch, the system constructs the set of poses that exactly match the drawing, and the user can guide the system to the desired character pose [Davis03]. Thorne et al. focused on high-level motions and developed cursive motion notations that can be used to draw motions: a desired motion is created for the character by sketching a gesture such as a continuous sequence of lines, arcs, and loops [Thorne04]. Li et al., at the State Key Lab, proposed a novel sketch-based approach to assist the authoring and choreographing of Kungfu motions at the early stage of animation creation [Li06]. Given two human-figure sketches corresponding to the initial and closing postures of a Kungfu form, plus trajectory drawings for specific moving joints, MotionMaster can directly rapid-prototype a realistic 3D motion sequence by sketch-based motion retrieval and refinement over a motion database, as shown in Figure 1.2. The animators can then preview and evaluate the recovered motion sequence from any viewing angle. Once the 3D motion sequence has been associated with the 2D sketch drawing, the animator can also interactively and iteratively make changes to the 2D sketch, and the system automatically transfers these 2D changes to the 3D motion data of current interest. This greatly helps the animator focus on developing the movement idea during the evolutionary process of building motion data for articulated characters.
Figure 1.2: Multiple resulting motion segments matched to the input 2D
sketches.
Motion retrieval yields the motion that best corresponds to the desired situation but does not guarantee that the constraints are exactly satisfied. For example, the skeleton of the character should be the same as that of the actor to ensure that the choice and the animation are correct. In the same way, the selected motion should be corrected in order to accurately reach a target in space with a given body part. These limitations could be overcome by coupling motion retrieval with Bunraku's work on motion adaptation.
As described above, the approaches developed in Bunraku and the State Key Lab are complementary. On the one hand, Bunraku has developed methods to synchronize, blend and adapt motions according to the orders provided by a user; however, a controller is missing to automatically select the most convenient motions before they are blended and adapted accurately to the situation. On the other hand, the State Key Lab proposes methods to organize motion capture data and to retrieve the clip that best fits the current situation; however, it requires huge databases to deal with numerous different constraints, such as reaching points with body parts. We have therefore decided to combine the two approaches in order to give more autonomy to virtual characters. Hence, given a task that the virtual human has to achieve, the method should select and adapt the best clips that are supposed to solve the problem. A challenge is to use as few motions as possible, in order to lower the computation time spent searching the database and to decrease the size of the database in memory. The method should also be compatible with interactive applications where a virtual character is supposed to react immediately to orders provided unpredictably by users at any time.
In 2007, we combined the two methods, as described above, to solve the problem of making virtual humans interact with real subjects in virtual reality. The selected application was a fight between a real human, whose motions were captured in real time, and a virtual kung-fu master. The latter can be associated with several different geometric models with different anthropometric sizes. A supervisor (a human being) asks the virtual human to kick or punch the real subject. The subject is free to move in the virtual environment, so the kung-fu master has to select the best motions to follow and strike him, as shown in Figure 3.
Figure 3: Example of a fight between a real subject and a virtual kung-fu master.
A small database of fewer than 20 motions is used to animate the kung-fu master. The database is organized so as to facilitate the motion retrieval process: each motion is associated with metadata such as its semantics (punch, kick or displacement), and the database is organized in clusters to accelerate the search algorithm. During the search, the current pose of the kung-fu master is retargeted to the actor's skeleton (the one that performed the motions in the database). Indeed, punching a character placed 1.5 m away leads to different motions depending on whether the character is small or tall, so this problem has to be considered during motion retrieval. This is achieved by using the motion retargeting algorithm developed in MKM.
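The following sketch shows the structure of such a retrieval step (illustrative names and a naive nearest-pose criterion; the actual system's clustering and distance measure are not reproduced here):

```python
import numpy as np

class Clip:
    def __init__(self, label: str, poses: np.ndarray):
        self.label = label   # semantics: "punch", "kick" or "displacement"
        self.poses = poses   # (frames, n_dofs) motion data

def select_clip(clips: list[Clip], order: str, pose_on_actor: np.ndarray) -> Clip:
    """Pick, among the clips with the requested semantics, the one whose first
    pose is closest to the master's current pose, previously retargeted onto
    the database actor's skeleton (e.g. by MKM)."""
    candidates = [c for c in clips if c.label == order]
    return min(candidates,
               key=lambda c: float(np.linalg.norm(c.poses[0] - pose_on_actor)))
```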
Once a motion is selected, it has to be adapted to the exact position of the target and to the current pose of the kung-fu master. This task is performed by MKM. This work has been submitted to IEEE-VR, the most important conference in virtual reality; review decisions are expected on November 5, 2007. We will also give a presentation in French at the conference of the French Association for Virtual Reality (AFRV) at the end of October, with a full paper printed in the proceedings.
In the near future, we will
continue to associate the two methods in three main directions.
In order to create simpler interfaces, new approaches for 3D modeling have been developed, based on the human ability to quickly draw a global overview of an object. These approaches are commonly referred to as 3D sketching. Their principle is to infer the shape of a 3D model and to add details thanks to different editing operations (e.g., cutting, extrusion), all based on sketched 2D curves.
Teddy [Igarashi et al. 99], a precursor of 3D freeform modeling, introduced a gesture grammar to convert drawn curves into the corresponding modeling operations, accessible to non-expert users. Both the interactions and the geometric models have since been improved (e.g., [Karpenko et al. 02, Tai et al. 04, Schmidt et al. 05]), but unfortunately, changing the global shape of objects remains a challenging task.
For a flexible approach, new interfaces and interactions [Levet et al. 07] and new representations of models [Tai et al. 04, Levet and Granier 07] have to be developed. In this context, the partners are currently working on three research axes (from shorter- to longer-term research):
Based on the same assumptions, the partners also wish to extend the modeling to realistic appearance and expressive shading design, taking into account cultural differences and similarities in order to provide better-adapted processes.
The partners have developed complementary experience in sketching for free-form modeling.
On one side, the State Key Lab of CAD & CG has worked on 3D representation using convolution surfaces [Tai et al. 04]. Such an approach is very well suited to producing surfaces with nice geometric properties, but it is limited in terms of the objects that can be generated.
On the INRIA side, the projects' experience lies in the development of new interaction tools [Kerautret et al. 05, Levet et al. 07]. Such approaches increase the range of surfaces that can be generated, but still have some problems with geometric quality [Levet and Granier 07].
By combining this knowledge, we are working on providing a sketching approach that is more robust while remaining general.
Over the past decade, Augmented Reality (AR) [Azuma et al. 2001], which aims to merge virtual objects into real scenes, has become an invaluable technique for a wide variety of applications. Augmented Video [Zhang et al. 2006] is an off-line AR technique for highly demanding applications such as film-making, television and environmental assessment, in which seamless composition is of essential importance. Seamless composition requires geometric, spatio-temporal and colorimetric coherency between virtual and real objects. Geometric coherency is ensured by recovering the camera parameters (trajectory, focal length) from the video sequence; the 3D model of the scene can then be reconstructed from dense depth maps. After this, we render the virtual objects' shadows and account for occlusions, while considering high-quality illumination effects of outdoor scenes.
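As a simple illustration of the occlusion part of this pipeline, the sketch below (assumed inputs: the real frame, its recovered dense depth map, and the virtual object's color and depth rendered through the recovered camera) composites the two layers with a per-pixel depth test:

```python
import numpy as np

def composite(real_rgb: np.ndarray, real_depth: np.ndarray,
              virt_rgb: np.ndarray, virt_depth: np.ndarray) -> np.ndarray:
    """All images share the same calibrated camera. A virtual pixel is kept
    only where the virtual surface is closer than the real one, so real
    objects correctly occlude the inserted virtual objects."""
    virtual_visible = virt_depth < real_depth          # (H, W) occlusion test
    return np.where(virtual_visible[..., None], virt_rgb, real_rgb)
```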
Virtual humans' navigation is first considered as a motion planning problem. Motion planning techniques and representations of 3D environments have been intensively studied in the Robotics field [Latombe 1991]. Two main classes of approaches can be distinguished: roadmap-based approaches and cell-decomposition-based approaches. Roadmap-based approaches capture the connectivity of the free space of a given environment into a network of paths. Paths are guaranteed to be collision-free with respect to the obstacles of a scene, and feasible according to the mechanical constraints of the considered system. Several techniques allow computing such a roadmap [Arikan, Chenney et al. 2001, Thomas and Donikian 2000, Bayazit, Lien et al. 2002, Choi et al. 2003, Pettré et al. 2003]. Roadmap-based techniques provide only an implicit representation of obstacles and may produce unrealistic results due to this lack of explicit obstacle information (shape and distance). Cell-decomposition-based approaches model the environment as a set of interconnected areas. In contrast to roadmap-based approaches, the solution to a query is a series of collision-free areas instead of a collision-free path. The solution thus contains more practical information, such as the distance to obstacles and the available surrounding free space, easing and improving the performance of reactive navigation processes that account for surrounding dynamic obstacles. Two main cell-decomposition techniques are to be distinguished: approximate decomposition [Kuffner 1998, Tecchia and Chrysanthou 2000, Shao and Terzopoulos 2005, Pettré et al. 2006, Bandi and Thalmann 1998] and exact decomposition [Kallmann et al. 2003, Lamarche 2004].
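Once a roadmap is built, answering a query reduces to a graph search. The sketch below is a generic Dijkstra query over a roadmap (the cited planners differ in how the roadmap itself is constructed, which is not shown here):

```python
import heapq

def shortest_path(roadmap: dict, start, goal):
    """roadmap: node -> list of (neighbor, cost) pairs, where every edge is a
    collision-free local path weighted by its length."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in roadmap.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return None, float("inf")  # goal unreachable in the roadmap
```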
While path planning techniques provide a global solution path leading to a goal, dynamic obstacles along the path are taken into account using reactive navigation techniques. A reactive navigation process may rely on particle systems [Helbing 2000] or rule-based systems [Reynolds 1987], or it may be predictive [Paris 2007]. Reactive navigation is crucial for achieving realistic navigation, as is taking psychological factors into account in the decision process [Wiener et al 2003].
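A toy rule-based reactive step in the spirit of [Reynolds 1987] (illustrative only; the parameters and the repulsion rule are made up): steer toward the goal while adding a repulsive term for each nearby obstacle.

```python
import numpy as np

def reactive_velocity(pos: np.ndarray, goal: np.ndarray,
                      obstacles: list, radius: float = 2.0,
                      speed: float = 1.4) -> np.ndarray:
    """pos, goal and obstacles are 2D positions; returns the next velocity."""
    direction = goal - pos
    direction = direction / (np.linalg.norm(direction) + 1e-9)  # attraction
    for obstacle in obstacles:
        offset = pos - obstacle
        distance = np.linalg.norm(offset)
        if distance < radius:                                    # repulsion
            direction += (offset / (distance + 1e-9)) * (radius - distance) / radius
    return speed * direction / (np.linalg.norm(direction) + 1e-9)
```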
A short-term objective of the collaboration between Bunraku and CAD&CG is to develop techniques for integrating virtual humans into real video scenes. Each team has developed complementary expertise to address this problem. Our first goal is to consider the problem as a Computer Animation one: we want to provide the animation designer with a tool for populating a video scene with virtual humans, by controlling their goals, paths and the timing of their locomotion. When dealing with a high number of entities, it is not conceivable to define the motion of each single virtual human; we must therefore provide the designer with high-level control, while taking into account the need for interactivity. To reach this goal, a number of problems have to be addressed:
From the CAD&CG lab
point of view, the problem is to compute a 3D representation of the real world
from a video sequence to enable virtual object integration. From the Bunraku
point of view, the problem is to exploit this representation in order to enable
interaction between virtual and real objects in the final scene.
Figure 3.1 - Previous work on crowd design, simulation and rendering.
The Bunraku team is a key actor in the field of virtual human simulation. We have acquired expertise on several topics related to the collaboration objectives:
In the context of our collaboration with the State Key Lab of CAD&CG, we want to address the problem of interactively designing the motion of virtual humans in real scenes. The complexity of the motion planning problem needs to be addressed; here we benefit from our experience in crowd simulation. Figure 3.1 illustrates our previous work on crowd design, simulation and rendering for Computer Animation purposes. Using cell-decomposition techniques, we were able to populate large-scale environments with virtual humans from simple high-level directives.
Figure 3.2 - Some images from a real video sequence and the extracted 3D
model
The CAD&CG Lab team has a strong background in augmented video and augmented reality, and has solved related problems in high-quality augmented reality:
The key to the interaction between real scenes and a virtual crowd is sensing the motions of the real scene, such as those of human beings and cars, so that the virtual crowd can react correctly. The CAD&CG lab has tracked moving cars in video sequences and then integrated a virtual car into the sequences with the same pose, demonstrating high-quality car tracking; it has also successfully tracked a pedestrian in a video sequence. The CAD&CG lab has also developed a method to segment video sequences according to the motion of the scene instead of its intensity, which is necessary for generating masks of pedestrians and cars [An et al. 2006]. For dynamic scenes, such as a walking crowd, integrating virtual objects into real scenes requires higher-quality 3D models as well as clear edges and poses of dynamic obstacles.
Julien Pettré visited the State Key Lab of CAD&CG in May 2007. During his stay, he initiated the collaboration with associate professor Dr. X. Qin, and the objectives of the collaboration and a working plan were defined. Dr. Qin and one of her PhD students, Mr. Zhong Fan, came for a one-month stay in the Bunraku team in October 2007. During their stay we started implementing the required modules and validated the data flows between them. Bunraku is in charge of providing the tool for designing the motion of virtual humans in a given scene, with high-level control of trajectories and the ability to handle a large number of entities. CAD&CG is in charge of developing tools for extracting the camera parameters and the trajectories of real moving obstacles in the scene. First results and a publication are planned for the beginning of 2008.
The long-term objective of this collaboration is to develop tools for integrating virtual humans into real video scenes. Algorithm performance needs to be addressed in order to reach real-time integration of virtual humans in the scene and to apply our solutions in the Virtual Reality field. The movie industry is also targeted, which requires enhanced video-matting techniques in order to superpose real and virtual objects seamlessly. Realistic rendering techniques for virtual humans are required as well.
Next year, our objective is to continue on the three topics defined in this proposal. In order to reach the objectives of the project, Nicolas Pronost will spend a 9-month stay at the State Key Lab of CAD&CG in a post-doctoral position with Pr. Geng. We will apply for one joint PhD (the suggested student is Yijiang Zhang), co-directed by Pr. Qunsheng Peng and Stéphane Donikian, on the topic of Augmented Video (with Dr. X. Qin and J. Pettré as co-advisors), and one PhD co-directed by Hongxin Zhang and Pascal Guitton on Intuitive Modelling. In order to evaluate the first results obtained in the collaboration and to define the next steps of the work plan, a joint seminar will be organized at the end of 2008.
We also want to extend our collaboration to new topics, with other colleagues from the CAD & CG State Key Lab, for example Professor Zhigeng Pan, whose research topics are human behaviour modeling, virtual reality and sports. Here is a list of possible extensions to our current collaboration:
1. Co-funding
The collaboration has already benefited from funding from the French Consulate in China (10 k€ in 2006 and 6 k€ in 2007) and from the DREI through an honorable mention (accessit) in 2007.
We wish to submit two funding applications for a jointly supervised PhD within sandwich scholarship programs (such as the one offered by the French Embassy in China). Such a scholarship includes a monthly allowance for the student, social security coverage and civil liability insurance. The associated team would allow us to cover the housing and travel costs required for the joint supervision.
A Sino-French seminar to promote new collaborations between France and China will be organized by Pr. Q. Peng during 2008. This seminar benefits from the financial support of the French Consulate in Shanghai, estimated at 10 k€.
On the Chinese side, we plan to submit an application next year to obtain complementary funding for this collaboration from the NSFC and from the Chinese Ministry of Science and Technology. Our Chinese partner is confident about obtaining one of these two grants, since their laboratory is a State Key Lab and the number one in China in its field. Moreover, they have already obtained funding for Nicolas Pronost's post-doctoral stay.
PROSPECTIVE ESTIMATE OF CO-FUNDING
Organization | Amount
NSFC | 40-60 k€
OR Chinese Ministry of Science and Technology | 80-100 k€
French Consulate in China (2008), Sino-French seminar | 10 k€
French Embassy in China (2008-2011), joint PhD | 19 k€
Post-doctoral stay of N. Pronost | 4 k€
Total | 73-133 k€
2. Exchanges
Description of the exchanges planned in both directions: hosting of researchers from your partner and INRIA missions to your partner. Justify the usefulness and specific interest of the exchanges and the complementarity of the teams. Specify whether confirmed researchers or juniors (interns, PhD students, post-docs) are involved. Specify whether these exchanges take place in the context of scientific work, the organization of joint events, seminars, tutorials or schools, or training through research: indicate the students involved in the collaboration, give an estimate of their number on each side, and specify whether PhD theses (possibly under joint supervision) are planned (for each exchange, specify the duration and the provisional schedule).
Several exchanges are planned between the partner teams, by topic:
We also wish to fund the missions of researchers from the partner INRIA teams to take part in the Sino-French seminar organized in China.
ESTIMATE OF EXPENSES
 | Number | Hosting | Missions | Total
Confirmed researchers | 13 | 6 k€ | 23 k€ | 29 k€
Post-docs | 1 | | 11 k€ | 11 k€
PhD students | 3 | 10.5 k€ | 3 k€ | 13.5 k€
Interns | | | |
Other (specify): | | | |
Total | | | | 53.5 k€
- total co-funding | 22.5 k€
Requested "Associated Team" funding | 31 k€
Remarks or observations:
The mission costs include participation in the seminars to be held in China, allowing a reduction of the total organization costs.