I completed my PhD thesis at IRISA/Inria Rennes in the Lagadic team, under the supervision of Alexandre Krupa. I worked in the field of medical robotics, where I proposed new solutions for ultrasound visual servoing applied to the positioning and tracking of ultrasound images.
November 2011: I received my PhD degree from Université de Rennes 1
Three years of study in electrical engineering and automation, including a mobility semester at the Ecole Polytechnique Fédérale de Lausanne (EPFL) in the "Robotique et Systèmes Autonomes" (Robotics and Autonomous Systems) Master's program.
October 2008: Engineering Master's degree in electrical engineering and automation.
Since January 2012, I have been working as a research engineer in the Lagadic team, where I continue my research on ultrasound visual servoing. In particular, I am now involved in the ANR PROSIT project, which aims at developing a lightweight, dedicated multi-DOF robotic system for tele-echography diagnostic applications.
An increasing number of image-based robotic systems are being developed to assist minimally invasive surgery procedures. Ultrasound (US) imaging devices are particularly well suited to such applications insofar as they provide real-time images during the operation. Moreover, in contrast to other modalities such as MRI or CT, US scanning is non-invasive, low cost and may be repeated as often as necessary.
In this context, visual servoing approaches allow direct closed-loop control of either the motion of the imaging device (eye-in-hand configuration) or the motion of a medical instrument (eye-to-hand configuration). Traditional visual servoing control schemes rely on vision data acquired from a camera mounted on a robotic system. In this case, the vision sensor provides a projection of the 3D world onto a 2D image, and the coordinates of a set of 2D geometric primitives can be used to control the 6 degrees of freedom (DOF) of the system. However, a 2D US transducer provides complete information in its image plane but none outside of this plane. Therefore, the interaction matrix relating the variation of the chosen visual features to the probe motion is very different and has to be modeled specifically.
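As an illustration, here is a minimal numerical sketch (Python/NumPy) of the classical visual servoing law v = -lambda L^+ (s - s*). The feature vector, interaction matrix and gain below are hypothetical placeholders; in practice, the US-specific interaction matrix discussed above would replace L.

    import numpy as np

    def velocity_command(s, s_star, L, gain=0.5):
        # Classical visual servoing law: v = -lambda * L^+ (s - s*)
        # s, s_star : current and desired feature vectors (k,)
        # L         : k x 6 interaction matrix relating feature variation
        #             to the 6-DOF velocity screw of the probe/instrument
        error = s - s_star
        return -gain * np.linalg.pinv(L) @ error

    # Hypothetical usage: 6 features controlling 6 DOF
    s = np.array([0.12, -0.03, 0.45, 0.0, 0.10, 0.20])
    s_star = np.zeros(6)
    L = np.eye(6)                          # placeholder interaction matrix
    v = velocity_command(s, s_star, L)     # [vx, vy, vz, wx, wy, wz]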
Principle
To allow a global convergence of the probe positioning, even in the case of roughly symmetric objects of interest, we propose to consider a multi-plane US probe. This probe provides three 2D US images along three orthogonal planes rigidly linked together. We define the optimal 2D features that can be extracted from these three planes and model the corresponding interaction matrix to control the six DOF of the system.
Application to a registration task
Image-to-image registration techniques are useful in the medical field to transfer information from preoperative data to an intraoperative image. The aim is to compute the homogeneous transformation that maps the coordinates of a pixel in the intraoperative US image frame to a voxel position expressed in the preoperative volume frame. Usual registration algorithms initialize the parameters of this transformation from artificial or anatomical landmarks identified in the preoperative 3D image and in a set of intraoperative US images. These parameters are then iteratively adjusted to optimize a similarity measure between both data sets with a Powell-Brent search method. In our approach, we instead solve the registration task with the visual servoing approach applied to a virtual multi-plane probe interacting with the preoperative volume; a minimal sketch of this idea is given after the figures below.
Multi-plane probe
Multi-modal registration
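The following is a minimal sketch (Python/NumPy) of the registration loop described above, in which a virtual multi-plane probe is moved in the preoperative volume by the visual servoing law until its slices match the intraoperative features. The function features_of_pose is a hypothetical user-supplied helper that reslices the volume along the three orthogonal planes at a given probe pose and stacks the extracted features; the actual feature choice and interaction matrix follow the modeling summarized above.

    import numpy as np
    from scipy.linalg import expm

    def skew(w):
        # 3x3 skew-symmetric matrix of a 3-vector
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def register(features_of_pose, s_star, L_pinv, T0,
                 gain=0.5, dt=0.1, n_iter=200, tol=1e-3):
        # features_of_pose : hypothetical helper, pose (4x4) -> stacked
        #                    features of the three orthogonal virtual slices
        # s_star           : features measured in the intraoperative images
        # L_pinv           : pseudo-inverse of the interaction matrix
        # T0               : initial guess of the virtual probe pose in the volume
        T = T0.copy()
        for _ in range(n_iter):
            s = features_of_pose(T)
            e = s - s_star
            if np.linalg.norm(e) < tol:
                break
            v = -gain * L_pinv @ e                     # 6-DOF velocity screw
            # First-order integration of the velocity screw into the pose
            T[:3, :3] = T[:3, :3] @ expm(skew(v[3:]) * dt)
            T[:3, 3] += T[:3, :3] @ (v[:3] * dt)
        return T   # converged pose = sought image-to-volume transformation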
We develop a new approach to US image-based robotic control. In order to avoid any segmentation process and to be robust to changes in the organ topology, the proposed control directly uses the B-scan image provided by the US probe as the visual feature. The interaction matrix enabling the control of the six DOF of the system from the image intensity is computed from the 3D image gradient of the US scan. As the estimation of this 3D gradient requires additional parallel images, we focus on tracking and local positioning tasks, where the interaction matrix can be pre-computed at the desired pose and used in the algorithm without being updated; a sketch of this pre-computation is given after the figures below.
Anthropomorphic arm equipped with a 2D US probe and a force sensor
Tracking task
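Below is a minimal sketch (Python/NumPy) of how such an intensity interaction matrix could be pre-computed at the desired pose from the 3D image gradient and then used unchanged during tracking. Variable names, units and sign/frame conventions are illustrative assumptions and may differ from the exact published modeling.

    import numpy as np

    def intensity_interaction_matrix(grad, coords):
        # grad   : (N, 3) 3D gradient [dI/dx, dI/dy, dI/dz] of each pixel,
        #          estimated at the desired pose from additional parallel slices
        # coords : (N, 3) pixel positions (x, y, 0), in metres, in the probe frame
        # Each row relates the intensity variation of one pixel to the 6-DOF
        # probe velocity (illustrative convention: L = [grad, coords x grad]).
        L = np.zeros((grad.shape[0], 6))
        L[:, :3] = grad                        # translational part
        L[:, 3:] = np.cross(coords, grad)      # rotational part
        return L

    def tracking_command(I, I_star, L_pinv, gain=0.5):
        # Velocity screw from the intensity error; L_pinv is pre-computed
        # at the desired pose and never updated during the task.
        return -gain * L_pinv @ (I - I_star)

    # Pre-computation at the desired pose (grad_star and coords are hypothetical):
    # L_pinv = np.linalg.pinv(intensity_interaction_matrix(grad_star, coords))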