Vincent Auvray - Contributions
A quick overview of transparent motion estimation
Although the problem of transparent motion estimation is very specific, it has interested a small but dynamic community of researchers.
Most of them do not focus on medical images but consider transparency phenomena that can appear in video sequences: reflections
on mirrors, motion behind translucent surfaces, etc.
Layer model.
Transparency example in videos.
We distinguish three main approaches:
Partial adaptation of classical motion estimators. Transparency in video sequences can sometimes be addressed
by methods that do not model it specifically. When the phenomenon is marginal and deviates only slightly from the
brightness conservation assumption, sufficiently robust estimation methods can give reliable estimates (Black and Anandan,
The robust estimation of multiple motions: parametric and piecewise-smooth flow fields; Irani and Peleg,
Motion analysis for image enhancement: resolution, occlusion and transparency).
However, these frameworks only work on particular transparent images and cannot be expected to handle general transparent motion.
Plane detection in the 3D Fourier space. Shizawa and Mase (Principle
of superposition: a common computational framework for analysis of multiple motions) noticed that the 3D Fourier
transform of a transparent sequence whose layers undergo constant translations is concentrated on distinct planes, one
for each layer. The orientation of these planes gives the corresponding motions, and their values theoretically allow
the different layers to be reconstructed.
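To make the plane property concrete: a single layer translating with constant velocity w = (wx, wy) only has spatio-temporal frequencies (fx, fy, ft) satisfying

wx fx + wy fy + ft = 0,

i.e. it lies on a plane through the origin with normal (wx, wy, 1). A two-layer transparent sequence is the sum of two such contributions, so detecting the two planes recovers both motions.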
Many methods have been proposed to detect those planes. Let us mention Pingault and Pellerin
(Motion estimation of transparent objects in the frequency domain) or Yu, Sommer
and Daniilidis (Multiple motion analysis in spatial domain or in spectral domain?).
However, these frameworks have the major disadvantage of requiring the transparent motions to remain constant over a dozen frames. At
video frame rates this amounts to half a second, that is to say half a cardiac cycle, so the assumption does not hold on medical
image sequences.
Transparent motion estimation in the direct space. Pingault and Pellerin (
Optical flow constraint equation extended to transparency) established an equivalent of the brightness conservation
assumption for transparent images that only requires the motions to be constant over three successive frames (in the case of two transparent
layers).
Consider three successive frames of a sequence I(p,t) made of two layers moving with velocities
u(p,t) and v(p,t), where p is the pixel position and t the time instant, and assume the motions are constant between t-1 and t+1.
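Each layer then satisfies brightness conservation along its own trajectory, and the cross terms cancel when the three frames are combined, which yields the extended constraint (written here with the notation above):

I(p, t-1) - I(p + u, t) - I(p + v, t) + I(p + u + v, t+1) = 0.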
Many classical motion estimation methods have been adapted to the transparent case by substituting this equation for
the brightness conservation assumption: block-matching techniques (Stuke, Aach, Mota and Barth,
Estimation of multiple motions using block-matching and random Markov fields),
estimation of motions on a B-spline basis (Pingault, Bruno, Pellerin, A robust
multi-scale B-spline function decomposition for estimating motion transparency) or dense motion field estimation
in a Markovian formalism (Stuke, Aach, Mota and Barth,
Estimation of multiple motions using block-matching and random Markov fields).
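As an illustration of how this constraint can replace the usual matching criterion, here is a minimal block-matching sketch in Python/NumPy (not code from the cited papers; the function and parameter names are ours, and blocks are assumed to lie far enough from the image borders):

```python
import numpy as np

def transparent_block_match(I_prev, I_cur, I_next, top, left, block=16, search=4):
    """Exhaustive search for the two integer motions (u, v) of a block, minimizing
    the transparent displaced frame difference
        I(p, t-1) - I(p+u, t) - I(p+v, t) + I(p+u+v, t+1)
    summed over the block. Assumes the block lies at least 2*search pixels
    away from the image borders."""
    ref = I_prev[top:top+block, left:left+block]                  # I(p, t-1)
    best_cost, best_uv = np.inf, ((0, 0), (0, 0))
    rng = range(-search, search + 1)
    for uy in rng:
        for ux in rng:
            I_u = I_cur[top+uy:top+uy+block, left+ux:left+ux+block]           # I(p+u, t)
            for vy in rng:
                for vx in rng:
                    I_v = I_cur[top+vy:top+vy+block, left+vx:left+vx+block]   # I(p+v, t)
                    I_uv = I_next[top+uy+vy:top+uy+vy+block,
                                  left+ux+vx:left+ux+vx+block]                # I(p+u+v, t+1)
                    residual = ref - I_u - I_v + I_uv
                    cost = float(np.sum(residual ** 2))
                    if cost < best_cost:
                        best_cost, best_uv = cost, ((uy, ux), (vy, vx))
    return best_uv, best_cost
```

The quadruple search makes the cost grow quickly with the search range, which motivates the more constrained, parametric formulations discussed below.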
Our contributions belong to this last class of approaches. Since our motion estimation targets fluoroscopic image sequences,
our priorities are first robustness to noise, and then a limited computational cost (the final denoising filter
would need to run in real time).
Our contributions in the bitransparent configuration
As a first step, we developed a synthetic image generation process that follows the physics of X-ray
image formation. This allows us to compare our estimates to a known ground truth.
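A minimal sketch of such a generator is given below (purely illustrative, not our actual simulation code; the names, the purely translational motions and the Gaussian noise model are simplifying assumptions). The key point is that X-ray attenuation is multiplicative (Beer-Lambert law), so the layers become additive after a log transform:

```python
import numpy as np

def synthetic_bitransparent_sequence(layer_a, layer_b, motion_a, motion_b,
                                     n_frames, noise_sigma=0.01):
    """Generate a toy bitransparent X-ray-like sequence.

    layer_a, layer_b   : 2D attenuation maps of the two layers
    motion_a, motion_b : integer (dy, dx) translations per frame
    Returns the frames in the additive (log) domain."""
    frames = []
    for t in range(n_frames):
        a = np.roll(layer_a, (t * motion_a[0], t * motion_a[1]), axis=(0, 1))
        b = np.roll(layer_b, (t * motion_b[0], t * motion_b[1]), axis=(0, 1))
        intensity = np.exp(-(a + b))                                   # Beer-Lambert law, I0 = 1
        intensity += noise_sigma * np.random.randn(*intensity.shape)   # crude noise model
        frames.append(-np.log(np.clip(intensity, 1e-6, None)))         # back to the additive domain
    return frames
```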
We have developed three generations of motion estimators dedicated to bitransparency, i.e. a transparency
made of two layers (and two layers only!) covering the whole image.
Our first contribution (presented at ICIP'05) did not reach a sufficient estimation accuracy
on clinical sequences with realistic noise.
This is why we designed a second algorithm, presented at MICCAI'05. To be as robust to noise
as possible, we constrained the problem as much as we could. Observation of many real exams convinced us that the anatomical
motions (heart beat, lung dilation, diaphragm translation) can be modeled with affine displacement fields.
This approach reaches accuracies of 0.6 and 2.8 pixels on images typical of diagnostic and interventional sequences respectively.
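For reference, an affine displacement field over a region is described by six parameters per layer; writing p = (x, y), one common parameterization (the exact form used in our papers may differ) is

u(x, y) = (a1 + a2 x + a3 y, a4 + a5 x + a6 y),

so the motion of each layer over a whole region is summarized by a handful of coefficients, which is what makes the estimation robust to strong noise.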
We finally proposed at ICIP'06 an evolution of this algorithm, reaching accuracies of 0.6
pixels on diagnostic images and 1.2 pixels on fluoroscopic images, and presented results on real examples.
Estimated motion fields on part of a cardiac exam in a bitransparent situation.
Our contributions in bidistributed transparency
Our contributions up to that point estimated the transparent motions of sequences showing a single configuration:
two layers with coherent motions covering the whole image. Most of the contributions on this topic address this
situation.
Real images, however, are more complex: they contain more than two layers overall, but rarely more than two locally. This is why
we introduced the concept of bidistributed transparency to describe images that can be segmented into areas
containing one or two layers.
We presented at ICIP'06 a joint transparent motion estimation and segmentation
method that reaches accuracies of 0.6 pixels on diagnostic images and 1.2 pixels on fluoroscopic images.
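The local decision underlying such a segmentation can be sketched as follows (a toy illustration, not our actual algorithm; the candidate motions are assumed to have been estimated beforehand, and the names and thresholds are hypothetical):

```python
import numpy as np

def label_block(I_prev, I_cur, I_next, top, left, w, u, v, block=16, factor=1.5):
    """Label a block as containing one or two transparent layers.

    w    : candidate (dy, dx) motion for the single-layer hypothesis
    u, v : candidate motions of the two layers for the bitransparent hypothesis
    Assumes integer motions and a block far enough from the image borders."""
    ref = I_prev[top:top+block, left:left+block]                  # I(p, t-1)
    crop = lambda img, d: img[top+d[0]:top+d[0]+block, left+d[1]:left+d[1]+block]

    cost_one = np.mean((ref - crop(I_cur, w)) ** 2)               # classical DFD
    uv = (u[0] + v[0], u[1] + v[1])
    cost_two = np.mean((ref - crop(I_cur, u) - crop(I_cur, v)
                        + crop(I_next, uv)) ** 2)                 # transparent DFD

    # keep the simpler model unless the transparent one is clearly better
    return "two layers" if factor * cost_two < cost_one else "one layer"
```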
Two images from a fluoroscopic sequence, the estimated motions (the field corresponding to the background is
zero and thus invisible), and the resulting segmentation.
Processing of a video sequence in a bidistributed transparency situation. Top left: the segmentation into layers;
top right: the estimated motions; bottom: the compensated difference images.
We also present a demo showing the different stages of the motion estimation.
Application to denoising
We also need to know how to use this motion information to denoise our sequences. We proposed an original framework to do so
at MICCAI'05. It yielded a very disappointing asymptotic denoising of 20%.
We have therefore developed so-called hybrid filters, which decide locally between different possible motion compensations (a toy sketch of this decision follows the list):
Transparent motion compensation when both layers are textured.
Compensation of the motion of the more textured layer in every other case (only one layer is textured, or both layers are homogeneous). This way,
we preserve the useful information without impairing the achievable denoising power.
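Here is a minimal sketch of one step of such a hybrid temporal filter (illustrative only; the compensated predictions and texture measures are assumed to be provided by the motion estimator, and all names and thresholds are hypothetical):

```python
import numpy as np

def hybrid_temporal_step(I_t, pred_transparent, pred_best_layer,
                         texture_a, texture_b, alpha=0.5, texture_thresh=5.0):
    """One recursive temporal filtering step with a local choice of compensation.

    pred_transparent : previous filtered frame compensated with both layer motions
    pred_best_layer  : previous filtered frame compensated with the motion of the
                       more textured layer only
    texture_a/b      : per-pixel texture measures of the two layers"""
    both_textured = (texture_a > texture_thresh) & (texture_b > texture_thresh)
    prediction = np.where(both_textured, pred_transparent, pred_best_layer)
    # first-order recursive (exponential) temporal filter
    return alpha * I_t + (1.0 - alpha) * prediction
```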
This approach can be used in purely temporal denoising filters as well as in spatio-temporal ones. In the former case, we achieve
a noise reduction of about 50% (in standard deviation) without blurring the useful information (submitted to ISBI'07).
Fluoroscopic sequence processed with the temporal hybrid filter (left) and a classical adaptive temporal filter (right).
The heart is much better contrasted on the left for an equivalent denoising.