When dealing with novel 3D visual data modalities, conventional compression approaches quickly reach their limits, mostly for two reasons. First, these data are often consumed in such a way that only a small part of them is viewed at any given time. Predictive coding, a cornerstone of conventional methods, forces the transmission of the entire data set even though a significant part of it is useless for the user’s experience. I will present our contributions to this “compression with random access” problem. In short, we have shown, both theoretically and in practice, that replacing predictive coding with our proposed incremental compression makes it possible to send only what is needed without losing compression efficiency.
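To make the random-access argument concrete, here is a minimal, purely illustrative sketch (not the codec developed in this work) comparing the rate needed to serve a single requested block under predictive coding, where decoding a block requires all of its predecessors, and under a block-wise random-access scheme that pays a small, assumed rate overhead. All numbers and function names are hypothetical.

```python
# Hypothetical illustration: cost of serving a user request under
# (a) predictive coding over the whole sequence and
# (b) block-wise coding with random access.

def bits_predictive(block_bits, requested):
    # With predictive coding, each block depends on all previous ones,
    # so serving any request means sending everything up to it.
    last = max(requested)
    return sum(block_bits[: last + 1])

def bits_random_access(block_bits, requested, overhead=1.05):
    # With random access, only the requested blocks are sent,
    # at the price of a small rate overhead (an assumed 5% here).
    return overhead * sum(block_bits[i] for i in requested)

block_bits = [1200, 900, 950, 1100, 870, 1020]   # toy per-block rates
requested = [4]                                   # user views block 4 only

print(bits_predictive(block_bits, requested))     # 5020 bits sent
print(bits_random_access(block_bits, requested))  # 913.5 bits sent
```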
Second, 3D data often live on irregular topologies, for which the conventional compression tools designed for Euclidean domains are no longer defined. We have proposed several solutions, building on recent advances in graph signal processing, to rebuild efficient compression/processing pipelines for 3D data such as omnidirectional images, light fields, 3D meshes, etc.
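As a hint of how graph signal processing replaces Euclidean transforms, the sketch below builds a graph Fourier basis from the combinatorial Laplacian of a small weighted graph and keeps only the largest transform coefficients. This is a generic illustration of transform coding on a graph, not the specific pipelines developed in this work; the toy graph, signal, and function names are assumptions made for the example.

```python
import numpy as np

def graph_laplacian(W):
    # Combinatorial Laplacian L = D - W of a weighted adjacency matrix W.
    return np.diag(W.sum(axis=1)) - W

def gft_compress(signal, W, keep):
    L = graph_laplacian(W)
    _, U = np.linalg.eigh(L)          # eigenvectors = graph Fourier basis
    coeffs = U.T @ signal             # graph Fourier transform
    idx = np.argsort(np.abs(coeffs))[:-keep]
    coeffs[idx] = 0.0                 # crude compression: drop small coefficients
    return U @ coeffs                 # reconstructed signal

# Toy 4-node graph (e.g., a tiny mesh patch) and a smooth signal on it.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
x = np.array([1.0, 1.1, 0.9, 1.05])
print(gft_compress(x, W, keep=2))
```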
*Reviewers:
- Frédéric DUFAUX, DR, CNRS, CentraleSupelec, Univ. Paris-Saclay
- Enrico MAGLI, Prof., Politecnico di Torino, Italy
- Marta MRAK, Prof., Queen Mary University of London & BBC, UK
*Examiners:
- Marc ANTONINI, DR, CNRS, Université Côte d’Azur
- Luce MORIN, Prof., INSA Rennes
- Dong TIAN, Senior Scientist, InterDigital, Princeton, NJ, USA