Distributed Visual Information Processing in Camera Sensor Networks

Monday 01 June 2009, 1300-1400
C60 InfoLab21

By Dr. Pier Luigi Dragotti, Imperial College London, UK

The paradigm of point-to-point communications is well established and dates back to the seminal work of Shannon. With recent advances in sensor network technology, a new paradigm for signal processing and communication is emerging, one that will have a dramatic impact on the way we acquire and process signals and the way we transport and reconstruct them.

In particular, since sensors are low-power devices and communication bandwidth is limited, there are new trade-offs between accuracy, compression, computation and transmission power that need to be investigated.

In this talk, we consider the case of camera sensor networks and discuss the entire signal processing pipeline: distributed data acquisition, distributed data compression and data reconstruction at the receiver. We first introduce the plenoptic function, which models the spatio-temporal structure of visual data well, and study its sampling. We then focus on distributed compression and propose a scheme that allows a flexible allocation of bit-rates amongst the sensors. Finally, we discuss the data fusion problem and present new results on image super-resolution and scene segmentation.
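For reference, the plenoptic function mentioned above is usually written in its standard seven-dimensional form; the notation below is the conventional one (following Adelson and Bergen) rather than notation taken from the talk itself:

    P = P(x, y, z, \theta, \phi, \lambda, t),

where (x, y, z) is the viewing position, (\theta, \phi) the viewing direction, \lambda the wavelength and t time. A camera network samples this function at a finite set of viewpoints and time instants, and the structure imposed by the scene geometry is what sampling and distributed compression schemes for visual data typically exploit.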

We also briefly analyze the fundamental trade-offs between reconstruction fidelity, the number and locations of the cameras, and the overall compression rate.

This is joint work with N. Gehrig (ICL), J. Berent (ICL), L. Baboulaz (ICL), M. Gastpar (UC Berkeley) and M. Vetterli (EPFL).