Compressive Sensing of Videos
Short Course at CVPR 2012
We are in the throes of a "data crisis". We are building sensors of ever-increasing capability, be it in resolution, frame rate, or dimensionality. Simultaneously, large-scale deployment of such sensors is becoming increasingly common. Traditional sensing models do not extend easily to such scenarios; this is especially relevant in the context of high-speed imaging and multi-spectral imaging. We need a scalable theory of sensing. One such theory is that of compressive sensing.
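To make the idea concrete, here is a minimal sketch of the core compressive-sensing pipeline: a signal that is sparse in some basis is acquired through far fewer random measurements than its length, and then recovered by a sparse-recovery algorithm. The dimensions, the Gaussian measurement matrix, and the hand-rolled orthogonal matching pursuit below are illustrative choices, not the specific methods covered in the course.

```python
import numpy as np

rng = np.random.default_rng(0)

# A length-n signal with only k nonzero entries (sparse in the canonical basis),
# observed through m << n random linear measurements.
n, k, m = 128, 5, 64
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)

# Compressive measurements: y = Phi x, with a random Gaussian Phi.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

def omp(Phi, y, k):
    """Greedy sparse recovery via orthogonal matching pursuit."""
    residual, idx = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Re-fit the signal on the selected columns by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)
# With m comfortably larger than k*log(n), recovery succeeds with high probability.
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

The same template, with richer sparsifying bases and structured measurement operators, underlies the video and multi-spectral architectures discussed in the course.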
In this course, we will present an extensive overview of computational imaging and compressive sensing techniques, providing key ideas and insights into how they work. Participants will learn about topics spanning computer vision, computational photography, and compressive sensing. We hope to provide enough fundamentals to satisfy the technical specialist, as well as tools and software to aid graphics and vision researchers, including graduate students.
Richard G. Baraniuk
Aswin C. Sankaranarayanan
1. Introduction (pdf) (ppt)
2. A brief introduction to compressive sensing (pdf) (ppt)
3. Imaging systems for video compressive sensing (pdf) (ppt)
4. Compressive sensing beyond videos (pdf) (ppt)
Target audience: This tutorial has a broad target audience. One of its goals is to bridge the gap between researchers who work on video processing and researchers who work on video acquisition. There are several common themes between these two groups, and the tutorial will highlight these commonalities, paying special attention to methods that involve sparse representations. We believe this will create synergy and accelerate research progress in both areas. The tutorial is also designed to interest graduate students, faculty, and industrial researchers.
[to be updated]
Acknowledgement: The organizers gratefully acknowledge NSF awards CCF-1117939 and IIS-1116718.