Most current deep-learning-based solutions for image restoration use feed-forward networks to learn a mapping from a corrupted image to the clean image. But whenever the type of degradation changes, the network architecture needs to be modified and the parameters need to be re-learned. Generative models, on the other hand, are known to be flexible. The main idea in this project is to use deep generative models to provide a task-agnostic and much more versatile solution to image restoration tasks by modeling the image prior distribution.
Compressive Image Recovery using Recurrent Generative Model
In proceedings of ICIP 2017, poster demo at ICCP 2017
[paper][code]
We propose to use a deep generative model, RIDE by Theis et al., as an image prior for compressive signal recovery. Since RIDE models long-range dependencies in images using spatial LSTMs, we are able to recover images better than other competing methods.
Abstract
Reconstruction of signals from compressively sensed measurements is an ill-posed problem. We leverage the recurrent generative model RIDE as an image prior for compressive image reconstruction. Recurrent networks can model long-range dependencies in images and hence are well suited to handle the global multiplexing involved in compressive imaging. We perform MAP inference with RIDE using back-propagation to the inputs together with a projected gradient method. We also propose an entropy-thresholding-based approach for better preservation of texture in the reconstructed images. Our approach yields superior reconstructions compared to recent global reconstruction approaches such as D-AMP and TVAL3 on both simulated and real data.
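The MAP inference described above can be sketched as projected gradient ascent: take gradient steps on the image log-prior (in the paper, this gradient comes from back-propagating through RIDE to the input pixels) and, after each step, project onto the affine set of images consistent with the compressive measurements. The sketch below is a minimal NumPy illustration in which a simple smoothness prior stands in for RIDE; the function names and the toy prior are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_prior(x):
    # Toy stand-in for the RIDE log-prior gradient: a smoothness prior
    # whose gradient pulls each pixel toward its 4-neighborhood average.
    g = np.zeros_like(x)
    g[:-1] += x[1:] - x[:-1]           # vertical neighbor differences
    g[1:] += x[:-1] - x[1:]
    g[:, :-1] += x[:, 1:] - x[:, :-1]  # horizontal neighbor differences
    g[:, 1:] += x[:, :-1] - x[:, 1:]
    return g

def project(x, Phi, y, PhiPhiT_inv):
    # Euclidean projection onto the affine set {x : Phi x = y}.
    v = x.reshape(-1)
    v = v + Phi.T @ (PhiPhiT_inv @ (y - Phi @ v))
    return v.reshape(x.shape)

def recover(y, Phi, shape, steps=200, lr=0.1):
    # Precompute (Phi Phi^T)^{-1} once for repeated projections.
    PhiPhiT_inv = np.linalg.inv(Phi @ Phi.T)
    x = project(np.zeros(shape), Phi, y, PhiPhiT_inv)  # feasible start
    for _ in range(steps):
        x = x + lr * grad_log_prior(x)       # ascend the log-prior
        x = project(x, Phi, y, PhiPhiT_inv)  # re-enforce measurements
    return x

# Simulated single-pixel-camera-style measurements at a 25% rate.
x_true = rng.random((16, 16))
n = x_true.size
Phi = rng.standard_normal((n // 4, n)) / np.sqrt(n)
y = Phi @ x_true.reshape(-1)
x_hat = recover(y, Phi, x_true.shape)
print(np.allclose(Phi @ x_hat.reshape(-1), y))  # recovered image satisfies the measurements
```

With RIDE in place of the toy prior, the gradient step would back-propagate the model's log-likelihood to the input image, which is what lets the long-range spatial dependencies learned by the spatial LSTM shape the reconstruction.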
Image Inpainting
[Figure: Original Image | Masked Image (80%) | During Gradient Ascent | Recovered Image]
Single Pixel Camera reconstruction
[Figure: Original Image | Initial Image | During Gradient Ascent | Recovered Image]
Real SPC reconstruction
The figure shows real reconstructions obtained from measurements of a Single Pixel Camera (SPC). As we can see, in challenging cases such as reconstruction from 15% measurements (bottom row), our method performs better than the other approaches.
Acknowledgments
This project is supported by the Qualcomm Innovation Fellowship, India 2016.