Blind Deconvolution with Model Discrepancies

Jan Kotera, Filip Šroubek, Václav Šmídl; IEEE Transactions on Image Processing, 2017

Abstract

Blind deconvolution is a strongly ill-posed problem comprising simultaneous blur and image estimation. Recent advances in prior modeling and/or inference methodology have led to methods that started to perform reasonably well in real cases. However, as we show in our paper, they tend to fail if the convolution model is violated even in a small part of the image. Methods based on Variational Bayesian inference play a prominent role. In this work, we use this inference in combination with the same prior for noise, image, and blur that belongs to the family of independent non-identical Gaussian distributions, known as the Automatic Relevance Determination prior. We identify several important properties of this prior useful in blind deconvolution, namely, enforcing non-negativity of the blur kernel, favoring sharp images over blurred ones, and most importantly, handling non-Gaussian noise, which, as we demonstrate, is common in real scenarios. The presented method handles discrepancies in the convolution model and thus extends the applicability of blind deconvolution to real scenarios, such as photos blurred by camera motion and incorrect focus.

Short description

Our work solves the problem of finding the sharp image $u$ from a single blurred image $g$ modeled as

$$ g = h*u+\epsilon, \tag{1}$$

where $\epsilon$ is the observation noise. We conjecture that in real-world conditions the model (1) holds sufficiently well only in a certain part of the image, while in other parts it can be completely violated. Our work thus focuses on blind deconvolution sufficiently robust to severe non-Gaussian model errors.
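
To make the degradation model (1) concrete, below is a minimal Python/NumPy sketch (not the paper's MATLAB code) that simulates a blurred observation and adds a deliberate local model violation; the image, PSF, noise level, and the choice of clipping as the violation are all illustrative assumptions.

```python
# Illustrative sketch of the degradation model (1): g = h*u + epsilon,
# plus a local model violation (here: clipping of bright pixels) to mimic
# effects such as saturation in real photographs.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

u = rng.random((128, 128))            # stand-in for the sharp image
h = np.ones((7, 7)) / 49.0            # simple box-blur PSF, sums to one
sigma = 0.01                          # assumed Gaussian noise level

g = convolve2d(u, h, mode='same', boundary='symm')   # h * u
g += sigma * rng.standard_normal(g.shape)            # + epsilon

# Model violation in part of the image: clipped pixels no longer follow (1).
g = np.clip(g, 0.0, 0.8)
```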

Our method is based on the work of Tzikas et al., "Variational Bayesian sparse kernel-based blind image deconvolution with Student's-t priors," IEEE Transactions on Image Processing, 2009. We use the ARD prior for noise, image, and blur. In the noise case it has the form

$$ p(\epsilon) = \prod_i \mathcal{N}\left(\epsilon_i| 0,(\alpha\gamma_i)^{-1}\right) \propto \prod_i (\alpha\gamma_i)^{1/2}\exp\left(-\frac{\alpha\gamma_i}{2}(g_i-H_iu)^2\right), \tag{2} $$

where $\alpha$ is the precision (inverse variance) of the Gaussian noise present in the input and $\gamma_i$ captures the local model fidelity in pixel $i$. We iteratively estimate these parameters from the data using Variational Bayesian inference, and pixels with low data fidelity are effectively rejected from the blur estimation.
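
To illustrate how the weights $\gamma_i$ act, here is a hedged sketch of a generic ARD-style per-pixel update: the specific form $(a + 1/2)/(b + \tfrac{\alpha}{2} r_i^2)$ and the hyperparameters $a$, $b$ are textbook assumptions for illustration, not the paper's exact VB update rules.

```python
# Generic ARD-style weight update: pixels with large residuals r_i = g_i - (h*u)_i
# receive small gamma_i and are effectively excluded from the weighted data term
#   sum_i gamma_i * (g_i - (h*u)_i)^2
# used for blur estimation.  Constants a, b are hypothetical hyperparameters.
import numpy as np
from scipy.signal import convolve2d

def ard_weights(g, u, h, alpha, a=1e-3, b=1e-3):
    """Return per-pixel weights gamma that down-weight pixels violating g = h*u."""
    r = g - convolve2d(u, h, mode='same', boundary='symm')   # model residual
    return (a + 0.5) / (b + 0.5 * alpha * r**2)
```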

Some of our results from the paper or the supplementary material are presented on this page for more convenient viewing.

Paper and codes

Full paper
MATLAB code

BibTeX

@ARTICLE{7869370, 
	author={J. Kotera and V. Smidl and F. Sroubek}, 
	journal={IEEE Transactions on Image Processing}, 
	title={Blind Deconvolution with Model Discrepancies}, 
	year={2017},
	month={may}, 
	volume={26}, 
	number={5}, 
	pages={2533-2544}, 
	keywords={Bayes methods;Computational modeling;Convolution;Deconvolution;Estimation;Gaussian distribution;Kernel;Gaussian scale mixture;Variational Bayes;automatic relevance determination;blind deconvolution}, 
	doi={10.1109/TIP.2017.2676981}, 
	ISSN={1057-7149},}
		

Experimental results

[Image comparison gallery: Input | Ours-$\gamma$ | Ours-$\alpha\gamma$ | Tzikas09 | Pan16 | Xu10]

Test images (benchmark dataset)

Along with the paper and code we provide a dataset of test images with authentic motion blur, intended for the evaluation of blind deblurring algorithms. Each data file consists of a triplet $(u,h,g)$, where $u$ is the sharp image, $h$ is the blur PSF (measured to the best of our abilities), and $g$ is the blurred observation, which is a "valid" convolution $u*h$ up to noise and measurement errors.
Download
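
A hypothetical loading sketch is given below; the file format (MATLAB .mat), the file name, and the variable names 'u', 'h', 'g' are assumptions and should be adjusted to the actual contents of the downloaded archive.

```python
# Hypothetical example: load one (u, h, g) triplet and check how well the
# convolution model holds.  File name, format, and variable names are assumed.
import numpy as np
from scipy.io import loadmat
from scipy.signal import convolve2d

data = loadmat('triplet_01.mat')          # hypothetical file name
u, h, g = data['u'], data['h'], data['g'] # assumed variable names, grayscale assumed

# g should match h*u up to noise and measurement errors
# (boundary handling here is also an assumption).
residual = g - convolve2d(u, h, mode='same', boundary='symm')
print('RMS model error:', np.sqrt(np.mean(residual**2)))
```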