### CPSC548 Graduate Directed Study (Jan–Apr, 2008)

This directed graduate study course was conducted under the supervision of Dr. Wolfgang Heidrich.

The primary goal of this study was to analyze the algorithm of Yuan et al. [1]. Blur-kernel estimation is important for computational photography problems that involve blurring, which can occur due to camera motion or defocus. Although the paper [1] addressed motion blur, this project investigated applying the algorithm to measuring defocus blur, particularly in the presence of an aperture filter. The algorithm was found to be robust on synthetic cases, but it often failed to recover the blur kernel from real test cases.

The Report can be found here.

### A very short description of the paper [1]

#### The problem

If the exposure time is too short, we get a sharp but noisy image (N); if the exposure time is long, we get a good signal-to-noise ratio but a blurry image (B) due to camera shake and object movement. These are the two extreme choices (N and B) – and neither is perfect. In fact, there can be a continuum of images from most noisy to most blurry – and if we are unlucky, all of them will be noisy, blurry, or both!

We want an image (I) that has a good signal-to-noise ratio but is also sharp – contradictory goals!

In short,

- N ≡ noisy I
- B ≡ blurred I
- we want I ≡ deblurred B ≡ denoised N
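To make the setup concrete, here is a small NumPy sketch that fabricates such a pair from a known image. The noise level and the 1×5 horizontal box kernel are arbitrary stand-ins for illustration, not the paper's imaging model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth sharp image I (random values as a stand-in).
I = rng.random((32, 32))

# N: short exposure -> sharp but noisy (additive Gaussian noise as a stand-in).
N = I + rng.normal(0.0, 0.1, I.shape)

# B: long exposure -> good SNR but blurred; a 1x5 horizontal box kernel
# stands in for camera-shake blur.
k = np.ones(5) / 5.0
Ipad = np.pad(I, ((0, 0), (2, 2)), mode="edge")
B = np.array([np.convolve(row, k, mode="valid") for row in Ipad])

print(I.shape, N.shape, B.shape)
```

N keeps the sharp structure of I but with noise on top; B has the noise averaged away but the structure smeared – exactly the two extremes described above.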

#### Mapping on to Deconvolution

The authors assumed the camera can only be translated – no rotation and no scaling. This assumption let them pose the problem as a deconvolution (by shift invariance of the kernel). Now they needed to compute the kernel and deconvolve the image!
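Shift invariance is the key: a uniform camera translation moves every pixel the same way, so it acts like convolution with a fixed kernel. A minimal NumPy sketch (the `conv2_same` helper is hypothetical, written only for this illustration):

```python
import numpy as np

def conv2_same(img, ker):
    """Minimal 'same'-size 2-D convolution with zero padding.
    A sketch for illustration, not the paper's implementation."""
    m, n = ker.shape
    pi, pj = m // 2, n // 2
    padded = np.pad(img, ((pi, pi), (pj, pj)))
    kf = ker[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i+m, j:j+n] * kf)
    return out

rng = np.random.default_rng(1)
I = rng.random((16, 16))

# A delta kernel at an offset models a one-pixel camera translation:
K = np.zeros((3, 3))
K[1, 2] = 1.0
Bshift = conv2_same(I, K)

# Every interior pixel becomes its left neighbour: the same shift everywhere.
# That uniformity is exactly why translation blur is a convolution.
print(np.allclose(Bshift[:, 1:], I[:, :-1]))  # True
```

A rotating or zooming camera moves different pixels by different amounts, so no single kernel describes it – which is why the translation-only assumption matters.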

#### Method

The authors took these two images and computed the camera-shake blur kernel (K) from them. Then they deconvolved B.

An important contribution of this paper is the observation that you can often capture two such extreme images (N and B) but not the optimal one; the authors showed how to use both of these images to come up with the 'optimal' image.
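As a toy illustration of why knowing K is enough to deblur B: in a noise-free synthetic case with circular (wrap-around) blur, B can be inverted directly in the Fourier domain. This naive inverse filter is NOT the paper's method (Yuan et al. use a far more noise-robust iterative scheme); it is only a sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
I = rng.random((32, 32))

# A tiny horizontal motion kernel, zero-padded to the image size.
K = np.zeros((32, 32))
K[0, :3] = 1.0 / 3.0

# Circular blur: B = F^-1( F(I) * F(K) ).
B = np.real(np.fft.ifft2(np.fft.fft2(I) * np.fft.fft2(K)))

# Naive inverse filter: I = F^-1( F(B) / F(K) ).
# Works here only because there is no noise and F(K) has no zeros.
I_rec = np.real(np.fft.ifft2(np.fft.fft2(B) / np.fft.fft2(K)))

print(np.max(np.abs(I_rec - I)))  # close to zero in this idealized case
```

With real noise, dividing by small values of F(K) amplifies the noise catastrophically, which is exactly why the paper needs both N and B rather than a plain inverse filter.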

### Explaining 'vectorization' of Kernel K

Note: In the discussion below, I is the noisy image, not the target image. No distinction is made, since this section is only intended to describe the vectorization concept.

When trying to estimate the kernel, the kernel pixels are the unknowns "x". Convolution is a linear combination of pixel values, so it can be written in matrix–vector form. If K is the kernel, of size m×m, then x would be (for row-major ordering):
[K11 K12 . . . K1m K21 K22 . . . K2m . . . Kmm]T
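For concreteness, here is that vectorization in NumPy (a trivial illustration; the column-major variant is what Matlab's `K(:)` produces, which matters for the sample code further down):

```python
import numpy as np

K = np.arange(1, 10).reshape(3, 3)   # [[1 2 3], [4 5 6], [7 8 9]]

x = K.flatten(order="C")             # row-major: [K11 K12 K13 K21 ...]
print(x)                             # [1 2 3 4 5 6 7 8 9]

x_col = K.flatten(order="F")         # column-major, like Matlab's K(:)
print(x_col)                         # [1 4 7 2 5 8 3 6 9]
```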

Now, the right-hand side has B, the vectorized form of b (the blurred image), which can again be in row- or column-major order.

Each row of Ax = B represents one equation. Look at an arbitrary one that has bij on the right-hand side:
Ar*x = bij

So each element of A is an image pixel value. Let Br (the r-th row of the column vector B) have the value bij. Since the operation is a convolution, you know which pixels of the original image I contribute to bij, and by how much. You don't know the kernel, but you do know which pixel of I gets multiplied by (say) Kpq to contribute to bij, for each p, q ∈ [1...m]. There is a very simple repetitive pattern in it.

In other words, Ar will contain the vectorized form of an m×m window of I centered at (i, j).

For example, if i = j = (m+1)/2 (assuming m is odd and 1-indexing), then Ar would be:
[ I11, I12, ... I1m, I21, I22, ..., I2m, ... Imm ]

If i = (m+1)/2 but j = (m+1)/2 + 1 instead, then Ar is:
[ I12, I13, ... I1(m+1), I22, I23, ..., I2(m+1), ... Im(m+1) ]

and so on.
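This window pattern can be checked in a few lines of NumPy (the `build_A` helper is hypothetical, written for this illustration; it uses row-major ordering throughout, matching the explanation above rather than the Matlab code below):

```python
import numpy as np

def build_A(I, m):
    """Row r of A holds the (unflipped) m x m window of zero-padded I
    centred at the pixel whose row-major index is r -- the pattern
    described above. A sketch, not the paper's implementation."""
    r, c = I.shape
    pad = (m - 1) // 2
    Ip = np.pad(I, pad)
    A = np.zeros((r * c, m * m))
    for i in range(r):
        for j in range(c):
            A[i * c + j, :] = Ip[i:i+m, j:j+m].flatten()
    return A

rng = np.random.default_rng(3)
I = rng.random((8, 8))

# With the identity kernel, b = I, so solving A x = b should recover K.
K = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float)
A = build_A(I, 3)
x, *_ = np.linalg.lstsq(A, I.flatten(), rcond=None)
print(np.round(x.reshape(3, 3), 6))
```

Here A is 64×9, so the least-squares system is heavily overdetermined, and for a generic image it has full column rank – which is why the kernel is recoverable at all.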

#### Sample (Matlab) code

Note that the explanation above uses row-major ordering, while the code here uses column-major – some food for thought :D

```matlab
%read image
I = double(imread('cameraman.tif'))/255;

%for now, assume we know I
b = I;  % no convolution
m = 3;  %size of the (square) kernel

%or actually blur b:
%K = [0 0 0; 0 1 0; 0 0 0]; K = K./sum(sum(K)); m = max(size(K));
%b = conv2(b, K);

%init
m2 = (m+1)/2;
[r c] = size(I);
B = b(:);  %gives column-major vectorization of blurred image b

%noise may be added to I here, to simulate a blurred/noisy pair:
%I = max(0, I + (rand(size(I)) - 0.5)*.02);  % 1-percent uniform random error

%pad I so the out-of-bounds error doesn't occur
Ip = zeros(r+m-1, c+m-1);  %assuming single color channel
Ip(m2+(0:r-1), m2+(0:c-1)) = I;

A = zeros(r*c, m*m);  %column-major to stay consistent with b
k = 0;
for j = 1:c
  disp(j); pause(.1);
  for i = 1:r
    k = k + 1;
    %cut out a portion of Ip and vectorize it
    Ak = Ip(i+(0:m-1), j+(0:m-1));
    A(k,:) = Ak(:)';
  end
end

%solve...
x = A\B;  %<-- not a good idea. the paper has a better approach ;-)

%transform into a matrix
kernel = reshape(x, m, m)
```

The "recovered" kernel should be equal (or close) to:

```matlab
[0 0 0; 0 1 0; 0 0 0]
```

Play around with other blur kernels to understand how it works.

### References

[1] L. Yuan, J. Sun, L. Quan and H.-Y. Shum, Image Deblurring with Blurred/Noisy Image Pairs, ACM SIGGRAPH, 2007. [PDF]