Overview video summarizing our approach. We show that blur in videos can be significantly attenuated by learning how to aggregate information from nearby frames.


Abstract

Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result, the best-performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task that requires high-level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high-framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and we compare the quality of our results against a number of baselines.
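To make the data-generation and aggregation ideas above concrete, the sketch below (not the released code) illustrates the two steps: a blurry frame is approximated by averaging a window of consecutive high-FPS frames, which mimics the light integration of a single long exposure when the camera moves during the window, and the network input is formed by stacking neighboring blurry frames along the channel axis. The function names, the window size of 7, and the neighbor count of 2 are illustrative assumptions, not values taken from the paper.

import numpy as np

def synthesize_blurry_frame(high_fps_frames, center, window=7):
    """Approximate a long-exposure (blurry) frame by averaging `window`
    consecutive short-exposure high-FPS frames centered at `center`.
    Frames are float arrays of shape (H, W, 3) with values in [0, 1]."""
    half = window // 2
    clip = high_fps_frames[center - half : center + half + 1]
    return np.mean(clip, axis=0)

def make_training_pair(high_fps_frames, center, window=7):
    # Supervision pair: the synthetically blurred frame is the input,
    # and the sharp central frame of the window is the target.
    blurry = synthesize_blurry_frame(high_fps_frames, center, window)
    sharp = high_fps_frames[center]
    return blurry, sharp

def stack_neighbors(blurry_frames, center, num_neighbors=2):
    # Network input: the central blurry frame plus `num_neighbors`
    # frames on each side, concatenated along the channel axis, so the
    # CNN can learn where and how to aggregate across time.
    stack = blurry_frames[center - num_neighbors : center + num_neighbors + 1]
    return np.concatenate(stack, axis=-1)

This is only a minimal sketch of the training-data setup; the actual pipeline and network architecture are described in the paper and provided in the code release below.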


Paper

Paper [DeepVideoDeblurring_CVFOpenAccess.pdf (8MB)]
[DeepVideoDeblurring_arXivPreprint.pdf (8MB)]
Supplementary [DeepVideoDeblurring_Supplementary.pdf (10MB)]


Results

Results (videos only) [DeepVideoDeblurring_Results_Videos_Only.zip (383MB)]
Results (w/ original frames) [DeepVideoDeblurring_Results.zip (10.0GB)]


Dataset

Dataset [DeepVideoDeblurring_Dataset.zip (3.7GB)]
Dataset (High FPS) [DeepVideoDeblurring_Dataset_Original_High_FPS_Videos.zip (2.8GB)]


Code

Code [DeepVideoDeblurring_Code_GitHub]


All images are © IEEE 2017, reproduced here by permission of IEEE for your personal use. Not for redistribution.