Sparsity priors and boosting for learning localized distributed feature representations

ID
TR-2010-04
Authors
B. Chen, K. Swersky, B. Marlin, and N. de Freitas
Publishing date
March 29, 2010
Length
18 pages
Abstract
This technical report presents a study of methods for learning sparse codes and localized features from data. In the context of this study, we propose a new prior for generating sparse image codes with low-energy, localized features. The experiments show that with this prior, it is possible to encode the model with significantly fewer bits without affecting accuracy. The report also introduces a boosting method for learning the structure and parameters of sparse coding models. The new methods are compared to several existing sparse coding techniques on two tasks: reconstruction of natural image patches and self-taught learning. The experiments examine the effect of structural choices, priors, and dataset size on model size and performance. Interestingly, we discover that, for sparse coding, it is possible to obtain more compact models without sacrificing reconstruction accuracy simply by increasing the dataset size.
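For readers unfamiliar with the basic setup the abstract refers to, the following is a minimal sparse coding sketch, not the report's model: it infers a code s for a signal x under a fixed dictionary D by minimizing ||x - Ds||^2 + lam*||s||_1 with iterative soft-thresholding (ISTA). The plain L1 penalty stands in for a generic sparsity prior; the localized, low-energy prior and the boosting procedure studied in the report are more structured than this. All names and parameter values here are illustrative assumptions.

```python
# Minimal sparse coding sketch (illustrative only, not the report's method).
# Solves: min_s ||x - D s||^2 + lam * ||s||_1 via ISTA.
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Infer a sparse code for x given a dictionary D (columns are features)."""
    s = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, ord=2) ** 2   # step size = 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = D.T @ (D @ s - x)                 # gradient of the reconstruction term
        s = soft_threshold(s - step * grad, step * lam)
    return s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 128))           # hypothetical: 64-dim patches, 128 features
    true_s = rng.standard_normal(128) * (rng.random(128) < 0.05)  # genuinely sparse signal
    x = D @ true_s
    s = sparse_code(x, D)
    print("nonzeros:", np.count_nonzero(s),
          "reconstruction error:", np.linalg.norm(x - D @ s))
```

In this toy setting the code becomes sparse because the L1 term zeroes out coefficients whose reconstruction benefit is below the threshold; the report's prior additionally encourages the learned features themselves to be localized and low-energy, which is what permits the more compact encodings described in the abstract.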