Artist-generated clip-art images typically consist of a small number of distinct, uniformly colored regions with clear boundaries. Legacy artist-created images are often stored in low-resolution (100×100 px or less) anti-aliased raster form. Compared to anti-aliasing-free rasterization, anti-aliasing blurs inter-region boundaries and obscures the artist's intended region topology and color palette; at the same time, it better preserves subpixel details. Recovering the underlying artist-intended images from their low-resolution anti-aliased rasterizations can facilitate resolution-independent rendering, lossless vectorization, and other image processing applications. Unfortunately, while human observers can mentally deblur these low-resolution images and reconstruct region topology, color, and subpixel details, existing algorithms applicable to this task fail to produce outputs consistent with human expectations when presented with such images. We recover these viewer-perceived blur-free images at subpixel resolution, producing outputs where each input pixel is replaced by four corresponding (sub)pixels. Performing this task requires computing the size of the output image's color palette, generating the palette itself, and associating each pixel in the output with one of the colors in the palette. We obtain these desired output components by leveraging a combination of perceptual and domain priors and real-world data. We use readily available data to train a network that predicts, for each anti-aliased image, a low-blur approximation of the blur-free double-resolution output we seek. The images obtained at this stage are perceptually closer to the desired outputs but typically still contain hundreds of redundant, differently colored regions with fuzzy boundaries.
We convert these low-blur intermediate images into blur-free outputs consistent with viewer expectations using a discrete partitioning procedure guided by the characteristic properties of clip-art images, observations about the anti-aliasing process, and human perception of anti-aliased clip-art. This step dramatically reduces the size of the output color palettes and the region counts, bringing both in line with viewer expectations and enabling the image processing applications we target. We demonstrate the utility of our method by using our outputs for a number of image processing tasks, and validate it via extensive comparisons to prior art. In our comparative study, participants preferred our deblurred outputs over those produced by the best-performing alternative by a ratio of 75 to 8.5.
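The effect of the partitioning step on the palette can be sketched with a deliberately simplified stand-in (greedy tolerance-based merging, not the perception-guided procedure the paper describes; the function names and tolerance are hypothetical): many near-duplicate colors collapse to a few discrete palette entries, and every output pixel is then assigned one of them.

```python
# Hypothetical simplification of palette consolidation (NOT the paper's
# partitioning algorithm): greedily keep a color as a new palette entry
# only if no existing entry lies within a distance tolerance, then map
# every pixel to its nearest palette color.
def _dist(a, b):
    """Euclidean distance between two RGB colors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def consolidate_palette(pixels, tol=30.0):
    palette = []
    for p in pixels:
        if not any(_dist(p, q) <= tol for q in palette):
            palette.append(p)
    # Assign each pixel the nearest palette color.
    labeled = [min(palette, key=lambda q: _dist(p, q)) for p in pixels]
    return palette, labeled

# Several near-duplicate reds and near-whites collapse to two entries.
pixels = [(250, 5, 5), (255, 0, 0), (248, 10, 8),
          (255, 255, 255), (252, 250, 251)]
palette, labeled = consolidate_palette(pixels)
print(len(palette))  # -> 2
```

In the actual method, this consolidation is driven by clip-art priors and perceptual observations rather than a fixed distance threshold, but the outcome is analogous: a compact palette and clean, uniformly colored regions.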
Additional comparisons with prior methods
Comparisons with VectorMagic
Note: some of the input images shown are copyrighted by third parties and used with their permission. See the paper for details.
@article{Deblurring22,
  author  = {Yang, J. and Vining, N. and Kheradmand, S. and Carr, N. and Sigal, L. and Sheffer, A.},
  title   = {Subpixel Deblurring of Anti-Aliased Raster Clip-Art},
  journal = {Computer Graphics Forum},
  volume  = {42},
  number  = {2},
  pages   = {61--76},
  doi     = {https://doi.org/10.1111/cgf.14744},
  url     = {https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14744},
  eprint  = {https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14744},
  year    = {2023}
}
- Source Code (GitHub - coming soon!)
- Appendix
- Comparisons to alternative methods 1 - ESRGAN, Lightroom, VectorMagic
- Comparisons to alternative methods 2 - Kuwahara, MMPX, XBR
- Comparisons of vectorizations of our inputs and outputs
- Comparative user study summary
- Perception user study - anti-aliased clip-art palette
- Perception user study - anti-aliased clip-art region