Example of our ToF reconstruction and comparison of methods on a real scene. The mesh geometry
is color-coded by surface normal (blue indicates left-facing surfaces, red right-facing ones).
The naive results are generated by the ToF camera's built-in software. Our method clearly improves
on them, reducing flying pixels and blurriness and suppressing noise in both the amplitude and depth images.
Continuous-wave time-of-flight (ToF) cameras show great promise as
low-cost depth image sensors in mobile applications. However, they
suffer from several challenges. Their limited illumination intensity
mandates large-numerical-aperture lenses, which yield a shallow depth
of field and make it difficult to capture scenes with large variations
in depth. Another shortcoming is the limited spatial resolution of
currently available ToF sensors.
In this paper we analyze the image formation model for blurred ToF images.
By working directly with the raw sensor measurements while
regularizing the recovered depth and amplitude images, we are able to
simultaneously deblur and super-resolve the output of
ToF cameras. Our method outperforms existing approaches on both
synthetic and real datasets. In the future, our algorithm should extend
easily to cameras that do not follow the cosine model of
continuous-wave sensors, as well as to the multi-frequency or multi-phase
imaging employed in more recent ToF cameras.
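As background for the cosine model mentioned above, a minimal sketch of the standard four-phase demodulation used by continuous-wave ToF sensors follows. It assumes the common convention of four correlation samples taken at phase offsets 0, π/2, π, and 3π/2; the function name and parameterization are illustrative, not taken from the paper.

```python
import numpy as np

def decode_cw_tof(c0, c1, c2, c3, mod_freq_hz):
    """Recover modulation amplitude and depth from four correlation
    samples c_i = B + A*cos(phi + i*pi/2) (standard cosine model)."""
    C = 299_792_458.0  # speed of light, m/s
    # Phase of the reflected modulation signal, wrapped to [0, 2*pi).
    phase = np.mod(np.arctan2(c3 - c1, c0 - c2), 2 * np.pi)
    # Amplitude of the modulation (signal strength at this pixel).
    amplitude = 0.5 * np.sqrt((c3 - c1) ** 2 + (c0 - c2) ** 2)
    # Depth: the light travels a round trip, hence the factor of 2
    # in the unambiguous range C / (2 * f); equivalently 4*pi below.
    depth = C * phase / (4 * np.pi * mod_freq_hz)
    return amplitude, depth
```

For a 20 MHz modulation frequency this decoding has an unambiguous range of about 7.5 m; depths beyond that wrap around, which is one motivation for the multi-frequency imaging noted above.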