
- University of British Columbia, Canada
- University of British Columbia, Canada
- NVIDIA, Canada and University of British Columbia, Canada
- University College London, United Kingdom and Adobe Research, United Kingdom
- University of British Columbia, Canada

Artist-drawn sketches only loosely conform to analytical models of perspective projection; the deviation of human-drawn perspective from analytical perspective models is persistent and well documented, but has yet to be algorithmically replicated. We encode this deviation between human and analytic perspectives as a continuous function in 3D space and develop a method to learn it. We seek deviation functions that (i) mimic artist deviation on our training data; (ii) generalize to other shapes; (iii) are consistent across different views of the same shape; and (iv) produce outputs that appear human-drawn. The natural data for learning this deviation is pairs of artist sketches of 3D shapes and best-matching analytical camera views of the same shapes. However, a core challenge in learning perspective deviation is the heterogeneity of human drawing choices, combined with relative data paucity (the datasets we rely on have only a few dozen training pairs). We sidestep this challenge by learning perspective deviation from an individual pair of an artist sketch of a 3D shape and the contours of the same shape rendered from a best-matching analytical camera view. We first match contours of the depicted shape to artist strokes, then learn a spatially continuous local perspective deviation function that modifies the camera perspective projecting the contours to their corresponding strokes. This function retains key geometric properties that artists strive to preserve when depicting 3D content, thus satisfying (i) and (iv) above. We generalize our method to alternative shapes and views (ii,iii) via a self-augmentation approach that algorithmically generates training data for nearby views, and enforces spatial smoothness and consistency across all views. We compare our results to potential alternatives, demonstrating the superiority of the proposed approach.
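To make the core idea concrete, here is a minimal, hedged sketch of a spatially continuous perspective-deviation field. This is an illustration only, not the paper's implementation: the Gaussian-RBF model, ridge fit, and all function names below are assumptions. It shows the general shape of the pipeline described above: project 3D contour points with an analytic pinhole camera, then learn a smooth 2D offset field over 3D space that moves those projections toward matched artist strokes.

```python
import numpy as np

# Illustrative sketch only: the paper learns a spatially continuous
# perspective-deviation function. The RBF parameterization, the ridge
# regression fit, and all names here are assumptions for demonstration.

def project(points3d, f=1.0):
    """Analytic pinhole projection of Nx3 points.

    Camera at the origin looking along +z; points must have z > 0.
    """
    p = np.asarray(points3d, dtype=float)
    return f * p[:, :2] / p[:, 2:3]

def rbf_features(points3d, centers, sigma=1.0):
    """Gaussian RBF features of Nx3 points w.r.t. Kx3 fixed centers."""
    d2 = ((points3d[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))  # N x K

def fit_deviation(contours3d, strokes2d, centers, sigma=1.0, lam=1e-3):
    """Fit a smooth 2D deviation field over 3D space.

    Least squares on the residual between the analytic projection of
    contour points and their matched artist-stroke positions; the ridge
    term lam keeps the field smooth and the system well-conditioned.
    """
    Phi = rbf_features(contours3d, centers, sigma)   # N x K
    target = strokes2d - project(contours3d)         # N x 2 residuals
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ target)        # K x 2 weights

def apply_deviation(points3d, centers, W, sigma=1.0):
    """Deviated ('human-like') projection: analytic + learned offset."""
    return project(points3d) + rbf_features(points3d, centers, sigma) @ W
```

Because the deviation is a function of 3D position rather than of individual strokes, the same fitted field can be evaluated on contour points of a different shape or a nearby view, which is the property the self-augmentation step exploits.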

More results.

Applying learned perspective to new shapes.
```bibtex
@inproceedings{yang25capturinghumanperspective,
  author    = {Yang, Jinfan and Foord-Kelcey, Leo and Takikawa, Suzuran and Vining, Nicholas and Mitra, Niloy and Sheffer, Alla},
  title     = {Capturing Non-Linear Human Perspective in Line Drawings},
  booktitle = {ACM SIGGRAPH Asia 2025 Conference Proceedings},
  series    = {SIGGRAPH Asia Conference Papers '25},
  year      = {2025},
  numpages  = {11}
}
```