Will Harvey

I am a final-year PhD student at the University of British Columbia, supervised by Frank Wood. Previously I interned at Google DeepMind and completed an MEng in Engineering Science at the University of Oxford.

My research focuses on generative modeling, especially with diffusion models. I investigate applications both in the video domain and for problems with more explicit structural information.

Email  /  Google Scholar  /  CV

Refereed Publications
Trans-Dimensional Generative Modeling via Jump Diffusion Models
, William Harvey, , , ,
Spotlight at NeurIPS 2023


We generalise diffusion models to data with varying dimensionality, such as molecules with varying numbers of atoms, using a jump diffusion process to sample the dimensionality jointly with the state.

Graphically Structured Diffusion Models
, William Harvey,
Oral at ICML 2023


A framework for incorporating known problem structure into diffusion model design. Doing so is necessary to scale to large instances of problems including matrix factorisation and Sudoku solving.

Visual Chain-of-Thought Diffusion Models
William Harvey,
CVPR 2023 Workshop on Generative Models for Computer Vision, ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling


Conditional image diffusion models typically outperform unconditional diffusion models, and we show that the gap between them grows as more information is provided to condition on. We then present a simple method to close this gap, yielding improved unconditional and class-conditional image generation.

Flexible Diffusion Modeling of Long Videos
William Harvey, , , ,
NeurIPS 2022


We present a diffusion model for video. Given a few frames of context, it can generate videos over an hour long.

Conditional Image Generation by Conditioning Variational Auto-Encoders
William Harvey, ,
ICLR 2022


We present conditional VAEs which can be trained quickly by leveraging existing unconditional VAEs. The resulting models provide a more faithful representation of uncertainty than GAN-based approaches with similar training times.

Planning as Inference in Epidemiological Models
, , , , , William Harvey, , , , ,
Frontiers in Artificial Intelligence | Medicine and Public Health 2022


We demonstrate how existing software tools can be used to automate parts of infectious disease-control policy-making by performing inference in existing epidemiological dynamics models.

Attention for Inference Compilation
William Harvey*, *, , ,
SIMULTECH 2022


We present a transformer-based architecture for improved amortized inference in probabilistic programs with complex and stochastic control flow.

Near-Optimal Glimpse Sequences for Improved Hard Attention Neural Network Training
William Harvey, ,
IJCNN 2022


We show that Bayesian experimental design can find near-optimal attention locations for a hard attention mechanism, which can then be used to speed up its subsequent training.

Assisting the Adversary to Improve GAN Training
, William Harvey,
IJCNN 2021


We improve image quality by training a GAN generator in a way that accounts for a sub-optimal discriminator.

Structured Conditional Continuous Normalizing Flows for Efficient Amortized Inference in Graphical Models
, , William Harvey,
AISTATS 2020


We use knowledge about the structure of a generative model to automatically select a good normalizing flow architecture.

End-to-end Training of Differentiable Pipelines Across Machine Learning Frameworks
, , , William Harvey, , ,
NIPS Autodiff Workshop 2017


We present an interface for gradient-based training of pipelines of machine learning primitives. This allows joint training of machine learning modules written in different languages, making it useful for automated machine learning (AutoML).


Website source forked from Jon Barron.