Will Harvey

I am a third-year PhD student at the University of British Columbia and a member of the Programming Languages for Artificial Intelligence group led by Frank Wood. Previously, I completed an MEng in Engineering Science at the University of Oxford.

My research focuses on the intersection of Bayesian inference and deep learning: on one side, using deep learning to speed up Bayesian inference; on the other, using methodology from Bayesian inference to improve neural network training.

Email  /  Google Scholar  /  CV

Publications
Image Completion via Inference in Deep Generative Models
William Harvey et al.
arXiv preprint, 2021


Pre-trained hierarchical VAEs can be adapted into surprisingly good models for stochastic image completion.

Near-Optimal Glimpse Sequences for Improved Hard Attention Neural Network Training
William Harvey et al.
arXiv preprint, 2019


Bayesian experimental design can be used to find near-optimal attention locations for a hard attention mechanism. These locations can then be used to speed up the subsequent training of hard attention mechanisms.

Assisting the Adversary to Improve GAN Training
William Harvey, with co-authors
arXiv preprint, 2020


We improve image quality by training a GAN generator in a way that accounts for a sub-optimal discriminator.

Planning as Inference in Epidemiological Models
William Harvey, with co-authors
arXiv preprint, 2020


We demonstrate how existing software tools can automate parts of infectious disease-control policy-making by performing inference in existing epidemiological dynamics models.

Structured Conditional Continuous Normalizing Flows for Efficient Amortized Inference in Graphical Models
William Harvey, with co-authors
AISTATS 2020


We use knowledge about the structure of a generative model to automatically select a good normalizing flow architecture.


Attention for Inference Compilation
William Harvey* et al. (* denotes equal contribution)
PROBPROG 2020


We present a transformer-based architecture for improved amortized inference in probabilistic programs with complex and stochastic control flow.

End-to-end Training of Differentiable Pipelines Across Machine Learning Frameworks
William Harvey, with co-authors
NIPS Autodiff Workshop 2017


We present an interface for gradient-based training of pipelines of machine learning primitives. This allows joint training of machine learning modules written in different languages, making it useful for automated machine learning (AutoML).


Website source forked from Jon Barron.