My research interests are primarily in machine learning and continuous optimization. I am particularly interested in designing reliable, tuning-free algorithms for stochastic, non-convex optimization, with applications to fitting neural networks. My overall goal is to develop algorithms for learning complex models that are fast and practical, but also theoretically sound. Recently, I have been working on line-search methods for stochastic gradient descent.
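To illustrate the flavor of these line-search methods, here is a minimal sketch of SGD combined with a backtracking (Armijo) line search evaluated on the current mini-batch. The function names `loss_fn` and `grad_fn` and all hyperparameter values are illustrative assumptions, not a specific published algorithm.

```python
import numpy as np

def sgd_armijo(grad_fn, loss_fn, w, lr0=1.0, beta=0.5, c=0.1, max_iter=100):
    """SGD with a backtracking (Armijo) line search.

    grad_fn(w) and loss_fn(w) are assumed to evaluate the stochastic
    gradient and loss on the same mini-batch within one iteration.
    """
    for _ in range(max_iter):
        g = grad_fn(w)
        loss = loss_fn(w)
        lr = lr0  # reset the step size at every iteration
        # Backtrack until the Armijo sufficient-decrease condition
        # holds on this mini-batch loss.
        while loss_fn(w - lr * g) > loss - c * lr * np.dot(g, g):
            lr *= beta
        w = w - lr * g
    return w
```

The appeal of this style of method is that the step size is chosen automatically at each iteration, removing the learning rate as a tuning parameter.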

I am also interested in methods for scalable Bayesian inference, such as variational inference and Monte Carlo methods. Inference in Gaussian processes was my first introduction to approximate Bayesian inference and I still find this model class fascinating.


I received my bachelor’s degree in computer science from UBC in 2018. During my bachelor’s, I worked with David Poole and Giuseppe Carenini on preference elicitation and was the primary architect and developer of Web ValueCharts, a visualization system for multi-criteria decision making. All the code is open source and available on GitHub.

The last six months of my undergraduate degree were spent with Emtiyaz Khan at the RIKEN Center for Advanced Intelligence Project (AIP), where I worked on low-rank approaches to Gaussian variational inference in Bayesian neural networks.

Currently, I am doing my master's at UBC, where I am fortunate to be supervised by Mark Schmidt.


I was a reviewer for NeurIPS 2019 and will review for ICML 2020. Recently, I was invited to review my first paper for the Journal of Machine Learning Research (JMLR)!

Aaron Mishkin