DMP 310 - Hugh Dempster Building, 6245 Agronomy Road
Probabilistic programming is the idea of writing models from statistics and machine learning using program notations and reasoning about these models using generic inference engines. Recently its combination with deep learning has been explored intensely, which led to the development of so-called deep probabilistic programming languages, such as Pyro, Edward and ProbTorch. At the core of this development lie inference engines based on stochastic variational inference algorithms. When asked to find information about the posterior distribution of a model written in such a language, these algorithms convert this posterior-inference query into an optimisation problem and solve it approximately by a form of gradient ascent or descent. In this paper, we analyse one of the most fundamental and versatile variational inference algorithms, called the score estimator or REINFORCE, using tools from denotational semantics and program analysis. We formally express what this algorithm does on models denoted by programs, and expose implicit assumptions made by the algorithm on the models. The violation of these assumptions may lead to an undefined optimisation objective or the loss of convergence guarantees of the optimisation process. We then describe rules for proving these assumptions, which can be automated by static program analyses. Some of our rules use nontrivial facts from continuous mathematics, and let us replace requirements about integrals in the assumptions, such as integrability of functions defined in terms of programs' denotations, by conditions involving differentiation or boundedness, which are much easier to prove automatically (and manually). Following our general methodology, we have developed a static program analysis for the Pyro programming language that aims at discharging the assumption about what we call model-guide support match.
Our analysis is applied to eight representative model-guide pairs from the Pyro webpage, which include sophisticated neural network models such as AIR. It finds a bug in two of these cases, and shows that the assumptions are met in the others.
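As a rough illustration of the score estimator (REINFORCE) mentioned in the abstract, the sketch below is not from the talk; it assumes a toy guide q_theta = Normal(theta, 1) and objective E_{x ~ q_theta}[f(x)], and uses the identity d/dtheta E[f(x)] = E[f(x) * d/dtheta log q_theta(x)] to estimate the gradient from samples alone:

```python
import random

def score_estimator(theta, f, n=200_000, seed=0):
    """Monte Carlo REINFORCE estimate of d/dtheta E_{x ~ N(theta, 1)}[f(x)].

    For q_theta = Normal(theta, 1), the score function is
    d/dtheta log q_theta(x) = x - theta.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(theta, 1.0)      # sample from the guide q_theta
        total += f(x) * (x - theta)    # f(x) times the score function
    return total / n

# Toy check: for f(x) = x^2 we have E[f] = theta^2 + 1,
# so the true gradient is 2 * theta.
theta = 1.5
grad = score_estimator(theta, lambda x: x * x)  # should be close to 3.0
```

Note that the estimator never differentiates f itself, which is what makes it so versatile; the flip side, as the abstract points out, is that its correctness rests on implicit assumptions (e.g. integrability, and in Pyro the model-guide support match) that the proposed static analysis aims to discharge.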
This is joint work with Wonyeol Lee and Hangyeol Yu (KAIST, South Korea) and Xavier Rival (INRIA Paris/CNRS/ENS/PSL Research University, France).
Hongseok Yang received his PhD degree in 2001 from the Department of Computer Science, University of Illinois at Urbana-Champaign, USA. Then, he was a postdoc researcher first at KAIST from 2001 until 2003 and next at Seoul National University from 2003 until 2006. After finishing this postdoc training, he took a lectureship (corresponding to an assistant professorship) at Queen Mary, University of London, UK for about five years, before he moved to the University of Oxford, where he took an associate professorship (2011-2014) and then a full professorship (2014-2017). While he was at Oxford, he also held a tutorial fellowship at Worcester College, one of the most beautiful Oxford colleges. He joined KAIST in 2017. He is a co-winner of the 2016 CAV award, received distinguished paper awards at CONCUR 2012, PLDI 2014 and ICSE 2019, and held an EPSRC Advanced Research Fellowship from 2007 until 2012.
Host: Associate Professor Frank Wood, Computer Science Department, UBC
CS Contact: Kath Imhiran email@example.com
This event's address: https://my.cs.ubc.ca/event/2019/07/abstract-probabilistic-programming-idea-writing