Leveraging The Exact Likelihood Of Deep Latent Variable Models

Authors:
Pierre-Alexandre Mattei, IT University of Copenhagen
Jes Frellsen, IT University of Copenhagen

Abstract:

Deep latent variable models (DLVMs) combine the approximation abilities of deep neural networks and the statistical foundations of generative models. Variational methods are commonly used for inference; however, the exact likelihood of these models has been largely overlooked. The purpose of this work is to study the general properties of this quantity and to show how they can be leveraged in practice. We focus on important inferential problems that rely on the likelihood: estimation and missing data imputation. First, we investigate maximum likelihood estimation for DLVMs: in particular, we show that most unconstrained models used for continuous data have an unbounded likelihood function. This problematic behaviour is demonstrated to be a source of mode collapse. We also show how to ensure the existence of maximum likelihood estimates, and draw useful connections with nonparametric mixture models. Finally, we describe an algorithm for missing data imputation using the exact conditional likelihood of a DLVM. On several data sets, our algorithm consistently and significantly outperforms the usual imputation scheme used for DLVMs.
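The unboundedness claim above can be illustrated with a minimal, self-contained sketch (not the paper's own construction): with an unconstrained Gaussian observation model, centring the density on one observation and letting the variance shrink to zero drives the likelihood to infinity, which is the degenerate behaviour associated with mode collapse. All names below are illustrative.

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log-density of a univariate Gaussian N(mu, sigma^2) evaluated at x."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# Place the mean exactly on a data point and shrink the standard deviation:
# the log-likelihood of that point grows without bound as sigma -> 0.
x = 1.5
for sigma in [1.0, 1e-2, 1e-4, 1e-8]:
    print(f"sigma = {sigma:.0e}, log-density = {gaussian_logpdf(x, mu=x, sigma=sigma):.2f}")
```

Running this shows the log-density increasing as sigma decreases, so no finite maximiser of the likelihood exists unless the variance is constrained away from zero.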
