A Latent-Constrained Variational Neural Dialogue Model for Information-Rich Responses

2019 
Variational neural models have achieved significant progress in dialogue generation. They adopt an encoder-decoder architecture, with stochastic latent variables learned at the utterance level. However, the latent variables are usually approximated by factorized-form distributions, whose value space is too large relative to the latent features to be encoded, leading to a sparsity problem. As a result, little useful information is carried in the latent representations, and generated responses tend to be non-committal and meaningless. To address this, we propose the Latent-Constrained Variational Neural Dialogue Model (LC-VNDM). It follows the variational neural dialogue framework, with an utterance encoder, a context encoder, and a response decoder organized hierarchically. In particular, LC-VNDM uses a hierarchically-structured variational distribution that models the inter-dependencies between latent variables. This defines a constrained latent value space and prevents latent global features from being diluted, so latent representations sampled from it carry richer global information to facilitate decoding and produce meaningful responses. We conduct extensive experiments on three datasets with both automatic and human evaluation. The experiments show that LC-VNDM significantly outperforms state-of-the-art models and generates more information-rich responses by learning a better-quality latent space.
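To make the distinction in the abstract concrete, the sketch below contrasts a factorized utterance-level posterior (each latent independent given its context state) with a hierarchically-structured one in which each latent is also conditioned on the previous latent, constraining the effective latent value space. This is a minimal illustration under assumed module names and dimensions, not the authors' implementation.

```python
# Illustrative sketch only: factorized vs. hierarchically-structured
# utterance-level posteriors. Layer names and dimensions are assumptions.
import torch
import torch.nn as nn


class FactorizedPosterior(nn.Module):
    """q(z_t | c_t) = N(mu_t, diag(sigma_t^2)): latents are independent given context."""

    def __init__(self, ctx_dim: int, z_dim: int):
        super().__init__()
        self.mu = nn.Linear(ctx_dim, z_dim)
        self.logvar = nn.Linear(ctx_dim, z_dim)

    def forward(self, ctx):  # ctx: (batch, turns, ctx_dim)
        mu, logvar = self.mu(ctx), self.logvar(ctx)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return z, mu, logvar


class HierarchicalPosterior(nn.Module):
    """q(z_t | c_t, z_{t-1}): each latent is constrained by the preceding one."""

    def __init__(self, ctx_dim: int, z_dim: int):
        super().__init__()
        self.mu = nn.Linear(ctx_dim + z_dim, z_dim)
        self.logvar = nn.Linear(ctx_dim + z_dim, z_dim)

    def forward(self, ctx):  # ctx: (batch, turns, ctx_dim)
        batch, turns, _ = ctx.shape
        z_dim = self.mu.out_features
        z_prev = torch.zeros(batch, z_dim, device=ctx.device)
        zs, mus, logvars = [], [], []
        for t in range(turns):
            h = torch.cat([ctx[:, t], z_prev], dim=-1)
            mu, logvar = self.mu(h), self.logvar(h)
            z_prev = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            zs.append(z_prev)
            mus.append(mu)
            logvars.append(logvar)
        return torch.stack(zs, 1), torch.stack(mus, 1), torch.stack(logvars, 1)
```

In the hierarchical case, sampling z_t depends on the realized z_{t-1}, so the joint posterior no longer factorizes across turns; this is the sense in which the latent space is "constrained" relative to the fully factorized form.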