Conditional Joint Model for Spoken Dialogue System

2019 
Spoken Language Understanding (SLU) and Dialogue Management (DM) are two core components of a spoken dialogue system. Traditional methods model SLU and DM separately. Recently, joint learning has made considerable progress in dialogue system research by taking full advantage of all supervised signals. In this paper, we propose an extension of the joint model to a conditional setting. Our model not only shares knowledge between intent detection and slot filling, but also efficiently makes use of the predicted intent as a condition for system action prediction. We conduct experiments on the popular DSTC4 benchmark, which contains rich dialogues drawn from the real world. The results show that our model achieves excellent performance and significantly outperforms other popular methods, including independent learning methods and joint models. This work offers a new direction for spoken dialogue system research.
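The abstract describes an architecture in which intent detection and slot filling share an encoder, and the intent prediction additionally conditions the system-action prediction. The sketch below illustrates one way such a conditional joint model could be wired up; the shared BiLSTM encoder, layer sizes, class names, and the concatenation-based conditioning mechanism are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (PyTorch) of a conditional joint model: a shared utterance
# encoder feeds an intent head and a slot-tagging head; the soft intent
# prediction is then concatenated in as a condition for action prediction.
# All names and sizes are hypothetical.
import torch
import torch.nn as nn


class ConditionalJointModel(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim,
                 num_intents, num_slots, num_actions):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Shared BiLSTM encoder: knowledge is shared between intent and slot.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)
        self.slot_head = nn.Linear(2 * hidden_dim, num_slots)
        # Action head is conditioned on the intent distribution.
        self.action_head = nn.Linear(2 * hidden_dim + num_intents, num_actions)

    def forward(self, utterance_ids):
        emb = self.embedding(utterance_ids)        # (B, T, E)
        outputs, _ = self.encoder(emb)             # (B, T, 2H)
        utterance_repr = outputs.mean(dim=1)       # (B, 2H) pooled sentence vector

        intent_logits = self.intent_head(utterance_repr)  # (B, num_intents)
        slot_logits = self.slot_head(outputs)              # (B, T, num_slots)

        # Condition the system-action prediction on the soft intent prediction.
        intent_probs = torch.softmax(intent_logits, dim=-1)
        action_logits = self.action_head(
            torch.cat([utterance_repr, intent_probs], dim=-1))  # (B, num_actions)
        return intent_logits, slot_logits, action_logits


if __name__ == "__main__":
    model = ConditionalJointModel(vocab_size=1000, emb_dim=64, hidden_dim=64,
                                  num_intents=10, num_slots=20, num_actions=15)
    batch = torch.randint(0, 1000, (2, 12))  # two utterances of 12 token ids
    intents, slots, actions = model(batch)
    print(intents.shape, slots.shape, actions.shape)
```

In this sketch the three heads would typically be trained jointly with a summed cross-entropy loss, so the shared encoder benefits from all supervised signals while the action head also sees the intent condition.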