Please use this identifier to cite or link to this item: doi:10.22028/D291-30974
Title: Improving Variational Encoder-Decoders in Dialogue Generation
Author(s): Shen, Xiaoyu
Su, Hui
Niu, Shuzi
Demberg, Vera
Language: English
Title of the Proceedings: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society
Pages: 9
Publisher/Platform: ACM
Year of Publication: 2018
Place of publication: New York, NY
Title of the Conference: AIES 2018
Place of the conference: New Orleans, Louisiana, USA
Publication type: Conference Paper
Abstract: Variational encoder-decoders (VEDs) have shown promising results in dialogue generation. However, the latent variable distributions are usually approximated by a much simpler model than the powerful RNN structure used for encoding and decoding, yielding the KL-vanishing problem and an inconsistent training objective. In this paper, we separate training into two phases: the first phase learns to autoencode discrete texts into continuous embeddings, from which the second phase learns to generalize latent representations by reconstructing the encoded embedding. In this case, latent variables are sampled by transforming Gaussian noise through multi-layer perceptrons and are trained with a separate VED model, which has the potential to realize a much more flexible distribution. We compare our model with current popular models, and the experiments demonstrate substantial improvements in both metric-based and human evaluations.
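The core mechanism named in the abstract can be sketched in a few lines: Gaussian noise is pushed through a multi-layer perceptron to yield latent samples with a more flexible distribution than a fixed Gaussian, and those samples are trained to reconstruct the continuous embeddings produced by the first autoencoding phase. The following minimal PyTorch sketch illustrates only that idea under simplified assumptions; it is not the authors' implementation, all module and variable names are hypothetical, and a bare reconstruction loss stands in for the paper's full VED objective.

import torch
import torch.nn as nn

class NoiseToLatent(nn.Module):
    """Hypothetical sampler: maps standard Gaussian noise to a latent
    sample through a multi-layer perceptron, as the abstract describes."""
    def __init__(self, noise_dim: int, latent_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.noise_dim = noise_dim
        self.mlp = nn.Sequential(
            nn.Linear(noise_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, batch_size: int) -> torch.Tensor:
        eps = torch.randn(batch_size, self.noise_dim)  # simple Gaussian noise
        return self.mlp(eps)  # transformed into a more flexible distribution

# Phase 2 (simplified): train the sampler to reconstruct the continuous
# embeddings produced by the frozen phase-1 text autoencoder. The random
# target_embeddings tensor below is a placeholder for those encoder outputs.
latent_dim, noise_dim, batch_size = 128, 64, 32
sampler = NoiseToLatent(noise_dim, latent_dim)
reconstruct = nn.Linear(latent_dim, latent_dim)  # stand-in reconstruction net
optimizer = torch.optim.Adam(
    list(sampler.parameters()) + list(reconstruct.parameters()), lr=1e-3
)

target_embeddings = torch.randn(batch_size, latent_dim)  # placeholder
z = sampler(batch_size)
loss = nn.functional.mse_loss(reconstruct(z), target_embeddings)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In the paper's full setup, the noise-transforming MLP is trained together with a separate VED model over the embeddings rather than with a plain MSE loss; the sketch only shows where that MLP sits in the pipeline.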
Link to this record: hdl:20.500.11880/29740
http://dx.doi.org/10.22028/D291-30974
ISBN: 978-1-4503-6012-8
Date of registration: 24-Sep-2020
Faculty: MI - Fakultät für Mathematik und Informatik
Department: MI - Informatik
Professorship: MI - Prof. Dr. Vera Demberg
Collections: SciDok - Der Wissenschaftsserver der Universität des Saarlandes

Files for this record:
There are no files associated with this item.


Items in SciDok are protected by copyright, with all rights reserved, unless otherwise indicated.