stat441F18/TCNLM

=Presented by=
*Yan Yu Chen
*Qisi Deng
*Hengxin Li
*Bochao Zhang


=Introduction=


The Topic Compositional Neural Language Model (TCNLM) simultaneously captures both the global semantic meaning and the local word-ordering structure of a document. A TCNLM combines the fundamental components of a neural topic model (NTM) and a Mixture-of-Experts (MoE) language model: the latent topics are learned within a variational autoencoder framework, and the resulting topic-usage probabilities are used as mixture weights when training the MoE language model. (Insert figure here)
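
The neural topic model side can be illustrated with a short sketch. Below is a minimal PyTorch example, assuming a bag-of-words document representation; the layer names and sizes (vocab_size, num_topics, hidden) are illustrative placeholders rather than the authors' exact architecture. It only shows the general VAE-style pattern of encoding a document into a Gaussian latent variable and mapping it to topic-usage probabilities.

<pre>
# Illustrative sketch (not the exact TCNLM architecture) of a neural topic
# model trained as a variational autoencoder: a bag-of-words document vector
# is encoded into a Gaussian latent variable, which is mapped to a probability
# distribution over latent topics and used to reconstruct the document.
import torch
import torch.nn as nn

class NeuralTopicModel(nn.Module):
    def __init__(self, vocab_size, num_topics, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, num_topics)          # mean of q(z | doc)
        self.logvar = nn.Linear(hidden, num_topics)      # log-variance of q(z | doc)
        self.topics = nn.Linear(num_topics, vocab_size)  # topic-to-word decoder

    def forward(self, bow):
        h = self.encoder(bow)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: differentiable sample z ~ q(z | doc).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        t = torch.softmax(z, dim=-1)                     # topic-usage probabilities
        recon = torch.log_softmax(self.topics(t), dim=-1)  # document reconstruction
        return t, recon, mu, logvar
</pre>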

TCNLMs are well-suited to topic classification and to generating sentences on a given topic. Combining the latent topics, weighted by the topic-usage probabilities, yields effective predictions for sentences. TCNLMs were also developed to address the inability of RNN-based neural language models to capture broad document context: after the global semantics are learned, the probability of each latent topic is used to guide learning of the local structure of a word sequence.
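
The language-model side can likewise be sketched as a Mixture-of-Experts decoder. The example below is a hedged illustration that assumes one expert output layer per latent topic, mixed by the topic-usage probabilities from the sketch above; the paper's exact formulation (for instance, how topic weights enter the recurrent network itself) may differ, and all names and sizes are placeholders.

<pre>
# Illustrative Mixture-of-Experts language-model decoder: an LSTM encodes the
# local word sequence, each latent topic has its own expert output layer, and
# the expert logits are mixed using the document's topic-usage probabilities.
import torch
import torch.nn as nn

class MoELanguageModel(nn.Module):
    def __init__(self, vocab_size, num_topics, embed_dim=300, hidden=600):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        # One expert projection to the vocabulary per latent topic.
        self.experts = nn.ModuleList(
            [nn.Linear(hidden, vocab_size) for _ in range(num_topics)]
        )

    def forward(self, tokens, topic_probs):
        # tokens: (batch, seq_len) word ids; topic_probs: (batch, num_topics)
        h, _ = self.lstm(self.embed(tokens))
        # Stack expert predictions: (batch, seq_len, num_topics, vocab_size)
        expert_logits = torch.stack([e(h) for e in self.experts], dim=2)
        # Weight each expert by the document's topic-usage probability and mix.
        weights = topic_probs.unsqueeze(1).unsqueeze(-1)
        return (expert_logits * weights).sum(dim=2)  # mixed next-word logits
</pre>

In this sketch the mixture weights come from the neural topic model, so generating sentences on a chosen topic amounts to feeding a peaked (or one-hot) topic_probs vector into the decoder.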

=Topic Model=

==LDA==

==Neural Topic Model==

=Language Model=

==RNN (LSTM)==

==Neural Language Model==