stat441F18/TCNLM
Presented by
- Yan Yu Chen
- Qisi Deng
- Hengxin Li
- Bochao Zhang
Introduction
The Topic Compositional Neural Language Model (TCNLM) simultaneously captures both the global semantic meaning and the local word-ordering structure of a document. A TCNLM incorporates the fundamental components of both a neural topic model (NTM) and a Mixture-of-Experts (MoE) language model: the latent topics, learned within a variational autoencoder framework, together with the inferred topic-usage probabilities, are further used to train the MoE language model. (Insert figure here)
TCNLMs are well suited to topic classification and to generating sentences on a given topic. The combination of latent topics, weighted by the topic-usage probabilities, yields an effective predictor of sentences. TCNLMs were also developed to address the inability of RNN-based neural language models to capture broad document context: after the global semantics are learned, the probability of each latent topic is used to learn the local structure of a word sequence.
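To illustrate the Mixture-of-Experts idea described above, the sketch below combines per-topic "expert" next-word scores using the topic-usage probabilities produced by a topic model. This is a minimal, hypothetical example (names such as topic_probs and expert_logits are illustrative), not necessarily the exact composition used in TCNLM.

```python
# Minimal sketch: combine per-topic expert predictions with topic-usage
# probabilities (illustrative only, not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)
n_topics, vocab_size = 4, 10

# Topic-usage probabilities for one document (e.g. from the NTM); sums to 1.
topic_probs = rng.dirichlet(np.ones(n_topics))            # shape: (n_topics,)

# One "expert" per topic: unnormalized next-word scores from each
# topic-specific language-model component at the current position.
expert_logits = rng.normal(size=(n_topics, vocab_size))   # shape: (n_topics, vocab_size)

# Mixture-of-Experts combination: weight each expert by its topic probability,
# then apply a softmax to obtain the next-word distribution.
mixed_logits = topic_probs @ expert_logits                # shape: (vocab_size,)
next_word_probs = np.exp(mixed_logits - mixed_logits.max())
next_word_probs /= next_word_probs.sum()

print(next_word_probs)
```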
Topic Model
LDA
Neural Topic Model
Language Model
RNN (LSTM)
Recurrent Neural Networks (RNNs) capture the temporal relationships in sequential input, where the output is assumed to depend on a sequence of input data. Compared to traditional feedforward neural networks, RNNs maintain an internal memory by feeding previous states back into the network. Despite this design, RNNs struggle to learn long-term dependencies because gradients vanish during back-propagation, which prevents states distant in time from contributing to the output of the current state. Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks are variations of RNNs designed to address this vanishing-gradient issue.
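To make the gating mechanism concrete, here is a minimal numpy sketch of a single LSTM step; it is illustrative only and not the specific model used in the paper. The additive cell-state update c = f * c_prev + i * g is what allows gradients to flow across many time steps.

```python
# Illustrative sketch of one LSTM time step (not the paper's model).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b stack the parameters of the input, forget,
    and output gates and the candidate cell update (4 * hidden_size rows)."""
    z = W @ x + U @ h_prev + b
    H = h_prev.size
    i = sigmoid(z[0*H:1*H])        # input gate
    f = sigmoid(z[1*H:2*H])        # forget gate
    o = sigmoid(z[2*H:3*H])        # output gate
    g = np.tanh(z[3*H:4*H])        # candidate cell state
    c = f * c_prev + i * g         # additive update: gradients can flow through time
    h = o * np.tanh(c)             # hidden state exposed to the next layer
    return h, c

# Tiny usage example with random parameters and a length-7 input sequence.
rng = np.random.default_rng(1)
input_size, hidden_size = 3, 5
W = rng.normal(scale=0.1, size=(4 * hidden_size, input_size))
U = rng.normal(scale=0.1, size=(4 * hidden_size, hidden_size))
b = np.zeros(4 * hidden_size)
h, c = np.zeros(hidden_size), np.zeros(hidden_size)
for x in rng.normal(size=(7, input_size)):
    h, c = lstm_step(x, h, c, W, U, b)
print(h)
```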