# User:Cvmustat

## Combine Convolution with Recurrent Networks for Text Classification

Team Members: Bushra Haque, Hayden Jones, Michael Leung, Cristian Mustatea

Date: Week of Nov 23

## Introduction

Text classification is the task of assigning a set of predefined categories to natural language texts. It is a fundamental task in Natural Language Processing (NLP) with various applications such as sentiment analysis, subject labeling, and intent detection. A classic example of text classification: given a set of news articles, can the genre or subject of each article be determined?

Text classification is useful because text data is a rich source of information, but extracting insights from it directly can be difficult and time-consuming, as most text data is unstructured. NLP text classification can help automatically structure and analyze text, quickly and cost-effectively, allowing individuals to extract important features from text more easily than before.

In practice, Convolutional Neural Networks (CNNs) can be used to classify texts based on the semantics of a sentence, while Recurrent Neural Networks (RNNs) can be used to classify texts based on the context of a word in relation to the sentence. This paper proposes a new method that combines these two networks.

## CRNN Model Architecture

RNN Pipeline:

The goal of the RNN pipeline is to input each word in a text, and retrieve the contextual information surrounding the word and compute the contextual representation of the word itself. This is accomplished by use of a bi-directional RNN, such that a Neural Tensor Layer (NTL) can combine the results of the RNN to obtain the final output. RNNs are well-suited to NLP tasks because of their ability to sequentially process data such as ordered text.

A RNN is similar to a feed-forward neural network, but it relies on the use of hidden states. Hidden states are layers in the neural net that produce two outputs: $\hat{y}_{t}$ and $h_t$. For a time step $t$, $h_t$ is fed back into the layer to compute $\hat{y}_{t+1}$ and $h_{t+1}$.
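The recurrence above can be sketched in a few lines of NumPy. This is a minimal, illustrative vanilla RNN step (the weight names and sizes are assumptions for the example, not taken from the paper):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, W_hy):
    """One step of a vanilla RNN: produce the new hidden state and the output."""
    h_t = np.tanh(W_xh @ x_t + W_hh @ h_prev)  # hidden state, fed back at the next step
    y_t = W_hy @ h_t                           # output at time step t
    return h_t, y_t

rng = np.random.default_rng(0)
d, m, k = 4, 3, 2  # illustrative input, hidden, and output sizes
W_xh = rng.normal(size=(m, d))
W_hh = rng.normal(size=(m, m))
W_hy = rng.normal(size=(k, m))

h = np.zeros(m)
for x in rng.normal(size=(5, d)):  # unroll over a 5-step input sequence
    h, y = rnn_step(x, h, W_xh, W_hh, W_hy)
```

Note how the same weights are reused at every time step; only the hidden state $h_t$ carries information forward.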

The pipeline actually uses a variant of the RNN called the GRU, short for Gated Recurrent Unit. This addresses the vanishing gradient problem, which causes the network to struggle to remember words that came earlier in the sequence. Traditional RNNs are only able to remember the most recent words in a sequence, which may be problematic since words from the beginning of the sequence that are important to the classification problem may be forgotten. A GRU attempts to solve this by controlling the flow of information through the network using update and reset gates.

Let $h_{t-1} \in \mathbb{R}^m, x_t \in \mathbb{R}^d$ be the inputs, and let $\mathbf{W}_z, \mathbf{W}_r, \mathbf{W}_h \in \mathbb{R}^{m \times d}, \mathbf{U}_z, \mathbf{U}_r, \mathbf{U}_h \in \mathbb{R}^{m \times m}$ be trainable weight matrices. Then the following equations describe the update and reset gates:

$z_t = \sigma(\mathbf{W}_zx_t + \mathbf{U}_zh_{t-1}) \quad \text{(update gate)} \\ r_t = \sigma(\mathbf{W}_rx_t + \mathbf{U}_rh_{t-1}) \quad \text{(reset gate)} \\ \tilde{h}_t = \tanh(\mathbf{W}_hx_t + r_t \circ \mathbf{U}_hh_{t-1}) \quad \text{(new memory)} \\ h_t = (1-z_t)\circ \tilde{h}_t + z_t\circ h_{t-1} \quad \text{(next hidden state)}$

Note that $\sigma, \text{tanh}, \circ$ are all element-wise functions. The above equations do the following:

1. $h_{t-1}$ carries information from the previous iteration and $x_t$ is the current input
2. the update gate $z_t$ controls how much past information should be forwarded to the next hidden state
3. the reset gate $r_t$ controls how much past information is forgotten or reset
4. new memory $\tilde{h}_t$ contains the relevant past memory as instructed by $r_t$ and current information from the input $x_t$
5. then $z_t$ is used to control what is passed on from $h_{t-1}$ and $(1-z_t)$ controls the new memory that is passed on

Thus, each $h_t$ can be computed as above to yield results for the bi-directional RNN.
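The four gate equations translate directly into code. This is a minimal NumPy sketch of a single GRU step (the weight shapes follow the definitions above; the random initialization and sequence length are illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step following the update/reset-gate equations above."""
    z = sigmoid(Wz @ x_t + Uz @ h_prev)               # update gate z_t
    r = sigmoid(Wr @ x_t + Ur @ h_prev)               # reset gate r_t
    h_tilde = np.tanh(Wh @ x_t + r * (Uh @ h_prev))   # new memory, reset applied element-wise
    return (1 - z) * h_tilde + z * h_prev             # next hidden state h_t

rng = np.random.default_rng(1)
d, m = 4, 3  # input dimension d, hidden dimension m (illustrative)
Wz, Wr, Wh = (rng.normal(size=(m, d)) for _ in range(3))
Uz, Ur, Uh = (rng.normal(size=(m, m)) for _ in range(3))

h = np.zeros(m)
for x in rng.normal(size=(6, d)):  # run the GRU over a 6-step sequence
    h = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
```

Because $h_t$ is a convex combination of $\tilde{h}_t$ and $h_{t-1}$ (weighted by $z_t$), the gates let gradients flow through $h_{t-1}$ largely unattenuated when $z_t$ is close to 1, which is what mitigates the vanishing gradient problem.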

CNN Pipeline:

The goal of the CNN pipeline is to learn the relative importance of words in an input sequence based on different aspects. The process of this CNN pipeline is summarized as the following steps:

1. Given a sequence of words, each word is converted into a word vector using the word2vec algorithm which gives matrix X.
2. Word vectors are then convolved through the temporal dimension with filters of various sizes (i.e., different K) with learnable weights to capture various numerical K-gram representations. These K-gram representations are stored in matrix C.
• The convolution makes this process capture local and position-invariant features. Local means the K words are contiguous. Position-invariant means K contiguous words at any position are detected in this case via convolution.
• Temporal dimension example: convolve words from 1 to K, then convolve words 2 to K+1, etc
3. Since not all K-gram representations are equally meaningful, there is a learnable matrix W which takes the linear combination of K-gram representations to more heavily weigh the more important K-gram representations for the classification task.
4. Each linear combination of the K-gram representations gives the relative word importance based on the aspect that the linear combination encodes.
5. The relative word importance vs aspect gives rise to an interpretable attention matrix A, where each element says the relative importance of a specific word for a specific aspect.
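The steps above can be sketched as follows. This is a simplified NumPy illustration, not the paper's implementation: embeddings are random stand-ins for word2vec vectors, one filter per width K is used (rather than many), and each feature map is zero-padded to a common length so the K-gram matrix C can be stacked:

```python
import numpy as np

def kgram_conv(X, F):
    """Slide filter F (K, d) over word vectors X (n, d) in the temporal
    dimension: words 1..K, then 2..K+1, etc. Returns n-K+1 features."""
    K = F.shape[0]
    return np.array([np.sum(X[i:i + K] * F) for i in range(X.shape[0] - K + 1)])

rng = np.random.default_rng(2)
n, d, z = 7, 5, 3            # words, embedding dim, aspects (illustrative)
X = rng.normal(size=(n, d))  # stand-in for word2vec embeddings of the sequence

# K-gram feature maps for filter widths K = 1, 2, 3, zero-padded to length n
C = np.stack([np.pad(kgram_conv(X, rng.normal(size=(K, d))), (0, K - 1))
              for K in (1, 2, 3)], axis=1)          # C: (n, num_filters)

W = rng.normal(size=(C.shape[1], z))                # learnable combination weights
scores = C @ W                                      # (n, z): word scores per aspect
A = np.exp(scores) / np.exp(scores).sum(axis=0)     # column-wise softmax -> attention (n, z)
```

Normalizing each column of the score matrix makes every aspect's attention values sum to 1 over the words, so each column of A can be read directly as a relative-importance distribution.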

## Merging RNN & CNN Pipeline Outputs

The results from both the RNN and CNN pipelines can be merged by simply multiplying the output matrices. That is, we compute $S=A^TH$, which has shape $z \times 3m$ and is essentially a linear combination of the hidden states. Concatenating the rows of S results in a vector in $\mathbb{R}^{3zm}$, which can be passed to a fully connected Softmax layer to output a vector of probabilities for our classification task.
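The merge step is a single matrix product followed by a flatten and a softmax layer. A minimal NumPy sketch, with random stand-ins for the pipeline outputs and illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, z, num_classes = 7, 4, 3, 2     # illustrative sizes
A = rng.random(size=(n, z))           # attention matrix from the CNN pipeline
H = rng.normal(size=(n, 3 * m))       # contextual representations from the RNN pipeline

S = A.T @ H                           # (z, 3m): aspect-weighted combination of hidden states
s = S.reshape(-1)                     # concatenate rows -> vector in R^{3zm}

W_fc = rng.normal(size=(num_classes, s.size))  # fully connected layer weights
logits = W_fc @ s
probs = np.exp(logits - logits.max())
probs /= probs.sum()                  # softmax over the output classes
```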

To train the model, we make the following decisions:

• Use cross-entropy loss as the loss function
• Perform dropout on random columns in matrix C in the CNN pipeline
• Perform L2 regularization on all parameters
• Use stochastic gradient descent with a learning rate of 0.001
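The loss and the column dropout can be sketched as below. This is an illustrative NumPy version of the training objective only (the gradient computation and SGD loop are omitted; `lam` is an assumed regularization strength, not a value from the paper):

```python
import numpy as np

def loss_with_l2(probs, y, params, lam=1e-4):
    """Cross-entropy on the true class y plus an L2 penalty on all parameters."""
    ce = -np.log(probs[y] + 1e-12)                  # cross-entropy loss
    l2 = lam * sum(np.sum(p ** 2) for p in params)  # L2 regularization term
    return ce + l2

def dropout_columns(C, rate, rng):
    """Zero out random columns of the K-gram matrix C (applied only during training)."""
    mask = rng.random(C.shape[1]) >= rate
    return C * mask  # broadcasting zeroes whole columns at once

rng = np.random.default_rng(4)
probs = np.array([0.7, 0.3])           # example softmax output
params = [rng.normal(size=(3, 3))]     # stand-in for the model's weight matrices
loss = loss_with_l2(probs, y=0, params=params)
# Each SGD step then updates every parameter as: theta <- theta - 0.001 * grad(loss)
```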

## Interpreting Learned CRNN Weights

Recall that attention matrix A essentially stores the relative importance of every word in the input sequence for every aspect chosen. Naturally, this means that A is an n-by-z matrix, because n is the number of words in the input sequence and z is the number of aspects being considered in the classification task.

Furthermore, for a specific aspect, words with higher attention values are more important relative to other words in the same input sequence. For a specific word, aspects with higher attention values make the specific word more important compared to other aspects.
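Reading off these importances from A is a simple argmax per aspect. A toy NumPy example (the sentence and the random attention values are made up for illustration):

```python
import numpy as np

words = ["the", "movie", "was", "surprisingly", "good"]
rng = np.random.default_rng(5)
A = rng.random(size=(len(words), 2))   # toy attention matrix: 5 words x 2 aspects
A /= A.sum(axis=0)                     # normalize each aspect column to sum to 1

# For each aspect, the word with the largest attention value is the most important
top_word_per_aspect = [words[i] for i in A.argmax(axis=0)]
```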

For example, in this paper, a sentence is sampled from the Movie Reviews dataset and the transpose of attention matrix A is visualized. Each word corresponds to an element in matrix A, the intensity of red represents the magnitude of the attention value in A, and each row shows the relative importance of each word for a specific aspect. In the first row, the words are weighted in terms of a positive aspect; in the last row, the words are weighted in terms of a negative aspect; and in the middle row, the words are weighted in terms of both a positive and a negative aspect. Notice how the relative importance of the words is a function of the aspect.