Extreme Multi-label Text Classification

Revision as of 08:57, 9 November 2020 by Mhwu (talk | contribs) (→‎Motivation)

Presented By

Mohan Wu

Introduction

In this paper, the authors are interested in a field of problems called extreme classification. These problems involve training a classifier to give the most relevant tags for any given text; the difficulty arises from the fact that the label set is so large that most models give poor results. The authors propose a new model called APLC-XLNet, which fine-tunes the generalized autoregressive pretrained model (XLNet) and uses Adaptive Probabilistic Label Clusters (APLC) to compute the cross-entropy loss. This method takes advantage of the imbalanced label distribution by forming clusters of labels, which reduces training time. The authors experimented on five different datasets and achieved results far better than existing state-of-the-art models.
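The core idea of APLC can be illustrated with a small sketch: labels are partitioned by frequency into a head cluster and progressively rarer tail clusters, and each tail cluster scores its labels from a lower-dimensional projection of the hidden state, so the rare labels cost far fewer parameters and FLOPs. The cluster sizes, projection dimensions, and function names below are illustrative assumptions, not the paper's exact configuration; a per-label sigmoid with binary cross-entropy stands in for the multi-label loss.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_dim = 16
# Hypothetical partition: a small head of frequent labels plus two tail
# clusters of increasingly rare labels (sizes chosen for illustration).
clusters = [8, 32, 128]        # labels per cluster, frequent -> rare
proj_dims = [16, 8, 4]         # reduced hidden sizes for rarer clusters

# One projection and one classifier matrix per cluster; tail clusters first
# project the hidden state down, shrinking parameters for rare labels.
projections = [rng.normal(0, 0.1, (hidden_dim, d)) for d in proj_dims]
classifiers = [rng.normal(0, 0.1, (d, c)) for d, c in zip(proj_dims, clusters)]


def aplc_loss(h, y):
    """Mean binary cross-entropy over all clusters for a hidden state
    h of shape (hidden_dim,) and multi-hot targets y of shape (sum(clusters),)."""
    total, start = 0.0, 0
    for P, W, c in zip(projections, classifiers, clusters):
        logits = (h @ P) @ W                    # (c,) scores for this cluster
        probs = 1.0 / (1.0 + np.exp(-logits))   # independent sigmoid per label
        t = y[start:start + c]
        total += -np.sum(t * np.log(probs + 1e-9)
                         + (1 - t) * np.log(1 - probs + 1e-9))
        start += c
    return total / sum(clusters)


h = rng.normal(size=hidden_dim)                 # e.g. an XLNet [CLS] embedding
y = np.zeros(sum(clusters))
y[[0, 3, 40]] = 1.0                             # a few relevant tags
print(aplc_loss(h, y))
```

Because rare labels dominate the tail of the distribution, pushing them into low-dimensional clusters is where the savings come from: the head cluster keeps full capacity for the few frequent labels that matter most.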

Motivation

Extreme classification has diverse applications, such as estimating word representations for a large vocabulary [1], tagging Wikipedia articles with relevant labels [2], and matching product descriptions for search advertisements [3].