Research Papers Classification System

From statwiki
Revision as of 17:06, 24 November 2020

Please Do NOT Edit This Summary

Presented by

Jill Wang, Junyi Yang, Yu Min Wu, Chun Kit (Calvin) Li

Introduction

This paper introduces a paper classification system that utilizes term frequency-inverse document frequency (TF-IDF), Latent Dirichlet Allocation (LDA), and K-means clustering. The key technology the system uses to process big data is the Hadoop Distributed File System (HDFS). The system can handle large-scale research paper classification problems efficiently and accurately.
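The three components named above can be sketched end-to-end with scikit-learn. This is a minimal illustration, not the authors' implementation: the toy corpus, component counts, and random seeds are assumptions, and the actual system additionally runs on HDFS.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

# Toy corpus of "abstracts" (illustrative; not from the paper).
abstracts = [
    "deep learning for image classification",
    "convolutional networks classify images with deep learning",
    "topic models for text document analysis",
    "latent dirichlet allocation finds topics in text documents",
]

# TF-IDF weights, which the system uses to measure keyword importance.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(abstracts)

# LDA expects raw word counts, so vectorize the abstracts again as counts.
counts = CountVectorizer(stop_words="english").fit_transform(abstracts)

# LDA reduces each abstract to a topic-probability vector.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)  # shape: (n_docs, n_topics)

# K-means groups papers whose topic mixtures are similar.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(topics)
print(labels)  # one cluster id per abstract
```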

General Framework


Data Preprocessing

Crawling of Abstract Data

The system assumes that readers tend to first read the abstract of a paper to gain an understanding of it. As a result, the abstract of any paper is likely to include “core words” that can be used to effectively classify the paper’s subject.

Each crawled abstract has its stop words removed. Stop words are words that are usually ignored by search engines, such as “the” and “a”. Afterwards, nouns are extracted as a more condensed representation for efficient analysis.
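The preprocessing step above can be sketched as follows. The stop-word list and noun lexicon here are tiny illustrative stand-ins: a real system would use a full stop-word list and a part-of-speech tagger (e.g. NLTK) to identify nouns.

```python
# Assumed, illustrative stop-word list (real lists are much longer).
STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "are", "for"}

# Assumed noun lexicon; in practice a POS tagger labels nouns instead.
NOUN_LEXICON = {"paper", "abstract", "classification", "keyword", "system"}

def preprocess_abstract(text: str) -> list[str]:
    """Lower-case, strip punctuation and stop words, keep (assumed) nouns."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    no_stops = [t for t in tokens if t and t not in STOP_WORDS]
    return [t for t in no_stops if t in NOUN_LEXICON]

print(preprocess_abstract("The abstract of a paper is used for classification."))
# → ['abstract', 'paper', 'classification']
```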

Managing Paper Data

To construct an effective keyword dictionary from the abstract data and keyword data of all crawled papers, the authors grouped keywords with similar meanings under a single representative keyword. This approach, called stemming, is common in data cleaning. 1394 keyword categories were extracted, which is still too many to compute efficiently; hence, only the top 30 keyword categories are used.
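The grouping-and-truncation step above can be sketched as follows. The crude suffix rules are a toy stand-in for a real stemming algorithm (such as Porter's), and the keyword list is illustrative; the paper keeps the top 30 categories rather than the top 2 shown here.

```python
from collections import Counter

def simple_stem(word: str) -> str:
    """Crude suffix stripping -- a stand-in for a real stemmer."""
    for suffix in ("ization", "izing", "ations", "ation", "ings", "ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def top_keyword_categories(keywords: list[str], k: int = 30) -> list[str]:
    """Group keywords by stem, then keep only the k most frequent categories."""
    counts = Counter(simple_stem(w.lower()) for w in keywords)
    return [stem for stem, _ in counts.most_common(k)]

kws = ["clustering", "clusters", "cluster", "classification", "classifications"]
print(top_keyword_categories(kws, k=2))
# → ['cluster', 'classific']
```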

Topic Modeling Using LDA

Term Frequency Inverse Document Frequency (TF-IDF) Calculation

Term Frequency (TF)

Document Frequency (DF)

Inverse Document Frequency (IDF)
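For reference, the standard definitions of the three quantities named in the headings above (the paper may use minor variants) are, for a term $t$, a document $d$, and a corpus of $N$ documents:

```latex
\mathrm{tf}(t,d) = \frac{n_{t,d}}{\sum_{t'} n_{t',d}}, \qquad
\mathrm{df}(t) = \left|\{\, d : t \in d \,\}\right|, \qquad
\mathrm{idf}(t) = \log \frac{N}{\mathrm{df}(t)}, \qquad
\mathrm{tfidf}(t,d) = \mathrm{tf}(t,d) \cdot \mathrm{idf}(t)
```

where $n_{t,d}$ is the number of occurrences of term $t$ in document $d$. A term scores highly when it is frequent in one document but rare across the corpus, which is what makes TF-IDF useful for isolating a paper's “core words”.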


Paper Classification Using K-means Clustering

System Testing Results

Conclusion

Critique

Reference