Text Classification Research with Attention-based Recurrent Neural Networks

  • Changshun Du, Beijing Jiaotong University
  • Lei Huang, Beijing Jiaotong University

Abstract

Text classification is one of the principal tasks of machine learning. It aims to design algorithms that enable computers to extract features and classify texts automatically. Past approaches have mainly been based either on keyword classification or on neural-network semantic-composition classification. The former emphasizes the role of keywords, while the latter focuses on the role of word combinations. The method proposed in this paper combines the advantages of both: it uses an attention mechanism to learn a weight for each word, so that keywords receive higher weights and common words lower ones. The resulting text representation therefore takes all words into account while paying more attention to keywords. The feature vector is then fed to a softmax classifier. Finally, we conduct experiments on two news classification datasets, published by NLPCC2014 and Reuters, respectively. The proposed model achieves F-values of 88.5% and 51.8% on the two datasets. The experimental results show that our method outperforms all the traditional baseline systems.
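The abstract describes a pipeline of attention-weighted pooling over per-word representations followed by a softmax classifier. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the hidden states `H` stand in for the outputs of a (bidirectional) RNN, and the single query vector `w` is one hypothetical parameterization of the attention scorer.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(H, w):
    """Attention-weighted pooling of word representations.

    H: (T, d) hidden states, one row per word (e.g. from a bidirectional RNN)
    w: (d,)   learned attention query vector (hypothetical parameterization)
    """
    scores = H @ w            # one relevance score per word
    alpha = softmax(scores)   # attention weights, sum to 1; keywords get more mass
    return alpha @ H, alpha   # weighted sum = fixed-size text representation

def classify(H, w, W_out, b_out):
    """Pool with attention, then apply a softmax classifier."""
    v, alpha = attention_pool(H, w)
    return softmax(W_out @ v + b_out), alpha

# toy example: 5 words, 8-dim hidden states, 3 classes
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
w = rng.normal(size=8)
W_out, b_out = rng.normal(size=(3, 8)), np.zeros(3)
probs, alpha = classify(H, w, W_out, b_out)
```

In a trained model, `w`, `W_out`, and `b_out` would be learned jointly with the RNN by minimizing cross-entropy; the sketch only shows the forward pass that turns a variable-length word sequence into class probabilities.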

Published
2018-02-12
How to Cite
Du, C.; Huang, L. (2018); Text Classification Research with Attention-based Recurrent Neural Networks, International Journal of Computers Communications & Control, 13(1), 50-61, 2018. ISSN 1841-9844. https://doi.org/10.15837/ijccc.2018.1.3142

Keywords

machine learning, text classification, attention mechanism, bidirectional RNN, word vector