Comparison of Text Sentiment Analysis based on Bert and Word2vec

Document Type

Conference Proceeding


Most existing sentiment classification models use Word2Vec, GloVe, and similar methods to obtain word vector representations of text, but these methods ignore the context in which words appear. To address this problem, this paper proposes a neural network model for text sentiment analysis that combines the BERT (Bidirectional Encoder Representations from Transformers) pre-trained language model with a BLSTM (bidirectional long short-term memory) network and an attention mechanism. First, word vectors that incorporate contextual semantic information are obtained through the BERT pre-trained model. Then, a bidirectional long short-term memory network extracts context-dependent features from these vectors. Finally, an attention mechanism assigns weights to the extracted features, highlighting the important information, and the text is classified by sentiment. The model reaches a test accuracy of 89.17% on the SST (Stanford Sentiment Treebank) dataset, an improvement over the compared methods.
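The attention step described in the abstract can be illustrated with a minimal NumPy sketch: a learned projection scores each BLSTM hidden state, a softmax turns the scores into weights, and the weighted sum forms the sentence representation passed to the classifier. The function and parameter names (`attention_pool`, `W`, `u`) and the random toy inputs are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(H, W, u):
    # H: (T, d) BLSTM hidden states for a T-token sentence.
    # W: (d, d) projection and u: (d,) context vector are learned
    # parameters in a real model; here they are random placeholders.
    scores = np.tanh(H @ W) @ u   # one relevance score per token, shape (T,)
    alpha = softmax(scores)       # attention weights, non-negative, sum to 1
    c = alpha @ H                 # weighted sum of hidden states, shape (d,)
    return c, alpha

# Toy example: 5 tokens, hidden size 8.
rng = np.random.default_rng(0)
T, d = 5, 8
H = rng.standard_normal((T, d))
W = rng.standard_normal((d, d))
u = rng.standard_normal(d)
c, alpha = attention_pool(H, W, u)
```

The pooled vector `c` would then be fed to a softmax classification layer; in the full model, `H` would come from a BLSTM run over BERT token embeddings rather than random data.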

Publication Title

2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer, ICFTIC 2021


