In the information age, microblogs and online shopping platforms play an increasingly important role in people's daily life, study, and communication, and the data on these platforms carry rich sentiment information. More and more researchers are therefore studying short texts such as microblog posts and online shopping reviews. How to effectively represent the semantic features of such texts and deeply mine their latent sentiment information is the key difficulty. The features used in traditional machine learning methods are too sparse under the vector space model and lack the semantic information of short texts, so they cannot identify the semantic features and latent sentiment features of short texts well. Although deep learning has achieved good results in short text sentiment classification, the "black box" nature of deep learning models makes it impossible to explain the origin of the features extracted by the network layers.

To address these problems, this paper extracts shallow learning features of short texts from the perspectives of sentiment words, part of speech, and syntactic structure, including word position information, sentiment part of speech, and dependency relations among sentiment words. These shallow features are combined with the attention mechanism and the feature-extraction ability of convolutional neural networks in deep learning, integrating shallow and deep learning to capture the semantic and latent sentiment information of short texts, thereby strengthening the sentiment expressiveness of short text features and improving the performance of short text sentiment classification. To improve the interpretability of the "black box" neural network, this paper proposes a sentiment multichannel bidirectional long short-term memory network model (BM-ATT-BiLSTM). To overcome the relatively simple algorithms of traditional machine learning methods and their limitations in describing sample features, this paper further proposes two neural network models: a convolutional neural network combined with sentiment attention (CS-ATT-CNN) and a multi-kernel convolutional neural network combined with sentiment attention (CS-ATT-TCNN).

The BM-ATT-BiLSTM model (1) maps the shallow learning features, namely the sentiment part of speech, the position information, and the dependency relations among sentiment words, into a high-dimensional space to form three high-dimensional continuous feature vectors; (2) fuses each of the three feature vectors with the word embeddings and feeds the results into the BiLSTM to form three channels; (3) applies the attention mechanism to each of the three channels; (4) batch-normalizes the three channels and passes them to three fully connected layers; and (5) merges the three fully connected layers and feeds the result to a softmax classifier. BM-ATT-BiLSTM and the comparison models were evaluated on three data sets, COAE2014, NLPIR, and NLPCC2014, and the experimental results show that BM-ATT-BiLSTM performed best in all comparison experiments: on COAE2014, its F1 was 95.54%, 3.07% higher than the LSTM model; on NLPIR, its F1 was 88.76%, 1.67% higher than the LSTM model; and on NLPCC2014, its F1 was 73.06%, 0.66% higher than the LSTM model.

CS-ATT-CNN and CS-ATT-TCNN have similar structures and functions; since CS-ATT-CNN performs better than CS-ATT-TCNN, only CS-ATT-CNN is analyzed here. The CS-ATT-CNN model (1) extracts short text features from the word embeddings using a convolutional neural network; (2) extracts sentiment attention features from the word embeddings using the attention mechanism; (3) fuses the features extracted by the convolutional neural network with the sentiment attention features to form a new feature vector; (4) feeds the new feature vector into a support vector machine; and (5) uses the support vector machine as the final classifier. CS-ATT-CNN and the comparison models were also evaluated on COAE2014, NLPIR, and NLPCC2014. The experimental results show that CS-ATT-CNN achieves excellent precision, recall, and F1, outperforms the general convolutional neural network, and requires less training time than LSTM and its derivative models, although its overall performance is worse than that of the proposed BM-ATT-BiLSTM.

In summary, the proposed BM-ATT-BiLSTM, CS-ATT-CNN, and CS-ATT-TCNN models learn contextual semantic information in short texts, effectively mine the hidden sentiment information in texts to a certain extent, and alleviate problems such as semantic loss, sparse feature matrices, and dimension explosion. The three models generalize well and can effectively explain sentiment tendencies in short texts.
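The shallow learning features described above (word position, sentiment part of speech, and dependency relations among sentiment words) can be sketched as parallel integer-id sequences that would later index trainable embedding tables. The tiny lexicon, tag inventory, and toy dependency heads below are illustrative placeholders, not the resources used in the paper:

```python
# Sketch: turn one tokenized short text into three parallel integer-id
# sequences (position, sentiment part of speech, head-is-sentiment-word flag).
# The lexicon and tag ids here are hypothetical stand-ins.

SENTIMENT_LEXICON = {"great": "pos_adj", "terrible": "neg_adj", "love": "pos_verb"}
POS_TAG_IDS = {"none": 0, "pos_adj": 1, "neg_adj": 2, "pos_verb": 3}

def shallow_features(tokens, head_index):
    """head_index[i] = index of token i's syntactic head (toy dependency info)."""
    positions = list(range(len(tokens)))                  # word position feature
    senti_pos = [POS_TAG_IDS[SENTIMENT_LEXICON.get(t, "none")] for t in tokens]
    # dependency feature: 1 if the token's head is a sentiment word, else 0
    dep_to_senti = [1 if tokens[head_index[i]] in SENTIMENT_LEXICON else 0
                    for i in range(len(tokens))]
    return positions, senti_pos, dep_to_senti

tokens = ["i", "love", "this", "great", "phone"]
heads = [1, 1, 4, 4, 1]        # toy dependency heads, e.g. "i" depends on "love"
pos, spos, dep = shallow_features(tokens, heads)
```

Each of the three sequences then maps, via its own embedding table, to one of the three high-dimensional continuous feature vectors that are fused with the word embeddings.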
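The per-channel attention applied in step (3) of BM-ATT-BiLSTM can be sketched as a soft attention pooling over a channel's (Bi)LSTM hidden states; the parameter names and dimensions below are illustrative, assuming a standard additive-style formulation rather than the paper's exact parameterization:

```python
import numpy as np

def attention_pool(H, w, b, u):
    """Soft attention over a sequence of hidden states.

    H: (T, d) hidden states from one (Bi)LSTM channel.
    w: (d, d) projection, b: (d,) bias, u: (d,) context vector.
    Returns (context, alpha): the (d,) attention-weighted summary of the
    sequence and the (T,) attention weights, which sum to 1.
    """
    scores = np.tanh(H @ w + b) @ u          # (T,) unnormalized scores
    alpha = np.exp(scores - scores.max())    # numerically stable softmax
    alpha = alpha / alpha.sum()
    context = alpha @ H                      # weighted sum of hidden states
    return context, alpha

rng = np.random.default_rng(0)
T, d = 6, 8                                  # toy sequence length / hidden size
H = rng.normal(size=(T, d))
ctx, alpha = attention_pool(H, rng.normal(size=(d, d)), np.zeros(d), rng.normal(size=d))
```

The attention weights `alpha` are what give the model its interpretability: they indicate which positions in the short text each channel attends to.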
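Steps (1)–(3) of CS-ATT-CNN can likewise be sketched in NumPy: a 1-D convolution with max-over-time pooling extracts the short text features, and the result is concatenated with the sentiment attention feature to form the fused vector that steps (4)–(5) would hand to an SVM classifier. Kernel sizes and dimensions are illustrative, and the attention feature is a stand-in:

```python
import numpy as np

def conv_max_pool(E, filters):
    """1-D convolution over word embeddings with max-over-time pooling.

    E: (T, d) word-embedding matrix for one short text.
    filters: (n_filters, k, d) convolution kernels of width k.
    Returns an (n_filters,) pooled feature vector.
    """
    n_filters, k, d = filters.shape
    T = E.shape[0]
    feats = np.empty(n_filters)
    for f in range(n_filters):
        # "valid" convolution: one activation per window position
        acts = [np.tanh(np.sum(E[t:t + k] * filters[f])) for t in range(T - k + 1)]
        feats[f] = max(acts)                 # max-over-time pooling
    return feats

rng = np.random.default_rng(1)
E = rng.normal(size=(7, 5))                  # toy: 7 tokens, 5-dim embeddings
cnn_feat = conv_max_pool(E, rng.normal(size=(4, 3, 5)))
attn_feat = rng.normal(size=5)               # stand-in for the attention feature
fused = np.concatenate([cnn_feat, attn_feat])  # step (3): input to the SVM
```

Replacing softmax with an SVM as the final classifier is the distinguishing design choice of CS-ATT-CNN: the network acts as a learned feature extractor, and the margin-based classifier makes the decision.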