
Based On Bi-GRU And L-Softmax Text Classification Model

Posted on: 2022-02-10 | Degree: Master | Type: Thesis
Country: China | Candidate: Y Li | Full Text: PDF
GTID: 2518306323496524 | Subject: Applied Statistics
Abstract/Summary:
Text contains rich information. Research and analysis of feedback on goods and services plays a major role in product marketing, market-trend forecasting, and related decisions. Text classification has therefore become an important means of understanding user preferences and is widely applied in public opinion monitoring, news classification, and comment classification.

Existing classification methods for text data fall into two categories: those based on traditional machine learning and those based on deep learning. Deep learning avoids the curse of dimensionality, feature sparsity, and semantic ambiguity that arise in traditional machine learning, and in recent years it has been widely used in text classification. The Recurrent Neural Network (RNN) can extract latent features from text sequences well, but it suffers from vanishing and exploding gradients on long-range dependencies. The Long Short-Term Memory (LSTM) network solves the long-range dependency problem of the RNN. Each LSTM unit is controlled by three gates (the input gate, the forget gate, and the output gate), so computing the corresponding LSTM parameters is relatively complicated. A simpler and more practical network, the Gated Recurrent Unit (GRU), is therefore more commonly used. The GRU uses only two gates, making it simpler than the standard LSTM while retaining the LSTM's advantage in handling long-range dependencies. A Bi-GRU is composed of two unidirectional gated recurrent networks running in opposite directions; the forward and backward features are concatenated to obtain text features that carry context information.

The Softmax loss function used by the Softmax classifier in neural network models is simple and practical, but it does not explicitly guide the network toward learning highly discriminative features. To address this, L-Softmax loss, a variant of Softmax, is introduced into the model. Building on Softmax, a positive integer multiplier is applied to the angle between a sample and its class weight vector, widening the angular interval between different classes; this makes the classification condition more stringent and increases classification accuracy.

This paper combines the GRU network with the Large-Margin Softmax (L-Softmax) loss: text features are extracted with a Bi-GRU model, and the improved L-Softmax is then used to build the final classifier. Taking the output features of the Bi-GRU as the input of the L-Softmax classifier, an improved fusion model, large-margin Bi-GRU, is proposed. Experimental results show that the large-margin Bi-GRU model performs well on the news text classification task.
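The GRU update and the bidirectional feature extraction described above can be sketched in a few lines. The following is a minimal toy version with scalar states and hand-set scalar weights (the names `gru_cell` and `bigru_feature` are illustrative, not from the thesis); a real Bi-GRU operates on vectors with learned weight matrices.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step (scalar toy version; real GRUs use weight matrices).

    z is the update gate and r is the reset gate: the GRU's only two gates.
    """
    z = sigmoid(Wz * x + Uz * h)                # update gate, in (0, 1)
    r = sigmoid(Wr * x + Ur * h)                # reset gate, in (0, 1)
    h_cand = math.tanh(Wh * x + Uh * (r * h))   # candidate state
    return (1.0 - z) * h + z * h_cand           # blend old state and candidate

def bigru_feature(xs, pf, pb):
    """Run one GRU forward and one backward; concatenate the final states."""
    hf = hb = 0.0
    for x in xs:              # forward direction
        hf = gru_cell(x, hf, *pf)
    for x in reversed(xs):    # backward direction
        hb = gru_cell(x, hb, *pb)
    return (hf, hb)           # context feature covering both directions
```

The concatenated pair `(hf, hb)` is what the abstract calls text features with context information, and it is this feature that would be fed to the classifier.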
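The angular margin of L-Softmax can also be made concrete. In L-Softmax, the target-class logit ‖W_y‖‖x‖cos(θ_y) is replaced by ‖W_y‖‖x‖ψ(θ_y), where ψ(θ) = (−1)^k cos(mθ) − 2k on θ ∈ [kπ/m, (k+1)π/m]. Since ψ(θ) ≤ cos(θ) when m > 1, the correct class must beat the others by an angular margin, which is what makes the classification condition more stringent. A minimal sketch (function names are illustrative):

```python
import math

def psi(theta, m):
    """L-Softmax margin function: (-1)**k * cos(m*theta) - 2*k, piecewise on [0, pi].

    For m == 1 it reduces to cos(theta); for m > 1 it lies at or below
    cos(theta), so the target class must win by a wider angular margin.
    """
    k = min(int(theta * m / math.pi), m - 1)   # segment index on [0, pi]
    return ((-1) ** k) * math.cos(m * theta) - 2 * k

def margin_logit(w_norm, x_norm, theta, m):
    """Target-class logit under L-Softmax; non-target classes keep cos(theta)."""
    return w_norm * x_norm * psi(theta, m)
```

With m = 1 this is exactly the standard Softmax logit; raising m shrinks the target-class logit at a given angle, forcing the learned features apart.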
Keywords/Search Tags: Bi-GRU, Softmax Loss, L-Softmax, news text classification