
Research On Text Affective Computing Based On Deep Learning

Posted on: 2020-05-14    Degree: Doctor    Type: Dissertation
Country: China    Candidate: J F Zeng    Full Text: PDF
GTID: 1368330590958990    Subject: Computer system architecture
Abstract/Summary:
Due to the increasing popularity of the mobile Internet, social media and e-commerce platforms have accumulated massive sentiment resources, so efficiently mining sentiment information from massive text data with computer techniques has become particularly important. As a result, text affective computing has become a hot topic in contemporary cognitive science. Deep learning algorithms are neural networks comprising multiple layers of nonlinear transformations. Through layer-wise feature transformation, the representation of a sample in its original space is mapped into a new feature space where decisions can be made easily. The biggest difference between deep learning methods and traditional pattern recognition methods is that deep learning methods automatically learn semantic representations from high-dimensional raw data without carefully designed feature engineering. Therefore, in the study of text affective computing, deep learning techniques are effective in learning highly discriminative sentiment features. In this paper, we mainly investigate how to use deep learning algorithms to solve two important problems in text affective computing: text sentiment classification and sentiment-oriented text retrieval. The main contributions of this paper include the following four aspects.

(1) Syntactic information has been shown to enhance sentence representations in sentence-level text sentiment classification. Structurally, a paragraph is composed of several sentences. To address document-level text sentiment classification, we propose a syntax-aware encoder that hierarchically extracts sentiment features at the word and sentence levels. At the word level, an attention mechanism is introduced into the Child-Sum Tree-LSTM, which is built upon the dependency syntax tree, to generate effective sentence representations. At the sentence level, each sentence is processed by an attention-based LSTM model to produce the document representation. The experimental results show that introducing syntactic information into document-level text sentiment classification is effective, achieving good classification results in terms of Accuracy and RMSE. Moreover, our syntax-aware method beats NSC in terms of convergence speed.

(2) Observing a large number of reviews, we find that the sentiment polarity of each evaluated object in a review is often determined by its nearby context words; that is, it obeys spatial locality. Taking this spatial locality into consideration, we put forward a positional attention-based LSTM model, dubbed Pos ATT-LSTM, which not only takes into account the importance of each context word but also incorporates position-aware vectors representing the explicit positional relation between the aspect and its context words. The most important and challenging issues in our model are: 1) how to model the position context, and 2) how to exploit the position-aware vectors to enhance attention-based LSTM networks for aspect-level sentiment classification. Experiments on real-world datasets show that this algorithm is superior to other algorithms in terms of Accuracy. Moreover, in a further experiment we strengthen IAN with the designed position-aware vectors and obtain a performance boost, further demonstrating their effectiveness.

(3) For reviews containing two or more aspect terms, methods that model aspects singly process the different aspects of one opinionated sentence in isolation, so they are interfered with by the other aspects and the prediction accuracy of sentiment classification decreases. In other words, existing attentive methods ignore the disturbance of other aspects in the same sentence when computing the attention vector for the current aspect. To address this issue, we develop a deep model that models all aspects within one opinionated sentence at once using the positional attention mechanism for aspect-level sentiment classification. The algorithm adds a penalty term based on the Frobenius norm to the cross-entropy loss as a new objective function, and it is specifically used to predict the sentiment polarities of reviews containing two or more aspects. First, the pre-trained Pos ATT-LSTM is used to initialize the proposed model's parameters and compute an attention probability distribution for each aspect. Then, the Frobenius-norm penalty adjusts the probability distribution matrix during training so that different aspects are described by different parts of the review, i.e., redundancy and interference are excluded as much as possible when predicting the sentiment polarity of the current aspect. Experiments on real-world datasets show that this algorithm is superior to nine commonly used algorithms in terms of Accuracy when dealing with reviews containing two or more aspect terms.

(4) Sentiment-oriented text retrieval has attracted more and more attention due to the explosive growth of sentiment-laden reviews on social networks such as Twitter, Facebook, and Instagram. However, most current text hashing algorithms use traditional machine learning to learn hash functions in an unsupervised manner and largely rely on manually designed features, so the generated hash codes cannot preserve sentiment-level similarity well. Worse still, sentiment is ignored when measuring the similarity between two texts. To solve these problems, in the absence of hash tags, this paper proposes a two-stage self-supervised deep hashing approach for sentiment-oriented text retrieval. In the first stage, NSC+UPA is used to generate semantic document representations, which are transformed into approximate hash tags via the LE algorithm. In the second stage, a deep hashing algorithm learns the hash function under the joint supervision of sentiment labels and the approximate hash tags, mapping high-dimensional data to low-dimensional semantic hash codes. Extensive experiments on three well-known datasets demonstrate that the resulting hash codes stem from semantic sentiment-oriented representations and preserve sentiment-level similarity well.
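The position-aware weighting idea behind contribution (2) can be sketched as follows. This is a minimal illustration only: the linear distance-decay weight and the toy scoring function are assumptions for exposition, not the dissertation's exact Pos ATT-LSTM formulation, and all function names are hypothetical.

```python
import numpy as np

def position_weights(seq_len, aspect_idx):
    # Linear distance decay: context words closer to the aspect term receive
    # larger weights. (Hypothetical encoding; the dissertation's position-aware
    # vectors may be defined differently.)
    dist = np.abs(np.arange(seq_len) - aspect_idx)
    return 1.0 - dist / seq_len

def positional_attention(hidden, aspect_idx):
    # hidden: (seq_len, dim) LSTM hidden states for the context words.
    seq_len = hidden.shape[0]
    pos = position_weights(seq_len, aspect_idx)   # (seq_len,)
    scores = hidden.mean(axis=1) * pos            # toy per-word relevance score
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax -> attention weights
    context = weights @ hidden                    # (dim,) weighted summary vector
    return weights, context
```

The key point is that the position weights bias the attention distribution toward words near the aspect, implementing the spatial-locality observation above.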
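The penalty term in contribution (3) is described only as being "based on the Frobenius norm". A common instantiation of such a penalty on an attention matrix, used here as an assumption rather than the dissertation's exact definition, penalizes overlap between the per-aspect attention rows:

```python
import numpy as np

def frobenius_penalty(A):
    # A: (n_aspects, seq_len) attention probability matrix, one row per aspect.
    # ||A A^T - I||_F^2 is small when the rows attend to disjoint parts of the
    # review, pushing different aspects to be described by different words.
    AAT = A @ A.T
    return float(np.sum((AAT - np.eye(A.shape[0])) ** 2))

def total_loss(cross_entropy, A, lam=0.1):
    # Objective of the form described above: cross-entropy plus a
    # Frobenius-norm penalty weighted by a hyperparameter lam (assumed name).
    return cross_entropy + lam * frobenius_penalty(A)
```

With this form, two aspects attending to the same words inflate the penalty, while near-orthogonal attention rows drive it toward zero, which matches the stated goal of excluding redundancy and interference between aspects.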
Keywords/Search Tags: Text Affective Computing, Syntax-aware, Positional Attention, Modeling Multi-aspects, Self-supervised Deep Hashing