In recent years, natural language processing has made rapid progress in many fields. Deep learning, one of the core technologies behind this progress, has achieved great success in both academia and industry. Many researchers apply deep learning to text, using neural networks and attention mechanisms for sentiment analysis. However, because of the complexity of language itself, a sentence or a paragraph may carry several different sentiment polarities, so deep learning methods that assign a single polarity to a longer text run into a bottleneck. Fine-grained, aspect-level sentiment analysis has therefore become a hot topic in current sentiment analysis research.

However, existing studies have several shortcomings. First, the position of each word in the sentence relative to the aspect term is easily ignored, so the neural network and the attention mechanism cannot allocate weights reasonably when word embeddings are initialized. Second, the neural networks used are often simple, so the models fail to learn the semantic features of a sentence effectively; at a given time step, the network may erase important word-embedding weights produced at earlier steps. Finally, the attention mechanisms are poorly designed and do not work efficiently.

This study therefore proposes three models to address these problems. The first introduces position information when initializing the word vectors and constructs a position vector. The second combines two different neural networks to learn semantic features more fully. The third focuses on improving the design and application of the attention mechanism. The main contributions of this work are as follows:

1. A model based on position features and a multi-level interactive attention network is proposed for aspect-level sentiment analysis. The model constructs a position vector from the distance of each context word to the aspect term in the sentence, which enriches the word embeddings. Since words at different distances contribute differently to judging the sentiment polarity of a specific aspect term, the model assigns them different weights (see the first sketch after this list). To reduce training time, semantic features are extracted with a bidirectional multi-level interactive gated recurrent unit (GRU). Finally, an attention mechanism builds the final sentence representation from aspect to context and from context to aspect. Experimental results on four common datasets show a clear improvement over the baseline models.

2. A model using an interaction matrix and a global attention network is proposed for aspect-based sentiment analysis. In addition to the position information between words in a sentence, this model introduces two different neural networks, a long short-term memory (LSTM) network and a convolutional neural network (CNN). After the semantic features of the sentence are fully learned, the relationship between the aspect and the context is fused into an interaction matrix, which is then combined with a global attention mechanism to compute the final sentence representation (see the second sketch after this list). Experimental results on five common datasets show a clear improvement over the baseline models.
3. A novel network with multiple attention mechanisms is proposed for aspect-level sentiment analysis. This model is an improved architecture built on BERT; unlike the two models above, it uses BERT to initialize the word embeddings. It then applies two different types of attention: an intra-level attention mechanism and an inter-level attention mechanism. The intra-level attention mechanism is a stacked structure that mainly consists of a multi-head self-attention mechanism and a point-wise feed-forward network (see the third sketch after this list); the inter-level attention mechanism uses an interactive global attention structure. In particular, we put forward a feature-focus attention module to help the model capture contextual information. Results on the five datasets show that our model outperforms models proposed in the last two years.
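To make the position-feature idea in contribution 1 concrete, the following minimal Python sketch scales each word embedding by its distance to the aspect term. The linear decay 1 - d/n, the function name, and the example indices are illustrative assumptions, not the exact formula of the proposed model.

```python
import numpy as np

def position_weighted_embeddings(embeddings, aspect_start, aspect_end):
    """Scale each token embedding by its distance to the aspect term.

    embeddings:   (seq_len, dim) matrix of word embeddings.
    aspect_start: index of the first aspect token.
    aspect_end:   index one past the last aspect token.
    """
    seq_len = embeddings.shape[0]
    weights = np.empty(seq_len)
    for i in range(seq_len):
        if aspect_start <= i < aspect_end:
            d = 0                        # aspect tokens keep full weight
        elif i < aspect_start:
            d = aspect_start - i         # distance to the left of the aspect
        else:
            d = i - (aspect_end - 1)     # distance to the right of the aspect
        weights[i] = 1.0 - d / seq_len   # linear decay: closer words weigh more
    return embeddings * weights[:, None]

# Example: "the food was great but the service was slow", aspect = "food"
emb = np.random.randn(9, 300)            # 9 tokens, 300-dim embeddings
weighted = position_weighted_embeddings(emb, aspect_start=1, aspect_end=2)
```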
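For contribution 2, the sketch below shows one common way to fuse aspect and context hidden states into an interaction matrix and reduce it to a sentence vector with global attention weights. The dot-product interaction and the softmax reductions are assumptions drawn from the aspect-level sentiment analysis literature, not necessarily the paper's exact design.

```python
import torch
import torch.nn.functional as F

def interaction_global_attention(h_ctx, h_asp):
    """Fuse context and aspect hidden states via an interaction matrix.

    h_ctx: (n, d) context hidden states (e.g., from the LSTM/CNN layers).
    h_asp: (m, d) aspect-term hidden states.
    Returns a (d,) sentence vector weighted by global attention scores.
    """
    inter = h_ctx @ h_asp.t()               # (n, m) interaction matrix
    ctx_attn = F.softmax(inter, dim=0)      # column-wise attention over context
    asp_attn = F.softmax(inter, dim=1)      # row-wise attention over aspect
    asp_avg = asp_attn.mean(dim=0)          # (m,) averaged aspect importance
    global_w = ctx_attn @ asp_avg           # (n,) global weight per context word
    return global_w @ h_ctx                 # (d,) final sentence representation
```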
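For contribution 3, the following PyTorch sketch shows a single stacked intra-level unit, i.e. multi-head self-attention followed by a point-wise feed-forward network over BERT token embeddings. The residual-plus-normalization layout and all hyper-parameters are assumptions for illustration, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class IntraLevelBlock(nn.Module):
    """One stacked intra-level unit: multi-head self-attention followed by a
    point-wise feed-forward network, each wrapped with a residual connection
    and layer normalization (Transformer-style layout, assumed here)."""

    def __init__(self, dim=768, heads=8, ffn_dim=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, ffn_dim), nn.ReLU(), nn.Linear(ffn_dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        # x: (batch, seq_len, dim) token embeddings, e.g. BERT outputs.
        attn_out, _ = self.attn(x, x, x)     # multi-head self-attention
        x = self.norm1(x + attn_out)         # residual + layer norm
        x = self.norm2(x + self.ffn(x))      # point-wise feed-forward
        return x

# Example usage on dummy BERT-sized embeddings.
block = IntraLevelBlock()
out = block(torch.randn(2, 16, 768))         # (2, 16, 768)
```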