Extracting the core opinion information from review sentences is a fundamental task in fine-grained Aspect-Based Sentiment Analysis. The task aims to extract pairs of aspect terms and opinion terms from review sentences, where the aspect term describes the target the reviewer comments on, and the opinion term refers to the words the reviewer uses to express that opinion. This paper studies the problem of pair-wise aspect and opinion term extraction. Existing methods either require a large amount of complex annotation of the data or must generate a large number of negative samples, which is labor-intensive and computationally expensive. To address this, this paper first reformulates the problem, converting the extraction task into a generation task, and proposes a generation framework based on a sequence-to-sequence (Seq2Seq) model to generate pair-wise aspect and opinion terms. On top of this generative model, the contextual representation of the text is enhanced with external knowledge to improve the model's accuracy on pair-wise aspect and opinion term extraction. The main work of this paper includes:

1. To avoid the complex annotation and large numbers of negative samples required by existing methods, the pair-wise aspect and opinion term extraction task is converted into a text generation task, and an end-to-end generation framework based on a Seq2Seq model is presented to generate pair-wise aspect and opinion terms. The encoder and decoder of the large pretrained model BART are adopted as the encoder and decoder of the Seq2Seq model in the proposed framework, combined with a pointer mechanism so that pair-wise aspect and opinion terms are generated directly during decoding. Experimental results show that the proposed model outperforms the baseline models on three datasets.

2. Pretrained language models are built on statistical modeling: they learn implicit associations between entities from co-occurrence information, so they lack commonsense knowledge and do not have the capacity for deep understanding or logical reasoning. External knowledge can provide a pretrained model with more comprehensive and richer entity semantics and entity-association information. We therefore adopt a method of embedding a knowledge base into the large-scale pretrained model, using structured human knowledge to enhance the contextual representation. Building on the Seq2Seq-based aspect-opinion pair generation model above, we perform knowledge augmentation of the contextual information. We first retrieve relevant entity embeddings from WordNet using an integrated entity linker, and then update the contextual vector representations through a form of word-to-entity attention. Finally, the knowledge-augmented contextual representation is used for the aspect-opinion pair extraction task. Experimental results show that the knowledge-enhanced model performs better than the model without knowledge enhancement.
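The pointer mechanism in contribution 1 can be illustrated with a minimal sketch. Here the decoder is assumed to emit a flat sequence of source-token indices (start/end of the aspect span, then start/end of the opinion span); the scoring function, the index layout, and all names below are illustrative assumptions, not the thesis's exact implementation.

```python
import numpy as np

def pointer_logits(dec_state, enc_states):
    """Pointer scores: dot product of the decoder state with each encoder
    state; the argmax is the source position the decoder 'points' at.
    (Assumed scoring form for this sketch.)"""
    return enc_states @ dec_state

def decode_pairs(index_seq, tokens):
    """Turn a flat index sequence [a_start, a_end, o_start, o_end, ...]
    produced by the decoder into (aspect, opinion) text pairs over the
    source tokens."""
    pairs = []
    for i in range(0, len(index_seq) - 3, 4):
        a_s, a_e, o_s, o_e = index_seq[i:i + 4]
        aspect = " ".join(tokens[a_s:a_e + 1])
        opinion = " ".join(tokens[o_s:o_e + 1])
        pairs.append((aspect, opinion))
    return pairs

tokens = ["the", "battery", "life", "is", "amazing"]
# A decoder output pointing at "battery life" (aspect) and "amazing" (opinion):
print(decode_pairs([1, 2, 4, 4], tokens))  # [('battery life', 'amazing')]
```

Because the decoder outputs positions in the input rather than vocabulary tokens, every generated aspect and opinion term is guaranteed to be a span of the original sentence.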
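The word-to-entity attention in contribution 2 can be sketched as follows. This is a minimal NumPy sketch assuming a scaled dot-product attention from word representations to linked entity embeddings with a residual update; the projection-free scoring and the residual form are assumed details, not the thesis's exact formulation.

```python
import numpy as np

def word_to_entity_attention(H, E):
    """Knowledge augmentation via word-to-entity attention (sketch).

    H: (n_words, d) contextual word representations from the encoder.
    E: (n_entities, d) entity embeddings retrieved from WordNet by the
       entity linker.
    Returns updated word representations with the same shape as H.
    """
    d = H.shape[1]
    scores = H @ E.T / np.sqrt(d)                  # word-to-entity scores
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over entities
    K = weights @ E                                # knowledge vector per word
    return H + K                                   # residual context update

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))   # 5 words, hidden size 8
E = rng.normal(size=(3, 8))   # 3 linked entity embeddings
H_aug = word_to_entity_attention(H, E)
print(H_aug.shape)  # (5, 8)
```

Each word attends over all retrieved entities, so entity semantics relevant to that word are mixed into its contextual vector before the representation is passed on to the pair extraction task.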