The rise of social networks enables users to discuss and share their views on events and products on the Internet. Mining this information and analyzing its sentiment polarity is of great value to developers and governments, and text sentiment analysis has become one of the most active research directions in natural language processing. Within this field, this paper addresses sentence-level text sentiment classification and target-oriented text sentiment classification and, reflecting the different ways users express themselves on current social networks, proposes both a single-modal (text-only) sentiment classification model and a text-image multi-modal model.

For sentence-level sentiment analysis, a pre-trained, differentially trained orthogonal LSTM architecture is proposed. To address the problem that excessive aggregation of hidden states in the traditional LSTM prevents the attention mechanism from playing its full role, the LSTM is improved, and in the text representation stage RoBERTa is further pre-trained with a manually constructed task-adaptive pre-training procedure. The sentence-level model is then extended to target-oriented sentiment classification: a target embedding lets the model focus on the sentiment expressed toward a specific target in the sentence, and transfer learning alleviates the shortage of labeled data in related domains. Comparative experiments against mainstream models on public datasets demonstrate the model's superiority on standard evaluation metrics.

In addition, since social media users increasingly use images to supplement text when expressing sentiment, a pre-trained text-image multi-modal sentiment classification model is designed. Two pre-trained models extract information from the respective modalities, the modalities are fused and aligned through multiple pre-training tasks, and momentum distillation is used to stabilize training. The model achieves performance improvements on a variety of tasks.
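The core of the target-oriented model described above is attention over per-token hidden states conditioned on a target embedding. A minimal NumPy sketch of that idea follows; the shapes, the dot-product scoring function, and the mean-pooled target are illustrative assumptions, not the paper's exact design, and in practice the hidden states would come from the improved (Bi)LSTM over RoBERTa representations:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def target_attended_sentence(hidden, target):
    """hidden: (T, H) per-token hidden states; target: (H,) pooled
    embedding of the target phrase. Each token is scored by its
    similarity to the target, and the sentence representation is the
    attention-weighted sum of hidden states."""
    scores = hidden @ target          # (T,) relevance of each token to the target
    weights = softmax(scores)         # attention distribution over tokens
    return weights @ hidden, weights  # (H,) target-focused sentence vector

T, H = 7, 16                          # assumed: 7 tokens, hidden size 16
hidden = rng.normal(size=(T, H))
target = rng.normal(size=H)
sent_vec, weights = target_attended_sentence(hidden, target)
print(sent_vec.shape)                 # (16,)
print(round(weights.sum(), 6))        # 1.0 — weights form a distribution
```

With two different targets in the same sentence, the same hidden states yield two different attention distributions and hence two different sentence vectors, which is what allows per-target sentiment decisions.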
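The momentum distillation mentioned for the multi-modal model relies on a momentum (EMA) teacher: the teacher's parameters are an exponential moving average of the student's, so its pseudo-targets change smoothly and stabilize training. A hedged sketch of just the EMA update follows; the momentum value 0.995 and the flat parameter dictionary are assumptions for illustration:

```python
import numpy as np

def ema_update(teacher, student, momentum=0.995):
    """In-place EMA update of teacher parameters toward the student.
    A momentum close to 1 makes the teacher change slowly."""
    for name in teacher:
        teacher[name] = momentum * teacher[name] + (1 - momentum) * student[name]
    return teacher

# toy parameters: teacher starts at 0, student fixed at 1
teacher = {"w": np.zeros(3)}
student = {"w": np.ones(3)}
for _ in range(10):                  # ten training steps
    teacher = ema_update(teacher, student)
print(teacher["w"][0])               # drifts slowly toward the student's value
```

After k steps against a fixed student the teacher sits at 1 - 0.995^k, i.e. it tracks the student with a long lag; in the actual model the student keeps moving, and the teacher provides smoothed soft targets for the distillation loss.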