
Image Sentiment Prediction Network Based On Saliency Region

Posted on: 2022-04-18
Degree: Master
Type: Thesis
Country: China
Candidate: F Lin
Full Text: PDF
GTID: 2518306554458504
Subject: Computer application technology
Abstract/Summary:
In recent years, with the rapid development of Internet of Things and artificial intelligence technology, the AI Internet of Things (AIoT) has begun to serve people in many areas of daily life, such as intelligent logistics, intelligent transportation, intelligent medical care, and smart homes. Getting machines to understand human emotional states while providing these services remains a challenge. As sentiment analysis attracts more and more attention, visual image sentiment has become an important component of it. A key problem that cannot be avoided when analyzing the emotion of an image is how to connect low-level visual features with high-level emotional semantics, known as the "emotional gap".

Most existing image sentiment analysis work takes the whole image as input, ignoring the fact that different regions of an image differ in how strongly they express emotion. This does not match the biological characteristics of human vision: studies show that the human eye focuses on particular parts of an image when processing visual information, and the large number of features only weakly related to emotion may degrade model performance. Some researchers have tried to isolate the regions that play an important role in an image's emotional expression, but their methods mainly extract either complete objects or regions with strong color contrast. As a result, they do not obtain accurate boundaries for emotion-salient regions, and they are not applicable to images without specific objects or without obvious color differences.

To address these problems, this paper proposes a deep learning framework that automatically discovers the pixel regions that induce an image's visual emotional expression. Unlike most research, which focuses only on features of the whole image, the framework proceeds in three steps. First, convolutional neural network visualization is applied during an initial training pass to extract the image's emotion-salient regions. Then, feature enhancement is performed on the features of the salient regions. Finally, the processed data are used to retrain the model. The aim is to make the convolutional neural network pay more attention, during learning, to the features that matter for emotion classification, so that the resulting model recognizes visual emotional states more robustly. At the same time, the framework does not depend on object categories, is applicable to any type of image, and is therefore more general than existing methods. The results obtained by this model on both the Abstract and Emotion6 datasets are better than those obtained by current methods.
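The three-step pipeline above can be sketched in miniature. The abstract does not give implementation details, so the following is only an illustrative assumption: a CAM-style saliency map (weighted sum of convolutional feature maps), a threshold to obtain the salient-region mask, and a simple gain applied inside the mask as the "feature enhancement" step. All function names (`class_activation_map`, `saliency_mask`, `enhance`) and the use of plain Python lists in place of real tensors are hypothetical.

```python
# Hedged sketch of the saliency-region pipeline described in the abstract.
# Assumption: saliency is computed CAM-style from conv feature maps and
# per-channel class weights; "enhancement" is a multiplicative gain.

def class_activation_map(feature_maps, class_weights):
    """Weighted sum of conv feature maps -> one saliency map."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wgt in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wgt * fmap[i][j]
    return cam

def saliency_mask(cam, ratio=0.5):
    """Binarize the saliency map at a fraction of its peak value."""
    thresh = ratio * max(max(row) for row in cam)
    return [[1 if v >= thresh else 0 for v in row] for row in cam]

def enhance(feature_maps, mask, gain=2.0):
    """Amplify activations inside the salient region before retraining."""
    return [
        [[v * gain if mask[i][j] else v for j, v in enumerate(row)]
         for i, row in enumerate(fmap)]
        for fmap in feature_maps
    ]

# Toy usage: two 2x2 feature maps.
fm = [[[1.0, 0.0], [0.0, 2.0]],
      [[0.0, 1.0], [3.0, 0.0]]]
cam = class_activation_map(fm, [1.0, 0.5])   # [[1.0, 0.5], [1.5, 2.0]]
mask = saliency_mask(cam)                    # [[1, 0], [1, 1]]
enhanced = enhance(fm, mask)
```

In a real system the mask would be produced at the resolution of the last convolutional layer and upsampled to the input image before the retraining pass; the sketch only shows the data flow.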
Keywords/Search Tags: AI Internet of Things (AIoT), image visual emotion, emotion-salient region, emotional gap