
Research On Video Sentiment Content Analysis Method Based On Protagonist And Convolutional Neural Network

Posted on: 2018-01-29
Degree: Master
Type: Thesis
Country: China
Candidate: Z B Jiang
Full Text: PDF
GTID: 2358330536456335
Subject: Software engineering
Abstract/Summary:
With the rapid development of intelligent devices and network technology, a large number of videos are shared on the Internet every day, and automatic video content analysis is needed to organize these massive video data effectively. Unlike traditional content-based video analysis, which typically identifies the main event in a video and rarely considers the emotions the video elicits, affective content analysis aims to identify videos that can evoke certain emotions in viewers. Affective recognition is an important and challenging task in video content analysis. However, most previous methods focus on how to extract effective features from videos, and several issues remain worth investigating: what information in a video expresses emotion, and which information actually affects the audience's emotions. Moreover, most previous methods use only spatial-domain information for emotion analysis, and few exploit temporal-domain information. Taking these issues into account, this paper proposes a new video affective content analysis method based on protagonist information and a Convolutional Neural Network (CNN). The main work of this paper includes:

(1) Classic affective analysis methods consider only low-level features such as audio, ignoring the video frames, which are an important carrier of emotional information. In this paper, we first extract key frames from videos, then extract still-image features from them with a CNN, and finally add these image features to the affective analysis. Considering that not all parts of an image help induce emotion, we extract image patches from each key frame based on SIFT descriptors; these patches are then used to represent the affective content of video clips. We also explore the effectiveness of different feature fusion methods for video affective analysis.

(2) People mostly focus on the actors, especially the protagonist, in videos. Inspired by this, the paper presents video affective analysis methods based on human faces and on the protagonist's face, respectively: face detection and recognition steps are added to the key frame extraction stage. Because face-based affective analysis places additional requirements on the data, we also build an emotionally annotated video database.

(3) Most existing methods consider only spatial-domain information for affective analysis. In this paper, we integrate the classical temporal-domain cue, optical flow, into the proposed method: the extracted optical flow is transformed into RGB images, and a CNN is then used to extract features from these optical-flow images for affective analysis. Optical flow captures a certain amount of the video's action information, and such action information helps stimulate viewers' emotions.
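The key-frame extraction step in contribution (1) is not detailed in the abstract; one common, minimal approach is to keep a frame whenever its colour histogram differs sharply from the previous frame's. The sketch below assumes greyscale frames, and the function name, bin count, and threshold are illustrative choices, not the thesis's parameters.

```python
import numpy as np

def keyframes_by_histogram(frames, n_bins=16, threshold=0.4):
    """Select key frames where the grey-level histogram changes sharply.

    frames: list of (H, W) uint8 arrays. Returns indices of selected
    frames; the first frame is always kept.
    """
    keep = [0]
    prev_hist, _ = np.histogram(frames[0], bins=n_bins, range=(0, 256))
    prev_hist = prev_hist / prev_hist.sum()
    for i, frame in enumerate(frames[1:], start=1):
        hist, _ = np.histogram(frame, bins=n_bins, range=(0, 256))
        hist = hist / hist.sum()
        # L1 distance between normalised histograms lies in [0, 2]
        if np.abs(hist - prev_hist).sum() > threshold:
            keep.append(i)
        prev_hist = hist
    return keep
```

A shot-boundary detector of this kind tends to over-select on gradual transitions; the threshold trades off how many frames reach the CNN stage.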
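Contribution (1) also crops image patches around SIFT keypoints before CNN feature extraction. The sketch below assumes the keypoint locations have already been produced by a SIFT detector (the abstract does not specify the patch size; 32 pixels here is illustrative):

```python
import numpy as np

def extract_patches(image, keypoints, size=32):
    """Crop fixed-size square patches centred on detected keypoints.

    image: (H, W, C) array; keypoints: iterable of (x, y) pixel
    coordinates, e.g. locations returned by a SIFT detector. Keypoints
    too close to the border are skipped so every patch has the same
    shape and can be batched for the CNN.
    """
    half = size // 2
    h, w = image.shape[:2]
    patches = []
    for x, y in keypoints:
        x, y = int(round(x)), int(round(y))
        if half <= x <= w - half and half <= y <= h - half:
            patches.append(image[y - half:y + half, x - half:x + half])
    return patches
```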
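The abstract does not say how the protagonist is singled out after face detection and recognition in contribution (2). One plausible heuristic is to take the identity that appears in the most key frames; the sketch below illustrates that assumption only and is not the thesis's stated rule.

```python
from collections import Counter

def protagonist_id(face_tracks):
    """Pick the protagonist as the identity seen in the most frames.

    face_tracks: per-frame lists of identity labels, e.g. produced by a
    face detection + recognition step. Returns the most frequent label,
    or None if no faces were detected at all.
    """
    counts = Counter(label for labels in face_tracks for label in labels)
    if not counts:
        return None
    return counts.most_common(1)[0][0]
```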
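For contribution (3), the flow-to-RGB transform can be sketched as follows. The thesis does not specify its colour coding; the linear mapping below (signed components to red/green, magnitude to blue) is one simple choice that lets an ordinary image CNN consume the motion field.

```python
import numpy as np

def flow_to_rgb(flow, max_flow=None):
    """Encode a dense optical-flow field as an RGB image.

    flow: (H, W, 2) array of (dx, dy) displacements. Horizontal and
    vertical components map to red and green (128 = zero motion), and
    flow magnitude maps to blue.
    """
    dx, dy = flow[..., 0], flow[..., 1]
    mag = np.sqrt(dx ** 2 + dy ** 2)
    if max_flow is None:
        max_flow = max(mag.max(), 1e-8)  # avoid division by zero
    rgb = np.empty(flow.shape[:2] + (3,), dtype=np.uint8)
    rgb[..., 0] = np.clip(128 + 127 * dx / max_flow, 0, 255)
    rgb[..., 1] = np.clip(128 + 127 * dy / max_flow, 0, 255)
    rgb[..., 2] = np.clip(255 * mag / max_flow, 0, 255)
    return rgb
```

In practice the dense flow itself would come from a standard estimator (e.g. Farneback's method in OpenCV), and the resulting images are fed to the CNN exactly like still frames.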
Keywords/Search Tags:Affective analysis, Video content analysis, Protagonist information, Convolutional Neural Network, Optical flow