
Study On Mapping Method Of Image Features And Emotional Semantics

Posted on: 2009-03-15
Degree: Master
Type: Thesis
Country: China
Candidate: J Li
Full Text: PDF
GTID: 2178360245965362
Subject: Computer application technology

Abstract/Summary:
With the rapid development of human-computer interaction systems, emotion in images has received much attention in recent years, both in feature extraction and in automatic image classification. Effectively mining the mapping rules between images and emotions, an exploration of affective computing within image processing, is a new and challenging frontier topic.

From a signal-processing point of view, an image signal carries visual features, linguistic information, and emotion. Emotions are traditionally classified into two main categories: primary (basic) and secondary (derived). Primary emotions, including fear, anger, joy, sadness, and disgust, are generally those experienced by all social mammals and have particular manifestations associated with them. Secondary emotions, such as pride, gratitude, tenderness, and surprise, are variations or combinations of primary emotions and may be unique to humans.

Recent research has shown that the visual features of an image, such as color, texture, and shape, play an important role in content-based image retrieval (CBIR). However, the emotional response an image evokes does not depend on low-level features such as color and texture alone; high-level objects in an image also elicit different emotional reactions. A dog in an image may make one feel comfortable, and flowers may make one feel pleasant. The same object may even produce different emotional effects: one may feel affection for water as the source of life, yet fear a flood. This distinguishes emotional semantic retrieval from general semantic image retrieval. Emotional semantic classification should therefore combine high-level semantic features with low-level visual features, but implementing this combination is difficult.

Traditional classification models carry out classification using pre-defined labels, but real-world emotion is rich and varied.
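To make the low-level feature discussion above concrete, the following is a minimal sketch of a normalized per-channel color histogram, one of the simplest color features used in CBIR. This is an illustrative example only, not the thesis's actual descriptors; the MPEG-7 color descriptors analyzed later are considerably more elaborate, and the function name `color_histogram` is hypothetical.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Concatenated per-channel histogram of an RGB image (uint8),
    normalized so the feature vector sums to 1."""
    feats = []
    for c in range(3):
        h, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
        feats.append(h)
    v = np.concatenate(feats).astype(float)
    return v / v.sum()
```

Such a vector (here 3 x 8 = 24 dimensions) is one of the low-level inputs a mapping model could consume alongside texture and shape features.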
Emotion is strongly subjective and fuzzy, and the boundaries between emotions are blurred, so a crisp classification cannot be realized directly. Based on the idea of "first clustering, then classifying," we first cluster the visual features of images, label the resulting clusters with a clustering model, and thereby produce emotion labels. We then carry out classification experiments on the vision-emotion features, realize the classification, and complete the mapping from image features to emotional semantics. Our work is summarized as follows:

Firstly, through extensive experiments, research, and literature study, we accumulate correspondences between emotions and images; study the emotion-recognition capability of commonly used visual features, including color, texture, and shape; analyze the MPEG-7 descriptors; and select useful features based on an SOM feasibility study.

Secondly, by synthesizing classical emotion-space classification methods, we propose an emotion-space method that portrays and quantifies emotion, realizing the digitization of emotion.

Thirdly, we build a preliminary visual-feature emotion-recognition model with a self-organizing map (SOM), carry out image emotion clustering and simulation experiments, discover the mapping rules from visual features, and complete the mapping between images and emotions via emotion similarity.

Finally, we select 852 images from the CAPS image library of the Chinese Academy of Sciences for the mapping experiments, and make a comparison using three classification algorithms (Naive Bayes, Random Forest, and Support Vector Machine) to test the feasibility and accuracy of the proposed method. The experimental results are presented, analyzed, and discussed.
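The clustering half of the "first clustering, then classifying" pipeline described above can be sketched as a small self-organizing map in NumPy. This is a minimal illustrative SOM, not the thesis's implementation: the grid size, decay schedule, and the helper names `train_som` and `som_label` are all assumptions made for the example. Each image's feature vector is assigned the index of its best-matching unit, and those cluster indices play the role of the emotion labels later fed to a classifier.

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=500, lr=0.5, sigma=1.5, seed=0):
    """Train a small SOM on row-vector features; returns the weight grid."""
    rng = np.random.default_rng(seed)
    w = rng.random((grid[0], grid[1], data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: node whose weight vector is closest to x.
        d = np.linalg.norm(w - x, axis=2)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Exponentially decay learning rate and neighborhood width.
        lr_t = lr * np.exp(-t / iters)
        sig_t = sigma * np.exp(-t / iters)
        ii, jj = np.indices(d.shape)
        h = np.exp(-((ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2)
                   / (2 * sig_t ** 2))
        # Pull the BMU and its neighbors toward the sample.
        w += lr_t * h[..., None] * (x - w)
    return w

def som_label(w, x):
    """Cluster label for feature vector x: flattened index of its BMU."""
    d = np.linalg.norm(w - x, axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return i * w.shape[1] + j
```

In the pipeline described above, these flattened BMU indices would be mapped to emotion labels, after which a supervised classifier (Naive Bayes, Random Forest, or SVM) is trained to map visual features directly to those labels.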
Keywords/Search Tags: emotion quantification, feature extraction, self-organizing map, MPEG-7