
Large Scale Image Retrieval Based On Visual Attributes And Semantic Relations

Posted on: 2015-06-03
Degree: Master
Type: Thesis
Country: China
Candidate: Y Liu
GTID: 2298330467985650
Subject: Signal and Information Processing
Abstract/Summary:
With the development of the Internet and mobile devices, the amount of digital images is growing at an unprecedented speed. As a result, large-scale image retrieval has drawn wide attention in the computer vision community. Traditional content-based image retrieval (CBIR) systems rely on low-level features, which inevitably leads to two problems: the semantic gap between low-level features and high-level semantics, and the intention gap between query images and user intentions. To address these problems, attribute-based image retrieval was proposed. Attributes are observable properties of images that serve as middle-layer cues bridging low-level features and high-level semantic labels. In this paper, we address the problem of image retrieval based on semantic attributes. Specifically, we focus on multi-attribute image retrieval with semantic relations.

Multi-attribute image retrieval systems usually involve three steps: feature extraction, attribute classifier learning, and similarity measurement. Although impressive retrieval results based on semantic attributes have been reported in recent years, several aspects remain almost untouched. First, traditional attribute classifiers use the same feature for different attributes, ignoring the characteristics of individual attributes. Second, traditional systems build flat classifiers for attribute classification without taking the richer semantic relations among attributes into account. Moreover, existing methods tend to learn only the co-occurrence relations of attributes for image retrieval, which is restricted by the coverage of the training set and thus leads to poor generalization. In this work, we address the above problems. Our contributions are threefold: (1) We study the multi-label dimensionality reduction method MDDM (Multi-label Dimensionality reduction via Dependence Maximization).
By analyzing its principles, we propose to incorporate the semantic and visual relations between attribute labels into the MDDM learning process, which is shown to improve the performance of feature selection. (2) We use external semantic knowledge to build a hierarchical attribute classifier. By implicitly utilizing semantic relations, our classifiers achieve better performance than flat ones. Moreover, since a single taxonomy may not suffice to represent the rich relations between attributes, we build a semantic forest that leverages semantic information from multiple taxonomies. (3) We propose to incorporate semantic relations, in addition to co-occurrence relations between attributes, into the structural learning process in order to improve the generalization of the system.

Extensive experiments are conducted on several attribute benchmarks. The results show that our approach outperforms several state-of-the-art methods and achieves promising results in cross-dataset settings.
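The three-step pipeline mentioned above (feature extraction, attribute classifier learning, similarity measurement) can be sketched minimally. Everything here is an illustrative stand-in, not the thesis's actual features, classifiers, or datasets: the "features" are toy intensity statistics, and each attribute classifier is a simple linear scorer.

```python
import math

def extract_features(image):
    # Stand-in for a real low-level descriptor (e.g. color/texture features);
    # here: mean intensity and intensity range of a flat pixel list.
    return [sum(image) / len(image), max(image) - min(image)]

def attribute_scores(features, classifiers):
    # Each "classifier" is a linear scorer (w, b) squashed to [0, 1],
    # giving one confidence per attribute.
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))
    return [sigmoid(sum(wi * xi for wi, xi in zip(w, features)) + b)
            for (w, b) in classifiers]

def similarity(query_attrs, db_attrs):
    # Negative Euclidean distance in attribute space (higher = more similar).
    return -math.sqrt(sum((q - d) ** 2 for q, d in zip(query_attrs, db_attrs)))

def retrieve(query_image, database, classifiers, top_k=2):
    # Rank database images by attribute-space similarity to the query.
    q = attribute_scores(extract_features(query_image), classifiers)
    ranked = sorted(
        database.items(),
        key=lambda kv: similarity(
            q, attribute_scores(extract_features(kv[1]), classifiers)),
        reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

The point of the middle step is that ranking happens in attribute space rather than raw feature space, which is what lets semantic relations between attributes influence retrieval.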
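One way to picture a hierarchical attribute classifier and a semantic forest is the sketch below. The specific scoring rule (multiplying per-node scores along a root-to-attribute path, then averaging over taxonomies) and all taxonomy contents are assumptions made for illustration, not the thesis's actual formulation.

```python
def path_to(taxonomy, attribute, node="root", path=None):
    # Depth-first search for the root-to-attribute path in a
    # {parent: [children]} taxonomy.
    path = (path or []) + [node]
    if node == attribute:
        return path
    for child in taxonomy.get(node, []):
        found = path_to(taxonomy, attribute, child, path)
        if found:
            return found
    return None

def hierarchical_score(taxonomy, node_scores, attribute):
    # Multiply per-node classifier outputs along the path (root excluded),
    # so an attribute's score depends on its semantic ancestors.
    score = 1.0
    for node in path_to(taxonomy, attribute)[1:]:
        score *= node_scores[node]
    return score

def forest_score(taxonomies, node_scores, attribute):
    # A "semantic forest": average the hierarchical score over several
    # taxonomies, since one taxonomy may not capture all relations.
    scores = [hierarchical_score(t, node_scores, attribute) for t in taxonomies]
    return sum(scores) / len(scores)
```

Averaging over taxonomies mirrors the motivation stated above: a single taxonomy might not be enough, so the forest pools semantic evidence from several.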
Keywords/Search Tags: Visual Attributes, Image Retrieval, Semantic Relations