
Research On Cross-modal Hashing And Quantization Retrieval Based On Discriminative Analysis

Posted on: 2020-01-19
Degree: Master
Type: Thesis
Country: China
Candidate: X M Li
Full Text: PDF
GTID: 2428330575963083
Subject: Signal and Information Processing
Abstract/Summary:
With the development of Internet information technology, the amount of multimedia data has grown explosively, and fast search across different modalities remains a challenging problem for massive-scale information retrieval. Cross-modal hashing and quantization embed heterogeneous multimedia data into a shared space and encode high-dimensional features as compact binary codes. Owing to their low memory and computational costs, cross-modal hashing and quantization have attracted extensive interest. Moreover, label information is an important feature and can be used to construct semantic correlations among multimedia data. In general, existing cross-modal approaches aim at preserving inter-modal and intra-modal similarities; however, they ignore the discriminative property of binary codes. This dissertation studies cross-modal hashing and cross-modal quantization methods that exploit label information. The main contents are summarized as follows:

1. A discriminative discrete hashing (DDH) algorithm for cross-modal retrieval is proposed. First, with the help of a linear classifier, hash codes are treated as classification features derived from class labels. DDH then learns hash functions by finding linear mappings from each modality's original feature space to the encoding space. During code learning, the local similarities of the original features are preserved via a graph Laplacian. Finally, unified binary codes are generated by the modality-specific hash functions. Comparative experiments on the Wiki and NUS-WIDE datasets demonstrate the effectiveness of the proposed method.

2. A discriminative correlation quantization (DCQ) algorithm for cross-modal retrieval is proposed. First, DCQ learns category-specific features for each modality from class labels through classification. Simultaneously, DCQ finds mapping functions that transform the multimedia data into a common category space. Finally, the category-specific features from the different modalities are represented by uniform quantization codebooks and quantization codes. Comparative experiments on the MIRFlickr and NUS-WIDE datasets demonstrate the effectiveness of the proposed method.
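The DDH-style pipeline above (label-driven binary codes plus modality-specific linear hash functions) can be sketched as a small alternating scheme. This is a hedged illustration on toy random data, not the thesis' exact solver: the graph-Laplacian smoothness term is omitted for brevity, the variable names (`B`, `W`, `P_img`, `P_txt`) and the trade-off weight `lam` are assumptions, and the sign-based code update is a common relaxation of the discrete optimization rather than the dissertation's specific procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cross-modal data: two modalities describing the same n items.
n, d_img, d_txt, c, k = 100, 32, 16, 5, 8    # items, feature dims, classes, code length
X_img = rng.standard_normal((d_img, n))
X_txt = rng.standard_normal((d_txt, n))
labels = rng.integers(0, c, n)
Y = np.eye(c)[labels].T                      # one-hot label matrix, c x n

lam = 1.0
B = np.sign(rng.standard_normal((k, n)))     # init binary codes in {-1, +1}

for _ in range(10):
    # 1) Classifier step: W = argmin ||Y - W B||^2, so codes act as
    #    classification features (ridge term only for numerical stability).
    W = Y @ B.T @ np.linalg.inv(B @ B.T + 1e-6 * np.eye(k))
    # 2) Modality-specific hash functions: P_m = argmin ||B - P_m X_m||^2,
    #    a linear map from each original feature space to the code space.
    P_img = B @ X_img.T @ np.linalg.inv(X_img @ X_img.T + 1e-6 * np.eye(d_img))
    P_txt = B @ X_txt.T @ np.linalg.inv(X_txt @ X_txt.T + 1e-6 * np.eye(d_txt))
    # 3) Code step: sign of the combined regression targets,
    #    a standard relaxation of the discrete constraint.
    B = np.sign(W.T @ Y + lam * (P_img @ X_img + P_txt @ X_txt))
    B[B == 0] = 1

# At query time each modality is encoded by its own hash function,
# yielding unified binary codes for Hamming-distance retrieval.
code_img = np.sign(P_img @ X_img)
code_txt = np.sign(P_txt @ X_txt)
print(code_img.shape, code_txt.shape)
```

In this sketch, retrieval across modalities reduces to comparing `code_img` and `code_txt` columns by Hamming distance, which is where the memory and speed advantages of binary codes come from.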
Keywords/Search Tags: cross-modal hashing, cross-modal quantization, discriminative analysis, similarity retrieval