
Research On Video Search Reranking

Posted on: 2010-10-01
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y Liu
Full Text: PDF
GTID: 1118360275455552
Subject: Signal and Information Processing
Abstract/Summary:
The explosive growth and widespread accessibility of community-contributed multimedia content on the Internet have led to a surge of research activity in video search. Owing to the great success of text search, most popular video search engines, such as Google, Yahoo!, Live, and Baidu, build upon text search techniques and use the text information associated with video data. This kind of video search has proven unsatisfying, as it largely ignores the visual content and human perception of the search results. To address this issue, video search reranking has received increasing attention in recent years. It is defined as reordering video shots based on multimodal cues to improve search precision.

In this thesis, we first propose a novel query-independent learning based video search framework; we then investigate the key problems of video search reranking in three paradigms: self-reranking, which uses only the initial search results; query-example-based reranking, which leverages user-provided query examples; and CrowdReranking, which mines relevant visual patterns from the search results of external search engines. These three paradigms cover most existing reranking frameworks and approaches. Accordingly, this thesis conducts an in-depth study of video search reranking and obtains the following results:

(1) We propose a novel query-independent learning (QIL) framework for video search that investigates relevance from query-shot pairs. Unlike the conventional query-dependent learning framework, it is more general and better suited to real-world search applications. Various machine learning techniques can be applied under this framework; accordingly, we further propose an SVM-based (Support Vector Machine) supervised query-independent learning approach and a multi-graph-based semi-supervised query-independent learning approach.

(2) For self-reranking, we propose typicality-based video search reranking. Conventional learning-based approaches to video search reranking consider only the relevance or diversity of the examples selected for building the reranking model, while video typicality is usually neglected. In this thesis, we propose to select the most typical samples to build the reranking model: since typicality indicates the representativeness of each sample, a more robust reranking model can be learned. We first define a typicality score for images/videos based on the sample distribution, and then formulate example selection as an optimization scheme that takes into account both image typicality and the ranking order in the initial search results. Based on the selected examples, we build the reranking model using an SVM.
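To illustrate the idea of typicality-guided example selection, the sketch below approximates typicality with a kernel density estimate over precomputed visual features, mixes it with the initial ranking order, and trains an SVM reranker on the selected pseudo-examples. The density estimator, the mixing weight `alpha`, the pseudo-negative selection, and all function names are assumptions introduced for illustration, not details taken from the thesis.

```python
# A minimal sketch of typicality-guided example selection for self-reranking.
# Assumptions (not from the thesis): visual features are precomputed vectors,
# typicality is approximated by a Gaussian kernel density estimate, and the
# combination weight `alpha` is a free parameter.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.svm import SVC

def typicality_scores(features, bandwidth=1.0):
    """Score each sample by the estimated density of the feature distribution."""
    kde = KernelDensity(bandwidth=bandwidth).fit(features)
    return kde.score_samples(features)          # log-density: higher = more typical

def select_examples(features, init_ranks, k=50, alpha=0.5):
    """Pick pseudo-examples by mixing typicality with the initial rank positions."""
    init_ranks = np.asarray(init_ranks, dtype=float)   # 0-based positions in the initial list
    typ = typicality_scores(features)
    typ = (typ - typ.min()) / (typ.max() - typ.min() + 1e-9)   # normalize to [0, 1]
    rank_score = 1.0 - init_ranks / len(init_ranks)            # top of the list ~ 1
    combined = alpha * typ + (1.0 - alpha) * rank_score
    order = np.argsort(-combined)
    return order[:k], order[-k:]                # pseudo-positives, pseudo-negatives

def rerank(features, init_ranks):
    pos, neg = select_examples(features, init_ranks)
    X = features[np.concatenate([pos, neg])]
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    model = SVC(kernel="rbf").fit(X, y)                       # reranking model
    return np.argsort(-model.decision_function(features))    # new ranked order
```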
(3) For query-example-based reranking, we present a novel supervised approach to video search reranking with several query examples. Conventional supervised reranking approaches empirically cast reranking as a classification problem in which each document is judged relevant or not, and the documents are then reordered according to the classification confidence scores. We argue that reranking is essentially an optimization problem in which the ranked list is globally optimal if any two documents from the list are correctly ordered in terms of relevance, rather than a matter of simply classifying each document as relevant or not. Under this framework, we further propose two effective algorithms, called straight reranking and insertion reranking, to solve the problem more practically.

(4) For CrowdReranking, we propose a new paradigm for visual search reranking, called CrowdReranking, which is characterized by mining relevant visual patterns from the image search results of multiple search engines available on the Internet. To the best of our knowledge, CrowdReranking represents the first attempt to leverage crowdsourced knowledge for visual reranking, which distinguishes it from existing self-reranking and query-example-based reranking approaches. We first construct a set of visual words from the local image patches collected from multiple image search engines. We then explicitly detect two kinds of visual patterns among the visual words, i.e., salient and concurrent patterns. Finally, we formalize reranking as an optimization problem based on the mined visual patterns and derive a closed-form solution.
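To make the pairwise-optimality view in (3) more concrete, here is a minimal sketch of an insertion-style reranking loop driven by a pairwise preference function. The preference rule used here (maximum cosine similarity to the user-provided query examples) and the function names are illustrative assumptions; this is not the exact straight or insertion reranking algorithm proposed in the thesis.

```python
# A sketch of insertion-style reranking driven by a pairwise preference
# function. The concrete `prefer` rule (similarity to user-provided query
# examples) is an illustrative assumption, not the thesis's exact algorithm.
import numpy as np

def prefer(a, b, query_examples):
    """Return True if document feature `a` should rank above `b`."""
    sim = lambda x: max(float(x @ q) / (np.linalg.norm(x) * np.linalg.norm(q) + 1e-9)
                        for q in query_examples)
    return sim(a) >= sim(b)

def insertion_rerank(features, init_order, query_examples):
    """Insert each document (taken in initial-rank order) before the first
    already-placed document it is preferred to."""
    ranked = []
    for idx in init_order:
        pos = len(ranked)
        for i, placed in enumerate(ranked):
            if prefer(features[idx], features[placed], query_examples):
                pos = i
                break
        ranked.insert(pos, idx)
    return ranked
```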
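As an illustration of the CrowdReranking pipeline in (4), the sketch below quantizes local descriptors pooled from several engines' image results into visual words, mines salient and concurrent patterns, and scores candidate shots against them. The vocabulary size, normalizations, and the additive scoring rule are assumptions made for illustration; the thesis instead formulates the final reranking step as an optimization problem with a closed-form solution.

```python
# A rough sketch of mining "crowd" visual patterns from multiple engines'
# image results and scoring candidate shots against them. Vocabulary size,
# thresholds, and the linear scoring rule are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(crowd_descriptors, n_words=200):
    """Quantize local descriptors pooled from several search engines into visual words."""
    return KMeans(n_clusters=n_words, n_init=10).fit(crowd_descriptors)

def mine_patterns(vocab, crowd_images):
    """Salient words occur in many crowd images; concurrent pairs co-occur within images."""
    n_words = vocab.n_clusters
    word_freq = np.zeros(n_words)
    cooc = np.zeros((n_words, n_words))
    for descs in crowd_images:                   # descs: local descriptors of one image
        words = np.unique(vocab.predict(descs))
        word_freq[words] += 1
        for a in words:
            for b in words:
                cooc[a, b] += a != b
    salient = word_freq / len(crowd_images)      # per-word saliency in [0, 1]
    concurrent = cooc / max(cooc.max(), 1.0)     # normalized co-occurrence strengths
    return salient, concurrent

def score_shot(vocab, shot_descriptors, salient, concurrent):
    """Score a candidate shot by how well its visual words match the mined patterns."""
    words = np.unique(vocab.predict(shot_descriptors))
    return salient[words].sum() + concurrent[np.ix_(words, words)].sum()
```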
Keywords/Search Tags: video search reranking, content-based video search, semantic analysis, supervised learning, semi-supervised learning, optimization, concept detection, sample selection, CrowdReranking