
Relevance feedback in content-based retrieval: A study of practical aspects of implementation and evaluation

Posted on: 2009-01-11
Degree: Ph.D.
Type: Dissertation
University: University of Virginia
Candidate: Jin, Xiangyu
Full Text: PDF
GTID: 1448390002491403
Subject: Computer Science
Abstract/Summary:
Relevance feedback, a technique for introducing user preferences into an iterative search session, is a critical component of content-based retrieval (CBR) applications. The diversity of newly emerging CBR environments has motivated research into new relevance feedback approaches adapted to these application areas and media types. However, relevance feedback research has been carried out by many different research communities (e.g., information retrieval, database management, machine learning), each with its own focus, experimental traditions, and evaluation methodologies. This situation has produced significant confusion in evaluating competing techniques, because the resulting studies are hard or impossible to compare. As a result, many relevance feedback approaches have been proposed, but there is no general framework in which to compare them, and it is hard for later researchers to identify the intrinsic relationships among these approaches. Although relevance feedback has historically been shown to be useful in various experimental systems, the technique is seldom applied in operational information retrieval systems.

In this dissertation, I establish a general framework for relevance feedback research in which each approach can be treated as a special case. Within this framework, it is possible to study the intrinsic relationships among different, competing relevance feedback approaches. I also investigate why approaches reported to be effective in the laboratory are seldom applied in the real world, and I find that the reasons lie in two aspects. First, the relevance feedback approaches expected to be effective in real-world settings are often hard to apply in practice; to address this applicability problem, I provide solutions that make relevance feedback easier to deploy in operational retrieval systems. Second, existing work often uses improper evaluation methodologies to compare techniques, resulting in unfair comparisons or exaggerated estimates of the performance gain from relevance feedback; to address this problem, I provide appropriate evaluation methodologies and demonstrate them in a variety of media settings, helping researchers avoid similar problems and reach unbiased conclusions. Together, these results make it possible to assess when and why relevance feedback can be expected to perform well in practical situations.
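For readers unfamiliar with the technique itself, the following is a minimal sketch of the classic Rocchio update, the textbook relevance feedback method from information retrieval. It illustrates the general idea only and is not the framework developed in this dissertation; the function name and the default weights are illustrative assumptions.

    import numpy as np

    def rocchio_update(query, relevant, nonrelevant,
                       alpha=1.0, beta=0.75, gamma=0.15):
        """Classic Rocchio relevance-feedback update (illustrative sketch).

        Moves the query vector toward the centroid of items the user
        marked relevant and away from the centroid of non-relevant ones.
        Weights alpha/beta/gamma are common textbook defaults, not values
        taken from this dissertation.
        """
        q = alpha * np.asarray(query, dtype=float)
        if len(relevant) > 0:
            q += beta * np.mean(np.asarray(relevant, dtype=float), axis=0)
        if len(nonrelevant) > 0:
            q -= gamma * np.mean(np.asarray(nonrelevant, dtype=float), axis=0)
        # In term-vector models, negative weights are usually clipped to zero.
        return np.maximum(q, 0.0)

In text retrieval the vectors are typically tf-idf term weights; in CBR settings the same update is often applied to low-level feature vectors (color, texture, etc.), where the clipping step may be omitted. The weighting of positive feedback above negative feedback (beta > gamma) reflects the common observation that relevant examples are more reliable signals than non-relevant ones.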
Keywords/Search Tags: Relevance feedback, Retrieval, Evaluation, Intrinsic relationships