The problem of building test collections is central to the development of information retrieval (IR) systems such as search engines. The primary use of test collections is the evaluation of IR systems. The widely employed "Cranfield paradigm" dictates that the information relevant to a topic be encoded at the level of documents, thereby requiring effectively complete document relevance assessments. As this is no longer practical for modern corpora, numerous problems arise concerning scalability, reusability, and applicability.

We propose a new method for relevance assessment based on relevant information rather than relevant documents. Once the relevant information is collected, any document can be assessed for relevance, and any retrieved list of documents can be assessed for performance. Starting with a few relevant "nuggets" of information manually extracted from existing TREC corpora, we implement and test a method that finds and correctly assesses the vast majority of relevant documents found by TREC assessors, as well as many relevant documents not found by those assessors. We then show how these inferred relevance assessments can be used to perform IR system evaluation. We also demonstrate a highly efficient algorithm for simultaneously obtaining both relevant documents and relevant information. Our technique exploits the mutually reinforcing relationship between relevant documents and relevant information, yielding test collections whose efficiency and efficacy exceed those of typical Cranfield-style collection construction methodologies. Finally, using TREC assessments as feedback, we demonstrate that automatically extracted relevant nuggets, used as features for learning-to-rank algorithms, significantly outperform standard learning-to-rank features.

Our main contribution is a methodology for producing test collections that are highly accurate, scalable, and reusable, and that have great potential for future applications.
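To make the nugget-based assessment step concrete, the following is a minimal, hypothetical sketch (not the implementation described above): it scores a document against a set of relevant nuggets using simple term overlap and judges the document relevant when any nugget is sufficiently well covered. The nugget strings, the coverage threshold, and the tokenization are all illustrative assumptions.

```python
# Minimal sketch of nugget-based relevance assessment via term overlap.
# Nuggets, threshold, and tokenization are illustrative assumptions, not
# the method's actual matching procedure.
import re


def tokenize(text):
    """Lowercase the text and return its set of word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def nugget_coverage(nugget, document):
    """Fraction of the nugget's terms that also appear in the document."""
    nugget_terms = tokenize(nugget)
    doc_terms = tokenize(document)
    if not nugget_terms:
        return 0.0
    return len(nugget_terms & doc_terms) / len(nugget_terms)


def is_relevant(document, nuggets, threshold=0.7):
    """Judge a document relevant if it covers any nugget above the threshold."""
    return any(nugget_coverage(n, document) >= threshold for n in nuggets)


# Usage with made-up nuggets and a made-up document.
nuggets = [
    "the Cranfield paradigm requires complete document relevance assessments",
    "relevance can be encoded as nuggets of information instead of documents",
]
doc = ("Under the Cranfield paradigm, complete relevance assessments of "
       "documents are required for each topic.")
print(is_relevant(doc, nuggets))  # True: the first nugget is mostly covered
```

In practice the matching function could be replaced by any retrieval or similarity model; the point of the sketch is only that, once nuggets are available, relevance judgments for arbitrary documents can be produced automatically.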