
A framework for evaluating electronic resources

Posted on: 2016-03-20
Degree: Ph.D
Type: Dissertation
University: The Pennsylvania State University
Candidate: Coughlin, Daniel M
Full Text: PDF
GTID: 1478390017483400
Subject: Information Science
Abstract/Summary:
University libraries can provide access to tens of thousands of journals and spend millions of dollars annually on electronic resources. With several commercial entities providing these electronic resources, the result can be siloed systems, processes, and measures for managing access and for evaluating the cost and usage of these resources, making it extremely difficult to produce meaningful analytics for a holistic evaluation. Librarians responsible for collection management spend much of their time manually aggregating data from various sources and have little time to invest in the analysis that is crucial for effective collection management.

Our research leverages a web-analytics approach toward three objectives: 1) the creation of a process to evaluate university electronic resources, 2) the creation of a linear regression model to predict usage among these journals, and 3) the development of a system for evaluating electronic resources at a large research library. This web-analytics foundation enables an understanding of the value that specific journals provide to university libraries. The first objective is implemented by comparing impact against the cost, titles, and usage for the set of journals and by assessing the funding area (e.g., social sciences, arts & humanities, physical & mathematical sciences). Overall, the results highlight the benefit of a web-analytics evaluation framework for university libraries and the impact of classifying titles by funding area. By removing outliers while maintaining the variance in usage and cost among the funding areas, this analysis illustrates the importance of evaluating journals by funding area. In the second objective, we categorize metrics into two classes: global metrics (e.g., journal impact factor, Eigenfactor), which are journal focused, and local metrics (e.g., local downloads, local citation rate), which are institution dependent.
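The global/local split described above can be sketched as a simple record type. This is an illustrative assumption, not the dissertation's actual schema; the field names and the cost-per-download indicator are hypothetical examples of the two metric classes.

```python
# Hypothetical sketch of the global/local metric classification.
# Field names are illustrative, not the dissertation's data model.
from dataclasses import dataclass

@dataclass
class JournalMetrics:
    title: str
    # Global metrics: journal-focused, identical for every institution
    impact_factor: float
    eigenfactor: float
    # Local metrics: institution-dependent
    local_downloads: int
    local_citations: int
    annual_cost: float

    def cost_per_download(self) -> float:
        # A simple local value indicator; guard against zero usage
        if self.local_downloads == 0:
            return float("inf")
        return self.annual_cost / self.local_downloads

j = JournalMetrics("Example Journal", 3.2, 0.01, 1200, 85, 4800.0)
print(round(j.cost_per_download(), 2))  # 4.0
```

A structure like this makes the distinction operational: global fields can be bulk-loaded from external sources, while local fields must come from the institution's own usage and payment data.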
Using 275 journals as our training set, our analysis shows that a combination of global and local metrics creates the strongest model for predicting full-text downloads. These results demonstrate the value of creating local metrics for the evaluation of library collections to better inform purchasing decisions, rather than relying solely on global metrics. In the third objective, we create a conceptual model, implement this model (i.e., the system), and validate the implementation using real-world data. The resulting implementation provides a more sophisticated model of evaluation, with a simpler model of implementation, than currently employed by many large research libraries. The model and system architecture are shown to scale for the evaluation of electronic resources at a large research library. The system aggregates several data sources into an authoritative repository of information for evaluating journals based on local metrics (i.e., how often an institution cites a journal, how much the institution pays for the journal, how often a particular journal is downloaded) and global metrics such as Impact Factor or Eigenfactor. Together, these objectives create a framework for evaluating electronic resources at scale for a large research library. The framework provides practical methods to classify and evaluate journals, predict usage, and create automated processes and systems to aid in this work. This work adds to the research on collection management and continual improvement within a key component of a research library's mission: providing access to relevant scholarly materials.
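The regression step can be illustrated with a minimal ordinary-least-squares sketch. The feature set, coefficients, and synthetic data below are assumptions for demonstration only; the dissertation's actual model, features, and fit statistics are not reproduced here.

```python
# Hypothetical sketch: fitting a linear model that predicts full-text
# downloads from a mix of global metrics (e.g. impact factor, Eigenfactor)
# and local metrics (e.g. local citations, cost). Data is synthetic.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n = 275  # training-set size reported in the abstract

# Synthetic features: [impact_factor, eigenfactor, local_citations, cost]
X = rng.uniform(size=(n, 4))
true_coef = np.array([120.0, 80.0, 300.0, -10.0])  # assumed, for simulation
downloads = X @ true_coef + rng.normal(scale=5.0, size=n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = lstsq(A, downloads, rcond=None)

# Coefficient of determination on the training data
pred = A @ coef
r2 = 1 - np.sum((downloads - pred) ** 2) / np.sum((downloads - downloads.mean()) ** 2)
print(round(r2, 3))
```

The point of the sketch is the shape of the design matrix: global and local metrics enter as columns side by side, so their combined predictive power can be compared against models using either class alone by refitting on column subsets.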
Keywords/Search Tags: Electronic resources, Journals, Provide, Research library, Access, Framework, Evaluating, Libraries