
Model-driven memory optimizations for high performance computing: From caches to I/O

Posted on: 2013-12-10  Degree: Ph.D  Type: Thesis
University: The Pennsylvania State University  Candidate: Frasca, Michael  Full Text: PDF
GTID: 2458390008974305  Subject: Engineering
Abstract/Summary:
High performance systems are quickly evolving to keep pace with application demands, and we observe greater complexity in system design at all scales. Parallelism, in its many forms, is a fundamental change agent in current system and software architecture, and the greatest source of power and performance challenges. Dynamic techniques are required to optimize computation in this environment, and we propose model-driven techniques that first identify performance inefficiencies and then respond with online, adaptive mechanisms. In this thesis, we recognize that the parallelism employed creates contention within and throughout the memory hierarchy, and we therefore focus our analysis on this domain.

The memory hierarchy extends from on-chip caches to persistent storage in I/O subsystems. We analyze and develop models of shared data and cache use to understand how parallel applications interact with hardware and why parallel scalability is often poor. Through the lens of these memory models, we develop dynamic optimization techniques for disparate layers of the memory hierarchy. For on-chip multi-core caches, we seek to improve the data sharing characteristics of sparse high performance algorithms. Our approach leverages model-driven insight to dynamically change inter-thread access behavior so that it maps efficiently to the given hardware topology. In the I/O subsystem, we target the interference caused by concurrent applications accessing shared storage caches, designing model-driven techniques to both isolate application behavior and dynamically correct inefficient caching policies.
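The idea of mapping inter-thread access behavior onto the hardware topology can be illustrated with a minimal sketch: given a (hypothetical) pairwise data-sharing matrix between threads, greedily co-locate the pairs that share the most data onto the same cache domain. The matrix, the pairing heuristic, and the two-threads-per-cache topology below are illustrative assumptions, not the thesis's actual model.

```python
# Sketch: greedily pair threads with the highest pairwise sharing volume,
# so each pair can be pinned to cores behind a common shared cache.
# The sharing matrix here is a stand-in for model-driven measurements.

def pair_threads_by_sharing(sharing):
    """Return thread-ID pairs, chosen in descending order of shared volume."""
    n = len(sharing)
    pairs_by_volume = sorted(
        ((sharing[i][j], i, j) for i in range(n) for j in range(i + 1, n)),
        reverse=True,
    )
    placed, pairs = set(), []
    for _, i, j in pairs_by_volume:
        if i not in placed and j not in placed:
            pairs.append((i, j))        # one pair per shared-cache domain
            placed.update((i, j))
    return pairs

# Threads 0/2 and 1/3 share the most data, so they are co-located.
sharing = [
    [0, 1, 9, 2],
    [1, 0, 2, 8],
    [9, 2, 0, 1],
    [2, 8, 1, 0],
]
print(pair_threads_by_sharing(sharing))  # [(0, 2), (1, 3)]
```

In a real system the resulting pairs would be bound to sibling cores (e.g. via CPU affinity), and the sharing matrix would be refreshed online as access behavior changes.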
Keywords/Search Tags: Performance, Model-driven, Caches, Memory, Techniques