Study On Data Consistency And Process Scheduling For In-memory File System

Posted on: 2018-10-27
Degree: Master
Type: Thesis
Country: China
Candidate: Z L Sun
Full Text: PDF
GTID: 2348330533961364
Subject: Computer Science and Technology
Abstract/Summary:
In the data era, we need to analyze massive data sets in order to obtain the required information. In recent years, the research community has developed in-memory file systems based on Storage Class Memory (SCM), which have shown great potential for high-performance processing of large amounts of real-time data. When designing a file system, it is critical that it work reliably and continuously, which can be guaranteed by maintaining data consistency during updates. On one hand, in-memory file systems have a different I/O path from traditional block-based file systems, so existing consistency mechanisms cannot fully exploit the features of SCM. On the other hand, the performance gain of in-memory file systems comes from using a large proportion of the memory bandwidth; intensive file accesses can therefore consume too much memory bandwidth and degrade other processes in the system.

In this thesis, we propose an efficient consistency policy called the Amphibian Update Strategy (AUS), which exploits the byte-addressability, random access, and direct reads/writes through virtual addresses offered by SCM media. AUS chooses between Direct Copy (DC) and Atomic Update (AU) for consistency according to the size of the update request. We implement the proposed strategy in SIMFS and evaluate its performance with the file system benchmark tool IOzone. The experimental results show that AUS achieves the best performance among all the implemented consistency mechanisms.

To solve the problem of excessive memory bandwidth consumption when using CFS under intensive memory accesses, we build an ILP model that minimizes the final execution time and propose the Bandwidth-Fit algorithm to schedule the work set under a bandwidth constraint. Although ILP always finds the optimal solution, its exponential running time makes large problems intractable, so the O(n)-time heuristic algorithm is more practical in real situations. We implement this strategy as a user-space daemon in Linux, and the experimental results show that it reduces the execution time by 33.33%.
Keywords/Search Tags:Data consistency, In-memory File system, Virtual address space, Memory Contention, Schedule