
Multicore Real-Time Scheduling Algorithm And Analysis Under Cache Warm-Up Overheads

Posted on: 2012-08-22    Degree: Doctor    Type: Dissertation
Country: China    Candidate: W Sheng    Full Text: PDF
GTID: 1228330368493611    Subject: Computer system architecture
Abstract/Summary:
Because of power consumption and thermal dissipation problems, researchers proposed the chip multiprocessor (CMP). Multicore processors are now used not only in general-purpose computing systems but also, increasingly, in embedded systems, which are usually real-time systems. In such systems, the correctness of a computation depends not only on the correctness of its results but also on the time at which the results are produced.

Multicore processors give real-time systems greater processing capability, but also more complex task scheduling. In a single-processor system, a common assumption is that all of a task's cache lines are already in the cache, so its worst-case execution time (WCET) can be obtained by static analysis using the cache hit delay. In a multicore system, however, some cores' caches may hold none of the task's lines; once the task runs on such a core, its WCET dilates due to the cache warm-up overhead. Handling the impact of cache warm-up overhead is therefore an important problem in multicore real-time scheduling.

Schedulability analysis is an important tool in real-time scheduling research. There are mainly two methods: simulation and utilization bound tests. Simulation is easy to implement, but it exercises only finitely many execution paths of a system, so its verdicts on which task sets are schedulable are not reliable. Utilization bound tests usually yield only sufficient conditions, so their results are pessimistic. To overcome the drawbacks of both methods, some researchers have applied model checking to schedulability analysis; however, they do not consider the impact of cache warm-up overhead. For this problem, this thesis proposes a model-checking-based schedulability analysis method that takes cache warm-up overhead into account and demonstrates its effectiveness on the classical rate monotonic (RM) scheduling algorithm.

Because of cache warm-up overhead, some task sets miss their deadlines under the multicore RM algorithm. To reduce the unpredictability introduced by this overhead, some researchers have proposed a new hardware architecture, Push Block, which proactively migrates a task's cache lines to the cache of the target core; when the task then runs on a core that already holds its cache lines, no time is wasted warming up that core's cache. Current scheduling algorithms, however, do not support this architecture. To address this problem, this thesis builds on the static priority scheduling algorithm RM: using the capability provided by the new architecture, we present three strategies and propose a new static priority scheduling algorithm that improves system timeliness and reduces the number of task sets missing their deadlines.

In static priority scheduling, a task's priority is fixed while the system is running, which may allow a high-priority task to hold a CPU for a long time while low-priority tasks wait. Dynamic priority scheduling was therefore proposed: it can change task priorities at run time and adapts better to changing environments. Unfortunately, cache warm-up overhead still causes systems using dynamic scheduling algorithms to miss deadlines. In addition, current multicore dynamic scheduling algorithms do not use the Push Block architecture and thus cannot exploit this mechanism to improve performance.
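As a simplified illustration of where cache warm-up overhead enters a classical fixed-priority schedulability test, the sketch below inflates each task's WCET by a warm-up penalty inside the standard response-time analysis for RM. This is not the thesis's model-checking method; the function name, task set, and warm-up values are assumptions chosen for illustration.

import math

def rm_schedulable_with_warmup(tasks, warmup):
    # Classical single-core response-time test for rate monotonic scheduling,
    # with each task's WCET inflated by a cache warm-up penalty.
    # tasks: list of (wcet, period) pairs sorted by period (RM priority order);
    # warmup: dict mapping task index -> warm-up overhead on a cold core.
    inflated = [c + warmup.get(i, 0) for i, (c, t) in enumerate(tasks)]
    for i, (c, period) in enumerate(tasks):
        r = inflated[i]
        while True:
            # Interference from higher-priority tasks released within r.
            interference = sum(math.ceil(r / tasks[j][1]) * inflated[j]
                               for j in range(i))
            nxt = inflated[i] + interference
            if nxt > period:          # implicit deadline (= period) missed
                return False
            if nxt == r:              # fixed point reached
                break
            r = nxt
    return True

# Two tasks (WCET, period); the second pays a 2-unit warm-up cost when it
# starts on a core that holds none of its cache lines.
print(rm_schedulable_with_warmup([(1, 4), (2, 6)], warmup={1: 2}))   # -> True

The only change relative to the textbook test is the warm-up penalty added to the WCET of a task that may start on a cold core; such inflation is pessimistic, which is why the thesis instead uses model checking to obtain an exact analysis.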
To address these problems, this thesis proposes a new dynamic scheduling algorithm based on the classical earliest deadline first (EDF) algorithm and demonstrates its good performance through experiments.

In summary, the main innovative contributions of this thesis are three-fold:
1) Since previous schedulability analyses do not consider cache warm-up overhead, we propose an exact schedulability analysis method based on model checking and illustrate its effectiveness with the RM algorithm.
2) Since cache warm-up overhead reduces the timeliness of the classical static priority scheduling algorithm RM, we present three strategies and propose a new algorithm named WM-RM, which retains the performance of RM while improving its ability to adapt to cache warm-up overhead.
3) Since cache warm-up overhead reduces the timeliness of the classical dynamic priority scheduling algorithm EDF, we propose a new algorithm called WM-EDF, which improves system timeliness compared with EDF.
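To make the warm-up-aware dispatching idea concrete, the following minimal sketch shows an EDF-style dispatcher that prefers a core whose cache already holds the chosen task's lines (for example, after a proactive Push Block migration) and otherwise charges the warm-up cost. The function name, data structures, and values are assumptions for illustration, not the actual WM-EDF algorithm.

def edf_dispatch(ready, idle_cores, cache_owner, warmup):
    # Pick the ready job with the earliest absolute deadline and place it on
    # an idle core, preferring a core whose cache already holds its lines.
    # ready: list of (absolute_deadline, task_id); idle_cores: list of core ids;
    # cache_owner: core id -> task_id whose lines currently occupy that cache;
    # warmup: task_id -> warm-up overhead paid on a cold core.
    if not ready or not idle_cores:
        return None
    deadline, task = min(ready)                       # earliest deadline first
    warm = [c for c in idle_cores if cache_owner.get(c) == task]
    core = warm[0] if warm else idle_cores[0]
    overhead = 0 if warm else warmup.get(task, 0)     # cold core pays warm-up
    return task, core, overhead

# Task "a" (deadline 10) is already warm on core 1, so it is placed there
# with no warm-up overhead: prints ('a', 1, 0).
print(edf_dispatch([(10, "a"), (12, "b")], [0, 1], {1: "a"}, {"a": 3, "b": 2}))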
Keywords/Search Tags:real-time systems, multicore scheduling, schedulability analysis, task migration, cache warm-up overhead