
Research On Task Scheduling And Shared Cache Partition Policies On Multicore Platform

Posted on: 2012-04-15  Degree: Doctor  Type: Dissertation
Country: China  Candidate: B H Zhou  Full Text: PDF
GTID: 1228330467981079  Subject: Computer system architecture
Abstract/Summary:
Increasing the clock frequency has traditionally been the main way to improve the performance of single-core processors. As large-scale and complex computing applications have become widespread, the demands on processor computing power and storage-system design have grown sharply. Because a single-core processor struggles with power consumption and heat dissipation, raising the clock frequency no longer meets these demands. Multi-core processors, which integrate two or more processor cores on a single chip, are now widely used for their low power consumption, reliability, and security. However, the structure of multicore processors poses great challenges for scheduling complex applications and for cache design.

Scheduling on a multicore processor is more complicated than on a single-core processor: tasks must be distributed to different cores so as to minimize completion time under system-wide performance and power constraints. In addition, most current multi-core processors use a shared cache, but conflicting accesses between parallel applications reduce system performance. To address these problems, this dissertation studies task scheduling and shared cache partition policies on the multicore platform.

For time-sharing tasks on the multicore platform, this dissertation proposes a scheduling algorithm based on the task graph. The space-parallel mode is optimized by a parallel-node merging and allocation algorithm, and the time-parallel mode is improved by a pipeline design method. Finally, a scheduling method combining the optimized space- and time-parallel techniques is presented. Experimental results show that the proposed algorithm effectively reduces communication and synchronization costs, thereby improving computational efficiency and throughput.
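The general idea of distributing task-graph nodes across cores can be illustrated with a minimal list-scheduling sketch. The task names, costs, and the earliest-start heuristic below are illustrative assumptions, not the dissertation's exact node-merging and pipelining algorithm:

```python
def list_schedule(tasks, deps, cost, k):
    """Assign each task-graph node to the core where it can start earliest.

    tasks: task ids in topological order
    deps:  dict task -> list of predecessor tasks
    cost:  dict task -> execution time
    k:     number of cores
    Returns (assignment, makespan), where assignment maps task -> core.
    """
    core_free = [0.0] * k          # time at which each core becomes idle
    finish = {}                    # finish time of each scheduled task
    assignment = {}
    for t in tasks:
        # a task may start only after all its predecessors have finished
        ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        core = min(range(k), key=lambda c: max(core_free[c], ready))
        start = max(core_free[core], ready)
        finish[t] = start + cost[t]
        core_free[core] = finish[t]
        assignment[t] = core
    return assignment, max(finish.values())
```

For example, two independent tasks A and B feeding a join task C are placed on separate cores, and C starts once both finish. The dissertation's algorithm additionally merges heavily communicating nodes before allocation, which this sketch omits.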
For real-time tasks on the multicore platform, this dissertation analyzes the constant utilization server algorithm in depth and proposes an improved method. The new CPU reservation algorithm handles deadlines, CPU budgets, and migration time effectively. Building on this modified CPU reservation method, the dissertation proposes a global reservation scheduling algorithm for a deadline-driven system. The algorithm reserves an absolute share of CPU bandwidth for real-time tasks, ensuring they complete before their deadlines. Experimental results show that the proposed algorithm effectively guarantees real-time behavior and system stability. The dissertation also discusses the allocation of processor cores in this new setting and presents a new allocation policy that brings the utilization and response time of each core close to the processor-wide average, thereby solving the load-balancing and system-optimization problem.

To address the performance loss caused by parallel applications competing for the shared cache, a shared cache allocation method is proposed. Cache misses are divided into two categories, local misses and global misses, and a miss monitor is designed to distinguish and count them. The dissertation analyzes how performance changes as parallel applications gain or lose shared cache resources, and derives a performance evaluation model. Based on the feedback from the miss monitor and the gain function of the performance evaluation model, the optimal shared cache allocation can be selected efficiently.
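A reservation-based scheme of this kind typically describes each real-time task by a budget Q granted every period P, and admits reservations onto cores only while each core's total bandwidth stays within the EDF schedulability bound. The following first-fit admission test is a simplified sketch under that assumption, not the dissertation's global reservation algorithm, which also accounts for migration time:

```python
def admit_reservations(reservations, m):
    """First-fit placement of CPU reservations (budget Q, period P) on m cores.

    A reservation is admitted on a core only if the core's total bandwidth
    sum(Q/P) stays <= 1, the EDF schedulability bound for one core.
    Returns a dict reservation-index -> core, or None if some reservation
    cannot be placed anywhere.
    """
    load = [0.0] * m               # bandwidth already committed per core
    placement = {}
    for i, (q, p) in enumerate(reservations):
        bw = q / p
        for c in range(m):
            if load[c] + bw <= 1.0 + 1e-9:
                load[c] += bw
                placement[i] = c
                break
        else:
            return None            # no core has enough spare bandwidth
    return placement
```

Guaranteeing each admitted task its reserved bandwidth is what lets the scheduler promise completion before the deadline regardless of the other tasks' behavior.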
Experimental results show that the shared cache allocation algorithm partitions the shared cache reasonably; as a result, the frequency of conflicting accesses is reduced and computational efficiency is improved.
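Gain-driven cache partitioning of this flavor can be sketched as a greedy allocation over per-application miss curves, where each cache way goes to whichever application's monitored miss count would drop the most. The miss-curve data below is hypothetical, and the greedy rule is a stand-in for the dissertation's performance evaluation model:

```python
def partition_ways(miss_curves, total_ways):
    """Greedy shared-cache way partitioning.

    miss_curves[a][w] = misses of application a when given w cache ways
    (index 0 means zero ways). Each step grants one more way to the
    application with the largest marginal miss reduction.
    Returns the number of ways allocated to each application.
    """
    n = len(miss_curves)
    alloc = [0] * n
    for _ in range(total_ways):
        def gain(a):
            w = alloc[a]
            if w + 1 >= len(miss_curves[a]):
                return -1.0        # curve exhausted; cannot take more ways
            return miss_curves[a][w] - miss_curves[a][w + 1]
        best = max(range(n), key=gain)
        alloc[best] += 1
    return alloc
```

In a real system the miss curves would come from hardware monitors such as the miss monitor described above, sampled online, with the partition recomputed periodically.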
Keywords/Search Tags: multicore platform, space parallel, time parallel, real-time scheduling, global reserve, shared cache, miss monitor